Sunday 11 November 2012

Upgrading Azure Storage Client Library to v2.0 from 1.7

I upgraded the Azure Storage Client Library from version 1.7 to 2.0 and found a number of differences when using storage. I thought I'd document how I upgraded the more awkward bits of Azure Storage code for version 2.0.

DownloadByteArray has gone missing

For whatever reason DownloadByteArray has been taken from me. So have DownloadToFile, DownloadText, UploadFromFile, UploadByteArray, and UploadText.

Without too much whinging I'm just going to get on and fix it. This is what was working PERFECTLY FINE in v1.7:

public byte[] GetBytes(string fileName)
{
    var blob = Container.GetBlobReference(fileName);
    return blob.DownloadByteArray();
}

And here is the code modified to account for the fact that DownloadByteArray no longer exists in Azure Storage v2.0:

public byte[] GetBytes(string fileName)
{
    var blob = Container.GetBlockBlobReference(fileName);
    using (var ms = new MemoryStream())
    {
        blob.DownloadToStream(ms);
        ms.Position = 0;
        return ms.ToArray();
    }
}
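
UploadByteArray and UploadText need the same treatment. Here is a minimal sketch of the upload equivalent, assuming the same Container reference as above (SaveBytes is just an illustrative name):

public void SaveBytes(string fileName, byte[] data)
{
    var blob = Container.GetBlockBlobReference(fileName);
    using (var ms = new MemoryStream(data))
    {
        //UploadFromStream replaces UploadByteArray/UploadText in v2.0
        blob.UploadFromStream(ms);
    }
}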

How to get your CloudStorageAccount

Another apparently random change is that you can't get your storage account info in the way you used to. In Storage Client v1.7 it looked like this:

var storageAccountInfo = CloudStorageAccount.FromConfigurationSetting(configSetting);
var tableStorage = storageAccountInfo.CreateCloudTableClient();

But in Azure Storage v2.0 you must get it like this:

var storageAccountInfo = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting(configSetting));
var tableStorage = storageAccountInfo.CreateCloudTableClient();

Why? I'm not sure. I have had problems with getting storage account information before, so maybe this resolves that.

What happened to CreateTableIfNotExist?

Again, it's disappeared, but who cares... Oh, you do? Right, well, let's fix that up. So, in Azure Storage Client v1.7 you did this:

var tableStorage = storageAccountInfo.CreateCloudTableClient();
tableStorage.CreateTableIfNotExist(tableName);

But now in Azure Storage Client Library v2.0 you must do this:

var tableStorage = storageAccountInfo.CreateCloudTableClient();
var table = tableStorage.GetTableReference(tableName);
table.CreateIfNotExists();

Attributes seem to have disappeared and LastModifiedUtc has gone

Another random change that possibly doesn't achieve anything other than making you refactor your code. This was my old code from the Storage Client Library v1.7:

var blob = BlobService.FetchAttributes(FileName);
if (blob == null || blob.Attributes.Properties.LastModifiedUtc < DateTime.UtcNow.AddHours(-1))
{
    ...
}

But now it should read like this, presumably because it looks prettier (which it does, in fairness):

var blob = BlobService.FetchAttributes(FileName);
if (blob == null || blob.Properties.LastModified < DateTimeOffset.UtcNow.AddHours(-1))
{
    ...
}

Change your development storage connection string

This one is just a straight bug, so that's excellent. I was getting a useless exception stating "The given key was not present in the dictionary" when trying to create a CloudStorageAccount reference. To resolve this, change your development storage connection string from UseDevelopmentStorage=true to UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1 and it will magically work.
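
For example, with the amended string the account parses happily (a quick sketch; in practice the connection string would live in your ServiceConfiguration rather than be hard-coded):

var storageAccountInfo = CloudStorageAccount.Parse(
    "UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1");
var tableStorage = storageAccountInfo.CreateCloudTableClient();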

Bitch and moan

Apologies for the whingy nature of this post; I'm quite a fan of Azure, but I have wasted about 3-4 hours on this "upgrade" from Azure Storage Client Library 1.7 to 2.0. It has been incredibly frustrating, particularly since there seems to be no obvious reason why these changes were made. I just can't believe the number of breaking changes when I haven't really written that much Azure storage code.

Randomly taking out nice methods like DownloadByteArray and DownloadText is surely a step backwards, no? Or randomly renaming CreateIfNotExist() to CreateIfNotExists()... what is the point in that?!

I remember when upgrading to ASP.NET 4 from 3.5, I spent very little time working through breaking changes, and I have 100 times more .NET code than I do Azure Storage code. As well as that, I was well aware of the many improvements in that .NET version update; with this Azure Storage update I have no idea what I'm getting. No matter the improvements, it is just an Azure storage API, and this number of breaking changes, often for the benefit of syntax niceties, is just unacceptable.

Oh, if you are still in pain doing this, I have found a complete list of breaking changes in this update, along with minimal explanations, here.

Saturday 27 October 2012

Unexpected "string" keyword after "@" character. Once inside code, you do not need to prefix constructs like "string" with "@"

After upgrading an ASP.NET MVC 3 project to MVC 4 I noticed a change in the Razor parser that threw a Parser Error saying: 'Unexpected "string" keyword after "@" character. Once inside code, you do not need to prefix constructs like "string" with "@"'

Firstly, this had always worked when it was MVC 3 and Razor v1. I may have been getting the syntax wrong all along, but if the parser allowed it, was it really wrong?

What I was doing was trying to put some server code in a Razor helper with no surrounding HTML tags, like this example:

@helper Currency1000s(int? value)
{
    if(value == null)
    {
        -
    }
    else
    {
        @string.Format("{0:C0}k", value / 1000.0)
    }
}

Interestingly, if I were to replace line 9 with @value all would be fine. Anyway, it is an easy enough fix: you just need to wrap the string.Format in HTML tags or <text> tags, as I did here:

@helper Currency1000s(int? value)
{
    if(value == null)
    {
        -
    }
    else
    {
        <text>@string.Format("{0:C0}k", value / 1000.0)</text>
    }
}

Tuesday 23 October 2012

Build Visual Studio solutions without Visual Studio

I have a project that has four separate Visual Studio solutions. It is a bit annoying when I do an update from SVN and have to open each solution in Visual Studio just so I can build them. Visual Studio is hardly a lightweight program so surely there's a simpler way?

Most sys admins or developers accustomed to automated builds will probably start to titter at this point, but you can do it easily using Notepad (and the various stuff already installed on your dev machine). I don't want a mega complex system to deploy in a special way to a different server using expensive software; I just want to build everything without having to load several Visual Studio instances. So here is how.

Simple way to build all solutions with a batch file

First, open Notepad and write the following code:

@echo off
CALL "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
MSBUILD /v:q C:\Projects\Forums\Forums.sln
MSBUILD /v:q C:\Projects\MainSite.sln
MSBUILD /v:q C:\Projects\Users\UserMgmt.sln
MSBUILD /v:q C:\Projects\Core\Global.sln
PAUSE

Save this as something like BuildAll.bat then whenever you want to build everything just double click this file.

What was that?! Explain (a bit)

To use MSBUILD you must be running the Visual Studio command prompt, but by default batch files run in a normal command prompt, so line 2 enables all the Visual Studio-ness. Also, I added the /v:q parameter so that MSBUILD won't output every little detail about the build, just the important bits (PASS/FAIL).

Thursday 18 October 2012

To encodeURI or to encodeURIComponent?

I solved a bug today whilst encoding, in JavaScript, a URL that contained a certain special character: the hash character #.

I found a generated querystring was not working correctly because many of the params were not being passed to the server somehow. I was generating a link like so: var link = 'http://www.britishdeveloper.co.uk?link=' + encodeURI(url) + '&userID=232';

I was quite pleased with myself remembering to URL Encode params that are going into a querystring to get around special characters but I had not thought of something... the hash character #!

Say the URL being placed in the querystring is '/my page.aspx#comment3'; my link was coming out as http://www.britishdeveloper.co.uk?link=/my%20page.aspx#comment3&userID=232.

What's wrong with that?

Well, it's not what I was expecting really, but it was working up until I got URLs with hashes in. As you may or may not know, everything following a # in a URL is for the browser, so your server will ignore everything after it. In this case the userID param was not seen by my server.

encodeURIComponent not encodeURI

For this particular scenario encodeURIComponent was what I needed since:

encodeURI('/my page.aspx#comment3')
//output: /my%20page.aspx#comment3

encodeURIComponent('/my page.aspx#comment3')
//output: %2Fmy%20page.aspx%23comment3

encodeURI is for encoding characters that are not valid anywhere in a URI, so the encodeURI example above would have been great for appending a path to a domain, like so: 'http://www.britishdeveloper.co.uk' + encodeURI(url)

encodeURIComponent is for encoding a string so that it is fit for use as a querystring param, where characters such as /, & and # need to be escaped, as in my example above.
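
Applied to the link from earlier (same url and userID values as above), the fix is just a case of swapping the encoder:

var link = 'http://www.britishdeveloper.co.uk?link=' + encodeURIComponent(url) + '&userID=232';
//e.g. http://www.britishdeveloper.co.uk?link=%2Fmy%20page.aspx%23comment3&userID=232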

So know your JavaScript encoding!

Monday 28 May 2012

Could not load file or assembly 'msshrtmi' or one of its dependencies

Sometimes after publishing my Azure solution I get a yellow screen of death giving a "System.BadImageFormatException" exception with a description of "Could not load file or assembly 'msshrtmi' or one of its dependencies. An attempt was made to load a program with an incorrect format."

I tried everything to get rid of this msshrtmi complaint: building it again, rebuilding it, cleaning the solution, even restarting my computer, and that fixes everything in Windows ever!

Very strange as my newly published Azure solution is fine but my development site is now at the mercy of this mysterious msshrtmi assembly.. whatever the hell that is.

Kill msshrtmi! Kill it with fire!

So it is an assembly, right? Assemblies go in the bin folder... let's look there... There it is! msshrtmi.dll. DELETE IT!

I don't know what it is and I didn't put it there so I deleted it and all is back to normal. Excellent.

How to turn off logging on Combres

I have been using Combres for my .NET app to minify, combine and compress my JavaScript and CSS. So far I have found it awesome.

It really does do everything it claims: it minifies your scripts and CSS, then combines them so that all the code is sent via one HTTP request (well, one for CSS and one for JavaScript). It also handles GZip or deflate compression should the request accept it. It is easy to set up since it is now available through NuGet, and using it will make YSlow smile when it scans your site.

Logging

One thing that is annoying, though, is that Combres seems to log everything it does: "Use content from content's cache for Resource x... Use content from content's cache for Resource y". This is fine in development but unnecessary in production, so I wanted to turn it off but couldn't find how to do this in any documentation.

The way I managed to turn off logging was to find the line that Combres put in the web.config, which looks like this:

<combres definitionUrl="~/App_Data/combres.xml" logProvider="Combres.Loggers.Log4NetLogger" />

You simply need to remove the logProvider so it looks like this:

<combres definitionUrl="~/App_Data/combres.xml" />

If you still want logging in your development environment, you can keep the logProvider attribute in your base web.config and remove it with a web.config transform for your production build.
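
For example, a Web.Release.config transform along these lines would strip the attribute for release builds (this is a sketch; it assumes the xdt namespace is declared on the configuration root, as the default transform files do):

<combres definitionUrl="~/App_Data/combres.xml" xdt:Transform="RemoveAttributes(logProvider)" />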

Tuesday 22 May 2012

Export and back up your SQL Azure databases nightly into blob storage

With Azure I have always believed that if you can do it with the Azure Management Portal then you can do it with a REST API. So I thought it would be a breeze to make an automated job that runs every night to export and back up my SQL Azure database into a BACPAC file in blob storage. I was surprised to find that scheduling BACPAC exports of your SQL Azure databases is not documented in the Azure Service Management API. Maybe it is because BACPAC exporting and importing is in beta? Never mind. I now have a worker role successfully backing up my databases, and here's how:

It is a REST API, so you can't use nice WCF to handle all your POST data for you, but there is still a trick to avoid writing out all your XML parameters by hand: strongly typing a few classes instead.

Go to your worker role or console application and add a service reference to your particular DACWebService (it varies by region):

  • North Central US: https://ch1prod-dacsvc.azure.com/DACWebService.svc
  • South Central US: https://sn1prod-dacsvc.azure.com/DACWebService.svc
  • North Europe: https://db3prod-dacsvc.azure.com/DACWebService.svc
  • West Europe: https://am1prod-dacsvc.azure.com/DACWebService.svc
  • East Asia: https://hkgprod-dacsvc.azure.com/DACWebService.svc
  • Southeast Asia: https://sg1prod-dacsvc.azure.com/DACWebService.svc

Once you import this Service Reference you will have some new classes that will come in handy in the following code:

//these details are passed into my method but here is an example of what is needed
var dbServerName = "qwerty123.database.windows.net";
var dbName = "mydb";
var dbUserName = "myuser";
var dbPassword = "Password!";

//storage connection is in my ServiceConfig
//I know the CloudStorageAccount can be obtained in one line of code
//but this way is necessary to be able to get the StorageAccessKey later
var storageConn = RoleEnvironment.GetConfigurationSettingValue("Storage.ConnectionString");
var storageAccount = CloudStorageAccount.Parse(storageConn);

//1. Get your blob storage credentials
var credentials = new BlobStorageAccessKeyCredentials();
//e.g. https://myStore.blob.core.windows.net/backups/mydb/2012-05-22.bacpac
credentials.Uri = string.Format("{0}backups/{1}/{2}.bacpac",
    storageAccount.BlobEndpoint,
    dbName,
    DateTime.UtcNow.ToString("yyyy-MM-dd"));
credentials.StorageAccessKey = ((StorageCredentialsAccountAndKey)storageAccount.Credentials)
                                   .Credentials.ExportBase64EncodedKey();

//2. Get the DB you want to back up
var connectionInfo = new ConnectionInfo();
connectionInfo.ServerName = dbServerName;
connectionInfo.DatabaseName = dbName;
connectionInfo.UserName = dbUserName;
connectionInfo.Password = dbPassword;

//3. Fill the object required for a successful POST
var export = new ExportInput();
export.BlobCredentials = credentials;
export.ConnectionInfo = connectionInfo;

//4. Create your request
var request = WebRequest.Create("https://am1prod-dacsvc.azure.com/DACWebService.svc/Export");
request.Method = "POST";
request.ContentType = "application/xml";
using (var stream = request.GetRequestStream())
{
    var dcs = new DataContractSerializer(typeof(ExportInput));
    dcs.WriteObject(stream, export);
}

//5. make the POST!
using (var response = (HttpWebResponse)request.GetResponse())
{
    if (response.StatusCode != HttpStatusCode.OK)
    {
        throw new HttpException((int)response.StatusCode, response.StatusDescription);
    }
}

This code could run in a scheduled task or worker role set to run at 2am each night, for example. It is important that you have appropriate logging and notifications in the event of failure.

Conclusion

This sends off the request to start the back up / export of the database into a BACPAC file. A successful response is no indication that the back up itself was successful, only that the request was submitted. If your credentials are wrong you will get a 200 OK response but the back up will fail silently later.

To see if it has been successful you can check on the status of your exports via the Azure Management Portal, or by waiting a short while and having a look in your blob storage.

I have not covered importing because, really, exporting is the boring yet important activity that must happen regularly (such as nightly). Importing is the one you do on the odd occasion when there has been a disaster, and the Azure Management Portal is well suited to such an occasion.

Friday 18 May 2012

How to get Azure storage Account Key from a CloudStorageAccount object

I have a CloudStorageAccount object and I want to get the AccountKey out of it. Seems like you should be able to get the Cloud Storage Key out of a CloudStorageAccount but I did struggle a bit at first.

I used the CloudStorageAccount.FromConfigurationSetting(string) method at first and then played about in debug mode to see if I could find it.

I then found that that method doesn't give access to the key, which left me unable to find my Azure storage access key. I then tried the same thing using CloudStorageAccount.Parse(string) instead, and this did have access to the Azure storage access key.

//this method of getting your CloudStorageAccount is no good here
//var account = CloudStorageAccount.FromConfigurationSetting("StorageConnectionStr");

//these two lines do...
var accountVal = RoleEnvironment.GetConfigurationSettingValue("StorageConnectionStr");
var account = CloudStorageAccount.Parse(accountVal);

//and then you can retrieve the key like this:
var key = ((StorageCredentialsAccountAndKey)account.Credentials)
          .Credentials.ExportBase64EncodedKey();

It is strange that CloudStorageAccount.FromConfigurationSetting doesn't give you access to the account key in your credentials but CloudStorageAccount.Parse does. Oh well, hope that helps.

Thursday 10 May 2012

Set maxlength on a textarea

It's annoyed me for quite a while that you can set a maximum length on an <input type="text" /> but not on a textarea. Why? Are they so different?

I immediately thought I was going to have to write some messy JavaScript but then I learned that HTML5 implements maxlength on text areas now and I'm only considering modern browsers! Wahoo!

Then I learnt that IE9 doesn't support it so JavaScript it is...

JavaScript textarea maxlength limiter

What I have done that works well is bind the keyup and blur events on my textarea to a function that removes characters over the maxlength provided. The code looks like this:

$('#txtMessage').bind('keyup blur', function () {
    var $this = $(this);
    var len = $this.val().length;
    var maxlength = $this.attr('maxlength');
    if (maxlength && len > maxlength) {
        $this.val($this.val().slice(0, maxlength));
    }
});

Conclusion

It works quite well because, if the browser already supports maxlength on a textarea, there is no interruption: the value of the textarea will never go over that maxlength anyway.

The keyup event doesn't fire when the user pastes text in using the mouse but that is where the blur event comes in. Also, an enter click makes a new line on a textarea so the user has to click the submit button (blurring the textarea).

Beware though that this is for usability only; a nasty user could easily bypass this so ensure you are checking the length server side if it is important to you.
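
If the length matters, a server-side guard could look something like this (a sketch for an ASP.NET MVC action; the action name, parameter and 500-character limit are all hypothetical):

[HttpPost]
public ActionResult SaveMessage(string message)
{
    //never trust the client-side limiter; enforce the length again here
    if (string.IsNullOrEmpty(message) || message.Length > 500)
    {
        return new HttpStatusCodeResult(400);
    }
    //...save the message, then return success
    return new HttpStatusCodeResult(200);
}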

Wednesday 18 April 2012

jQuery - Don't use bind and definitely don't use live!

If you use jQuery you have almost certainly used the .click(fn) event. You may know that just calls .bind('click', fn). You may have used live(). You may know that live() is weird and buggy and inefficient. You may have heard you should not use it in favour of .delegate(). You may have heard of the new .on(). But which is better? bind vs live vs delegate vs on?

Take the following html:

<div id="myContainer">
    <input type="button" id="myBtn" value="Click me!" />
</div>
<script type="text/javascript">
    $('#myBtn').click(doSomething);
</script>

I have a button that has a javascript function called doSomething attached to it. This works fine until I dynamically replace the inner HTML of the div myContainer with a new button (with same ID etc). doSomething will no longer be attached to the new myBtn button so it will not be called when the button is clicked.

FYI, $('#myBtn').click(doSomething); is just shorthand for writing $('#myBtn').bind('click', doSomething);

So you can change the JavaScript code above to $('#myBtn').live('click', doSomething); and it will still work even when the button is brought in or generated dynamically. Up until now I had found that the live function uses magic to just make it all work, even after ajax abuse, and therefore that it must be better.

However, recently (in a more complex implementation) I found it was causing some undesired behaviour, so I had a dig into it. It turns out live() is buggy, performs badly and has actually been deprecated as of jQuery v1.7, so do not use live! What should you be using for this type of functionality then?

Well, delegate has been the popular replacement since it attaches the method once to a selected container but still targets the inner element as bind or click would. The importance is twofold:

  1. The functionality is not wastefully attached to each element matched and instead just once to the containing element
  2. Dynamically added items that match the selector will still fire the event since the event is not attached to this element, it is attached to the container that has remained static

In fact live does much the same as delegate, but its containing element cannot be defined: it is always the whole document, so if you have a deep DOM, finding the originating element using the selector can mean searching a long way. Lame... I also found bugs with it, and there are other details which mean you should use delegate instead. I say should because there is a new cool kid on the block.

So what is better than bind, live and delegate?

You should use .on(). All the cool kids are doing it. Want me to name drop users of on? Well, ME! Perhaps more famous in the JavaScript world is jQuery, and they use it! If you look at the jQuery library, bind, live and delegate all just call on as of jQuery v1.7+.

So my JavaScript code above should change to something along these lines:
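
//the handler lives on the static container and is delegated to the button selector
$('#myContainer').on('click', '#myBtn', doSomething);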

Bind and click are still fine in my opinion as nice little shortcuts, but only when using them to attach a function to one specific element and only when you are not attaching to dynamic objects. If you are attaching to, say, multiple <li>s in a <ul> you should use on() instead, to attach the event to the <ul> and target it at the <li>s. This way there is only one handler function with multiple references to it, rather than creating the function once for each of the <li>s.
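
For example (the list selector here is hypothetical), one delegated handler covers every current and future <li>:

$('#results').on('click', 'li', function () {
    $(this).toggleClass('selected');
});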

Tuesday 17 April 2012

What to do with LastIndexOf unexpected ArgumentOutOfRangeException behaviour

Using string.LastIndexOf(char value, int startIndex, int count) started giving me some behaviour I wasn't expecting: an ArgumentOutOfRangeException with the description "Count must be positive and count must refer to a location within the string/array/collection."

Let me give you an example:

var message = ".NET is unusually faultless!";
var maxSearch = 16;
var count = message.LastIndexOf('l', 0, maxSearch);
var newMsg = message.Substring(0, count);

This should work right? What could be wrong with this? Well according to IntelliSense "Reports the index position of the last occurrence of the specified Unicode character in a substring within this instance. The search starts at a specified character position and examines a specified number of character positions." Hmm.. Okay. And the count parameter that I am getting wrong? "The number of character positions to examine." Seems legit.

I decided to disassemble .NET to have a peep at what's going on internally... Have I found a bug in .NET?! Can't tell because the method is extern, so I cannot look at it. I instead did some experimenting and worked it out.

How to properly use LastIndexOf

Turns out I was just using it incorrectly and blaming my tools. Although I am convinced some blame lies with IntelliSense for forgetting to mention that LastIndexOf searches backwards through the string! Should I have known that? I don't know. Anyway this is what I should have written:

var message = ".NET is unusually faultless!";
var maxSearch = 16;
var count = message.LastIndexOf('l', maxSearch, maxSearch);
var newMsg = message.Substring(0, count);

The startIndex parameter should be the last point you want to search backwards from. The count then specifies the number of characters for your search to go back through.

Maybe I'm just being stupid, or maybe it is just that Console.WriteLine(newMsg); outputs ".NET is unusual".

Tuesday 27 March 2012

Reuse Select function in Linq

You can reuse LINQ Select projections by defining a Func delegate that can be used in multiple LINQ queries.

I had a few Linq to Objects queries that were querying the same collection but had different Where clauses that were nested between various if statements. The similarity between all of these statements was the Select function. This was quite long itself and it was annoying seeing such repetition. DRY!

So I created a Func that I could just pass into my Select calls. Previously, for example, the code could look like this:

ReturningPerson person;
if (iFeelLikeIt)
{
    person = people.Where(p => p.IsNameFunny)
                .Select(input =>
                    new ReturningPerson
                    {
                        DateOfBirth = input.DateBorn,
                        Name = string.Format("{0} {1}",
                            input.FirstName, input.LastName),
                        AwesomenessLevel = input.KnowsLinq ? "High" : "Must try harder",
                        CanFlyJet = false
                    })
                .FirstOrDefault();
}
else
{
    person = people.Where(p => p.Toes == 10)
                .Select(input =>
                    new ReturningPerson
                    {
                        DateOfBirth = input.DateBorn,
                        Name = string.Format("{0} {1}",
                            input.FirstName, input.LastName),
                        AwesomenessLevel = input.KnowsLinq ? "High" : "Must try harder",
                        CanFlyJet = false
                    })
                .FirstOrDefault();
}

This could be modified to work like this:

Func<InputPerson, ReturningPerson> selector = input =>
    new ReturningPerson
    {
        DateOfBirth = input.DateBorn,
        Name = string.Format("{0} {1}",
            input.FirstName, input.LastName),
        AwesomenessLevel = input.KnowsLinq ? "High" : "Must try harder",
        CanFlyJet = false
    };
ReturningPerson person;
if (iFeelLikeIt)
{
    person = people.Where(p => p.IsNameFunny)
                .Select(selector)
                .FirstOrDefault();
}
else
{
    person = people.Where(p => p.Toes == 10)
                .Select(selector)
                .FirstOrDefault();
}

Tuesday 13 March 2012

Error during serialization or deserialization using the JSON JavaScriptSerializer. The length of the string exceeds the value set on the maxJsonLength property

Whilst creating a JsonResult for my web service in ASP.NET MVC I received an "InvalidOperationException" with the description "Error during serialization or deserialization using the JSON JavaScriptSerializer. The length of the string exceeds the value set on the maxJsonLength property."

In fairness I was sending quite a large JSON object at the time, largely due to there being 288 base64 embedded images totalling ~15MB, I'd guess... Whoops! Anyway, it may be a hilariously large amount of data, but it is what I want, so how do I work around this?

There is a web.config setting that can resolve this, which was my first discovery on my path to success:

<system.web.extensions>
  <scripting>
    <webServices>
      <jsonSerialization maxJsonLength="1000000000" />
    </webServices>
  </scripting>
</system.web.extensions>

This is the official word from Microsoft about this, but unfortunately it only works when you are explicitly serialising (or deserialising) things yourself. It has no bearing on the inner workings of the framework, such as my bit of MVC code, which is currently as follows:

public JsonResult GetData()
{
    return Json(GetCrazyAmountOfJson(), JsonRequestBehavior.AllowGet);
}

So for those of you using the Json() method in ASP.NET MVC, or some other similar .NET framework voodoo like me, the workaround is to write your own code and bypass the framework, such as:

public ContentResult GetData()
{
    var data = GetCrazyAmountOfJson();
    var serializer = new JavaScriptSerializer();
    serializer.MaxJsonLength = int.MaxValue;
    var result = new ContentResult();
    result.Content = serializer.Serialize(data);
    result.ContentType = "application/json";
    return result;
}

MaxJsonLength is an Int32, so the maximum length can only be the maximum int value (about 2.1 billion); you cannot make it unlimited. I assume this limit is here to make you think twice before making crazy big JSON serialisations. Did it?

Thursday 9 February 2012

Add Cloud and Local service configurations when there is only a Default

I would like Cloud and Local versions of my service configuration so that I can vary some of the configuration settings between deployments. Sometimes you have only one service configuration, called "Default", which is not enough. This guide shows how to add more.

So you only have one service configuration file? You would be better off with more than one, since it is likely you would want different configuration settings defined for different deployments, e.g. the storage emulator for local deployments and Azure storage for cloud deployments.

Right click on one of your roles and go to properties. On the Service Configuration drop down select <Manage...>

On the Manage Service Configurations dialogue box you want to click the Default configuration (for example) and click 'Create copy' and rename it to 'Local'. Do the same again for 'Cloud' and then remove 'Default'.

You should now have two versions of the service configuration: Local and Cloud.

These two versions of the ServiceConfiguration, Local and Cloud, now exist, which conforms to the standard set-up for a new Azure service. Of course, they will initially be duplicates of each other, so you will need to make the appropriate changes to reflect that, e.g. point the storage account at Azure in the Cloud configuration and at the storage emulator in the Local configuration.
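
As a rough illustration (the setting name and account details here are hypothetical), the two files might then differ only in values like this:

<!-- ServiceConfiguration.Local.cscfg -->
<Setting name="Storage.ConnectionString" value="UseDevelopmentStorage=true" />

<!-- ServiceConfiguration.Cloud.cscfg -->
<Setting name="Storage.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..." />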

Thursday 19 January 2012

Interesting little Azure points

I often discover little interesting facts about Azure while I am using it that I would blog about, but they would only last one line/paragraph so I don't bother. This is what I should tweet about, I suppose, but I get the feeling my tweets (@britishdev) get lost within 10 mins, surrounded by your little cousin Kate reporting, "@KatezLulz: OMG I just ate broccoli!!!1! :p", and I can't compete with that. So here is a blog post which I will try to keep updated with short and sharp points about Windows Azure.

#1 What happens to your Azure deployment if your machine is turned off halfway through?

It depends where you are in your deployment. As long as the uploading phase is done you will be fine. Sort of. Visual Studio and the management portal make it look a lot simpler than it is to deploy an Azure package, but it happens in steps like Upload, Create, Start instances, etc. (or along those lines anyway). So if you switch your computer off after you have uploaded, you will not lose everything and have to start again, but it won't have deployed fully either. Next time you log in to the management portal you will find your instances are there but "Stopped", so you simply need to start them using the great big Start button in the Azure management portal and it is done. Trivial.

#2 You can filter your Azure Table Storage viewer in Visual Studio

Type filters into the bar such as Timestamp gt datetime'2012-01-19T10:00:00Z'. This will get every record after 19th Jan 2012 10am. Here is a full list of table filters.

Friday 13 January 2012

Case sensitive Azure storage

Azure storage URLs are case sensitive, as you may have noticed. If you have not noticed then: OMG AZURE STORAGE URLS ARE CASE SENSITIVE!!1! This is most likely because the URL specification states that URLs are case sensitive, so that different casings can represent different locations. Admittedly, this is rather odd when compared with IIS and, subsequently, web sites running in Azure.

I am a developer and architect by trade so I am more than aware of the importance of maintaining consistent URLs throughout a web site. Search engines index sites in a case sensitive way, so it is important not to accidentally display duplicate web content on differently cased URLs. To handle this in the past I have always insisted that my teams adopt a lower case policy, whereby every URL written in code, links, references etc. is always written in lower case. This avoids such an SEO disaster. I also create a redirect rule using the IIS URL Rewrite Module to 301 redirect any URL containing uppercase characters to its lowercase version.

With this policy in mind it should not be difficult to use Azure storage while maintaining these standards. It is wise to create a repository class that handles all interaction with blob, table and queue storage and abstracts common rules away from the developer. One of these rules would be to ensure that, when saving to these repositories, the filenames and containers are lower cased using .ToLower() (in C#). When getting an object from storage you could ensure that the requested file name is lower cased in the same way.
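
A minimal sketch of that rule, assuming the repository holds a CloudBlobContainer called Container (the method names here are illustrative):

public void Save(string fileName, Stream content)
{
    //always store against the lower-cased name so URLs stay consistent
    var blob = Container.GetBlobReference(fileName.ToLower());
    blob.UploadFromStream(content);
}

public CloudBlob Get(string fileName)
{
    //normalise requests in the same way
    return Container.GetBlobReference(fileName.ToLower());
}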

This does not, however, prevent users accessing the URL using uppercase, but really, according to the URL specs, they should not, and with consistent code you can largely avoid it happening. For example, if links are only ever displayed in lowercase, someone is very unlikely to ever access them using uppercase.

Azure case sensitivity conclusion

So in summary, although it is an odd inconsistency between IIS and storage it is only a trivial programming exercise to enforce and a minor coding standard to communicate to your team. This will ensure that storage is always used as intended allowing you to reap the great benefits of using Windows Azure Storage.

Wednesday 11 January 2012

Using the Azure API to see a deployment status using .NET

See the status of your Windows Azure deployments using the Windows Azure Service Management REST API. Since this is REST based you can use any framework or programming language that can make web requests: Python, Java, etc. Here it is in .NET.

Here is a short bit of C# code that will allow you to call the part of the Windows Azure Service Management REST API that deals with getting the status of your hosted service in Azure.

You can see how the REST API is expected to be used here at Get Hosted Service Properties. This code accesses that API:

static void Main(string[] args)
{
    var subscriptionId = "f62e5e87-5c76-4a94-9136-794fae3eff16";
    var hostedService = "colintest";
    //I have another post that details how GetCertificateByThumbprint method works:
    //http://www.britishdeveloper.co.uk/2012/01/adding-certificate-to-request-in-net.html
    var certificate = GetCertificateByThumbprint("23A43AE81F15CB000000000000000000000000000");

    var statusApiUrl = string.Format(
       "https://management.core.windows.net/{0}/services/hostedservices/{1}?embed-detail=true",
       subscriptionId, hostedService);
    var hostedServiceStatus = new Uri(statusApiUrl);
    Console.WriteLine("Hosted service status");
    MakeApiRequest(hostedServiceStatus, certificate);
    
    Console.ReadKey();
}

private static void MakeApiRequest(Uri requestUri, X509Certificate2 certificate)
{
    var request = (HttpWebRequest)HttpWebRequest.Create(requestUri);
    request.Headers.Add("x-ms-version", "2011-10-01");
    request.Method = "GET";
    request.ContentType = "application/xml";
    request.ClientCertificates.Add(certificate);

    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Response status code: " + response.StatusCode);

            using (var responseStream = response.GetResponseStream())
            using (var reader = new StreamReader(responseStream))
            {
                Console.WriteLine("Response output:");
                Console.WriteLine(reader.ReadToEnd());
            }
            Console.WriteLine("");
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
        throw;
    }
}

Since you have used ?embed-detail=true in the querystring, the response includes extra detail. From here you can get all sorts of useful information, such as Status (e.g. Running) or DeploymentSlot (e.g. Production).
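
If you want to pull a value out programmatically rather than reading the console output, a rough sketch could look like this (it assumes you capture the response body in a responseXml string and that the response uses the http://schemas.microsoft.com/windowsazure namespace):

var doc = XDocument.Parse(responseXml);
XNamespace ns = "http://schemas.microsoft.com/windowsazure";
var status = doc.Descendants(ns + "Status").Select(x => x.Value).FirstOrDefault();
Console.WriteLine("Deployment status: " + status);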

Note: The GetCertificateByThumbprint(string thumbprint) method I used is of course simplifying attaching a certificate to the request for the sake of conciseness. You can have a look at what this method is doing here at attaching a certificate to a WebRequest.

Adding a certificate to a request in .NET

Sometimes you need to send a security certificate with your WebRequest to authenticate with the web service you are accessing, e.g. a REST API. This guide shows you how to do that using .NET.

First you need to be able to find your certificate. Sometimes it is hard to remember where exactly your certificate is, so I have created two methods which together will search all the usual locations until they find the certificate based on its thumbprint.

This bit of code finds and returns the certificate:

//Returns a certificate by searching through all likely places
private static X509Certificate2 GetCertificateByThumbprint(string thumbprint)
{
    X509Certificate2 certificate;
    //foreach likely certificate store name
    foreach (var name in new[] { StoreName.My, StoreName.Root })
    {
        //foreach store location
        foreach (var location in new[] {StoreLocation.CurrentUser, StoreLocation.LocalMachine})
        {
            //see if the certificate is in this store name and location
            certificate = FindThumbprintInStore(thumbprint, name, location);
            if (certificate != null)
            {
                //return the resulting certificate
                return certificate;
            }
        }
    }
    //certificate was not found
    throw new Exception(string.Format("The certificate with thumbprint {0} was not found",
                                       thumbprint));
}

private static X509Certificate2 FindThumbprintInStore(string thumbprint,
                                                      StoreName name, StoreLocation location)
{
    //creates the store based on the input name and location e.g. name=My
    var certStore = new X509Store(name, location);
    certStore.Open(OpenFlags.ReadOnly);
    //finds the certificate in question in this store
    var certCollection = certStore.Certificates.Find(X509FindType.FindByThumbprint,
                                                     thumbprint, false);
    certStore.Close();

    if (certCollection.Count > 0)
    {
        //if it is found return
        return certCollection[0];
    }
    else
    {
        //if the certificate was not found return null
        return null;
    }
}

With this method created that gets a certificate you can now easily add a certificate to an HttpWebRequest like so:

var request = (HttpWebRequest)HttpWebRequest.Create("https://mysecureapi.com/listofsecrets");
var certificate = GetCertificateByThumbprint("23A43AE81F15CB000000000000000000000000000");
request.ClientCertificates.Add(certificate);

Now when you set up the rest of your request and call GetResponse you will be sending the certificate too.

Saturday 7 January 2012

Do not use iisreset in Azure

So you have remote desktopped into one of your Azure instances and you are free to do anything you like, right? Wrong! Do not change things! And, as I found, do not use IISReset.

I have heard many times that remoting in to an Azure instance is for looking and debugging only, NOT for changing things. In fact I have even given this advice to many clients I have spoken to about Azure. But who am I to practice what I preach?

Really, I give the advice of not changing things when RDPing into an Azure instance because any changes you make will at some point be lost when your instances are automatically updated for you and then redistributed to other machines. Any changes you wish to be permanent on your machine will need to be part of your package.

Anyway, you can see why I avoid making changes (because of the lack of persistence), but that doesn't mean I shouldn't do a cheeky IIS reset when trying to fix an issue, right? Hmm, wrong.

Do not restart IIS on an Azure instance

I do not know why; there must be some black magic Azure voodoo that goes on after IIS is initialised that doesn't happen when you restart it yourself manually. Anyway, I learnt that it will completely destroy your instance: the site will no longer respond from that instance. The best way to effectively do an IIS reset is to reboot your instance from the Azure Management Portal.

Friday 6 January 2012

How to run Crystal Reports on Azure

Here is a step by step guide on how to make an ASP.NET project that uses Crystal Reports run successfully on Azure. If you try to run a Crystal Report in your ASP.NET site without the Crystal Reports runtime installed you will receive a "System.Runtime.InteropServices.COMException" with description "The Report Application Server failed".

The problem is that you need to install the Crystal Reports runtime. This isn't a problem with regular hosting since you can just install Crystal Reports on each of your servers and off you go.

With Azure, though, if you remote into the machine and install it, it will work fine until your deployment is redistributed to another machine, which will happen at some point due to the nature of cloud computing.

How to install Crystal Reports on your Azure web role

Fortunately it is still easy with Azure. Easy when you know how anyway. Here are the steps you will need to take:

First of all you will need to download the SAP Crystal Reports runtime engine for .NET Framework 4 (64-bit). This should extract as a msi file called CRRuntime_64bit_13_0_2.msi.

In your web application in Visual Studio you should paste this msi file at the root of your web project and include it in the project. Right click it in the Solution Explorer and set its 'Build Action' to 'None' and also set its 'Copy to Output Directory' to 'Always Copy'.

Next you will create a command file to execute this msi file. Create a new text file, call it StartUp.cmd and then save it in the root of your web project (next to the msi). In that file write the following:

@ECHO off

ECHO "Starting CrystalReports Installation" >> log.txt
msiexec.exe /I "CRRuntime_64bit_13_0_2.msi" /qn
ECHO "Completed CrystalReports Installation" >> log.txt

Set the properties of StartUp.cmd to 'Build Action' = 'None' and 'Copy to Output Directory' = 'Always Copy'.

Now in your ServiceDefinition.csdef make this cmd file a start up task by adding the following lines:

<WebRole name="Web" vmsize="Small">
  ...
  <Startup>
    <Task commandLine="StartUp.cmd" executionContext="elevated" taskType="background" />
  </Startup>
</WebRole>

You are now instructing each instance that starts up with your package to run the Crystal Reports msi file that installs the runtime on the instance ready for its use in Azure.

A few Crystal Reports on Azure tips

I ran into a few bits and bobs which caused me unnecessary pain along the seemingly clean process outlined above. I will share them with you, in no particular order, in case you run into them too.

  • Make sure your .rpt Crystal Report files are set to Build Action: Content and Copy to Output Directory: Copy always.
  • Don't be alarmed with how long it takes to deploy. It will take much longer to upload than usual because you are now uploading an extra 77MB of installation files. It took me an hour to deploy on my home connection!
  • Ignore all the warnings about how your web project is dependent on various Crystal Report assemblies since Azure will have them just as soon as your installation file runs.
  • Configure a remote desktop connection when you do your deployments since it will be invaluable should anything go wrong and at an hour per deployment you don't want to be messing about.
  • Visual Studio may have added a load of random assembly references to your web.config that you are not aware of, don't need, and that may even cause problems (log4net, for example).

That is all. Good luck - it's very satisfying when you get it going.

Wednesday 4 January 2012

Azure AppFabric Cache billing

I have started using Windows Azure AppFabric Cache service recently and I was confused at how much money it was costing.

I started using a new subscription yesterday and I seem to have already used 53.68 MB of my 128MB cache. In one day?!

However, if I look at my cache in the Azure management portal I am not using it at all!

So where has this 53MB of AppFabric Cache usage come from? Since I had only put my site live less than 24hrs ago, I was worried I was blazing through my 128MB/month allowance, but then I thought about it logically for a moment. You don't use up a cache, you just use it. It doesn't go anywhere.

How AppFabric Cache usage is calculated and billed

With this in mind it becomes obvious what has happened. I set my AppFabric cache up before I got around to deploying my site. So really it has been used since 22nd December, which is 13 days ago. It is irrelevant that I have deployed a site onto it only yesterday.

Look at the maths: 128MB/31days = 4.13MB/day * 13days = 53.68MB. It is a confusing way of displaying it but still, it makes sense.
