Tuesday 13 December 2011

Understanding Windows Azure spending limits

If you have logged into the Azure billing portal since 12th December 2011 you will have noticed it has had a facelift, as well as some changes to its functionality; in particular, a spending cap on billing.

I just wanted to share my findings on the new spending limits feature. From now on there will be a $0 spending cap on your account when you sign up for a free trial. This ensures you will not be charged during your free trial. Beware though: once this cap is met your free trial service will stop working, and you know that will happen just before your prototype demo.

Let me just clear up some things, because I was a bit confused about exactly what had changed until I did some digging about. That is partly because my imagination just invented how the spending caps should work, and partly because it is difficult to find documentation on this.

If you set up a new free trial FROM NOW you will get the $0 spending limit. If you had one before (presumably before 12th Dec) you have a little note saying you have removed the spending limit. You didn't; you just never got the limit, because you registered for a free trial before the spending cap facility was available. A confusingly worded notification is at fault really.

So here is what I found on the payment caps:

  • If you are not on a free trial you have no spending cap and you cannot create one ($0 or otherwise)
  • If you are on a free trial since before 12th December 2011 it will say you removed the spending limit
  • If you joined a free trial on or after 12th December 2011 you will automatically have a $0 billing limit
  • If you remove the $0 spending limit you cannot put it back

My thoughts

I think it is a nice idea to keep people reassured that they will not be billed during a free trial. I understand some people felt hard done by when they were charged during this evaluation period (despite how well communicated and fair the free trial usage limits were... but anyway).

However, I think the Azure team has missed a trick here. Why not allow people to set custom spending limits on their accounts? It would be very reassuring to know I would not be billed more than £x a month if I had a fixed budget. It seems the infrastructure for such a feature is there; it is just missing the implementation at this time.

Maybe in time those of us willing to invest in Azure will get the same reassurances as the free trial users?


Wednesday 7 December 2011

Where are my .NET certificates and their thumbprints? Ask Powershell

I ran into a confusing situation where I was trying to pass a certificate to a PowerShell command but found it surprisingly hard to track down my certificates.

I was trying to use a certificate on my local machine that I had created earlier, but where the hell are they? Open Microsoft Management Console (type mmc into the start bar) and add the Certificates snap-in. Choose either Current User or Local Computer, whichever is most relevant for you.

Let's dispel some weirdness. My = Personal. For some hilarious reason there is no 'My' folder; it is actually called 'Personal'. This is the place you should be looking for the certificates you have created yourself.

So if you have found the certificates you were after you will probably need the thumbprint... but it is buried in the MMC dialogs and a pain to copy out. Awesome.

If you open up PowerShell and type the command:

get-childitem -path cert:\CurrentUser\My

This will list all of the certificates in the Current User\Personal folder for you by thumbprint. Very handy. For the ones in Local Machine, simply replace CurrentUser with LocalMachine.
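
If you would rather get the same listing from .NET code, the X509Store class reads the same certificate stores. A minimal sketch (a standalone console program):

using System;
using System.Security.Cryptography.X509Certificates;

class CertificateLister
{
    static void Main()
    {
        //StoreName.My is the 'Personal' folder; use StoreLocation.LocalMachine for Local Computer
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);

        foreach (X509Certificate2 cert in store.Certificates)
        {
            Console.WriteLine("{0}  {1}", cert.Thumbprint, cert.Subject);
        }

        store.Close();
    }
}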

Hopefully this will save you some time when dealing with this confusing certificate nightmare.


Tuesday 29 November 2011

Conflicting versions of ASP.NET Web Pages detected

I was going through the Windows Azure Platform Training Kit, Exercise 1 of the Service Bus Messaging lab and I ran into the following error message when trying to run my site, "Conflicting versions of ASP.NET Web Pages detected: specified version is "1.0.0.0", but the version in bin is "2.0.0.0". To continue, remove files from the application's bin directory or remove the version specification in web.config."

I checked the web.config and the assembly specified was indeed 1.0.0.0: <add assembly="System.Web.WebPages, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />.

When I checked the System.Web.WebPages reference in my project though it also said its version is 1.0.0.0. So what's the problem?

System.Web.WebPages.Deployment

The assembly System.Web.WebPages.Deployment was referenced in my project, and on closer inspection it was version 2.0.0.0, so it must have been this causing the problem. I do not need this assembly, so I deleted the reference and the error went away.


Monday 28 November 2011

Service 'SQL Azure Data Sync Preview' failed to start

While trying to install and start the SQL Azure Data Sync Preview service I ran across the error message "Service 'SQL Azure Data Sync Preview' (SQL Azure Data Sync Preview) failed to start. Verify that you have sufficient privileges to start system services." Here are the steps I had to take to get it working.

You have to get all of these right to be able to install and start the service successfully. Most are not very obvious, and for the final step I seem to be the only person on the internet to have had this problem (hence the blog post)! Anyway, here goes:

Get your username and domain right

Although it says you can enter the account with examples of "domain\user, .\localuser", you can't; it is a lie! You should fill out the "User Name" field like MACHINENAME\Username, so MYMACHINE\Colin if you were me.

Verify you have sufficient permissions

Your user may not have permission to log in to Windows Services so you need to make sure you can.

Go to:

  • Control Panel > Administrative Tools > Local Security Policy
  • Open Local Policies > User Rights Assignment
  • Click on Log on as a service
  • Click "Add User or Group..."
  • Set location to your machine
  • Type your user name into the box and click OK then Apply

Ensure you have a password

This was the problem I ran into. The installation just didn't seem to complete because my Windows user did not have a password. I pressed Ctrl + Alt + Del > Change password and gave myself an actual password rather than leaving it blank as I had done in the past. It then installed without a hitch.

Conclusion

So there are quite a few snags you may run into when installing the Data Sync service, but hopefully with my little checklist it won't take you as long as it took me. I'm guessing these sorts of snags are just a symptom of the service being in Preview rather than RTM, so they are understandable really.

For what it's worth, the preview of SQL Azure Data Sync is a great service; installing this Windows Service was the only issue I had in an otherwise simple process. The benefits of using the service are great: you can sync multiple databases both inside and outside of the Azure data centres. You only get charged for data leaving the data centre too, so any SQL Azure to SQL Azure synchronisation within the same data centre is free! Well, free for now anyway, since it is still in preview, so I'd keep an eye out for when it becomes production ready. Happy syncing!


Tuesday 22 November 2011

Share admin login with LiveID in the Azure Management Portal

Has this happened to you? Someone else signed up to use Windows Azure and you are the one who is working on it. You want to access the Azure Portal with a different LiveID, your own LiveID?

Let me guess: your CTO/director decided that Azure is awesome and signed up for its free trial. He had a click around and then quickly passed it on to you, the developer, to work on. Now you are getting irritated having to remember the weird new LiveID and password, and you are tired of having to log in and out of your other applications that use your LiveID.

Then the way forward is for you to add a Co-Admin to the Azure Portal. This will allow you to log in with your LiveID that you are used to.

Add new Co-Admin to your Azure account

In your Azure Management portal:

  • Go to the "Hosted Services, Storage Accounts & CDN" tab.
  • Then click "User Management".
  • Click "Add New Co-Admin".
  • Type in your LiveID and select the subscription they can administrate.

Your new LiveID is now added to the list of users and should be a Co-Administrator which has all the power you need to manage the technical side of your available subscriptions.


Monday 21 November 2011

Two host names appearing on Google for the same site

If you have an application hosted in Windows Azure it will have a host name such as mysite.cloudapp.net. You will probably also have a CNAME that points www.mysite.com to the cloudapp address. So what if both domains have been indexed by Google? How do you get the cloudapp.net URL off of Google?

This is not a problem with Windows Azure; this can happen to any site that has two hostnames/IPs that point to the same site. It is just a recurring issue I see during my Azure consultancy work and worth a quick explanation and solution.

Both mysite.cloudapp.net and mysite.com are pointing to the exact same application and servers so there is no way you can switch one off since they are one and the same, just accessible through two paths. (Well, more than two since they will at least have IP address paths too.)

This issue can be solved in your application code. Your application will need to determine what host name the application has been accessed through and either display the page if the host name is correct or 301 redirect to the correct hostname if the host name is wrong. So how to do it?

301 Redirect cloudapp.net to your correct domain

This can be done in your application by setting a redirect rule using the IIS URL Rewrite Module. There is already a template in IIS's Rewrite Module for doing this, called "Canonical domain name" under the SEO section; however, I don't particularly like it since it redirects everything that isn't the correct hostname (e.g. your development machine). Still, you should have a look at it to see what it is doing.

The key steps of what your rule should be doing are:

  1. Check to see if the host name is incorrect
  2. If it is it should issue a redirect to the correct domain name
  3. The redirect should be a 301
  4. The page should remain e.g. http://mysite.cloudapp.net/hello.html -> http://www.mysite.com/hello.html
  5. If the host name is not in violation the page rendering should continue without further action

So you can make your own however you feel is best but here is my example to get you going.

<rewrite>
    <rules>
        <rule name="CanonicalHostNameRule1">
            <match url="(.*)" />
            <conditions>
                <!-- only fire when the request came in on the cloudapp host name -->
                <add input="{HTTP_HOST}" pattern="^mysite\.cloudapp\.net$" negate="false" />
            </conditions>
            <!-- Redirect actions default to a 301 (Permanent); {R:1} preserves the requested path -->
            <action type="Redirect" url="http://www.mysite.com/{R:1}" />
        </rule>
    </rules>
</rewrite>

This code goes in the <system.webServer>...</system.webServer> section of your web.config. Again this is only an example, you may wish to add another condition for handling your IP address for example.
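
If you prefer to keep the rule in code rather than configuration, the same check can be made in Global.asax instead. A minimal sketch assuming the same pair of host names (Response.RedirectPermanent requires .NET 4; on earlier versions set the 301 status and Location header manually):

protected void Application_BeginRequest(object sender, EventArgs e)
{
    var url = Request.Url;
    if (url.Host.Equals("mysite.cloudapp.net", StringComparison.OrdinalIgnoreCase))
    {
        //rebuild the URL on the canonical host, keeping the path and query intact
        var canonical = new UriBuilder(url) { Host = "www.mysite.com" };
        Response.RedirectPermanent(canonical.Uri.ToString());
    }
}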

This redirect will solve the problem of any users visiting the incorrect site. Also, because it is a 301 redirect, over a week or two Google will remove the cloudapp.net URL from its search results and attribute its existing value to the correct URL.


Monday 14 November 2011

VIP swap with different endpoints in Azure

Attempting to Swap VIPs from the Azure Management Portal can return an error message "Windows Azure cannot perform a VIP swap between deployments that specify different endpoint ports" when trying to Swap VIPs between a staging and production environment with different endpoints.

Say you have a web role deployed to Azure and it is currently in production. It has one instance and one endpoint of http on port 80. You then publish a new web role to staging that is also one instance but this time it has an https endpoint on port 443.

Both will work as usual; however, when you come to swap the VIPs over you will receive the error "Windows Azure cannot perform a VIP swap between deployments that have a different number of endpoints", preventing you from putting your new site into production.

Is there a way around this?

I tried a few different tactics. I tried doing a straight upgrade to production (since I had tested the same deployment in staging) but this failed for the same reason as above.

I tried changing my new package to have the original http endpoint on port 80 as well as the new endpoint on port 443. This didn't work either. All this experimenting takes time so I thought I would save you the hassle and just tell you that neither of these work!

The only way(?)

The only way I found to get your new site into production is (you guessed it) to delete the old site that is to be replaced (scary!) and then click "Swap VIP". You will experience downtime of somewhere between 2 and 3 minutes, so pick the least destructive time to do this.

Note: Please, please test the new site on staging thoroughly, as you have just deleted the production package and it will take a long time to get back!


Thursday 10 November 2011

Cannot create database 'DevelopmentStorageDb20110816' in Storage Emulator Azure SDK

I have encountered this problem before, and this time I thought I would spend the time to crack it. When running the Storage Emulator from the Azure SDK or running DSINIT.exe for the first time it needs to initialise; part of this involves creating a new database. This was not running correctly due to a permissions problem which stated "Cannot create database 'DevelopmentStorageDb20110816' : CREATE DATABASE permission denied in database 'master'".

The full report is as follows:

Added reservation for http://127.0.0.1:10000/ in user account MSFT-123\MyUser.
Added reservation for http://127.0.0.1:10001/ in user account MSFT-123\MyUser.
Added reservation for http://127.0.0.1:10002/ in user account MSFT-123\MyUser.

Creating database DevelopmentStorageDb20110816...
Cannot create database 'DevelopmentStorageDb20110816' : CREATE DATABASE permission denied in database 'master'.

One or more initialization actions have failed. Resolve these errors before attempting to run the storage emulator again. These errors can occur if SQL Server was installed by someone other than the current user. Please refer to http://go.microsoft.com/fwlink/?LinkID=205140 for more details.

Grant permissions

The user this program runs as must be a sysadmin with full permissions on the database server. If this is not the case you can either change the user or GRANT permissions to the current user.

To change the user, run the Windows Azure SDK Command Prompt (as administrator) and type 'DSINIT /?'. This will give you details on how to change user, which is to use the /user: argument (e.g. DSINIT /user:MyUser).

Alternatively you could GRANT the permissions to your default user like so:

USE master
GRANT CREATE DATABASE TO [MYDOMAIN\MyUser]

Either of these solutions should solve your permissions problem with DSINIT.

Cannot GRANT permissions

Unfortunately my problems went further than this still. I believe this is to do with how many times I had installed SQL Server Express in the past. How irritating is that?! You own the machine and yet you don't seem to have the permissions that reflect that!

Anyway, the solution to make this work once again was (unfortunately) to uninstall SQL Server Express and reinstall it again. This way you will be the owner of the SQL Server Express database engine and you will be able to create all the databases you wish, including your long awaited Storage Emulator database.

A better way to regain admin access

A better way to regain admin access was pointed out to me by a colleague, Michael Coates. You can either solve this loss of administrator access on a SQL Server by following this troubleshooting guide from MSDN (this worked for one commenter), or you can run a batch script that will magically do it for you (this didn't work for one commenter). I have not done either of these, so take this advice at your own risk. Remember, my way was to delete and reinstall the server, so these cannot be more risky, surely?


Tuesday 8 November 2011

Cross database joins in SQL Azure

Currently, cross database joins are not supported in SQL Azure. You also cannot change database mid-query, so you cannot, for example, put a USE [MyDB] in your query either. As a side note, please vote for this to be a priority feature for the SQL Azure team to develop soon.

So, since cross database joins are not supported at this time you must find a workaround. I will give you two possible solutions I would recommend and you can hopefully choose the one that is best for your application.

Combine your databases

If you have tables that are frequently used together, i.e. they are joined in queries or their rows are inserted in the same transactions, then it would be a good idea to move those tables into the same database. This of course eliminates the need to traverse databases. SQL Azure has very recently increased the maximum database size from 50GB to 150GB, which potentially makes this a more viable option than it once was.

Join your data in your application

Two separate queries could be run on the two separate databases and the results joined within the application, as sketched below. Obvious downsides to this are the potential for large DB I/O, large network transfer and large memory usage in your app. This is not something to consider if the amount of data likely to be returned is large (e.g. 1000+ rows), but it is fine if the data will be manageable.
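
As an illustration of this second option, here is a minimal sketch with hypothetical Customer and Order types, where each list has already been loaded by its own query against its own database:

using System.Collections.Generic;
using System.Linq;

public class Customer { public int Id; public string Name; }
public class Order { public int CustomerId; public decimal Total; }

public static class CrossDbJoin
{
    //customers and orders each come from a separate SqlConnection,
    //one per SQL Azure database (the data access code is omitted)
    public static IEnumerable<string> JoinInApplication(
        IList<Customer> customers, IList<Order> orders)
    {
        return from o in orders
               join c in customers on o.CustomerId equals c.Id
               select string.Format("{0}: {1}", c.Name, o.Total);
    }
}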

Conclusion

Personally I would much rather settle for bringing all my similar tables that are likely to be used within the same queries together into one database so there is no longer a need for cross database solutions. This makes for cleaner application code and more efficient use of your resources. However, if this option is not available to you then perhaps the second option may have to be the one you choose.

Surely the SQL Azure team will address this issue soon and your cross database code can stay clean! Although, in fairness, cross database querying isn't available in Entity Framework yet either (though it is workaround-able). I wonder why this is so difficult for Microsoft? It just goes to show you should always try to combine similar DBs in any database design where possible.


Monday 7 November 2011

"The Web Platform Installer could not start" fix

I was trying to download the Azure SDK version 1.5 and was experiencing issues preventing the install because the Web Platform Installer could not load up and run properly. It instead opened an error message saying "The Web Platform Installer could not start. Please report the following error on the Web Platform Installer Forum. http:/forums.iis.net/1155.aspx".

The problem was that the Web Platform Installer needed to update but could not seem to manage it. This was causing the error to display and then crash without allowing me to install the original Azure SDK I was actually trying to install.

Microsoft Fix It fixed it

Microsoft Fix It is a tool from Microsoft that automatically finds and fixes common problems; the particular one you want in this scenario is "Diagnose and fix program installing and uninstalling problems automatically".

To do this download it and use it as follows:

  • When the option to select "Detect problems and let me select the fixes to apply." appears, do that.
  • Say you are having a problem with "Installing".
  • After a short wait, scroll down to find "Microsoft Web Platform Installer 3.0" (or whichever version number you have) and select it.
  • Next, click "Yes, try uninstall", then click next and follow the instructions.

This should solve your issue with the Microsoft Web Platform Installer and allow you to install the Azure SDK or whatever it is you're trying to install.


Wednesday 26 October 2011

Remote development and staging environments from Google Analytics

I have created a new site which is currently only getting a low amount of traffic. There is a beta version of the site live at the moment but it is still in development too. My problem is that Google Analytics is still tracking page views on my local machine during development because the Google Analytics tracking code is on the local version too.

I do not want any hits that are a result of me testing my development site or staging site to count towards the total page views and unique visits recorded in Google Analytics. Since there are only low amounts of traffic at the moment my development will dramatically skew the analytics results.

I tried a few ways to control the analytics recordings by using filters in the Google Analytics dashboard, but made some mistakes which destroyed all tracking before I finally got it right. Whoops... So, to make sure you get it right first time, here is how to do it.

How to filter page views from your local or development site

  1. Go to your Google Analytics dashboard by clicking on "Analytics Settings"
  2. Click "Filter Manager »"
  3. Click "Add Filter"
  4. Name the filter
  5. Select the "Custom filter" radio button
  6. Select "Include" radio button
  7. Choose "Hostname" as your Filter Field
  8. And type in "^www\.mysite\.com$" as your Filter Pattern (without the quotes)
  9. Select case-sensitivity. I chose "No"
  10. Add your website profile to the selected website profiles
  11. Click "Save Changes" button

And you're done. This will stop sites like local.www.mysite.com or 127.1.1.1 ever affecting your analytics. It will only register when the host name is exactly www.mysite.com. I hope that helps you get it right first time so you don't accidentally take your analytics down like I did!


Thursday 20 October 2011

Unable to remove directory. Access to the path 'mswasri.dll' is denied when packaging an Azure project

When trying to build a package using the Azure SDK built into Visual Studio 2010 I sometimes get the error message "Unable to remove directory "bin\Release\CloudPackage.csx\". Access to the path 'mswasri.dll' is denied."

This stopped me from being able to build the Azure cloud package ready for deployment. I tried changing my cloud package .csx file to read-only but it just changed back. I tried deleting it but it was in use.

This gave me a clue. I run my local copy of my Azure site through my local IIS. I think this was locking the site and thus preventing it from being packaged.

Solution

The way to solve this problem that I found to work best was to find the Application Pool that is running the web application in my local development environment and stop it. This allows me to successfully package my web application ready for deployment to the Azure cloud!


Monday 12 September 2011

Azure TableServiceContext does not contain CreateQuery

I have done it loads of times before, but this time when trying to call CreateQuery, AddObject, DeleteObject, UpdateObject or SaveChanges on my TableServiceContext I got an error saying "'Microsoft.WindowsAzure.StorageClient.TableStorageServiceContext' does not contain a definition for 'CreateQuery'".

I keep forgetting that as well as including the Microsoft.WindowsAzure.StorageClient assembly I also need to include System.Data.Services.Client.

This is because, although TableStorageServiceContext comes in Microsoft.WindowsAzure.StorageClient.dll, it inherits from DataServiceContext which is part of another assembly, System.Data.Services.Client.dll. So this must be included in your Azure storage project.


Friday 9 September 2011

Not running in a hosted service or the Development Fabric - Azure

I got an InvalidOperationException with the message "Not running in a hosted service or the Development Fabric" while trying to access my Azure site hosted on my local IIS. I have encountered this before and solved it, but this time it confused me further, so I thought I should document it.

The cause of this issue is that you are not running your Compute Emulator or your Storage Emulator. You can get these when you download and install the Windows Azure SDK (currently at v1.4). Once you have these you must have them running and, according to Michael Collier's blog post, they must be running as administrator too.

This can be done in two ways:

  • Going to Start > All Programs > Windows Azure SDK v1.4 > Compute Emulator and run it as administrator (which I can't find how to do).
  • Run the site in debug mode. This will cause the emulators to start running (since Visual Studio is running as administrator). Your IIS hosted site will then also be able to run.

I found the first way didn't work, but that was because I didn't know I had to run them as administrator before I read Collier's post. However, having read it, I still can't figure out how to run it as administrator without using Visual Studio.

So usually I hit F5 to start debugging and the 127.0.0.1:{random port} page starts up. I then close that window and continue using my IIS hosted site now that the emulators are running as administrator.

I have found one further complication though on the odd occasion. It seems that if you attempted to run your site before you had the emulators running, got the error and then started the emulators, you would still get the error. When this happens I found if I restart my site and application pool in IIS manager it will begin to work with the already running emulators correctly.

Conclusion

I now believe that you must have your Azure Compute Emulator(s) running as administrator before your site and application pool start up to run an Azure site successfully in IIS in a development environment.


Thursday 8 September 2011

MD5 encryption in .NET

There is no set up needed for this one.

MD5 hash encryption

public static string Md5Encryption(string dataToEncrypt)
{
    //note: MD5 is a one-way hash, so there is no matching decryption method
    //Encoding.Default is machine dependent; prefer Encoding.UTF8 if hashes must match across machines
    var bytes = Encoding.Default.GetBytes(dataToEncrypt);
    using (var md5 = new MD5CryptoServiceProvider())
    {
        var cipher = md5.ComputeHash(bytes);
        return Convert.ToBase64String(cipher);
    }
}
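
A quick usage sketch; note there is no decryption counterpart because the hash is one-way:

var hashed = Md5Encryption("p@ssw0rd"); //a 24-character Base64 string (the 16 hash bytes encoded)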


Triple DES encryption and decryption with .NET

TripleDes essentials

Start your class like this:
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

namespace Utils
{
    public static class EncryptionHelper
    {
        //24-byte key: Triple DES uses three 8-byte DES keys (demo values - substitute your own secret)
        private static byte[] key = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
                                      13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 };
        //8-byte IV to match Triple DES's 64-bit block size
        private static byte[] iv8Bit = { 1, 2, 3, 4, 5, 6, 7, 8 };

TripleDes Encryption

Add the following method for encrypting with Triple DES:
public static string TripleDesEncryption(string dataToEncrypt)
{
    //UTF8 must mirror the Encoding.UTF8.GetString call in TripleDesDecryption below
    var bytes = Encoding.UTF8.GetBytes(dataToEncrypt);
    using (var tripleDes = new TripleDESCryptoServiceProvider())
    using (var ms = new MemoryStream())
    using (var encryptor = tripleDes.CreateEncryptor(key, iv8Bit))
    using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
    {
        cs.Write(bytes, 0, bytes.Length);
        cs.FlushFinalBlock();
        var cipher = ms.ToArray();
        return Convert.ToBase64String(cipher);
    }
}

Triple DES Decryption

The next method will allow decryption of your data
public static string TripleDesDecryption(string dataToDecrypt)
{
    var bytes = Convert.FromBase64String(dataToDecrypt);
    using (var tripleDes = new TripleDESCryptoServiceProvider())
    using (var ms = new MemoryStream())
    using (var decryptor = tripleDes.CreateDecryptor(key, iv8Bit))
    using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Write))
    {
        //writing the cipher bytes through the CryptoStream leaves the plain bytes in ms
        cs.Write(bytes, 0, bytes.Length);
        cs.FlushFinalBlock();
        var plain = ms.ToArray();
        return Encoding.UTF8.GetString(plain);
    }
}
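
For reference, a round trip with the two methods above looks like this:

var cipherText = EncryptionHelper.TripleDesEncryption("my secret");
var plainText = EncryptionHelper.TripleDesDecryption(cipherText); //back to "my secret"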


AES encryption and decryption in .NET

AES essentials

Start your class like this:
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

namespace Utils
{
    public static class EncryptionHelper
    {
        //24-byte key, which gives AES-192 (demo values - substitute your own secret)
        private static byte[] key = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
                                      13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 };
        //16-byte IV to match AES's 128-bit block size
        private static byte[] iv16Bit = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };

AES Encryption

Add the following method for encrypting with AES:
public static string AesEncryption(string dataToEncrypt)
{
    //UTF8 must mirror the Encoding.UTF8.GetString call in AesDecryption below
    var bytes = Encoding.UTF8.GetBytes(dataToEncrypt);
    using (var aes = new AesCryptoServiceProvider())
    using (var ms = new MemoryStream())
    using (var encryptor = aes.CreateEncryptor(key, iv16Bit))
    using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
    {
        cs.Write(bytes, 0, bytes.Length);
        cs.FlushFinalBlock();
        var cipher = ms.ToArray();
        return Convert.ToBase64String(cipher);
    }
}

AES Decryption

The next method will allow decryption of your data
public static string AesDecryption(string dataToDecrypt)
{
    var bytes = Convert.FromBase64String(dataToDecrypt);
    using (var aes = new AesCryptoServiceProvider())
    using (var ms = new MemoryStream())
    using (var decryptor = aes.CreateDecryptor(key, iv16Bit))
    using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Write))
    {
        //writing the cipher bytes through the CryptoStream leaves the plain bytes in ms
        cs.Write(bytes, 0, bytes.Length);
        cs.FlushFinalBlock();
        var plain = ms.ToArray();
        return Encoding.UTF8.GetString(plain);
    }
}
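
For reference, a round trip with the two methods above looks like this:

var cipherText = EncryptionHelper.AesEncryption("my secret");
var plainText = EncryptionHelper.AesDecryption(cipherText); //back to "my secret"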


Thursday 25 August 2011

Azure Table Storage One of the request inputs is out of range

While trying to insert an entity into a Windows Azure Table Storage table I got the rather ambiguous "An error occurred while processing this request." error message from a System.Data.Services.Client.DataServiceRequestException. The inner exception held a message of "One of the request inputs is out of range."

The error message wasn't terribly helpful, but I finally found what was causing the problem: my RowKey. The RowKey property that comes from the TableServiceEntity abstract class has specific requirements which I was unintentionally breaching.

My RowKey was calculated from another property of my entity. In this particular instance the string included a '/' character, which Table Storage does not allow in RowKeys.

Solution

When calculating the RowKey from the other property I am now removing the special characters that Azure Storage does not allow: the forward slash (/), backslash (\), number sign (#) and question mark (?) characters.

Old code:
private string productName;
public string ProductName
{
   get { return productName; }
   //setting the RowKey and productName to the same value in one go
   set { RowKey = productName = value; }
}
New code:
private string productName;
public string ProductName
{
   get { return productName; }
   //strip the characters Table Storage disallows, including backslash (requires using System.Text.RegularExpressions)
   set { RowKey = Regex.Replace(productName = value, @"[ \\/?#]", ""); }
}
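
As a quick illustration of the new setter (with hypothetical values):

var product = new ProductEntity();      //a TableServiceEntity subclass containing the property above
product.ProductName = "Fruit/Veg #Box"; //RowKey becomes "FruitVegBox"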

I actually already knew about this from reading Apress's Windows Azure Platform, but the confusing error message threw me off on a tangent; I hope it hasn't done the same to anyone else.


Saturday 13 August 2011

ASP.NET MVC Checkbox has a hidden input too

I noticed strange behaviour whilst using the ASP.NET MVC CheckBox HTML helper. The field, which was part of a GET form, was creating a query string with the same field in it twice. When the box was checked one parameter was true and the other was false.

Say I have a boolean field, MyField, that I am creating a checkbox for. I would write this: @Html.CheckBox("MyField"). You would expect this to output a single check box, but if you actually look at the HTML that is generated you will notice that it also creates a hidden field, as the snippet below shows.
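
The generated markup looks something like this (an illustration; the exact attribute order may vary):

<input id="MyField" name="MyField" type="checkbox" value="true" />
<input name="MyField" type="hidden" value="false" />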

I found out that the reason for this is written within the ASP.NET MVC source code:
if (inputType == InputType.CheckBox) {
    // Render an additional <input type="hidden"/> for checkboxes. This
    // addresses scenarios where unchecked checkboxes are not sent in the request.
    // Sending a hidden input makes it possible to know that the checkbox was present
    // on the page when the request was submitted.
    StringBuilder inputItemBuilder = new StringBuilder();
    inputItemBuilder.Append(tagBuilder.ToString(TagRenderMode.SelfClosing));

    TagBuilder hiddenInput = new TagBuilder("input");
    hiddenInput.MergeAttribute("type", HtmlHelper.GetInputTypeString(InputType.Hidden));
    hiddenInput.MergeAttribute("name", name);
    hiddenInput.MergeAttribute("value", "false");
    inputItemBuilder.Append(hiddenInput.ToString(TagRenderMode.SelfClosing));
    return inputItemBuilder.ToString();
}

I appreciate the reasoning behind this, but in my scenario MyField is part of a ViewModel that is being sent with the form, and of course boolean values are false by default, so this is wasted in my scenario.

Another reason I do not like this is that, because I am using the GET method on my form, the user will see this oddity in the query string. Worse still, they may be a developer and judge my code as rubbish, not knowing it is the doing of ASP.NET MVC. I can't have that! ;)

Solution

Simple solution really. Write the HTML you want in HTML and forget about the HtmlHelper:
<input type="checkbox" name="MyField" value="true" id="MyField"
@Html.Raw((Model.MyField) ? "checked=\"checked\"" : "") />

Remember the value="true", because the default value sent is "on" if it is checked, which obviously can't be parsed to a Boolean.

Obviously, this doesn't look that clean but I will continue to do this on forms that use the GET method.


Thursday 21 July 2011

RazorEngine TemplateCompilationException Unable to compile template

When using the RazorEngine for the first time for email templating I ran into a "TemplateCompilationException" with the error message "Unable to compile template. Check Errors list for details." Looking further into the error I found further details "error CS0103: The name 'model' does not exist in the current context."

It turns out that your Razor views are not to be written in exactly the same way as when running within an ASP.NET web context. I suppose I should have guessed this from the way I was used to declaring the model for the WebFormsViewEngine.

Solution

Instead of using the @model declaration at the top I should have been using @inherits. An example:

Regular Razor view for web:
@model Models.ContactDetailsViewModel
<!DOCTYPE html>
<html>
 <body>
  <div>
   <p>Hello @Model.FullName,</p>
   <p>We will call you at @Model.CallTime on @Model.PhoneNumber</p>
   <p>Thanks</p>
  </div>
 </body>
</html>

Razor view for use as a template:
@inherits RazorEngine.Templating.TemplateBase<Models.ContactDetailsViewModel>
<!DOCTYPE html>
<html>
 <body>
  <div>
   <p>Hello @Model.FullName,</p>
   <p>We will call you at @Model.CallTime on @Model.PhoneNumber</p>
   <p>Thanks</p>
  </div>
 </body>
</html>
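
For completeness, rendering this template with RazorEngine looked something like the line below at the time of writing (a sketch using the static Razor.Parse API; templateString holds the template text and model is a populated ContactDetailsViewModel):

var html = Razor.Parse(templateString, model);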

Hope this helps and well done for using such an awesome feature of the RazorEngine!


Wednesday 6 July 2011

Custom Model Binder for binding drop downs into a DateTime in MVC

I have a nullable DateTime property in my model that I want to bind painlessly with the three drop down select boxes I have for Day, Month and Year respectively.

First thing to do is to make a shiny new custom model binder. I have done mine by inheriting from DefaultModelBinder (you will need to reference System.Web.Mvc to access it).
using System;
using System.ComponentModel;
using System.Web.Mvc;

namespace Infrastructure.OFP.ModelBinders
{
    public class MyUserModelBinder : DefaultModelBinder
    {
        protected override void BindProperty(ControllerContext controllerContext,
            ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor)
        {
            if (propertyDescriptor.PropertyType == typeof(DateTime?)
                && propertyDescriptor.Name == "DateOfBirth")
            {
                var request = controllerContext.HttpContext.Request;
                var prefix = propertyDescriptor.Name;
    
                //remember I am a Brit so this will give me dd/mm/yyyy format - this may not work for you :)
                //also, since this is a binder specific to one particular form, the hardcoding of
                //MyUser.DateOfBirth is suitable
                //AND... I am using Value because I am using a Nullable
                var date = string.Format("{0}/{1}/{2}",
                           request["MyUser.DateOfBirth.Value.Day"],
                           request["MyUser.DateOfBirth.Value.Month"],
                           request["MyUser.DateOfBirth.Value.Year"]);

                DateTime dateOfBirth;
                if (DateTime.TryParse(date, out dateOfBirth))
                {
                    SetProperty(controllerContext, bindingContext,
                                propertyDescriptor, dateOfBirth);
                    return;
                }
                else
                {
                    bindingContext.ModelState.AddModelError("DateOfBirth",
                           "Date was not recognised");
                    return;
                }
            }
            base.BindProperty(controllerContext, bindingContext, propertyDescriptor);
        }
    }
}

This new Model Binder will need to be registered to be used when binding a MyUser object. You do this in the Global.asax Application_Start() like so:
protected void Application_Start()
{
    ModelBinders.Binders[typeof(MyUser)] = new MyUserModelBinder();
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
}

For reference, the drop downs I created to populate the nullable DateTime property DateOfBirth are as follows:
@Html.DropDownListFor(m => m.MyUser.DateOfBirth.Value.Day,
                      new SelectList(Model.Days, "Key", "Value"))
@Html.DropDownListFor(m => m.MyUser.DateOfBirth.Value.Month,
                      new SelectList(Model.Months, "Key", "Value"))
@Html.DropDownListFor(m => m.MyUser.DateOfBirth.Value.Year,
                      new SelectList(Model.Years, "Key", "Value"))
@Html.ValidationMessageFor(m => m.MyUser.DateOfBirth)

This works as I expected it to. It validates the property as expected using the DataAnnotations that I have defined for it.


Monday 20 June 2011

Azure SDK enable IIS 7.0 with ASP.NET support

Trying to install Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio on Windows 7 running IIS 7.5 throws an error that states, "... enable internet information services 7.0 with asp.net support ..."

I do not have IIS 7.0; I am using IIS 7.5. Surely the Windows Azure SDK and Windows Azure Tools are compatible with 7.5 though? Well, they are! So why do I need IIS 7.0?

Well, you don't. The clue is in the message "with ASP.NET support." I needed to enable a Windows feature in Control Panel to do this.

Enabling IIS 7+ with ASP.NET

Go to Control Panel > Programs > Turn Windows features on or off.

Then find Internet Information Services. Expand it, then go to World Wide Web Services, then Application Development Features, and check ASP.NET. This will check a few other dependent features, and after a short wait you should be able to run the installation for the Windows Azure SDK again.


Tuesday 14 June 2011

Two bugs in ASP.NET MVC 3 and a workaround for both

So I spent an hour today arsing about with a couple of ASP.NET MVC 3 bugs. One was a routing issue that caused it to act differently to MVC 2. The second was a FormsAuthentication issue that insisted on sending me to /Account/Login.

Amazing how these crept in really, given that it was community tested to death by such a massive ASP.NET MVC following; it is a wonder they weren't weeded out and fixed before RTM. Oh well, don't pretend you don't like a challenge.

Routing doesn't work the same in MVC3 from MVC2

Here is an example of this bug in action

My current route is:
routes.MapRoute("groups", "groups/{groupSlug}/{action}/{id}/{urlTitle}",
                new
                {
                    controller = "groups",
                    groupSlug = UrlParameter.Optional,
                    action = "index",
                    id = UrlParameter.Optional,
                    urlTitle = UrlParameter.Optional
                });

I am using this route in an ActionLink like this (well obviously I've changed it for clarity - but I was totally using T4MVC!):
<%: Html.ActionLink(item.Group.Title, "index", new { groupSlug = "something" })%>

In MVC2 that produces the URL /groups/something. But in MVC3 it produces the URL /groups?groupSlug=something.

MVC3 Routing UrlParameter.Optional workaround

I struggled with this for a good few hours, many a time exclaiming, "my route IS there! What the hell is wrong with you?!" I took it apart piece by piece to discover that MVC3 no longer likes multiple UrlParameter.Optionals in a row.

A rather ugly workaround is to put in a new route for each of these problem routes, with no reference to the trailing UrlParameter.Optionals. So AFTER the above route I need to put in another route like this:
//workaround for MVC3 bug
routes.MapRoute("groupsFix", "groups/{groupSlug}/{action}",
                new
                {
                    controller = "groups",
                    groupSlug = UrlParameter.Optional,
                    action = "index"
                });

Since writing this I have discovered it is a known issue and Phil Haack has a much better explanation than me! Obviously.

FormsAuthentication.LoginUrl is always /Account/Login

I have set up my forms authentication in web.config to register the LoginUrl as "~/login", but since upgrading to MVC3 it has decided to ignore that, and FormsAuthentication.LoginUrl now returns "~/Account/LogIn" regardless of what I set in web.config.

There is a workaround for this though. You simply need to add this line to your appSettings:

<add key="loginUrl" value="~/login" />

and everything works how it used to. Strangeness indeed.

Conclusion

Live with it. Fix up your errors as explained above and continue staying up to speed with ASP.NET MVC. It's worth it if even just for Razor alone.


Monday 13 June 2011

Adding MVC dependencies to a project for deployment

Deployment of new web applications has been a bit annoying since the birth of ASP.NET MVC. Production servers with .NET 4 or 3.5 installed will still be missing key assemblies such as System.Web.Mvc.dll. This will cause errors such as "Could not load file or assembly 'System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies" and "Could not load file or assembly 'System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies." when your project is deployed to production.

It is confusing for two reasons. Why does it work on your development machine? Well, when you install all the tools required to make an MVC application you get all the necessary assemblies with it. And which assemblies are your production servers missing? Well, there were 3 in ASP.NET MVC 1 and 1 in ASP.NET MVC 2, but now with ASP.NET MVC 3 + Razor there are loads. I've found out which ones I actually need before, but it is still confusing.

With Visual Studio 2010 SP1 there is a new feature that does all the hard work for you. You can right click your ASP.NET MVC web application project and select 'Add Deployable Dependencies...':

Then you select the type of features you are using so Visual Studio can decide which references are needed:


It will then create a _bin_deployableAssemblies directory for you with all the assemblies included:


Now if you publish using Web Deploy it will include those files as expected and your deployed solution will be happy once more.

Important note:
If you are not using 'Web Deploy' as your publish method though, this will NOT work. You will need to reference the assemblies and make them bin-deployable yourself.


Wednesday 8 June 2011

Why WatiN tests are slow with the IE browser

I found opening and closing of the IE browser in WatiN a massive overhead when I was having to dispose of IE at the end of each of NUnit's tests. I found a way though to speed the whole process up.

I really wanted to open one instance of IE in my NUnit TestFixture and use that for the duration. This would save the huge cost of opening and disposing of WatiN's IE browser. As a side note, it was a lot quicker in Firefox, but since most users use IE it is best to stick to that!

There were a couple of hurdles to jump before I could use just one IE browser for all my NUnit TestFixture's WatiN tests though.

1. Initialise and Dispose of IE at either end of the TestFixture

This is fairly obvious if you are familiar with NUnit. You just need to utilise the TestFixtureSetUp and TestFixtureTearDown attributes to initialise and dispose of your IE browser like so:
[TestFixture]
public class MyTests : IDisposable
{
    IE Browser;
 
    [TestFixtureSetUp]
    public void Init()
    {
        Browser = new IE();
    }

    [TestFixtureTearDown]
    public void Dispose()
    {
        if (Browser != null)
        {
            Browser.Dispose();
        }
    }

    ...

TestFixtureTearDown will run regardless of any failed assertions or even exceptions, so you can be confident that the browser will close on any unmanned CI machines.

2. Solving the weird STA problem (in NUnit)

You need to add to your app.config to ensure the tests run in a single-threaded apartment (STA) so that the non-thread-safe Internet Explorer browser can run safely.

I have written separately about how to solve the STA threading problem in Internet Explorer for WatiN.

The overhead of opening and closing the Internet Explorer browser should now be reduced to a one-off. I have seen the test times of my NUnit powered WatiN tests reduce from 12 mins down to just 51 secs!


ThreadStateException thrown in WatiN with Internet Explorer

I was using NUnit to Assert some responses from a suite of WatiN tests whilst using the IE browser. Running them caused a ThreadStateException with an error message of 'The CurrentThread needs to have its ApartmentState set to ApartmentState.STA to be able to automate Internet Explorer'

Internet Explorer is not thread safe, so WatiN should use a single-threaded apartment (STA). To ensure you abide by this, WatiN will throw a ThreadStateException if it is not. This can be rectified by setting the ApartmentState to ApartmentState.STA, as the error message suggests.

How to set your CurrentThread to ApartmentState.STA for WatiN in NUnit

If you haven't already, create an App.config in your NUnit testing assembly. You will need to add the following to it:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
   <sectionGroup name="NUnit">
    <section name="TestRunner" type="System.Configuration.NameValueSectionHandler"/>
   </sectionGroup>
  </configSections>
  <NUnit>
   <TestRunner>
    <!-- Valid values are STA,MTA. Others ignored. -->
    <add key="ApartmentState" value="STA" />
   </TestRunner>
  </NUnit>
</configuration>

If you are having problems with different runners other than NUnit you can find tips on how to set STA on them too.


Thursday 19 May 2011

Image resizing, cropping and compression using .NET

If you need to resize an image, crop an image or compress an image, or even do all three, you will find you can do all of those things using the .NET framework's System.Drawing classes. In this tutorial I will talk you through the steps required to do all three and transform a large image into an image fit for your website.

This is the example scenario. You have been asked to make a page that enables a user to upload any image they want so that it can be manipulated to ensure it is 200px x 200px and compressed to a smaller file size. I won't talk you through how to make the uploading part as there are a plethora of blogs discussing that already. What I will discuss is how to resize, crop and compress that image.
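
Everything below leans on System.Drawing, so for reference these are the namespaces the finished method needs:

using System;
using System.Drawing;
using System.Drawing.Drawing2D; //InterpolationMode
using System.Drawing.Imaging;   //ImageFormat, ImageCodecInfo, Encoder, EncoderParameters
using System.IO;
using System.Linq;              //for .First() over the image encoders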

Say you have an image that is 1000px x 800px. There are 3 main steps to converting it to a compressed 200px x 200px image:

Resizing an image

To resize it the key is that it needs to keep the same aspect ratio whilst being a lot smaller.

So the expected outcome of this stage will be that the shortest of the two dimensions will be 200px, the other will be larger and the aspect ratio will remain the same. Begin code:
private byte[] GetCroppedImage(byte[] originalBytes, Size size, ImageFormat format)
{
    using (var streamOriginal = new MemoryStream(originalBytes))
    using (var imgOriginal = Image.FromStream(streamOriginal))
    {
        //get original width and height of the incoming image
        var originalWidth = imgOriginal.Width; // 1000
        var originalHeight = imgOriginal.Height; // 800

        //get the percentage difference in size of the dimension that will change the least
        var percWidth = ((float)size.Width / (float)originalWidth); // 0.2
        var percHeight = ((float)size.Height / (float)originalHeight); // 0.25
        var percentage = Math.Max(percHeight, percWidth); // 0.25

        //get the ideal width and height for the resize (to the next whole number)
        var width = (int)Math.Max(originalWidth * percentage, size.Width); // 250
        var height = (int)Math.Max(originalHeight * percentage, size.Height); // 200

        //actually resize it
        using (var resizedBmp = new Bitmap(width, height))
        {
            using (var graphics = Graphics.FromImage((Image)resizedBmp))
            {
                graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                graphics.DrawImage(imgOriginal, 0, 0, width, height);
            }
        }
    }
}

Cropping an image

After the last step you are left with an image that is 250px x 200px. As you will notice, that is still an aspect ratio of 5:4, so it does not look squashed. However, this still isn't the right size, so you will now need to crop it.

You are now intending to cut off the excess from both sides to reduce the width whilst leaving the height the same. This will leave you with a 200px x 200px image. Code on:
private byte[] GetCroppedImage(byte[] originalBytes, Size size, ImageFormat format)
{
    using (var streamOriginal = new MemoryStream(originalBytes))
    using (var imgOriginal = Image.FromStream(streamOriginal))
    {
        //get original width and height of the incoming image
        var originalWidth = imgOriginal.Width; // 1000
        var originalHeight = imgOriginal.Height; // 800

        //get the percentage difference in size of the dimension that will change the least
        var percWidth = ((float)size.Width / (float)originalWidth); // 0.2
        var percHeight = ((float)size.Height / (float)originalHeight); // 0.25
        var percentage = Math.Max(percHeight, percWidth); // 0.25

        //get the ideal width and height for the resize (to the next whole number)
        var width = (int)Math.Max(originalWidth * percentage, size.Width); // 250
        var height = (int)Math.Max(originalHeight * percentage, size.Height); // 200

        //actually resize it
        using (var resizedBmp = new Bitmap(width, height))
        {
            using (var graphics = Graphics.FromImage((Image)resizedBmp))
            {
                graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                graphics.DrawImage(imgOriginal, 0, 0, width, height);
            }

            //work out the coordinates of the top left pixel for cropping
            var x = (width - size.Width) / 2; // 25
            var y = (height - size.Height) / 2; // 0

            //create the cropping rectangle
            var rectangle = new Rectangle(x, y, size.Width, size.Height); // 25, 0, 200, 200

            //crop
            using (var croppedBmp = resizedBmp.Clone(rectangle, resizedBmp.PixelFormat))
            {
            }
        }
    }
}

Compressing the image

You now have the image to the correct size but for the web it is not really optimised.

You now want to compress the image down from the current 4 bytes per pixel (32bit) image, which would be ~156KB (for a 200x200 image!). Time to compress:
private byte[] GetCroppedImage(byte[] originalBytes, Size size, ImageFormat format)
{
    using (var streamOriginal = new MemoryStream(originalBytes))
    using (var imgOriginal = Image.FromStream(streamOriginal))
    {
        //get original width and height of the incoming image
        var originalWidth = imgOriginal.Width; // 1000
        var originalHeight = imgOriginal.Height; // 800

        //get the percentage difference in size of the dimension that will change the least
        var percWidth = ((float)size.Width / (float)originalWidth); // 0.2
        var percHeight = ((float)size.Height / (float)originalHeight); // 0.25
        var percentage = Math.Max(percHeight, percWidth); // 0.25

        //get the ideal width and height for the resize (to the next whole number)
        var width = (int)Math.Max(originalWidth * percentage, size.Width); // 250
        var height = (int)Math.Max(originalHeight * percentage, size.Height); // 200

        //actually resize it
        using (var resizedBmp = new Bitmap(width, height))
        {
            using (var graphics = Graphics.FromImage((Image)resizedBmp))
            {
                graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                graphics.DrawImage(imgOriginal, 0, 0, width, height);
            }

            //work out the coordinates of the top left pixel for cropping
            var x = (width - size.Width) / 2; // 25
            var y = (height - size.Height) / 2; // 0

            //create the cropping rectangle
            var rectangle = new Rectangle(x, y, size.Width, size.Height); // 25, 0, 200, 200

            //crop
            using (var croppedBmp = resizedBmp.Clone(rectangle, resizedBmp.PixelFormat))
            using (var ms = new MemoryStream())
            {
                //get the codec needed
                var imgCodec = ImageCodecInfo.GetImageEncoders().First(c => c.FormatID == format.Guid);

                //make a parameter to adjust quality
                var codecParams = new EncoderParameters(1);

                //reduce to quality of 80 (from range of 0 (max compression) to 100 (no compression))
                codecParams.Param[0] = new EncoderParameter(Encoder.Quality, 80L);

                //save to the memorystream - convert it to an array and send it back as a byte[]
                croppedBmp.Save(ms, imgCodec, codecParams);
                return ms.ToArray();
            }
        }
    }
}
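
For reference, a call site might look like this (originalBytes being the uploaded file's contents):

var webReady = GetCroppedImage(originalBytes, new Size(200, 200), ImageFormat.Jpeg);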

There you go. Your massive image is looking good at the correct size whilst being a much leaner version of its former self. Congratulations!

Update

In the resize step, you can see I have set the InterpolationMode of the graphics object to InterpolationMode.HighQualityBicubic before doing the initial resize. I previously had this set to Default, but I noticed that straight diagonal lines in resized images were coming out jagged and didn't look pretty. HighQualityBicubic is meant to be "the best" form of interpolation according to Microsoft, so I sided with that. I can see a big improvement in the quality, so I would recommend it too.


Wednesday 11 May 2011

Why the ICO's new cookie directive is nonsense

As of the 26th May 2011 the law which applies to how you use cookies is changing and quite dramatically too as directed by the ICO. You are now being (asked / advised / ordered) to request your users' permission before storing any "non-essential" cookies on their machine. The given example of an "essential" cookie is one which remembers what is in their shopping cart. So, my thoughts follow.

Chances are you’re here because you heard about the changes to the rules on using cookies and similar technologies for storing information and are now trawling through Google trying to find the second part of the directive saying, "Just kidding!!!!1! lolllzzzzzzzzz, luv ICO xx" but getting progressively concerned as you can't find it anywhere.

I'm also guessing that you can't quite believe what you are hearing. Cookies? The cookies that have been around since the early 90s? The ones that have evolved over almost 20 years? The ones that are now integral to your users' experience, your analytics, your business intelligence and possibly the click-through tracking necessary for your revenue? Yes, those things. Oh, PS: you have about a week to change how you use them.

Development cycle

Does the ICO have any idea how development cycles work? You can't expect every website available in the EU to suddenly change something as integral as cookies that quickly whilst putting all other development work on hold. Also, since this will be such a degradation of user experience there will surely be a "well, you first" effect where no one will comply until their competitors also comply.

Example of why this isn't fair

Website1 has a link promoting Website2. Website2 sells several products. Website2 has promised that if a user who was referred from Website1 purchases a product it will give a proportion of its revenue to Website1.

This system could be implemented by checking the user's referrer on entry to Website2 and then storing it in a cookie. After clicking around the site the user purchases a product, and the cookie is checked to see whether they came from Website1 to determine whether or not to share revenue for that purchase, as in the sketch below.
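To make this concrete, here is a minimal sketch of how such a referral cookie might work (the cookie name, class and site names here are invented purely for illustration):

using System;
using System.Web;

public class ReferrerTracker
{
    private const string CookieName = "RefCookie"; //hypothetical cookie name

    //on entry to Website2: remember where the user came from
    public static void TrackReferrer(HttpContext context)
    {
        var referrer = context.Request.UrlReferrer;
        if (referrer != null && context.Request.Cookies[CookieName] == null)
        {
            var cookie = new HttpCookie(CookieName, referrer.Host);
            cookie.Expires = DateTime.Now.AddDays(30);
            context.Response.Cookies.Add(cookie);
        }
    }

    //on purchase: decide whether Website1 earns its share of the revenue
    public static bool CameFromWebsite1(HttpContext context)
    {
        var cookie = context.Request.Cookies[CookieName];
        return cookie != null && cookie.Value == "website1.com";
    }
}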

In this example you can see how the use of this cookie is fundamental to the monetisation of Website1 and the relationship => traffic => monetisation of Website2. You can also see how that cookie is of absolutely no interest to the user and is therefore "non-essential". So, when asked, would that user accept that cookie? Probably not.

Hypocrisy

So it hasn't come into effect yet, sure, but even still, wouldn't you lead by example if you thought it was such a great idea? Find out by visiting the ICO homepage and viewing your cookies. Looks like analytics cookies tracking my behaviour to me. Update: almost a year later they have updated their site to not send cookies without asking first. I wonder how happy their analysts are with the lack of traffic statistics? Regardless, I should concede this point.

Communication

Can you seriously change the law this significantly with such poor communication? I only found out about this through word of mouth (i.e. Twitter), and then reading the "guidance" made me surprised to discover developers are now expected to be law graduates. I was expecting to read a clear directive, not a rambling 10-page PDF of legal speak.

You can't stop tracking of information

Developers are skilled in the art of workarounds. We'll find a way of storing information about users regardless of this directive. For example, if we're allowed a session cookie, we'll use that cookie to reference a plethora of information about the user in a database, as sketched below. Oh look, the workaround is already there. So what are you achieving now, other than making my DBA less happy?
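To illustrate, a rough sketch of that workaround (the helper class and storage table are invented for this example):

using System.Web;

public static class ServerSideTracker
{
    //record tracking data against the user's session ID on the server,
    //rather than writing it to a cookie on their machine
    public static void Track(HttpContext context, string key, string value)
    {
        var sessionId = context.Session.SessionID; //the one allowed session cookie
        SaveToDatabase(sessionId, key, value);
    }

    private static void SaveToDatabase(string sessionId, string key, string value)
    {
        //e.g. INSERT INTO UserTracking (SessionId, TrackingKey, TrackingValue) ...
        //left unimplemented - this is a sketch, not production code
    }
}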

Conclusion

If this is actually taken seriously you can get used to clicking a lot more often as websites ask whether you'd like to allow cookies or not. Possibly, as some websites lose revenue streams to both broken essential tracking and a poorer user experience, they may struggle to exist.

That or we will all be arrested (including the ICO).

Follow britishdev on Twitter

Friday 6 May 2011

MVC3 deploy - Could not load file or assembly 'System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies

Whilst deploying my newly upgraded ASP.NET MVC 3 web application to the production environment I started receiving a FileNotFoundException with the error message "Could not load file or assembly 'System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified." I have encountered this before so I know it is about bin-deploying that assembly. But where is the System.Web.WebPages.Razor assembly?

Since you have installed ASP.NET MVC 3 on your development machine you have all these MVC 3 assemblies installed in your GAC. Your production machine does not!

So you need to find this file and make it bin-deployable, however this cheeky assembly isn't in your solution, so how can you do that? Well... add it. Go to Add Reference on your root web project (e.g. Web.csproj), find System.Web.WebPages.Razor and add it.

Now right click this assembly in your References directory and click Properties.

You can then make it bin deployable by setting Copy Local to True.

Note this is how to deploy the System.Web.WebPages.Razor assembly with your ASP.NET MVC 3 web application. To deploy a typical setup of an ASP.NET MVC 3 app you will need to do the same for all of the following assemblies:
  • System.Web.Mvc
  • System.Web.Helpers
  • System.Web.Razor
  • System.Web.WebPages
  • System.Web.WebPages.Deployment
  • System.Web.WebPages.Razor as detailed above
  • Microsoft.Web.Infrastructure this will also need a reference added as above
  • WebMatrix.Data this also needs the reference

Of course you only need to add the reference to these assemblies if they are not already there - in most cases they will be. This was mainly a blog post for the slightly more complicated System.Web.WebPages.Razor.dll.

Update

There is an easier way to do this if you are using Visual Studio 2010 Service Pack 1 with Web Deploy as your publish action. You can add deployable dependencies to your ASP.NET MVC project and Visual Studio will choose the necessary assemblies for you. Even easier!

Follow britishdev on Twitter

How to convert an ASP.NET Web Forms web application into ASP.NET MVC 3

So, you have a traditional web forms web application and you are inspired by the new ASP.NET MVC 3, are you? Well, this is definitely doable. In this how-to guide I will tell you, step by step, exactly how to add your new ASP.NET MVC 3 work into your existing ASP.NET web forms project.

1. Add references

Right click on your web root project in Visual Studio (2010 in my case) and add the following references:
  • System.Web.Routing
  • System.Web.Abstractions
  • System.Web.Mvc
  • System.Web.WebPages
  • System.Web.Helpers

2. Configuration

Make your web.config look like this. Obviously not exactly like this, don't just paste this over the top of your glorious web.config. I'm just highlighting the necessary points that need to be fitted into your existing web.config:
<appSettings>
  <add key="webpages:Version" value="1.0.0.0"/>
  <add key="ClientValidationEnabled" value="true"/>
  <add key="UnobtrusiveJavaScriptEnabled" value="true"/>
</appSettings>

<system.web>
  <compilation debug="true" targetFramework="4.0">
    <assemblies>
      <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add assembly="System.Web.Helpers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add assembly="System.Web.WebPages, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
    </assemblies>
  </compilation>
  <pages>
    <namespaces>
      <add namespace="System.Web.Helpers" />
      <add namespace="System.Web.Mvc" />
      <add namespace="System.Web.Mvc.Ajax" />
      <add namespace="System.Web.Mvc.Html" />
      <add namespace="System.Web.Routing" />
      <add namespace="System.Web.WebPages"/>
    </namespaces>
  </pages>
</system.web>
<system.webServer>
  <modules>
    <remove name="UrlRoutingModule-4.0" />
    <add name="UrlRoutingModule-4.0" type="System.Web.Routing.UrlRoutingModule" preCondition="" />
  </modules>
</system.webServer>
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
      <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="3.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>

Update: I have changed my recommendation against using <modules runAllManagedModulesForAllRequests="true"> in favour of adding the UrlRoutingModule-4.0 module with a blank precondition as I explain in my article Don't use runAllManagedModulesForAllRequests="true" for MVC routing.

However, if you disagree with me then you can make the modules section simply look like this:
<system.webServer>
  <modules runAllManagedModulesForAllRequests="true"/>
</system.webServer>

But be warned with that setting (or at least read the above blog post link to understand what you are doing)!

3. Routing

You will need to add a global.asax file to your web application if you haven't already got one. Right click > Add > Add New Item > Global Application Class:


Now in your global.asax add these usings:
using System.Web.Mvc;
using System.Web.Routing;

Now add these lines:
public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    //ignore aspx pages (web forms take care of these)
    routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");

    routes.MapRoute(
        // Route name
        "Default",
        // URL with parameters
        "{controller}/{action}/{id}",
        // Parameter defaults
        new { controller = "home", action = "index", id = "" }
        );
}

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

4. Add some standard folders to the solution

Add a folder named 'Controllers' to your web project.
Add a folder named 'Views' to your web project.
Add a folder named 'Shared' to that Views folder.
Add a web configuration file to the Views folder (web.config).
Open this web.config in the Views folder and make its contents as follows:
<?xml version="1.0"?>

<configuration>
  <configSections>
    <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
      <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
    </sectionGroup>
  </configSections>

  <system.web.webPages.razor>
    <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
    <pages pageBaseType="System.Web.Mvc.WebViewPage">
      <namespaces>
        <add namespace="System.Web.Mvc" />
        <add namespace="System.Web.Mvc.Ajax" />
        <add namespace="System.Web.Mvc.Html" />
        <add namespace="System.Web.Routing" />
      </namespaces>
    </pages>
  </system.web.webPages.razor>

  <appSettings>
    <add key="webpages:Enabled" value="false" />
  </appSettings>

  <system.web>
    <httpHandlers>
      <add path="*" verb="*" type="System.Web.HttpNotFoundHandler"/>
    </httpHandlers>

    <!--
        Enabling request validation in view pages would cause validation to occur
        after the input has already been processed by the controller. By default
        MVC performs request validation before a controller processes the input.
        To change this behavior apply the ValidateInputAttribute to a
        controller or action.
    -->
    <pages
        validateRequest="false"
        pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
        pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
        userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <controls>
        <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />
      </controls>
    </pages>
  </system.web>

  <system.webServer>
    <validation validateIntegratedModeConfiguration="false" />

    <handlers>
      <remove name="BlockViewHandler"/>
      <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
    </handlers>
  </system.webServer>
</configuration>


5. Get Visual Studio 2010 to recognise your MVC skills

If you right click the Controllers folder and select Add, you'll notice there is no Controller class to add. You need to make a change to your web project file.

Using Windows Explorer find the web project file (Web.csproj in my case) and open it in a text editor. You will need to add "{E53F8FEA-EAE0-44A6-8774-FFD645390401};" to the ProjectTypeGuids element.

E.g. mine now looks like this:
<ProjectTypeGuids>{E53F8FEA-EAE0-44A6-8774-FFD645390401};{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

6. Optional: Add a Controller (for a laugh)

Back to Visual Studio 2010 and right click the Controllers folder > Add > Controller... call it HomeController.

using System.Web.Mvc;

namespace Web.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewBag.Message = "This is MVC";
            return View();
        }
    }
}

7. Optional: Add a View (just to show off)

Right click where it says return View(); and select Add View... then click Add.

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Index</title>
</head>
<body>
    <div>
        <%: ViewBag.Message %>
    </div>
</body>
</html>

8. Optional: Try it (just to self-indulge your skills)

Now run your site and go to http://{localhost}/home

I hope this saved you the hours it took me! Please comment if you have any feedback; it would be much appreciated.

More reading here if you would like to use an existing WebForms MasterPage in your new MVC project.

Follow britishdev on Twitter

Friday 15 April 2011

How to connect to a SQL Azure database

Connecting to a SQL Azure database is new, confusing and uncomfortable territory for me, and I found it tricky. However, I managed to get my Entity Framework model connected to it after quite some faffing around. Here's how to do it:

Create your SQL Azure server and database

Go to the Windows Azure portal. Click Database at the bottom. Setting up your SQL server and database is fairly self-explanatory and you're a clever person, so I won't waste your time telling you step by step how to do this.

Keep your portal open since you will be setting the firewall up soon.

Create your entity model

Right click the project that you plan to have your Entity Model in and select Add, then New Item. Then choose an ADO.NET Entity Data Model and name it something. Choose Generate from database and click Next.

Now time for some connection fun.
  1. Click "New Connection..."
  2. Your server name will be servername.database.windows.net, where servername is the one seen in the Windows Azure portal, e.g. ab12abcdef
  3. Choose Use SQL Server Authentication
  4. Username will be DBusername@servername
  5. Password will be the password for this database that goes with the "DBusername" from above
  6. Click Test Connection and....
FAIL! You will get a dialogue box saying that you have been blocked by a firewall, telling you the IP address that was just rejected. Keep all of this open as we will be coming back.

Make an exception in your firewall

Go to your Windows Azure portal and find your database server. You can then add a rule to its firewall. The range can begin and end on the IP address that Visual Studio complained about a minute ago. Give your rule a name and you're done. Well, you will be in about 5 minutes when the exception has propagated.

Connection strings

Back to your connection setup in Visual Studio. Clicking Test Connection should now test successfully (5 minutes after adding the firewall rule). You will now have a connection string similar to this:

Data Source=ab12abcdef.database.windows.net;Initial Catalog=MyDatabase;Persist Security Info=True;User ID=username@ab12abcdef;Password=MyPassword;MultipleActiveResultSets=False
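If you want to sanity check the connection outside of Entity Framework, a quick console sketch like this should do it (using the example values from above - swap in your own server, database and credentials):

using System;
using System.Data.SqlClient;

class SqlAzureConnectionTest
{
    static void Main()
    {
        var connectionString = "Data Source=ab12abcdef.database.windows.net;" +
                               "Initial Catalog=MyDatabase;" +
                               "User ID=username@ab12abcdef;Password=MyPassword;";

        using (var connection = new SqlConnection(connectionString))
        {
            //this will throw if the firewall rule has not propagated yet
            connection.Open();
            Console.WriteLine("Connected to " + connection.DataSource);
        }
    }
}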

Follow britishdev on Twitter

Monday 28 March 2011

How to delete an entity by ID in Entity Framework

It may seem unnatural to delete entities in Entity Framework. In the days of stored procedures I used to just pass an ID, whereas now you need to get the entity before deleting it. It seems inefficient - you're basically making two database hits when surely you could do it in one database call, right? Well, it is difficult but entirely possible.

Since you do not have the entity you will need, you will effectively be updating a detached entity, which is a technique in itself. The trick here is that we make a fake entity that shares the Entity Key (the EF version of a primary key) with the one that needs deleting. Then we mark it as deleted and SaveChanges.

public void DeleteByID(object id)
{
    //make fake entity and set the ID to the id passed in
    var car = new Car();
    car.ID = id;

    using (var context = new MyDataContext())
    {
        //add this entity to object set
        context.CreateObjectSet<Car>().Attach(car);
        //mark it as deleted
        context.ObjectStateManager.ChangeObjectState(car, EntityState.Deleted);
        //save changes - effectively saying delete the entity with this ID
        context.SaveChanges();
    }
}

It is worth noting that I am following my own twist on the Repository pattern for data access so I have an "EntityRepository" that manages everything EF related. This is where I will make my generic "DeleteByID" method. My EntityRepository uses generics to specify the type of entities that are dealt with so this has to be completely reusable by any entity set (TEntity) or object context (TContext).

How to Delete by ID in a generic way

public void DeleteByID(object id)
{
    var item = new TEntity();
    var itemType = item.GetType();

    using (var context = new TContext())
    {
        //get the entity container - e.g. MyDataContext - this will have details of all its entities
        var entityContainer = context.MetadataWorkspace.GetEntityContainer(context.DefaultContainerName, DataSpace.CSpace);
        //get the name of the entity set that the type T belongs to - e.g. "Cars"
        var entitySetName = entityContainer.BaseEntitySets.First(b => b.ElementType.Name == itemType.Name).Name;
        //finding the entity key of this entity - e.g. Car.ID
        var primaryKey = context.CreateEntityKey(entitySetName, item).EntityKeyValues[0];
        //using Reflection to get and then set the property that acts as the entity's key
        itemType.GetProperty(primaryKey.Key).SetValue(item, id, null);
        //add this entity to object set
        context.CreateObjectSet<TEntity>().Attach(item);
        //mark it as deleted
        context.ObjectStateManager.ChangeObjectState(item, System.Data.EntityState.Deleted);
        //save changes - effectively saying delete the entity with this ID
        context.SaveChanges();
    }
}

These both translate to the following SQL:
delete from [dbo].[Cars]
where  ([ID] = 123)
And there it is - a reusable delete by ID function with no SELECT!
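As a usage example, assuming the generic EntityRepository described above (the repository, Car and MyDataContext names are just stand-ins for your own types):

//delete the Car with ID 123 in a single DELETE statement - no SELECT first
var repository = new EntityRepository<Car, MyDataContext>();
repository.DeleteByID(123);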

Follow britishdev on Twitter

Friday 18 March 2011

How to update a detached entity in Entity Framework

If you are using Entity Framework and closing or disposing of your DataContexts (read: database connections) as soon as your Linq to Entities queries have finished, which I believe you should, you will run into the situation where saving changes to an updated entity is ineffectual.

I've discussed before that I think deliberately keeping database connections open is bad in Entity Framework, but when practicing this technique you will find that saving changes to an updated entity no longer works as it did when the DataContext was left open as long as it liked.

This is because when you open a DataContext you are also starting up change tracking. Change tracking keeps a note of whether your entity has been changed. This is important when you call SaveChanges on that DataContext because it needs to know if there have been any changes to save.

If, however, you open your DataContext, do your Linq to Entities query to get an entity and then dispose of your DataContext, your entity is now detached. To SaveChanges you will need to open a new DataContext, which will have no idea that this entity has been changed. So you need a way to update a detached entity.

Update a detached entity with your DataContext

You can do this by using the Attach method to attach the entity to the DataContext (so it is now managing it). After you've attached it, it will be in the "Unchanged" state so nothing will happen on SaveChanges. You can tell it that it has been updated by changing its object state to Modified. Then simply save changes.

Here is an example. (This is not my actual code - it is more explicit to aid understanding):

public void UpdateObject(T obj)
{
    using (var context = new MyDataContext())
    {
        context.CreateObjectSet<T>().Attach(obj);
        context.ObjectStateManager.ChangeObjectState(obj, EntityState.Modified);
        context.SaveChanges();
    }
}

Updating a detached model when using EF Code First

Coming back to my blog years later (Feb 2013) to reuse the above code, I found it didn't work with an Entity Framework Code First model. This is because the context used above is an ObjectContext, which is different from the DbContext used in Code First and so has different properties and methods. Here is the code I used to make this work in EF Code First:

public void UpdateObject(T obj)
{
    using (var context = new MyDataContext())
    {
        context.Set<T>().Attach(obj);
        context.ChangeTracker.Entries<T>().First(e => e.Entity == obj)
                                              .State = EntityState.Modified;
        context.SaveChanges();
    }
}

Follow britishdev on Twitter

Firefox loses Request headers after redirect

Firefox does not remember some request headers after a redirect. This can cause problems with AJAX requests that use the X-Requested-With: XmlHttpRequest header.

I noticed a little problem with Firefox. In my scenario I have added the SEO-optimising IIS URL Rewrite module pattern that redirects URLs containing upper case to a lower case version.

For all browsers, the AJAX request comes in with the X-Requested-With: XmlHttpRequest request header, a 301 status code is returned with the lower-cased location, and that new location is then requested with the X-Requested-With: XmlHttpRequest header intact.

Firefox, however, does not honour this header in its redirect. This causes problems with my ASP.NET MVC code that checks whether the request came via AJAX using IsAjaxRequest(). This method checks if X-Requested-With: XmlHttpRequest is in the headers (or in other exciting places).
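For reference, the manual equivalent of that check looks roughly like this (a sketch of the header test, not the framework's actual implementation):

using System;
using System.Web;

public static class AjaxRequestCheck
{
    //roughly what IsAjaxRequest() inspects: the X-Requested-With header
    //(it also checks the request collections, hence "other exciting places")
    public static bool IsAjax(HttpRequest request)
    {
        return string.Equals(request.Headers["X-Requested-With"],
                             "XMLHttpRequest", StringComparison.OrdinalIgnoreCase);
    }
}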

Work around

I was going to try and do something clever like detecting the browser and this header and then not redirecting under some circumstances, because (brace yourself, SEO consultants) functionality is more important than SEO. Instead I decided to make sure that the one troublesome link was lowercase, and so avoided the redirect.

I just hope this answers your questions if you come across this oddity in Firefox.

Follow britishdev on Twitter

Wednesday 16 March 2011

WCF REST: How to get a Request header

Again WCF REST is proving secretive when it comes to getting at your Request Headers.

You can get at any custom Request headers that you may be expecting via the IncomingMessageProperties off of the OperationContext in the same way as before when you were adding custom Headers to your response.

To do this you can write something like this:
var request = OperationContext.Current.IncomingMessageProperties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
var version = request.Headers["ApiVersion"];

Update:

Actually, there is an easier way I found. Since this is in a REST service it is possible to use WebOperationContext which seems a lot more intuitive and concise:
var version = WebOperationContext.Current.IncomingRequest.Headers["ApiVersion"];

You will then have access to any expected Request Headers as a string.

Follow britishdev on Twitter

WCF REST: How to add a response header

I can't believe how difficult it is to add a response header to a response from a WCF REST service. It is really quite round the houses.

Whatever happened to Response.Headers.Add from the friendly web code I'm so used to... Oh well, after scratching around for ages I finally found how:
var prop = new HttpResponseMessageProperty();
prop.Headers.Add("ApiVersion", "v1.0");
OperationContext.Current.OutgoingMessageProperties.Add(HttpResponseMessageProperty.Name, prop);

I know you're kicking yourself, right? "That's so obvious..." Ridiculous - I'm starting to think WCF REST wasn't made with the developer in mind.

Update:

Actually, there is an easier way I found. Since this is in a REST service it is possible to use WebOperationContext which seems a lot more intuitive and concise:
WebOperationContext.Current.OutgoingResponse.Headers.Add("ApiVersion", "v1.0");

Follow britishdev on Twitter

WCF REST: How to get the requested URL

Finding the Request.Url is a lot different from standard web work when you are in a WCF REST context. It can be surprisingly difficult to get at the URL of the request in your WCF REST service.

I found you can do it with the following code:
var context = OperationContext.Current;
var requestedUrl = context.IncomingMessageHeaders.To.PathAndQuery;

So there you are... enjoy.

Follow britishdev on Twitter

WCF REST: How to get an IP address of a request

It seems to be quite difficult to find the IP address of the request when in the context of a WCF REST service.

I found you can do it with the following code:
var props = OperationContext.Current.IncomingMessageProperties;
var endpointProperty = props[RemoteEndpointMessageProperty.Name] as RemoteEndpointMessageProperty;
if (endpointProperty != null)
{
    var ip = endpointProperty.Address;
}

This seems overly verbose so you may want to consider putting this code into helper methods or a base class for re-use.
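As a sketch of that suggestion, a hypothetical base class could wrap it up like so:

using System.ServiceModel;
using System.ServiceModel.Channels;

public abstract class RestServiceBase
{
    //wraps the verbose lookup so every service can just call GetClientIpAddress()
    protected string GetClientIpAddress()
    {
        var props = OperationContext.Current.IncomingMessageProperties;
        var endpointProperty =
            props[RemoteEndpointMessageProperty.Name] as RemoteEndpointMessageProperty;
        return endpointProperty != null ? endpointProperty.Address : null;
    }
}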

Follow britishdev on Twitter