
Wednesday, 7 December 2016

Reference object with ref keyword on a method. What's the point?

I wondered the other day what the point is in making a method parameter a ref when it is already a reference type. After playing around for a bit I came up with some concise code to illustrate the difference.

The difference is that although you can modify the object that was passed in with both methods, reassigning the reference entirely (i.e. pointing it at a different object) only has an effect outside the scope of the method if the parameter was passed with the ref keyword.

This code should explain better than English:

void Main()
{
 var t1 = new Thing();
 Test(t1);
 t1.Write(); // Chair
 
 var t2 = new Thing();
 Test(ref t2);
 t2.Write(); // Table
}

void Test(Thing t)
{
 t.Name = "Chair";
 t = new Thing { Name = "Table" };
}

void Test(ref Thing t)
{
 t.Name = "Chair";
 t = new Thing { Name = "Table" };
}

class Thing
{
 public string Name { get; set; }
 public void Write() => Console.WriteLine(Name);
}


Tuesday, 22 December 2015

Stop processing OPTIONS requests for CORS in ASP.NET Web API

I was attempting to allow some particular origins to access my ASP.NET Web API from a client side single page application. I was using the EnableCorsAttribute that comes with the Microsoft.AspNet.WebApi.Cors NuGet package.

I managed to set up CORS using the following code in my WebApiConfig:

var origins = ConfigurationManager.AppSettings["AllowedOrigins"];
var cors = new EnableCorsAttribute(origins, "accept,content-type,origin,customId", "GET,POST,PUT");
config.EnableCors(cors);

There is quite a lot to CORS but, essentially, browsers send a pre-flight request, recognisable by its HTTP method, OPTIONS. This asks the application which origins are allowed to access this URL with the attempted headers and HTTP method. Your Web API responds saying which origins, headers and methods are allowed (or with an error). The browser then checks whether the page's origin is one of those allowed and, if so, sends the real request.

The problem I found is that on this initial OPTIONS request my IoC container, Unity, was constructing a whole dependency chain of classes, some of which access the database and some of which check HTTP headers. This was throwing errors, because headers that accompany normal requests are missing from pre-flight requests, and it was hitting the database unnecessarily. So really, I wanted to stop these requests in their tracks whilst still letting them do their intended pre-flight work.

The best way I found to do this was to ignore routes based on an HTTP constraint for "OPTIONS". Basically shove this in your routing:

var constraints = new { httpMethod = new HttpMethodConstraint(HttpMethod.Options) };
config.Routes.IgnoreRoute("OPTIONS", "{*pathInfo}", constraints);
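If you would rather keep the pre-flight handling inside Web API itself, another option (a rough sketch, not something from the original setup) is a message handler that answers OPTIONS requests directly, so they never reach the controllers or the Unity-built dependency chain. The class name and the echoed values below are assumptions based on the EnableCorsAttribute configuration shown earlier.

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class PreflightHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Method == HttpMethod.Options)
        {
            // Answer the pre-flight here: report what is allowed and stop the pipeline
            var response = new HttpResponseMessage(HttpStatusCode.OK);
            response.Headers.Add("Access-Control-Allow-Origin", "https://my-allowed-origin.example");
            response.Headers.Add("Access-Control-Allow-Headers", "accept,content-type,origin,customId");
            response.Headers.Add("Access-Control-Allow-Methods", "GET,POST,PUT");
            return Task.FromResult(response);
        }
        return base.SendAsync(request, cancellationToken);
    }
}

// Registered in WebApiConfig with: config.MessageHandlers.Add(new PreflightHandler());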

More info on Enabling CORS in Web API.


Thursday, 6 August 2015

Outbound IP address from Azure WebJobs or WebSite

I need to find the outbound IP address of an Azure Website so that I can whitelist this IP address in a service I wish to call. However there are concerns with this practice as I will explain.

Azure WebJobs are completely awesome and they should be used more. I have a project, however, that I am using to pre-process a large amount of data. As part of this process, my WebJob will need to call a third-party web service that operates an IP whitelist, so to call it successfully I need to find the IP address of my Azure WebJob.

That is simple enough. I confirmed the IP address using two sources: one is the list of IPs documented by Microsoft (details on how to find yours are here: Azure Outbound IP restrictions); I also wrote a little code to check what a WebJob actually reports, like so:

public async static Task ProcessQueueMessage(
    [QueueTrigger("iptest")] string message,
    TextWriter log)
{
    using (var client = new HttpClient())
    {
        var response = await client.GetAsync("http://ip.appspot.com/");
        var ip = await response.Content.ReadAsStringAsync();
        await log.WriteAsync(ip);
    }
}

Popping something in the "iptest" queue kicked off the job and checking the logs confirmed the WebJobs are in fact consistent with the IP ranges documented.

There is a problem, however: if you read that link you will discover that although the outbound IP is roughly static, it is not unique. You will share it with other users of Azure Websites hosted in the same region and the same scale unit as you. What is a scale unit? Who cares, but there are only 15 of them in the North Europe data centre, for example, so not a lot. Now, how secure do you think whitelisting a shared IP is? Not very!

Workaround

Don't give up hope! The workaround I can see is to ask the service provider not to rely on IP whitelisting alone and to support another form of authentication; an API key sent over SSL would work, for example. Have it as well as IP whitelisting if that makes them happy.
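For illustration only, here is a minimal sketch of what the calling side might look like with an API key sent over SSL; the "X-Api-Key" header name, the config setting and the URL are all made up for this example:

using System.Configuration;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task CallThirdPartyServiceAsync()
{
    using (var client = new HttpClient())
    {
        // Hypothetical API key agreed with the provider, kept in config
        client.DefaultRequestHeaders.Add("X-Api-Key",
            ConfigurationManager.AppSettings["ThirdPartyApiKey"]);

        // HTTPS so the key is not sent in the clear
        var response = await client.GetAsync("https://api.example.com/resource");
        response.EnsureSuccessStatusCode();
    }
}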

If they can't be controlled you can do your own magic. There are proxy providers out there that will provide your calls with a unique static IP address. Try QuotaGuard. Or make your own - if you already have a Cloud Service running in Azure you can proxy the service via that as they can have static and unique outbound IP addresses.


Skipping unit tests is a false economy

You only need unit tests if you write buggy code but why would I write bugs? I don't need them.

For a long time I thought unit tests were just vanity code. Oooh look, I've made this repository unit testable and I've used my fav' IoC container to inject dependencies, so in my tests I mock them to create an elegant suite of unit tests. No one will ever run the tests but it is cool because all the top dev bloggers write about it.

That is probably a lot of people's motive, that and the lead dev told them to. You can tell when people don't fully understand why they are writing unit tests: when they are running short of time to complete their work, the first thing they compromise is the unit tests. Unit tests are always the first to be dropped in a high-pressure environment.

Let me sell unit tests to you

They save you time! How can that be right? They take so long to write and to refactor every time you change your code. Unit tests save you time because you no longer need to trawl through mountains of code working out in your head every possible way you might have caused a bug. You don't need to step through every line of code in your application hunting for unexpected consequences when you have decent unit tests. That saves so much time!

Worse still are the people who wouldn't have trawled through the code to find a potential bug, just committed their code anyway and broke something. Sooner or later it will get noticed and someone is going to spend a very long time tracking it down and then fixing the root cause.

What about deployment time as well? When you come to merge a branch or do a deployment, how long does it take to review every line in the merge or click about every section of the site? 30 minutes? Hours? Or maybe you wouldn't bother, and as mentioned that is when the even harder to find bugs creep in. It is such a waste of time!

Writing unit test from the beginning will slash all this wasted time!

Unit tests should be the first things you write!

This is called TDD (Test Driven Development) and it is the most effective way of ensuring your tests get written as you can't drop them out to save time as they are already written. It also forces you to really think about your code before writing it.

But TDD or not, please PLEASE write them. Lots of them. Use NCrunch, which continuously runs your test suite and shows which lines of code are covered and the state of the tests covering them. Aim for 80%+ code coverage from the beginning and you will save so much time in the future doing all that boring clicking about or line-by-line reviewing of your code merges.

Not writing unit tests is a false economy

And developers are expensive so don't waste their time.

Please send this on to your team including the project manager. Everyone must know the importance of getting unit tests done and written well.


Friday, 17 May 2013

SQL MERGE that changes the ON constraint as it runs

What happens when your MERGE statement INSERTs a row that then satisfies the ON constraint? If a later source row matches a row that was inserted earlier in the same MERGE, will it fall into WHEN MATCHED or WHEN NOT MATCHED?

Let's use an example piece of code to decide what happens. Here I have two table variables created and seeded with some random data:

DECLARE @t1 Table (id int, name varchar(12))
INSERT INTO @t1 VALUES(1, 'hi')

DECLARE @t2 Table (id int, name varchar(12))
INSERT INTO @t2 VALUES(1, 'bye')
INSERT INTO @t2 VALUES(3, 'colin')
INSERT INTO @t2 VALUES(3, 'sarah')

Now I will attempt to MERGE @t2 into @t1. If you have not come across MERGE before it is basically a more efficient way of saying UPDATE if it exists or INSERT if it is new.

MERGE @t1 AS t1
 USING(SELECT * FROM @t2) AS t2
 ON t2.id = t1.id
WHEN MATCHED THEN
 UPDATE SET t1.name = t2.name
WHEN NOT MATCHED THEN
 INSERT(id, name)
 VALUES(t2.id, t2.name);

So, what we are saying is:

  • USING this source of data (SELECT * FROM @t2)
  • ON this decider (matching the ID's to judge if this row already exists)
  • WHEN the ON clause is MATCHED update this row with new values
  • WHEN NOT MATCHED we should INSERT the new row

But, given the values in @t2, what will happen?

  1. The 1st row will match on ID 1 and so will do an UPDATE
  2. The 2nd row doesn't have a match for an ID of 3 so will INSERT
  3. The 3rd row didn't have a match for an ID of 3 before but since the last INSERT it now does. So INSERT or UPDATE?

A SELECT * FROM @t1 will show you that it has INSERTed twice...

id   name
1    bye
3    colin
3    sarah

So watch out for this. The matching between source and destination is decided once at the beginning and not re-evaluated as rows are inserted, so you should make sure the source table being MERGEd is complete in itself. You can do this by using a GROUP BY or DISTINCT on the USING table, whichever is most appropriate for your scenario.


How to solve Introducing FOREIGN KEY constraint may cause cycles or multiple cascade paths

I was writing my Entity Framework 5 Code First data models one day when I received a dramatic sounding error: "Introducing FOREIGN KEY constraint 'FK_dbo.Days_dbo.Weeks_WeekID' on table 'Days' may cause cycles or multiple cascade paths." I was instructed to "Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints." And then simply the message, "Could not create constraint." So what happened with my Foreign Key that makes it cyclic?

First a spot of code that can nicely demonstrate this scenario:

public class Year
{
    public int ID { get; set; }
    public string Name { get; set; }

    public ICollection<Month> Months { get; set; }
}
public class Month
{
    public int ID { get; set; }
    public string Name { get; set; }

    public Year Year { get; set; }
    public ICollection<Day> Days { get; set; }
}
public class Day
{
    public int ID { get; set; }
    public string Name { get; set; }

    public Month Month { get; set; }
    public Week Week { get; set; }
}

//problem time
public class Week
{
    public int ID { get; set; }
    public string Name { get; set; }

    public Year Year { get; set; }
    public ICollection<Day> Days { get; set; }
}

So, let's explain this. A Year has Months and a Month has Days - all is well at this point and it will build and generate all your tables happily. The problem comes when you add the Week class, and the Week property on Day, to bring Weeks into the data model.

By making those Navigation Properties you have implicitly instructed EF Code First to create Foreign Key constraints for you and each of these will have cascading deletes on them. This means, upon deleting a Year, the database will cascade that delete to that Year's Months and in turn those Month's Days. This is to, quite rightly, hold referential integrity; you will not have Yearless Months.

So if you think about it, now that we have added a Week entity that also has a relationship with Days this too will cascade deletes. So as well as the cascading deletes that occur when you delete a Year as I described above, now deleting a Year will also delete its Weeks and those Weeks will delete the Days. But those Days would have already been deleted by the cascade that went via Months. So who wins? Who gets there first? Don't know... that's why you need to design an answer to this conundrum.

Removing the multiple cascade paths

The way I found to solve this issue is to remove one of the cascades; there is nothing wrong with the Foreign Keys themselves, only the multiple cascade paths. So let's remove the cascading delete on the Week -> Days relationship, since that delete will be taken care of by each Month cascading its deletes to its Days.

public class CalenderContext : DbContext
{
    public DbSet<Year> Years{ get; set; }
    public DbSet<Month> Months { get; set; }
    public DbSet<Day> Days { get; set; }
    public DbSet<Week> Weeks { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Day>()
            .HasRequired(d => d.Week)
            .WithMany(w => w.Days)
            .WillCascadeOnDelete(false);

        base.OnModelCreating(modelBuilder);
    }
}

If you already have a ForeignKey field explicitly written into your code model (as I usually do - I removed them from the above example for clarity) you will need to specify this existing Foreign Key in the code just written. So if your Day entity looked like this:

public class Day
{
    public int ID { get; set; }
    public string Name { get; set; }

    [ForeignKey("Month")]
    public int MonthID { get; set; }
    public Month Month { get; set; }

    [ForeignKey("Week")]
    public int WeekID { get; set; }
    public Week Week { get; set; }
}

You will need your OnModelCreating method to look like this:

modelBuilder.Entity<Day>()
    .HasRequired(d => d.Week)
    .WithMany(w => w.Days)
    .HasForeignKey(d => d.WeekID)
    .WillCascadeOnDelete(false);

Have fun deleting safely!


Thursday, 28 February 2013

The request filtering module is configured to deny a request that contains a double escape sequence

In IIS 7.5 I have a site that contains a page that takes an encrypted part of a URL. This encrypted string includes a plus sign '+' which causes IIS to throw a "HTTP Error 404.11 - Not Found" error stating "The request filtering module is configured to deny a request that contains a double escape sequence."

The problem is that a + sign used to be acceptable in earlier versions of IIS so these URLs need to remain for legacy reasons. So, I need to make them allowed again in IIS.

The quick fix

This can be easily achieved with a simple web.config change:

<system.webServer>
    <security>
        <requestFiltering allowDoubleEscaping="true" />
    </security>
</system.webServer>

This allows URLs to contain this plus symbol '+'.

The warning

There are consequences to this which, unsurprisingly, are security related, so please read up on Double Encoding to familiarise yourself with the risk for your situation. If it is a risk to you, maybe the best solution is to redesign those URLs?
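If you do go down the redesign route, one possibility (a sketch only, with made-up method names) is to emit the encrypted bytes in a URL-safe form so the '+' never appears in the first place:

using System.Web;

public static class UrlSafeToken
{
    // Base64 variant that uses '-' and '_' instead of '+' and '/', so no escaping is needed
    public static string Encode(byte[] encryptedBytes)
    {
        return HttpServerUtility.UrlTokenEncode(encryptedBytes);
    }

    public static byte[] Decode(string token)
    {
        return HttpServerUtility.UrlTokenDecode(token);
    }
}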


Saturday, 19 January 2013

Handler “ExtensionlessUrlHandler-Integrated-4.0” has a bad module “ManagedPipelineHandler” in its module list

How to fix the error 'Handler “ExtensionlessUrlHandler-Integrated-4.0” has a bad module “ManagedPipelineHandler” in its module list'.

This error occurred when I moved my ASP.NET 4.5 site from an old Windows 7 machine to a brand new Windows 8 machine running IIS 8.

I was thinking I was in need of an aspnet_regiis -I but this threw back a message at me saying "Start installing ASP.NET (4.0.30319.17929). This option is not supported on this version of the operating system. Administrators should instead install/uninstall ASP.NET 4.5 with IIS8 using the "Turn Windows Features On/Off" dialog, the Server Manager management tool, or the dism.exe command line tool. For more details please see http://go.microsoft.com/fwlink/?LinkID=216771. Finished installing ASP.NET (4.0.30319.17929)."

So I turned to turning Windows features on/off (Win+X then hit F). I had already turned IIS on, but it turned out I hadn't done enough.

Previously, when I had turned IIS on, I had left it with the defaults, but actually I needed more to get ASP.NET 4.5 to run. You need to go to:

  1. Internet Information Services
  2. World Wide Web Services
  3. Application Development Features
  4. Then check ASP.NET 4.5

This will select a few other bits for you and you should be good to go.


Thursday, 17 January 2013

Finding Outlook Web Access (OWA) URL from Outlook 2010

I was struggling to find my Outlook webmail address from Outlook 2010 on my machine so that I could access my work email from home or on my mobile. The OWA URL is hidden quite deep in Outlook 2010, so I have written down the location of the URL so I can remember it next time.

It is rather hidden away, probably because it is assumed you would just ask IT instead of finding out for yourself, but that isn't like you, is it? That's why you're here!

Finding your web mail address in Outlook 2010

  1. Click the File tab near the top left
  2. Click the Info tab
  3. Click the big "Account Settings" button and then "Account Settings" again on the pop out menu
  4. On E-mail tab ensure your address is selected and click the "Change..." button just above it
  5. Click the "More Settings ..." button at the bottom
  6. Go to the Connection tab and click "Exchange Proxy Settings..." button at the bottom
  7. It is the first URL on this window. Something like https://mail.mydomain.com


Monday, 14 January 2013

Web forms button click event not firing in IE

There are probably many reasons why a button would not fire in an ASP.NET web forms application. Some can be fixed by deleting cookies and history; some are to do with when the event is registered. I found a new one though.

In my scenario I had a text box that was submitted with a button next to it. We didn't want to display the button though; instead we wanted the form submitted by pressing the Enter key. The button was therefore hidden with display:none, like this:

<input type="text" id="txtInput" onchange="btnGo_Click" runat="server"/>
<asp:Button style="display:none;" runat="server" OnClick="btnGo_Click" />
<asp:Label ID="lblOutput" runat="server" />

It turns out that this worked just fine in Chrome and Firefox but not in IE9. This is because not all browsers post form fields that are hidden with CSS. Security, possibly? You could see this by looking at the request in Fiddler: there was no value for the button field in IE but there was for the other browsers.

How to submit a form by clicking Enter and hiding the button

The best way I can think of is to hide the button without really hiding it... How? You can push it miles off the page with some CSS like this:

<input type="text" id="txtInput" onchange="btnGo_Click" runat="server"/>
<asp:Button style="position:absolute;left:-9999px;" runat="server" OnClick="btnGo_Click" />
<asp:Label ID="lblOutput" runat="server" />

This will put the button miles off to the left never to be seen by the user.


Friday, 11 January 2013

Add rel="nofollow" to all external links with one line of jQuery

You may want to change all external links on a page to do something different such as add a target="_blank" to each one or add rel="nofollow" to every external link. This post will show you how this can be done in one line of jQuery!

SEO SEO SEO... How tiresome a concept... Anyway, people make their living with it apparently and while that is the case I get all kinds of weird requirements like this. This time I had to add rel="nofollow" to all external links for the latest whim of an SEO consultant.

Since SEO requirements seem to chop and change fairly unpredictably I wanted to add rel="nofollow" to all external links in the quickest and least interfering way possible. I managed it with one line of jQuery, so it is fairly innocuous and I can just remove it if it needs to be undone at some point. The same approach can be used to add target="_blank" too.

Add rel="nofollow" to all external links

$("div.content a[href^='http']:not([href*='mysite.co.uk'])").attr("rel", "follow");

Add target="_blank" to all external links

$("div.content a[href^='http']:not([href*='mysite.co.uk'])").attr("target", "_blank");

Hope this helps you spend as little time on this as possible :)

Update:

I did some research into whether adding a nofollow in this way will work on Google and came to the conclusion that it probably won't. Colin Asquith commented similar thoughts. So this should be borne in mind if you are using it to add rel="nofollow" to links, but technically it is still a good way of going about this or anything similar, like adding target="_blank" for example.


Sunday, 11 November 2012

Upgrading Azure Storage Client Library to v2.0 from 1.7

I upgraded the Azure Storage Client Library to version 2.0 from 1.7 and found a number of differences when using storage. I thought I'd document how I upgraded the more awkward bits for Azure Storage version 2.0.

DownloadByteArray has gone missing

For whatever reason DownloadByteArray has been taken from me. So have DownloadToFile, DownloadText, UploadFromFile, UploadByteArray, and UploadText.

Without too much whinging I'm just going to get on and fix it. This is what was working PERFECTLY FINE in v1.7:

public byte[] GetBytes(string fileName)
{
    var blob = Container.GetBlobReference(fileName);
    return blob.DownloadByteArray();
}

And here is the code modified to account for the fact that DownloadByteArray no longer exists in Azure Storage v2.0:

public byte[] GetBytes(string fileName)
{
    var blob = Container.GetBlockBlobReference(fileName);
    using (var ms = new MemoryStream())
    {
        blob.DownloadToStream(ms);
        ms.Position = 0;
        return ms.ToArray();
    }
}
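The upload direction works the same way. Here is a rough sketch of replacing the removed UploadByteArray, assuming the same Container property as in the code above:

public void PutBytes(string fileName, byte[] data)
{
    var blob = Container.GetBlockBlobReference(fileName);
    using (var ms = new MemoryStream(data))
    {
        // UploadFromStream stands in for the removed UploadByteArray/UploadText helpers
        blob.UploadFromStream(ms);
    }
}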

How to get your CloudStorageAccount

Another apparently random change is that you can't get your storage account info in the same way as you used to. You used to be able to get it like this in Storage Client v1.7:

var storageAccountInfo = CloudStorageAccount.FromConfigurationSetting(configSetting);
var tableStorage = storageAccountInfo.CreateCloudTableClient();

But in Azure Storage v2.0 you must get it like this:

var storageAccountInfo = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting(configSetting));
var tableStorage = storageAccountInfo.CreateCloudTableClient();

Why? Not sure. I have had problems with getting storage account information before, so maybe this resolves that.

What happened to CreateTableIfNotExist?

Again, it's disappeared but who cares.. Oh you do? Right well let's fix that up. So, in Azure Storage Client v1.7 you did this:

var tableStorage = storageAccountInfo.CreateCloudTableClient();
tableStorage.CreateTableIfNotExist(tableName);

But now in Azure Storage Client Library v2.0 you must do this:

var tableStorage = storageAccountInfo.CreateCloudTableClient();
var table = tableStorage.GetTableReference(tableName);
table.CreateIfNotExists();

Attributes seem to have disappeared and LastModifiedUtc has gone

Another random change that possibly doesn't achieve anything other than making you refactor your code. This was my old code from Storage Client Library v1.7:

var blob = BlobService.FetchAttributes(FileName);
if (blob == null || blob.Attributes.Properties.LastModifiedUtc < DateTime.UtcNow.AddHours(-1))
{
    ...
}

But now it should read like this, presumably because someone thought it looks prettier (which it does, in fairness):

var blob = BlobService.FetchAttributes(FileName);
if (blob == null || blob.Properties.LastModified < DateTimeOffset.UtcNow.AddHours(-1))
{
    ...
}

Change your development storage connection string

This is just a straight bug, so that's excellent. I was getting a useless exception stating "The given key was not present in the dictionary" when trying to create a CloudStorageAccount reference. To resolve this, change your development environment connection string from UseDevelopmentStorage=true to UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1 and then it will magically work.

Bitch and moan

Apologies for the whingy nature of this post, I'm quite a fan of Azure but I have wasted about 3-4 hours with this "upgrade" from Azure Storage Client Library 1.7 to 2.0. It's been incredibly frustrating particularly since there seems to be no obvious reason why these changes were made. I just can't believe the amount of breaking changes when I haven't really written that much Azure storage code.

Randomly taking out nice methods like DownloadByteArray and DownloadText is surely a step backwards no? Or randomly renaming CreateIfNotExist() to CreateIfNotExists()... what is the point in that?!

I remember when upgrading to ASP.NET 4 from 3.5, I spent very little time working through breaking changes, and I have 100 times more .NET code than I do Azure Storage code. As well as that, I was well aware of the many improvements with that .NET version update; with this Azure Storage update I have no idea what I'm getting. No matter the improvements, it is just an Azure storage API, and this number of breaking changes, often only for the benefit of syntax niceties, is just unacceptable.

Oh, if you are still in pain doing this I have found a complete list of breaking changes in this update along with minimal explanations here.


Saturday, 27 October 2012

Unexpected "string" keyword after "@" character. Once inside code, you do not need to prefix constructs like "string" with "@"

After upgrading an ASP.NET MVC 3 project to MVC 4 I noticed a change in the Razor parser that threw a Parser Error saying: 'Unexpected "string" keyword after "@" character. Once inside code, you do not need to prefix constructs like "string" with "@"'

Firstly, this had always worked when it was MVC 3 and Razor v1. I may have been getting the syntax wrong all along, but if the syntax allowed it, was it really wrong?

What I was doing was trying to put some server code in a Razor helper with no surrounding HTML tags, like this example:

@helper Currency1000s(int? value)
{
    if(value == null)
    {
        -
    }
    else
    {
        @string.Format("{0:C0}k", value / 1000.0)
    }
}

Interestingly, if I were to replace line 9 with @value all would be fine. Anyway, it is an easy enough fix: you just need to wrap the string.Format in HTML tags or <text> tags, as I did here:

@helper Currency1000s(int? value)
{
    if(value == null)
    {
        -
    }
    else
    {
        <text>@string.Format("{0:C0}k", value / 1000.0)</text>
    }
}


Tuesday, 23 October 2012

Build Visual Studio solutions without Visual Studio

I have a project that has four separate Visual Studio solutions. It is a bit annoying when I do an update from SVN and have to open each solution in Visual Studio just so I can build them. Visual Studio is hardly a lightweight program so surely there's a simpler way?

Most sys admins or developers accustomed to automated builds will probably start to titter at this point, but you can do it easily using NotePad (and the various stuff already installed on your dev machine). I don't want a mega complex system deployed in a special way to a different server using expensive software; I just want to build everything without having to load many Visual Studio instances. So here is how.

Simple way to build all solutions with a batch file

First open NotePad and write the following code:

@echo off
CALL "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
MSBUILD /v:q C:\Projects\Forums\Forums.sln
MSBUILD /v:q C:\Projects\MainSite.sln
MSBUILD /v:q C:\Projects\Users\UserMgmt.sln
MSBUILD /v:q C:\Projects\Core\Global.sln
PAUSE

Save this as something like BuildAll.bat then whenever you want to build everything just double click this file.

What was that?! Explain (a bit)

To use MSBUILD you must be running the Visual Studio command prompt, but by default batch files run in a normal command prompt, so line 2 enables all the Visual Studio-ness. Also, I added the /v:q parameter so that MSBUILD won't output every little detail about the build, just the important bits (PASS/FAIL).


Thursday, 18 October 2012

To encodeURI or to encodeURIComponent?

Solved a bug today whilst encoding in JavaScript a URL that contained a certain special character: the hash character #.

I found a generated querystring was not working correctly because many of the params were not being passed to the server somehow. I was generating a link like so: var link = 'http://www.britishdeveloper.co.uk?link=' + encodeURI(url) + '&userID=232';

I was quite pleased with myself remembering to URL Encode params that are going into a querystring to get around special characters but I had not thought of something... the hash character #!

Say the URL being placed in the querystring is '/my page.aspx#comment3'; my link was coming out as http://www.britishdeveloper.co.uk?link=/my%20page.aspx#comment3&userID=232.

What's wrong with that?

Well, it's not what I was expecting really, but it was working up until I got URLs with hashes in. As you may or may not know, everything following a # in a URL is for the browser, so your server will ignore everything after it. In this case the userID param was not seen by my server.

encodeURIComponent not encodeURI

For this particular scenario encodeURIComponent was what I needed since:

encodeURI('/my page.aspx#comment3')
//output: /my%20page.aspx#comment3

encodeURIComponent('/my page.aspx#comment3')
//output: %2Fmy%20page.aspx%23comment3

encodeURI is for encoding characters that are not valid in a URI at all while leaving URI delimiters intact, so the above example of encodeURI would have been great for appending a whole path, like so: 'http://www.britishdeveloper.co.uk' + encodeURI(url)

encodeURIComponent is for encoding a string so it is fit for a querystring param, where characters such as / and # shouldn't be left unescaped, as in my example above.

So know your JavaScript encoding!


Monday, 28 May 2012

Could not load file or assembly 'msshrtmi' or one of its dependencies

Sometimes after publishing my Azure solution I get a yellow screen of death giving a "System.BadImageFormatException" exception with a description of "Could not load file or assembly 'msshrtmi' or one of its dependencies. An attempt was made to load a program with an incorrect format."

I tried everything to get rid of this msshrtmi complaint: building it again, rebuilding it, cleaning the solution, restarting my computer even and that fixes everything in Windows ever!

Very strange as my newly published Azure solution is fine but my development site is now at the mercy of this mysterious msshrtmi assembly.. whatever the hell that is.

Kill msshrtmi! Kill it with fire!

So it is an assembly, right? Assemblies go in the bin folder... let's look there... There it is! msshrtmi.dll. DELETE IT!

I don't know what it is and I didn't put it there so I deleted it and all is back to normal. Excellent.


How to turn off logging on Combres

I have been using Combres for my .NET app to minify, combine and compress my JavaScript and CSS. So far I have found it awesome.

It really does do everything it claims: it minifies your scripts and CSS, then combines them so that all the code is sent via one HTTP request (well, one for CSS and one for JavaScript). It also handles GZip or deflate compression should the request accept it. It is easy to set up since it is now available through NuGet, and using it will make YSlow smile when it scans your site.

Logging

One thing that is annoying though is that Combres seems to log everything it does: "Use content from content's cache for Resource x... Use content from content's cache for Resource y". This is fine in development but unnecessary in production, so I wanted to turn it off but couldn't find how to do this in any documentation.

The way I managed to turn off logging was to find the line in the web.config that combres put in that looks like this:

<combres definitionUrl="~/App_Data/combres.xml" logProvider="Combres.Loggers.Log4NetLogger" />

You simply need to remove the logProvider so it looks like this:

<combres definitionUrl="~/App_Data/combres.xml" />

If you still want logging in your development environment, you can instead leave the logProvider attribute in your web.config and remove it in a web.config transform for the configuration you publish with.


Tuesday, 22 May 2012

Export and back up your SQL Azure databases nightly into blob storage

With Azure I have always believed that if you can do it with the Azure Management Portal then you can do it with a REST API. So I thought it would be a breeze to make an automated job that runs every night to export and back up my SQL Azure database into a BACPAC file in blob storage. I was surprised to find that scheduling BACPAC exports of your SQL Azure databases is not documented in the Azure Service Management API. Maybe it is because BACPAC exporting and importing is in beta? Never mind. I successfully have a worker role backing up my databases, and here's how:

It is a REST API, so you can't rely on WCF to handle all your POST data for you, but there is still a trick to avoid writing out all your XML parameters by hand: add a service reference and use its strongly typed classes instead.

Go to your worker role or console application and add a service reference to your particular DACWebService (it varies by region):

  • North Central US: https://ch1prod-dacsvc.azure.com/DACWebService.svc
  • South Central US: https://sn1prod-dacsvc.azure.com/DACWebService.svc
  • North Europe: https://db3prod-dacsvc.azure.com/DACWebService.svc
  • West Europe: https://am1prod-dacsvc.azure.com/DACWebService.svc
  • East Asia: https://hkgprod-dacsvc.azure.com/DACWebService.svc
  • Southeast Asia: https://sg1prod-dacsvc.azure.com/DACWebService.svc

Once you import this Service Reference you will have some new classes that will come in handy in the following code:

//these details are passed into my method but here is an example of what is needed
var dbServerName = "qwerty123.database.windows.net";
var dbName = "mydb";
var dbUserName = "myuser";
var dbPassword = "Password!";

//storage connection is in my ServiceConfig
//I know the CloudStorageAccount can be obtained in one line of code
//but this way is necessary to be able to get the StorageAccessKey later
var storageConn = RoleEnvironment.GetConfigurationSettingValue("Storage.ConnectionString");
var storageAccount = CloudStorageAccount.Parse(storageConn);

//1. Get your blob storage credentials
var credentials = new BlobStorageAccessKeyCredentials();
//e.g. https://myStore.blob.core.windows.net/backups/mydb/2012-05-22.bacpac
credentials.Uri = string.Format("{0}backups/{1}/{2}.bacpac",
    storageAccount.BlobEndpoint,
    dbName,
    DateTime.UtcNow.ToString("yyyy-MM-dd"));
credentials.StorageAccessKey = ((StorageCredentialsAccountAndKey)storageAccount.Credentials)
                                   .Credentials.ExportBase64EncodedKey();

//2. Get the DB you want to back up
var connectionInfo = new ConnectionInfo();
connectionInfo.ServerName = dbServerName;
connectionInfo.DatabaseName = dbName;
connectionInfo.UserName = dbUserName;
connectionInfo.Password = dbPassword;

//3. Fill the object required for a successful POST
var export = new ExportInput();
export.BlobCredentials = credentials;
export.ConnectionInfo = connectionInfo;

//4. Create your request
var request = WebRequest.Create("https://am1prod-dacsvc.azure.com/DACWebService.svc/Export");
request.Method = "POST";
request.ContentType = "application/xml";
using (var stream = request.GetRequestStream())
{
    var dcs = new DataContractSerializer(typeof(ExportInput));
    dcs.WriteObject(stream, export);
}

//5. make the POST!
using (var response = (HttpWebResponse)request.GetResponse())
{
    if (response.StatusCode != HttpStatusCode.OK)
    {
        throw new HttpException((int)response.StatusCode, response.StatusDescription);
    }
}

This code would run in a scheduled task or in a worker role set to fire at 2am each night, for example. It is important you have appropriate logging and notifications in the event of failure.
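As a rough sketch only (this is not from the original post), the worker role's Run loop might look something like the following, where RequestExport() is a hypothetical wrapper around the export code above:

public override void Run()
{
    while (true)
    {
        // Work out the next 02:00 UTC and sleep until then
        var now = DateTime.UtcNow;
        var nextRun = now.Date.AddHours(2);
        if (nextRun <= now)
        {
            nextRun = nextRun.AddDays(1);
        }
        Thread.Sleep(nextRun - now);

        try
        {
            RequestExport(); // hypothetical wrapper around the export request above
        }
        catch (Exception ex)
        {
            // Log and notify rather than letting the role recycle
            Trace.TraceError("Database export request failed: {0}", ex);
        }
    }
}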

Conclusion

This sends off the request to start the back up / export of the database into a BACPAC file. A successful response is no indication that the back up itself was successful, only that the request was submitted. If your credentials are wrong you will still get a 200 OK response but the back up will fail silently later.

To see if it has been successful you can check on the status of your exports via the Azure Management Portal, or by waiting a short while and having a look in your blob storage.

I have not covered Importing because, really, exporting is the boring yet important activity that must happen regularly (such as nightly). Importing is the one you do on the odd occasion when there has been a disaster, and the Azure Management Portal is well suited to such an occasion.


Friday, 18 May 2012

How to get Azure storage Account Key from a CloudStorageAccount object

I have a CloudStorageAccount object and I want to get the AccountKey out of it. Seems like you should be able to get the Cloud Storage Key out of a CloudStorageAccount but I did struggle a bit at first.

I used the CloudStorageAccount.FromConfigurationSetting(string) method at first and then played about in debug mode to see if I could find it.

I found that that method doesn't return the correct type of credentials object, which left me unable to get at my Azure storage access key. I then tried the same thing using CloudStorageAccount.Parse(string) instead. This did give access to the Azure storage access key.

//this method of getting your CloudStorageAccount is no good here
//var account = CloudStorageAccount.FromConfigurationSetting("StorageConnectionStr");

//these two lines do...
var accountVal = RoleEnvironment.GetConfigurationSettingValue("StorageConnectionStr");
var account = CloudStorageAccount.Parse(accountVal);

//and then you can retrieve the key like this:
var key = ((StorageCredentialsAccountAndKey)account.Credentials)
          .Credentials.ExportBase64EncodedKey();

It is strange that CloudStorageAccount.FromConfigurationSetting doesn't give you access to the account key in your credentials but CloudStorageAccount.Parse does. Oh well, hope that helps.


Thursday, 10 May 2012

Set maxlength on a textarea

It's annoyed me for quite a while that you can set a maximum length on an <input type="text" /> but not on a textarea. Why? Are they so different?

I immediately thought I was going to have to write some messy JavaScript but then I learned that HTML5 implements maxlength on text areas now and I'm only considering modern browsers! Wahoo!

Then I learnt that IE9 doesn't support it so JavaScript it is...

JavaScript textarea maxlength limiter

What I have done that works well is bind the keyup and blur events on my textarea to a function that removes any characters over the maxlength provided. The code looks like this:

$('#txtMessage').bind('keyup blur', function () {
    var $this = $(this);
    var len = $this.val().length;
    var maxlength = $this.attr('maxlength');
    if (maxlength && len > maxlength) {
        $this.val($this.val().slice(0, maxlength));
    }
});

Conclusion

It works quite well because if the browser already supports maxlength on a textarea there will be no interruption because the value of the textarea will not go over that maxlength.

The keyup event doesn't fire when the user pastes text in with the mouse, but that is where the blur event comes in. Also, pressing Enter makes a new line in a textarea, so the user has to click the submit button (blurring the textarea) to submit.

Beware though that this is for usability only; a nasty user could easily bypass this so ensure you are checking the length server side if it is important to you.
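A minimal sketch of that server-side check, assuming an ASP.NET MVC action and a 500 character limit (both made up for this example):

[HttpPost]
public ActionResult SaveMessage(string message)
{
    const int maxLength = 500; // keep in sync with the textarea's maxlength

    if (message != null && message.Length > maxLength)
    {
        ModelState.AddModelError("message", "Message is too long.");
        return View();
    }

    // ... save the message and carry on
    return RedirectToAction("Index");
}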
