Thursday, 6 August 2015

Outbound IP address from Azure WebJobs or WebSite

I need to find the outbound IP address of an Azure Website so that I can whitelist it in a service I wish to call. However, there are concerns with this practice, as I will explain.

Azure WebJobs are completely awesome and should be used more. I have a project, however, in which I am using one to pre-process a large amount of data. As part of this process, my WebJob needs to call a third-party web service that operates an IP whitelist, so to call it successfully I need to find the outbound IP address of my Azure WebJob.

That is simple enough. I confirmed the IP address using two sources. The first is the list of IPs documented by Microsoft; details on how to find yours are here: Azure Outbound IP restrictions. I also wrote a little code to check that this matches what a WebJob actually reports, like so:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static async Task ProcessQueueMessage(
    [QueueTrigger("iptest")] string message,
    TextWriter log)
{
    using (var client = new HttpClient())
    {
        // Ask an external echo service what IP it sees this call coming from
        var response = await client.GetAsync("http://ip.appspot.com/");
        var ip = await response.Content.ReadAsStringAsync();
        await log.WriteAsync(ip);
    }
}

Popping something in the "iptest" queue kicked off the job, and checking the logs confirmed that the WebJob's outbound IP is indeed consistent with the documented ranges.

There is a problem, however. If you read that link you will discover that although the IP is roughly static, it is not unique: you share your outbound IP with other users of Azure Websites hosted in the same region and the same scale unit as you. What is a scale unit? It hardly matters, but there are only 15 of them in the North Europe data centre, for example, so not a lot. Now, how secure do you think whitelisting a shared IP is? Not very!

Workaround

Don't give up hope! The workarounds I can see are to ask the service provider not to rely on IP whitelisting alone and to offer another form of authentication; an API key sent over SSL would work, for example. Have it as well as IP whitelisting if it makes them happy.
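
As a sketch of what that could look like from the WebJob side (the header name, key and URL below are all made up for illustration; use whatever scheme the provider actually supports):

using System.Net.Http;
using System.Threading.Tasks;

public static class ServiceCaller
{
    public static async Task<string> CallServiceAsync()
    {
        using (var client = new HttpClient())
        {
            // A hypothetical API key sent over HTTPS, used alongside the IP whitelist
            client.DefaultRequestHeaders.Add("X-Api-Key", "your-api-key-here");
            var response = await client.GetAsync("https://api.example.com/endpoint");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}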

If the service provider can't be persuaded, you can do your own magic. There are proxy providers out there that will give your calls a unique static IP address; try QuotaGuard. Or make your own: if you already have a Cloud Service running in Azure, you can proxy the calls via that, as Cloud Services can have static, unique outbound IP addresses.
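
If you go down the proxy route, pointing HttpClient at the proxy is straightforward. A minimal sketch, where the proxy address and credentials are placeholders for whatever QuotaGuard or your own Cloud Service gives you:

using System.Net;
using System.Net.Http;

var handler = new HttpClientHandler
{
    // Route outbound calls through a proxy that has a static, unique IP
    Proxy = new WebProxy("http://your-proxy.example.com:9293")
    {
        Credentials = new NetworkCredential("proxyUser", "proxyPassword")
    },
    UseProxy = true
};
var client = new HttpClient(handler);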

Skipping unit tests is a false economy

You only need unit tests if you write buggy code, but why would I write bugs? I don't need them.

For a long time I thought unit tests were just vanity code. Oooh look, I've made this repository unit testable, I've used my fav' IoC container to inject dependencies, and now in my tests I mock them to create an elegant suite of unit tests. No one will ever run the tests, but it is cool because all the top dev bloggers write about it.

That is probably a lot of people's motive: that, and the lead dev told them to. You can tell people don't fully understand why they are writing unit tests when, running short of time to complete their work, the first thing they compromise on is the unit tests. Unit tests are always the first to be dropped in a high-pressure environment.

Let me sell unit tests to you

They save you time! How can that be right? They take so long to write and to refactor every time you change your code. Unit tests save you time because you no longer need to trawl through mountains of code working out in your head every possibility for causing a bug. You don't need to step through every line of code in your application hunting for unexpected consequences when you have decent unit tests. That saves so much time!

Worse still are the people who wouldn't have trawled through the code to find a potential bug, and just committed their code anyway and broke something. Sooner or later it will get noticed, and someone is going to spend a very long time tracking it down and then fixing the root cause.

What about deployment time as well? When you come to merge and commit a branch or do a deployment, how long does it take to review every line in the merge or click about every section of the site? 30 minutes? Hours? Or maybe you wouldn't bother, and as mentioned, that is when the even harder-to-find bugs creep in. It is such a waste of time!

Writing unit tests from the beginning will slash all this wasted time!

Unit tests should be the first thing you write!

This is called TDD (Test Driven Development) and it is the most effective way of ensuring your tests get written: you can't drop them to save time because they are already written. It also forces you to really think about your code before writing it.
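
To make the rhythm concrete, here is a minimal sketch in C# with xUnit (the PriceCalculator class and its behaviour are invented purely for illustration): the test is written first, fails because the class doesn't exist yet, and then just enough code is written to make it pass.

using Xunit;

public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_TenPercent_ReducesPrice()
    {
        var calculator = new PriceCalculator();

        var result = calculator.ApplyDiscount(100m, 0.1m);

        Assert.Equal(90m, result);
    }
}

// Written second, with just enough code to make the test pass
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal discount)
    {
        return price - (price * discount);
    }
}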

But TDD or not, please PLEASE write them. Lots of them. Use NCrunch, which runs your test suite continuously and shows which lines of code are covered and the state of the tests covering them. Aim for 80%+ code coverage from the beginning and you will save so much time in the future on all that boring clicking about, or line-by-line reviewing of your code merges.

Not writing unit tests is a false economy

And developers are expensive so don't waste their time.

Please send this on to your team, including the project manager. Everyone must know the importance of getting unit tests written, and written well.

Friday, 17 May 2013

SQL MERGE that changes the ON constraint as it runs

What happens when your MERGE statement INSERTs a row that then satisfies the ON condition? If a later source row matches a row that was INSERTed earlier in the same statement, will it fall into WHEN MATCHED or WHEN NOT MATCHED?

Let's use an example piece of code to see what happens. Here I have two table variables created and seeded with some data:

DECLARE @t1 Table (id int, name varchar(12))
INSERT INTO @t1 VALUES(1, 'hi')

DECLARE @t2 Table (id int, name varchar(12))
INSERT INTO @t2 VALUES(1, 'bye')
INSERT INTO @t2 VALUES(3, 'colin')
INSERT INTO @t2 VALUES(3, 'sarah')

Now I will attempt to MERGE @t2 into @t1. If you have not come across MERGE before, it is basically a more efficient way of saying "UPDATE the row if it exists, INSERT it if it is new".

MERGE @t1 AS t1
 USING(SELECT * FROM @t2) AS t2
 ON t2.id = t1.id
WHEN MATCHED THEN
 UPDATE SET t1.name = t2.name
WHEN NOT MATCHED THEN
 INSERT(id, name)
 VALUES(t2.id, t2.name);

So, what we are saying is:

  • USING this source of data (SELECT * FROM @t2)
  • ON this decider (matching the IDs to judge whether the row already exists)
  • WHEN the ON clause is MATCHED update this row with new values
  • WHEN NOT MATCHED we should INSERT the new row

But, given the values in @t2, what will happen?

  1. The 1st row will match on ID 1 and so will do an UPDATE
  2. The 2nd row doesn't have a match for an ID of 3 so will INSERT
  3. The 3rd row didn't have a match for an ID of 3 before but since the last INSERT it now does. So INSERT or UPDATE?

A SELECT * FROM @t1 will show you that it has INSERTed twice...

id | name
---+------
 1 | bye
 3 | colin
 3 | sarah

So watch out for this. The match between source and destination is evaluated once, at the start of the statement, and is not re-evaluated as rows are inserted, so you should make sure the source table being MERGEd is complete in itself. You can do this with a GROUP BY or DISTINCT on the USING table, whichever is most appropriate for your scenario.
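
For example, here is a sketch of the same MERGE with the duplicates collapsed in the USING clause, assuming that keeping one arbitrary name per id (here via MAX) is acceptable for your data:

MERGE @t1 AS t1
 USING(SELECT id, MAX(name) AS name FROM @t2 GROUP BY id) AS t2
 ON t2.id = t1.id
WHEN MATCHED THEN
 UPDATE SET t1.name = t2.name
WHEN NOT MATCHED THEN
 INSERT(id, name)
 VALUES(t2.id, t2.name);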

How to solve Introducing FOREIGN KEY constraint may cause cycles or multiple cascade paths

I was writing my Entity Framework 5 Code First data models one day when I received a dramatic sounding error: "Introducing FOREIGN KEY constraint 'FK_dbo.Days_dbo.Weeks_WeekID' on table 'Days' may cause cycles or multiple cascade paths." I was instructed to "Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints." And then simply the message, "Could not create constraint." So what happened with my Foreign Key that makes it cyclic?

First a spot of code that can nicely demonstrate this scenario:

public class Year
{
    public int ID { get; set; }
    public string Name { get; set; }

    public ICollection<Month> Months { get; set; }
}
public class Month
{
    public int ID { get; set; }
    public string Name { get; set; }

    public Year Year { get; set; }
    public ICollection<Day> Days { get; set; }
}
public class Day
{
    public int ID { get; set; }
    public string Name { get; set; }

    public Month Month { get; set; }
    public Week Week { get; set; }
}

//problem time
public class Week
{
    public int ID { get; set; }
    public string Name { get; set; }

    public Year Year { get; set; }
    public ICollection<Day> Days { get; set; }
}

So, let's explain this. A Year has Months and a Month has Days; all is well at this point and it will build and generate all your tables happily. The problem comes when you add the Week class, and the Week property on Day, to bring Weeks into the data model.

By making those navigation properties you have implicitly instructed EF Code First to create Foreign Key constraints for you, and each of these will have cascading deletes on them. This means that, upon deleting a Year, the database will cascade that delete to the Year's Months and in turn to those Months' Days. This is to, quite rightly, maintain referential integrity; you will not have Yearless Months.
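
To make that concrete, the generated database ends up with constraints along these lines (a rough sketch; the exact constraint and column names EF generates will differ):

ALTER TABLE [Days]
    ADD CONSTRAINT [FK_dbo.Days_dbo.Months_MonthID]
    FOREIGN KEY ([MonthID]) REFERENCES [Months] ([ID])
    ON DELETE CASCADE;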

So if you think about it, now that we have added a Week entity that also has a relationship with Days, this too will cascade deletes. As well as the cascades that occur when you delete a Year, as described above, deleting a Year will now also delete its Weeks, and those Weeks will delete their Days. But those Days would already have been deleted by the cascade that went via Months. So who wins? Who gets there first? Who knows... and that's why you need to design an answer to this conundrum yourself.

Removing the multiple cascade paths

The way I found to solve this issue is to remove one of the cascades; there is nothing wrong with the Foreign Keys themselves, only with the multiple cascade paths. So let's remove the cascading delete on the Week -> Days relationship, since that delete will be taken care of by each Month cascading its deletes to its Days.

public class CalenderContext : DbContext
{
    public DbSet<Year> Years{ get; set; }
    public DbSet<Month> Months { get; set; }
    public DbSet<Day> Days { get; set; }
    public DbSet<Week> Weeks { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Day>()
            .HasRequired(d => d.Week)
            .WithMany(w => w.Days)
            .WillCascadeOnDelete(false);

        base.OnModelCreating(modelBuilder);
    }
}

If you already have a Foreign Key field explicitly written into your data model (as I usually do; I removed them from the above example for clarity) you will need to specify this existing Foreign Key in the code just written. So if your Day entity looked like this:

public class Day
{
    public int ID { get; set; }
    public string Name { get; set; }

    [ForeignKey("Month")]
    public int MonthID { get; set; }
    public Month Month { get; set; }

    [ForeignKey("Week")]
    public int WeekID { get; set; }
    public Week Week { get; set; }
}

You will need your OnModelCreating method to look like this:

modelBuilder.Entity<Day>()
    .HasRequired(d => d.Week)
    .WithMany(w => w.Days)
    .HasForeignKey(d => d.WeekID)
    .WillCascadeOnDelete(false);

Have fun deleting safely!

Thursday, 28 February 2013

The request filtering module is configured to deny a request that contains a double escape sequence

In IIS 7.5 I have a site that contains a page that takes an encrypted part of a URL. This encrypted string includes a plus sign '+' which causes IIS to throw a "HTTP Error 404.11 - Not Found" error stating "The request filtering module is configured to deny a request that contains a double escape sequence."

The problem is that a + sign was acceptable in earlier versions of IIS, so these URLs need to keep working for legacy reasons. I therefore need to make them allowed again in IIS.

The quick fix

This can be easily achieved with a simple web.config change:

<system.webServer>
    <security>
        <requestFiltering allowDoubleEscaping="true" />
    </security>
</system.webServer>

This allows URLs to contain this plus symbol '+'.

The warning

There are consequences to this, which unsurprisingly are security related, so please read Double Encoding to familiarise yourself with the risk in your situation. If it is a risk to you, maybe the best solution is to redesign those URLs.

Saturday, 19 January 2013

Handler “ExtensionlessUrlHandler-Integrated-4.0” has a bad module “ManagedPipelineHandler” in its module list

How to fix the error 'Handler “ExtensionlessUrlHandler-Integrated-4.0” has a bad module “ManagedPipelineHandler” in its module list'.

This error occurred when I moved my ASP.NET 4.5 site from an old Windows 7 machine to a brand new Windows 8 machine running IIS 8.

I thought I was in need of an aspnet_regiis -I, but this threw back a message saying "Start installing ASP.NET (4.0.30319.17929). This option is not supported on this version of the operating system. Administrators should instead install/uninstall ASP.NET 4.5 with IIS8 using the "Turn Windows Features On/Off" dialog, the Server Manager management tool, or the dism.exe command line tool. For more details please see http://go.microsoft.com/fwlink/?LinkID=216771. Finished installing ASP.NET (4.0.30319.17929)."

So I turned to turning Windows features on and off (Win+X then hit F). I had already turned IIS on, but it turned out I hadn't done enough.

Previously, when I turned IIS on, I had left it with the defaults, but actually I needed more to get ASP.NET 4.5 to run. You need to go to:

  1. Internet Information Services
  2. World Wide Web Services
  3. Application Development Features
  4. Then check ASP.NET 4.5

This will select a few other bits for you and you should be good to go.
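
If you prefer the command line tool that the error message mentions, something like the following should enable the same feature via dism. I believe the feature is called IIS-ASPNET45 on Windows 8, but treat that as an assumption and check the output of dism /online /get-features first:

dism /online /enable-feature /featurename:IIS-ASPNET45 /all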

Thursday, 17 January 2013

Finding Outlook Web Access (OWA) URL from Outlook 2010

I was struggling to find my Outlook webmail address from Outlook 2010 on my machine so that I could access my work email from home or on my mobile. The OWA URL is hidden quite deep in Outlook 2010, so I have written down its location so I can remember for next time.

It is rather hidden away, probably because it is assumed you would just ask IT instead of finding out for yourself. But that isn't like you, is it? That's why you're here!

Finding your web mail address in Outlook 2010

  1. Click the File tab near the top left
  2. Click the Info tab
  3. Click the big "Account Settings" button and then "Account Settings" again on the pop out menu
  4. On the E-mail tab, ensure your address is selected and click the "Change..." button just above it
  5. Click the "More Settings ..." button at the bottom
  6. Go to the Connection tab and click "Exchange Proxy Settings..." button at the bottom
  7. It is the first URL on this window. Something like https://mail.mydomain.com

Monday, 14 January 2013

Web forms button click event not firing in IE

There are probably many reasons why a button would not fire in an ASP.NET Web Forms application. Some can be fixed by deleting cookies and history; some are to do with when the event is registered. I found a new one though.

In my scenario I had a text box that was submitted with a button next to it. We didn't want to display the button, though; instead we wanted the form submitted by pressing the Enter key. The button was therefore hidden with display:none, like this:

<input type="text" id="txtInput" onchange="btnGo_Click" runat="server"/>
<asp:Button style="display:none;" runat="server" OnClick="btnGo_Click" />
<asp:Label ID="lblOutput" runat="server" />

It turns out that this worked just fine in Chrome and Firefox but not in IE9. This is because not all browsers post the value of a button that is hidden with CSS. Security, possibly? You could see this by looking at the request in Fiddler: there was no value for the button field in IE but there was in the other browsers.

How to submit a form by clicking Enter and hiding the button

The best way I can think of is to hide the button without hiding it... How? You can push it miles off the page with some CSS like this:

<input type="text" id="txtInput" onchange="btnGo_Click" runat="server"/>
<asp:Button style="position:absolute;left:-9999px;" runat="server" OnClick="btnGo_Click" />
<asp:Label ID="lblOutput" runat="server" />

This will put the button miles off to the left never to be seen by the user.

Friday, 11 January 2013

Add rel="nofollow" to all external links with one line of jQuery

You may want to change all external links on a page to do something different, such as adding target="_blank" or rel="nofollow" to each one. This post will show you how this can be done in one line of jQuery!

SEO, SEO, SEO... How tiresome a concept... Anyway, people make their living with it apparently, and while that is the case I get all kinds of weird requirements like this. This time I had to add rel="nofollow" to all external links for the latest whim of an SEO consultant.

Since SEO requirements seem to chop and change fairly unpredictably, I wanted to add rel="nofollow" to all external links in the quickest and least interfering way possible. I managed it with one line of jQuery, so it is fairly innocuous and I can just remove it if it needs to be undone at some point. The same approach can be used to add target="_blank" too.

Add rel="nofollow" to all external links

$("div.content a[href^='http']:not([href*='mysite.co.uk'])").attr("rel", "follow");

Add target="_blank" to all external links

$("div.content a[href^='http']:not([href*='mysite.co.uk'])").attr("target", "_blank");

Hope this helps you spend as little time on this as possible :)

Update:

I did some research into whether adding a nofollow in this way will work on Google and came to the conclusion that it probably won't. Colin Asquith commented similar thoughts. So bear this in mind if you are using this to add rel="nofollow" to links; technically, though, it remains a good way of going about this sort of change, such as adding target="_blank" for example.

Sunday, 11 November 2012

Upgrading Azure Storage Client Library to v2.0 from 1.7

I upgraded the Azure Storage Client Library from version 1.7 to 2.0 and found a number of breaking changes. I thought I'd document how I upgraded the more awkward bits.

DownloadByteArray has gone missing

For whatever reason DownloadByteArray has been taken from me. So have DownloadToFile, DownloadText, UploadFromFile, UploadByteArray and UploadText.

Without too much whinging I'm just going to get on and fix it. This is what was working PERFECTLY FINE in v1.7:

public byte[] GetBytes(string fileName)
{
    var blob = Container.GetBlobReference(fileName);
    return blob.DownloadByteArray();
}

And here is the code modified to account for the fact that DownloadByteArray no longer exists in Azure Storage v2.0:

public byte[] GetBytes(string fileName)
{
    var blob = Container.GetBlockBlobReference(fileName);
    using (var ms = new MemoryStream())
    {
        blob.DownloadToStream(ms);
        ms.Position = 0;
        return ms.ToArray();
    }
}

How to get your CloudStorageAccount

Another apparently random change is that you can't get your storage account info in the same way as you used to. You used to be able to get it like this in Storage Client v1.7:

var storageAccountInfo = CloudStorageAccount.FromConfigurationSetting(configSetting);
var tableStorage = storageAccountInfo.CreateCloudTableClient();

But in Azure Storage v2.0 you must get it like this:

var storageAccountInfo = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting(configSetting));
var tableStorage = storageAccountInfo.CreateCloudTableClient();

Why? Not sure. I have had problems getting storage account information before, so maybe this resolves that.

What happened to CreateTableIfNotExist?

Again, it has disappeared, but who cares... Oh, you do? Right, well, let's fix that up. So, in Azure Storage Client v1.7 you did this:

var tableStorage = storageAccountInfo.CreateCloudTableClient();
tableStorage.CreateTableIfNotExist(tableName);

But now in Azure Storage Client Library v2.0 you must do this:

var tableStorage = storageAccountInfo.CreateCloudTableClient();
var table = tableStorage.GetTableReference(tableName);
table.CreateIfNotExists();

Attributes seem to have disappeared and LastModifiedUtc has gone

Another random change that possibly doesn't achieve anything other than making you refactor your code. This was my old code in Storage Client Library v1.7:

var blob = BlobService.FetchAttributes(FileName);
if (blob == null || blob.Attributes.Properties.LastModifiedUtc < DateTime.UtcNow.AddHours(-1))
{
    ...
}

But now it should read like this, presumably because someone thought it looks prettier (which it does, in fairness):

var blob = BlobService.FetchAttributes(FileName);
if (blob == null || blob.Properties.LastModified < DateTimeOffset.UtcNow.AddHours(-1))
{
    ...
}

Change your development storage connection string

This one is just a straight bug, so that's excellent. I was getting a useless exception stating "The given key was not present in the dictionary" when trying to create a CloudStorageAccount reference. To resolve this, change your development environment connection string from UseDevelopmentStorage=true to UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1 and it will magically work.
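
For local development that setting usually lives alongside your other config. A minimal sketch in app.config form, where the setting name StorageConnectionString is an illustrative placeholder for whatever configSetting refers to in the code above:

<appSettings>
  <add key="StorageConnectionString"
       value="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1" />
</appSettings>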

Bitch and moan

Apologies for the whingy nature of this post; I'm quite a fan of Azure, but I have wasted about 3-4 hours on this "upgrade" from Azure Storage Client Library 1.7 to 2.0. It has been incredibly frustrating, particularly since there seems to be no obvious reason why these changes were made. I just can't believe the number of breaking changes when I haven't really written that much Azure Storage code.

Randomly taking out nice methods like DownloadByteArray and DownloadText is surely a step backwards no? Or randomly renaming CreateIfNotExist() to CreateIfNotExists()... what is the point in that?!

I remember when upgrading from ASP.NET 3.5 to 4, I spent very little time working through breaking changes, and I have 100 times more .NET code than I do Azure Storage code. As well as that, I was well aware of the many improvements in that .NET version update; with this Azure Storage update I have no idea what I'm getting. No matter the improvements, it is just an Azure Storage API, and this number of breaking changes, often for the benefit of syntax niceties, is just unacceptable.

Oh, and if you are still in pain doing this, I have found a complete list of breaking changes in this update, along with minimal explanations, here.
