British Developer
A blog about Microsoft's web development technologies written by a Brit.<br>
Mainly ASP.NET MVC, C#, Entity Framework, Azure, Visual Studio and SQL. By Colin Farr.
<h3>Reference object with ref keyword on a method. What's the point? (2016-12-07)</h3>
<p><strong>I wondered the other day what the point is in making a method parameter a ref when it is already a reference type. After playing around for a bit I came up with some concise code to illustrate the difference.</strong></p>
<p>The difference is that, although both methods can modify the object that was passed in, reassigning the reference entirely (i.e. pointing it at a different object) only has an effect outside the scope of the method if the argument was passed with the ref keyword.</p>
<p>This code should explain better than English:</p>
<pre class="brush:csharp">void Main()
{
var t1 = new Thing();
Test(t1);
t1.Write(); // Chair
var t2 = new Thing();
Test(ref t2);
t2.Write(); // Table
}
void Test(Thing t)
{
t.Name = "Chair";
t = new Thing { Name = "Table" };
}
void Test(ref Thing t)
{
t.Name = "Chair";
t = new Thing { Name = "Table" };
}
class Thing
{
public string Name { get; set; }
public void Write() => Console.WriteLine(Name);
}</pre>
<h3>Stop processing OPTIONS requests for CORS in ASP.NET Web API (2015-12-22)</h3>
<p><strong>I was attempting to allow some particular origins to access my ASP.NET Web API from a client side single page application. I was using the EnableCorsAttribute that comes with the Microsoft.AspNet.WebApi.Cors NuGet package.</strong></p>
<p>I managed to set up CORS using the following code in my WebApiConfig:</p>
<pre class="brush:csharp">var origins = ConfigurationManager.AppSettings["AllowedOrigins"];
var cors = new EnableCorsAttribute(origins, "accept,content-type,origin,customId", "GET,POST,PUT");
config.EnableCors(cors);</pre>
<p>There is quite a lot to CORS but essentially, (some) browsers send a pre-flight request, recognisable by its HTTP method: OPTIONS. This asks the application whether the calling origin is allowed to access this URL with the attempted headers and HTTP method. Your Web API will respond saying which origins are allowed, or with any errors. The browser then decides whether the page's origin is one of those allowed and sends the real request if it is.</p>
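<p>As an illustration, a pre-flight exchange for this kind of setup might look roughly like the following. The URL and origin are made up; the header values mirror the EnableCorsAttribute settings shown earlier:</p>

```http
OPTIONS /api/orders HTTP/1.1
Origin: https://app.example.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: content-type, customId

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET,POST,PUT
Access-Control-Allow-Headers: accept,content-type,origin,customId
```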
<p>The problem I found is that on this initial OPTIONS request my IoC container, Unity, was constructing a whole dependency chain of classes, some of which access the database and some of which check HTTP headers. This was throwing an error, since headers that accompany normal requests were missing from the OPTIONS request, and it was unnecessarily hitting the database. So really, I wanted to stop these requests in their tracks whilst making sure they still did their intended pre-flight work.</p>
<p>The best way I found to do this was to ignore routes based on an HTTP constraint for "OPTIONS". Basically shove this in your routing:</p>
<pre class="brush:csharp">var constraints = new { httpMethod = new HttpMethodConstraint(HttpMethod.Options) };
config.Routes.IgnoreRoute("OPTIONS", "{*pathInfo}", constraints);</pre>
<p>More info on <a href="http://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api">Enabling CORS in Web API</a>.</p>
<h3>Outbound IP address from Azure WebJobs or WebSite (2015-08-06)</h3>
<p><strong>I need to find the outbound IP address of an Azure Website so that I can whitelist this IP address in a service I wish to call. However, there are concerns with this practice, as I will explain.</strong></p>
<p>Azure WebJobs are completely awesome and they should be used more. I have a project, however, that I am using to pre-process a large amount of data. As part of this process, my WebJob needs to call a third-party web service that operates an IP whitelist, so to call it successfully I need to find the IP address of my Azure WebJob.</p>
<p>That is simple enough. I confirmed the IP address using two sources: one is the list of IPs documented by Microsoft; details on how to find yours are here: <a href="https://social.msdn.microsoft.com/Forums/azure/en-US/fd53afb7-14b8-41ca-bfcb-305bdeea413e/maintenance-notice-upcoming-changes-to-increase-capacity-for-outbound-network-calls?forum=windowsazurewebsitespreview">Azure Outbound IP restrictions</a>. I also wrote a little code to make sure this matches from a WebJob, like so:</p>
<pre class="brush:csharp">public async static Task ProcessQueueMessage(
[QueueTrigger("iptest")] string message,
TextWriter log)
{
using (var client = new HttpClient())
{
var response = await client.GetAsync("http://ip.appspot.com/");
var ip = await response.Content.ReadAsStringAsync();
await log.WriteAsync(ip);
}
}</pre>
<p>Popping something in the "iptest" queue kicked off the job and checking the logs confirmed the WebJobs are in fact consistent with the IP ranges documented.</p>
<p>There is a problem, however. If you read that link you will discover that although the IP is roughly static it is not unique: you will share your outbound IP with other users of Azure WebSites that are hosted in the same region and the same scale unit as you. What is a scale unit? It hardly matters, but there are only 15 of them in the North Europe data centre, for example, so not a lot. Now how secure do you think whitelisting a shared IP is? Not very!</p>
<h3>Workaround</h3>
<p>Don't give up hope! The workarounds I can see are to ask the service provider not to rely on IP whitelisting alone and to add another form of authentication; an API key sent over SSL would work, for example. Have it as well as IP whitelisting if it makes them happy.</p>
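<p>As a sketch of that belt-and-braces idea: the header name, key value and URL below are all hypothetical placeholders agreed between you and the provider, not part of any real API.</p>

```csharp
using System;
using System.Net.Http;

// Sketch: attach an API key header to every call so that authentication
// does not rest on the shared outbound IP alone. "X-Api-Key" and the URL
// are assumptions for illustration only.
public static class WhitelistedServiceClient
{
    public static HttpRequestMessage BuildRequest(string apiKey)
    {
        var request = new HttpRequestMessage(
            HttpMethod.Get, "https://thirdparty.example.com/data");
        request.Headers.Add("X-Api-Key", apiKey); // assumed header name
        return request;
    }
}
```

<p>Sending the request with HttpClient.SendAsync over HTTPS keeps the key confidential in transit, even though the source IP is shared.</p>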
<p>If they can't be persuaded, you can do your own magic. There are proxy providers out there that will give your calls a unique static IP address; try <a href="https://www.quotaguard.com/">QuotaGuard</a>. Or make your own: if you already have a Cloud Service running in Azure you can proxy the calls via that, as Cloud Services can have static and unique outbound IP addresses.</p>
<h3>Skipping unit tests is a false economy (2015-08-06)</h3>
<p><strong>You only need unit tests if you write buggy code, but why would I write bugs? I don't need them.</strong></p>
<p>For a long time I thought unit tests were just vanity code. Oooh look, I've made this repository unit testable and I've used my fav' IoC container to inject dependencies, and in my tests I mock them out to create an elegant suite of unit tests. No one will ever run the tests, but it is cool because all the top dev bloggers write about it.</p>
<p>That is probably a lot of people's motive, that and the lead dev told them to. You can tell when people don't fully understand why they are writing unit tests: when they run short of time to complete their work, the first thing they compromise on is the unit tests. Unit tests are always the first to be dropped in a high-pressure environment.</p>
<h3>Let me sell unit tests to you</h3>
<p>They save you time! How can that be right? They take so long to write and to refactor every time you change your code. Unit tests save you time because you no longer need to trawl through mountains of code to work out in your head every possibility for causing a bug. You don't need to step through every line of your application hunting for unexpected consequences when you have decent unit tests. That saves so much time!</p>
<p>Worse still are the people that wouldn't have trawled through the code to find a potential bug and just committed their code anyway and broke something. Sooner or later it will get noticed and someone is going to spend a very long time tracking it down and then fixing the root cause.</p>
<p>What about deployment time as well? When you come to merge and commit a branch or do a deployment, how long would it take you to review every line in the merge or click through every section of the site? 30 minutes? Hours? Or maybe you wouldn't bother, and as mentioned, that is when the even harder-to-find bugs creep in. It is such a waste of time!</p>
<p>Writing unit tests from the beginning will slash all this wasted time!</p>
<h3>Unit tests should be the first things you write!</h3>
<p>This is called TDD (Test Driven Development) and it is the most effective way of ensuring your tests get written as you can't drop them out to save time as they are already written. It also forces you to really think about your code before writing it.</p>
<p>But TDD or not, please PLEASE write them. Lots of them. Use NCrunch, which continuously runs your test suite and shows which lines of code are covered and the state of the tests covering them. Aim for 80%+ code coverage from the beginning and you will save so much time in the future doing all that boring clicking about or line-by-line reviewing of your code merges.</p>
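<p>To make this concrete, here is a minimal NUnit-style example (any test framework works the same way; the Basket class and its Total behaviour are invented purely for illustration):</p>

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// A tiny invented class purely to have something to test.
public class Basket
{
    private readonly List<decimal> _prices = new List<decimal>();
    public void Add(decimal price) => _prices.Add(price);
    public decimal Total => _prices.Sum();
}

[TestFixture]
public class BasketTests
{
    [Test]
    public void Total_SumsAllAddedPrices()
    {
        var basket = new Basket();
        basket.Add(1.50m);
        basket.Add(2.25m);
        Assert.AreEqual(3.75m, basket.Total);
    }
}
```

<p>A test this small documents the intended behaviour and catches any future regression to Total without anyone clicking through the site.</p>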
<h3>Not writing unit tests is a false economy</h3>
<p>And developers are expensive so don't waste their time.</p>
<p><strong>Please send this on to your team, including the project manager. Everyone must know the importance of getting unit tests done and written well.</strong></p>
<h3>SQL MERGE that changes the ON constraint as it runs (2013-05-17)</h3>
<p><strong>What happens when your MERGE statement INSERTs a row that now satisfies the ON constraint? If a new row comes along that satisfies the ON constraint, will it fall into WHEN MATCHED or WHEN NOT MATCHED?</strong></p>
<p>Let's use an example piece of code to decide what happens. Here I have two table variables created and seeded with some random data:</p>
<pre class="brush:sql">DECLARE @t1 Table (id int, name varchar(12))
INSERT INTO @t1 VALUES(1, 'hi')
DECLARE @t2 Table (id int, name varchar(12))
INSERT INTO @t2 VALUES(1, 'bye')
INSERT INTO @t2 VALUES(3, 'colin')
INSERT INTO @t2 VALUES(3, 'sarah')</pre>
<p>Now I will attempt to MERGE @t2 into @t1. If you have not come across MERGE before, it is basically a more efficient way of saying UPDATE if the row exists or INSERT if it is new.</p>
<pre class="brush:sql">MERGE @t1 AS t1
USING(SELECT * FROM @t2) AS t2
ON t2.id = t1.id
WHEN MATCHED THEN
UPDATE SET t1.name = t2.name
WHEN NOT MATCHED THEN
INSERT(id, name)
VALUES(t2.id, t2.name);</pre>
<p>So, what we are saying is:</p>
<ul><li>USING this source of data (SELECT * FROM @t2)</li>
<li>ON this decider (matching the ID's to judge if this row already exists)</li>
<li>WHEN the ON clause is MATCHED update this row with new values</li>
<li>WHEN NOT MATCHED we should INSERT the new row</li></ul>
<p>But, given the values in @t2, what will happen?</p>
<ol><li>The 1st row will match on ID 1 and so will do an UPDATE</li>
<li>The 2nd row doesn't have a match for an ID of 3 so will INSERT</li>
<li>The 3rd row didn't have a match for an ID of 3 before but since the last INSERT it now does. So INSERT or UPDATE?</li></ol>
<p>A <code>SELECT * FROM @t1</code> will show you that it has INSERTed twice...</p>
<table><tr><th>id</th><th>name</th></tr>
<tr><td>1</td><td>bye</td></tr>
<tr><td>3</td><td>colin</td></tr>
<tr><td>3</td><td>sarah</td></tr></table>
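<p>Given that behaviour, one defensive option is to collapse the source to one row per key before the MERGE runs. A sketch (assuming that keeping a single name per id, here the MAX, is acceptable for your data):</p>

```sql
-- Deduplicate the source inside USING so each id appears at most once.
-- MAX(name) is an arbitrary choice for illustration; pick whichever
-- aggregate or DISTINCT projection suits your scenario.
MERGE @t1 AS t1
USING(SELECT id, MAX(name) AS name FROM @t2 GROUP BY id) AS t2
ON t2.id = t1.id
WHEN MATCHED THEN
    UPDATE SET t1.name = t2.name
WHEN NOT MATCHED THEN
    INSERT(id, name)
    VALUES(t2.id, t2.name);
```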
<p>So watch out for this. The matching between source and destination is decided once at the beginning and not re-evaluated as rows are inserted, so you should be sure that the source table being MERGEd contains no duplicate keys. You can do this using a GROUP BY or DISTINCT on the USING table, whichever is most appropriate for your scenario.</p>
<h3>How to solve Introducing FOREIGN KEY constraint may cause cycles or multiple cascade paths (2013-05-17)</h3>
<p><strong>I was writing my Entity Framework 5 Code First data models one day when I received a dramatic-sounding error: "Introducing FOREIGN KEY constraint 'FK_dbo.Days_dbo.Weeks_WeekID' on table 'Days' may cause cycles or multiple cascade paths." I was instructed to "Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints." And then, simply, the message "Could not create constraint." So what happened with my Foreign Key that makes it cyclic?</strong></p>
<p>First a spot of code that can nicely demonstrate this scenario:</p>
<pre class="brush: csharp; highlight:[22,25,26,27,28,29,30,31,32,33]">public class Year
{
public int ID { get; set; }
public string Name { get; set; }
public ICollection<Month> Months { get; set; }
}
public class Month
{
public int ID { get; set; }
public string Name { get; set; }
public Year Year { get; set; }
public ICollection<Day> Days { get; set; }
}
public class Day
{
public int ID { get; set; }
public string Name { get; set; }
public Month Month { get; set; }
public Week Week { get; set; }
}
//problem time
public class Week
{
public int ID { get; set; }
public string Name { get; set; }
public Year Year { get; set; }
public ICollection<Day> Days { get; set; }
}
</pre>
<p>So, let's explain this. A Year has Months and a Month has Days - all is well at this point and it will build and generate all your tables happily. The problem comes when you add the highlighted parts to add Weeks into the data model.</p>
<p>By making those Navigation Properties you have implicitly instructed EF Code First to create Foreign Key constraints for you and each of these will have cascading deletes on them. This means, upon deleting a Year, the database will cascade that delete to that Year's Months and in turn those Month's Days. This is to, quite rightly, hold referential integrity; you will not have Yearless Months.</p>
<p>So if you think about it, now that we have added a Week entity that also has a relationship with Days this too will cascade deletes. So as well as the cascading deletes that occur when you delete a Year as I described above, now deleting a Year will also delete its Weeks and those Weeks will delete the Days. But those Days would have already been deleted by the cascade that went via Months. So who wins? Who gets there first? Don't know... that's why you need to design an answer to this conundrum.</p>
<h3>Removing the multiple cascade paths</h3>
<p>The way I found to solve this issue is to remove one of the cascades, since there is nothing wrong with the Foreign Keys themselves; it is only the multiple cascade paths. So let's remove the cascading delete on the Week -> Days relationship, since that delete will be taken care of by each Month cascading its deletes to its Days.</p>
<pre class="brush: csharp; highlight:[10,11,12,13]">public class CalenderContext : DbContext
{
public DbSet<Year> Years{ get; set; }
public DbSet<Month> Months { get; set; }
public DbSet<Day> Days { get; set; }
public DbSet<Week> Weeks { get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Entity<Day>()
.HasRequired(d => d.Week)
.WithMany(w => w.Days)
.WillCascadeOnDelete(false);
base.OnModelCreating(modelBuilder);
}
}
</pre>
<p>If you already have a ForeignKey field explicitly written into your code model (as I usually do - I removed them from the above example for clarity) you will need to specify this existing Foreign Key in the code just written. So if your Day entity looked like this:</p>
<pre class="brush: csharp; highlight:[6,7,10,11]">public class Day
{
public int ID { get; set; }
public string Name { get; set; }
[ForeignKey("Month")]
public int MonthID { get; set; }
public Month Month { get; set; }
[ForeignKey("Week")]
public int WeekID { get; set; }
public Week Week { get; set; }
}</pre>
<p>You will need your OnModelCreating method to look like this:</p>
<pre class="brush: csharp; highlight:[4]">modelBuilder.Entity<Day>()
.HasRequired(d => d.Week)
.WithMany(w => w.Days)
.HasForeignKey(d => d.WeekID)
.WillCascadeOnDelete(false);</pre>
<p>Have fun deleting safely!</p>
<h3>The request filtering module is configured to deny a request that contains a double escape sequence (2013-02-28)</h3>
<p><strong>In IIS 7.5 I have a site that contains a page that takes an encrypted part of a URL. This encrypted string includes a plus sign '+', which causes IIS to throw a "HTTP Error 404.11 - Not Found" error stating "The request filtering module is configured to deny a request that contains a double escape sequence."</strong></p>
<p>The problem is that a + sign was acceptable in earlier versions of IIS, so these URLs need to keep working for legacy reasons. I need to make them allowed again in IIS.</p>
<h3>The quick fix</h3>
<p>This can be easily achieved with a simple web.config change:</p>
<pre class="brush:csharp"><system.webServer>
<security>
<requestFiltering allowDoubleEscaping="true" />
</security>
</system.webServer></pre>
<p>This allows URLs to contain this plus symbol '+'.</p>
<h3>The warning</h3>
<p>There are consequences to this which, unsurprisingly, are security related, so please read <a href="https://www.owasp.org/index.php/Double_Encoding">Double Encoding</a> to familiarise yourself with the risk for your situation. If it is a risk to you, maybe the best solution is to redesign those URLs?</p>
<h3>Handler “ExtensionlessUrlHandler-Integrated-4.0” has a bad module “ManagedPipelineHandler” in its module list (2013-01-19)</h3>
<p><strong>How to fix the error 'Handler “ExtensionlessUrlHandler-Integrated-4.0” has a bad module “ManagedPipelineHandler” in its module list'.</strong></p>
<p>This error occurred when I moved my ASP.NET 4.5 site from an old Windows 7 machine to a brand new Windows 8 machine running IIS 8.</p>
<p>I was thinking I was in need of an <code>aspnet_regiis -I</code> but this threw back a message at me saying "<code>Start installing ASP.NET (4.0.30319.17929). This option is not supported on this version of the operating system. Administrators should instead install/uninstall ASP.NET 4.5 with IIS8 using the "Turn Windows Features On/Off" dialog, the Server Manager management tool, or the dism.exe command line tool. For more details please see http://go.microsoft.com/fwlink/?LinkID=216771.
Finished installing ASP.NET (4.0.30319.17929).</code>"</p>
<p>So I turned to turning Windows features on/off (Win+X then hit F). I had already turned IIS on, but it turned out I hadn't done enough.</p>
<p>Previously, when I had turned IIS on, I had left it with the defaults, but actually I needed more to get ASP.NET 4.5 to run. You need to go to:</p>
<ol><li>Internet Information Services</li>
<li>World Wide Web Services</li>
<li>Application Development Features</li>
<li>Then check ASP.NET 4.5</li>
</ol>
<p>This will select a few other bits for you and you should be good to go.</p>
<h3>Finding Outlook Web Access (OWA) URL from Outlook 2010 (2013-01-17)</h3>
<p><strong>I was struggling to find my Outlook webmail address from Outlook 2010 on my machine so that I can access my work email from home or on my mobile. The OWA URL is hidden quite deep in Outlook 2010, so I have written down the location of the URL so I can remember it for next time.</strong></p>
<p>It is rather hidden away, probably because it is assumed you would just ask IT instead of finding out for yourself. But that isn't like you, is it? That's why you're here!</p>
<h3>Finding your web mail address in Outlook 2010</h3>
<ol>
<li>Click the File tab near the top left</li>
<li>Click the Info tab</li>
<li>Click the big "Account Settings" button and then "Account Settings" again on the pop out menu</li>
<li>On E-mail tab ensure your address is selected and click the "Change..." button just above it</li>
<li>Click the "More Settings ..." button at the bottom</li>
<li>Go to the Connection tab and click "Exchange Proxy Settings..." button at the bottom</li>
<li>It is the first URL on this window. Something like <em>https://</em><strong>mail.mydomain.com</strong></li>
</ol>
<h3>Web forms button click event not firing in IE (2013-01-14)</h3>
<p><strong>There are probably many reasons why a button would not fire in an ASP.NET web forms application. Some can be fixed by deleting cookies and history, some are to do with when the event is registered. I found a new one though.</strong></p>
<p>In my scenario I had a text box that was submitted with a button next to it. We didn't want to display the button, though; instead we wanted the form submitted with a press of the Enter key. The button was therefore hidden with <code>display:none</code>, like this:</p>
<pre class="brush:csharp"><input type="text" id="txtInput" onchange="btnGo_Click" runat="server"/>
<asp:Button style="display:none;" runat="server" OnClick="btnGo_Click" />
<asp:Label ID="lblOutput" runat="server" /></pre>
<p>It turns out that this worked just fine in Chrome and Firefox but not in IE9. This is because not all browsers post form fields that are hidden with CSS; security, possibly? You could see this by looking at the request in Fiddler: there was no value for the button field in IE but there was in the others.</p>
<h3>How to submit a form by clicking Enter and hiding the button</h3>
<p>The best way I can think of is to hide the button without hiding it... How? You can push it miles off the page with some CSS like this:</p>
<pre class="brush:csharp"><input type="text" id="txtInput" onchange="btnGo_Click" runat="server"/>
<asp:Button style="position:absolute;left:-9999px;" runat="server" OnClick="btnGo_Click" />
<asp:Label ID="lblOutput" runat="server" /></pre>
<p>This will put the button miles off to the left, never to be seen by the user.</p>
<h3>Add rel="nofollow" to all external links with one line of jQuery (2013-01-11)</h3>
<p><strong>You may want to change all external links on a page to do something different, such as add target="_blank" to each one or add rel="nofollow" to every external link. This post will show you how this can be done in one line of jQuery!</strong></p>
<p>SEO SEO SEO... How tiresome a concept... Anyway, people make their living with it apparently, and while that is the case I get all kinds of weird requirements like this. This time I had to add rel="nofollow" to all external links for the latest whim of an SEO consultant.</p>
<p>Since SEO requirements seem to chop and change fairly unpredictably, I wanted to add rel="nofollow" to all external links in the quickest and least interfering way possible. I managed it with one line of jQuery, so it is fairly innocuous and I can just remove it if it needs to be undone at some point. The same approach can be used to add target="_blank" too.</p>
<h3>Add rel="nofollow" to all external links</h3>
<pre class="brush: csharp">$("div.content a[href^='http']:not([href*='mysite.co.uk'])").attr("rel", "nofollow");</pre>
<h3>Add target="_blank" to all external links</h3>
<pre class="brush: csharp">$("div.content a[href^='http']:not([href*='mysite.co.uk'])").attr("target", "_blank");</pre>
<p>Hope this helps you spend as little time on this as possible :)</p>
<h3>Update:</h3>
<p>I did some research into whether adding a nofollow in this way will work on Google and came to the conclusion that it probably won't. Colin Asquith commented similar thoughts. So bear this in mind if using this to add rel="nofollow" to links, but technically it remains a good way of doing anything similar, like adding target="_blank" for example.</p>
<h3>Upgrading Azure Storage Client Library to v2.0 from 1.7 (2012-11-11)</h3>
<p><strong>I upgraded Azure Storage to version 2.0 from 1.7 and I've found a number of differences when using storage. I thought I'd document how I upgraded the more awkward bits of Azure Storage in version 2.0.</strong></p>
<h3>DownloadByteArray has gone missing</h3>
<p>For whatever reason, DownloadByteArray has been taken from me. So have DownloadToFile, DownloadText, UploadFromFile, UploadByteArray and UploadText.</p>
<p>Without too much whinging I'm just going to get on and fix it. This is what was working PERFECTLY FINE in v1.7:</p>
<pre class="brush: csharp">public byte[] GetBytes(string fileName)
{
var blob = Container.GetBlobReference(fileName);
return blob.DownloadByteArray();
}</pre>
<p>And here is the code modified to account for the fact that DownloadByteArray no longer exists in Azure Storage v2.0:</p>
<pre class="brush:csharp">public byte[] GetBytes(string fileName)
{
var blob = Container.GetBlockBlobReference(fileName);
using (var ms = new MemoryStream())
{
blob.DownloadToStream(ms);
ms.Position = 0;
return ms.ToArray();
}
}</pre>
<h3>How to get your CloudStorageAccount</h3>
<p>Another apparently random change is that you can't get your storage account info in the same way as you used to. You used to be able to get it like this in Storage Client v1.7:</p>
<pre class="brush:csharp">var storageAccountInfo = CloudStorageAccount.FromConfigurationSetting(configSetting);
var tableStorage = storageAccountInfo.CreateCloudTableClient();</pre>
<p>But in Azure Storage v2.0 you must get it like this:</p>
<pre class="brush:csharp">var storageAccountInfo = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting(configSetting));
var tableStorage = storageAccountInfo.CreateCloudTableClient();</pre>
<p>Why? I'm not sure. I have had problems with getting storage account information before, so maybe this resolves that.</p>
<h3>What happened to CreateTableIfNotExist?</h3>
<p>Again, it's disappeared, but who cares... Oh, you do? Right, well let's fix that up. In Azure Storage Client v1.7 you did this:</p>
<pre class="brush: csharp">var tableStorage = storageAccountInfo.CreateCloudTableClient();
tableStorage.CreateTableIfNotExist(tableName);</pre>
<p>But now in Azure Storage Client Library v2.0 you must do this:</p>
<pre class="brush:csharp">var tableStorage = storageAccountInfo.CreateCloudTableClient();
var table = tableStorage.GetTableReference(tableName);
table.CreateIfNotExists();</pre>
<h3>Attributes seem to have disappeared and LastModifiedUtc has gone</h3>
<p>Another random change that possibly doesn't achieve anything other than making you refactor your code. This was my old code from Storage Library Client v1.7:</p>
<pre class="brush:csharp">var blob = BlobService.FetchAttributes(FileName);
if (blob == null || blob.Attributes.Properties.LastModifiedUtc < DateTime.UtcNow.AddHours(-1))
{
...
}</pre>
<p>But now it should read like this, because apparently it looks prettier (which it does, in fairness):</p>
<pre class="brush:csharp">var blob = BlobService.FetchAttributes(FileName);
if (blob == null || blob.Properties.LastModified < DateTimeOffset.UtcNow.AddHours(-1))
{
...
}</pre>
<h3>Change your development storage connection string</h3>
<p>This is just a straight bug so that's excellent. I was getting a useless exception stating "The given key was not present in the dictionary" when trying to create a CloudStorageAccount reference. To resolve this change your development environment connection string from <code>UseDevelopmentStorage=true</code> to <code>UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1</code> then it will magically work.</p>
<h3>Bitch and moan</h3>
<p>Apologies for the whingy nature of this post, I'm quite a fan of Azure but I have wasted about 3-4 hours with this "upgrade" from Azure Storage Client Library 1.7 to 2.0. It's been incredibly frustrating particularly since there seems to be no obvious reason why these changes were made. I just can't believe the amount of breaking changes when I haven't really written that much Azure storage code.</p>
<p>Randomly taking out nice methods like DownloadByteArray and DownloadText is surely a step backwards no? Or randomly renaming <code>CreateIfNotExist()</code> to <code>CreateIfNotExist<strong>s</strong>()</code>... what is the point in that?!</p>
<p>I remember when upgrading to ASP.NET 4 from 3.5, I spent very little time working through breaking changes, and I have 100 times more .NET code than I do Azure Storage code. As well as that, I was well aware of the many improvements in that .NET version update; with this Azure Storage update I have no idea what I'm getting. No matter the improvements, it is <em>just</em> an Azure storage API, and this number of breaking changes, often for the benefit of syntax niceties, is just unacceptable.</p>
<p>Oh, and if you are still in pain doing this, I have found a complete list of <a href="http://blogs.msdn.com/b/windowsazurestorage/archive/2012/10/29/windows-azure-storage-client-library-2-0-breaking-changes-amp-migration-guide.aspx">breaking changes in this update</a>, along with minimal explanations.</p>
<h3>Unexpected "string" keyword after "@" character. Once inside code, you do not need to prefix constructs like "string" with "@" (2012-10-27)</h3>
<p><strong>After upgrading an ASP.NET MVC 3 project to MVC 4 I noticed a change in the Razor parser that threw a Parser Error saying: 'Unexpected "string" keyword after "@" character. Once inside code, you do not need to prefix constructs like "string" with "@"'.</strong></p>
<p>Firstly, this had always worked when it was MVC 3 and Razor v1. I may have been getting the syntax wrong all along, but if the parser allowed it, was it really wrong?</p>
<p>What I was doing was trying to put some server code in a Razor helper with no surrounding HTML tags, like this example:</p>
<pre class="brush:csharp;highlight:9">@helper Currency1000s(int? value)
{
if(value == null)
{
<text>-</text>
}
else
{
@string.Format("{0:C0}k", value / 1000.0)
}
}</pre>
<p>Interestingly, if I were to replace line 9 with <code>@value</code> all would be fine. Anyway, it is an easy enough fix: you just need to wrap the string.Format in HTML tags or text tags, as I did here:</p>
<pre class="brush:csharp;highlight:9">@helper Currency1000s(int? value)
{
if(value == null)
{
<text>-</text>
}
else
{
<text>@string.Format("{0:C0}k", value / 1000.0)</text>
}
}</pre>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com65London, UK51.5073346 -0.127683151.4974516 -0.1474241 51.5172176 -0.10794209999999999tag:blogger.com,1999:blog-2791921813858302066.post-25720443901997536752012-10-23T17:38:00.000+01:002012-10-23T17:38:19.484+01:00Build Visual Studio solutions without Visual Studio<p><strong>I have a project that has four separate Visual Studio solutions. It is a bit annoying when I do an update from SVN and have to open each solution in Visual Studio just so I can build them. Visual Studio is hardly a lightweight program so surely there's a simpler way?</strong></p>
<p>Most sys admins or developers accustomed to automated builds will probably start to titter at this point, but you can do it easily using Notepad (and the various stuff already installed on your dev machine). I don't want a mega complex system deployed in a special way to a different server using expensive software; I just want to build everything without having to load multiple Visual Studio instances. So here is how.</p>
<h3>Simple way to build all solutions with a batch file</h3>
<p>First, open Notepad and write the following code:</p>
<pre class="brush: csharp">@echo off
CALL "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
MSBUILD /v:q C:\Projects\Forums\Forums.sln
MSBUILD /v:q C:\Projects\MainSite.sln
MSBUILD /v:q C:\Projects\Users\UserMgmt.sln
MSBUILD /v:q C:\Projects\Core\Global.sln
PAUSE</pre>
<p>Save this as something like BuildAll.bat then whenever you want to build everything just double click this file.</p>
<h3>What was that?! Explain (a bit)</h3>
<p>To use MSBUILD you must be running the Visual Studio command prompt, but by default batch files run in the normal command prompt, so line 2 enables all the Visual Studio-ness. Also, I added the <code>/v:q</code> parameter so that MSBUILD won't output every little detail about the build, just the important bits (PASS/FAIL).</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com55tag:blogger.com,1999:blog-2791921813858302066.post-32704459098865750992012-10-18T13:32:00.000+01:002012-10-18T13:32:02.749+01:00To encodeURI or to encodeURIComponent?<p><strong>Solved a bug today whilst encoding in JavaScript a URL that contained a certain special character: the hash character #.</strong></p>
<p>I found a generated querystring was not working correctly because many of the params were not being passed to the server somehow. I was generating a link like so: <code>var link = 'http://www.britishdeveloper.co.uk?link=' + encodeURI(url) + '&userID=232';</code></p>
<p>I was quite pleased with myself remembering to URL Encode params that are going into a querystring to get around special characters but I had not thought of something... the hash character <strong>#</strong>!</p>
<p>Say the url being placed in the querystring is <code>'/my page.aspx#comment3'</code> my link was coming out as <code>http://www.britishdeveloper.co.uk?link=/my%20page.aspx#comment3&userID=232</code>.</p>
<h3>What's wrong with that?</h3>
<p>Well, it's not what I was expecting really, but it was working up until I got urls with hashes in. As you may or may not know, everything following a # in a url is for the browser, so your server will ignore everything after it. In this case the userID param was not seen by my server.</p>
<h3>encodeURIComponent not encodeURI</h3>
<p>For this particular scenario encodeURIComponent was what I needed since:</p>
<pre class="brush: csharp">encodeURI('/my page.aspx#comment3')
//output: /my%20page.aspx#comment3
encodeURIComponent('/my page.aspx#comment3')
//output: %2Fmy%20page.aspx%23comment3</pre>
<p><strong>encodeURI</strong> is for encoding non-URI special characters from a string so the above example of encodeURI would have been great for appending it like so <code>'http://www.britishdeveloper.co.uk' + encodeURI(url)</code></p>
<p><strong>encodeURIComponent</strong> is for encoding a string to be fit for a querystring param, where characters such as / and # shouldn't really be included, as in my above example.</p>
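<p>Putting that together, the original link-building line can be fixed by swapping encodeURI for encodeURIComponent (the url and userID values here are just the example ones from above):</p>

```javascript
// The bug: encodeURI leaves '#' intact, so everything after it (including
// &userID=232) is treated as a fragment by the browser and never reaches the server.
var url = '/my page.aspx#comment3';

// The fix: encodeURIComponent escapes '/', ' ' and '#' so the whole url
// is safe as a single querystring value.
var link = 'http://www.britishdeveloper.co.uk?link='
    + encodeURIComponent(url) + '&userID=232';

console.log(link);
// http://www.britishdeveloper.co.uk?link=%2Fmy%20page.aspx%23comment3&userID=232
```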
<p>So know your JavaScript encoding!</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com40London, UK51.5073346 -0.127683151.3492066 -0.4435401 51.6654626 0.1881739tag:blogger.com,1999:blog-2791921813858302066.post-45248405611145543972012-05-28T15:21:00.000+01:002012-05-28T15:21:13.565+01:00Could not load file or assembly 'msshrtmi' or one of its dependencies<p><strong>Sometimes after publishing my Azure solution I get a yellow screen of death giving a "System.BadImageFormatException" exception with a description of "Could not load file or assembly 'msshrtmi' or one of its dependencies. An attempt was made to load a program with an incorrect format."</strong></p>
<p>I tried everything to get rid of this msshrtmi complaint: building it again, rebuilding it, cleaning the solution, even <strong>restarting my computer</strong>, and that fixes everything in Windows ever!</p>
<p>Very strange, as my newly published Azure solution is fine but my development site is now at the mercy of this mysterious msshrtmi assembly... whatever the hell that is.</p>
<h3>Kill msshrtmi! Kill it with fire!</h3>
<p>So it's an assembly, right? Assemblies go in the bin folder... let's look there... There it is! msshrtmi.dll. DELETE IT!</p>
<p>I don't know what it is and I didn't put it there so I deleted it and all is back to normal. Excellent.</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com46tag:blogger.com,1999:blog-2791921813858302066.post-42049959740451571602012-05-28T09:38:00.001+01:002012-05-28T09:38:29.228+01:00How to turn off logging on Combres<p><strong>I have been using Combres for my .NET app to minify, combine and compress my JavaScript and CSS. So far I have found it awesome.</strong></p>
<p>It really does do everything it claims: minifying your scripts and CSS, then combining them all so that the code is sent via one HTTP request (well, one for CSS and one for JavaScript). It also handles GZip or deflate compression should the request accept it. It is easy to set up since it is now available through NuGet, and using it will make YSlow smile when it scans your site.</p>
<h3>Logging</h3>
<p>One thing that is annoying though is that Combres seems to log everything it does. "<code>Use content from content's cache for Resource x</code>...<code>Use content from content's cache for Resource y</code>" This is fine in development but unnecessary in production so I wanted to turn it off but couldn't find how to do this in any documentation.</p>
<p>The way I managed to turn off logging was to find the line in the web.config that combres put in that looks like this:</p>
<pre class="brush: csharp"><combres definitionUrl="~/App_Data/combres.xml" logProvider="Combres.Loggers.Log4NetLogger" /></pre>
<p>You simply need to remove the logProvider so it looks like this:</p>
<pre class="brush: csharp"><combres definitionUrl="~/App_Data/combres.xml" /></pre>
<p>If you still want logging on your development environment, you can simply remove the logProvider attribute in a <a href="http://msdn.microsoft.com/en-us/library/dd465326.aspx">web.config transform</a>.</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com27tag:blogger.com,1999:blog-2791921813858302066.post-8349886037986708272012-05-22T10:51:00.000+01:002012-05-22T15:09:23.220+01:00Export and back up your SQL Azure databases nightly into blob storage<p><strong>With Azure I have always believed, if you can do it with the Azure Management Portal then you can do it with a REST API. So I thought it would be a breeze to make an automated job to run every night to export and back up my SQL Azure database into a BACPAC file in blob storage. I was surprised to find scheduling bacpac exports of your SQL Azure databases is not documented in the Azure Service Management API. Maybe it is because the bacpac exporting and importing is in beta? Never mind. I successfully have a worker role backing up my databases and here's how:</strong></p>
<p>It is a REST API, so you can't have WCF handle all your POST data for you, but there is still a trick to avoid writing out all your XML parameters by hand: strongly typing a few classes instead.</p>
<p>Go to your worker role or console application and add a service reference to your particular DACWebService (it varies by region):</p>
<ul>
<li>North Central US: <code>https://ch1prod-dacsvc.azure.com/DACWebService.svc</code></li>
<li>South Central US: <code>https://sn1prod-dacsvc.azure.com/DACWebService.svc</code></li>
<li>North Europe: <code>https://db3prod-dacsvc.azure.com/DACWebService.svc</code></li>
<li>West Europe: <code>https://am1prod-dacsvc.azure.com/DACWebService.svc</code></li>
<li>East Asia: <code>https://hkgprod-dacsvc.azure.com/DACWebService.svc</code></li>
<li>Southeast Asia: <code>https://sg1prod-dacsvc.azure.com/DACWebService.svc</code></li>
</ul>
<p>Once you import this Service Reference you will have some new classes that will come in handy in the following code:</p>
<pre class="brush: csharp">
//these details are passed into my method but here is an example of what is needed
var dbServerName = "qwerty123.database.windows.net";
var dbName = "mydb";
var dbUserName = "myuser";
var dbPassword = "Password!";
//storage connection is in my ServiceConfig
//I know the CloudStorageAccount can be obtained in one line of code
//but this way is necessary to be able to get the StorageAccessKey later
var storageConn = RoleEnvironment.GetConfigurationSettingValue("Storage.ConnectionString");
var storageAccount = CloudStorageAccount.Parse(storageConn);
//1. Get your blob storage credentials
var credentials = new BlobStorageAccessKeyCredentials();
//e.g. https://myStore.blob.core.windows.net/backups/mydb/2012-05-22.bacpac
credentials.Uri = string.Format("{0}backups/{1}/{2}.bacpac",
storageAccount.BlobEndpoint,
dbName,
DateTime.UtcNow.ToString("yyyy-MM-dd"));
credentials.StorageAccessKey = ((StorageCredentialsAccountAndKey)storageAccount.Credentials)
.Credentials.ExportBase64EncodedKey();
//2. Get the DB you want to back up
var connectionInfo = new ConnectionInfo();
connectionInfo.ServerName = dbServerName;
connectionInfo.DatabaseName = dbName;
connectionInfo.UserName = dbUserName;
connectionInfo.Password = dbPassword;
//3. Fill the object required for a successful POST
var export = new ExportInput();
export.BlobCredentials = credentials;
export.ConnectionInfo = connectionInfo;
//4. Create your request
var request = WebRequest.Create("https://am1prod-dacsvc.azure.com/DACWebService.svc/Export");
request.Method = "POST";
request.ContentType = "application/xml";
using (var stream = request.GetRequestStream())
{
var dcs = new DataContractSerializer(typeof(ExportInput));
dcs.WriteObject(stream, export);
}
//5. make the POST!
using (var response = (HttpWebResponse)request.GetResponse())
{
if (response.StatusCode != HttpStatusCode.OK)
{
throw new HttpException((int)response.StatusCode, response.StatusDescription);
}
}</pre>
<p>This code would run in a scheduled task or worker role, set to run at 2am each night, for example. It is important you have appropriate logging and notifications in the event of failure.</p>
<h3>Conclusion</h3>
<p>This sends off the request to <strong>start</strong> the back up / export of the database into a bacpac file. Its success is no indication that the back up was successful, only that the request was submitted. If your credentials are wrong you will get a 200 OK response but the back up will fail silently later.</p>
<p>To see if it has been successful you can check on the status of your exports via the Azure Management Portal, or by waiting a short while and having a look in your blob storage.</p>
<p>I have not covered Importing because, really, exporting is the boring yet important activity that must happen regularly (such as nightly). Importing is the one you do on the odd occasion when there has been a disaster, and the Azure Management Portal is well suited to such an occasion.</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com42London, UK51.5081289 -0.12800551.350006900000004 -0.443862 51.6662509 0.187852tag:blogger.com,1999:blog-2791921813858302066.post-70670492008963940622012-05-18T11:36:00.001+01:002012-05-18T11:36:48.225+01:00How to get Azure storage Account Key from a CloudStorageAccount object<p><strong>I have a CloudStorageAccount object and I want to get the AccountKey out of it. Seems like you should be able to get the Cloud Storage Key out of a CloudStorageAccount but I did struggle a bit at first.</strong></p>
<p>I first used the <code>CloudStorageAccount.FromConfigurationSetting(string)</code> method and played about in debug mode to see if I could find it.</p>
<p>I found that that method doesn't return the correct type of object, which left me unable to find my Azure storage access key. I then tried the same thing using <code>CloudStorageAccount.Parse(string)</code> instead. This did give access to the Azure storage access key.</p>
<pre class="brush: csharp">//this method of getting your CloudStorageAccount is no good here
//var account = CloudStorageAccount.FromConfigurationSetting("StorageConnectionStr");
//these two lines do...
var accountVal = RoleEnvironment.GetConfigurationSettingValue("StorageConnectionStr");
var account = CloudStorageAccount.Parse(accountVal);
//and then you can retrieve the key like this:
var key = ((StorageCredentialsAccountAndKey)account.Credentials)
.Credentials.ExportBase64EncodedKey();</pre>
<p>It is strange that <code>CloudStorageAccount.FromConfigurationSetting</code> doesn't give you access to the account key in your credentials but <code>CloudStorageAccount.Parse</code> does. Oh well, hope that helps.</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com30London, UK51.5081289 -0.12800551.350006900000004 -0.443862 51.6662509 0.187852tag:blogger.com,1999:blog-2791921813858302066.post-17167971770230671262012-05-10T13:02:00.000+01:002012-05-10T13:02:02.586+01:00Set maxlength on a textarea<p><strong>It's annoyed me for quite a while that you can set a maximum length on an <input type="text" /> but not on a textarea. Why? Are they so different?</strong></p>
<p>I immediately thought I was going to have to write some messy JavaScript but then I learned that HTML5 implements maxlength on text areas now and I'm only considering modern browsers! Wahoo!</p>
<p>Then I learnt that IE9 doesn't support it so JavaScript it is...</p>
<h3>JavaScript textarea maxlength limiter</h3>
<p>What I have done that works well is bind events keyup and blur on my text area to a function that removes characters over the maxlength provided. The code looks like this:</p>
<pre class="brush: csharp">$('#txtMessage').bind('keyup blur', function () {
var $this = $(this);
var len = $this.val().length;
var maxlength = $this.attr('maxlength');
if (maxlength && len > maxlength) {
$this.val($this.val().slice(0, maxlength));
}
});</pre>
<h3>Conclusion</h3>
<p>It works quite well because if the browser already supports maxlength on a textarea there will be no interruption because the value of the textarea will not go over that maxlength.</p>
<p>The keyup event doesn't fire when the user pastes text in using the mouse, but that is where the blur event comes in. Also, pressing Enter makes a new line in a textarea, so the user has to click the submit button (blurring the textarea) to submit.</p>
<p>Beware though that this is for usability only; a nasty user could easily bypass this so ensure you are checking the length server side if it is important to you.</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com31tag:blogger.com,1999:blog-2791921813858302066.post-11876916631467094682012-04-18T12:11:00.001+01:002012-04-18T16:42:21.313+01:00jQuery - Don't use bind and definitely don't use live!<p><strong>If you use jQuery you have almost certainly used the .click(fn) event. You may know that just calls .bind('click', fn). You may have used live(). You may know that live() is weird and buggy and inefficient. You may have heard you should not use it in favour of .delegate(). You may have heard of the new .on(). But which is better? bind vs live vs delegate vs on?</strong></p>
<p>Take the following html:</p>
<pre class="brush: csharp"><div id="myContainer">
<input type="button" id="myBtn" value="Click me!" />
</div>
<script type="text/javascript">
$('#myBtn').click(doSomething);
</script></pre>
<p>I have a button that has a javascript function called doSomething attached to it. This works fine until I dynamically replace the inner HTML of the div myContainer with a new button (with same ID etc). doSomething will no longer be attached to the new myBtn button so it will not be called when the button is clicked.</p>
<p>FYI, $('#myBtn').click(doSomething); is just shorthand for writing $('#myBtn').bind('click', doSomething);</p>
<p>So you can change the JavaScript code above to $('#myBtn').live('click', doSomething); and it will still work even when the button is brought in or generated dynamically. Up until now I have found that the live function uses magic to just make it all work even after ajax <small><em>ab</em></small>use and therefore it is better.</p>
<p>However, recently (in a more complex implementation) I found it was causing some undesired behaviour so I had a dig into it. Turns out live() is buggy, performs badly, and has actually been deprecated as of jQuery v1.7, so <strong>do not use live!</strong> What should you be using for this type of functionality then?</p>
<p>Well, delegate has been the popular replacement since it attaches the method once to a selected container but still targets the inner element as bind or click would. The importance is twofold:</p><ol><li>The functionality is not wastefully attached to each element matched and instead just once to the containing element</li>
<li>Dynamically added items that match the selector will still fire the event since the event is not attached to this element, it is attached to the container that has remained static</li></ol>
<p>In fact, live does the same as delegate but its containing element cannot be defined; it is always the whole document, so if you have a deep DOM, finding the originating element using the selector can mean searching a long way. Lame... In fact, I found bugs with it and there are other details which mean you <em>should</em> use delegate instead. I say <em>should</em> because there is a new cool kid on the block.</p>
<h3>So what is better than bind, live and delegate?</h3><p>You should use .on(). All the cool kids are doing it. Want me to name drop users of on? Well, <strong>ME</strong>! Perhaps more famous in the JavaScript world is jQuery, and they use it! If you look at the jQuery library, bind, live and delegate all just call <code>on</code> as of jQuery v1.7+.</p>
<p>So my JavaScript code above should change to:</p><pre class="brush: csharp"><script type="text/javascript">
$('#myContainer').on('click', '#myBtn', doSomething);
</script></pre>
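<p>For the curious, the delegation pattern that on() implements can be sketched in a few lines of plain JavaScript. This is a simplified illustration of the idea only, not jQuery's actual implementation, and the tiny fake container object below just stands in for a DOM node so the sketch is self-contained:</p>

```javascript
// One handler lives on the container; the real target is matched at event
// time, so elements added later still trigger it - the essence of delegation.
function delegate(container, type, matches, handler) {
    container.listeners = container.listeners || {};
    (container.listeners[type] = container.listeners[type] || []).push(function (event) {
        if (matches(event.target)) {
            handler.call(event.target, event);
        }
    });
}

// Minimal stand-in for a DOM container so this sketch runs anywhere.
var myContainer = {
    listeners: {},
    dispatch: function (type, target) {
        (this.listeners[type] || []).forEach(function (fn) { fn({ target: target }); });
    }
};

var clicked = [];
delegate(myContainer, 'click',
    function (el) { return el.id === 'myBtn'; },
    function (event) { clicked.push(event.target.id); });

// A "button" created after the handler was attached still matches;
// a non-matching element does not:
myContainer.dispatch('click', { id: 'myBtn' });
myContainer.dispatch('click', { id: 'otherBtn' });
console.log(clicked); // [ 'myBtn' ]
```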
<p>Bind and click are still fine in my opinion as nice little shortcuts, but only when using them to attach a function to one specific element and only when you are not attaching to dynamic objects. If you are attaching to, say, multiple <code>li</code>s in a <code>ul</code>, you should use on() instead to attach the event to the <code>ul</code> and target the event at the <code>li</code>s. This way there is only one function and multiple references to it, rather than creating the function once for each of the <code>li</code>s.</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com50tag:blogger.com,1999:blog-2791921813858302066.post-89070127688330469172012-04-17T11:26:00.000+01:002012-04-17T11:26:53.352+01:00What to do with LastIndexOf unexpected ArgumentOutOfRangeException behaviour<p><strong>Using <code>string.LastIndexOf(char value, int startIndex, int count);</code> started giving me some behaviour I wasn't expecting: an ArgumentOutOfRangeException with a description of: "Count must be positive and count must refer to a location within the string/array/collection."</strong></p>
<p>Let me give you an example:</p>
<pre class="brush: csharp; highlight: [3]">var message = ".NET is unusually faultless!";
var maxSearch = 16;
var count = message.LastIndexOf('l', 0, maxSearch);
var newMsg = message.Substring(0, count);</pre>
<p>This should work right? What could be wrong with this? Well according to IntelliSense "Reports the index position of the last occurrence of the specified Unicode character in a substring within this instance. The search starts at a specified character position and examines a specified number of character positions." Hmm.. Okay. And the count parameter that I am getting wrong? "The number of character positions to examine." Seems legit.</p>
<p>I decided to disassemble .NET to have a peep at what's going on internally... Have I found a bug in .NET?! Can't tell because the method is extern, so I cannot look at it. I instead did some experimenting and worked it out.</p>
<h3>How to properly use LastIndexOf</h3>
<p>Turns out I was just using it incorrectly and blaming my tools. Although I am convinced some blame lies with IntelliSense for forgetting to mention that LastIndexOf <em>searches backwards through the string</em>! Should I have known that? I don't know. Anyway this is what I should have written:</p>
<pre class="brush: csharp; highlight: [3]">var message = ".NET is unusually faultless!";
var maxSearch = 16;
var count = message.LastIndexOf('l', maxSearch, maxSearch);
var newMsg = message.Substring(0, count);</pre>
<p>The startIndex parameter should be at the last point you want to search <em>backwards</em> from. The count then specifies the amount of chars for your search to go <em>back</em> through.</p>
<p>Maybe I'm just being stupid or maybe it is just that <code>Console.WriteLine(newMsg);</code> now prints ".NET is unusual".</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com48tag:blogger.com,1999:blog-2791921813858302066.post-49002160045712464082012-03-27T15:10:00.001+01:002013-03-12T09:21:54.639+00:00Reuse Select function in Linq<p><strong>You can reuse Linq Select functions by defining a delegate Func field that can be used in multiple Linq queries.</strong></p>
<p>I had a few Linq to Objects queries that were querying the same collection but had different Where clauses that were nested between various if statements. The similarity between all of these statements was the Select function. This was quite long itself and it was annoying seeing such repetition. DRY!</p>
<p>So I tried to create a Func parameter that I could just pass into my Select statement which previously, for example could look like this:</p>
<pre class="brush: csharp">ReturningPerson person;
if (iFeelLikeIt)
{
person = people.Where(person => person.IsNameFunny)
.Select(person =>
new ReturningPerson
{
DateOfBirth = person.DateBorn,
Name = string.Format("{0} {1}",
person.FirstName, person.LastName),
AwesomenessLevel = person.KnowsLinq ? "High" : "Must try harder",
CanFlyJet = false
})
.FirstOrDefault();
}
else
{
person = people.Where(person => person.Toes == 10)
.Select(person =>
new ReturningPerson
{
DateOfBirth = person.DateBorn,
Name = string.Format("{0} {1}",
person.FirstName, person.LastName),
AwesomenessLevel = person.KnowsLinq ? "High" : "Must try harder",
CanFlyJet = false
})
.FirstOrDefault();
}
</pre>
<p>This could be modified to work like this:</p>
<pre class="brush: csharp">Func<InputPerson, ReturningPerson> selector = input =>
new ReturningPerson
{
DateOfBirth = input.DateBorn,
Name = string.Format("{0} {1}",
input.FirstName, input.LastName),
AwesomenessLevel = input.KnowsLinq ? "High" : "Must try harder",
CanFlyJet = false
};
ReturningPerson person;
if (iFeelLikeIt)
{
person = people.Where(person => person.IsNameFunny)
.Select(selector)
.FirstOrDefault();
}
else
{
person = people.Where(person => person.Toes == 10)
.Select(selector)
.FirstOrDefault();
}</pre>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com161London, UK51.5081289 -0.12800551.350006900000004 -0.443862 51.6662509 0.187852tag:blogger.com,1999:blog-2791921813858302066.post-33362060140834028542012-03-13T10:26:00.001+00:002012-03-13T10:26:53.228+00:00Error during serialization or deserialization using the JSON JavaScriptSerializer. The length of the string exceeds the value set on the maxJsonLength property<p><strong>Whilst creating a JsonResult for my web service in ASP.NET MVC I received a deserialisation error of type "InvalidOperationException" with the description "Error during serialization or deserialization using the JSON JavaScriptSerializer. The length of the string exceeds the value set on the maxJsonLength property."</strong></p>
<p>In fairness I was sending quite a large JSON object at the time, largely due to there being 288 base64 embedded images totalling ~15MB, I'd guess... Whoops! Anyway, it may be a hilariously large amount of data but it <em>is</em> what I want, so how do I work around this?...</p>
<p>There is a web config setting that can resolve this which was my first discovery in my path to success:</p>
<pre class="brush: xml"><system.web.extensions>
<scripting>
<webServices>
<jsonSerialization maxJsonLength="1000000000" />
</webServices>
</scripting>
</system.web.extensions></pre>
<p>This is the official word from Microsoft about this, but unfortunately it only works when you are specifically serialising (or deserialising) things yourself. It has no bearing on the inner workings of the framework, such as my bit of MVC code, which is currently as follows:</p>
<pre class="brush: csharp">public JsonResult GetData()
{
return Json(GetCrazyAmountOfJson(), JsonRequestBehavior.AllowGet);
}</pre>
<p>So for those of you using the Json() method in ASP.NET MVC or some other similar .NET framework voodoo like me, the workaround is to write your own code and bypass the framework, such as:</p>
<pre class="brush: csharp">public ContentResult GetData()
{
var data = GetCrazyAmountOfJson();
var serializer = new JavaScriptSerializer();
serializer.MaxJsonLength = int.MaxValue;
var result = new ContentResult();
result.Content = serializer.Serialize(data);
result.ContentType = "application/json";
return result;
}</pre>
<p>MaxJsonLength is an Int32, so the maximum length can only be the maximum int value (about 2.1 billion); you cannot make it unlimited. I assume this limit is here to make you think twice before making crazy big JSON serialisations. Did it?</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com39tag:blogger.com,1999:blog-2791921813858302066.post-46506199065899758642012-02-09T20:16:00.001+00:002012-02-09T20:16:38.261+00:00Add Cloud and Local service configurations when there is only a Default<p><strong>I would like Cloud and Local versions of my service configuration so that I can change some of the configuration settings between deployments. Sometimes you have only one service configuration, called "Default", which is not enough. This guide shows how to add more.</strong></p>
<p>So you only have one service configuration file? You would be better off with more than one, since it is likely you would want different configuration settings defined for different deployments, e.g. the storage emulator for local deployments and Azure for cloud deployments.</p><div class="separator" style="clear: both;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjHXoNV37eN0pV1M01ejynug1RKtLpuNbSyQnaa90QoLei5kSthxLE0EMUhxf5Hsmov41sZWlqNRGd6MVuYKHZxGJoVFxudpdyxpcyCWKoKJcd2hs1hYMPa_ZrEXtLfAOgXbAtAvYUJlE/s1600/0.png" imageanchor="1" style=""><img border="0" height="116" width="252" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjHXoNV37eN0pV1M01ejynug1RKtLpuNbSyQnaa90QoLei5kSthxLE0EMUhxf5Hsmov41sZWlqNRGd6MVuYKHZxGJoVFxudpdyxpcyCWKoKJcd2hs1hYMPa_ZrEXtLfAOgXbAtAvYUJlE/s400/0.png" /></a></div>
<p>Right click on one of your roles and go to properties. On the Service Configuration drop down select <Manage...></p><div class="separator" style="clear: both;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzlZE-ovc-gn66zUFHdQ62IfF16Y8jKYDJMUM7455nr5q4C-9MgAZR7zp6I1swOUV4juEdbEsGNBIegHcehQ6vl0XUbDO2-CfrEc2wIUPSP6jRZqB0wE0CmibVc_47dBlXl0B6Lts_fVg/s1600/1.png" imageanchor="1"><img border="0" height="111" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzlZE-ovc-gn66zUFHdQ62IfF16Y8jKYDJMUM7455nr5q4C-9MgAZR7zp6I1swOUV4juEdbEsGNBIegHcehQ6vl0XUbDO2-CfrEc2wIUPSP6jRZqB0wE0CmibVc_47dBlXl0B6Lts_fVg/s400/1.png" /></a></div>
<p>On the Manage Service Configurations dialogue box you want to click the Default configuration (for example) and click 'Create copy' and rename it to 'Local'. Do the same again for 'Cloud' and then remove 'Default'.</p><div class="separator" style="clear: both;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfWCpKfGnEzXRSboDM4vLPKQhh48Hu7s71Dtuj_ypq_YrVPuBDPZIJnQY1_S3CVgyUbEWQ3se-PjSPgL7sB7yaRSwHvx8iTn2RVczYa6hRNuvFtevz-j4UF1VUNUMw1hHQRXNuTCK-nIo/s1600/2.png" imageanchor="1" style=""><img border="0" height="319" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfWCpKfGnEzXRSboDM4vLPKQhh48Hu7s71Dtuj_ypq_YrVPuBDPZIJnQY1_S3CVgyUbEWQ3se-PjSPgL7sB7yaRSwHvx8iTn2RVczYa6hRNuvFtevz-j4UF1VUNUMw1hHQRXNuTCK-nIo/s400/2.png" /></a></div>
<p>You should now have two versions of the service configuration, Local and Cloud.</p><div class="separator" style="clear: both;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfyTQSYKEuzSvKuDuivLUauW0zNwDDaaaLPBobaDrB49PNokI0KuF1QT35_arj43RNTOdmsz-aIVuZMfcesyhQcYdczivLfNmOF-zynWSD5KY7L3e5Zk5Fl-H0XfsH0WolpYIwoQMMRPY/s1600/3.png" imageanchor="1"><img border="0" height="114" width="257" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfyTQSYKEuzSvKuDuivLUauW0zNwDDaaaLPBobaDrB49PNokI0KuF1QT35_arj43RNTOdmsz-aIVuZMfcesyhQcYdczivLfNmOF-zynWSD5KY7L3e5Zk5Fl-H0XfsH0WolpYIwoQMMRPY/s400/3.png" /></a></div>
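<p>Once the copies exist, the two files can diverge. As a sketch (the setting name and values here are examples, not something the wizard generates for you), the Local file might point at the storage emulator while the Cloud file points at a real account:</p>

```xml
<!-- ServiceConfiguration.Local.cscfg -->
<Setting name="Storage.ConnectionString" value="UseDevelopmentStorage=true" />

<!-- ServiceConfiguration.Cloud.cscfg (account name and key are placeholders) -->
<Setting name="Storage.ConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=YOUR-KEY" />
```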
<p>These two versions of ServiceConfiguration, Local and Cloud, are now in existence, which matches the standard set-up when you create a new Azure service. Of course, these service configurations will be duplicates of each other, so you will need to make the appropriate changes to differentiate them, e.g. point the storage account at Azure in the Cloud configuration and at the storage emulator in the Local one.</p>Colin Farrhttp://www.blogger.com/profile/10354777018984660024noreply@blogger.com32