XML docs in service fabric web api

I usually use the Swashbuckle swagger library to auto-document my web APIs, but ran into issues when trying to deploy said APIs to a service fabric cluster – the Swashbuckle/swagger initialization code threw a “file not found” exception when trying to load the XML file generated by the build.
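For reference, the initialization in question is the standard Swashbuckle registration that points swagger at the XML comments file – something along these lines (the file name and SwaggerConfig wiring are illustrative assumptions, not the actual project code):

```csharp
using System;
using System.IO;
using System.Web.Http;
using Swashbuckle.Application;

// Typical Swashbuckle (ASP.NET Web API) registration. The IncludeXmlComments
// call is what throws "file not found" when the build has not produced the
// XML file. "MyWebApi.XML" is an illustrative name, not from the actual project.
public static class SwaggerConfig
{
    public static void Register()
    {
        GlobalConfiguration.Configuration
            .EnableSwagger(c =>
            {
                c.SingleApiVersion("v1", "My Service Fabric API");
                c.IncludeXmlComments(
                    Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "MyWebApi.XML"));
            })
            .EnableSwaggerUi();
    }
}
```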

In order to get it to work, I edited the csproj file to generate the XML files for the x64 configurations as well – from

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
  <DocumentationFile>bin\Debug\MyWebApi.XML</DocumentationFile>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
  <DocumentationFile>bin\Release\MyWebApi.XML</DocumentationFile>
</PropertyGroup>

to

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
  <DocumentationFile>bin\Debug\MyWebApi.XML</DocumentationFile>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
  <DocumentationFile>bin\Release\MyWebApi.XML</DocumentationFile>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
  <DocumentationFile>bin\x64\Debug\MyWebApi.XML</DocumentationFile>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
  <DocumentationFile>bin\x64\Release\MyWebApi.XML</DocumentationFile>
</PropertyGroup>

(The DocumentationFile paths above are illustrative – keep the ones already in your csproj and mirror them for the x64 configurations.)

That’s twice this has had me stumped for a few minutes – hopefully this short post can help somebody else as well.

Green/blue deployments on Azure with Octopus Deploy

Pushing the “promote to production” button should be fun, not scary. Implementing a blue/green deployment strategy is one way to alleviate the stress around pushing to prod.

Image borrowed from https://ianpaullin.com/2014/06/27/octopus-deploy-series-conclusion/

Anyone who has ever deployed a piece of code to production knows how nerve-wracking the process can be. Sure, devops practices such as continuous integration and deployment have reduced the need for manual testing and 20-page installation manuals, but nevertheless: clicking that “Promote to production” button still has a tendency to prove Murphy’s law, and we are always looking for ways to reduce the risk even further.

Enter blue/green deployment. The principle is better explained elsewhere, but in short it is a way to promote a build to production and ensure it is working as expected in a production environment before exposing it to actual end users. I’ll stick to Cloud Foundry’s definition of blue/green in this article, meaning “blue” is the live production version and “green” is the new one.

More and more of my work is hosted on Azure these days, so setting up blue/green deployments for azure web sites has been a priority. Microsoft have already got the concept covered, with deployment slots, so it was just a matter of getting that to play nicely with Octopus Deploy, our go-to deployment server.

Since we are running quite a few web sites (and other sorts of resources), keeping the costs down was also a priority. That meant that I did not want all web sites to have two deployment slots at all times (doubling the cost of each web site). Instead, I wanted the process to create the green (new) slot at the start of the process and remove it once the deployment was completed successfully. Octopus themselves have a guide for this, but I found it a bit lacking in detail, especially around the service principal/resource management parts. Hopefully, this guide will get you up and running without you having to fight through octopus’ less than informative error messages.

The process I had to set up, from source code change to live in production, was this:

  1. Get and build changes from the source code repository
  2. Run unit and integration tests
  3. Create the green slot
  4. Deploy changes to the green slot
  5. Warm up the web site and run smoke tests
  6. Swap the deployment slots (swap blue and green)
  7. Delete the deployment slot created in step 3

Steps 1 and 2 are handled by our CI server, which pushes the build artifacts to Octopus’ nuget feed. Step 4 was already in place, so I needed to configure octopus to create the new slot, swap the new slot with current production, and delete it. In addition, I wanted to have a few rudimentary tests in place to verify the green slot before swapping it into the live production slot.

This is a screenshot of the complete build configuration, covering all the steps above:


A few words on Azure resources and powershell scripting

Octopus has a built-in step template called “Run an Azure Powershell script”, which is well suited for our purpose. However, since we need to create (and delete) resources via powershell, the scripts have to be run in the context of a user with the permissions to do so. In octopus, that means you have to have an azure connection/account configured as a service principal. Creating a service principal is a rather lengthy process, but the script at https://octopus.com/docs/guides/azure-deployments/creating-an-azure-account/creating-an-azure-service-principal-account eases the process. I strongly urge you to change the password before running it.

Note that in order to get the resource manager cmdlets to work, I had to install the AzureRM module on the VM hosting octopus. That is done by starting powershell with administrator privileges and doing

Install-Module AzureRm

Creating the deployment slot

The first step of the deployment process is to create the green deployment slot. In the octopus process screen, click “Add step” and then “Run an Azure Powershell script”. Give the step a descriptive name (I chose “Create deployment slot”).

Under “Account”, you will have to select the service principal you created. If you have not added it to octopus already, now is the time: Select “Add new…” next to the Account drop down. The screenshot below indicates which fields you need to fill out (make sure you check the “service principal” radio button first). The IDs (GUIDs) you need are all part of the output of the powershell script mentioned above.

Once the service principal is tested OK and saved, select it in the powershell script step configuration. Make sure the script source is set to “Source code” and paste the line below into the script text area (substituting the resource group name, web app name and slot name with the values applicable to your deployment):

New-AzureRmWebAppSlot -ResourceGroupName your-resource-group -Name web-app-name -Slot green

Leave the environment radio button on “run for all applicable environments”, and click “Save”.

Deploy the website to the green deployment slot

I will assume you already know how to set up the deploy to azure step. Make sure the step deploys to the green slot – in my case, the step config looks like this:

Deploy to azure octopus configuration. Note the “(cd)” part of the web app name. In this case, my green deployment slot in the web app stored in the “Azure.WebAppName” variable is called “cd”.

Test the new deployment

When the web app has been deployed (and assuming all went well), your app will now be available for testing on the deployment slot’s URL – e.g. http://mywebapp-cd.azurewebsites.net for a web app called “mywebapp” with a slot called “cd”.
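If you script your smoke tests, the slot URL convention is easy to capture in a small helper (SlotUrl is a hypothetical function for illustration, not part of the octopus setup):

```csharp
using System;

// Hypothetical helper capturing Azure's deployment slot URL convention:
// a slot <slot> on web app <app> is reachable at <app>-<slot>.azurewebsites.net.
string SlotUrl(string webAppName, string slotName) =>
    $"http://{webAppName}-{slotName}.azurewebsites.net";

Console.WriteLine(SlotUrl("mywebapp", "cd")); // http://mywebapp-cd.azurewebsites.net
```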

I am planning on running a selenium test suite against that URL, but for now I am merely verifying that the front page returns 200 OK. For that, I have set up a powershell step in octopus which runs the following script (with thanks to Steve Fenton). The web site in question is based on a CMS which in no way prides itself on a quick startup, hence the long waits and retries:

Write-Output "Starting"

$MaxAttempts = 10

If (![string]::IsNullOrWhiteSpace($TestUrl)) {
    Write-Output "Making request to $TestUrl"
    Try {
        $stopwatch = [Diagnostics.Stopwatch]::StartNew()
        # Allow redirections on the warm up
        $response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 10
        $statusCode = [int]$response.StatusCode
        Write-Output "$statusCode Warmed Up Site $TestUrl in $($stopwatch.ElapsedMilliseconds) ms"
    } catch {
        $_.Exception | Format-List -Force
        For ($i = 0; $i -lt $MaxAttempts; $i++) {
            try {
                Write-Output "Checking Site"
                $stopwatch = [Diagnostics.Stopwatch]::StartNew()
                # Don't allow redirections on the check
                $response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 0
                $statusCode = [int]$response.StatusCode
                Write-Output "$statusCode Second request took $($stopwatch.ElapsedMilliseconds) ms"
                If ($statusCode -ge 200 -And $statusCode -lt 400) {
                    break
                }
                Start-Sleep -s 5
            } catch {
                $_.Exception | Format-List -Force
            }
        }
    }
    If ($statusCode -ge 200 -And $statusCode -lt 400) {
        # Hooray, it worked
    } Else {
        throw "Warm up failed for $TestUrl"
    }
} Else {
    Write-Output "No TestUrl configured for this machine."
}

Write-Output "Done"

If and when this script returns successfully, it’s time to

Swap the green and blue slots

We have established that the green slot version is good to go. To promote it to the production slot, we need to swap the blue and green slots. This is done by another azure powershell octopus step (which should be run under a resource manager account), with the following script:

Switch-AzureWebsiteSlot -Name #{Azure.WebAppName} -Slot1 "green" -Force

This will cause the (already warmed up and tested) deployment in the green slot (“cd” in my configuration) to be swapped with the current production version.

All that’s left now, is to clean up after ourselves:

Remove the green slot

The “green” slot is now redundant and can be removed. Yet another azure powershell step is needed:

Remove-AzureRmResource -ResourceGroupName your-resource-group -ResourceType Microsoft.Web/sites/slots -Name #{Azure.WebAppName}/green -ApiVersion 2015-07-01 -Force

I have chosen to only run this step if all other steps succeed. YMMV, but I found that troubleshooting is much easier when the environment still exists.


Hopefully, this should have you up and running with green/blue deployments. There are certainly more thorough ways of doing this (ensuring that the front page loads is certainly not an exhaustive test of a new version), but this article will leave you with a process which can be extended to your liking. As briefly mentioned, I am working on a selenium test suite which I plan to plug into this project – I expect it to result in a blog post as well.

Azure functions – storage queue trigger

As I described in the previous blog post (in Norwegian), I’ve spent a few hours over the last couple of days setting up a barcode scanner in our kitchen. The barcode scanner is attached to a raspberry pi, and when a barcode is scanned, it is pushed to an azure storage queue and the product eventually ends up in our kolonial.no shopping cart.

I’m using the project to familiarize myself with Azure Functions before utilizing them in customer projects, and I decided to use a storage queue triggered function to process incoming barcodes. Read more about storage queues at microsoft.com. They’re basically lightweight, high volume and low-maintenance distributed queues.

Creating the function

Before coding the function, it has to be set up as part of an Azure Function app. An azure function app is basically a special version of a web app. The app can be created either via the Azure Functions Portal or through the regular azure portal.

Side note: While working on this, the continuous deployment of a totally unrelated app in the same azure subscription suddenly started failing when a powershell command tried to switch deployment slots, with the error message

Requested value 'Dynamic' was not found.

This had me scratching my head for quite some time, but some googling revealed that adding a function app to an existing subscription will (may?) break powershell functionality. The fix was to completely delete the function app. YMMV.

Once the app is set up, it’s time to create the function. As mentioned, we want an azure storage queue triggered function, and I opted for C#:


Selecting the trigger type will reveal a few options below the “Choose a template” grid:


Here we give the function a descriptive name and enter the name of an existing queue (or a new one) in the “Queue name” field. The storage account connection field is a drop down of the storage accounts available. It obviously needs to be set to the storage account we want the queue to be stored in. Once we click create, the function is added to the function app, and it will be triggered (executed) every time a new message is added to the storage queue “incoming-barcodes”. This configuration (queue name, storage account) can be changed at any time by clicking “Integrate” beneath the function name in the function portal:


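Behind the scenes, the “Integrate” settings end up in the function’s function.json file. For a queue trigger bound to the message parameter, it looks roughly like this (the connection setting name is an assumption – the portal generates one based on the selected storage account):

```json
{
  "bindings": [
    {
      "name": "message",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "incoming-barcodes",
      "connection": "mystorageaccount_STORAGE"
    }
  ],
  "disabled": false
}
```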
The next step is to actually write the function. In this first version, everything is done in one function call, and we’re only covering the happy path: we assume the kolonial account exists, that the password is correct and that the product exists. If not, the message will end up in the poison queue or just log a warning message. A natural next step would be to alert the user that any errors occurred, but that’s for another day.

The default entry point for the function is a static method called “Run” (I realize that RunAsync would be more correct with regards to naming async methods, but I’m sticking as close as I can to the defaults):

public static async Task Run(string message, TraceWriter log)
{
    log.Info($"Processing incoming barcode: {message}");
    var incoming = IncomingBarcode.FromMessage(message);
    var httpClient = await CreateKolonialHttpClientAsync();
    var kolonialProduct = await GetKolonialProductAsync(httpClient, incoming.Barcode);
    if (kolonialProduct == null)
    {
        log.Warning($"Product with barcode {incoming.Barcode} is not available at Kolonial.");
        return;
    }
    await AddProductToCartAsync(httpClient, kolonialProduct, log);
}

First, we extract the raspberry ID and barcode from the incoming message with a small data transport class (IncomingBarcode), since the barcode is passed to the function by the raspberry pi in the format “rpi-id:barcode”.
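The post does not show IncomingBarcode itself, but a minimal sketch could look like this (the property names are assumptions based on how Run uses the class):

```csharp
using System;

// Small data transport class for messages on the format "rpi-id:barcode".
// We split on the first colon only, in case a payload ever contains one.
public class IncomingBarcode
{
    public string RaspberryId { get; private set; }
    public string Barcode { get; private set; }

    public static IncomingBarcode FromMessage(string message)
    {
        var separatorIndex = message.IndexOf(':');
        if (separatorIndex < 0)
            throw new FormatException("Expected a message on the format rpi-id:barcode");

        return new IncomingBarcode
        {
            RaspberryId = message.Substring(0, separatorIndex),
            Barcode = message.Substring(separatorIndex + 1)
        };
    }
}
```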

The kolonial API needs to be set up with a user agent and a token, and in order to access a specific user’s cart we also need to get a user session. That’s all handled by the CreateKolonialHttpClientAsync function:

public static async Task<HttpClient> CreateKolonialHttpClientAsync()
{
    var httpClient = new HttpClient();
    // The kolonial API requires a client token and a user agent (supplied by kolonial) to work
    httpClient.DefaultRequestHeaders.Add("X-Client-Token", ConfigurationManager.AppSettings["KolonialToken"]);
    httpClient.DefaultRequestHeaders.Add("User-Agent", ConfigurationManager.AppSettings["KolonialUserAgent"]);
    // Modifying the cart requires a valid, active session
    string sessionId = await GetSessionIdAsync(httpClient);
    // Ensure the session cookie is sent as part of the calls to the API.
    httpClient.DefaultRequestHeaders.Add("Cookie", $"sessionid={sessionId}");
    return httpClient;
}

public static async Task<string> GetSessionIdAsync(HttpClient httpClient)
{
    // The session cookie ID is retrieved by passing an object like { username: "something", password: "something-secure" }
    // to the user/login endpoint
    var result = await httpClient.PostAsync("https://kolonial.no/api/v1/user/login/",
        new StringContent(
            JsonConvert.SerializeObject(new { username = "an-email-address", password = "a-password" }),
            Encoding.UTF8, "application/json"));
    var json = await result.Content.ReadAsStringAsync();
    var response = JsonConvert.DeserializeObject<LogInResponse>(json);
    return response.sessionid;
}

As can be seen in the gist above, configuration values are handled just as in regular .net code, by utilizing the ConfigurationManager. The settings themselves are set via the function app settings:

Navigating to the app settings: Function app settings -> Configure app settings…
…and then adding the setting as usual (you’ll recognize the settings blade from ordinary azure web apps).

Once the connection/session to kolonial.no is set up, we attempt to get the product by its bar code. I’ve separated the “get product from kolonial and transform it to a model I need” part into a separate http triggered azure function, which I’ll cover later, so there’s not a whole lot of logic needed: if the function returns a non-null json, the barcode is a valid kolonial product, which is returned; if not, we return null.

public static async Task<KolonialProduct> GetKolonialProductAsync(HttpClient client, string barcode)
{
    var httpResult = await client.GetAsync(ConfigurationManager.AppSettings["GetKolonialProductUri"] + "&barcode=" + barcode);
    if (httpResult.IsSuccessStatusCode)
    {
        var json = await httpResult.Content.ReadAsStringAsync();
        if (json != null)
        {
            return JsonConvert.DeserializeObject<KolonialProduct>(json);
        }
    }
    return null;
}

As can be seen in the Run method, all that’s left to do when the product exists, is to add it to the cart. This is done by POSTing to the /cart/items endpoint:

public static async Task AddProductToCartAsync(HttpClient httpClient, KolonialProduct kolonialProduct, TraceWriter log)
{
    var productsJson = JsonConvert.SerializeObject(
        new { items = new[] { new { product_id = kolonialProduct.Id, quantity = 1 } } });
    log.Info($"Updating Kolonial with {productsJson}");
    // POST to the cart/items endpoint on the same API base as the login call
    var response = await httpClient.PostAsync(
        "https://kolonial.no/api/v1/cart/items/",
        new StringContent(productsJson, Encoding.UTF8, "application/json"));
}

That’s all there is to it.

Dev notes

I tried setting the project up in Visual Studio, but the development experience for Azure Functions leaves a lot to be desired (the tools are still in beta), so I ended up coding the function in the function portal.

Testing a storage queue triggered function is actually pretty easy. I used the Azure Storage Explorer to manually add entries to the queue when developing.

When working with REST APIs, I like to have strongly typed models to work with. An easy way to create them is to paste example JSON responses into http://json2csharp.com/, which will create C# classes for you.
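As an example, pasting the login response from earlier in this post (assumed to be shaped like { "sessionid": "…" }) into json2csharp gives you something along the lines of the LogInResponse class used above:

```csharp
// Result of pasting the (assumed) login response {"sessionid": "abc123"}
// into json2csharp - a plain C# model ready for JsonConvert.DeserializeObject.
public class LogInResponse
{
    public string sessionid { get; set; }
}
```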