Deploying a code first entity framework database in Azure DevOps

I spent most of Friday banging my head against Azure’s new devops experience, trying to get a database migration set up as part of a web app deployment. The project was a .net core 2.1 web site with an Entity Framework database, and we hit a surprising number of hurdles along the way. Hopefully, this write-up will help others in the same situation save some time.

Our solution, at least for the purposes of this post, is made up of a web app project containing the business logic and a .net standard class library with the EF code first classes (note that the database lives in a separate project, something most tutorials fail to address).

The first step of setting up the pipeline is creating a build in azure devops:


We set it up against our source code provider and started out with the "ASP.NET Core" template – in fact, we did not have to alter any of the defaults for it to work straight out of the box.

Getting the database up and running was another story, however. Articles, tips and tutorials online are a bit outdated, and provide solutions which no longer work or are no longer necessary (e.g. adding Microsoft.EntityFrameworkCore.Tools.DotNet to the DB project, which is no longer required and generates a build warning).

Generate migration script

The first step is to generate the migration script as part of the build, which the release step(s) will run against the database further down the line.

We gave up getting the built in .net core task to work with entity framework (we could not get past the error message ‘No executable found matching command “dotnet-ef”‘ regardless of what we tried), so we fell back to a good ol’ command line task:


And for your copying needs:

dotnet ef migrations script -i -o %BUILD_ARTIFACTSTAGINGDIRECTORY%\migrate.sql --project EfMigrationApp.Database\EfMigrationApp.Database.csproj --startup-project EfMigrationApp\EfMigrationApp.csproj

You will obviously need to replace the project names with your own.

A quick breakdown of the command:

dotnet ef migrations script: the command to generate a migration script

-i: i is for idempotent, i.e. the generated script can be run multiple times against the same database without conflicts.

-o %BUILD_ARTIFACTSTAGINGDIRECTORY%\migrate.sql: the migration script will be placed in the artifact staging directory, along with the rest of the build output

--project EfMigrationApp.Database\EfMigrationApp.Database.csproj: the project containing the database definition

--startup-project EfMigrationApp\EfMigrationApp.csproj: instructs EF that this is the startup project of the app.
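
Conceptually, the idempotent script wraps every migration in a guard against EF's migrations-history table. The Python sketch below mimics that behaviour (the table and migration names are made up; the real script is plain T-SQL generated by EF):

```python
# Simulates the guard an idempotent EF migration script wraps around each migration:
# a migration's SQL only runs if its ID is missing from the __EFMigrationsHistory table.
def run_idempotent_script(migrations, applied):
    """Apply each pending migration at most once; re-running is a no-op."""
    ran = []
    for migration_id in migrations:
        if migration_id not in applied:  # IF NOT EXISTS (SELECT ... FROM __EFMigrationsHistory)
            ran.append(migration_id)     # run the migration's SQL batch
            applied.add(migration_id)    # INSERT INTO __EFMigrationsHistory (MigrationId, ...)
    return ran

applied = {"20180901_InitialCreate"}  # already recorded in the target database
first = run_idempotent_script(["20180901_InitialCreate", "20181005_AddOrders"], applied)
second = run_idempotent_script(["20180901_InitialCreate", "20181005_AddOrders"], applied)
```

The first run applies only the new migration; the second applies nothing, which is exactly why the same script can safely be run against every environment in the release pipeline.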

Run migrations in the release pipeline

I’m sure there are many ways to run sql scripts in the release step (both command line tasks and powershell tasks could be utilized), but we landed on the predefined “Azure SQL Publish” task, which we added after the web app deploy task:


Fill in the db details according to your project, and the deployment package section with these values:

Action: Publish

Type: SQL script file

Sql script:

$(System.ArtifactsDirectory)/_$(Build.DefinitionName)/drop/migrate.sql (note the underscore before the build.definitionname variable – I suspect there’s a system variable we could use instead)

And that’s basically it – running the build and release pipeline will deploy the web app first, then migrate the database according to your latest EF code goodness. Enjoy!






Time zone and group by day in influxdb

All right, time for a slightly technical one.

At my current job, we do a lot of work on time series values, and we have recently started using InfluxDb, a blazingly fast timeseries database written in go. In order for our customers to get an overview of the devices they are operating, they want to see a certain metric, on a per-day basis. This meant we had to find a way to group the data by day. InfluxDb, being written for tasks like this, has a very nice syntax for time grouping:

select mean(value) from reportedValues where time > 1508283700000000000 group by time(1d), deviceId

The query above returns a nice list of exactly what we asked for – a list of the devices in question and their average value, grouped by day:

name: reportedValues
tags: deviceId='bathroom-window'
time mean
---- ----
2017-10-18T00:00:00Z 1.02

name: reportedValues
tags: deviceId='kitchen-window'
time mean
---- ----
2017-10-17T00:00:00Z 0.4
2017-10-18T00:00:00Z 0.75

We did run into an issue, however, with time zones. Our customers are in different time zones (none of them UTC, which is what all values in influxdb are stored as), so when grouping on days especially, we had to find a way to group by day-in-the-time-zone-in-question.
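
To see why the day boundary matters, here is a small Python sketch (purely illustrative, nothing to do with InfluxDb's internals) that buckets UTC timestamps by calendar day as seen from two different time zones:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def group_by_day(utc_timestamps, tz):
    """Bucket UTC timestamps by calendar day as seen from the given time zone."""
    buckets = defaultdict(list)
    for ts in utc_timestamps:
        buckets[ts.astimezone(tz).date().isoformat()].append(ts)
    return dict(buckets)

# Europe/Oslo is UTC+2 (CEST) on these dates; a fixed offset keeps the sketch
# independent of the host's time zone database.
oslo = timezone(timedelta(hours=2))
samples = [
    datetime(2017, 10, 17, 23, 30, tzinfo=timezone.utc),  # already Oct 18 in Oslo
    datetime(2017, 10, 18, 10, 0, tzinfo=timezone.utc),
]
```

Grouped in UTC the samples land on two different days; grouped in the Oslo offset they both land on October 18 – the same effect the TZ clause has on InfluxDb's `group by time(1d)`.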

Luckily, v1.3 of influxdb introduced the time zone clause. Appending TZ('Europe/Oslo') to the query above should, in theory, give us the same time series grouped slightly differently. We did run into a slight roadblock here, though. The query

select mean(value) from reportedValues where time > 1508283700000000000 group by time(1d), deviceId TZ('Europe/Oslo')

failed with

ERR: error parsing query: unable to find time zone Europe/Oslo

and we got the same error regardless of which time zone we tried (even the one mentioned in the documentation, "America/Los_Angeles", failed).

I then tried the exact same query on a linux VM I had running, and lo and behold:

name: reportedValues
tags: deviceId='bathroom-window'
time mean
---- ----
2017-10-18T00:00:00+02:00 1.02

name: reportedValues
tags: deviceId='kitchen-window'
time mean
---- ----
2017-10-18T00:00:00+02:00 0.6333333333333334

(note that both the averages are different because of the time difference and that the time stamps reflect the time zone of the query and result.)

So obviously, this was something windows specific. I noticed that the github PR which added the TZ clause uses the go standard library package time, calling the LoadLocation function. The docs for that function state that "The time zone database needed by LoadLocation may not be present on all systems, especially non-Unix systems", so I was obviously on to something. A go user had reported something similar in a github issue, and the title of that issue solved this for me: to get this to work on my local windows machine, all I had to do was

install go and restart the influx daemon (influxd.exe)



XML docs in service fabric web api

I usually use the swashbuckle swagger library to auto-document my web apis, but ran into issues when trying to deploy said APIs to a service fabric cluster – the swashbuckle/swagger initialization code threw “file not found” exception when trying to load the XML file generated by the build.

In order to get it to work, I edited the csproj file to generate the XML files for the x64 configurations as well – from

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">



to

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
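
For reference, a complete pair of these PropertyGroups might look like the fragment below. The project name and output paths are examples only; match the DocumentationFile path to your own project and target framework:

```xml
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
  <!-- hypothetical path and file name - adjust to your project -->
  <DocumentationFile>bin\x64\Debug\MyWebApi.xml</DocumentationFile>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
  <DocumentationFile>bin\x64\Release\MyWebApi.xml</DocumentationFile>
</PropertyGroup>
```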

That’s twice this has had me stumped for a few minutes; hopefully this short post can help somebody else as well.

.net Web API and Route Name not found in Route Collection

Another entry in the series “tweets and blog posts I write mostly to have somewhere to look the next time I encounter this error”: this time, I spent too much time figuring out why this piece of code in an MVC view:

@Url.HttpRouteUrl("GetPerformance", new { id = Model.PerformanceId })

combined with this (working, I might add) API endpoint

public class PerformancesController : ApiController
{
    [Route("", Name = "GetPerformance")]
    public Performance GetPerformances(string id = "")
    {
        // Some code...
    }
}

did not resolve to /api/controllers/{id}, but instead presented me with a glorious yellow screen proudly displaying an ArgumentException with the exception message

A route named ‘GetPerformance’ could not be found in the route collection.
Parameter name: name

As is often the case, it turns out I had tried to be a little too clever: this was an episerver project with the legacy global.asax.cs file doing the MVC routing, while the web api was set up in the owin startup.cs class, with a new HttpConfiguration instance created there and attached with app.UseWebApi.

To resolve the error, I had to tie the Web API registration to GlobalConfiguration.Configuration instead of a new instance in startup.cs. With that done, both MVC and API routing were aware of each other, the error went away, and I was able to programmatically create web API route links in MVC views.

Green/blue deployments on Azure with Octopus Deploy

Pushing the “promote to production” button should be fun, not scary. Implementing a blue/green deployment strategy is one way to alleviate the stress around pushing to prod.


Anyone who has ever deployed a piece of code to production knows how nerve-wracking the process can be. Sure, devops practices such as continuous integration and deployment have reduced the need for manual testing and 20 page installation manuals, but nevertheless: clicking that “Promote to production” button still has a tendency to prove Murphy’s law, and we are always looking for ways to reduce the risk even further.

Enter blue/green deployment. The principle is better explained elsewhere, but in short it is a way to promote a build to production and ensure it is working as expected in a production environment before exposing it to actual end users. I’ll stick to cloud foundry’s definition of blue/green in this article, meaning “blue” is the live production version and “green” is the new one.

More and more of my work is hosted on Azure these days, so setting up blue/green deployments for azure web sites has been a priority. Microsoft have already got the concept covered, with deployment slots, so it was just a matter of getting that to play nicely with Octopus Deploy, our go-to deployment server.

Since we are running quite a few web sites (and other sorts of resources), keeping the costs down was also a priority. That meant that I did not want all web sites to have two deployment slots at all times (doubling the cost of each web site). Instead, I wanted the process to create the green (new) slot at the start of the process and remove it once the deployment was completed successfully. Octopus themselves have a guide for this, but I found it a bit lacking in detail, especially around the service principal/resource management parts. Hopefully, this guide will get you up and running without you having to fight through octopus’ less than informative error messages.

The process I had to set up, from source code change to live in production, was this.

  1. Get and build changes from source code repository.
  2. Run unit and integration tests
  3. Create green slot
  4. Deploy changes to green slot
  5. Warm up web site and run smoke tests
  6. Swap deployment slots (swap blue and green)
  7. Delete the deployment slot created in step 3.

Steps 1 and 2 are handled by our CI server, which pushes the build artifacts to Octopus’ nuget feed. Step 4 was already in place, so I needed to configure octopus to create the new slot, swap the new slot with current production, and delete it. In addition, I wanted to have a few rudimentary tests in place to verify the green slot before swapping it into the live production slot.
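
The slot juggling in steps 3–7 can be sketched as a few operations on a name-to-version map. This is purely an illustration of the flow, not the Azure API:

```python
def blue_green_release(slots, new_version, smoke_test):
    """Steps 3-7 from the list above, as pure dictionary operations."""
    slots["green"] = new_version                 # 3 + 4: create green slot and deploy to it
    if not smoke_test(slots["green"]):           # 5: warm up / smoke test the green slot
        del slots["green"]                       # failed: production is left untouched
        raise RuntimeError("smoke test failed; production slot left untouched")
    # 6: swap - the tested version becomes production
    slots["production"], slots["green"] = slots["green"], slots["production"]
    del slots["green"]                           # 7: delete the temporary slot
    return slots

slots = {"production": "v1"}
blue_green_release(slots, "v2", smoke_test=lambda version: True)
```

The key property is that a failed smoke test aborts before the swap, so the live version is never at risk.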

This is a screenshot of the complete build configuration, covering all the steps above:


A few words on Azure resources and powershell scripting

Octopus has a built in step template called “Run an Azure powershell script” which is well suited for our purpose. However, since we need to create (and delete) resources via powershell, the scripts have to be run in the context of a user with the permissions to do so. In octopus, that means you have to have an azure connection/account configured as a service principal. Creating a service principal is a rather lengthy process, but there is a script available that eases it. I strongly urge you to change the password in the script before running it.

Note that in order to get the resource manager cmdlets to work, I had to install the AzureRM module on the VM hosting octopus. That is done by starting powershell with administrator privileges and doing

Install-Module AzureRm

Creating the deployment slot

The first step of the deployment process is to create the green deployment slot. In the octopus process screen, click “Add step” and then “Run an Azure Powershell script”. Give the step a descriptive name (I chose “Create deployment slot”).

Under “Account”, you will have to select the service principal you created. If you have not added it to octopus already, now is the time: Select “Add new…” next to the Account drop down. The screenshot below indicates which fields you need to fill out (make sure you check the “service principal” radio button first). The IDs (GUIDs) you need are all part of the output of the powershell script mentioned above.

Once the service principal is tested OK and saved, select it in the powershell script step configuration. Make sure the script source is set to “Source code” and paste the line below into the script text area (substituting the resource group name and web app name with the values applicable to your deployment):

New-AzureRmWebAppSlot -ResourceGroupName your-resource-group -Name web-app-name -Slot green

Leave the environment radio button on “run for all applicable environments”, and click “Save”.

Deploy the website to the green deployment slot

I will assume you already know how to set up the deploy to azure step. Make sure the step deploys to the green slot – in my case, the step config looks like this:

Deploy to azure octopus configuration. Note the “(cd)” part of the web app name. In this case, my green deployment slot in the web app stored in the “Azure.WebAppName” variable is called “cd”.

Test the new deployment

When the web app has been deployed (and assuming all went well), your app will now be available for testing on the deployment slot’s URL (for a web app called mywebapp and a slot called green, that would be mywebapp-green.azurewebsites.net).

I am planning on running a selenium test suite against that URL, but for now I am merely verifying that the front page returns 200 OK. For that, I have set up a powershell step in octopus which runs the following script (with thanks to Steve Fenton). The web site in question is based on a CMS which in no way prides itself on a quick startup, hence the long waits and retries:

Write-Output "Starting"

$MaxAttempts = 10

If (![string]::IsNullOrWhiteSpace($TestUrl)) {
    Write-Output "Making request to $TestUrl"
    Try {
        $stopwatch = [Diagnostics.Stopwatch]::StartNew()
        # Allow redirections on the warm up
        $response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 10
        $statusCode = [int]$response.StatusCode
        Write-Output "$statusCode Warmed Up Site $TestUrl in $($stopwatch.ElapsedMilliseconds) ms"
    } catch {
        $_.Exception | Format-List -Force
    }
    For ($i = 0; $i -lt $MaxAttempts; $i++) {
        try {
            Write-Output "Checking Site"
            $stopwatch = [Diagnostics.Stopwatch]::StartNew()
            # Don't allow redirections on the check
            $response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 0
            $statusCode = [int]$response.StatusCode
            Write-Output "$statusCode Second request took $($stopwatch.ElapsedMilliseconds) ms"
            If ($statusCode -ge 200 -And $statusCode -lt 400) {
                Break
            }
            Start-Sleep -s 5
        } catch {
            $_.Exception | Format-List -Force
        }
    }
    If ($statusCode -ge 200 -And $statusCode -lt 400) {
        # Hooray, it worked
    } Else {
        throw ("Warm up failed for " + $TestUrl)
    }
} Else {
    Write-Output "No TestUrl configured for this machine."
}

Write-Output "Done"

If and when this script returns successfully, it’s time to

Swap the green and blue slots

We have established that the green slot version is good to go. To promote it to the production slot, we need to swap the blue and green slots. This is done by another azure powershell octopus step (which should be run under a resource manager account), with the following script:

Switch-AzureWebsiteSlot -Name #{Azure.WebAppName} -Slot1 "green" -Force

This will cause the (already warmed up and tested) deployment in slot “cd” to be swapped with the current production version.

All that’s left now, is to clean up after ourselves:

Remove the green slot

The “green” slot is now redundant and can be removed. Yet another azure powershell step is needed:

Remove-AzureRmResource -ResourceGroupName your-resource-group -ResourceType Microsoft.Web/sites/slots -Name #{Azure.WebAppName}/green -ApiVersion 2015-07-01 -Force

I have chosen to only run this step if all other steps succeed. YMMV, but I found that troubleshooting is much easier when the environment still exists.


Hopefully, this should have you up and running with green/blue deployments. There are certainly more thorough ways of doing this (ensuring that the front page loads is certainly not an exhaustive test of a new version), but this article will leave you with a process which can be extended to your liking. As briefly mentioned, I am working on a selenium test suite which I plan to plug into this project – I expect it to result in a blog post as well.

Sendgrid and Azure Functions

Azure Functions recently introduced Sendgrid as an output type. The feature is currently in beta/preview and the documentation is therefore very sparse, so I thought I’d quickly write up how I got it working. Hopefully, it might save you some googling (then again, you probably googled your way here, but I digress).

I currently use sendgrid to mail a shopping list, built up through the use of a barcode scanner in our kitchen, to myself every sunday. However, since the content of the email is somewhat irrelevant in this case, we’ll build a function which sends a mail whenever its HTTP endpoint is triggered/accessed.

As a prerequisite for this short tutorial you will need an active sendgrid account. They are kind enough to offer a free tier, so head on over and register if you haven’t already.

I’ll assume you have already created an azure function app (if not, follow Microsoft’s instructions). In it, create a new function; select the HttpTrigger-Csharp type, give it a name and click Create.

After a short wait, you’ll be presented with your brand new function. The default implementation of an HTTP trigger will return a name sent to it either via the body as a POST request or as a querystring parameter. For this example’s sake, we will email that name to a hard coded email address as well.
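
That lookup order – query string first, then request body – can be mimicked in one line. A Python sketch of what the C# template does (illustrative only):

```python
def resolve_name(query, body):
    """Mimics the default HTTP trigger template: query string wins, then request body."""
    return query.get("name") or (body or {}).get("name")
```

If neither source carries a name, the template responds with 400 Bad Request; otherwise it greets the caller.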

First, add a new out parameter to the function. To do so, select “Integrate” beneath the function name in the panel on the left hand side, and then click on “+New output”:

Creating a new output parameter in azure functions.

Then, select “SendGrid (Preview)” (you might have to scroll down a little in the output type list) and hit “Select”. The following screen is not as self-explanatory as it should be, but here’s the deal: The message parameter name is what you think it is – it’s the name of the parameter in your function which will contain the email message. The SendGrid API Key value, however, shouldn’t be populated with your actual SendGrid API key, but with the name of the app setting containing it. Leave the value at its default (SendGridApiKey). You can leave the other settings (from & to address, subject and message text) empty, as we’ll populate them via the function code instead. Save the new parameter.

Now it is time to get the API key from sendgrid and add the required app setting. You create the API key in the sendgrid dashboard: go to Settings -> API Keys,


then “Create API key” and “General API Key”


Give the Api Key a descriptive name (I chose “Azure functions sendgrid example”) and make sure you grant the API key full access to “Send mail”:


When you hit save, the API key will be displayed. Copy the key to the clipboard (it might also be an idea to keep that browser window open until you’ve verified it works, since that screen is the first and last time sendgrid will ever show you the key).

Moving back to the azure portal, it’s time to create the app setting. Click “Function app settings” in the bottom left corner:


and then “Go to app service settings.”

There, select “Application Settings” and add a new App Setting. Its key has to be “SendGridApiKey” (as we specified in the sendgrid parameter setting). Paste the sendgrid api key as its value, click save and close the app settings blade. You should be returned to the azure function. Click on “Develop” under the function name – it’s time to write some code!

Since we have specified a new out parameter called “message”, the first thing we have to do is to add that. The sendgrid parameter type expects a “Mail” type object, found in the SendGrid.Helpers.Mail namespace. We also have to import the sendgrid assembly. Add the following two lines at the very top of the function:

#r "SendGrid"

using SendGrid.Helpers.Mail;

Add the new out parameter to the function signature and remove the async and Task parts (out parameters do not play nice with async).

public static HttpResponseMessage Run(
    HttpRequestMessage req, 
    TraceWriter log, 
    out Mail message)

Removing async means you will also have to change the following line

dynamic data = await req.Content.ReadAsAsync<object>();


to

dynamic data = req.Content.ReadAsAsync<object>().Result;

When that is done, all the boilerplate/infrastructure is in place, and it is just a matter of composing the email. The entire sendgrid C# API is documented at github, but the example below is a minimal, working implementation:

message = new Mail();
message.Subject = "Someone passed a name to an azure function!";

var personalization = new Personalization();
personalization.AddTo(new Email(""));
message.AddPersonalization(personalization);

Content content = new Content
{
    Type = "text/plain",
    Value = $"The name was {name}"
};
message.AddContent(content);
That’s everything needed to integrate sendgrid with azure functions. The complete function as a gist:

#r "SendGrid"
using System.Net;
using SendGrid.Helpers.Mail;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log, out Mail message)
{
    log.Info("C# HTTP trigger function processed a request.");

    // parse query parameter
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Get request body
    dynamic data = req.Content.ReadAsAsync<object>().Result;

    // Set name to query string or body data
    name = name ?? data?.name;

    message = new Mail();
    message.Subject = "Someone passed a name to an azure function!";

    var personalization = new Personalization();
    personalization.AddTo(new Email(""));
    message.AddPersonalization(personalization);

    Content content = new Content
    {
        Type = "text/plain",
        Value = $"The name was {name}"
    };
    message.AddContent(content);

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}

Let me know how you get on in the comments!

Azure Function as an HTTP endpoint

Article three describing my first foray into serverless computing. Introduction here, storage queue triggered function here. TL;DR: I’m using this as a way to familiarize myself with Azure Functions, and to save some precious family time better spent not trying to remember what we need come Sunday.

In my projects, we always end up integrating against some kind of third party – be it booking systems, news aggregators, external providers of content or something as simple as an image resizer. Common for the integrations is the need to transform the data received into a format with which we want to work. Sometimes we need to enrich the data by combining it with other sources, and other times the data exposed by the external API is more than we need.

For this project, I had to get product data from kolonial.no‘s API by barcode in order to add the product to my cart using their internal ID. Their API provides a product endpoint, but it exposes much more data than I needed. I decided to abstract it away behind a GET endpoint of my own.

This is a philosophy I try to follow in all projects I’m responsible for. It means the business code can trust that the contracts we’ve agreed upon won’t break suddenly because of any external factors, and third parties are kept at the edges of the system. Another advantage in this case is that I might want to cache the product data returned at some point, and with the endpoint (function) in place, I have an easy way to implement said caching without having to modify any business code.

So, an HTTP endpoint was needed, and since I’m working in a serverless architecture, that meant I had to create an HTTP triggered function:

The HTTP triggered function is found in the “API & WebHooks” scenario,

Note that you can specify the access/authorization level for the function:


“Function” means the caller needs to provide a key which is specific to the function in the query string. “Admin” means an app wide key is needed, while “anonymous” will allow anyone who accesses the URL to trigger the function.
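
A rough Python model of the three levels may make the difference clearer. The key names and values here are made up, and the actual Functions runtime handles keys rather differently:

```python
FUNCTION_KEYS = {"get-product": "fn-key-123"}  # per-function keys (hypothetical values)
HOST_KEY = "admin-key-456"                     # app-wide ("admin") key

def authorize(level, function_name, provided_key):
    """Sketch of the three access levels for an HTTP triggered function."""
    if level == "anonymous":
        return True                            # no key needed at all
    if level == "function":
        # a function-level call accepts that function's own key or the app-wide key
        return provided_key in (FUNCTION_KEYS.get(function_name), HOST_KEY)
    if level == "admin":
        return provided_key == HOST_KEY        # only the app-wide key will do
    return False
```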

Now, the HTTP trigger is not always used as a RESTful endpoint – it can just as well trigger a processing job, write to a storage table or whatever else you need it to do, but in this case I wanted to return a product object in the format determined by the Accept header of the request.

In order to make the function return an http response, it needs to have an HTTP output parameter, which is set up by default when you create the function. I left everything as it was, and focused on writing the small function:

#r "Newtonsoft.Json"

using System;
using System.Configuration;
using System.Net;
using Newtonsoft.Json;

// These are the (interesting parts) of the models returned from the kolonial API.
// The actual JSON returned contains more properties, but I see no reason to bother the deserializer
// with more than what we actually need.
public class KolonialSearchResponse
{
    public KolonialProduct[] Products { get; set; }
}

public class KolonialProduct
{
    public string Id { get; set; }
    public string Barcode { get; set; }
    public string Brand { get; set; }
    public string Name { get; set; }
}

static HttpClient HttpClient = null;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    if (HttpClient == null)
        HttpClient = CreateHttpClient(log);

    string barcode = GetBarcodeFromRequest(req);

    // Request product from kolonial
    var httpResult = await HttpClient.GetAsync("" + barcode);
    if (!httpResult.IsSuccessStatusCode)
    {
        // not much we can do here, so just log the error and return an error status code.
        log.Error("Error occurred when getting data from kolonial.");
        return req.CreateErrorResponse(HttpStatusCode.InternalServerError, "Product not found");
    }

    var json = await httpResult.Content.ReadAsStringAsync();
    if (json != null)
    {
        log.Info($"Processing JSON: {json}");
        var results = JsonConvert.DeserializeObject<KolonialSearchResponse>(json);

        // If the search call returns anything but a single product
        // we have no idea how to handle it.
        if (results.Products != null && results.Products.Length == 1)
        {
            var product = results.Products.First();
            product.Barcode = barcode;
            return req.CreateResponse(HttpStatusCode.OK, product);
        }
    }

    return req.CreateErrorResponse(HttpStatusCode.BadRequest, "Product not found");
}

public static HttpClient CreateHttpClient(TraceWriter log)
{
    log.Info("Instantiating HTTP client.");
    var httpClient = new HttpClient();
    httpClient.DefaultRequestHeaders.Add("X-Client-Token", ConfigurationManager.AppSettings["KolonialToken"]);
    httpClient.DefaultRequestHeaders.Add("User-Agent", ConfigurationManager.AppSettings["KolonialUserAgent"]);
    return httpClient;
}

public static string GetBarcodeFromRequest(HttpRequestMessage req)
{
    return req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "barcode", true) == 0)
        .Value;
}
Hopefully, the code should be pretty self-explanatory. There are a couple of things which are good to know, though: Newtonsoft.Json is available in azure functions, but it must be imported:

#r "Newtonsoft.Json"


And while I very much doubt that I would ever exhaust anything with the small scale of my project, the Microsoft patterns and practices team do recommend that HttpClient is instantiated as few times as possible, and instead kept in memory and re-used. That’s why the static HttpClient is created the first time the function is triggered and kept around:

if(HttpClient == null)
    HttpClient = CreateHttpClient(log);

What’s interesting is that despite the function in principle being serverless, the static variable will hang around (albeit for an unpredictable amount of time), effectively allowing us to follow best practice. You can read more about sharing state in Azure Functions on Mark Heath’s excellent blog.

When it comes to the formatting of the return value, the framework will take care of that for you as long as you stick to the req.CreateResponse(…) functions. As an example, this is how the function responds to an Accept: application/json request:
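
Conceptually, the formatter selection works like the Python sketch below. This is a toy model of content negotiation, not Web API's actual formatter pipeline:

```python
import json

def to_xml(obj):
    """Very small XML rendering for flat dictionaries (illustration only)."""
    return "".join(f"<{k}>{v}</{k}>" for k, v in obj.items())

# media type -> serializer; the framework keeps a similar registry of formatters
FORMATTERS = {"application/json": json.dumps, "application/xml": to_xml}

def create_response(obj, accept_header):
    """Pick the first formatter matching the Accept header; fall back to JSON."""
    for media_type in accept_header.split(","):
        formatter = FORMATTERS.get(media_type.strip().split(";")[0])
        if formatter:
            return formatter(obj)
    return json.dumps(obj)

product = {"Id": "123", "Name": "Milk"}
```

Sending Accept: application/xml yields an XML body, anything unknown falls back to JSON – the same behaviour the Postman screenshots below demonstrate.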


While requesting XML, predictably, will return this:


(Both screenshots courtesy of the wonderful Postman Chrome plugin)

And that’s that – a small integration with an external service, contained in a single azure function.


Azure functions – storage queue trigger

As I described in the previous blog post (in Norwegian), I’ve spent a few hours the last couple of days to set up a barcode scanner in our kitchen. The barcode scanner is attached to a raspberry pi and when a barcode is scanned, the barcode is pushed to an azure storage queue and the product eventually ends up in our shopping cart.

I’m using the project to familiarize myself with Azure Functions before utilizing them in customer projects, and I decided to use a storage queue triggered function to process incoming barcodes. Read more about storage queues in the Azure documentation – they’re basically lightweight, high volume and low-maintenance distributed queues.

Creating the function

Before coding the function, it has to be set up as part of an Azure Function app. An azure function app is basically a special version of a web app. The app can be created either via the Azure Functions Portal or through the regular azure portal.

Side note: While working on this, the continuous deployment of a totally unrelated app in the same azure subscription suddenly started failing when a powershell command tried to switch deployment slots, with the error message

Requested value 'Dynamic' was not found.

This had me scratching my head for quite some time, but some googling revealed that adding a function app to an existing subscription will (may?) break powershell functionality. The fix was to completely delete the function app. YMMV.

Once the app is set up, it’s time to create the function. As mentioned, we want an azure storage queue triggered function, and I opted for C#:


Selecting the trigger type will reveal a few options below the “Choose a template” grid:


Here we give the function a descriptive name and enter the name of an existing queue (or a new one) in the “Queue name” field. The storage account connection field is a drop down of the storage accounts available. It obviously needs to be set to the storage account we want the queue to be stored in. Once we click create, the function is added to the function app, and it will be triggered (executed) every time a new message is added to the storage queue “incoming-barcodes”. This configuration (queue name, storage account) can be changed at any time by clicking “Integrate” beneath the function name in the function portal:


The next step is to actually write the function. In this first version, everything is done in one function call, and we’re only covering the happy path: we assume the kolonial account exists, that the password is correct and that the product exists. If not, the message will end up in the poison queue or just log a warning message. A natural next step would be to alert the user that any errors occurred, but that’s for another day.

The default entry point for the function is a static method called “Run” (I realize that RunAsync would be more correct with regards to naming async methods, but I’m sticking as close as I can to the defaults):

public static async Task Run(string message, TraceWriter log)
{
    log.Info($"Processing incoming barcode: {message}");
    var incoming = IncomingBarcode.FromMessage(message);
    var httpClient = await CreateKolonialHttpClientAsync();
    var kolonialProduct = await GetKolonialProductAsync(httpClient, incoming.Barcode);
    if (kolonialProduct == null)
    {
        log.Warning($"Product with barcode {incoming.Barcode} is not available at Kolonial.");
        return;
    }
    await AddProductToCartAsync(httpClient, kolonialProduct, log);
}

First, we extract the Raspberry Pi ID and barcode from the incoming message with a small data transport class (IncomingBarcode), since the barcode is passed to the function by the Raspberry Pi in the format “rpi-id:barcode”.
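The IncomingBarcode class itself isn’t shown; the parsing amounts to splitting the message on the first colon. A minimal sketch of the equivalent logic (in Python, with a hypothetical function name):

```python
def parse_incoming(message):
    """Split an "rpi-id:barcode" queue message into its two parts.

    Splits on the first colon only, in case a barcode should ever
    contain one itself.
    """
    rpi_id, _, barcode = message.partition(":")
    return rpi_id, barcode

print(parse_incoming("kitchen-rpi:7039010019828"))
```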

The Kolonial API needs to be set up with a user agent and a token, and in order to access a specific user’s cart we also need a user session. That’s all handled by the CreateKolonialHttpClientAsync function:

public static async Task<HttpClient> CreateKolonialHttpClientAsync()
{
    var httpClient = new HttpClient();
    // The Kolonial API requires a client token and a user agent (supplied by Kolonial) to work
    httpClient.DefaultRequestHeaders.Add("X-Client-Token", ConfigurationManager.AppSettings["KolonialToken"]);
    httpClient.DefaultRequestHeaders.Add("User-Agent", ConfigurationManager.AppSettings["KolonialUserAgent"]);
    // Modifying the cart requires a valid, active session
    string sessionId = await GetSessionIdAsync(httpClient);
    // Ensure the session cookie is sent as part of the calls to the API.
    httpClient.DefaultRequestHeaders.Add("Cookie", $"sessionid={sessionId}");
    return httpClient;
}

public static async Task<string> GetSessionIdAsync(HttpClient httpClient)
{
    // The session cookie ID is retrieved by passing an object like { username: "something", password: "something-secure" }
    // to the user/login endpoint
    var result = await httpClient.PostAsync("", // the user/login endpoint URL goes here
        new StringContent(
            JsonConvert.SerializeObject(new { username = "an-email-address", password = "a-password" }),
            Encoding.UTF8, "application/json"));
    var json = await result.Content.ReadAsStringAsync();
    var response = JsonConvert.DeserializeObject<LogInResponse>(json);
    return response.sessionid;
}

As can be seen in the gist above, configuration values are handled just as in regular .net code, by utilizing the ConfigurationManager. The settings themselves are set via the function app settings:

Navigating to the app settings: Function app settings -> Configure app settings…
…and then adding the setting as usual (you’ll recognize the settings blade from ordinary azure web apps).

Once the connection/session to Kolonial is set up, we attempt to get the product by its barcode. I’ve separated the “get product from Kolonial and transform it to the model I need” part into a separate HTTP triggered Azure Function, which I’ll cover later, so there’s not a whole lot of logic needed: if the function returns non-null JSON, the barcode is a valid Kolonial product, which is returned; if not, we return null.

public static async Task<KolonialProduct> GetKolonialProductAsync(HttpClient client, string barcode)
{
    var httpResult = await client.GetAsync(ConfigurationManager.AppSettings["GetKolonialProductUri"] + "&barcode=" + barcode);
    if (httpResult.IsSuccessStatusCode)
    {
        var json = await httpResult.Content.ReadAsStringAsync();
        if (!string.IsNullOrEmpty(json))
        {
            return JsonConvert.DeserializeObject<KolonialProduct>(json);
        }
    }
    return null;
}

As can be seen in the Run method, all that’s left to do when the product exists is to add it to the cart. This is done by POSTing to the /cart/items endpoint:

public static async Task AddProductToCartAsync(HttpClient httpClient, KolonialProduct kolonialProduct, TraceWriter log)
{
    var productsJson = JsonConvert.SerializeObject(
        new { items = new[] { new { product_id = kolonialProduct.Id, quantity = 1 } } });
    log.Info($"Updating Kolonial with {productsJson}");
    var response = await httpClient.PostAsync("", // the /cart/items endpoint URL goes here
        new StringContent(productsJson, Encoding.UTF8, "application/json"));
}

That’s all there is to it.

Dev notes

I tried setting the project up in Visual Studio, but the development experience for Azure Functions leaves a lot to be desired (the tools are still in beta), so I ended up coding the function in the function portal.

Testing a storage queue triggered function is actually pretty easy. I used the Azure Storage Explorer to manually add entries to the queue when developing.

When working with REST APIs, I like to have strongly typed models to work with. An easy way to create them is to paste example JSON responses into a JSON-to-C# class generator, which will create the C# classes for you.

A shopping list? We have systems for that

With a barcode reader, a Raspberry Pi, a few lines of code and Microsoft Azure, our household’s shopping list at Kolonial is now updated as we run out of groceries.

A few months ago I bought a Raspberry Pi, since I somewhat naively believed I would have the time and inclination to build a Magic Mirror. That illusion was shattered the moment I realized it would involve one part coding/software and 99 parts fine carpentry, so the RPi ended up in a drawer.

That changed very quickly, though, when I came across a blog post from 2013 about Oscar, a system that automatically updates a shopping list on Trello using a barcode scanner and a bit of coding.

Our household became customers of the online grocery stores early on, so my idea was to combine the principle behind Oscar with an existing online store. No sooner said than done: a barcode scanner mounted in the kitchen and a few lines of code later, we have a system that automatically updates our shopping cart at Kolonial when we scan items, either when we see we’re running low or when the empty packaging goes in the bin.

This also became a golden opportunity to explore Azure Functions before putting them to use in actual customer projects – the hope is to publish a small series of blog posts shedding light on the various code and architecture choices the solution consists of.



The barcode reader itself was bought from a more or less randomly chosen seller on eBay. It is a USB model that connects to the RPi in the usual way and registers as an input device:


As the screenshot above shows, I chose Python as the development language on the Raspberry. I usually find myself on the Microsoft stack in my day-to-day work, so the choice was mostly for variety’s sake, and to use Python for something other than saying hello to the world.

A bit of quick googling revealed that evdev was the natural choice for reading input from a device, and it didn’t take many minutes to put together a small snippet that reads all characters up to a newline and assembles them into a barcode.

The first version (that is, the current, and only, version) pushes the barcodes to an Azure Storage Queue, and an Azure Function takes over from there. Incidentally, the code below falls into the category “code that works”, but hardly “dogmatic and structurally correct Python” – take it for what it is.

import evdev
from evdev import *
from azure.storage.queue import QueueService, QueueMessageFormat
import threading
import time
from queue import *
import datetime

# responsible for uploading the barcodes to the azure storage queue.
class BarcodeUploader:
    def __init__(self):
        # Instantiate the azure queue service (from the azure-storage package)
        self.queue_service = QueueService(account_name='wereoutof', account_key='your-key-here')
        # azure functions is _very_ confused if the text isn't base64 encoded
        self.queue_service.encode_function = QueueMessageFormat.text_base64encode
        # use a simple queue to avoid blocking operations
        self.queue = LifoQueue()
        t = threading.Thread(target=self.worker, args=())
        t.daemon = True
        t.start()

    # processes all messages (barcodes) in queue - uploading them to azure one by one
    def worker(self):
        while True:
            while not self.queue.empty():
                barcode = self.queue.get()
                try:
                    self.queue_service.put_message('barcodes', u'account-key:' + barcode)
                    print("Barcode " + barcode + " registered")
                except Exception as exc:
                    print("Exception occurred when uploading barcode: " + repr(exc))
                    # re-submit task into queue
                    self.queue.put(barcode)
            time.sleep(1)

    def register(self, barcode):
        print("Registering barcode " + barcode + "...")
        self.queue.put(barcode)

current_barcode = ""

# Reads barcodes from the device
def readBarcodes():
    global current_barcode
    print("Reading barcodes from device")
    for event in device.read_loop():
        if event.type == evdev.ecodes.EV_KEY and event.value == 1:
            keycode = categorize(event).keycode
            if keycode == 'KEY_ENTER':
                uploader.register(current_barcode)
                current_barcode = ""
            else:
                # strip the "KEY_" prefix, keeping just the character
                current_barcode += keycode[4:]

# Finds the input device with the name "Barcode Reader ".
# Could and should be parameterized, of course. Device name as cmd line parameter, perhaps?
def find_device():
    device_name = 'Barcode Reader '
    devices = [evdev.InputDevice(fn) for fn in evdev.list_devices()]
    for d in devices:
        if d.name == device_name:
            print("Found device " + d.name)
            return d
    return None

# Find device...
device = find_device()
if device is None:
    print("Unable to find barcode reader")
#... instantiate the uploader...
uploader = BarcodeUploader()
# ... and read the bar codes.
readBarcodes()

As you can see, the snippet doesn’t do much: it reads digits until “enter” and sends them (that is, the barcode) to Azure via the storage queue. In this first, rudimentary version, an in-memory queue is used as the retry mechanism.
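The base64 detail in the uploader is worth a note: the Functions runtime expects the raw queue message body to be base64-encoded text, which is all `QueueMessageFormat.text_base64encode` configures the SDK to do. Stripped of the SDK, the encoding step looks like this:

```python
import base64

def encode_queue_message(text):
    # Base64-encode the UTF-8 bytes of the message text; the Functions
    # runtime decodes this before handing the string to the trigger.
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

print(encode_queue_message("account-key:7039010019828"))
```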

The script is started at boot as a cron job:

# m h dom mon dow command
@reboot sh /home/pi/ > /home/pi/logs/cronlog 2>&1

The shell script sets up the Python environment and starts the barcode reader:

#!/usr/bin/env bash
cd /home/pi/Devel/barcode_reader/

Next time I’ll show how this is handled on the receiving end, with an Azure Function storage queue trigger (that is, a function that is executed every time something is added to a specific queue).

Experiences with Paypal Adaptive Payments API

I just finished a small project involving PayPal’s Adaptive Payments API (and the NuGet package they supply for it). The points below are things I spent too much time on; hopefully this can save someone the trouble.

First of all, I got the error “Your payment can’t be completed. Please return to the participating website and try again.” after completing the payment as a test user in the sandbox. This was solved by creating an application ID for the application and including it in the signature credentials I pass to PayPal. I think this happened after I switched to the SignatureCredential implementation. Its constructor does not accept the Application ID, but the property can be set afterwards.

The second stumbling block was a generic exception (“Input string was not in a correct format.”) when passing the payment info to PayPal. It turns out that the Adaptive Payments package assumes that the point is the universal decimal mark – “100.00” is OK, while “100,00”, which is my locale’s preferred way of expressing a hundred, does not compute.
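In other words, the amount must be formatted with a point as the decimal mark regardless of the machine’s locale settings. The principle, sketched in Python (whose format strings are locale-independent by default):

```python
def to_api_amount(amount):
    # Format with exactly two decimals and a point as the decimal mark,
    # independent of the system locale, as the Adaptive Payments
    # package expects.
    return f"{amount:.2f}"

print(to_api_amount(100))   # "100.00"
```

In C#, the equivalent is formatting the amount with CultureInfo.InvariantCulture before handing it to the package, rather than relying on the thread’s current culture.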