Azure Functions – storage queue trigger

As I described in the previous blog post (in Norwegian), I’ve spent a few hours over the last couple of days setting up a barcode scanner in our kitchen. The scanner is attached to a Raspberry Pi, and when a barcode is scanned, it is pushed to an Azure storage queue and the product eventually ends up in our shopping cart.

I’m using the project to familiarize myself with Azure Functions before utilizing them in customer projects, and I decided to use a storage queue triggered function to process incoming barcodes. Storage queues are basically lightweight, high-volume and low-maintenance distributed queues.

Creating the function

Before coding the function, it has to be set up as part of an Azure Function app. A function app is basically a special version of a web app. The app can be created either via the Azure Functions portal or through the regular Azure portal.

Side note: While working on this, the continuous deployment of a totally unrelated app in the same Azure subscription suddenly started failing when a PowerShell command tried to switch deployment slots, with the error message

Requested value 'Dynamic' was not found.

This had me scratching my head for quite some time, but some googling revealed that adding a function app to an existing subscription will (may?) break PowerShell functionality. The fix was to completely delete the function app. YMMV.

Once the app is set up, it’s time to create the function. As mentioned, we want an Azure storage queue triggered function, and I opted for C#:


Selecting the trigger type will reveal a few options below the “Choose a template” grid:


Here we give the function a descriptive name and enter the name of an existing queue (or a new one) in the “Queue name” field. The storage account connection field is a drop-down of the available storage accounts. It obviously needs to be set to the storage account we want the queue to be stored in. Once we click create, the function is added to the function app, and it will be triggered (executed) every time a new message is added to the storage queue “incoming-barcodes”. This configuration (queue name, storage account) can be changed at any time by clicking “Integrate” beneath the function name in the function portal:
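Under the hood, this trigger configuration ends up in the function’s function.json file. A sketch of what it might look like for this function; the connection setting name is an assumption (it defaults to the function app’s own storage account), while the binding name matches the `message` parameter of the function below:

```json
{
  "bindings": [
    {
      "name": "message",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "incoming-barcodes",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```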


The next step is to actually write the function. In this first version, everything is done in one function call, and we’re only covering the happy path: we assume the Kolonial account exists, that the password is correct and that the product exists. If not, the message will end up in the poison queue, or we just log a warning message. A natural next step would be to alert the user that an error occurred, but that’s for another day.

The default entry point for the function is a static method called “Run” (I realize that RunAsync would be more correct with regards to naming async methods, but I’m sticking as close as I can to the defaults):

public static async Task Run(string message, TraceWriter log)
{
    log.Info($"Processing incoming barcode: {message}");
    var incoming = IncomingBarcode.FromMessage(message);
    var httpClient = await CreateKolonialHttpClientAsync();
    var kolonialProduct = await GetKolonialProductAsync(httpClient, incoming.Barcode);
    if (kolonialProduct == null)
    {
        log.Warning($"Product with barcode {incoming.Barcode} is not available at Kolonial.");
        return;
    }
    await AddProductToCartAsync(httpClient, kolonialProduct, log);
}

First, we extract the Raspberry Pi ID and barcode from the incoming message with a small data transport class (IncomingBarcode), since the barcode is passed to the function by the Raspberry Pi in the format “rpi-id:barcode”.
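The IncomingBarcode class itself isn’t shown in the post; a minimal sketch of what it might look like, given the “rpi-id:barcode” message format (the property names are assumptions):

```csharp
public class IncomingBarcode
{
    public string RaspberryId { get; private set; }
    public string Barcode { get; private set; }

    // Messages arrive in the format "rpi-id:barcode", e.g. "kitchen-rpi:7038010009457"
    public static IncomingBarcode FromMessage(string message)
    {
        var parts = message.Split(new[] { ':' }, 2);
        return new IncomingBarcode
        {
            RaspberryId = parts[0],
            Barcode = parts[1]
        };
    }
}
```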

The Kolonial API needs to be set up with a user agent and a token, and in order to access a specific user’s cart we also need to get a user session. That’s all handled by the CreateKolonialHttpClientAsync function:

public static async Task<HttpClient> CreateKolonialHttpClientAsync()
{
    var httpClient = new HttpClient();
    // The Kolonial API requires a client token and a user agent (supplied by Kolonial) to work
    httpClient.DefaultRequestHeaders.Add("X-Client-Token", ConfigurationManager.AppSettings["KolonialToken"]);
    httpClient.DefaultRequestHeaders.Add("User-Agent", ConfigurationManager.AppSettings["KolonialUserAgent"]);
    // Modifying the cart requires a valid, active session
    string sessionId = await GetSessionIdAsync(httpClient);
    // Ensure the session cookie is sent as part of the calls to the API.
    httpClient.DefaultRequestHeaders.Add("Cookie", $"sessionid={sessionId}");
    return httpClient;
}

public static async Task<string> GetSessionIdAsync(HttpClient httpClient)
{
    // The session cookie ID is retrieved by passing an object like { username: "something", password: "something-secure" }
    // to the user/login endpoint
    var result = await httpClient.PostAsync("",
        new StringContent(
            JsonConvert.SerializeObject(new { username = "an-email-address", password = "a-password" }),
            Encoding.UTF8,
            "application/json"));
    var json = await result.Content.ReadAsStringAsync();
    var response = JsonConvert.DeserializeObject<LogInResponse>(json);
    return response.sessionid;
}

As can be seen in the gist above, configuration values are handled just as in regular .NET code, by utilizing the ConfigurationManager. The settings themselves are set via the function app settings:

Navigating to the app settings: Function app settings -> Configure app settings…
…and then adding the setting as usual (you’ll recognize the settings blade from ordinary Azure web apps).

Once the connection/session to Kolonial is set up, we attempt to get the product by its barcode. I’ve separated the “get product from Kolonial and transform it to a model I need” part into a separate HTTP triggered Azure function, which I’ll cover later, so there’s not a whole lot of logic needed: if the function returns non-null JSON, the barcode is a valid Kolonial product, which is returned; if not, we return null.

public static async Task<KolonialProduct> GetKolonialProductAsync(HttpClient client, string barcode)
{
    var httpResult = await client.GetAsync(ConfigurationManager.AppSettings["GetKolonialProductUri"] + "&barcode=" + barcode);
    if (httpResult.IsSuccessStatusCode)
    {
        var json = await httpResult.Content.ReadAsStringAsync();
        if (json != null)
        {
            return JsonConvert.DeserializeObject<KolonialProduct>(json);
        }
    }
    return null;
}

As can be seen in the Run method, all that’s left to do when the product exists is to add it to the cart. This is done by POSTing to the /cart/items endpoint:

public static async Task AddProductToCartAsync(HttpClient httpClient, KolonialProduct kolonialProduct, TraceWriter log)
{
    var productsJson = JsonConvert.SerializeObject(
        new { items = new[] { new { product_id = kolonialProduct.Id, quantity = 1 } } });
    log.Info($"Updating Kolonial with {productsJson}");
    // POST to the /cart/items endpoint
    var response = await httpClient.PostAsync(
        "/cart/items/",
        new StringContent(productsJson, Encoding.UTF8, "application/json"));
}

That’s all there is to it.

Dev notes

I tried setting the project up in Visual Studio, but the development experience for Azure Functions leaves a lot to be desired (the tools are still in beta), so I ended up coding the function in the function portal.

Testing a storage queue triggered function is actually pretty easy. I used the Azure Storage Explorer to manually add entries to the queue when developing.
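A test message can also be pushed from code. A sketch using the classic Azure storage SDK (the WindowsAzure.Storage NuGet package, which matches this era of Azure Functions); the connection string and the example barcode are assumptions:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueTestClient
{
    static void Main()
    {
        // Connection string for the storage account holding the queue;
        // "UseDevelopmentStorage=true" targets the local storage emulator.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var queue = account.CreateCloudQueueClient().GetQueueReference("incoming-barcodes");
        queue.CreateIfNotExists();
        // The function expects messages in the "rpi-id:barcode" format
        queue.AddMessage(new CloudQueueMessage("kitchen-rpi:7038010009457"));
    }
}
```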

When working with REST APIs, I like to have strongly typed models to work with. An easy way to create them is to paste example JSON responses into a JSON-to-C# converter, which will generate the C# classes for you.

OData Web APIs with AutoMapper 3

When developing an API on top of a domain layer, we rarely want to expose the actual domain objects to the API consumers. Rather, it is usually a matter of presenting the consumers with a subset of the domain object’s properties or a DTO/model object representing a composite of multiple domain objects.

Although the combination of OData and Entity Framework does provide some control of the presentation of the objects returned, it quickly falls apart when more advanced combinations and composites are needed. This meant that, to me, OData via .NET Web Api was not a viable alternative in most of my real world (read: customer) projects.

Enter AutoMapper. It has been an invaluable part of my development arsenal for quite some time, and the introduction of LINQ functionality in its latest incarnation makes an already awesome library even better. The LINQ support means that AutoMapper no longer populates the source objects completely, skipping any properties which aren’t needed for the mapping to the destination type. In other words, sensibly designed DTOs and some careful mapping configuration are all that’s needed to create an effective OData API.


Given the domain object below:

[gist /]

An ordinary domain object, containing a couple of properties we are not likely to want to expose over an OData API. It would be a horrible idea to expose the Image byte array, and there’s no need to expose the user who added the movie to the database, either. For this example’s sake, we will limit the OData presentation of a movie to its Id, Title and Year of Release.

[gist /]
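The embedded gists aren’t available here; based on the description above, the domain object and the model exposed over OData might look something like this (the class and property names beyond Id, Title, Image and the release year are assumptions):

```csharp
// Domain object, as stored via Entity Framework
public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime ReleaseDate { get; set; }
    public byte[] Image { get; set; }         // should not be exposed over the API
    public string AddedByUser { get; set; }   // neither should this
}

// The model exposed over OData: Id, Title and Year of Release only
public class MovieModel
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int YearOfRelease { get; set; }
}
```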

The mapping profile needs to be configured, and that’s typically done in a separate class inheriting from AutoMapper.Profile. AutoMapper does a good job of matching and mapping properties which share names, but has to be told that we just want the year part of the release date: [gist /]
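The mapping gist is likewise missing; a sketch of such a profile, assuming the domain object and model are named Movie and MovieModel and the domain object has a ReleaseDate property:

```csharp
public class MovieMappingProfile : Profile
{
    protected override void Configure()
    {
        // Properties with matching names (Id, Title) are mapped automatically;
        // YearOfRelease has to be mapped explicitly to the year part of the release date.
        CreateMap<Movie, MovieModel>()
            .ForMember(m => m.YearOfRelease, opt => opt.MapFrom(movie => movie.ReleaseDate.Year));
    }
}
```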

When that’s done, we merely need to set up an OData controller to deliver Movie objects over OData, and create a service and repository to bring the objects from the database to the controller. The service, in this example called «MovieService», fetches the domain objects from an instance of the MovieRepository class. This layering might seem a bit contrived in our simple example, but it should prove that this technique is viable in a real-world multi-layered architecture.

[gist /] [gist /]
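Those gists are missing too; a sketch of how the service and controller could fit together (the method signatures are assumptions, and Project().To requires the AutoMapper.QueryableExtensions namespace):

```csharp
public class MovieService
{
    private readonly MovieRepository _repository = new MovieRepository();

    // Returns an IQueryable of models; the OData layer composes its query on top of this,
    // so nothing is fetched from the database until the final query runs.
    public IQueryable<MovieModel> GetMovies()
    {
        return _repository.GetAll().Project().To<MovieModel>();
    }
}

public class MoviesController : ODataController
{
    private readonly MovieService _service = new MovieService();

    [Queryable]
    public IQueryable<MovieModel> Get()
    {
        return _service.GetMovies();
    }
}
```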

I will skip the configuration of the actual OData endpoint, which I have more or less copied verbatim from a tutorial. In addition, a Visual Studio 2012 project containing a runnable version of the code in this article is available on GitHub.

The most interesting part of this example, at least for those of us familiar with AutoMapper v2, is this line in the service: [gist /]
In version 2, it would most likely look like this instead: [gist /]
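Since those gists are unavailable, here is a sketch of the two variants being contrasted (not the original code):

```csharp
// AutoMapper 3, with LINQ projection – the mapping is translated into the
// SQL query by Entity Framework, fetching only the columns the model needs:
return _repository.GetAll().Project().To<MovieModel>();

// AutoMapper 2 – the full domain objects are materialized from the database
// first, and the mapping happens in memory afterwards:
return _repository.GetAll().ToList().Select(m => Mapper.Map<MovieModel>(m)).AsQueryable();
```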

The difference is the Project().To() syntax, which ensures that only the property values needed for the mapping are retrieved from the domain objects (and, by extension, the database). The OData endpoint now returns the Title and the Year of Release:

List of all the movies in the database

SQL Express Profiler proves that the LINQ/AutoMapper/Entity Framework combination leads to an SQL query fetching just the properties needed:

LINQ-generated SQL.

If we try to fetch all Movies starting with a “D”, this is what happens:
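Using standard OData query options, such a request might look like this (the endpoint path is an assumption):

```http
GET /odata/Movies?$filter=startswith(Title,'D') eq true
```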


And the SQL generated to fetch the objects is still as slim as possible.

How about adding elements to the Movie model which aren’t part of the Movie domain object? No problem, we’ll first extend the Model:

[gist /]

And then give AutoMapper a hand setting up the properties it’s not able to figure out itself:

[gist /]
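The gists for this step are also unavailable. As a purely hypothetical example of a model property that isn’t on the domain object, a review count could be computed from a related collection (the Reviews navigation property is invented for illustration); this kind of mapping is what makes the generated SQL more complex:

```csharp
public class MovieModel
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int YearOfRelease { get; set; }
    public int NumberOfReviews { get; set; }   // hypothetical added property
}

// In the mapping profile – the Count() is translated into the SQL query:
CreateMap<Movie, MovieModel>()
    .ForMember(m => m.YearOfRelease, opt => opt.MapFrom(movie => movie.ReleaseDate.Year))
    .ForMember(m => m.NumberOfReviews, opt => opt.MapFrom(movie => movie.Reviews.Count()));
```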

This leads to more data being returned when getting the same URL as previously:


And even though the SQL’s complexity increases, it’s still limited to the properties/fields it actually needs:


Hopefully, these examples give you an idea of how powerful and productive the combination of Entity Framework (or other ORMs) and AutoMapper can be. If OData fits the use case, devs can focus on the throttling and security side of things, and let the API consumers themselves decide how they query the data.

There are some security issues which must be addressed in a production system that I haven’t covered in this article; there is an abundance of advice and tutorials on that topic all around the internet.

I have not yet had the time to experiment with how the OData $expand option can be used within the context of EF/LINQ and AutoMapper.


* For a discussion on the topic, see