Deploying a code first entity framework database in Azure DevOps

I spent most of Friday banging my head against Azure’s new devops experience, trying to get a database migration set up as part of a web app deployment. The project was a .net core 2.1 web site with an Entity Framework database, and we hit a surprising number of hurdles along the way. Hopefully, this write-up will help others in the same situation save some time.

Our solution, at least for the purposes of this post, is made up of a web app project containing the business logic and a .net standard class library with the EF code first classes (note that this is a separate database project, which most tutorials fail to address).

The first step of setting up the pipeline is creating a build in azure devops:

azure-devops-new-build

We set it up against our source code provider and started out with the “Asp.net core” template – in fact, we did not have to alter any of the defaults for it to work straight out of the box.

Getting the database up and running was another story, however. Articles, tips and tutorials online are a bit outdated, and provide solutions which either no longer work or are no longer necessary (e.g. adding Microsoft.EntityFrameworkCore.Tools.DotNet to the DB project, which is no longer required and generates a build warning).

Generate migration script

The first step is to generate the migration script as part of the build, which the release step(s) will run against the database further down the line.

We gave up getting the built in .net core task to work with entity framework (we could not get past the error message ‘No executable found matching command “dotnet-ef”‘ regardless of what we tried), so we fell back to a good ol’ command line task:

command-line-task

And for your copying needs:

dotnet ef migrations script -i -o %BUILD_ARTIFACTSTAGINGDIRECTORY%\migrate.sql --project EfMigrationApp.Database\EfMigrationApp.Database.csproj --startup-project EfMigrationApp\EfMigrationApp.csproj

You will obviously need to replace the project names with your own.

A quick breakdown of the command:

dotnet ef migrations script: the command to generate a migration script

-i: i is for idempotent, i.e. the generated script can be run multiple times against the same database without conflicts.

-o %BUILD_ARTIFACTSTAGINGDIRECTORY%\migrate.sql: the migration script will be placed in the artifact staging directory, along with the rest of the build output

--project EfMigrationApp.Database\EfMigrationApp.Database.csproj: the project containing the database definition

--startup-project EfMigrationApp\EfMigrationApp.csproj: tells EF which project is the startup project of the app (the project the tooling builds and uses to find the configuration at design time).

Run migrations in the release pipeline

I’m sure there are many ways to run sql scripts in the release step (both command line tasks and powershell tasks could be utilized), but we landed on the predefined “Azure SQL Publish” task, which we added after the web app deploy task:

release-db

Fill in the db details according to your project, and the deployment package section with these values:

Action: Publish

Type: SQL script file

Sql script:

$(System.ArtifactsDirectory)/_$(Build.DefinitionName)/drop/migrate.sql (note the underscore before the build.definitionname variable – I suspect there’s a system variable we could use instead)

And that’s basically it – running the build and release pipeline will deploy the web app first, then migrate the database according to your latest EF code goodness. Enjoy!

Time zone and group by day in influxdb

All right, time for a slightly technical one.

At my current job, we do a lot of work on time series values, and we have recently started using InfluxDB, a blazingly fast time series database written in Go. In order for our customers to get an overview of the devices they are operating, they want to see a certain metric on a per-day basis. This meant we had to find a way to group the data by day. InfluxDB, being written for tasks like this, has a very nice syntax for time grouping:

select mean(value) from reportedValues where time > 1508283700000000000 group by time(1d), deviceId

The query above returns a nice list of exactly what we asked for – a list of the devices in question and their average value, grouped by day:

name: reportedValues
tags: deviceId='bathroom-window'
time mean
---- ----
2017-10-17T00:00:00Z
2017-10-18T00:00:00Z 1.02

name: reportedValues
tags: deviceId='kitchen-window'
time mean
---- ----
2017-10-17T00:00:00Z 0.4
2017-10-18T00:00:00Z 0.75

We did run into an issue, however, with time zones. Our customers are in different time zones (none of them UTC, which is what all values in InfluxDB are stored as), so especially when grouping by day, we had to find a way to group by day in the time zone in question.

Luckily, v1.3 of InfluxDB introduced the time zone clause. Appending TZ('Europe/Oslo') to the query above should, in theory, give us the same time series grouped slightly differently. We did run into a slight roadblock here, though. The query

select mean(value) from reportedValues where time > 1508283700000000000 group by time(1d), deviceId TZ('Europe/Oslo')

returned

ERR: error parsing query: unable to find time zone Europe/Oslo

and we got the same result regardless of which time zone we tried (even the one mentioned in the documentation, “America/Los_Angeles”, failed).

I then tried the exact same query on a linux VM I had running, and lo and behold:

name: reportedValues
tags: deviceId='bathroom-window'
time mean
---- ----
2017-10-18T00:00:00+02:00 1.02

name: reportedValues
tags: deviceId='kitchen-window'
time mean
---- ----
2017-10-18T00:00:00+02:00 0.6333333333333334

(Note that the averages are different because of the shifted day boundaries, and that the time stamps reflect the time zone of the query and result.)

So obviously, this was something Windows specific. I noticed that the GitHub PR which added the TZ clause uses Go’s time package, calling the LoadLocation function. The docs for that function state that “The time zone database needed by LoadLocation may not be present on all systems, especially non-Unix systems”, so I was obviously on to something. There’s a Go user reporting something similar at https://github.com/golang/go/issues/21881, and the title of that issue solved this for me: to get this to work on my local Windows machine, all I had to do was

install go and restart the influx daemon (influxd.exe)

XML docs in service fabric web api

I usually use the swashbuckle swagger library to auto-document my web APIs, but ran into issues when trying to deploy said APIs to a service fabric cluster – the swashbuckle/swagger initialization code threw a “file not found” exception when trying to load the XML file generated by the build.
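For context, the swagger registration loads the XML documentation file along these lines – a sketch for classic Web API with Swashbuckle; the API title, file name and path are of course project specific:

using System;
using System.IO;
using System.Web.Http;
using Swashbuckle.Application;

public static class SwaggerConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.EnableSwagger(c =>
            {
                c.SingleApiVersion("v1", "Web API");
                // This is where the "file not found" surfaced when the XML file
                // was missing from the x64 build output.
                var xmlPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "WebApi.xml");
                c.IncludeXmlComments(xmlPath);
            })
            .EnableSwaggerUi();
    }
}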

In order to get it to work, I edited the csproj file to generate the XML files for the x64 configurations as well – from

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
  <DocumentationFile>bin\Debug\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
  <DocumentationFile>bin\Release\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

to

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
  <DocumentationFile>bin\Debug\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
  <DocumentationFile>bin\Release\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
  <DocumentationFile>bin\Debug\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
  <DocumentationFile>bin\Release\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

That’s twice this has had me stumped for a few minutes; hopefully this short post can help somebody else as well.

.net Web API and Route Name not found in Route Collection

Another entry in the series “tweets and blog posts I write mostly to have somewhere to look the next time I encounter this error”: this time, I spent too much time figuring out why this piece of code in an MVC view:

@Url.HttpRouteUrl("GetPerformance", new { id = Model.PerformanceId })

combined with this (working, I might add) API endpoint

[RoutePrefix("api/performances")]
public class PerformancesController : ApiController
{
    [HttpGet]
    [Route("", Name="GetPerformance")
    public Performance GetPerformances(string id = "")
    {
        // Some code...
    }
}

did not resolve to /api/performances/{id}, but instead presented me with a glorious yellow screen proudly displaying an ArgumentException with the exception message

A route named ‘GetPerformance’ could not be found in the route collection.
Parameter name: name

As is often the case, it turns out I had tried to be a little too clever: this was an Episerver project with the legacy global.asax.cs file doing the MVC routing, while the Web API was set up in the OWIN Startup.cs class, with a new HttpConfiguration instance created there and attached with app.UseWebApi.

To resolve the error, I had to tie the Web API registration to GlobalConfiguration.Configuration instead of a new instance in startup.cs. With that done, both MVC and API routing were aware of each other, the error went away, and I was able to programmatically create web API route links in MVC views.
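In code, the shape of the fix is roughly this – a sketch, with the Episerver/MVC specifics left out:

using System.Web.Http;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Use the global configuration, which the MVC side also knows about,
        // instead of creating a new HttpConfiguration here.
        var config = GlobalConfiguration.Configuration;
        config.MapHttpAttributeRoutes();
        config.EnsureInitialized();

        app.UseWebApi(config);
    }
}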

Sendgrid and Azure Functions

Azure Functions recently introduced SendGrid as an output type. The feature is currently in beta/preview, and the documentation is accordingly very sparse, so I thought I’d quickly write up how I got it working. Hopefully, it might save you some googling (then again, you probably googled your way here, but I digress).

I currently use sendgrid to mail a shopping list, built up through the use of a barcode scanner in our kitchen, to myself every Sunday. However, since the content of the email is somewhat irrelevant in this case, we’ll build a function which sends a mail whenever its HTTP endpoint is triggered/accessed.

As a prerequisite for this short tutorial you will need an active sendgrid account. They are kind enough to offer a free tier, so head on over and register if you haven’t already.

I’ll assume you have already created an azure function app (if not, follow Microsoft’s instructions). In it, create a new function; select the HttpTrigger-Csharp type, give it a name and click Create.

After a short wait, you’ll be presented with your brand new function. The default implementation of an HTTP trigger will return a name sent to it either in the body of a POST request or as a query string parameter. For this example’s sake, we will email that name to a hard coded email address as well.

First, add a new out parameter to the function. To do so, select “Integrate” beneath the function name in the panel on the left hand side, and then click on “+New output”:

new-output.png
Creating a new output parameter in azure functions.

Then, select “SendGrid (Preview)” (you might have to scroll down a little in the output type list) and hit “Select”. The following screen is not as self-explanatory as it should be, but here’s the deal: The message parameter name is what you think it is – it’s the name of the parameter in your function which will contain the email message. The SendGrid API Key value, however, shouldn’t be populated with your actual SendGrid API key, but with the name of the app setting that will contain the key. Leave the value at its default (SendGridApiKey). You can leave the other settings (from & to address, subject and message text) empty, as we’ll populate them via the function code instead. Save the new parameter.

Now it is time to get the API key from sendgrid and add the required app setting. You create the API key at sendgrid.com. Go to settings -> API Keys.

settings-apikeys

then “Create API key” and “General API Key”

general-api-key

Give the Api Key a descriptive name (I chose “Azure functions sendgrid example”) and make sure you grant the API key full access to “Send mail”:

mail-send

When you hit save, the API key will be displayed. Copy the key to the clipboard (it might also be an idea to keep that browser window open until you’ve verified it works, since that screen is the first and last time sendgrid will ever show you the key).

Moving back to the azure portal, it’s time to create the app setting. Click “Function app settings” in the bottom left corner:

appsettings

and then “Go to app service settings.”

There, select “Application Settings” and add a new App Setting. Its key has to be “SendGridApiKey” (as we specified in the sendgrid parameter setting). Paste the sendgrid api key as its value, click save and close the app settings blade. You should be returned to the azure function. Click on “Develop” under the function name – it’s time to write some code!

Since we have specified a new out parameter called “message”, the first thing we have to do is to add that. The sendgrid parameter type expects a “Mail” type object, found in the SendGrid.Helpers.Mail namespace. We also have to reference the SendGrid assembly and import the namespace. Add the following two lines at the very top of the function:

#r "SendGrid"

using SendGrid.Helpers.Mail;

Add the new out parameter to the function signature and remove the async and Task parts (out parameters do not play nice with async).

public static HttpResponseMessage Run(
    HttpRequestMessage req, 
    TraceWriter log, 
    out Mail message)

Removing async means you will also have to change the following line

dynamic data = await req.Content.ReadAsAsync<object>();

to

dynamic data = req.Content.ReadAsAsync<object>().Result;

When that is done, all the boilerplate/infrastructure is in place, and it is just a matter of composing the email. The entire sendgrid C# API is documented at github, but the example below is a minimal, working implementation:

message = new Mail();
message.Subject = "Someone passed a name to an azure function!";

var personalization = new Personalization();
personalization.AddTo(new Email("joachim.lovf@xyz.com"));

Content content = new Content
{
    Type = "text/plain",
    Value = $"The name was {name}"
};

message.AddContent(content);
message.AddPersonalization(personalization);

That’s everything that is needed to integrate SendGrid with Azure Functions. The complete function is available as a gist.
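For reference, assembled from the snippets above, the function ends up looking roughly like this (the name-reading boilerplate follows the default HTTP trigger template, so the details may differ slightly from your starting point):

#r "SendGrid"

using System.Linq;
using System.Net;
using System.Net.Http;
using SendGrid.Helpers.Mail;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log, out Mail message)
{
    // Name from the query string...
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // ...or from the request body (no await, since out parameters rule out async)
    dynamic data = req.Content.ReadAsAsync<object>().Result;
    name = name ?? data?.name;

    // Compose the email
    message = new Mail();
    message.Subject = "Someone passed a name to an azure function!";

    var personalization = new Personalization();
    personalization.AddTo(new Email("joachim.lovf@xyz.com"));

    var content = new Content
    {
        Type = "text/plain",
        Value = $"The name was {name}"
    };

    message.AddContent(content);
    message.AddPersonalization(personalization);

    return req.CreateResponse(HttpStatusCode.OK, $"Hello {name}");
}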

Let me know how you get on in the comments!

Azure Function as an HTTP endpoint

Article three describing my first foray into serverless computing. Introduction here, storage queue triggered function here. TL;DR: I’m using this as a way to familiarize myself with Azure Functions, and to save some precious family time better spent not trying to remember what we need come Sunday.

In my projects, we always end up integrating against some kind of third party – be it booking systems, news aggregators, external providers of content or something as simple as an image resizer. Common to the integrations is the need to transform the data received into a format with which we want to work. Sometimes we need to enrich the data by combining it with other sources, and other times the data exposed by the external API is more than we need.

For this project, I had to get product data from Kolonial.no‘s API by barcode in order to add the product to my cart using their internal ID. Their API provides a product endpoint, but it exposes much more data than I needed. I decided to abstract it away behind a GET endpoint of my own.

This is a philosophy I try to follow in all projects I’m responsible for. It means the business code can trust that the contracts we’ve agreed upon won’t break suddenly because of any external factors, and third parties are kept at the edges of the system. Another advantage in this case is that I might want to cache the product data returned at some point, and with the endpoint (function) in place, I have an easy way to implement said caching without having to modify any business code.

So, an HTTP endpoint was needed, and since I’m working in a serverless architecture, that meant I had to create an HTTP triggered function:

azure-functions-http-trigger.png
The HTTP triggered function is found in the “API & WebHooks” scenario.

Note that you can specify the access/authorization level for the function:

azure-functions-http-authorization-level.png

“Function” means the caller needs to provide a key which is specific to the function in the query string. “Admin” means an app wide key is needed, while “anonymous” will allow anyone who accesses the URL to trigger the function.

Now, the HTTP trigger is not always used as a RESTful endpoint – it can just as well trigger a processing job, write to a storage table or whatever else you need it to do, but in this case I wanted to return a product object in the format determined by the Accept header of the request.

In order to make the function return an HTTP response, the function needs to have an HTTP output parameter, which is set up by default when you create the function. I left everything as it was, and focused on writing the small function:
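A rough sketch of it follows – the Kolonial.no paths and field names, as well as the trimmed-down product model, are placeholders; the structure is the interesting part:

#r "Newtonsoft.Json"

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

// Kept as a static so it survives between invocations - see the note on HttpClient below
static HttpClient HttpClient;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    if (HttpClient == null)
        HttpClient = CreateHttpClient(log);

    // The barcode arrives as a query string parameter
    string barcode = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "barcode", true) == 0)
        .Value;

    if (string.IsNullOrEmpty(barcode))
        return req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a barcode on the query string");

    // Look the product up in the external API (the path is a placeholder)
    var response = await HttpClient.GetAsync($"products/?barcode={barcode}");
    if (!response.IsSuccessStatusCode)
        return req.CreateResponse(HttpStatusCode.NotFound);

    // Transform the (much larger) external product into the slim model we actually need
    var externalProduct = JObject.Parse(await response.Content.ReadAsStringAsync());
    var product = new
    {
        Id = (string)externalProduct["id"],
        Name = (string)externalProduct["full_name"]
    };

    // CreateResponse takes care of content negotiation (json/xml) based on the Accept header
    return req.CreateResponse(HttpStatusCode.OK, product);
}

static HttpClient CreateHttpClient(TraceWriter log)
{
    log.Info("Creating HttpClient");
    // Base address, user agent and any tokens the API requires go here
    return new HttpClient { BaseAddress = new Uri("https://kolonial.no/api/v1/") };
}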

Hopefully, the code should be pretty self-explanatory. There are a couple of things which are good to know, though: Newtonsoft.Json is available in azure functions, but it must be imported:

#r "Newtonsoft.Json"

 

And while I very much doubt that I would ever exhaust anything with the small scale of my project, the Microsoft patterns and practices team do recommend that HttpClient is instantiated as few times as possible, and instead kept in memory and re-used. That’s why the static HttpClient is created the first time the function is triggered and kept around:

if(HttpClient == null)
    HttpClient = CreateHttpClient(log);

What’s interesting is that despite the function in principle being serverless, the static variable will hang around (albeit for an unpredictable amount of time), effectively allowing us to follow best practice. You can read more about sharing state in Azure Functions on Mark Heath’s excellent blog.

When it comes to the formatting of the return value, the framework will take care of that for you as long as you stick to the request.CreateResponse(…) functions. As an example, this is how the function responds to an Accept: application/json request:

json

While requesting XML, predictably, will return this:

xml

(Both screenshots courtesy of the wonderful Postman Chrome plugin)

And that’s that – a small integration with an external service, contained in a single azure function.

Azure functions – storage queue trigger

As I described in the previous blog post (in Norwegian), I’ve spent a few hours over the last couple of days setting up a barcode scanner in our kitchen. The barcode scanner is attached to a raspberry pi, and when a barcode is scanned, the barcode is pushed to an azure storage queue and the product eventually ends up in our kolonial.no shopping cart.

I’m using the project to familiarize myself with Azure Functions before utilizing them in customer projects, and I decided to use a storage queue triggered function to process incoming barcodes. Read more about storage queues at microsoft.com. They’re basically lightweight, high volume and low-maintenance distributed queues.

Creating the function

Before coding the function, it has to be set up as part of an Azure Function app. An azure function app is basically a special version of a web app. The app can be created either via the Azure Functions Portal or through the regular azure portal.

Side note: While working on this, the continuous deployment of a totally unrelated app in the same azure subscription suddenly started failing when a PowerShell command tried to switch deployment slots, with the error message

Requested value 'Dynamic' was not found.

This had me scratching my head for quite some time, but some googling revealed that adding a function app to an existing subscription will (may?) break PowerShell functionality. The fix was to completely delete the function app. YMMV.

Once the app is set up, it’s time to create the function. As mentioned, we want an azure storage queue triggered function, and I opted for C#:

create-azure-function-storage-trigger.png

Selecting the trigger type will reveal a few options below the “Choose a template” grid:

name-function

Here we give the function a descriptive name and enter the name of an existing queue (or a new one) in the “Queue name” field. The storage account connection field is a drop down of the storage accounts available. It obviously needs to be set to the storage account we want the queue to be stored in. Once we click create, the function is added to the function app, and it will be triggered (executed) every time a new message is added to the storage queue “incoming-barcodes”. This configuration (queue name, storage account) can be changed at any time by clicking “Integrate” beneath the function name in the function portal:

integrate.png

The next step is to actually write the function. In this first version, everything is done in one function call, and we’re only covering the happy path: we assume the kolonial account exists, that the password is correct and that the product exists. If not, the message will end up in the poison queue, or we simply log a warning. A natural next step would be to alert the user that an error occurred, but that’s for another day.

The default entry point for the function is a static method called “Run” (I realize that RunAsync would be more correct with regards to naming async methods, but I’m sticking as close as I can to the defaults):

First, we extract the raspberry ID and barcode from the incoming message with a small data transport class (IncomingBarcode), since the barcode is passed to the function by the raspberry pi in the format “rpi-id:barcode”.
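A minimal sketch of that part (the queue item parameter name and the exact shape of IncomingBarcode are assumptions):

public class IncomingBarcode
{
    public string RaspberryId { get; set; }
    public string Barcode { get; set; }
}

public static async Task Run(string myQueueItem, TraceWriter log)
{
    // Messages arrive in the format "rpi-id:barcode"
    var parts = myQueueItem.Split(':');
    var incoming = new IncomingBarcode { RaspberryId = parts[0], Barcode = parts[1] };
    log.Info($"Received barcode {incoming.Barcode} from {incoming.RaspberryId}");

    // ...set up the kolonial.no client, look the product up and add it to the cart (see below)
}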

The kolonial API needs to be set up with a user agent and a token, and in order to access a specific user’s cart we also need to get a user session. That’s all handled by the CreateKolonialHttpClientAsync function:

As can be seen in the gist above, configuration values are handled just as in regular .net code, by utilizing the ConfigurationManager. The settings themselves are set via the function app settings:

app-settings
Navigating to the app settings: Function app settings -> Configure app settings…
appsettigns
…and then adding the setting as usual (you’ll recognize the settings blade from ordinary azure web apps).
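For illustration, reading the settings from the function code looks like this (the setting keys below are just examples):

#r "System.Configuration"

using System.Configuration;

// The keys correspond to whatever you added under "Configure app settings"
var kolonialUserAgent = ConfigurationManager.AppSettings["KolonialUserAgent"];
var kolonialToken = ConfigurationManager.AppSettings["KolonialUserToken"];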

Once the connection/session to kolonial.no is set up, we attempt to get the product by its barcode. I’ve separated the “get product from kolonial and transform it to a model I need” part into a separate HTTP triggered azure function, which I’ll cover later, so there’s not a whole lot of logic needed; if the function returns non-null JSON, the barcode is a valid kolonial product, which is returned – if not, we return null.

As can be seen in the Run method, all that’s left to do when the product exists, is to add it to the cart. This is done by POSTing to the /cart/items endpoint:
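Sketched out, it looks something like the helper below – the helper name and, in particular, the payload shape are placeholders, so check the Kolonial.no API documentation for the real contract:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json; // requires #r "Newtonsoft.Json" at the top of the function file

static async Task AddToCartAsync(HttpClient kolonialClient, string productId)
{
    // The payload shape here is only a placeholder
    var payload = new { items = new[] { new { product_id = productId, quantity = 1 } } };
    var content = new StringContent(JsonConvert.SerializeObject(payload), Encoding.UTF8, "application/json");

    var response = await kolonialClient.PostAsync("cart/items/", content);
    response.EnsureSuccessStatusCode();
}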

That’s all there is to it.

Dev notes

I tried setting the project up in Visual Studio, but the development experience for Azure Functions leaves a lot to be desired (the tools are still in beta), so I ended up coding the function in the function portal.

Testing a storage queue triggered function is actually pretty easy. I used the Azure Storage Explorer to manually add entries to the queue when developing.

When working with REST APIs, I like to have strongly typed models to work with. An easy way to create them is to paste example JSON responses into http://json2csharp.com/, which will generate C# classes for you.
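For instance, pasting a made-up response like { "id": 123, "full_name": "Whole milk" } into the site gives you something along these lines, ready to be renamed and used with JsonConvert.DeserializeObject:

// Generated by json2csharp.com (the default root class name is RootObject)
public class RootObject
{
    public int id { get; set; }
    public string full_name { get; set; }
}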