Deploying a code first entity framework database in Azure DevOps

I spent most of Friday banging my head against Azure’s new devops experience, trying to get a database migration set up as part of a web app deployment. The project was a .net core 2.1 web site with an Entity Framework database, and we hit a surprising number of hurdles along the way. Hopefully, this write-up will help others in the same situation save some time.

Our solution, at least for the purposes of this post, is made up of a web app project containing the business logic and a .NET Standard class library with the EF code first classes (note that the database lives in a separate project – a setup most tutorials fail to address).

The first step of setting up the pipeline is creating a build in azure devops:

azure-devops-new-build

We set it up against our source code provider and started out with the “ASP.NET Core” template – in fact, we did not have to alter any of the defaults for it to work straight out of the box.

Getting the database up and running was another story, however. Articles, tips and tutorials online are a bit outdated, and provide solutions which no longer work or are no longer necessary (e.g. adding Microsoft.EntityFrameworkCore.Tools.DotNet to the DB project, which is no longer required and generates a build warning).

Generate migration script

The first step is to generate the migration script as part of the build, which the release step(s) will run against the database further down the line.

We gave up getting the built in .net core task to work with entity framework (we could not get past the error message ‘No executable found matching command “dotnet-ef”‘ regardless of what we tried), so we fell back to a good ol’ command line task:

command-line-task

And for your copying needs:

dotnet ef migrations script -i -o %BUILD_ARTIFACTSTAGINGDIRECTORY%\migrate.sql --project EfMigrationApp.Database\EfMigrationApp.Database.csproj --startup-project EfMigrationApp\EfMigrationApp.csproj

You will obviously need to replace the project names with your own.

A quick breakdown of the command:

dotnet ef migrations script: the command to generate a migration script

-i: i is for idempotent, i.e. the generated script can be run multiple times against the same database without conflicts – migrations that have already been applied are skipped.

-o %BUILD_ARTIFACTSTAGINGDIRECTORY%\migrate.sql: the migration script will be placed in the artifact staging directory, along with the rest of the build output

--project EfMigrationApp.Database\EfMigrationApp.Database.csproj: the project containing the database definition

--startup-project EfMigrationApp\EfMigrationApp.csproj: tells EF which project is the application's startup project, so the tooling can use its configuration at design time.

Run migrations in the release pipeline

I’m sure there are many ways to run sql scripts in the release step (both command line tasks and powershell tasks could be utilized), but we landed on the predefined “Azure SQL Publish” task, which we added after the web app deploy task:

release-db

Fill in the db details according to your project, and the deployment package section with these values:

Action: Publish

Type: SQL script file

Sql script:

$(System.ArtifactsDirectory)/_$(Build.DefinitionName)/drop/migrate.sql (note the underscore before the build.definitionname variable – I suspect there’s a system variable we could use instead)

And that’s basically it – running the build and release pipeline will deploy the web app first, then migrate the database according to your latest EF code goodness. Enjoy!


Time zone and group by day in influxdb

All right, time for a slightly technical one.

At my current job, we do a lot of work on time series values, and we have recently started using InfluxDB, a blazingly fast time series database written in Go. In order for our customers to get an overview of the devices they are operating, they want to see a certain metric on a per-day basis. This meant we had to find a way to group the data by day. InfluxDB, being written for tasks like this, has a very nice syntax for time grouping:

select mean(value) from reportedValues where time > 1508283700000000000 group by time(1d), deviceId

The query above returns a nice list of exactly what we asked for – a list of the devices in question and their average value, grouped by day:

name: reportedValues
tags: deviceId='bathroom-window'
time mean
---- ----
2017-10-17T00:00:00Z
2017-10-18T00:00:00Z 1.02

name: reportedValues
tags: deviceId='kitchen-window'
time mean
---- ----
2017-10-17T00:00:00Z 0.4
2017-10-18T00:00:00Z 0.75

We did run into an issue, however, with time zones. Our customers are in different time zones (none of them UTC, which is what all values in InfluxDB are stored as), so when grouping on days especially, we had to find a way to group on the day in the time zone in question.

Luckily, v1.3 of InfluxDB introduced the time zone clause. Appending TZ('Europe/Oslo') to the query above should, in theory, give us the same time series, grouped slightly differently. We did run into a slight roadblock here, though. The query

select mean(value) from reportedValues where time > 1508283700000000000 group by time(1d), deviceId TZ('Europe/Oslo')

returned

ERR: error parsing query: unable to find time zone Europe/Oslo

and we got the same result regardless of which time zone we tried (even the one mentioned in the documentation, “America/Los_Angeles”, failed).

I then tried the exact same query on a linux VM I had running, and lo and behold:

name: reportedValues
tags: deviceId='bathroom-window'
time mean
---- ----
2017-10-18T00:00:00+02:00 1.02

name: reportedValues
tags: deviceId='kitchen-window'
time mean
---- ----
2017-10-18T00:00:00+02:00 0.6333333333333334

(Note that the averages differ because the day boundaries have shifted, and that the timestamps now reflect the time zone used in the query.)

So obviously, this was something Windows specific. I noticed that the GitHub PR which added the TZ clause uses Go's standard time package, calling the LoadLocation function. The docs for that function state that “The time zone database needed by LoadLocation may not be present on all systems, especially non-Unix systems”, so I was obviously on to something. There's a Go user reporting something similar at https://github.com/golang/go/issues/21881, and the title of that issue solved this for me: to get this to work on my local Windows machine, all I had to do was

install Go and restart the influx daemon (influxd.exe)

Presumably this works because installing Go puts the time zone database that LoadLocation falls back to (GOROOT\lib\time\zoneinfo.zip) on the machine.


XML docs in service fabric web api

I usually use the Swashbuckle swagger library to auto-document my web APIs, but ran into issues when trying to deploy said APIs to a Service Fabric cluster – the swashbuckle/swagger initialization code threw a “file not found” exception when trying to load the XML file generated by the build.
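
For context, this is roughly the kind of initialization code that fails – a minimal sketch assuming Swashbuckle.AspNetCore and an illustrative WebApi.xml file name; the IncludeXmlComments call is the one that blows up when the documentation file is missing from the package:

using System;
using System.IO;
using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "WebApi", Version = "v1" });

            // This call throws a "file not found" exception at startup if the XML
            // documentation file was not generated for the build configuration
            // (e.g. Release|x64) used to package the Service Fabric service.
            var xmlPath = Path.Combine(AppContext.BaseDirectory, "WebApi.xml");
            c.IncludeXmlComments(xmlPath);
        });
    }
}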

In order to get it to work, I edited the csproj file to generate the XML files for the x64 configurations as well – from

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
  <DocumentationFile>bin\Debug\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
  <DocumentationFile>bin\Release\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>


to

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
  <DocumentationFile>bin\Debug\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
  <DocumentationFile>bin\Release\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
  <DocumentationFile>bin\Debug\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
  <DocumentationFile>bin\Release\net461\win7-x64\WebApi.xml</DocumentationFile>
</PropertyGroup>

That’s twice this has had me stumped for a few minutes; hopefully this short post can help somebody else as well.

.net Web API and Route Name not found in Route Collection

Another entry in the series “tweets and blog posts I write mostly to have somewhere to look the next time I encounter this error”: this time, I spent too much time figuring out why this piece of code in an MVC view:

@Url.HttpRouteUrl("GetPerformance", new { id = Model.PerformanceId })

combined with this (working, I might add) API endpoint

[RoutePrefix("api/performances")]
public class PerformancesController : ApiController
{
    [HttpGet]
    [Route("", Name="GetPerformance")]
    public Performance GetPerformances(string id = "")
    {
        // Some code...
    }
}

did not resolve to the expected /api/performances URL, but instead presented me with a glorious yellow screen proudly displaying an ArgumentException with the exception message

A route named ‘GetPerformance’ could not be found in the route collection.
Parameter name: name

As is often the case, it turns out I had tried to be a little too clever: this was an Episerver project with the legacy global.asax.cs file doing the MVC routing, while the Web API was set up in the OWIN startup.cs class, with a new HttpConfiguration instance created there and attached with app.UseWebApi.

To resolve the error, I had to tie the Web API registration to GlobalConfiguration.Configuration instead of a new instance in startup.cs. With that done, both MVC and API routing were aware of each other, the error went away, and I was able to programmatically create web API route links in MVC views.
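
For reference, a minimal sketch of the shape of the fix in startup.cs (class names are illustrative, and your own route/config registrations go inside the Configure callback):

using System.Web.Http;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Previously: var config = new HttpConfiguration();
        //             config.MapHttpAttributeRoutes();
        //             app.UseWebApi(config);
        //
        // Registering the routes on GlobalConfiguration instead means the MVC
        // side (Url.HttpRouteUrl) can resolve the named Web API routes.
        GlobalConfiguration.Configure(config =>
        {
            config.MapHttpAttributeRoutes();
        });
    }
}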

Green/blue deployments on Azure with Octopus Deploy

Pushing the “promote to production” button should be fun, not scary. Implementing a blue/green deployment strategy is one way to alleviate the stress around pushing to prod.

OctopusDeployKeepCalm
Image borrowed from https://ianpaullin.com/2014/06/27/octopus-deploy-series-conclusion/

Anyone who has ever deployed a piece of code to production knows how nerve-wracking the process can be. Sure, devops practices such as continuous integration and deployment have reduced the need for manual testing and 20-page installation manuals, but nevertheless: clicking that “Promote to production” button still has a tendency to prove Murphy’s law, and we are always looking for ways to reduce the risk even further.

Enter blue/green deployment. The principle is better explained elsewhere, but in short it is a way to promote a build to production and ensure it is working as expected in a production environment before exposing it to actual end users. I’ll stick to Cloud Foundry’s definition of blue/green in this article, meaning “blue” is the live production version and “green” is the new one.

More and more of my work is hosted on Azure these days, so setting up blue/green deployments for azure web sites has been a priority. Microsoft have already got the concept covered, with deployment slots, so it was just a matter of getting that to play nicely with Octopus Deploy, our go-to deployment server.

Since we are running quite a few web sites (and other sorts of resources), keeping the costs down was also a priority. That meant that I did not want all web sites to have two deployment slots at all times (doubling the cost of each web site). Instead, I wanted the process to create the green (new) slot at the start of the process and remove it once the deployment was completed successfully. Octopus themselves have a guide for this, but I found it a bit lacking in detail, especially around the service principal/resource management parts. Hopefully, this guide will get you up and running without you having to fight through Octopus’ less than informative error messages.

The process I had to set up, from source code change to live in production, was this.

  1. Get and build changes from source code repository.
  2. Run unit and integration tests
  3. Create green slot
  4. Deploy changes to green slot
  5. Warm up web site and run smoke tests
  6. Swap deployment slots (swap blue and green)
  7. Delete the deployment slot created in step 3.

Steps 1 and 2 are handled by our CI server, which pushes the build artifacts to Octopus’ nuget feed. Step 4 was already in place, so I needed to configure octopus to create the new slot, swap the new slot with current production, and delete it. In addition, I wanted to have a few rudimentary tests in place to verify the green slot before swapping it into the live production slot.

This is a screenshot of the complete build configuration, covering all the steps above:

blue-green-process

A few words on Azure resources and powershell scripting

Octopus has a built-in step template called “Run an Azure Powershell script” which is well suited for our purpose. However, since we need to create (and delete) resources via powershell, the scripts have to be run in the context of a user with the permissions to do so. In Octopus, that means you have to have an Azure connection/account configured as a service principal. Creating a service principal is a rather lengthy process, but the script at https://octopus.com/docs/guides/azure-deployments/creating-an-azure-account/creating-an-azure-service-principal-account eases the process. I strongly urge you to change the password before running it.

Note that in order to get the resource manager cmdlets to work, I had to install the AzureRM module on the VM hosting octopus. That is done by starting powershell with administrator privileges and doing

Install-Module AzureRm

Creating the deployment slot

The first step of the deployment process is to create the green deployment slot. In the octopus process screen, click “Add step” and then “Run an Azure Powershell script”. Give the step a descriptive name (I chose “Create deployment slot”).

Under “Account”, you will have to select the service principal you created. If you have not added it to Octopus already, now is the time: select “Add new…” next to the Account drop down. The screenshot below indicates which fields you need to fill out (make sure you check the “service principal” radio button first). The IDs (GUIDs) you need are all part of the output of the powershell script mentioned above.

Once the service principal is tested OK and saved, select it in the powershell script step configuration. Make sure the script source is set to “Source code” and paste the line below into the script text area (substituting the resource group name and web app name with the values applicable to your deployment):

New-AzureRmWebAppSlot -ResourceGroupName your-resource-group -Name web-app-name -Slot green

Leave the environment radio button on “run for all applicable environments”, and click “Save”.

Deploy the website to the green deployment slot

I will assume you already know how to set up the deploy to azure step. Make sure the step deploys to the green slot – in my case, the step config looks like this:

deploy-to-azure
Deploy to azure octopus configuration. Note the “(cd)” part of the web app name. In this case, my green deployment slot in the web app stored in the “Azure.WebAppName” variable is called “cd”.

Test the new deployment

When the web app has been deployed (and assuming all went well), your app will now be available for testing on the deployment slot’s URL, which follows the pattern {webapp}-{slot}.azurewebsites.net – e.g. http://mywebapp-cd.azurewebsites.net for a web app called mywebapp with a slot called cd.

I am planning on running a selenium test suite against that URL, but for now I am merely verifying that the front page returns 200 OK. For that, I have set up a powershell step in octopus which runs the following script (with thanks to Steve Fenton). The web site in question is based on a CMS which in no way prides itself on a quick startup, hence the long waits and retries:

Write-Output "Starting"

$MaxAttempts = 10

If (![string]::IsNullOrWhiteSpace($TestUrl)) {
    Write-Output "Making request to $TestUrl"

    Try {
        $stopwatch = [Diagnostics.Stopwatch]::StartNew()
        # Allow redirections on the warm up
        $response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 10
        $stopwatch.Stop()
        $statusCode = [int]$response.StatusCode
        Write-Output "$statusCode Warmed Up Site $TestUrl in $($stopwatch.ElapsedMilliseconds) ms"
    } Catch {
        $_.Exception | Format-List -Force
    }

    For ($i = 0; $i -lt $MaxAttempts; $i++) {
        Try {
            Write-Output "Checking Site"
            $stopwatch = [Diagnostics.Stopwatch]::StartNew()
            # Don't allow redirections on the check
            $response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 0
            $stopwatch.Stop()

            $statusCode = [int]$response.StatusCode

            Write-Output "$statusCode Second request took $($stopwatch.ElapsedMilliseconds) ms"

            If ($statusCode -ge 200 -And $statusCode -lt 400) {
                break;
            }

            Start-Sleep -s 5
        } Catch {
            $_.Exception | Format-List -Force
        }
    }

    If ($statusCode -ge 200 -And $statusCode -lt 400) {
        # Hooray, it worked
    } Else {
        throw "Warm up failed for " + $TestUrl
    }
} Else {
    Write-Output "No TestUrl configured for this machine."
}

Write-Output "Done"

If and when this script returns successfully, it’s time to

Swap the green and blue slots

We have established that the green slot version is good to go. To promote it to the production slot, we need to swap the blue and green slots. This is done by another azure powershell octopus step (which should be run under a resource manager account), with the following script:

Switch-AzureWebsiteSlot -Name #{Azure.WebAppName} -Slot1 "green" -Force

This will cause the (already warmed up and tested) deployment in the green slot (“cd” in my setup) to be swapped with the current production version.

All that’s left now, is to clean up after ourselves:

Remove the green slot

The “green” slot is now redundant and can be removed. Yet another azure powershell step is needed:

Remove-AzureRmResource -ResourceGroupName your-resource-group -ResourceType Microsoft.Web/sites/slots -Name #{Azure.WebAppName}/green -ApiVersion 2015-07-01 -Force

I have chosen to only run this step if all other steps succeed. YMMV, but I found that troubleshooting is much easier when the environment still exists.

Summary

Hopefully, this should have you up and running with green/blue deployments. There are certainly more thorough ways of doing this (ensuring that the front page loads is certainly not an exhaustive test of a new version), but this article will leave you with a process which can be extended to your liking. As briefly mentioned, I am working on a selenium test suite which I plan to plug into this project – I expect it to result in a blog post as well.

Sendgrid and Azure Functions

Azure Functions recently introduced SendGrid as an output type. The feature is currently in beta/preview and the documentation is, as such, very sparse, so I thought I’d quickly write up how I got it working. Hopefully, it might save you some googling (then again, you probably googled your way here, but I digress).

I currently use SendGrid to mail a shopping list, built up through the use of a barcode scanner in our kitchen, to myself every Sunday. However, since the content of the email is somewhat irrelevant in this case, we’ll build a function which sends a mail whenever its HTTP endpoint is triggered/accessed.

As a prerequisite for this short tutorial you will need an active sendgrid account. They are kind enough to offer a free tier, so head on over and register if you haven’t already.

I’ll assume you have already created an azure function app (if not, follow Microsoft’s instructions). In it, create a new function; select the HttpTrigger-Csharp type, give it a name and click Create.

After a short wait, you’ll be presented with your brand new function. The default implementation of an HTTP trigger will return a name sent to it either in the body of a POST request or as a query string parameter. For this example’s sake, we will email that name to a hard coded email address as well.
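
For reference, the default implementation you get with the C# HTTP trigger template looks roughly like this (quoted from memory, so treat it as a sketch rather than the exact template code):

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Parse the name from the query string...
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // ...or from the request body
    dynamic data = await req.Content.ReadAsAsync<object>();
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}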

First, add a new out parameter to the function. To do so, select “Integrate” beneath the function name in the panel on the left hand side, and then click on “+New output”:

new-output.png
Creating a new output parameter in azure functions.

Then, select “SendGrid (Preview)” (you might have to scroll down a little in the output type list) and hit “Select”. The following screen is not as self-explanatory as it should be, but here’s the deal: the message parameter name is what you think it is – it’s the name of the parameter in your function which will contain the email message. The SendGrid API Key value, however, shouldn’t be populated with your actual SendGrid API key, but with the name of the app setting containing that key. Leave the value at its default (SendGridApiKey). You can leave the other settings (from & to address, subject and message text) empty, as we’ll populate them via the function code instead. Save the new parameter.

Now it is time to get the API key from sendgrid and add the required app setting. You create the API key at sendgrid.com. Go to settings -> API Keys.

settings-apikeys

then “Create API key” and “General API Key”

general-api-key

Give the Api Key a descriptive name (I chose “Azure functions sendgrid example”) and make sure you grant the API key full access to “Send mail”:

mail-send

When you hit save, the API key will be displayed. Copy the key to the clipboard (it might also be an idea to keep that browser window open until you’ve verified it works, since that screen is the first and last time sendgrid will ever show you the key).

Moving back to the azure portal, it’s time to create the app setting. Click “Function app settings” in the bottom left corner:

appsettings

and then “Go to app service settings.”

There, select “Application Settings” and add a new App Setting. Its key has to be “SendGridApiKey” (as we specified in the sendgrid parameter setting). Paste the sendgrid api key as its value, click save and close the app settings blade. You should be returned to the azure function. Click on “Develop” under the function name – it’s time to write some code!

Since we have specified a new out parameter called “message”, the first thing we have to do is to add that. The sendgrid parameter type expects a “Mail” type object, found in the SendGrid.Helpers.Mail namespace. We also have to import the sendgrid assembly. Add the following two lines at the very top of the function:

#r "SendGrid"

using SendGrid.Helpers.Mail;

Add the new out parameter to the function signature and remove the async and Task parts (out parameters do not play nice with async).

public static HttpResponseMessage Run(
    HttpRequestMessage req, 
    TraceWriter log, 
    out Mail message)

Removing async means you will also have to change the following line

dynamic data = await req.Content.ReadAsAsync<object>();

to

dynamic data = req.Content.ReadAsAsync<object>().Result;

When that is done, all the boilerplate/infrastructure is in place, and it is just a matter of composing the email. The entire sendgrid C# API is documented at github, but the example below is a minimal, working implementation:

message = new Mail();
message.Subject = "Someone passed a name to an azure function!";

var personalization = new Personalization();
personalization.AddTo(new Email("joachim.lovf@xyz.com"));

Content content = new Content
{
    Type = "text/plain",
    Value = $"The name was {name}"
};

message.AddContent(content);
message.AddPersonalization(personalization);

That’s everything that is needed to integrate SendGrid with Azure Functions. The complete function, originally published as a gist, is included below for reference.
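
Putting the snippets above together, the complete run.csx looks roughly like this (assembled from the pieces in this post; the to-address is just the example one):

#r "SendGrid"

using System.Net;
using SendGrid.Helpers.Mail;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log, out Mail message)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Name from the query string...
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // ...or from the request body (no await, since the out parameter rules out async)
    dynamic data = req.Content.ReadAsAsync<object>().Result;
    name = name ?? data?.name;

    // Compose the email
    message = new Mail();
    message.Subject = "Someone passed a name to an azure function!";

    var personalization = new Personalization();
    personalization.AddTo(new Email("joachim.lovf@xyz.com"));

    Content content = new Content
    {
        Type = "text/plain",
        Value = $"The name was {name}"
    };

    message.AddContent(content);
    message.AddPersonalization(personalization);

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}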

Let me know how you get on in the comments!

Azure Function as an HTTP endpoint

Article three describing my first foray into serverless computing. Introduction here, storage queue triggered function here. TL;DR: I’m using this is as a way to familiarize myself with Azure Functions, and to save some precious family time better spent not trying to remember what we need come Sunday.

In my projects, we always end up integrating against some kind of third party – be it booking systems, news aggregators, external providers of content or something as simple as an image resizer. Common to the integrations is the need to transform the data received into a format with which we want to work. Sometimes we need to enrich the data by combining it with other sources, and other times the data exposed by the external API is more than we need.

For this project, I had to get product data from Kolonial.no‘s API by barcode in order to add the product to my cart using their internal ID. Their API provides a product endpoint, but it exposes much more data than I needed. I decided to abstract it away behind a GET endpoint of my own.

This is a philosophy I try to follow in all projects I’m responsible for. It means the business code can trust that the contracts we’ve agreed upon won’t break suddenly because of any external factors, and third parties are kept at the edges of the system. Another advantage in this case is that I might want to cache the product data returned at some point, and with the endpoint (function) in place, I have an easy way to implement said caching without having to modify any business code.

So, an HTTP endpoint was needed, and since I’m working in a serverless architecture, that meant I had to create an HTTP triggered function:

azure-functions-http-trigger.png
The HTTP triggered function is found in the “API & WebHooks” scenario.

Note that you can specify the access/authorization level for the function:

azure-functions-http-authorization-level.png

“Function” means the caller needs to provide a key which is specific to the function in the query string. “Admin” means an app wide key is needed, while “anonymous” will allow anyone who accesses the URL to trigger the function.

Now, the HTTP trigger is not always used as a RESTful endpoint – it can just as well trigger a processing job, write to a storage table or whatever else you need it to do, but in this case I wanted to return a product object in the format determined by the Accept header of the request.

In order to make the function return an HTTP response, the function needs to have an HTTP output parameter, which is set up by default when you create the function. I left everything as it was, and focused on writing the small function:
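
The function itself was originally embedded as a gist; since it is not reproduced here, the following is a minimal sketch of the idea. The Kolonial.no URL, the response fields and the Product class are illustrative stand-ins, and I’m assuming the barcode arrives as a query string parameter:

#r "Newtonsoft.Json"

using System.Net;
using System.Net.Http;
using Newtonsoft.Json;

// Kept static so the instance is re-used across invocations (see below)
private static HttpClient HttpClient;

// Illustrative DTO – the real function only exposes the fields the business code needs
public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    if (HttpClient == null)
        HttpClient = CreateHttpClient(log);

    string barcode = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "barcode", true) == 0)
        .Value;

    if (string.IsNullOrWhiteSpace(barcode))
        return req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a barcode on the query string");

    // Hypothetical URL – substitute the real product-by-barcode endpoint
    var response = await HttpClient.GetAsync($"https://kolonial.no/api/v1/products/?barcode={barcode}");
    if (!response.IsSuccessStatusCode)
        return req.CreateResponse(HttpStatusCode.NotFound, $"No product found for barcode {barcode}");

    // The external payload contains far more than we need – map it down to our own contract
    dynamic external = JsonConvert.DeserializeObject(await response.Content.ReadAsStringAsync());
    var product = new Product
    {
        Id = (string)external.id,
        Name = (string)external.name
    };

    // CreateResponse takes care of content negotiation (json/xml) based on the Accept header
    return req.CreateResponse(HttpStatusCode.OK, product);
}

private static HttpClient CreateHttpClient(TraceWriter log)
{
    log.Info("Creating HttpClient");
    var client = new HttpClient();
    // Add whatever authentication headers the external API requires here
    return client;
}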

Hopefully, the code should be pretty self-explanatory. There are a couple of things which are good to know, though: Newtonsoft.Json is available in azure functions, but it must be imported:

#r "Newtonsoft.Json"


And while I very much doubt that I would ever exhaust the available sockets at the small scale of my project, the Microsoft patterns and practices team do recommend that HttpClient is instantiated as few times as possible, and instead kept in memory and re-used. That’s why the static HttpClient is created the first time the function is triggered and kept around:

if(HttpClient == null)
    HttpClient = CreateHttpClient(log);

What’s interesting is that despite the function in principle being serverless, the static variable will hang around (albeit for an unpredictable amount of time), effectively allowing us to follow best practice. You can read more about sharing state in Azure Functions on Mark Heath’s excellent blog.

When it comes to the formatting of the return value, the framework will take care of that for you as long as you stick to the request.CreateResponse(…) functions. As an example, this is how the function responds to an Accept: application/json request:

json

While requesting XML, predictably, will return this:

xml

(Both screenshots courtesy of the wonderful Postman Chrome plugin)

And that’s that – a small integration with an external service, contained in a single azure function.