Containerise Asp.NET API with a SQL Server DB on Azure DevOps

Mahdi Karimipour
6 min read · Sep 3, 2021

While enabling containers for an Asp.NET API that has a SQL Server DB, you will face a few problems along the way: secret management, database connections, pushing to an artifact feed, and so on. This guide is a step-by-step walkthrough that addresses all those problems and ends with a container image pushed to an artifact feed through an automated pipeline.

Problems?

At first, adding container support using Visual Studio’s magic button seems easy. However, if you have an application with a few dependencies and specific ways to manage things, it soon becomes a bit tricky, as you will face a few challenges. Some of the dependencies I cover in this guide are:

  • Database: SQL Server DB
  • Artifact Feed: Azure Artifacts
  • Secret Storage: Azure KeyVault

By the way, this topic belongs to the series to set up Infrastructure for your React Apps and Asp.NET APIs.

  1. Containerise Asp.NET 5.0 API
  2. Containerise React Apps Hosted by Asp.NET 5.0
  3. Use Docker-Compose to Run a Multi-Container Application
  4. Setup Minikube for Asp.NET APIs and React Apps
  5. Kubernetes with Asp.NET, React and Azure DevOps

Project Structure

I assume that, similar to many API projects, you have a structure like the one below, which impacts some of the commands in your Dockerfile.
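For reference, a typical layout along these lines might be (folder and project names here are illustrative, not taken from the original article):

```
Solution.sln
Dockerfile
.dockerignore
nuget.config
Api/
  Api.csproj
  Program.cs
  Startup.cs
  Controllers/
```

The important point is that the Dockerfile sits next to the solution file, so the paths in COPY and restore commands are relative to the repository root.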

And based on that project structure, here is the final Dockerfile, which I will explain fully below:
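A sketch of such a Dockerfile follows; the project paths (Api/Api.csproj), the feed URL placeholders and the stage names are assumptions for illustration, so adjust them to your own solution:

```dockerfile
# Runtime stage: only what is needed to run the API (the 'base' stage)
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80

# Build stage: full SDK plus the Azure Artifacts Credential Provider
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build

# The PAT is supplied at build time via --build-arg; never hardcode it
ARG PAT
ARG FEED=https://pkgs.dev.azure.com/<org>/_packaging/<feed>/nuget/v3/index.json

# Install the credential provider so NuGet can authenticate to the private feed
RUN wget -qO- https://aka.ms/install-artifacts-credprovider.sh | bash

# Hand the PAT and the feed endpoint to NuGet through the documented JSON shape
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\":[{\"endpoint\":\"${FEED}\",\"password\":\"${PAT}\"}]}"

WORKDIR /src
COPY ["Api/Api.csproj", "Api/"]
# Pass the feed explicitly instead of copying nuget.config into the image
RUN dotnet restore "Api/Api.csproj" -s ${FEED} -s https://api.nuget.org/v3/index.json
COPY . .
WORKDIR /src/Api
RUN dotnet build "Api.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Api.csproj" -c Release -o /app/publish

# Final image: runtime base plus the published output only
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Api.dll"]
```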

Multi-Stage Builds

To keep my final image size low, I am using a multi-stage build; by that I am referring to multiple FROM statements in my Dockerfile. It helps me manage the layers I add to the final image by separating what I need to ‘build’ my image (build) from what I need to ‘run’ my image (base).

Authenticate to Private Feeds

I am using Azure DevOps to manage my pipelines and artifacts; however, all my feeds are private. So in order to fetch the artifacts and libraries that my API uses, which live in my private feed, I need to authenticate to that feed right from inside the process that builds my image. For that purpose, I am using the Artifacts Credential Provider. I have also chosen to use Personal Access Tokens (PAT) for authentication.

Hardcode or Fetch

Personal Access Tokens can be generated through the Azure DevOps portal, and I will explain how shortly. Before that, we should note that tokens must NOT be hardcoded and kept as cleartext in our repositories. For that reason, I have created a build argument (PAT), which holds the PAT passed from the command line when someone executes docker build.

Credential Provider

When fetching artifacts from a private feed, NuGet needs to be supplied with credentials. The credential provider needs to be installed beforehand, and that is the purpose of the RUN wget instruction in my Dockerfile. To prepare the credentials, I am using an environment variable (VSS_NUGET_EXTERNAL_FEED_ENDPOINTS), through which I pass my PAT along with the endpoints of the private feed.
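The relevant Dockerfile instructions might look like the following; the feed URL is a placeholder you would replace with your own organisation and feed names:

```dockerfile
ARG PAT
# Install the Azure Artifacts Credential Provider so NuGet can authenticate
RUN wget -qO- https://aka.ms/install-artifacts-credprovider.sh | bash
# Provide the endpoint and PAT in the JSON shape the provider expects
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\":[{\"endpoint\":\"https://pkgs.dev.azure.com/<org>/_packaging/<feed>/nuget/v3/index.json\",\"password\":\"${PAT}\"}]}"
```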

nuget.config

Note that I am not copying the nuget.config file into my Docker image, as it interferes with my credential provider and feed settings. That is why I pass the feed address through an environment variable here, and again when I restore my packages.
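In practice, that restore step could be sketched like this, with the private feed passed as an explicit source (the project path and feed URL are placeholders):

```dockerfile
ARG FEED=https://pkgs.dev.azure.com/<org>/_packaging/<feed>/nuget/v3/index.json
# Restore against the private feed and nuget.org directly, with no nuget.config in the image
RUN dotnet restore "Api/Api.csproj" -s ${FEED} -s https://api.nuget.org/v3/index.json
```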

Pass the PAT

When you want to build your image, you can use the line below to pass the PAT and the name of your image to the docker process:

docker build --tag technologyleads.api-name:latest --build-arg PAT=your-pat-without-quotation .

If you don’t authenticate to the feed, the error message you get is: “Response status code does not indicate success: 401 (Unauthorized)”.

Note

Configuration, plumbing and troubleshooting your software foundation take a considerable amount of time in your product development. Consider using Pellerex which is a complete foundation for your enterprise software products, providing source-included Identity and Payment functions across UI (React), API (.NET), Pipeline (Azure DevOps) and Infrastructure (Kubernetes).

Create Personal Access Tokens

The thing about Personal Access Tokens is that you can revoke one and create a new one easily, without impacting or sharing the account credentials. With Azure DevOps, you simply go to your personal space in the portal and create a new one:

Image Names

There is a small point I need to mention about image names, and that is case sensitivity. By design, the name of your image must be lowercase. There doesn’t seem to be any reason behind it, but it is what it is. However, the tag (the bit that comes after :) can be mixed case.

Secret Store

By now you should have been able to authenticate to Azure Artifacts, restore your packages and build your API. However, you are not out of the woods yet. As you know, secrets should not be stored in cleartext in our repositories; that is why I am using KeyVault to manage my secrets in production, and the secret store on my local machine. To learn more, have a look at Secret Management for Asp.NET 5.0 APIs.
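On the local machine, the secret store is populated with the dotnet user-secrets tool, which writes to the UserSecrets folder under your profile; a sketch, where the project path, secret key and connection values are placeholders:

```shell
# Seed the local secret store that the container will later mount (values are placeholders)
dotnet user-secrets init --project Api/Api.csproj
dotnet user-secrets set "ConnectionStrings:Default" \
  "Server=<server>;Database=<db-name>;User Id=<db-user>;Password=<db-pass>" \
  --project Api/Api.csproj
```

On Windows these secrets land under %APPDATA%\Microsoft\UserSecrets, which is exactly the folder mapped into the container below.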

Now the question is: how would I pass my secrets to the docker process that runs my container? On your local machine, the answer is through volumes, and I simply map the secret store like below when I run my container:

docker run -e ASPNETCORE_ENVIRONMENT=Development -p 8080:80 -v C:\Users\<username>\AppData\Roaming\Microsoft\UserSecrets:/root/.microsoft/usersecrets:ro image-name

Environment

As you have noticed, I am also relying on environment variables to set the environment for my Asp.NET API, through the ASPNETCORE_ENVIRONMENT environment variable.

SQL Server Database

Last but not least, it is now time to connect my containerised Asp.NET API to the SQL Server running on my local machine. Traditionally for local testing, we use (.) as the local server name to connect to our database, but inside the container, that refers to nowhere, as my DB doesn’t exist within the container.

Now the question is: how would I get the IP address of the host (which is running the docker container) and pass it to the Asp.NET API? The answer is host.docker.internal. Subsequently, the whole connection string would look like:

Server = host.docker.internal, 1433; Database=<db-name>; Trusted_Connection=False; User Id=<db-user>; Password=<db-pass>; MultipleActiveResultSets=True

Before this works, however, you need to enable TCP connections to your SQL Server. You do that by opening SQL Server Configuration Manager and enabling the TCP/IP protocol. Don’t forget to restart the instance afterwards.

Also please note the port number. You can keep the default port if you like, but if you want to change it, you can set it through the same dialog.

Here is the error you will get if the above DB steps were not followed: Cannot authenticate using Kerberos. Ensure Kerberos has been initialized on the client with ‘kinit’ and a Service Principal Name has been registered for the SQL Server to allow Kerberos authentication.

Pellerex: Infrastructure Foundation for Your Next Enterprise Software

How are you building your software today? Do you build everything from scratch, or use a foundation to save on development time, budget and resources? For an enterprise software MVP, which might take 8–12 months with a small team, you might indeed spend 6 months on your foundation. Things like Identity, Payment, Infrastructure and DevOps all take time, while contributing little to your actual product. These features are needed, but they are not your differentiators.

Pellerex does just that. It provides a foundation that saves you a lot of development time and effort at a fraction of the cost. It gives you source-included Identity, Payment, Infrastructure and DevOps to build Web, API and Mobile apps, all integrated and ready to go on day 1.

Check out Pellerex and talk to our team today to start building your next enterprise software fast.
