Kubernetes with Asp.NET and React and Azure DevOps

Mahdi Karimipour
Oct 3, 2021 · 10 min read

Setting up Kubernetes in production while implementing cloud-native best practices could be a journey for some enterprise products as it involves DNS, Ingress, Secret and Config Management, Automated Pipelines and a few other elements. In this post, I cover all the main building blocks along with detailed implementation steps to automatically build, deploy and run Asp.NET APIs and React apps on Kubernetes.

1. Flow

As you can see in the above diagram, a fairly standard data flow for a K8s cluster includes, at the very least, a DNS, an Ingress, Services, Pods, Containers and KeyVaults. As an example, when a request to https://api.domain.com leaves your customer's mobile device, it first queries a DNS to resolve the IP address of your Ingress, is then routed to an underlying K8s Service, and eventually hits the Pods and Containers.

In this post, I cover a very detailed implementation of the above flow for four APIs and one React app, along with all the best practices involved in hosting such a system using Azure and Kubernetes.

By the way, this topic belongs to the series to set up Infrastructure for your React Apps and Asp.NET APIs.

  1. Containerise Asp.NET 5.0 API
  2. Containerise React Apps Hosted by Asp.NET 5.0
  3. Use Docker-Compose to Run a Multi-Container Application
  4. Setup Minikube for Asp.NET APIs and React Apps
  5. Kubernetes with Asp.NET, React and Azure DevOps

Prerequisite

As a prerequisite to this post, I strongly suggest reading through the local version of this setup using Minikube, where I covered topics such as Ingress, Helm, Probes, Deployment, etc.

In this post we will focus on the parts specific to production such as Secret Management & Azure KeyVault, Ingress, DNS, and Automated Build and Deployment Pipelines.

2. Helm

If Helm charts were useful for running Apps and APIs on Minikube locally, they are even more helpful for deploying and running them in production, specifically when implementing our CD pipelines.

2.1 Where to store Helm Charts?

There are multiple approaches to storing and retrieving Helm Charts for higher environments:

  • Remote Repository: a remote repository can be used to store and retrieve Helm charts at different phases of our pipeline execution, and a common location is Azure Container Registry
  • Code Repository: an even simpler way is of course storing the Charts inside your App or API repository, which helps you manage everything related to your asset, such as Ingress, TLS, etc., in one place

While the Code Repository offers some simplicity in terms of managing the charts, the remote repository model helps to keep versioning of the Charts separate from the application code. For the purpose of this post, I have chosen the Code Repository approach.

2.2 Multiple Environments

Obviously our values.yaml file needs to hold different values for each environment, and in order to achieve that, we will have a separate values file per environment:

  • values.development.yaml
  • values.production.yaml

2.3 Asp.NET API: values.production.yaml

The Helm values file for my Identity .NET API in production has dependencies on other APIs such as the Subscription API; I will cover some of its key settings shortly.
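
As a rough guide, such a production values file typically pins the image tag, the Service, the Ingress host, the probes, and the URLs of the dependent APIs. Here is a minimal sketch, assuming a chart layout similar to the one from the Minikube post; the keys and names below are illustrative, not the exact file:

replicaCount: 2

image:
  repository: pellerex/identity-api   # illustrative Docker Hub repository
  tag: "1.0.0"
  pullPolicy: Always

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  host: api.domain.com

env:
  # dependency on the Subscription API, resolved via the cluster's service DNS
  subscriptionApiUrl: http://subscription-api

livenessProbe:
  path: /health/live
readinessProbe:
  path: /health/ready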

Note

Configuration, plumbing and troubleshooting your software foundation take a considerable amount of time in your product development. Consider using Pellerex which is a complete foundation for your enterprise software products, providing source-included Identity and Payment functions across UI (React), API (.NET), Pipeline (Azure DevOps) and Infrastructure (Kubernetes).

2.4 Cross Container Communication

As we stand up Services inside the cluster as an abstraction layer on top of Pods with changing IP addresses, the question is how containers should talk to other services. Although the answer is simple, the notation is interesting: simply by pointing at http://service-name through environment variables:
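
For instance, in the container spec of the deployment template (or the env section of the values file), the dependency URLs could be passed along these lines; the variable and service names are illustrative:

env:
  - name: SubscriptionApi__BaseUrl     # hypothetical configuration key
    value: "http://subscription-api"   # the Subscription API's K8s Service name, resolved by cluster DNS

Kubernetes' internal DNS resolves the Service name to the Service's ClusterIP, which in turn load-balances across the matching Pods, so containers never need to know Pod IP addresses.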

3. Ingress

Before I get into the Ingress settings, we must first install the Ingress Controller on the cluster. This is the repo and a getting-started guide. For AKS (Azure Kubernetes Service), running the below command will do the job:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.2/deploy/static/provider/cloud/deploy.yaml

Once that is installed, let's have a look at our Ingress settings. Look how easy it is to configure your APIs to listen at api.domain.com (if you'd like to read more on the topic of routing, refer to here).
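
A minimal Ingress for the nginx controller, assuming a Service called identity-api listening on port 80 (the host and service names are illustrative), could look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: identity-api-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: api.domain.com               # requests for this host...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: identity-api       # ...are routed to this Service
                port:
                  number: 80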

3.1 CORS

CORS could be handled at different layers:

  • API Gateway
  • Ingress
  • Application

For the sake of this application, I have disabled CORS handling at my API layer, and I am handling it at the Ingress layer using annotations. As you will see in the below snippet, I have included two annotations to enable CORS and specify the allowed domains.
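
With the nginx Ingress controller, those two annotations look roughly like this; the allowed origin below is illustrative:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"                           # turn on CORS handling at the Ingress
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.domain.com"   # the origin allowed to call the APIs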

For more information on handling CORS in K8s clusters refer to here.

3.2 TLS

There are two approaches in handling TLS in clusters:

  • SSL Termination: You could terminate the SSL/TLS at the entry point of your cluster (i.e. ingress or load balancer), and have the rest of your communication inside the cluster as http
  • SSL Passthrough: the traffic is not decrypted when it reaches the load balancer or Ingress; it is passed through to the application, which means end-to-end SSL.

By specifying the TLS certificate, you are basically injecting the TLS keys to manage traffic encryption at the Ingress level. The way we manage the TLS key is by adding a secret of type TLS to the cluster, and the certificate needs to be issued by a certificate authority, not self-signed.

kubectl create secret tls technologyleads-ecosystem-api-certificate --key privatekey.pem --cert certificate.crt
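
Once that secret exists in the cluster, the Ingress references it by name under its tls section, for example:

spec:
  tls:
    - hosts:
        - api.domain.com
      secretName: technologyleads-ecosystem-api-certificate   # the TLS secret created above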

4. Secret Management using KeyVault

This is, again, a topic that might take most of your time, due to the level of complexity involved in fetching secrets from KeyVaults into the Pod.

What we want is to store our secrets in Azure KeyVaults securely, and then fetch them into the cluster and mount them on a volume which the Pod will then access.

Azure Key Vault Provider for Secrets Store CSI Driver allows you to get secret contents stored in an Azure Key Vault instance and use the Secrets Store CSI driver interface to mount them into Kubernetes pods.

It integrates secrets stores with Kubernetes via a Container Storage Interface (CSI) volume. The Secrets Store CSI driver secrets-store.csi.k8s.io allows Kubernetes to mount multiple secrets, keys, and certs stored in enterprise-grade external secrets stores into their pods as a volume. Once the Volume is attached, the data in it is mounted into the container’s file system.

4.1 Flow

The way we do it is by defining a SecretProviderClass that acts like a manifest for our API, describing which secrets to fetch and which KeyVault to fetch them from. Here is this Custom Resource Definition:
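
As a sketch, a SecretProviderClass for the Azure provider could look like the following; the KeyVault, tenant, identity and secret names are placeholders, and the exact apiVersion depends on the driver version you install:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-es-identity-api-vault
spec:
  provider: azure
  secretObjects:                                    # the secrets we want surfaced from KeyVault
    - secretName: identity-api-secrets              # hypothetical Kubernetes Secret to sync into
      type: Opaque
      data:
        - objectName: ConnectionStrings--IdentityDb
          key: ConnectionStrings--IdentityDb
  parameters:                                       # where to fetch from and who is fetching
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "<cluster-managed-identity-client-id>"
    keyvaultName: "<keyvault-name>"
    tenantId: "<azure-tenant-id>"
    objects: |
      array:
        - |
          objectName: ConnectionStrings--IdentityDb   # hypothetical secret stored in the KeyVault
          objectType: secret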

Please consider the below points about this class:

  • Secret Objects: the secrets we want to fetch from Azure KeyVault
  • Parameters: explain where to fetch the secrets from and who these secrets are for, as they contain information about the KeyVault and the cluster.

Recap

Just a quick recap on what we are doing for secrets. We are simply saying Cluster A seeks to fetch some secrets from KeyVault B, and this manifest is summarised in the above SecretProviderClass. Once fetched, we will use the Secrets Store driver to mount them in a volume and make it available to the container.

Before we proceed, go ahead and install the Secrets Store CSI Driver and the Azure Key Vault provider.

4.2 Authentication to KeyVault

However, before KeyVault allows the cluster to fetch the secrets, the cluster's identity needs to be established, along with whether it has access to those secrets. Managed Identity gives us multiple ways to achieve that without the old credential-based authentication. I say without traditional credentials, because if we also used credentials here, it would become a chicken-and-egg problem, and we would need to secure that credential too.

  • System Assigned Managed Identity: an identity Azure creates on, and ties to the lifecycle of, a specific resource instance (like a VM or cluster)
  • User Assigned Managed Identity: a standalone identity we create ourselves and assign to one or more resources, which can be easier to manage over time.

What happens then is we grab that ID (for a cluster for example), and we assign roles and permissions on target resources (e.g. Azure KeyVault).

In our case, to give permission on an Azure KeyVault to a cluster, we first need to enable Managed Identity for the cluster either by:

  • Creating a new cluster with Managed Identity enabled
az aks create -g myResourceGroup -n myManagedCluster --enable-managed-identity

You could also achieve the same thing through the UI.

  • Enabling Managed Identity for an existing cluster
az aks update -g <RGName> -n <AKSName> --enable-managed-identity

I couldn't find an option to enable Managed Identity through the UI for an existing cluster.

For further material on this, refer to here.

4.3 Give Permissions to Fetch Secrets

Once the Managed Identity is enabled, we will then need to give permission to that Managed Identity in the Azure KeyVault, so it can fetch:

  • Secrets:
az keyvault set-policy -n keyvaultname --secret-permissions get --spn aks-identity-client-id
  • Keys:
az keyvault set-policy -n keyvaultname --key-permissions get --spn aks-identity-client-id
  • Certs:
az keyvault set-policy -n keyvaultname --certificate-permissions get --spn aks-identity-client-id

This could also be achieved through the Azure Portal under the KeyVault's Access Policies.

Once that is done, go ahead and fill in the KeyVault name, Azure Tenant Id, and the Kubernetes cluster's Managed Identity Client Id in the SecretProviderClass shown earlier.

You can get the cluster's Managed Identity Client ID in the Azure Portal by heading to the new resource group Azure created for you once you enabled Managed Identity for your cluster. It is generally a resource of type Managed Identity with a name ending in -agentpool, and the identifier we are after is called the Client ID. The final step is to apply the above SecretProviderClass to your cluster:

kubectl apply -f secret-provider-class.yaml

A SecretProviderClass per Micro-Service

Please note that all the above resources are per micro-service. As an example, ideally you should have a KeyVault, SecretProviderClass, and Helm Charts per micro-service.

4.4 Add Secret Provider Class to Helm Charts

Now we have all the elements to attach the secret provider to the Helm chart values file and make it part of our deployment, so that when the deployment happens, the secrets are fetched from the KeyVault and loaded into the containers as volumes. I have done that like below:
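
A sketch of the relevant part of values.production.yaml, with key names that are illustrative and simply have to match whatever the deployment template reads, might be:

secrets:
  providerClassName: azure-es-identity-api-vault   # name of the SecretProviderClass defined earlier
  mountPath: /root/.microsoft/usersecrets          # where Asp.NET expects key-per-file secrets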

azure-es-identity-api-vault is the name of the SecretProviderClass, and as you can see I am matching that to a volume mount pointing at /root/.microsoft/usersecrets, which is exactly where Asp.NET reads the secrets from at startup time as the last piece of the puzzle (config.AddKeyPerFile($"/root/.microsoft/usersecrets")).

Time Waster Alert: Missing CSI Driver in Helm Chart

The fact that I am using ready-made Helm charts doesn't mean they are complete and have everything included. I spent some hours figuring out why my secrets were not being loaded, and then I realised my Helm chart (the deployment template) was missing the CSI bits, so make sure yours includes them.
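
The CSI bits boil down to a csi volume plus a matching volumeMount in the Pod spec. A sketch of the relevant part of the deployment template, assuming the values keys from the snippet above, looks like this:

spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          volumeMounts:
            - name: secrets-store
              mountPath: {{ .Values.secrets.mountPath }}   # /root/.microsoft/usersecrets
              readOnly: true
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io               # the Secrets Store CSI driver
            readOnly: true
            volumeAttributes:
              secretProviderClass: {{ .Values.secrets.providerClassName }}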


5. DNS

Now the last piece of traffic management: we have everything set up, however we still need to point https://domain.com to our Ingress Controller, which is done using an Azure DNS Zone. Here are the steps:

  1. Create an Azure DNS Zone
  2. Grab the Name Servers, and insert them into your domain provider (e.g. GoDaddy)
  3. Create an A record in the Azure DNS Zone, and point it to your Ingress external IP address, which you can get using
kubectl get ingress

6. Pipeline

With this setup in place, the pipeline is a rather easy piece using Azure DevOps. I will divide it into two parts to cover CI and CD.

6.1 CI

The pipeline is very similar to the one we had earlier, when we deployed apps/APIs to the conventional Azure App/API Service. However, we need to apply some changes to make it container friendly. I will explain the part related to Helm Charts shortly, along with a sketch of the pipeline below.

All this pipeline does is build and push a Docker image to Docker Hub, which I will pull down at deployment time into the Kubernetes cluster. The step I mentioned about Helm Charts is how I create a deployment to my Kubernetes cluster, and all I will need in that step is just the Helm Charts I set up throughout this post.

The way it works is by copying the Helm Charts at the CI stage into the Staging Build Artefact package, and then using them at CD time to apply them to my cluster. In the below snippet, "Infrastructure" is the folder that contains all my Helm charts.
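
Assuming Docker Hub as the registry and the Helm charts sitting in an Infrastructure folder, a trimmed-down sketch of the CI pipeline YAML could be as follows; the service connection, repository and artefact names are illustrative:

trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  # build the image and push it to Docker Hub
  - task: Docker@2
    inputs:
      containerRegistry: 'docker-hub-connection'   # hypothetical service connection
      repository: 'pellerex/identity-api'          # hypothetical repository
      command: buildAndPush
      Dockerfile: '**/Dockerfile'
      tags: '$(Build.BuildId)'

  # copy the Helm charts (and SecretProviderClass) into the staging artefact
  - task: CopyFiles@2
    inputs:
      SourceFolder: 'Infrastructure'
      Contents: '**'
      TargetFolder: '$(Build.ArtifactStagingDirectory)/Infrastructure'

  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'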

6.2 CD

My CD pipeline is simpler than my CI pipeline, as I simply grab the Helm Charts from the CI, as well as the SecretProviderClass for that micro-service, and apply them using the below shell script through an Azure CLI task:
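
A sketch of that release step as an AzureCLI task with an inline script (the service connection, chart and artefact names are illustrative, and the cluster names reuse the ones from earlier) could be:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'azure-service-connection'   # hypothetical service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # point kubectl and helm at the AKS cluster
      az aks get-credentials -g myResourceGroup -n myManagedCluster

      # apply the SecretProviderClass for this micro-service
      kubectl apply -f $(Pipeline.Workspace)/drop/Infrastructure/secret-provider-class.yaml

      # deploy or upgrade the API using the Helm charts from the CI artefact
      helm upgrade --install identity-api $(Pipeline.Workspace)/drop/Infrastructure/identity-api \
        -f $(Pipeline.Workspace)/drop/Infrastructure/identity-api/values.production.yaml \
        --set image.tag=$(Build.BuildId)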

Pellerex Kits: Infrastructure Foundation for Your Next Enterprise Software

How are you building your current software today? Do you build everything from scratch, or use a foundation to save on development time, budget and resources? For an enterprise software MVP, which might take 8–12 months with a small team, you might indeed spend 6 months on your foundation. Things like Identity, Payment, Infrastructure and DevOps all take time, while contributing little to your actual product. These features are needed, but they are not your differentiators.

Pellerex does just that. It provides a foundation that saves you a lot of development time and effort at a fraction of the cost. It gives you source-included Identity, Payment, Infrastructure, and DevOps to build Web, API and Mobile apps, all integrated and ready to go on day 1.

Check out Pellerex and talk to our team today to start building your next enterprise software fast.
