Minikube for Asp.NET 5.0 APIs and React Apps (Local Kubernetes)

Mahdi Karimipour
14 min read · Sep 23, 2021

Standing up one isolated container on Minikube is easy, but setting up a multi-container enterprise application with all its secret and config management can take some of your time. This post is a complete walk-through of all the steps to build and run your entire ecosystem on Kubernetes locally.

The Need

Minikube is a great local environment to test Kubernetes clusters, with capabilities that resemble a production environment such as routing, deployments, and secret and config management. However, unlike running a single-container cluster, building a Kubernetes cluster for multiple applications, along with all their other cloud-native needs, can get a bit complicated, and that is the purpose of this post.

End Result

Eventually we will be deploying the below micro-services and web apps in our cluster, which also involves configs, secrets, certificates, ingress, health checks, and namespaces:

  • Web App: a React application that depends on all the below APIs, and which we containerised here
  • Identity API: an Asp.NET API which we containerised here
  • Subscription API: which also connects to the Identity API within our cluster
  • Location API: a dependency for our React web app
  • Messaging API: a dependency for our Identity and Subscription APIs

The job is to make all the above applications work together within the cluster, and expose them outside the cluster using Ingress, so a client can consume our React web app and all the other APIs.

By the way, this topic belongs to the series to set up Infrastructure for your React Apps and Asp.NET APIs.

  1. Containerise Asp.NET 5.0 API
  2. Containerise React Apps Hosted by Asp.NET 5.0
  3. Use Docker-Compose to Run a Multi-Container Application
  4. Setup Minikube for Asp.NET APIs and React Apps
  5. Kubernetes with Asp.NET, React and Azure DevOps

What we are covering

For the purpose of this post, I am going to cover the below topics, along with some of the challenges you might face when setting up your cluster. This post is not an introduction to Kubernetes concepts; rather, it provides a ready-to-use solution with all aspects of the setup built in.

  • Helm
  • Probes & Health Checks
  • Fetching Artifacts
  • Config Management
  • Secret Management
  • Deployment
  • Services
  • Ingress
  • TLS

F5 Principle

One principle we have followed in all our designs is that Apps and APIs should work inside and outside the container/cluster with no or minimal config changes. This means if you cloned the repository and hit F5, the API should run as expected; and if you built the Docker image at the same time, the container should run as expected as well. This indeed provides a good developer experience.

1. Helm

If you have worked with yaml files, you might have felt frustrated at times by all the errors you can get setting up different resources, such as white-space and indentation errors; Helm is for you. Consider it a templating engine: you provide the values, and it spits out all the yaml files (ingress, deployment, services, etc.) you need to set up your cluster.

So if you are wondering whether it is worth the time to learn and implement, in my view it is a time saver, and worth the effort.

To begin with, set up Helm for your application using the below command.

helm create test-app

This will create the below yaml resources, and I will briefly explain what they are:

  • Chart.yaml: contains basic information about your application.
  • templates/: the template files with placeholders, which are filled by the values you provide to produce the final yaml files.
  • values.yaml: the values specific to your application which you want substituted into the templates.
  • charts/: sub-charts (if any) that your chart depends on.

Build Once, Deploy Many

To take advantage of all the offerings of containers, we need to avoid rebuilding our images across environments, and Helm can help us here again. The approach is to keep the same template files with one values file per environment, and when we install our charts we use the respective file:

  • values.dev.yaml
  • values.uat.yaml
  • values.prod.yaml

When you create the charts using the above create command, they come with default settings; however, I have modified the charts and added elements to handle environment variables, health checks, etc. Here is the final deployment template file:
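A minimal sketch of what such a template can look like (the chart name, helper names and port are illustrative, assuming a chart created as above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test-app.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "test-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "test-app.selectorLabels" . | nindent 8 }}
    spec:
      imagePullSecrets:
        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 8001
          # environment variables fed in from values.yaml
          env:
            {{- range .Values.env }}
            - name: {{ .name }}
              value: {{ .value | quote }}
            {{- end }}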

As you can see, there are elements for handling environment variables. Having the deployment.yaml template ready, we now need to provide our values for the chart.

Note

Configuration, plumbing and troubleshooting your software foundation take a considerable amount of time in your product development. Consider using Pellerex which is a complete foundation for your enterprise software products, providing source-included Identity and Payment functions across UI (React), API (.NET), Pipeline (Azure DevOps) and Infrastructure (Kubernetes).

Values.yaml

Values provide all the parameters we need to specify related to our micro-service:

  • Name of the Docker image
  • Environment Variables
  • Image pull secrets
  • Routing settings
  • Secret References (e.g. certificates)

And here are two examples of such a file: one for a React web app, and the other for an Asp.NET API.

1. React Web App: Values.yaml

Have a look at the file first, and then I will explain all the elements in detail:
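A minimal sketch of such a values file (the image repository and environment variable names are illustrative; the ports, secret names and host match what is described below):

replicaCount: 1

image:
  repository: technologyleads/ecosystem-web-app
  tag: v1.0.0

imagePullSecrets:
  - name: technologyleads-registry-key

env:
  - name: ASPNETCORE_ENVIRONMENT
    value: Development
  - name: IdentityApiSettings__BaseUrl
    value: https://localhost/identity

service:
  type: ClusterIP
  port: 80
  targetPort: 8001

ingress:
  enabled: true
  hosts:
    - host: localhost
      paths:
        - path: /
  tls:
    - secretName: technologyleads-web-app
      hosts:
        - localhost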

Please consider the below points here:

  • Repository: is the Docker image which you have pushed to a registry like Docker Hub.
  • Environment Variables: all the variables you need to run your application. These can override application settings and the defaults you have specified in your application. To learn more about these variables and how I have defined them, refer to here. Please also note there are a number of API addresses specified here, which I will come back to after covering the API values file; they are simply the dependencies of my React app.
  • Image Pull Secret: the secret the cluster uses to authenticate to your container registry; if you don't specify it, you will get a Pull Access Denied error. The way it works is you simply define a secret in your cluster, and reference it here. Here is the command:
kubectl create secret docker-registry technologyleads-registry-key --docker-server=docker.io --docker-username=<username> --docker-password='<password>' --docker-email=<email>
  • Service: here we specify how we want the corresponding Kubernetes services to be run. I have specified ClusterIP to listen at port 80, and talk to the respective containers at port 8001.
  • Ingress: this is a very important setting which exposes your services outside the cluster. In the above example, I am setting the host to localhost, and provided I have configured the host settings on my machine, all https://localhost traffic will be directed to my cluster. The host settings are easy to configure: open the hosts file at C:\Windows\System32\drivers\etc\hosts, and add a record pointing localhost to your cluster's IP address, which you can get by running minikube ip. In my case, that address is 127.0.0.1.
  • Also look at the path attribute of ingress. As you see I can define a set of paths for my ingress, and I can redirect each path to a different service. For my web app however, I have set the path to / which matches all URLs. There is a good article here which explains Ingress at length with very good examples.
  • TLS: If I want the traffic to be Https and encrypted with TLS, I need to provide the certificate and the key, which is nothing but another secret to be added to Minikube. However, it is a specific type of secret (tls), and here is how to add it:
kubectl create secret tls technologyleads-web-app --key key.pem --cert certificate.pem

For localhost and Minikube, you don’t need to buy a certificate, and a self-signed certificate would be enough for our purpose. To generate the pem file and the certificate, you could use OpenSSL. Here is the command that generates what you need:

openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem

It will ask you a few questions, along with the domain you want the certificate for (localhost), and it generates the key (PEM) and the certificate to use in the above secret creation.

  • Resource Limits: You can also specify resource limits for your micro-service, and how many replicas you want to run; I have left these at their defaults.

2. Asp.NET API: Values.yaml

When it comes down to the API, everything is the same; however, Environment Variables and Ingress need a bit of attention. But first, let's have a look at the Values file for an Asp.NET API.
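Here is a sketch of the parts where the API values differ from the web app's (the rewrite annotation and path are explained shortly; the exact keys are illustrative):

env:
  - name: ASPNETCORE_ENVIRONMENT
    value: Development
  - name: ContainerMode
    value: "true"

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  hosts:
    - host: localhost
      paths:
        - path: /identity(/|$)(.*)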

Now let’s cover the two topics I mentioned.

  • Environment Variables: It is a good idea to keep your apps and APIs functioning outside the container as well. Hence, the configurations stored in your application (appsettings.json) should be all the configs needed to run the application outside a container; anything needed inside the container can be injected using environment variables. There is also a view that configuration should be injected using ConfigMaps, which is valid; it is just not the approach we have taken, as Asp.NET APIs are pretty strong at handling configuration. To read more, refer to Config Management for Asp.NET APIs.
  • Please also note that with the right configuration, you can override application settings using Environment Variables. So if you have defined your API configurations like here, you can override them if you pass them as environment variables like below:
- SubscriptionApiSettings__BaseUrl=https://host.docker.internal:8021
  • Where __ denotes the hierarchy in your appsettings file; the above line refers to SubscriptionApiSettings.BaseUrl in your settings.
  • Ingress: While the structure is similar to the web app I just covered, the routing can get a bit tricky. For a multi-container application like ours, there needs to be a way to distinguish the different routes and direct them to the respective service. Hence I have adopted the below structure:
  • Identity API: https://localhost/identity/xyz
  • Subscription API: https://localhost/subscription/xyz
  • Messaging API: https://localhost/messaging/xyz

As we have one set of Helm charts per micro-service, you can now see why I have set the path to /identity(/|$)(.*).

Let me also mention the URL rewriter (nginx.ingress.kubernetes.io/rewrite-target: /$2), which is a useful feature. I had commented it out for the web app, as I didn't want the routes to be altered when passed from Ingress to the container; but for my Asp.NET APIs I need it, as I have multiple APIs and need to drop the leading segment as I pass the traffic through. This is how it works:
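Taking the Identity API as an example (the request path is illustrative):

Incoming request:  https://localhost/identity/api/accounts/register
Ingress path:      /identity(/|$)(.*)  →  $2 = api/accounts/register
Rewritten to:      /api/accounts/register, forwarded to the Identity API service

So the /identity prefix only exists for routing at the Ingress level; the container itself still sees the same routes it would see outside the cluster.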

2. Probes and Health Checks

Probes and health checks are essential to the operation of Kubernetes; they determine whether a POD should stay in the mix, or be killed and replaced by a new one. There are three types of health checks:

  • Start Up: happens when the POD is starting up.
  • Readiness: checks if the POD is ready to receive traffic.
  • Liveness: checks if the POD has stayed healthy; if it fails, the POD is restarted.

In terms of defining these health checks, we are relying on Asp.NET's built-in mechanism, as opposed to creating new custom controllers. For that purpose I have created an ApiHealthCheck class like below:
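A sketch of such a check, using the standard IHealthCheck interface (the check logic itself is a placeholder to be replaced per your API):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

public class ApiHealthCheck : IHealthCheck
{
    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        // replace with real checks: DB connectivity, downstream APIs, etc.
        var healthy = true;

        return Task.FromResult(healthy
            ? HealthCheckResult.Healthy("Api is responsive.")
            : HealthCheckResult.Unhealthy("Api checks failed."));
    }
}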

and here is how I inject the health checks into my API. You can implement any logic such as DB Connectivity or network connection checks per your app/api requirement.
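A sketch of that wiring in Startup (the /health route and check name are assumptions; they must match the probe paths used in the Helm template):

// Startup.ConfigureServices
services.AddHealthChecks()
        .AddCheck<ApiHealthCheck>("api");

// Startup.Configure
app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/health");
    endpoints.MapControllers();
});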

In the end, the outcome not only gives back the status of the endpoint, but also contains more information on duration, metadata, etc.

The health checks are consistent across all my APIs, and hence I have defined them in the template, not in values.yaml.
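In the deployment template, that could look like the below (the paths, port and timings are illustrative and should match your health endpoint):

livenessProbe:
  httpGet:
    path: /health
    port: 8001
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 8001
  initialDelaySeconds: 5
  periodSeconds: 10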

3. Configuration Management

As discussed earlier, there are two ways to manage configuration for Apps and APIs in a cluster:

  • Hardcoding the configs for all environments in the Docker image, and reading them at run time based on the current environment, using the ASPNETCORE_ENVIRONMENT environment variable.
  • Feeding the configs for the respective environment through Kubernetes config maps, and making them available using volumes.

We have chosen the first approach, as Asp.NET and React provide mature ways to manage configuration. So all we need to do is inject which environment we are on, through the env variables defined in values.yaml (as above), and the rest works out of the box.

It should also be pointed out that environment variables with the same name will override values in appsettings.json, and that is the reason our application works inside and outside of the container without making changes to the physical configs stored in the image.

Connecting Containers through Services

The question remains, however: how do the containers communicate within the cluster? A good example is when the Identity API wants to send messages using the Messaging API, and the URLs for these calls need to be determined at run time.

The primary mode of communication between containers in different PODs is not direct IP-to-IP; instead, PODs and their containers are reached through services. This way, the specifics of each container are abstracted away.

Therefore, as you can see, when it comes down to talking to other services, I simply use http://servicename, which in this case would be http://ecosystem-messaging-api-release, injected through environment variables.
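For example, in the Identity API's values file, the Messaging dependency might be expressed as (the setting name is an assumption, following the __ convention covered earlier):

env:
  - name: MessagingApiSettings__BaseUrl
    value: http://ecosystem-messaging-api-release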

4. Secret Management

Secret Management is perhaps the area where you could spend some time figuring out how to handle things, when you move APIs from traditional hosting models to containers. Even with a key vault present in all scenarios, secret management can be delicate to handle.

For our API, we have considered 4 scenarios:

  1. API outside container, in Development
  2. API outside container, in Higher Environments
  3. API inside container, in Development
  4. API inside container, in Higher Environments

I will cover scenarios 1, 2, and 3 here, and scenario 4 will be for another post, where I cover Kubernetes in Production soon.

Secret Management is mainly set up at start up time, where Program.CreateHostBuilder is called. In that function, we have four cases to handle the above four scenarios. Before I get into the details, let’s have a look at the code first.
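A sketch of that branching (the Key Vault wiring is omitted here; ContainerMode and the mount path follow the conventions described in this post):

// Program.cs
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) =>
        {
            config.AddEnvironmentVariables();

            var inContainer = config.Build()["ContainerMode"] == "true";
            var isDevelopment = context.HostingEnvironment.IsDevelopment();

            if (!inContainer && isDevelopment)
            {
                // 1. outside container, Development: user secrets load by default
            }
            else if (!inContainer)
            {
                // 2. outside container, higher environments: read from Key Vault
            }
            else if (isDevelopment)
            {
                // 3. inside container, Development (Minikube): one file per secret,
                // mounted from the cluster's secrets into this folder
                config.AddKeyPerFile("/root/.microsoft/usersecrets", optional: true);
            }
            // 4. inside container, higher environments: covered in an upcoming post
        })
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());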

Inside CreateHostBuilder, you can see the above four scenarios. Please note that to distinguish the inside/outside-container scenarios, I have set an environment variable called ContainerMode, which I read at the beginning using AddEnvironmentVariables.

1. Outside Container, Development Mode

In this scenario we do nothing, as the Asp.NET API reads the secrets from the local secret store, where they are kept in plain text. CreateHostBuilder calls config.AddUserSecrets at start up by default, which loads the secrets from that json file into our app. For more information on this, refer to Secret Management in Asp.NET 5.0 APIs.

2. Outside Container, Higher Environments

In this case, we are using a KeyVault, secured by certificate authentication, to store and read secrets at run time. Again more information on how it is done, can be found here.

3. Inside Container, Development Mode (Minikube)

This is our main case here for local Minikube. The way it works is that we set up Opaque secrets inside the cluster manually using the create secret command, and then when the container loads, we read them back from the cluster's secret store.

Define the secrets: to do this, I have defined all the secrets for all the APIs in a yaml file, which I then apply using kubectl apply -f filename.yaml, and here is the file:
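A sketch of such a file, with one Secret per API (the keys and values are illustrative; the name matches the reference used below):

apiVersion: v1
kind: Secret
metadata:
  name: technologyleads-identity-api
type: Opaque
stringData:
  # one entry per secret; names follow the __ hierarchy convention
  IdentityApiSettings__SigningKey: "<signing-key>"
  MessagingApiSettings__ApiKey: "<api-key>"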

Now if I wanted to refer to these secrets, I would use volumes and volume mounts to inject them into my API. You can find this in the Values.yaml file, but here it is again:
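From the values file (the volume name is illustrative; the secret name and mount path are the ones used throughout this post):

volumes:
  - name: usersecrets
    secret:
      secretName: technologyleads-identity-api

volumeMounts:
  - name: usersecrets
    readOnly: true
    mountPath: /root/.microsoft/usersecrets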

Here is how it reads: Find a secret in Kubernetes’ secret map with reference = technologyleads-identity-api, and copy all the secrets from that into the volumeMount address (/root/.microsoft/usersecrets). This is the same address that we use to store secrets in local user store outside container mode, only this time, we are in the container. When Kubernetes copies the secrets into this volume, it creates a separate file for each secret, with the file name being the secret name, and the secret value being inside the file.

Now the last step is to read them from that volume into our application, which is done through:

config.AddKeyPerFile("/root/.microsoft/usersecrets");

Please note AddKeyPerFile will read all individual secrets from the volume, treat the file name as secret name, and the value inside it as the actual secret.

Now if I had a class to hold the secret references, like the AppSecrets class below:
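For example (property names are illustrative, mirroring the secret names sketched above):

public class AppSecrets
{
    public IdentityApiSettings IdentityApiSettings { get; set; }
    public MessagingApiSettings MessagingApiSettings { get; set; }
}

public class IdentityApiSettings
{
    public string SigningKey { get; set; }
}

public class MessagingApiSettings
{
    public string ApiKey { get; set; }
}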

every time I read the secrets using:

Configuration.Get<AppSecrets>()

I will get all the secrets loaded, provided that I have used the proper naming convention, such as:
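Secret (file) name:  IdentityApiSettings__SigningKey
Binds to:            AppSecrets.IdentityApiSettings.SigningKey

(with the names assumed in the sketches above; __ maps to the nesting of the class.)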

5. Putting it All Together

Now that we have all the building blocks ready, we need to execute and install our charts into the cluster which means running our deployments and standing up our pods.

The process of installation needs to be repeated for each API, and now you can see why I said using Helm makes our job much easier.

1. Start Minikube

minikube start

2. Enable Ingress for the Whole Cluster

minikube addons enable ingress

3. Install the Helm Charts in Dev Environment

Please note, we now have another opportunity to override the parameters defined in values.yaml, by using the --set parameter.

helm upgrade --install ecosystem-identity-api-release .\Infrastructure\Helm\ --set image.tag="v1.0.0" --set global.env.ASPNETCORE_ENVIRONMENT="Development"

4. Expose Services through Ingress to the World Outside the Cluster

minikube tunnel

5. Uninstalling a Deployment

In case you wanted to uninstall and reinstall a deployment:

helm uninstall ecosystem-identity-api-release

6. Troubleshooting

404

When exposing your services outside your cluster, 404 is a common error you might get from NGINX, which you can fix by adjusting your routes and paths. There is, however, a different kind of 404 you might get: one branded by IIS.

I have been using Linux images for my APIs, so this error is clearly coming from IIS. It simply means localhost traffic is being captured by Windows IIS before it reaches your cluster. Without any config changes, stopping your local IIS server solves the problem.

Pellerex: Infrastructure Foundation for Your Next Enterprise Software

How are you building your software today? Do you build everything from scratch, or use a foundation to save development time, budget and resources? For an enterprise software MVP, which might take 8–12 months with a small team, you might indeed spend 6 months on your foundation. Things like Identity, Payment, Infrastructure and DevOps all take time, while contributing little to your actual product. These features are needed, but they are not your differentiators.

Pellerex does just that. It provides a foundation that saves you a lot of development time and effort at a fraction of the cost. It gives you source-included Identity, Payment, Infrastructure, and DevOps to build Web, Api and Mobile apps, all integrated and ready to go on day 1.

Check out Pellerex and talk to our team today to start building your next enterprise software fast.
