Containerise a React App Hosted by Asp.NET 5.0 & Azure DevOps

Mahdi Karimipour
Sep 6, 2021 · 9 min read

Containerising React apps can be a bit tricky, especially if you want to stick to the principle of building your Docker image once for all environments, rather than rebuilding it for each environment because of varying configuration. This is a complete walk-through of the process, along with the challenges you might face when Dockerising an Asp.NET-backed React app.

React + Asp.NET + Docker

There are not many complete guides covering containerisation of React apps with Asp.NET 5.0 as the host and Azure DevOps, and there are some interesting challenges you need to solve along the way. In this post I cover the whole process, including the areas below:

  • Multi Stage Docker Files
  • Fetching Artifacts from Private Feeds
  • Reducing React Image Size
  • Deploying the same Docker image to multiple environments
  • Troubleshooting

By the way, this post belongs to the series on setting up infrastructure for your React apps and Asp.NET APIs.

  1. Containerise Asp.NET 5.0 API
  2. Containerise React Apps Hosted by Asp.NET 5.0
  3. Use Docker-Compose to Run a Multi-Container Application
  4. Setup Minikube for Asp.NET APIs and React Apps
  5. Kubernetes with Asp.NET, React and Azure DevOps

Prerequisite

Based on the Pellerex stack, we have chosen Asp.NET 5.0 as our host web server, so before you continue with this post, have a look at our end-to-end guide on deploying React apps on the .NET 5.0 stack.

Project Structure

I assume that, similar to many React apps, you have a structure like the one below, which affects some of the commands in your Dockerfile.
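
As a rough guide, I will assume a layout along the lines below throughout this post; the WebApp and ClientApp names are placeholders based on the default Asp.NET + create-react-app template, so adjust them to your own project:

WebApp/
  ClientApp/          (the create-react-app project)
    public/
    src/
    .env
    .env.production
    env.sh
    package.json
  Controllers/
  Program.cs
  Startup.cs
  WebApp.csproj
  Dockerfile
  .dockerignore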

Based on that project structure, here is the shape of the final Dockerfile, which I will explain section by section below.
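
The sketch below follows that hypothetical WebApp/ClientApp layout; the project name, the ClientApp paths and the Azure Artifacts feed URL are placeholders to replace with your own, and the exact location of env.sh in the published output depends on how your project publishes the React build:

# ---- build stage: everything needed to compile the app ----
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src

# The SDK image has no NPM, so install Node 14 via apt-get (see section 4.1)
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get install -y nodejs

# Install the Azure Artifacts Credential Provider so dotnet restore can authenticate (see section 2)
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash

# PAT comes in from the docker build command line; never hardcode it
ARG PAT
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS \
    "{\"endpointCredentials\": [{\"endpoint\":\"https://pkgs.dev.azure.com/your-org/_packaging/your-feed/nuget/v3/index.json\", \"username\":\"docker\", \"password\":\"${PAT}\"}]}"

# Restore first so the layer is cached; the feed address is passed explicitly because no nuget.config is copied
COPY ["WebApp.csproj", "./"]
RUN dotnet restore "WebApp.csproj" \
    -s "https://pkgs.dev.azure.com/your-org/_packaging/your-feed/nuget/v3/index.json" \
    -s "https://api.nuget.org/v3/index.json"

# Copy the rest of the source, including the .env files that env.sh falls back to at run time
COPY . .
RUN dotnet publish "WebApp.csproj" -c Release -o /app/publish

# ---- base stage: only what is needed to run the app ----
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
COPY --from=build /app/publish .

# Put env.sh and the .env fallback next to the published React files
# (the exact path depends on where your publish step places the client build)
COPY ClientApp/env.sh ClientApp/.env ./ClientApp/build/
RUN chmod +x ./ClientApp/build/env.sh

# env.sh runs every time the container starts, then the Asp.NET host takes over
CMD ["/bin/bash", "-c", "cd /app/ClientApp/build && ./env.sh && cd /app && dotnet WebApp.dll"]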

1. Multi Stage Builds

To keep my final image size low, I am using a multi-stage build, and by that I am referring to the multiple FROM statements in my Dockerfile. It helps me manage the layers I add to the final image, by separating what I need to ‘build’ my app (the build stage) from what I need to ‘run’ it (the base stage).

2. Authenticate to Private Feeds

I am using Azure DevOps to manage my pipelines and artifacts, and all my feeds are private. So in order to fetch the artifacts and libraries my API uses from that private feed, I need to authenticate to it right from inside the process that builds my image. For that purpose, I am using the Artifact Credential Provider, and I have chosen Personal Access Tokens (PAT) to perform the authentication.

Hardcode or Fetch

Personal Access Tokens can be generated through the Azure DevOps portal, and I will explain how shortly, but before that we should note that tokens must NOT be hardcoded and kept as cleartext in our repositories. For that reason, I have created a build argument (PAT) which holds the PAT, passed from the command line when someone executes docker build to build the image.

Credential Provider

When fetching artifacts from a private feed, NuGet needs to be supplied with the credentials. The credential provider needs to be installed beforehand, and that is the purpose of the RUN wget instruction in my Dockerfile. To supply the credentials, I am using an environment variable (VSS_NUGET_EXTERNAL_FEED_ENDPOINTS), through which I pass my PAT along with the endpoint of the private feed.
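
For reference, the value of VSS_NUGET_EXTERNAL_FEED_ENDPOINTS is a small JSON document; the feed URL below is a placeholder, and the username can be any non-empty string when a PAT is supplied:

{"endpointCredentials": [{"endpoint":"https://pkgs.dev.azure.com/your-org/_packaging/your-feed/nuget/v3/index.json", "username":"docker", "password":"<your PAT>"}]}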

nuget.config

Note that I am not copying the nuget.config file into my Docker image, as it interferes with my credential provider and feed settings. That is why I pass the feed address through the environment variable above, and again when I restore my packages.

Pass the PAT

When you want to build your image, you can use the below line to pass the PAT and the name of your image to the docker process:

docker build --tag technologyleads.api-name:latest --build-arg PAT=your-pat-without-quotation .

If you don’t authenticate to the feed, the error message you get is: “Response status code does not indicate success: 401 (Unauthorized)”.

Note

Configuration, plumbing and troubleshooting your software foundation take a considerable amount of time in your product development. Consider using Pellerex, which is a complete foundation for your enterprise software products, providing source-included Identity and Payment functions across UI (React), API (.NET), Pipeline (Azure DevOps) and Infrastructure (Kubernetes).

Create Personal Access Tokens

The good thing about Personal Access Tokens is that you can revoke one and create a new one easily, without impacting or sharing your account credentials. With Azure DevOps, you simply go to your personal settings in the portal and create a new one.

Image Names

There is a small point I need to mention about image names, and that is case sensitivity. By design, the name of your image must be lowercase. There doesn’t seem to be much reason behind it, but it is what it is. However, the tag (the bit that comes after the colon) can be mixed case.

3. Build Once, Deploy Many Times

To enable multi-environment mode for my Asp.NET-backed React app, I need to be able to pass the environment dynamically at the time I run my container. This means I won’t rebuild my Docker image for each environment. It includes setting configuration for different environments for:

  • React App: I am using .env files to hold and manage configs such as API base addresses for different environments (a sample is shown after this list)
  • Asp.NET Host: As for my Asp.NET web server, I still need to log server events in those environments. I use file-based logging for local testing, and Azure Application Insights for production. To read more, refer to logging and monitoring for Asp.NET 5.0 APIs and React Apps.
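
As a sample of the React side, a .env file is nothing more than key-value pairs; REACT_APP_LOCATION_API_URL reappears later in the docker run command, while the value below is a placeholder:

REACT_APP_LOCATION_API_URL=https://localhost:5001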

3.1 For Asp.NET 5.0

The approach I am using to manage configurations for different environments is documented here for React apps, and here for Asp.NET APIs.

As far as my Asp.NET host is concerned, I only need to set the ASPNETCORE_ENVIRONMENT environment variable, and that is done when I run my container with the below command:

docker run -it -d -e ASPNETCORE_ENVIRONMENT=Production -e NODE_ENV=production -e REACT_APP_LOCATION_API_URL=test -p 8500:80 <image-name>

Don’t worry about the rest of the arguments; I will explain them later. Just note how I set the environment when I run a container from a specific image. Once I set the environment, the rest of the functions, such as Monitoring and Config Management, start working out of the box, as I have explained in those links.

3.2 For React App

This is where it gets a bit tricky. React doesn’t read environment variables from the .env file at run time; it reads them from the .env file and bakes them in at build time. So if I jumped to conclusions, it would mean I need to build multiple images for multiple environments.

However, there is a way to get around it, and the approach suggested by Krunoslav Banovac works pretty well.

Most React apps built with create-react-app work by reading configs from the .env file, and all those configs are accessed as process.env.REACT_APP_xyz within the code. The catch is that process.env is a Node concept and an abstraction over the .env file, and it works by injecting values at build time. So if we want to read those values based on the deployed environment at run time, the approach needs to change.

For that reason, the alternative is to do the following when the container is run (or when the server starts):

1. Read from Environment Variables: try to find an environment variable with the same name as your configurable element, such as an API base address.

2. Read from the .env file: if there is no environment variable for that element, fall back to reading it from the .env file.

To do so, we abstract the above logic into a script that runs when the container (or web server) starts, which happens only once in the lifetime of the container or web server. That script generates the final set of config values for your app, puts them in env-config.js, and injects them into window._env_.REACT_APP_xyz.

From now on, your application reads the configurable values from window._env_.REACT_APP_xyz, which is loaded in index.html like below:

<script src='%PUBLIC_URL%/env-config.js'></script>

However, please note that env-config.js is just the set of key-value pairs produced by env.sh, not the logic that builds that set of configs, which I explained above.
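
For instance, a generated env-config.js might contain nothing more than the following, where each value is whichever won between the environment variable and the .env fallback:

window._env_ = {
  REACT_APP_LOCATION_API_URL: "https://your-api.example.com",
}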

That logic lives in env.sh, which is executed at run time, at the end of your Dockerfile:
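
A sketch of that script, along the lines of Krunoslav Banovac’s approach, looks like this; it assumes it runs from the folder that contains both the .env file and the served env-config.js:

#!/bin/bash

# Recreate the runtime config file from scratch
rm -rf ./env-config.js
touch ./env-config.js

# Start the window._env_ object
echo "window._env_ = {" >> ./env-config.js

# Read each key=value pair from the .env file
while read -r line || [[ -n "$line" ]];
do
  # Split the line into variable name and fallback value
  if printf '%s\n' "$line" | grep -q -e '='; then
    varname=$(printf '%s\n' "$line" | sed -e 's/=.*//')
    varvalue=$(printf '%s\n' "$line" | sed -e 's/^[^=]*=//')
  fi

  # Prefer an environment variable with the same name, if one exists
  value=$(printf '%s\n' "${!varname}")
  # Otherwise fall back to the value from the .env file
  [[ -z $value ]] && value=${varvalue}

  # Append the final key/value pair to env-config.js
  echo "  $varname: \"$value\"," >> ./env-config.js
done < .env

echo "}" >> ./env-config.js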

Now if you have a look at the Dockerfile again, it makes sense why I am copying those .env files into my Docker image when building it: they are the runtime fallback for env.sh.

Also, when it comes to accessing those configuration values, this is how I do it in my React components:

<p>{window._env_.REACT_APP_xyz}</p>


4. Troubleshooting

Now let’s look at some of the problems you might face when containerising a React app backed by Asp.NET.

4.1 NPM Not Found

My base Docker image for building the app is mcr.microsoft.com/dotnet/sdk:5.0, which doesn’t come with NPM, hence I have to install it as part of my Docker instructions. This is done using apt-get to install Node 14.

4.2 Exclude Node Modules

At the time you build your React image, you transfer the whole context (the necessary files) to the Docker engine; however, node_modules doesn’t need to be transferred, as I fetch the packages during the Docker build anyway. That’s why I have a .dockerignore file to exclude those unnecessary files.
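
A .dockerignore along these lines keeps the build context small; the entries are assumptions, and the comment is the part that matters for the next section:

**/node_modules
**/bin
**/obj
.git
# do NOT list the .env files here; env.sh needs them as its runtime fallback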

Time Waster Alert

Please make sure the .env files are not included in your .dockerignore file. As you saw above, we need them as the fallback option for generating the runtime config set (when no environment variable is injected at run time), and if they are ignored, you will spend some time wondering why they are not copied into your Docker image.

Also, the .env files are hidden files once copied into your Docker image, so if you want to see them, you should run ls -a.

4.3 React Blank Screen

It can happen that after you run your containerised React app, you just see a blank screen. The solution is to set “start_url”: “.” in the React manifest file, as opposed to having a fully qualified URL such as https://pellerex.com.
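
In a create-react-app project that file is public/manifest.json; the name fields below are placeholders, and start_url is the relevant setting:

{
  "short_name": "WebApp",
  "name": "WebApp",
  "start_url": ".",
  "display": "standalone"
}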

4.4 CMD vs RUN

CMD (or ENTRYPOINT) executes a command at container run time, as opposed to the RUN command, which executes commands at build time to make changes to the image, like adding files. In our case, env.sh needs to be executed when the container starts, and hence it should be part of CMD (or ENTRYPOINT).

Pellerex: Infrastructure Foundation for Your Next Enterprise Software

How are you building your software today? Do you build everything from scratch, or use a foundation to save on development time, budget and resources? For an enterprise software MVP, which might take 8–12 months with a small team, you might indeed spend 6 months on your foundation. Things like Identity, Payment, Infrastructure and DevOps all take time, while contributing little to your actual product. These features are needed, but they are not your differentiators.

Pellerex does just that. It provides a foundation that saves you a lot of development time and effort at a fraction of the cost. It gives you source-included Identity, Payment, Infrastructure, and DevOps to build Web, API and Mobile apps, all integrated and ready to go on day 1.

Check out Pellerex and talk to our team today to start building your next enterprise software fast.
