

How to Containerize your Applications

Evis Drenova






If you're a Software Engineer, DevOps Engineer, or Platform Engineer, then you've likely heard about containers and container orchestration. Most people started hearing about containers in the early to mid-2010s, but containers have actually been around for over 40 years. The story starts in 1979 with UNIX's chroot command, an OS-level call for changing the root directory of a process and its children to a new location in the filesystem. The idea was that processes should have an isolated slice of the filesystem to run in (sound familiar?). Over the years, containers evolved, and with the launch of Docker in 2013, they reached the mainstream.

Containers have changed how companies build and ship code, making their applications more scalable, resilient, and flexible. It's estimated that by 2027, over 90% of global organizations will be running containerized applications in some form.

In this blog, we'll take a look at what containerization is, why it’s revolutionized software development, and how you can containerize your applications.

Let's jump in.

What is Containerization?

Containerization is the process of packaging up code and all of its dependencies (libraries, binaries, config files, etc.), along with a virtualized runtime environment, into an isolated and portable package that can run on nearly any underlying operating system or infrastructure. These packages are built from 'images', which are read-only snapshots of your code and its dependencies: the tools, config files, libraries, and other files your application needs to run.


The most common analogy (and the reason it's called a container) is a shipping container. A shipping container can hold things of many different shapes and sizes inside it, while presenting a uniform shape on the outside to the ships carrying it. It doesn't matter who made the ship or how it works; as long as it supports the standard container format, it can carry one. The software version is similar: the container provides a standardized interface to the underlying system or infrastructure, regardless of the code and dependencies it carries inside.

This approach of providing a standardized interface to run on top of nearly any underlying infrastructure has a number of benefits:

  1. Greater portability and smoother integrations, as the application and its dependencies are isolated from the underlying infrastructure
  2. Improved resource utilization, since multiple containers can run on the same physical or virtual machine without interfering with each other
  3. Increased scalability and efficiency, as containers can be quickly started, stopped, or replicated on demand
  4. Faster startup times & reduced resource use, due to a smaller footprint compared to traditional virtual machines

What is Container Orchestration?

As containerization became more popular, developers were suddenly faced with the problem of figuring out how to schedule, deploy, and manage all of their containers. For teams running microservices across tens or even hundreds of containers, this becomes a big problem. The answer came in the form of container orchestration platforms, which are designed to automate the operational effort required to run and maintain containerized workloads and services. There are a number of container orchestration platforms, but Kubernetes is the most well-known. Kubernetes, based on Google's internal platform, Borg, makes it easy to schedule, deploy, and manage containers at scale and has gained widespread adoption at companies of all sizes. If you're a tech history geek like I am, then check out the Kubernetes documentary!


The magic of Kubernetes is that it encapsulates one or more containers in a pod: an ephemeral unit whose containers share compute, networking, and storage resources. Because pods are designed to be ephemeral, if a pod or the node it runs on fails, Kubernetes can automatically create a new replica of that pod so that there is minimal downtime or service interruption. Out of the box, Kubernetes supports Docker containers and the Docker runtime, which is how pods actually run those containers. There is a lot more detail we could go into here, but we'll save that for another blog.
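To make the pod idea concrete, here is a minimal, hypothetical pod manifest for the image we build later in this post. The names and the port are assumptions for illustration, not part of an actual Nucleus deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-go-app            # assumed name, matching the image we build below
spec:
  containers:
    - name: my-go-app
      image: my-go-app:latest   # assumes the image was pushed to a registry the cluster can pull from
      ports:
        - containerPort: 3000   # the port our sample app listens on
```

In practice you would usually manage pods indirectly through a Deployment, which handles replicas and restarts for you, rather than creating them one by one.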

How do you Containerize your applications?

Now that we're familiar with containers, let's get into the fun stuff: how do we actually containerize our applications? We'll look at two ways: the traditional approach to containerization, and how Nucleus can containerize your applications automatically.

Traditional Containerization approach

To containerize your application, you'll first need to create a Docker image. A Docker image is a snapshot of your application and its dependencies at a particular point in time. Here's how to create a Docker image:

  1. Create a Dockerfile: A Dockerfile is a simple text file that contains instructions for building your Docker image. The Dockerfile tells Docker what base image to use, what files to copy into the container, and what commands to run inside the container. Here's an example Dockerfile for a simple Go application:
FROM golang:1.19 AS builder
WORKDIR /app
COPY ./go.mod ./go.mod
RUN go mod download && go mod verify
COPY ./ ./
RUN CGO_ENABLED=0 go build -o bin/main .

FROM gcr.io/distroless/static AS final
USER nonroot:nonroot
COPY --from=builder --chown=nonroot:nonroot /app/bin/main /
CMD ["/main"]

Let's look at this Dockerfile and what it's doing step by step:

  1. FROM golang:1.19 AS builder: This sets the base image to use for the build stage. In this case, it's using the official Go 1.19 image as the builder stage.
  2. WORKDIR /app: This sets the working directory inside the container to /app. Any subsequent commands will be executed in this directory.
  3. COPY ./go.mod ./go.mod: This copies the go.mod file from the local directory (the directory where the Dockerfile resides) to the /app/go.mod path inside the container.
  4. RUN go mod download && go mod verify: This command downloads the Go module dependencies specified in the go.mod file and verifies their checksums. It ensures that all required dependencies are available for building the Go application.
  5. COPY ./ ./: This copies the entire local build context (source files and all) into the current working directory (/app) in the container
  6. RUN CGO_ENABLED=0 go build -o bin/main .: This command builds the Go application inside the container. The CGO_ENABLED=0 flag disables the use of cgo, which is used for linking with C libraries. The -o bin/main flag specifies the output binary name as main inside the bin directory. The . represents the current directory, so it builds the Go application located in the current working directory (/app).
  7. FROM gcr.io/distroless/static AS final: This sets the base image for the final stage of the container image. In this case, it's using Google's distroless static image, which provides a minimal and secure base image for running statically-linked binaries.
  8. USER nonroot:nonroot: This sets the user and group to nonroot. It is a best practice to run containers with non-root privileges for security reasons.
  9. COPY --from=builder --chown=nonroot:nonroot /app/bin/main /: This copies the binary file (main) generated in the builder stage from the /app/bin/main path to the root directory (/) in the final stage. The --from=builder flag specifies to copy from the builder stage, and --chown=nonroot:nonroot sets the ownership of the copied file to nonroot:nonroot.
  10. CMD ["/main"]: This sets the default command to execute when the container starts. It specifies that the main binary should be executed.
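The Dockerfile above assumes the repository root contains a Go program with a main package. Here's a minimal, hypothetical main.go it could build; the greeting handler and the choice of port 3000 (matching the docker run command we use below) are assumptions for illustration, not code from an actual sample repo:

```go
package main

import (
	"fmt"
	"net/http"
)

// greeting builds the message the app serves.
func greeting(name string) string {
	return fmt.Sprintf("Hello, %s!", name)
}

// handler writes the greeting to every incoming request.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, greeting("world"))
}

func main() {
	http.HandleFunc("/", handler)
	// Listen on port 3000, matching the -p 3000:3000 mapping used later.
	http.ListenAndServe(":3000", nil)
}
```

Because it's built with CGO_ENABLED=0, the resulting binary is statically linked and runs happily in a minimal final image with no libc.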

Now that we have the Dockerfile written, we can build the Docker image using the docker build command. The command takes the path to the directory containing the Dockerfile and builds the Docker image. Here it is:

docker build -t my-go-app .

We can verify it by looking at the logs and seeing that everything finished successfully with no errors:

~/code/nucleus/samples/go/go (main ✗) docker build -t my-go-app .
[+] Building 10.5s (14/14) FINISHED
 => [internal] load .dockerignore                                                               0.0s
 => => transferring context: 2B                                                                 0.0s
 => [internal] load build definition from Dockerfile                                            0.0s
 => => transferring dockerfile: 518B                                                            0.0s
 => [internal] load metadata for                                1.3s
 => [internal] load metadata for                                  1.6s
 => [builder 1/6] FROM  6.8s
 => => resolve  0.0s
 => => sha256:f5f87148817740981d80db32e4925f355c1cfb71c2e4685b3e938861d5cfbdb9 2.36kB / 2.36kB  0.0s
 => => sha256:e4ca8b1947f76414e4d88765ce5f35dcbeccdfdd3e553f35a99fb96d10881548 6.88kB / 6.88kB  0.0s
 => => sha256:9a0518ec57568a70561f7c04650f9554c88dada973f54d88e36f65b0796d35 23.57MB / 23.57MB  1.9s
 => => sha256:356172c718acf9930d9465b170864319079e2d2ebac0ddef781d64e8578953 63.98MB / 63.98MB  1.4s
 => => sha256:30d7237853d06cd335c1b739a6ff89492e0472ea13896bf8f3b1466ed55113d4 1.58kB / 1.58kB  0.0s
 => => sha256:42cbebf8bc116ba1aed7897e2d0566bf49da9d5c70be71b6a7cb07805d2f5b 49.57MB / 49.57MB  1.2s
 => => sha256:a3c1d40c82551fded3ae8435595284e568a5c425b275785f3a78b95ed6f25b 86.26MB / 86.26MB  2.7s
 => => extracting sha256:42cbebf8bc116ba1aed7897e2d0566bf49da9d5c70be71b6a7cb07805d2f5b7a       0.8s
 => => sha256:f034f919eecacae7eb5d93a7be8f118ae729f2a721c8ebbb8ac763064b09 115.33MB / 115.33MB  3.7s
 => => sha256:69f6c51b42d6fcf584ff24ccf81da45c809780325a080416d2e4dc4b17d41de9 155B / 155B      2.1s
 => => extracting sha256:9a0518ec57568a70561f7c04650f9554c88dada973f54d88e36f65b0796d35de       0.3s
 => => extracting sha256:356172c718acf9930d9465b170864319079e2d2ebac0ddef781d64e85789531e       1.2s
 => => extracting sha256:a3c1d40c82551fded3ae8435595284e568a5c425b275785f3a78b95ed6f25b15       1.1s
 => => extracting sha256:f034f919eecacae7eb5d93a7be8f118ae729f2a721c8ebbb8ac763064b095bcb       1.6s
 => => extracting sha256:69f6c51b42d6fcf584ff24ccf81da45c809780325a080416d2e4dc4b17d41de9       0.0s
 => [internal] load build context                                                               0.0s
 => => transferring context: 2.28kB                                                             0.0s
 => [final 1/2] FROM  4.3s
 => => resolve  0.0s
 => => sha256:7198a357ff3a8ef750b041324873960cf2153c11cc50abb9d8d5f8bb089f6b4e 1.46kB / 1.46kB  0.0s
 => => sha256:56a360f359814800d5d4f1df868ed15b2142dbfa7b2565a712f35bafebe438a6 1.65kB / 1.65kB  0.0s
 => => sha256:d70ca864bac51915145185c334cc391574195f2aca09296f24734544adacc48c 1.27kB / 1.27kB  0.0s
 => => sha256:0b41f743fd4d78cb50ba86dd3b951b51458744109e1f5063a76bc5a792c3 103.73kB / 103.73kB  2.7s
 => => extracting sha256:0b41f743fd4d78cb50ba86dd3b951b51458744109e1f5063a76bc5a792c3d8e7       0.0s
 => => sha256:b02a7525f878e61fc1ef8a7405a2cc17f866e8de222c1c98fd6681aff6e5 716.49kB / 716.49kB  3.5s
 => => sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b 21.20kB / 21.20kB  3.2s
 => => extracting sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58       0.0s
 => => sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265 317B / 317B      3.5s
 => => sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0 198B / 198B      3.7s
 => => extracting sha256:b02a7525f878e61fc1ef8a7405a2cc17f866e8de222c1c98fd6681aff6e509db       0.1s
 => => sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c 113B / 113B      3.8s
 => => sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f 385B / 385B      3.9s
 => => sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c 355B / 355B      3.9s
 => => extracting sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265       0.0s
 => => extracting sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0       0.0s
 => => extracting sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c       0.0s
 => => sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024 130.56kB / 130.56kB  4.1s
 => => extracting sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f       0.0s
 => => extracting sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c       0.0s
 => => extracting sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a       0.0s
 => [builder 2/6] WORKDIR /app                                                                  0.1s
 => [builder 3/6] COPY ./go.mod ./go.mod                                                        0.0s
 => [builder 4/6] RUN go mod download && go mod verify                                          0.2s
 => [builder 5/6] COPY ./ ./                                                                    0.0s
 => [builder 6/6] RUN CGO_ENABLED=0 go build -o bin/main .                                      1.8s
 => [final 2/2] COPY --from=builder --chown=nonroot:nonroot /app/bin/main /                     0.0s
 => exporting to image                                                                          0.0s
 => => exporting layers                                                                         0.0s
 => => writing image sha256:35927e1ffe6b58dba3f15ca495a10dd5df61ea56ad1d2879b4cc7d63a662bdd0    0.0s
 => => naming to                                                    0.0s

Now that we've built our Docker image, we can run it as a Docker container. A Docker container is a running instance of a Docker image. Assuming you have Docker installed (e.g., Docker Desktop), here's how to run your Docker container:

docker run -p 3000:3000 my-go-app

This command tells Docker to run a container from the my-go-app image and map port 3000 in the container to port 3000 on the host system. You can now see your application running by going to http://localhost:3000 in your web browser. And we're done!

Creating Containers with Nucleus

Above we showed how you can easily create a Dockerfile and build a container. If you're running a simple application or service, the basics should be fine, but what if you're doing something a little more complicated? This is where the Dockerfile can get tricky, and you can spend hours messing around with it.

Using Nucleus, you can containerize an application or service and get a Docker image without ever needing to install Docker or write a Dockerfile. Nucleus handles all of this behind the scenes.


  1. Sign into Nucleus and navigate to Services and click on '+ New Service'.
  2. Set the environment that you want to deploy the service to, give your service a name, and set the Access to 'Public' or 'Private'.
  3. In the 'Enable Deployment Type' select 'Github'.
  4. If you haven't already done so, you can follow the UI guide to authorize Nucleus with Github.
  5. Once Github is ready to go, select an organization, repo and branch that you want to deploy and that's it!

Behind the scenes, Nucleus pulls down that Github repo, detects the language, writes a Dockerfile, and deploys that service for you, all in less than two minutes. You've now containerized your application, saving hours of headaches and gaining back time to invest in more features, improved functionality, and even more products.

What are some potential downsides of Containerization?

We've talked a lot about how containerization can be a really powerful way to make your applications flexible and portable, but, as with everything, there are trade-offs. So let's talk about some of the downsides of containerization.

  1. Increased complexity in application infrastructure & orchestration
  2. Learning curves associated with new technology & infrastructure
  3. Different network and storage configurations from traditional virtual environments
  4. Security mirages — where organizations assume that the isolated and distributed nature of containers makes them immune to security threats, which is not the case
  5. Increased performance overhead due to additional abstraction layers
  6. Vendor lock-in, especially for organizations that rely on specific container platforms with vendor-specific features
  7. Persistent data management in containerized environments does not occur naturally, and requires additional strategies and resources—which, in turn, introduce additional layers of complexity

Most of the trade-offs come in the form of added complexity and new technologies that teams have to learn and become comfortable with. As with any technology decision, your team should evaluate if containers make sense for your application and development style.

Wrapping up

There’s no doubt that containerization has transformed how engineering teams build and run their applications. However, it hasn’t been without its own bumps in the road, and there are still challenges to overcome. I'm looking forward to seeing more innovation in this space which makes it even easier for teams to work with containers.
