
Mastering Kubernetes: Step-by-Step Guide to Deploying a Containerized Go App

Kubernetes, a powerful open-source container orchestrator developed by Google, revolutionizes the way containerized applications are managed and scaled in the cloud. With its rich set of features, including horizontal auto-scaling, service discovery, rolling updates, self-healing mechanisms, and secret/configuration management, Kubernetes streamlines the deployment and operation of modern applications.

Leading cloud providers such as Google Cloud, AWS, Azure, and DigitalOcean offer managed Kubernetes platforms, enabling seamless migration between different cloud environments without architectural modifications.

In this comprehensive article, we delve into the intricacies of deploying, managing, and scaling a Go web application on Kubernetes. By leveraging the flexibility of minikube, a tool that facilitates the creation of a local Kubernetes cluster within a VM on your machine, you can swiftly gain hands-on experience with Kubernetes. Alongside step-by-step instructions, we share expert tips to ensure your Go application thrives in the Kubernetes ecosystem, maximizing efficiency and performance.

Crafting a Powerful Web Application in Go: A Step-by-Step Guide

Let’s build a simple Go web app to deploy on Kubernetes. Fire up your terminal and create a new folder for the project:

mkdir go-kubernetes

Next, initialize Go modules by running the following commands:

cd go-kubernetes
go mod init go-kubernetes

Now, create a file named main.go and copy the following code:

package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/gorilla/mux"
)

func handler(w http.ResponseWriter, r *http.Request) {
	query := r.URL.Query()
	name := query.Get("name")
	if name == "" {
		name = "Guest"
	}
	log.Printf("Received request for %s\n", name)
	w.Write([]byte(fmt.Sprintf("Hello, %s\n", name)))
}

func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("OK"))
}

func readinessHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("Ready"))
}

func main() {
	// Create Server and Route Handlers
	r := mux.NewRouter()

	r.HandleFunc("/", handler)
	r.HandleFunc("/health", healthHandler)
	r.HandleFunc("/readiness", readinessHandler)

	srv := &http.Server{
		Handler:      r,
		Addr:         ":8080",
		ReadTimeout:  10 * time.Second,
		WriteTimeout: 10 * time.Second,
	}

	// Start Server
	go func() {
		log.Println("Starting Server")
		if err := srv.ListenAndServe(); err != nil {
			log.Println(err)
		}
	}()

	// Graceful Shutdown
	waitForShutdown(srv)
}

func waitForShutdown(srv *http.Server) {
	interruptChan := make(chan os.Signal, 1)
	signal.Notify(interruptChan, os.Interrupt, syscall.SIGINT, syscall.SIGTERM)

	// Block until we receive our signal.
	<-interruptChan

	// Create a deadline to wait for.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
	defer cancel()
	srv.Shutdown(ctx)

	log.Println("Shutting down")
	os.Exit(0)
}

The app uses the gorilla/mux library for routing. Besides the / endpoint, it also exposes /health and /readiness endpoints; you'll see what these endpoints are used for in a later section.

Let’s now fetch the dependencies, then build and run the app locally:

go mod tidy
go build
./go-kubernetes
2023/06/19 11:51:58 Starting Server

In another terminal, send a request to the server:

curl "localhost:8080?name=codeacademia"
Hello, codeacademia

Dockerizing the Go application

To deploy our app on Kubernetes, we first need to containerize it. Create a file named Dockerfile inside the project folder and add the following instructions:

# Start from the latest golang base image
FROM golang:latest as builder

# Add Maintainer Info
LABEL maintainer="Code Academia <>"

# Set the Current Working Directory inside the container
WORKDIR /app

# Copy go mod and sum files
COPY go.mod go.sum ./

# Download all dependencies. Dependencies will be cached if the go.mod and go.sum files are not changed
RUN go mod download

# Copy the source from the current directory to the Working Directory inside the container
COPY . .

# Build the Go app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

######## Start a new stage from scratch #######
FROM alpine:latest

RUN apk --no-cache add ca-certificates

WORKDIR /root/

# Copy the Pre-built binary file from the previous stage
COPY --from=builder /app/main .

# Expose port 8080 to the outside world
EXPOSE 8080

# Command to run the executable
CMD ["./main"]

Seamless Docker Image Building and Pushing to Docker Hub: A Comprehensive Guide

Let’s build and push the Docker image of our Go app to Docker Hub so that we can later use this image while deploying the app on Kubernetes.

# Build the docker image
docker build -t go-kubernetes .

# Tag the image
docker tag go-kubernetes codeacademia/go-hello-world:1.0.0

# Login to docker with your docker Id
docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username (codeacademia): codeacademia
Login Succeeded

# Push the image to docker hub
docker push codeacademia/go-hello-world:1.0.0

Creating a Kubernetes deployment

To effortlessly manage the creation and updates of your application instances in Kubernetes, we will now delve into creating a Kubernetes deployment. Deployments offer a declarative approach, providing clear instructions on how Kubernetes should handle your application instances. Within a deployment, you will find a collection of identical and indistinguishable Pods.

Each Pod represents a deployment unit, serving as a single instance of your application within Kubernetes. It can comprise either a single container or a tightly-coupled group of containers that share resources.
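
For illustration only, here is what a manifest for a single standalone Pod could look like; in practice you will let a Deployment create Pods for you rather than writing this by hand (the name go-hello-world-pod is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: go-hello-world-pod
  labels:
    app: go-hello-world    # Labels are what Deployments and Services use to find Pods
spec:
  containers:
  - name: go-hello-world
    image: codeacademia/go-hello-world:1.0.0
    ports:
    - containerPort: 8080
```

A Pod created this way is not self-healing: if it dies or its node goes away, nothing recreates it. That is exactly the gap Deployments fill.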

Deployments abstract away the lower-level details of managing Pods, such as which node a Pod runs on. Since Pods are tied to the lifespan of their node, they cease to exist when the node shuts down. Deployments play a crucial role here by continuously reconciling the current number of Pods with the desired number you declared.

To define the Pod’s characteristics, including the desired number of Pods, container specifications, and health checks, we use a manifest file, commonly written in YAML. Create a file named k8s-deployment.yml in the project folder and add the following configurations, which outline the desired state of your application:


apiVersion: apps/v1
kind: Deployment                 # Type of Kubernetes resource
metadata:
  name: go-hello-world           # Name of the Kubernetes resource
spec:
  replicas: 3                    # Number of pods to run at any given time
  selector:
    matchLabels:
      app: go-hello-world        # This deployment applies to any Pods matching the specified label
  template:                      # This deployment will create a set of pods using the configurations in this template
    metadata:
      labels:                    # The labels that will be applied to all of the pods in this deployment
        app: go-hello-world
    spec:                        # Spec for the container which will run in the Pod
      containers:
      - name: go-hello-world
        image: codeacademia/go-hello-world:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8080  # Should match the port number that the Go application listens on
        livenessProbe:           # To check the health of the Pod
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:          # To check if the Pod is ready to serve traffic or not
          httpGet:
            path: /readiness
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1

I’ve added comments alongside each configuration in the above deployment manifest file. But I want to talk more about some of them.

Notice the configuration replicas: 3 in the above file. It instructs Kubernetes to run 3 instances of our application at any given time. If an instance dies, Kubernetes automatically spins up another instance.

Let’s also talk about the livenessProbe and readinessProbe. Sometimes a container in a Pod can be running, but the application inside it might be malfunctioning, for instance if your code is deadlocked.

Kubernetes has built-in support for verifying that your application is running correctly, via user-implemented health and readiness checks.

Readiness probes indicate when an application is ready to serve traffic. If a readiness check fails then the container will be marked as not ready and will be removed from any load balancers.

Liveness probes indicate whether a container is alive. If a liveness probe fails multiple times, the container is restarted.

Local Kubernetes Cluster Setup with Minikube: Deploying Your Application

You’ll need to install and set up kubectl (Kubernetes command-line tool) and Minikube to proceed further. Please follow the instructions on the official Kubernetes website to install kubectl and minikube.

Once the installation is complete, type the following command to start a Kubernetes cluster:

minikube start

Let’s now deploy our app to the minikube cluster by applying the deployment manifest using kubectl.

kubectl apply -f k8s-deployment.yml
deployment.apps/go-hello-world created

That’s it! The deployment is created. You can get the deployments like this:

kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
go-hello-world   3/3     3            3           25s

You can type the following command to get the pods in the cluster:

kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
go-hello-world-69b45499fb-7fh87   1/1     Running   0          37s
go-hello-world-69b45499fb-rt2xj   1/1     Running   0          37s
go-hello-world-69b45499fb-xjmlq   1/1     Running   0          37s

Pods are allocated a private IP address by default and cannot be reached outside of the cluster. You can use the kubectl port-forward command to map a local port to a port inside the pod like this:

kubectl port-forward go-hello-world-69b45499fb-7fh87 8080:8080
Forwarding from -> 8080
Forwarding from [::1]:8080 -> 8080

You can now interact with the Pod on the forwarded port:

curl localhost:8080
Hello, Guest

Building a Kubernetes Service: Connecting and Exposing Your Application

The port-forward command is good for testing the pods directly. But in production, you would want to expose the pod using services.

Pods can be restarted for all kinds of reasons: failed liveness or readiness checks, or the node they are running on dying.

Instead of relying on Pod IP addresses, which change, Kubernetes provides Services as stable endpoints for sets of Pods. The Pods that a Service exposes are selected based on a set of labels: any Pod carrying the matching labels is automatically picked up and exposed by the Service.

The level of access the service provides to the set of pods depends on the service type which can be:

  • ClusterIP: Internal only.
  • NodePort: Opens a specific port on every node in the cluster, making the service reachable from outside via any node's IP. A kube-proxy component running on each node listens for incoming traffic on that port and forwards it to the selected Pods in a round-robin fashion.
  • LoadBalancer: Adds a load balancer from the cloud provider which forwards traffic from the service to the nodes within it.

Let’s expose our Pods by creating a Service. Add the following configurations to the k8s-deployment.yml file, separated from the Deployment section by a line containing only ---

apiVersion: v1
kind: Service                    # Type of kubernetes resource
metadata:
  name: go-hello-world-service   # Name of the resource
spec:
  type: NodePort                 # A port is opened on each node in your cluster via Kube proxy.
  ports:                         # Take incoming HTTP requests on port 9090 and forward them to the targetPort of 8080
  - name: http
    port: 9090
    targetPort: 8080
  selector:
    app: go-hello-world          # Map any pod with label `app: go-hello-world` to this service

Let’s now apply the above configurations by typing the following command:

kubectl apply -f k8s-deployment.yml
deployment.apps/go-hello-world unchanged
service/go-hello-world-service created

A Service is now created to expose the Pods. You can list the Services in the cluster like this:

kubectl get services
NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
go-hello-world-service   NodePort    ...          <none>        9090:32550/TCP   35s
kubernetes               ClusterIP   ...          <none>        443/TCP          13h

Type the following command to get the URL for the service in the minikube cluster:

minikube service go-hello-world-service --url

That’s all! You can now interact with the service at that URL:

curl $(minikube service go-hello-world-service --url)
Hello, Guest

Scaling a Kubernetes deployment

You can scale the number of Pods by increasing the number of replicas in the Kubernetes deployment manifest and applying the changes using kubectl.

You can also use the kubectl scale command to increase the number of Pods:

kubectl scale --replicas=4 deployment/go-hello-world
deployment.apps/go-hello-world scaled
kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
go-hello-world-69b45499fb-7fh87   1/1     Running   0          112m
go-hello-world-69b45499fb-hzb6v   1/1     Running   0          10s
go-hello-world-69b45499fb-rt2xj   1/1     Running   0          112m
go-hello-world-69b45499fb-xjmlq   1/1     Running   0          112m
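
Beyond manual scaling, Kubernetes can also adjust the replica count automatically based on load, the horizontal auto-scaling mentioned at the start of this article. As a sketch, here is what a HorizontalPodAutoscaler targeting our Deployment could look like (the name go-hello-world-hpa is hypothetical; this assumes the metrics-server addon is enabled in minikube, e.g. via minikube addons enable metrics-server, and that the container declares a CPU resource request for the utilization metric to be computable):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: go-hello-world-hpa
spec:
  scaleTargetRef:              # The workload whose replica count the HPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: go-hello-world
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # Scale up when average CPU usage exceeds 80% of the request
```

With this in place, manual kubectl scale changes would be overridden by the autoscaler's decisions, so pick one mechanism per Deployment.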

Deleting Kubernetes resources

Deleting a Pod

kubectl delete pod go-hello-world-69b45499fb-7fh87
pod "go-hello-world-69b45499fb-7fh87" deleted

Note that because this Pod is managed by a Deployment, Kubernetes immediately creates a replacement to restore the desired replica count. To remove the Pods for good, delete the Deployment itself.

Deleting a Service

kubectl delete service go-hello-world-service
service "go-hello-world-service" deleted

Deleting a Deployment

kubectl delete deployment go-hello-world
deployment.apps "go-hello-world" deleted

Stopping and Deleting the Minikube cluster

Stopping the minikube kubernetes cluster

minikube stop

Deleting the minikube kubernetes cluster

minikube delete


Thank you for taking the time to read this article. I genuinely hope you found it valuable and informative. I would love to hear your thoughts, suggestions, and feedback in the comment section below. Your insights are highly appreciated, and they will help us improve and provide more relevant content in the future. Thank you once again for your support, and I look forward to engaging in fruitful discussions with you.
