Kubernetes Blue-Green Deployments

Our critical application requires a swift version upgrade without downtime

Summary

Our application is currently running version 1, which uses the httpd:alpine image. We plan to update it to the latest version using the nginx:alpine image. The change needs to be implemented immediately, without any delays. Once the update is complete, new requests should be routed to the containers running the version 2 nginx:alpine image.

Creating Deployments and Services

Create version 1 (Blue) of our application with the Deployment and Service definition files given below.

Creating the Deployment
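A minimal manifest sketch for the blue (version 1) Deployment. The image httpd:alpine follows the scenario above; the Deployment name bluegreen, the version: v1 label, the replica count, and the file name are assumptions.

```yaml
# bluegreen-v1.yaml (file name is an assumption)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluegreen
  labels:
    app: bluegreen
spec:
  replicas: 4                  # replica count for v1 is an assumption
  selector:
    matchLabels:
      app: bluegreen
      version: v1
  template:
    metadata:
      labels:
        app: bluegreen
        version: v1            # marks these Pods as the blue version
    spec:
      containers:
        - name: httpd
          image: httpd:alpine
          ports:
            - containerPort: 80
```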

Apply the Deployment:
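Assuming the manifest above is saved as bluegreen-v1.yaml (the file name is illustrative):

```bash
kubectl apply -f bluegreen-v1.yaml
kubectl get deployment bluegreen      # confirm the Pods become Ready
```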

Creating the Service

Now let's create the Service that points to the version 1 (blue) Pods of the bluegreen Deployment:
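A sketch of the Service. The name bluegreen-svc and the file name are assumptions; type NodePort with nodePort 30080 matches the external test described later (30080 is the example value used there). The selector includes version: v1 so that only the blue Pods receive traffic.

```yaml
# bluegreen-svc.yaml (file and Service names are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-svc
spec:
  type: NodePort
  selector:
    app: bluegreen
    version: v1          # pins traffic to the blue (v1) Pods
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```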

Create the Service:
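Assuming the manifest above is saved as bluegreen-svc.yaml:

```bash
kubectl apply -f bluegreen-svc.yaml
kubectl get svc bluegreen-svc         # note the ClusterIP and the 30080 nodePort
```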

Testing the Deployment and Service

  1. Create an Ubuntu test pod (commands for all three steps are sketched after this list).

  2. From inside that pod, run a curl against the Service's ClusterIP to confirm it returns the version 1 (httpd) page.

  3. Use a node's IP address and the specified nodePort (e.g., 30080) to reach the application from outside the cluster.
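A rough sketch of these checks; the Service name (bluegreen-svc) and the placeholder addresses are assumptions, so substitute the ClusterIP and node IP reported by your cluster.

```bash
# 1. Start a throwaway Ubuntu pod with an interactive shell
kubectl run ubuntu --image=ubuntu --rm -it -- bash

# 2. Inside the pod, install curl and hit the Service's ClusterIP
apt-get update && apt-get install -y curl
curl http://<cluster-ip-of-bluegreen-svc>     # version 1 should return httpd's "It works!" page

# 3. From outside the cluster, hit any node on the nodePort
curl http://<node-ip>:30080
```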

Creating the new Deployment bluegreen-v2 (Green)

Next, let's create a new Deployment named bluegreen-v2 that uses the nginx:alpine image with 4 replicas. Its Pods should have the labels app: bluegreen and version: v2.
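A sketch of the green Deployment based on the description above (4 replicas, nginx:alpine, labels app: bluegreen and version: v2); the file name is an assumption.

```yaml
# bluegreen-v2.yaml (file name is an assumption)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluegreen-v2
  labels:
    app: bluegreen
spec:
  replicas: 4
  selector:
    matchLabels:
      app: bluegreen
      version: v2
  template:
    metadata:
      labels:
        app: bluegreen
        version: v2            # marks these Pods as the green version
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
```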

Create the Deployment:
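Assuming the manifest above is saved as bluegreen-v2.yaml:

```bash
kubectl apply -f bluegreen-v2.yaml
kubectl get pods -l app=bluegreen,version=v2   # wait until all 4 green Pods are Running
```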

Modify the Service

Now let's modify the Service so that it points to the bluegreen-v2 Deployment's Pods:
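The only change needed is the Service selector: switching version: v1 to version: v2 re-routes all new traffic to the green Pods. Shown here as a sketch against the assumed bluegreen-svc manifest.

```yaml
# bluegreen-svc.yaml (updated) -- only the selector's version label changes
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-svc
spec:
  type: NodePort
  selector:
    app: bluegreen
    version: v2          # was v1 -- this single change flips traffic to green
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```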

Apply the change:
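Either re-apply the edited manifest or patch the selector in place; both achieve the same cut-over (Service and file names remain assumptions):

```bash
kubectl apply -f bluegreen-svc.yaml

# equivalent one-liner using a strategic merge patch:
kubectl patch service bluegreen-svc -p '{"spec":{"selector":{"app":"bluegreen","version":"v2"}}}'
```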

Testing the Deployment and Service

  1. Create an Ubuntu test pod (commands for all three steps are sketched after this list).

  2. From inside that pod, run a curl against the Service's ClusterIP to confirm it now returns the Nginx welcome page.

  3. Use a node's IP address and the specified nodePort (e.g., 30080) to reach the application from outside the cluster.
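The same checks as before, now expected to hit the green Pods; names and placeholder addresses remain assumptions.

```bash
# 1. Start a throwaway Ubuntu pod with an interactive shell
kubectl run ubuntu --image=ubuntu --rm -it -- bash

# 2. Inside the pod, install curl and hit the Service's ClusterIP
apt-get update && apt-get install -y curl
curl http://<cluster-ip-of-bluegreen-svc>     # should now return "Welcome to nginx!"

# 3. From outside the cluster, hit any node on the nodePort
curl http://<node-ip>:30080
```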

Comparing Endpoint IPs with Pods

  • Endpoints represent the IP addresses of the Pods currently selected by a Service; Kubernetes updates them dynamically as matching Pods are created or removed.

  1. First, let's assume we have two Pods associated with our Nginx Deployment. We'll use the following Pod names:

    • nginx-pod-1

    • nginx-pod-2

  2. Now, let's retrieve the IP addresses of these Pods using the kubectl get pods -o wide command (a sample command and output are sketched after this list):

    In this example:

    • nginx-pod-1 has an IP address of 10.244.1.10.

    • nginx-pod-2 has an IP address of 10.244.2.20.

  3. Let's get more information about the nginx-service using kubectl describe:

This command will provide detailed information about the service, including its IP address, ports, and associated endpoints.
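A sketch of the commands from steps 2 and 3, with illustrative output trimmed to the relevant fields; the Pod names and IPs are the ones assumed above, while the node names, selector, and ClusterIP are placeholders.

```bash
kubectl get pods -o wide
# NAME          READY   STATUS    IP            NODE     (columns abbreviated)
# nginx-pod-1   1/1     Running   10.244.1.10   node01
# nginx-pod-2   1/1     Running   10.244.2.20   node02

kubectl describe service nginx-service
# Name:       nginx-service
# Selector:   app=nginx                        (assumed selector)
# IP:         10.96.0.200                      (placeholder ClusterIP)
# Port:       80/TCP
# Endpoints:  10.244.1.10:80,10.244.2.20:80
```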

  4. Next, let's check the Endpoints associated with our nginx-service using the kubectl get endpoints nginx-service command (a sample is sketched after this list):

    Here, the nginx-service has endpoints corresponding to both Pods:

    • 10.244.1.10:80 (associated with nginx-pod-1)

    • 10.244.2.20:80 (associated with nginx-pod-2)

  5. Finally, we can compare the IP addresses from the Endpoints with the Pod IPs to verify that they match.
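A sketch of the Endpoints check; the addresses match the Pod IPs assumed above, and the AGE value is illustrative.

```bash
kubectl get endpoints nginx-service
# NAME            ENDPOINTS                        AGE
# nginx-service   10.244.1.10:80,10.244.2.20:80    5m
```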

Remember that these IP addresses are internal to the cluster and are used for communication between services and Pods. The Service abstraction ensures seamless connectivity without exposing individual Pod IPs externally.
