Kubernetes Blue-Green Deployments
Our critical application requires a swift version upgrade without downtime
Summary
Our application is currently running version 1, which uses the httpd:alpine image. We plan to update it to version 2, which uses the nginx:alpine image. The change needs to be implemented immediately, without downtime. Once the update is complete, new requests should be routed to the containers running the version 2 nginx:alpine image.
Creating Deployments and Services
Create version 1 (Blue) of our application with the Deployment and Service definition files shown below.
Creating the Deployment
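A minimal sketch of what the version 1 (Blue) Deployment manifest could look like. The Deployment name bluegreen, the file name, the replica count of 4, and the version: v1 label are assumptions chosen to mirror the bluegreen-v2 Deployment described later; only the httpd:alpine image and the app: bluegreen label come from the summary above.

```yaml
# bluegreen-v1.yaml -- version 1 (Blue) Deployment (name and replica count are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluegreen
spec:
  replicas: 4
  selector:
    matchLabels:
      app: bluegreen
      version: v1
  template:
    metadata:
      labels:
        app: bluegreen
        version: v1
    spec:
      containers:
      - name: httpd
        image: httpd:alpine
        ports:
        - containerPort: 80
```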
Apply the Deployment:
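Assuming the manifest above was saved as bluegreen-v1.yaml:

```bash
kubectl apply -f bluegreen-v1.yaml
kubectl get deployments,pods -l app=bluegreen   # verify the Pods reach the Running state
```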
Creating the Service
Now let's create the Service that points to the bluegreen Deployment:
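A sketch of the Service, assuming the name bluegreen-svc and the nodePort 30080 mentioned in the testing step below. The selector includes version: v1 so that only the Blue Pods receive traffic for now.

```yaml
# bluegreen-svc.yaml -- Service pointing at the Blue (v1) Pods (name is illustrative)
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-svc
spec:
  type: NodePort
  selector:
    app: bluegreen
    version: v1
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
```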
Create the Service:
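Assuming the manifest above was saved as bluegreen-svc.yaml:

```bash
kubectl apply -f bluegreen-svc.yaml
kubectl get service bluegreen-svc   # note the ClusterIP and the 80:30080/TCP mapping
```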
Testing the Deployment and Service
Create an Ubuntu pod:
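One way to get a throwaway Ubuntu Pod with curl available; the Pod name is arbitrary, and any test image that already ships curl would work just as well.

```bash
kubectl run ubuntu --image=ubuntu --restart=Never -- sleep 3600
kubectl exec ubuntu -- bash -c "apt-get update && apt-get install -y curl"
```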
Execute a curl command to check that the Service (via its ClusterIP) is serving the httpd (version 1) welcome page:
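From inside the cluster the Service can be reached by name, assuming it is called bluegreen-svc and lives in the same namespace as the test Pod. At this point the response should be Apache httpd's default page:

```bash
kubectl exec ubuntu -- curl -s http://bluegreen-svc
# <html><body><h1>It works!</h1></body></html>
```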
Use the node's IP address and the specified nodePort (e.g., 30080) to access the service externally:
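From outside the cluster, substitute one of your node addresses; the placeholder below is not a real IP.

```bash
kubectl get nodes -o wide      # find a node's INTERNAL-IP or EXTERNAL-IP
curl http://<node-ip>:30080
```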
Creating new Deployment bluegreen-v2
Next, let's create a new Deployment named bluegreen-v2 that uses the nginx:alpine image with 4 replicas. Its Pods should have the labels app: bluegreen and version: v2.
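A sketch of the bluegreen-v2 Deployment; everything except the assumed file name follows the requirements above.

```yaml
# bluegreen-v2.yaml -- version 2 (Green) Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluegreen-v2
spec:
  replicas: 4
  selector:
    matchLabels:
      app: bluegreen
      version: v2
  template:
    metadata:
      labels:
        app: bluegreen
        version: v2
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
```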
Create the Deployment:
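Assuming the manifest above was saved as bluegreen-v2.yaml:

```bash
kubectl apply -f bluegreen-v2.yaml
kubectl get pods -l app=bluegreen,version=v2   # wait until all 4 Pods are Running
```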
Modifying the Service
Now let's modify the Service so that it points to the bluegreen-v2 Deployment:
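With the assumed manifest from earlier, the only change needed is the version label in the Service selector:

```yaml
# bluegreen-svc.yaml -- selector switched from version: v1 to version: v2
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-svc
spec:
  type: NodePort
  selector:
    app: bluegreen
    version: v2      # the cut-over: traffic now goes to the Green (v2) Pods
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
```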
Apply the change:
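Using the same assumed file name as before:

```bash
kubectl apply -f bluegreen-svc.yaml
kubectl describe service bluegreen-svc | grep -i selector   # should now show version=v2
```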
Testing the Deployment and Service
Reuse the Ubuntu pod created earlier (or create it again if it was deleted).
Execute a curl command to check that the Service now returns the Nginx (version 2) welcome page:
Use the node's IP address and the specified nodePort (e.g., 30080) to access the Nginx service externally. Both checks are sketched below:
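Repeating the same checks with the same assumed names as before, the responses should now come from nginx instead of httpd:

```bash
kubectl exec ubuntu -- curl -s http://bluegreen-svc | grep title
# <title>Welcome to nginx!</title>

curl http://<node-ip>:30080     # substitute a real node address
```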
Comparing Endpoint IPs with Pods
An Endpoints object holds the IP addresses of the Pods currently selected by a Service; Kubernetes updates it dynamically as matching Pods are created and deleted.
First, let's assume we have two Pods associated with our Nginx Deployment. We'll use the following Pod names:
- nginx-pod-1
- nginx-pod-2
Now, let's retrieve the IP addresses of these Pods using the kubectl get pods -o wide command. In this example:
- nginx-pod-1 has an IP address of 10.244.1.10.
- nginx-pod-2 has an IP address of 10.244.2.20.
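A sketch of that command and its output; the Pod names and IPs come from the example above, while the node names and remaining columns are illustrative.

```bash
$ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
nginx-pod-1   1/1     Running   0          5m    10.244.1.10   node01
nginx-pod-2   1/1     Running   0          5m    10.244.2.20   node02
```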
Let's get more information about the nginx-service using kubectl describe:
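A trimmed sketch of the describe output for this example; the selector, type, and cluster IP shown here are illustrative, while the endpoint addresses match the Pod IPs above.

```bash
$ kubectl describe service nginx-service
Name:              nginx-service
Selector:          app=nginx
Type:              ClusterIP
IP:                10.96.45.123
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.10:80,10.244.2.20:80
```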
This command will provide detailed information about the service, including its IP address, ports, and associated endpoints.
Next, let's check the Endpoints associated with our nginx-service using the kubectl get endpoints nginx-service command. Here, the nginx-service has endpoints corresponding to both Pods:
- 10.244.1.10:80 (associated with nginx-pod-1)
- 10.244.2.20:80 (associated with nginx-pod-2)
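And the corresponding Endpoints listing; only the two endpoint addresses come from the example, the AGE value is illustrative.

```bash
$ kubectl get endpoints nginx-service
NAME            ENDPOINTS                       AGE
nginx-service   10.244.1.10:80,10.244.2.20:80   7m
```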
Finally, we can compare the IP addresses from the Endpoints with the Pod IPs to verify that they match.
Remember that these IP addresses are internal to the cluster and are used for communication between services and Pods. The Service abstraction ensures seamless connectivity without exposing individual Pod IPs externally.