Let's explore the concept of Canary deployment
We have an existing application that runs on the "httpd:alpine" image. Now let's switch to the "nginx:alpine" image, and the change should take place gradually. Let's use the Canary deployment approach here.
First, create a new Deployment called deploymentcanary-v2 and configure it to use the "nginx:alpine" image.
Next, set up traffic splitting so that 20% of incoming requests go to the new "nginx:alpine" version, while the remaining 80% continue to hit the old "httpd:alpine" version.
Monitor the behavior of both versions while ensuring that the total number of Pods across both Deployments remains at 10.
This gradual transition allows us to test the new image without disrupting the system. It's like experimenting in a controlled lab environment!
Creating Deployments and Services
Create version 1 of our application with the Deployment and Service definition files given below.
Creating the Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deploymentCanary
  name: deploymentcanary-v1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: deploymentCanary
  template:
    metadata:
      labels:
        app: deploymentCanary
    spec:
      containers:
      - image: httpd:alpine
        name: httpd
Apply the Deployment:
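A minimal sketch, assuming the manifest above is saved as deployment-v1.yaml (the filename is illustrative):
kubectl apply -f deployment-v1.yaml
# check that all 10 replicas come up
kubectl get deployment deploymentcanary-v1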
Creating the Service
Now let's create the Service that points to the Deployment:
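A sketch of such a Service, assuming port 80 and the nodePort 30080 used later in this walkthrough; the name is lowercased to servicecanary because Kubernetes rejects uppercase Service names:
apiVersion: v1
kind: Service
metadata:
  name: servicecanary        # Service names must be lowercase DNS labels
spec:
  type: NodePort
  selector:
    app: deploymentCanary    # matches the Pods of the canary Deployments
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080          # example nodePort referenced later in the text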
Create the Service:
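Assuming the Service manifest is saved as service-canary.yaml (again, an illustrative filename):
kubectl apply -f service-canary.yaml
# confirm the Service and its nodePort
kubectl get service servicecanary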
Here we have a NodePort Service named "servicecanary" (Kubernetes requires lowercase Service names). Its selector matches the Pods of the "deploymentCanary" application via the app: deploymentCanary label.
Testing the Deployment and Service
Create an Ubuntu pod:
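A sketch of one way to start a throwaway Ubuntu Pod; the Pod name is arbitrary, and curl has to be installed because the stock ubuntu image does not include it:
kubectl run ubuntu --image=ubuntu -it --rm --restart=Never -- bash
# inside the Pod: install curl first
apt-get update && apt-get install -y curl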
Execute a curl request against the Service's ClusterIP to check that the application responds; at this point only the httpd version is deployed:
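Assuming the Service sketched above (DNS name servicecanary, port 80):
# from inside the Ubuntu Pod; the Service name resolves through cluster DNS
curl http://servicecanary
# at this stage every response should be the httpd welcome page ("It works!")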
Use the node's IP address and the specified nodePort (e.g., 30080) to access the Service externally:
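A sketch, with <node-ip> standing in for the address of any cluster node:
curl http://<node-ip>:30080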
Reduce the replicas of the old deployment to 8
Change the number of replicas of deploymentcanary-v1 from 10 to 8.
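One way to do this is kubectl scale; editing replicas: 8 in the original manifest and re-applying it works just as well:
kubectl scale deployment deploymentcanary-v1 --replicas=8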
Create the new canary Deployment
Create a new Deployment, deploymentcanary-v2, that uses the nginx:alpine image and carries the same app: deploymentCanary label.
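A minimal sketch of the canary manifest; the replica count of 2 is an assumption that follows from the 20% target with 10 total Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deploymentCanary
  name: deploymentcanary-v2
spec:
  replicas: 2                  # 2 of 10 total Pods, roughly 20% of traffic
  selector:
    matchLabels:
      app: deploymentCanary    # same label as v1, so the same Service selects it
  template:
    metadata:
      labels:
        app: deploymentCanary
    spec:
      containers:
      - image: nginx:alpine
        name: nginx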
Now, both Deployments sit behind the same Service, and with 8 httpd Pods and 2 nginx Pods the traffic splits roughly 80/20.
Apply the change:
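Assuming the canary manifest above is saved as deployment-v2.yaml (illustrative filename):
kubectl apply -f deployment-v2.yaml
# 8 httpd Pods and 2 nginx Pods should now share the label
kubectl get pods -l app=deploymentCanary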
Testing the Deployment and Service
Create a busybox pod:
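A sketch of starting a throwaway busybox Pod; the Pod name is arbitrary:
kubectl run busybox --image=busybox -it --rm --restart=Never -- sh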
Execute a request against the Service to check whether the nginx page is now being served; with 8 httpd Pods and 2 nginx Pods behind it, roughly 20% of responses should come from nginx (note that busybox ships wget rather than curl):
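A sketch, assuming the servicecanary Service from earlier:
# from inside the busybox Pod (busybox provides wget, not curl)
wget -qO- http://servicecanary
# run the command several times; roughly 2 in 10 responses should be
# the "Welcome to nginx!" page, the rest the httpd "It works!" page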
Use the node's IP address and the specified nodePort (e.g., 30080) to access the Service externally and observe the same mix of responses:
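As before, with <node-ip> standing in for any node's address; repeated requests should show the same roughly 80/20 split:
curl http://<node-ip>:30080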
Comparing Endpoint IPs with Pods
Endpoints represent the IP addresses of one or more Pods dynamically assigned to a Service.
First, let's assume we have two Pods associated with our Nginx Deployment. We'll use the following Pod names:
nginx-pod-1
nginx-pod-2
Now, let's retrieve the IP addresses of these Pods using the kubectl get pods -o wide command:
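The command, followed by illustrative output trimmed to the relevant columns (the node names are hypothetical):
kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
nginx-pod-1   1/1     Running   0          5m    10.244.1.10   node-1
nginx-pod-2   1/1     Running   0          5m    10.244.2.20   node-2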
In this example:
nginx-pod-1 has an IP address of 10.244.1.10.
nginx-pod-2 has an IP address of 10.244.2.20.
Let's get more information about the nginx-service using kubectl describe:
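The corresponding command:
kubectl describe service nginx-service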
This command will provide detailed information about the service, including its IP address, ports, and associated endpoints.
Next, let's check the Endpoints associated with our nginx-service using the kubectl get endpoints nginx-service command:
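The command, with illustrative output consistent with the example IPs:
kubectl get endpoints nginx-service
NAME            ENDPOINTS                       AGE
nginx-service   10.244.1.10:80,10.244.2.20:80   5m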
Here, the nginx-service has endpoints corresponding to both Pods:
10.244.1.10:80 (associated with nginx-pod-1)
10.244.2.20:80 (associated with nginx-pod-2)
Finally, we can compare the IP addresses from the Endpoints with the Pod IPs to verify that they match.
Remember that these IP addresses are internal to the cluster and are used for communication between services and Pods. The Service abstraction ensures seamless connectivity without exposing individual Pod IPs externally.