
Kubernetes - Canary Deployments with Commands
In real-world applications, we often want to roll out new versions slowly, not all at once. This helps us test new features safely, reduce risk, and catch issues early before they affect everyone. This is where Canary Deployments come in.
Read this chapter to learn what Canary Deployments are and why they are useful. We will also highlight the differences between Blue-Green and Canary Deployments, and then implement a Canary Deployment step by step on Kubernetes using easy-to-follow commands.
What is a Canary Deployment?
A Canary Deployment means we route a small portion of traffic to a new version of our app while the majority still goes to the stable version. If everything looks good, we slowly increase the traffic to the new version. If problems appear, we can easily roll back with minimal impact. The name comes from the "canary in a coal mine" − if the canary stays healthy, conditions are safe for everyone else.
Benefits of Canary Deployments
Given below are the major advantages of using Canary Deployments −
- Minimize Risk − Only a small group of users see the new version initially.
- Quick Rollback − Easy to revert if something breaks.
- Real User Testing − Monitor real usage before full rollout.
Blue-Green vs Canary Deployment: What's the Difference?
The following table highlights the differences between Blue-Green and Canary Deployments −
Aspect | Blue-Green Deployment | Canary Deployment |
---|---|---|
Traffic switch | Instant, 100% switch from old (Blue) to new (Green) | Gradual, small % of traffic goes to new version first |
Risk | Higher (if there's a bug, 100% users are hit immediately) | Lower (only a few users get the new version first) |
Rollback | Just switch Service back to Blue | Scale back canary replicas or increase old version replicas |
Strategy | Two complete environments running at the same time | Two versions running, traffic proportionally shared |
Complexity | Simple to understand and manage | Slightly more complex (balancing traffic % over time) |
How Does Canary Deployment Work in Kubernetes?
In Kubernetes, we typically use Deployments and Services to handle Canary rollouts. Here's the basic idea −
- We create two Deployments: Stable (current version) and Canary (new version).
- Both Deployments are behind the same Service.
- We control the number of Pods for each Deployment to adjust the traffic split.
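Because the Service spreads traffic across all matching Pods, the canary's share of traffic is simply its fraction of the total replicas. Here is a quick sketch of that arithmetic (plain shell, no cluster needed; the replica counts match the setup below):

```shell
# Traffic share per version, assuming the Service balances evenly across Pods
blue=5    # replicas of the stable Deployment
green=1   # replicas of the canary Deployment
total=$((blue + green))
echo "canary share: $((100 * green / total))%"   # integer division, ~17% in reality
echo "stable share: $((100 * blue / total))%"
```

To shift more traffic to the canary, you only change the replica counts − no Service or routing changes are needed.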
Step-by-Step: Implementing a Simple Canary Deployment
Let's set up a very simple canary rollout together.
Here are the prerequisites −
- Kubernetes cluster (Minikube, KIND, kubeadm, or cloud cluster)
- kubectl installed and connected
- Docker
Create the Stable Version (v1)
First, let's deploy the stable version of our app.
Create deployment-blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-blue
spec:
  replicas: 5
  selector:
    matchLabels:
      app: webapp
      version: blue
  template:
    metadata:
      labels:
        app: webapp
        version: blue
    spec:
      containers:
      - name: webapp
        image: tutorialspoint/webapp:v1
        ports:
        - containerPort: 3000
This creates 5 Pods of the blue (v1) version.
Create service-webapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: ClusterIP
This Service will load-balance traffic to all Pods with the label app: webapp. Note that the selector deliberately omits the version label, so it will match both the blue and the green Pods.
Apply the Manifests
$ kubectl apply -f deployment-blue.yaml

Output

deployment.apps/webapp-blue created

$ kubectl apply -f service-webapp.yaml

Output

service/webapp-service created
Now, check if everything is running properly −
$ kubectl get pods

Output

NAME                           READY   STATUS    RESTARTS   AGE
webapp-blue-75c548b4df-5gdr2   1/1     Running   0          79s
webapp-blue-75c548b4df-frtmb   1/1     Running   0          79s
webapp-blue-75c548b4df-gmx8n   1/1     Running   0          79s
webapp-blue-75c548b4df-h8kck   1/1     Running   0          79s
webapp-blue-75c548b4df-lxgwb   1/1     Running   0          79s

$ kubectl get svc

Output

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   3h34m
webapp-service   ClusterIP   10.96.111.253   <none>        80/TCP    74s
Deploy the Canary Version (v2)
Now let's create the canary version (small % of traffic).
Create deployment-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-green
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      version: green
  template:
    metadata:
      labels:
        app: webapp
        version: green
    spec:
      containers:
      - name: webapp
        image: tutorialspoint/webapp:v2
        ports:
        - containerPort: 3000
We only create 1 Pod for the green (v2) version.
Apply the Manifest
$ kubectl apply -f deployment-green.yaml
Output
deployment.apps/webapp-green created
Verify Both Versions Are Running
$ kubectl get pods -l app=webapp
Output
NAME                            READY   STATUS    RESTARTS   AGE
webapp-blue-75c548b4df-5gdr2    1/1     Running   0          3m19s
webapp-blue-75c548b4df-frtmb    1/1     Running   0          3m19s
webapp-blue-75c548b4df-gmx8n    1/1     Running   0          3m19s
webapp-blue-75c548b4df-h8kck    1/1     Running   0          3m19s
webapp-blue-75c548b4df-lxgwb    1/1     Running   0          3m19s
webapp-green-58bbfc8cbf-xqbr2   1/1     Running   0          24s
Send Traffic to Both Versions
Since both the blue and green Deployments have the same app: webapp label, our Service (webapp-service) automatically load-balances traffic between them.
Traffic distribution −
- ~ 83% (5 out of 6) will hit the blue Pods
- ~ 17% (1 out of 6) will hit the green Pod
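We cannot reach a live cluster from this page, but the expected split can be illustrated by simulating dispatch over the 6 Pods. The sketch below assumes strict round-robin for simplicity; kube-proxy actually picks backends randomly by default, so real counts will only approximate these numbers:

```shell
# Simulate 60 requests dispatched round-robin over 6 Pods (5 blue, 1 green)
blue=0; green=0
for i in $(seq 1 60); do
  if [ $((i % 6)) -eq 0 ]; then
    green=$((green + 1))   # every 6th request lands on the single green Pod
  else
    blue=$((blue + 1))
  fi
done
echo "blue=$blue green=$green"
```

Over 60 requests, roughly 50 hit blue and 10 hit green − the ~83%/17% split described above.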
Testing the Canary Deployment
We can test by port-forwarding the Service −

$ kubectl port-forward svc/webapp-service 8080:80

Output

Forwarding from 127.0.0.1:8080 -> 3000
Handling connection for 8080
Handling connection for 8080

Now open the browser and go to http://localhost:8080. Keep in mind that kubectl port-forward tunnels to a single backing Pod, so to observe the actual blue/green split you would send repeated requests from inside the cluster (for example, from a temporary curl Pod).

Gradually Increase the Replicas for Green
Example
$ kubectl scale deployment webapp-green --replicas=3
Output
deployment.apps/webapp-green scaled

$ kubectl scale deployment webapp-blue --replicas=3

Output
Output
deployment.apps/webapp-blue scaled
Now, roughly 50% of traffic goes to green (3 of the 6 Pods).
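A full promotion is usually done in several such steps. The sketch below only prints the scale commands a staged rollout script might run − the one-replica step size and the 6-replica total are assumptions for illustration, not requirements:

```shell
# Print a staged promotion plan: shift one replica at a time from blue to green
total=6
for green in 1 2 3 4 5 6; do
  blue=$((total - green))
  echo "kubectl scale deployment webapp-green --replicas=$green"
  echo "kubectl scale deployment webapp-blue --replicas=$blue"
  # a real script would pause here and check metrics before continuing
done
```

In practice, each step is followed by a soak period in which you watch error rates and latency before moving on.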
Quick Rollback
If something goes wrong with the green deployment −
- Scale down green Pods
- Scale up blue Pods
Example
$ kubectl scale deployment webapp-green --replicas=0
Its output would be −
deployment.apps/webapp-green scaled
Now, let's see how to scale up blue pods −
$ kubectl scale deployment webapp-blue --replicas=6
Its output would be −
deployment.apps/webapp-blue scaled
Full Rollout (Optional)
To fully roll out the green version, delete the blue Deployment −
$ kubectl delete deployment webapp-blue
Its output would be −
deployment.apps "webapp-blue" deleted
Now, run the following command −
$ kubectl scale deployment webapp-green --replicas=6
Its output would be −
deployment.apps/webapp-green scaled
Now 100% of users are on the green version.
Advanced Canary Techniques (Beyond Basics)
In production, teams often make Canary Deployments even smarter by using −
- Progressive delivery controllers like Argo Rollouts or Flagger
- Advanced routing using Service Meshes like Istio
- Metrics-based promotion (automatically promoting or rolling back the new version based on health metrics)
These tools allow: traffic splitting by percentages, metrics-based analysis, and automatic promotion or rollback.
Conclusion
Canary Deployments provide a safe and controlled approach to rolling out updates in Kubernetes environments. By gradually introducing a new version to a subset of users, developers can minimize risks, quickly identify and address issues, and test new features under real-world conditions.
While slightly more complex than Blue-Green Deployments, the benefits of reduced impact during rollbacks and real-time feedback from actual usage make Canary Deployments a valuable strategy for modern application delivery.
Using advanced tools like Argo Rollouts, Flagger, and Service Meshes can further optimize and automate this process, making it an integral part of progressive delivery pipelines.