
Kubernetes - Multi-Cluster Management
Multi-Cluster Management refers to the practice of operating multiple Kubernetes clusters, which can be distributed across various environments such as on-premises data centers, public clouds, or edge locations. Each cluster operates independently, but collectively, they serve the broader objectives of scalability, redundancy, and specialized workload management.
However, managing multiple Kubernetes clusters can be complex, requiring efficient orchestration, security, and resource management.
In this chapter, we'll explore how to set up and use Rancher and KubeFed to streamline multi-cluster management. These tools provide centralized control, automated cluster provisioning, workload distribution, and high availability across multiple clusters.
Benefits of Multi-Cluster Management
Before diving into the setup, let's look at why multi-cluster management is necessary:
- High Availability: Distributing workloads across multiple clusters prevents single points of failure.
- Disaster Recovery: Having multiple clusters ensures business continuity in case of cluster failures.
- Geographical Distribution: Running clusters in different locations improves latency and compliance with data regulations.
- Resource Optimization: Different workloads can be deployed in clusters based on performance and cost efficiency.
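Most multi-cluster tooling builds on kubeconfig contexts, with one context per cluster. As a sketch, a kubeconfig that knows about two clusters might look like this (cluster names, server addresses, and user entries are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: on-prem-cluster                # placeholder cluster name
    cluster:
      server: https://10.0.0.10:6443     # placeholder API server address
  - name: cloud-cluster
    cluster:
      server: https://203.0.113.5:6443
contexts:
  - name: on-prem
    context:
      cluster: on-prem-cluster
      user: on-prem-admin
  - name: cloud
    context:
      cluster: cloud-cluster
      user: cloud-admin
users:
  - name: on-prem-admin
    user: {}                             # credentials omitted in this sketch
  - name: cloud-admin
    user: {}
current-context: on-prem
```

You can then switch between clusters with `kubectl config use-context cloud`, or target one cluster for a single command with `kubectl --context=cloud get nodes`.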
Centralized Management through Rancher
Rancher is an open-source platform that provides centralized management for multiple Kubernetes clusters. It offers:
- Unified Cluster Management: A single pane of glass to manage all clusters, regardless of their location or provider.
- Integrated Monitoring and Alerts: Built-in tools to monitor cluster health and set up alerts for anomalies.
- Application Catalog: A curated list of applications that can be deployed across clusters with ease.
Installing and Using Rancher for Multi-Cluster Management
To get started with Rancher, we'll deploy it using Helm:
$ helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
Output
"rancher-latest" has been added to your repositories
Update the repositories:
$ helm repo update
Output
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metrics-server" chart repository
...Successfully got an update from the "kubernetes-dashboard" chart repository
...Successfully got an update from the "kubelet-csr-approver" chart repository
...Successfully got an update from the "rimusz" chart repository
...Successfully got an update from the "rancher-latest" chart repository
Update Complete. ⎈Happy Helming!⎈
Create a namespace:
$ kubectl create namespace cattle-system
Output
namespace/cattle-system created
Install Cert-Manager
Cert-Manager is essential for managing TLS certificates in Rancher deployments:
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.crds.yaml
Output
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
Add jetstack to your repository:
$ helm repo add jetstack https://charts.jetstack.io
Output
"jetstack" has been added to your repositories
Update the repositories:
$ helm repo update
Output
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metrics-server" chart repository
...Successfully got an update from the "kubelet-csr-approver" chart repository
...Successfully got an update from the "kubernetes-dashboard" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "rimusz" chart repository
Install cert-manager:
$ helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
Output
NAME: cert-manager
LAST DEPLOYED: Sat Mar 22 15:55:56 2025
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.17.1 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision Certificates for Ingress resources, take a look at the `ingress-shim` documentation:
https://cert-manager.io/docs/usage/ingress/
Verify Installation
Let's now verify the installation. Check if cert-manager is running:
$ kubectl get pods -n cert-manager
Output
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6794b8d569-lpbd8              1/1     Running   0          2m40s
cert-manager-cainjector-7f69cd69f7-ntqmf   1/1     Running   0          2m40s
cert-manager-webhook-6cc5dccc4b-dj5k7      1/1     Running   0          2m40s
Install Rancher with TLS (self-signed certificates):
$ helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --create-namespace \
  --set hostname=rancher.local \
  --set bootstrapPassword=admin \
  --set tls=ingress
Output
NAME: rancher
LAST DEPLOYED: Sat Mar 22 16:00:07 2025
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES: Rancher Server has been installed.
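The `--set` flags used above can equivalently live in a values file, which is easier to version-control. A minimal sketch mirroring exactly the settings from this chapter:

```yaml
# values.yaml -- same settings as the --set flags used above
hostname: rancher.local
bootstrapPassword: admin
tls: ingress
```

Install with `helm install rancher rancher-latest/rancher --namespace cattle-system --create-namespace -f values.yaml`.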
Verify Rancher Installation
Check if Rancher pods are running:
$ kubectl get pods -n cattle-system
Output
NAME                       READY   STATUS    RESTARTS   AGE
rancher-78f6c5b87b-vq2kd   1/1     Running   0          5m
rancher-78f6c5b87b-x5l7q   1/1     Running   0          5m
rancher-78f6c5b87b-znshd   1/1     Running   0          5m
Access the Rancher UI
$ kubectl get svc -n cattle-system
Output
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)         AGE
rancher   LoadBalancer   10.43.200.1   172.28.55.22   443:32443/TCP   5m
Note the external IP address, then open a browser and visit: https://172.28.55.22
Log in as Admin:
- Username: admin
- Password: Run this command to retrieve the bootstrap password:
$ kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
Copy the generated password and use it to log in. After logging in, change the default password for security reasons.

Add Clusters to Rancher
Once logged into Rancher, click on "Cluster Management" from the left sidebar.

In the Clusters section, click "Create" to add a new cluster or "Import Existing" to bring in an existing Kubernetes cluster.

Choose the cluster type based on your setup:
- Existing Kubernetes Cluster: Import using the kubectl command provided by Rancher.
- Cloud Provider: Deploy on AWS, GCP, Azure, or other cloud environments.
- Custom Cluster: Set up on-premise or bare metal infrastructure.
Fill in the required details such as:
- Cluster Name
- Kubernetes Version
- Container Network Plugin (e.g., Calico, Flannel)
- Cloud Provider (if applicable)
Click "Create" to initialize the cluster setup.

Now, you can simply follow the on-screen instructions to add multiple clusters and register them with Rancher.
Deploy Workloads Across Clusters
Once your clusters are added, you can deploy workloads by following these steps:
- Navigate to "Cluster Management" → Select the cluster where you want to deploy workloads.
- Click on "Projects/Namespaces" to manage workloads within a specific namespace.
- Choose a target Project or create a new one if needed.
- Deploy applications using one of the following methods:
- Helm Charts: Navigate to "Apps" → "Charts", select a Helm chart, configure values, and deploy.
- YAML Manifests: Click "Deploy", upload or paste YAML configuration files, and apply them.
- Rancher UI: Use the "Workloads" section to create deployments, stateful sets, or daemon sets directly.
- Monitor deployments using the "Workloads" and "Monitoring" tabs to check logs, resource usage, and pod statuses.
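For the YAML-manifest option above, a minimal Deployment you could paste into Rancher might look like the following sketch (the name and image are examples, not from this chapter):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web              # example name
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: demo-web
          image: nginx:1.27   # example image tag
          ports:
            - containerPort: 80
```

Rancher applies this to the cluster you selected, and the resulting pods show up under the "Workloads" tab.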
Installing and Using KubeFed (Kubernetes Federation)
KubeFed allows us to control multiple clusters from a single Kubernetes API server.
Create the Namespace
$ kubectl create namespace kube-federation-system
Output
namespace/kube-federation-system created
Add Helm Chart Repository and Install KubeFed
$ helm repo add kubefed-charts https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
Output
"kubefed-charts" has been added to your repositories
Install KubeFed:
$ helm install kubefed kubefed-charts/kubefed \
  --namespace kube-federation-system \
  --set controller.manager.replicaCount=2
Output
NAME: kubefed
LAST DEPLOYED: Sun Mar 23 11:07:27 2025
NAMESPACE: kube-federation-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
This command installs KubeFed with two replicas of the controller manager.
Verify Installation
$ kubectl get pods -n kube-federation-system
Output
NAME                                          READY   STATUS    RESTARTS   AGE
kubefed-admission-webhook-7d6b469599-tf949    1/1     Running   0          5m8s
kubefed-controller-manager-6d48f577c6-286l5   1/1     Running   0          4m30s
kubefed-controller-manager-6d48f577c6-l8ng4   1/1     Running   0          4m32s
Register Multiple Clusters with KubeFed
We'll first set the context:
$ kubectl config use-context tutorialspoint
Output
Switched to context "tutorialspoint"
We'll now register another cluster (e.g., master-node) with KubeFed:
$ kubefedctl join master-node --host-cluster-context=tutorialspoint --v=2
Output
Creating a federated cluster resource "master-node" in namespace "kube-federation-system"...
Successfully created federated cluster "master-node".
Creating a cluster registry entry for "master-node"...
Successfully created cluster registry entry for "master-node".
Creating RBAC resources to allow the KubeFed control plane to access "master-node"...
Successfully created RBAC resources.
Creating a service account in "master-node"...
Successfully created service account.
Creating a cluster role and binding for the service account in "master-node"...
Successfully created cluster role and binding.
Creating a kubeconfig secret to store cluster credentials in the host cluster...
Successfully created kubeconfig secret.
Federation of cluster "master-node" is complete.
Replace:
- tutorialspoint with the context of your host cluster (the cluster running the KubeFed control plane).
- master-node with the name of the cluster you want to join to the federation.
Verify Registered Clusters
$ kubectl get federatedclusters -n kube-federation-system
Output
NAME          READY   AGE
master-node   True    5s
Deploy Applications Across Clusters
Create a Federated Namespace
$ kubectl create ns my-app
Output
namespace/my-app created
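With KubeFed, namespace propagation to member clusters is itself driven by a federated resource. As a sketch, a FederatedNamespace that places my-app on both clusters used in this chapter would look like:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: my-app        # must match the namespace it federates
  namespace: my-app
spec:
  placement:
    clusters:
      - name: master-node
      - name: tutorialspoint
```

Apply it with `kubectl apply -f` like any other manifest; the KubeFed controller then creates the namespace in each listed cluster.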
Create and Apply a Federated Deployment
Now, using an editor, create a new YAML file:
$ nano my-app-federated-deployment.yaml
Add the following content to the file:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app-deployment
  namespace: my-app
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: nginx
              ports:
                - containerPort: 80
  placement:
    clusters:
      - name: master-node
      - name: tutorialspoint
Save the file and exit.
Now, apply the configuration:
$ kubectl apply -f my-app-federated-deployment.yaml
Output
federateddeployment.types.kubefed.io/my-app-deployment created
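FederatedDeployment also supports per-cluster overrides, which is how you vary a deployment between clusters without duplicating the manifest. A hedged sketch that keeps 2 replicas on master-node but scales tutorialspoint to 3 (the replica counts are illustrative):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app-deployment
  namespace: my-app
spec:
  template:
    # ...same Deployment template as above...
  placement:
    clusters:
      - name: master-node
      - name: tutorialspoint
  overrides:
    - clusterName: tutorialspoint
      clusterOverrides:
        - path: "/spec/replicas"   # JSON-pointer path into the template
          value: 3
```

Overrides are applied by the KubeFed controller after rendering the template, so the base template stays identical across clusters.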
Verify Deployment Across Clusters
kubectl has no built-in flag to query every cluster at once, so check each member cluster by context:
$ kubectl --context=tutorialspoint get pods -n my-app
Output
NAME                                 READY   STATUS    RESTARTS   AGE
my-app-deployment-6c4f67d4cc-m9k8x   1/1     Running   0          10s
$ kubectl --context=master-node get pods -n my-app
Output
NAME                                 READY   STATUS    RESTARTS   AGE
my-app-deployment-6c4f67d4cc-px2jb   1/1     Running   0          10s
This confirms that the application is successfully deployed across both clusters (tutorialspoint and master-node).
Best Practices for Managing Multiple Clusters
Make a note of the following best practices and apply them while managing multiple clusters:
- Use GitOps for Configuration: Manage cluster configurations with Git for version control and consistency. Tools like Argo CD and Flux keep deployments in sync automatically.
- Centralize IAM: Ensure consistent authentication and security policies across clusters by integrating with AWS IAM, Azure AD, or similar platforms.
- Secure Networking: Use service meshes like Istio or VPNs to enable safe and reliable communication between clusters.
- Monitor Proactively: Set up centralized monitoring with tools like Prometheus and the ELK Stack to detect and resolve issues quickly.
- Automate Backups and Recovery: Regular backups prevent data loss. Tools like Velero simplify backup and disaster recovery.
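As an illustration of the GitOps point above, an Argo CD Application that syncs a Git path into one cluster might look like the following sketch (the repository URL and path are placeholders, not real endpoints):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                  # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/deploy-configs.git  # placeholder repo
    targetRevision: main
    path: clusters/on-prem/my-app                            # placeholder path
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift to match Git
```

One such Application per cluster (each with its own destination) keeps every cluster's state pinned to a reviewable Git history.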
Conclusion
Managing multiple Kubernetes clusters is essential for achieving high availability, disaster recovery, and workload distribution across different environments. This guide has demonstrated how Rancher and KubeFed simplify multi-cluster management by providing centralized control, automated provisioning, and efficient workload distribution.
Rancher offers a unified management interface, making it easy to deploy, monitor, and operate multiple clusters. Meanwhile, KubeFed enables federated control, allowing workloads to be synchronized across clusters seamlessly. By leveraging these tools, organizations can enhance resilience, optimize resource utilization, and streamline Kubernetes operations across hybrid and multi-cloud environments.