
Kubernetes - Deploying Microservices
As developers, we're constantly working to make our applications more scalable, modular, and easier to manage. Microservices architecture helps us achieve that by breaking down a big monolithic application into smaller, independently deployable services. And when it comes to managing and scaling those services, Kubernetes is the go-to platform.
In this guide, we'll walk through how to deploy a basic microservices application on a Kubernetes cluster. We'll cover the setup, deployment of services, and how they talk to each other. Let's dive in!
What are Microservices?
Before we get hands-on, let's quickly understand what microservices are. Instead of building a single, massive application (called a monolith), we can break it into small, self-contained services. Each service does one thing − like user management, payments, or product catalog − and communicates with others via APIs (usually REST or gRPC).
Each microservice:
- Can be developed and deployed independently
- Has its own database or storage
- Can scale independently
This gives us flexibility and fault isolation − if one service fails, the whole app doesn't crash.
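To make this concrete, here's a sketch of what a tiny backend microservice could look like. This is purely illustrative: a hypothetical Flask app (the actual backend image used in this chapter may be implemented differently). We only assume it serves a product list on port 5000, which matches the container port we use later.

# app.py - a hypothetical minimal "products" microservice
from flask import Flask, jsonify

app = Flask(__name__)

# A hard-coded list stands in for a real database
PRODUCTS = [
    {"id": 1, "name": "Keyboard"},
    {"id": 2, "name": "Mouse"},
]

@app.route("/products")
def list_products():
    # Each microservice exposes its data over a simple REST API
    return jsonify(PRODUCTS)

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable via the pod's IP
    app.run(host="0.0.0.0", port=5000)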
Prerequisites
Before we deploy, here's what we need:
- A running Kubernetes cluster (Minikube, KIND, or cloud-based like GKE/EKS/AKS)
- kubectl installed and configured
- Docker (to build and push images if needed)
- Basic knowledge of YAML and Kubernetes objects (Deployments, Services, etc.)
If you're new to Minikube or KIND, check out our earlier chapters to set them up.
Microservices Demo App
We'll deploy a simple app with two microservices:
- frontend-service: A Node.js-based web frontend.
- backend-service: A Python API that returns data − say, a list of products.
We'll containerize them, deploy each in its own pod, and expose them so they can communicate within the cluster.
Prepare Docker Images
Let's assume we have two Docker images ready:
- neviillle/backend:1.0
- neviillle/frontend:1.0
You can also check our earlier chapters on how to set up Docker images.
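If you're curious what the backend's Dockerfile might contain, here's a sketch. It's hypothetical (the build log below only tells us the backend is built on python:3.9-slim), so adjust it to your own project:

# Dockerfile (backend) - a sketch, not the exact file used here
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]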
Building and pushing Docker images:
# Backend
$ cd backend/
$ docker build -t neviillle/backend:1.0 .

Output:
[+] Building 41.8s (10/10) FINISHED                                  docker:default
 => [internal] load build definition from Dockerfile                          0.0s
 => => transferring dockerfile: 169B                                          0.0s
 => [internal] load metadata for docker.io/library/python:3.9-slim           30.7s
 => [internal] load .dockerignore                                             0.0s
 => => transferring context: 2B                                               0.0s
 => [1/5] FROM docker.io/library/python:3.9-slim@sha256:9aa5793609640ecea2f06451a0d6f37  4.9s
 => => resolve docker.io/library/python:3.9-slim@sha256:9aa5793609640ecea2f06451a0d6f37  0.0s

$ docker push neviillle/backend:1.0

Output:
970f7cb6a2b1: Pushed
c79ef58278e8: Pushed
2c39b83bbed7: Pushed
e0d134baee1f: Pushed
# Frontend
$ cd frontend/
$ docker build -t neviillle/frontend:1.0 .

Output:
 => [internal] load build context                                             0.0s
 => => transferring context: 959B                                             0.0s
 => [2/5] WORKDIR /app                                                        0.2s
 => [3/5] COPY package.json .                                                 0.0s
 => [4/5] RUN npm install                                                     3.9s
 => [5/5] COPY . .                                                            0.0s
 => exporting to image                                                        0.4s
 => => exporting layers                                                       0.3s
 => => writing image sha256:ab1eeb42aab2e35ea90195c1e4ddc3dd9ff8175cfdcac0321991e59f3e78d5c4  0.0s
 => => naming to docker.io/neviillle/frontend:1.0

$ docker push neviillle/frontend:1.0

Output:
The push refers to repository [docker.io/neviillle/frontend]
defaaed6b602: Pushed
cf9cae260d96: Pushed
73e3f926c148: Pushed
d9a57774634d: Pushed
82140d9a70a7: Mounted from library/node
f3b40b0cdb1c: Mounted from library/node
0b1f26057bd0: Mounted from library/node
08000c18d16d: Mounted from library/node
1.0: digest: sha256:fb75803435f2ba2889733835326602a8cabd69766d0d77292fb2a5a797b142da size: 1989
Create the Backend Deployment
Let's start by deploying the backend API. Create a YAML file called backend-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: neviillle/backend:1.0
        ports:
        - containerPort: 5000
Apply it:
$ kubectl apply -f backend-deployment.yaml
Output
deployment.apps/backend created
Now create a Service (backend-service.yaml) so other pods can access it:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
What it does:
- name: backend → creates a Service named backend
- selector: app: backend → routes traffic to pods carrying the app: backend label
- port: 80 → the port the Service exposes inside the cluster
- targetPort: 5000 → the port the backend container actually listens on
Since no type is specified, Kubernetes creates a ClusterIP Service (the default), which is reachable only from inside the cluster − exactly what we want for an internal API.
Apply it:
$ kubectl apply -f backend-service.yaml
Output
service/backend created
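Before moving on, it's worth confirming that the Service actually found the backend pods. kubectl get endpoints lists the pod IPs behind a Service; if the ENDPOINTS column is empty, the Service's selector doesn't match the pod labels:

$ kubectl get endpoints backend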
Create the Frontend Deployment
The frontend will call the backend using the service name http://backend.
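This works because the cluster's DNS (typically CoreDNS) resolves a Service's name to its ClusterIP from any pod in the same namespace. If you'd like to see the resolution yourself, you can spin up a throwaway pod (assuming the busybox image can be pulled in your cluster):

$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup backend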
Here's the frontend-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: neviillle/frontend:1.0
        env:
        - name: BACKEND_URL
          value: "http://backend"
        ports:
        - containerPort: 3000
Apply it:
$ kubectl apply -f frontend-deployment.yaml
Output
deployment.apps/frontend created
Now we'll expose it:
Here's the frontend-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
    nodePort: 30036
What it does:
- name: frontend → names the Service frontend
- type: NodePort → exposes the Service externally on each node's IP
- selector: app: frontend → targets pods labeled app: frontend
- port: 80 → the port accessible inside the cluster
- targetPort: 3000 → the actual port the container listens on
- nodePort: 30036 → the port opened on every node (NodePorts must fall within the default 30000−32767 range)
Apply:
$ kubectl apply -f frontend-service.yaml
Output
service/frontend created
If you're using Minikube, you can open the Service in your browser with:
$ minikube service frontend
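Alternatively, because the Service is a NodePort, you can reach it directly on the node's IP. With Minikube, something like this should work (the IP comes from your own cluster):

$ curl http://$(minikube ip):30036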
Verifying the Microservices
Check if everything is running:
$ kubectl get pods
Output
NAME                        READY   STATUS    RESTARTS   AGE
backend-799f58997c-dlb8j    1/1     Running   0          4m39s
backend-799f58997c-zrqjb    1/1     Running   0          4m39s
frontend-77b45bdbb7-fn6q5   1/1     Running   0          107s
frontend-77b45bdbb7-wh2lf   1/1     Running   0          107s
Check the services:
$ kubectl get svc
Output
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
backend      ClusterIP   10.99.232.4      <none>        80/TCP         3m36s
frontend     NodePort    10.106.125.152   <none>        80:30036/TCP   70s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        26m
Scaling Services
To scale the backend to 5 replicas:
$ kubectl scale deployment backend --replicas=5
Output
deployment.apps/backend scaled
To scale down:
$ kubectl scale deployment backend --replicas=2
Output
deployment.apps/backend scaled
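Manual scaling is fine for experiments, but Kubernetes can also scale for us. As a sketch, the command below attaches a Horizontal Pod Autoscaler to the backend; note that it assumes the metrics-server is installed and that the backend container declares CPU resource requests:

$ kubectl autoscale deployment backend --min=2 --max=5 --cpu-percent=70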
Monitoring and Logs
View logs:
$ kubectl logs deployment/frontend
Output
> frontend@1.0.0 start
> node server.js

Frontend server is running on port 3000
Connected to backend at http://backend
Troubleshoot issues:
$ kubectl describe pod <pod-name>
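When the frontend can't reach the backend, a quick check is to call the backend Service from inside a frontend pod. For example, assuming the frontend image ships with wget (if it doesn't, use a throwaway busybox pod as shown earlier):

$ kubectl exec -it <frontend-pod-name> -- wget -qO- http://backend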
Optional: Add Network Policies or Ingress
If you want to control traffic or expose services using a domain name, look into:
- Ingress controllers (like NGINX Ingress)
- Network policies for controlling communication between services
These are great for production setups.
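As a starting point, here's a minimal sketch of an Ingress that would route an example hostname to our frontend Service. It assumes an NGINX Ingress controller is already installed, and myapp.example.com is a placeholder domain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80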
Conclusion
We've successfully deployed a basic microservices app on Kubernetes. Each microservice runs independently, and Kubernetes takes care of deployment, scaling, and internal networking. This approach is powerful for building modern, cloud-native applications.
As we move forward, we can explore adding a database, securing the APIs, using Helm charts, and setting up CI/CD pipelines.