Kubernetes - Security Best Practices



Kubernetes is a powerful container orchestration platform, but with great power comes great responsibility. As we deploy applications in a Kubernetes cluster, securing our environment becomes a top priority. Security breaches can lead to data loss, service outages, and unauthorized access to sensitive workloads.

In this guide, we'll explore essential security best practices for Kubernetes, including authentication, authorization, network policies, pod security, runtime protection, and data encryption.

Securing Kubernetes API Access

The Kubernetes API is the control plane's gateway, making it a critical component to secure. We need to ensure that only authorized users and services can access it.

Enable Role-Based Access Control (RBAC)

RBAC helps define what users and services can operate within the cluster, preventing unauthorized access.

We'll start by creating the following role definition (role.yaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Create the role binding (rolebinding.yaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: User
  name: example-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Apply these configurations:

$ kubectl apply -f role.yaml

Output

role.rbac.authorization.k8s.io/pod-reader created

This creates a role that grants read-only access to pods within the default namespace.

$ kubectl apply -f rolebinding.yaml

Output

rolebinding.rbac.authorization.k8s.io/pod-reader-binding created

This binds the role to a specific user, allowing them to perform the defined actions.
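Roles can also be bound to service accounts, which is the usual pattern for in-cluster workloads. A minimal sketch granting the same role to a hypothetical application service account (the `my-app` name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-sa-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-app          # hypothetical service account for the workload
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Pods running under this service account can then list and watch pods in the default namespace, and nothing more.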

Enforcing Network Policies

Network policies help control traffic between pods, limiting exposure to potential threats.

Define a Network Policy

Define a network policy:

$ nano network-policy.yaml

Add the following contents:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-namespace
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: my-app

Apply it:

$ kubectl apply -f network-policy.yaml

Output

networkpolicy.networking.k8s.io/allow-from-namespace created

This creates a network policy that only allows traffic from a specific namespace to pods labeled as frontend.
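Allow-rules like this are most effective on top of a default-deny baseline, so that any traffic not explicitly permitted is blocked. A common companion policy that denies all ingress to every pod in the namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
```

With both policies applied, only traffic from the `project: my-app` namespace reaches the frontend pods; everything else is dropped.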

Using Kyverno for Security Policies in Kubernetes

Kyverno is a policy engine designed specifically for Kubernetes. It allows administrators to define, validate, and enforce security policies without requiring custom admission controllers or webhooks.

Installing Kyverno

To get started, we'll install Kyverno in the Kubernetes cluster by applying the official deployment manifest:

$ kubectl create -f https://github.com/kyverno/kyverno/releases/latest/download/install.yaml

Output

service/kyverno-cleanup-controller-metrics created
service/kyverno-reports-controller-metrics created
deployment.apps/kyverno-admission-controller created
deployment.apps/kyverno-background-controller created
deployment.apps/kyverno-cleanup-controller created
deployment.apps/kyverno-reports-controller created

Verify that the Kyverno pods are running:

$ kubectl get pods -n kyverno

Output

NAME                                            READY   STATUS    RESTARTS   AGE
kyverno-admission-controller-6f6b464fd-dvzn7    1/1     Running   0          2m
kyverno-background-controller-8857bcdc6-wfxl5   1/1     Running   0          2m
kyverno-cleanup-controller-698b56fb69-zmrfp     1/1     Running   0          2m
kyverno-reports-controller-76ccc7bd59-pw72d     1/1     Running   0          2m

Enforcing a Security Policy: Requiring Non-Root Users

By default, Kubernetes does not enforce user restrictions in containers. To ensure workloads run as a non-root user, apply the following Kyverno policy.

Create a Kyverno Policy

Create a file named kyverno-policy.yaml and add the following content:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-non-root-user
spec:
  validationFailureAction: Enforce
  rules:
  - name: require-run-as-nonroot
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Running as root is not allowed."
      pattern:
        spec:
          securityContext:
            runAsNonRoot: true

Apply the policy to the cluster:

$ kubectl apply -f kyverno-policy.yaml

Output

clusterpolicy.kyverno.io/require-non-root-user created

This policy prevents workloads from running as root.
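With the policy in Enforce mode, admission is a quick way to test it: a pod without `runAsNonRoot: true` is rejected with the message above, while one that satisfies the pattern is admitted. A minimal compliant manifest (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-test     # illustrative name
spec:
  securityContext:
    runAsNonRoot: true   # required by the Kyverno pattern
  containers:
  - name: app
    image: nginx
```

Note that the container image itself must also support running as a non-root user, or the kubelet will refuse to start it.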

Enable Seccomp and AppArmor

Seccomp and AppArmor add extra security layers by restricting system calls and enforcing application profiles.

Define a Seccomp Profile

Define a seccomp profile:

$ nano seccomp-pod.yaml

Add the following contents:

apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: secure-container
    image: nginx
    securityContext:
      appArmorProfile:
        type: RuntimeDefault

Apply it:

$ kubectl apply -f seccomp-pod.yaml

Output

pod/secure-pod created

This ensures the pod runs with a Seccomp and AppArmor profile for enhanced security.
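RuntimeDefault uses the container runtime's built-in profile. For tighter control, a custom seccomp profile can be placed under the kubelet's seccomp directory (/var/lib/kubelet/seccomp by default) and referenced with `type: Localhost`. A sketch of such a profile; the syscall allowlist below is purely illustrative and far too small for a real nginx workload:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "futex", "nanosleep"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

The pod would then reference it with `seccompProfile: {type: Localhost, localhostProfile: profiles/minimal.json}`, where the file name is hypothetical and the path is relative to the kubelet's seccomp directory.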

Runtime Security and Monitoring

To maintain a secure Kubernetes environment, we must continuously monitor and protect running workloads. Runtime security focuses on detecting anomalies, auditing system activities, and responding to potential threats in real time.

Define an Audit Policy

Kubernetes provides an audit logging mechanism to track API requests and system activities. To enable auditing, we must define an audit policy.

Using an editor, create the following file:

$ nano /etc/kubernetes/audit-policy.yaml

Then add the following contents:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  verbs: ["create", "delete", "update"]

This policy logs metadata for critical API operations such as creating, deleting, and updating resources.
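The policy language supports per-resource levels, and a common refinement (sketched below) is to keep Secrets at the Metadata level so request bodies containing secret values are never written to the audit log, while capturing full request and response bodies for other write operations:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- RequestReceived
rules:
# Never log secret payloads, only who touched them and when.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log full request and response bodies for other writes.
- level: RequestResponse
  verbs: ["create", "delete", "update"]
```

Rules are evaluated in order, so the Secrets rule must come before the broader one.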

Apply the Audit Policy to the API Server

To enable auditing, modify the Kubernetes API server configuration.

Edit the API server manifest file:

$ nano /etc/kubernetes/manifests/kube-apiserver.yaml

Add the following flags to the kube-apiserver command list:

- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes-audit.log

Save the file. Because kube-apiserver runs as a static pod, the kubelet automatically restarts it when the manifest changes.
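The static pod also needs access to the policy file and the log path on the host, so matching volumes must be wired into kube-apiserver.yaml. A sketch of that wiring (the volume names are illustrative):

```yaml
# Under the kube-apiserver container:
volumeMounts:
- mountPath: /etc/kubernetes/audit-policy.yaml
  name: audit-policy
  readOnly: true
- mountPath: /var/log/kubernetes-audit.log
  name: audit-log
# Under the pod spec:
volumes:
- name: audit-policy
  hostPath:
    path: /etc/kubernetes/audit-policy.yaml
    type: File
- name: audit-log
  hostPath:
    path: /var/log/kubernetes-audit.log
    type: FileOrCreate
```

Without these mounts the API server cannot read the policy or write the log, and it will fail to start.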

Deploy Falco for Runtime Threat Detection

Falco monitors container behaviors by analyzing system calls (syscalls) made by containers and comparing them against a set of predefined security rules. When it detects suspicious or unauthorized activity, it generates alerts and logs them.

Install Falco using Helm

Install Falco using the following commands:

$ helm repo add falcosecurity https://falcosecurity.github.io/charts

Output

"falcosecurity" has been added to your repositories

$ helm repo update

Output

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metrics-server" chart repository
...Successfully got an update from the "kubelet-csr-approver" chart repository
...Successfully got an update from the "kubernetes-dashboard" chart repository
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "rimusz" chart repository
Update Complete. ⎈Happy Helming!⎈
$ kubectl create namespace falco

Output

namespace/falco created
$ helm install falco falcosecurity/falco --namespace falco

Output

NAME: falco
LAST DEPLOYED: Mon Mar 24 13:40:33 2025
NAMESPACE: falco
STATUS: deployed
REVISION: 1
TEST SUITE: None

Verify that Falco is Running

$ kubectl get pods -n falco

Output

NAME          READY   STATUS    RESTARTS      AGE
falco-7skjt   1/2     Running   2 (13s ago)   47s
falco-qv8pv   1/2     Running   2 (13s ago)   47s

Checking Falco Logs for Security Events

Once Falco is deployed, we can check its logs to see detected threats:

$ kubectl logs -l app=falco -n falco

Output

Mon Mar 24 13:59:55 2025: Falco version: 0.40.0 (x86_64)
Mon Mar 24 13:59:55 2025: Falco initialized with configuration files:
Mon Mar 24 13:59:55 2025:    /etc/falco/config.d/engine-kind-falcoctl.yaml | schema validation: ok
Mon Mar 24 13:59:55 2025:    /etc/falco/falco.yaml | schema validation: ok
Mon Mar 24 13:59:55 2025: Loading rules from:
Mon Mar 24 13:59:55 2025:    /etc/falco/falco_rules.yaml | schema validation: ok

This shows real-time security alerts generated by Falco.
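Falco's default ruleset can be extended with custom rules, for example through the Helm chart's custom-rules support. A minimal sketch (the rule name and output format are illustrative) that alerts whenever an interactive shell is spawned inside a container:

```yaml
- rule: Shell Spawned in Container
  desc: Detect an interactive shell started inside a container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: "Shell spawned (user=%user.name container=%container.name command=%proc.cmdline)"
  priority: WARNING
```

Once loaded, any `kubectl exec -it ... -- bash` into a monitored container produces a WARNING event in the Falco logs.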

Securing Persistent Data

Encrypting persistent data ensures that sensitive information, such as Kubernetes Secrets, remains protected from unauthorized access. By enabling encryption at rest, we can safeguard data stored in etcd, the key-value store used by Kubernetes.

Creating an Encryption Configuration

We need to create an encryption configuration file that specifies how Kubernetes should encrypt secrets stored in etcd.

Create the Encryption Configuration File

Using an editor, create the following file:

$ nano /etc/kubernetes/encryption-config.yaml

Add the following contents:

kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: T6tyKgf145bt84EOcYqF5vhp6A1z0UfXBLBw/WWshm4=
  - identity: {}
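The secret value must be a base64-encoded, randomly generated 32-byte key; the value shown above is only a placeholder and should never be reused. One way to generate a fresh key:

```shell
# Generate a random 32-byte AES key and base64-encode it for the aescbc provider
head -c 32 /dev/urandom | base64
```

Paste the resulting string into the `secret` field of the configuration.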

Secure File Permissions

The encryption configuration file should not be world-readable. Set the correct permissions:

$ chmod 600 /etc/kubernetes/encryption-config.yaml
$ chown root:root /etc/kubernetes/encryption-config.yaml
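Creating the file is not enough on its own: the API server must be told to use it. Assuming a kubeadm-style static-pod manifest, add the flag below to /etc/kubernetes/manifests/kube-apiserver.yaml (plus a matching hostPath volume if the file's directory is not already mounted); the kubelet then restarts the API server with encryption enabled:

```yaml
# Under the kube-apiserver command list:
- --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
```

Only secrets written after this point are encrypted; existing secrets can be re-encrypted by rewriting them, for example with `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`.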

Verifying Encryption at Rest

To confirm that encryption is working, follow these steps:

Create a Secret

$ kubectl create secret generic my-secret --from-literal=password=SuperSecure123

Output

secret/my-secret created

Retrieve the Secret in Plaintext

$ kubectl get secret my-secret -o yaml

Output

apiVersion: v1
data:
  password: U3VwZXJTZWN1cmUxMjM=
kind: Secret
metadata:
  creationTimestamp: "2025-03-24T18:03:09Z"
  name: my-secret
  namespace: default
  resourceVersion: "1290"
  uid: b78e48d9-d41d-4e2e-9c1b-b29a4e4ea838
type: Opaque

This returns the secret with its value only base64-encoded; kubectl output looks the same whether or not encryption at rest is enabled. To actually verify encryption, read the secret directly from etcd (for example with `etcdctl get /registry/secrets/default/my-secret`): with the aescbc provider active, the stored value begins with the prefix `k8s:enc:aescbc:v1:` instead of plaintext.
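Keep in mind that base64 is an encoding, not encryption; anyone with read access to the Secret object can recover the value:

```shell
# base64 only obscures the value; decoding reveals the plaintext
echo 'U3VwZXJTZWN1cmUxMjM=' | base64 -d
# prints: SuperSecure123
```

This is why RBAC restrictions on reading Secrets matter just as much as encryption at rest.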

Conclusion

By following these security best practices, we can ensure that our Kubernetes cluster is protected against unauthorized access, network threats, and runtime attacks. Kubernetes security is an ongoing process, and we must continuously monitor, audit, and update our configurations to stay ahead of emerging threats.

Implementing strong authentication, RBAC, network policies, pod security measures, and runtime monitoring fortifies our clusters against vulnerabilities and ensures a robust security setup.