DevOps - Kubernetes



Kubernetes helps us automate how we deploy, scale, and manage containerized applications. This makes it an important tool for us in DevOps.

In this chapter, we will look at the basics of Kubernetes. We will talk about its architecture and how we can set up a cluster. We will also discuss how to deploy applications, manage configurations, integrate CI/CD pipelines, and monitor our clusters.

Understanding Kubernetes Architecture

We can think of Kubernetes architecture as a master-worker setup: a control plane manages a set of worker nodes. It has many parts that work together to manage applications in containers. The main parts are:

1. Control Plane

  • API Server − This is the front end of the Kubernetes control plane. It exposes the Kubernetes API and handles all REST operations.
  • Scheduler − This assigns workloads to nodes. It looks at the resources that are available and follows the scheduling rules we define.
  • Controller Manager − This runs the controllers that keep the cluster in its desired state. For example, the replication controller makes sure the right number of pod replicas is running.
  • etcd − This is a distributed key-value store. It keeps all the cluster data and is the main source of truth.

2. Worker Nodes

Worker Nodes run the applications. They have −

  • Kubelet − This is an agent that talks with the control plane. It manages the lifecycles of containers.
  • Kube Proxy − This takes care of network routing and load balancing for services.
  • Container Runtime − This is the software that runs containers. Examples include Docker and containerd.

3. Networking

Kubernetes networking helps pods and services talk to each other. It uses:

  • ClusterIP − This is the default type. It gives a service an internal IP that is reachable only from inside the cluster.
  • NodePort − This exposes a service on a fixed port on each node's IP (a minimal manifest is sketched after this list).
  • LoadBalancer − This works with cloud providers to provision an external load balancer for the service.
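
To make the NodePort type concrete, here is a minimal Service manifest sketch. The service name, port numbers, and the app: my-app selector are placeholder values chosen for illustration; the nodePort must fall inside the cluster's NodePort range (30000-32767 by default).

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app            # matches pods labeled app: my-app
  ports:
    - port: 80             # cluster-internal port of the service
      targetPort: 80       # port the container listens on
      nodePort: 30080      # fixed port opened on every node's IP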

We need to understand this architecture well. It helps us manage and deploy applications in Kubernetes effectively.
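
On a kubeadm-based cluster, most control plane components (API server, scheduler, controller manager, etcd) run as pods in the kube-system namespace, so we can inspect them with kubectl once the cluster is up −

kubectl get nodes -o wide          # lists control plane and worker nodes with versions and IPs
kubectl get pods -n kube-system    # shows kube-apiserver, kube-scheduler, etcd, kube-proxy, and more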

Setting Up a Kubernetes Cluster

We can set up a Kubernetes cluster in a few steps. We can do this on local machines, cloud services, or on our own servers. Here, we will show the steps to set up a Kubernetes cluster using kubeadm. This is a common tool for starting the cluster.

Following are the prerequisites −

  • Operating System − We need Ubuntu, CentOS, or other Linux systems.
  • Hardware − Each node should have at least 2 CPUs and 2GB of RAM.
  • Docker − We must install Docker and make sure it is running to manage container images (a quick prerequisite check is sketched after this list).
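
Before installing the Kubernetes packages, we can quickly confirm these prerequisites on each node. The following is a minimal check; the exact output depends on the machine −

nproc                                     # should report at least 2 CPUs
free -m                                   # should report at least 2048 MB of RAM
docker --version                          # confirms Docker is installed
sudo systemctl status docker --no-pager   # confirms the Docker service is running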

Steps to Set Up a Kubernetes Cluster

Install Kubernetes Components − First, we run these commands −

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Initialize the Cluster − On the master node, we run −

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Set Up Local Kubeconfig − We need to set up the kubeconfig for our user −

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install a Pod Network Add-on (like Calico) − We can install Calico with this command −

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
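
After applying this manifest, it is worth waiting until the Calico pods are running before joining workers. With this manifest-based install the pods normally appear in the kube-system namespace; the label below is the usual calico-node label and may differ between Calico versions −

kubectl get pods -n kube-system -l k8s-app=calico-node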

Join Worker Nodes − For each worker node, we use the token from the initialization step −

kubeadm join <master-ip>:6443 --token <token> \
   --discovery-token-ca-cert-hash sha256:<hash>
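
The token printed by kubeadm init expires after a while (24 hours by default). If it has expired, we can print a fresh join command on the master node −

sudo kubeadm token create --print-join-command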

Verification − To check the status of the nodes, we run −

kubectl get nodes

This command shows us the status of the master and worker nodes. This way, we can confirm that we have set up our Kubernetes cluster correctly.

Deploying Applications on Kubernetes

We can deploy applications on Kubernetes by defining how the application should look using Kubernetes manifests. We usually write these manifests in YAML. The most common resources we use for deployment are Pods, ReplicaSets, and Deployments. In this section, we will discuss the key steps for deployment −

Create a Deployment − We need to define what our application should look like in a Deployment manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: my-app-image:latest
        ports:
        - containerPort: 80

Apply the Manifest − We use kubectl to apply the deployment −

kubectl apply -f deployment.yaml
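
Optionally, we can watch the rollout until all three replicas from the manifest above are available −

kubectl rollout status deployment/my-app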

Expose the Application − We create a Service to let outside traffic reach our application.

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: my-app
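
As with the Deployment, we apply this manifest with kubectl. Here we assume it is saved as service.yaml; the EXTERNAL-IP column stays pending until the cloud provider finishes provisioning the load balancer −

kubectl apply -f service.yaml
kubectl get service my-app-service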

Verify Deployment − We check the status of our Pods and Services −

kubectl get pods
kubectl get services

By following these steps, we can successfully deploy and manage applications on a Kubernetes cluster.

Managing Configurations and Secrets

In Kubernetes, we see that configurations and secrets are very important. They help us manage app settings and keep sensitive data safe. Kubernetes gives us ConfigMaps to handle non-sensitive configuration data. It also gives us Secrets for sensitive information like passwords or API keys.

ConfigMaps

ConfigMaps hold configuration settings as key-value pairs. We can create ConfigMaps from files, folders, or direct values.

kubectl create configmap my-config --from-literal=key1=value1 \
   --from-file=my-config-file.conf

Usage in Pods − We can mount ConfigMaps as volumes or use them as environment variables.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    env:
    - name: CONFIG_KEY
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: key1
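
The same ConfigMap can also be mounted as a volume, so each key becomes a file inside the container. The following is a minimal sketch; the mount path /etc/config is an arbitrary choice for illustration −

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-with-volume
spec:
  containers:
  - name: my-container
    image: my-image
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config    # key1 appears as the file /etc/config/key1
  volumes:
  - name: config-volume
    configMap:
      name: my-config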

Secrets

Secrets hold sensitive data. Their values are stored Base64-encoded, which is an encoding, not encryption. We create Secrets like ConfigMaps, but with the kubectl create secret command.

kubectl create secret generic my-secret --from-literal=password=my-password

Usage in Pods − Secrets can also be mounted as volumes or used as environment variables.

apiVersion: v1
kind: Pod
metadata:
  name: my-secure-pod
spec:
  containers:
  - name: my-secure-container
    image: my-secure-image
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password
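
Because the stored values are only Base64-encoded, anyone who can read the Secret can decode them, so read access should be limited with RBAC. For debugging, we can decode a value with kubectl and the Linux base64 tool −

kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode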

Implementing CI/CD Pipelines with Kubernetes

We know Continuous Integration (CI) and Continuous Deployment (CD) pipelines are very important in DevOps. They help us automate how we deliver applications. Kubernetes makes CI/CD better by giving us a strong platform to deploy, manage, and scale our applications.

Key Components

  • Source Control − We use Git repositories to store our application code.
  • CI/CD Tools − The tools we use include Jenkins, GitLab CI, and ArgoCD.
  • Container Registry − We can use Docker Hub or private registries to keep our images.

CI/CD Process

Code Commit − We push code changes to the repository.

Build Stage − Our CI tools build Docker images.

docker build -t myapp:latest .

Test Stage − Automated tests run to check our code.

Push to Registry − We push successful builds to a container registry.

docker push myapp:latest

Deployment − We use Kubernetes manifests (YAML files) for deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
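
In the deployment stage, a CI/CD job typically either applies this manifest or just updates the image tag on the existing Deployment. The following is a minimal sketch of the second approach; the tag v1.2.3 is a placeholder that would come from the build −

kubectl set image deployment/myapp myapp=myapp:v1.2.3
kubectl rollout status deployment/myapp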

Tools for CI/CD on Kubernetes

  • Helm − It is a package manager for Kubernetes (a minimal usage sketch follows this list).
  • Tekton − This is a Kubernetes-native CI/CD framework.
  • ArgoCD − We use this GitOps continuous delivery tool.
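
As one example, if the application is packaged as a Helm chart, a pipeline can install or upgrade it in a single idempotent step. The following is a minimal sketch; the chart path ./myapp-chart and the image.tag value are assumptions about how the chart is laid out −

helm upgrade --install myapp ./myapp-chart --set image.tag=v1.2.3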

By putting all these parts together, we can make our software deployment process in Kubernetes automated, efficient, and reliable.

Monitoring and Logging in Kubernetes

We know that good monitoring and logging are very important for keeping Kubernetes clusters healthy and working well. These tools help us find problems, check performance, and fix issues quickly.

Monitoring Tools

  • Prometheus − It is an open-source tool for monitoring and alerting. It collects data using a pull model over HTTP. It allows us to work with multi-dimensional data and flexible queries.
  • Grafana − This is a tool for visualization that works with Prometheus. We can create dashboards to see our metrics clearly.
  • Kube-state-metrics − This tool exposes metrics about the state of Kubernetes objects like deployments and pods, giving us detailed information for monitoring.

Logging Solutions

  • Fluentd − It is a data collector that unifies logs from different sources, so we can gather logs from nodes and containers in one place.
  • Elasticsearch & Kibana − Elasticsearch stores the logs, and Kibana helps us visualize them. These tools are great for searching and analyzing logs.

Example Prometheus Configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-nodes'
        kubernetes_sd_configs:
          - role: node
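
This ConfigMap only holds the configuration; Prometheus itself still has to run in the cluster and mount it. The following is a minimal sketch of a Deployment that mounts the file where the official prom/prometheus image expects it (/etc/prometheus/prometheus.yml). The ServiceAccount and RBAC rules that node discovery needs, as well as persistent storage, are omitted for brevity −

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus        # official image, reads /etc/prometheus/prometheus.yml by default
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus  # prometheus.yml from the ConfigMap is mounted here
      volumes:
      - name: config
        configMap:
          name: prometheus-config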

Conclusion

In this chapter, we looked at the basics of Kubernetes. We talked about its structure, how to set up a cluster, deploy applications, manage configurations, integrate CI/CD, and monitor systems.
