Kubernetes Design and Components
Kubernetes, which is also known as k8s, is a platform for managing containers. It is a complex system focused on the complete life cycle of containers, including configuration, installation, health checking, troubleshooting, and scaling. With Kubernetes, it is possible to run microservices in a scalable, flexible, and reliable way. Let's assume you are a DevOps engineer at a fin-tech company, focusing on online banking for your customers.
You can configure and install the complete backend and frontend of an online bank application to Kubernetes in a secure and cloud-native way. With the Kubernetes controllers, you can manually or automatically scale your services up and down to match customer demand. Also, you can check the logs, perform health checks on each service, and even SSH into the containers of your applications.
In this section, we will focus on how Kubernetes is designed and how its components work in harmony.
Kubernetes clusters consist of one or more servers, and each server is assigned a set of logical roles. There are two essential roles assigned to the servers of a cluster: master and node. If a server has the master role, the control plane components of Kubernetes run on it. Control plane components are the primary set of services used to run the Kubernetes API, including REST operations, authentication, authorization, scheduling, and cloud operations. In recent versions of Kubernetes, four services run as the control plane:
- etcd: etcd is an open source key/value store, and it is the database for all Kubernetes resources.
- kube-apiserver: The API server is the component that runs the Kubernetes REST API. It is the most critical component, since all other parts of the control plane and all client tools interact with the cluster through it.
- kube-scheduler: A scheduler assigns workloads to nodes based on the workload requirements and node status.
- kube-controller-manager: kube-controller-manager is the control plane component that manages the core controllers of Kubernetes resources. Controllers are the primary life cycle managers of Kubernetes resources. For each Kubernetes resource, there are one or more controllers that work in the observe, decide, and act loop diagrammed in Figure 4.1. Controllers check the current status of the resources in the observe stage and then analyze and decide on the actions required to reach the desired state. In the act stage, they execute those actions and continue to observe the resources.
Figure 4.1: Controller loop in Kubernetes
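The observe, decide, and act loop can be sketched in a few lines of shell. This is only an illustration and not Kubernetes code: the "resource" here is a hypothetical replica count, and the current state is simply the number of files in a scratch directory.

```shell
#!/bin/sh
# Illustrative observe-decide-act loop (not real Kubernetes code).
# Desired state: 3 replicas. Current state: files in a scratch directory.
desired=3
statedir=$(mktemp -d)

for _ in 1 2 3 4 5; do
  # Observe: read the current state of the "resource".
  current=$(ls "$statedir" | wc -l)
  # Decide: is the current state below the desired state?
  if [ "$current" -lt "$desired" ]; then
    # Act: create one replica, then go back to observing.
    touch "$statedir/replica-$current"
  fi
done

final=$(ls "$statedir" | wc -l)
echo "replicas: $final"
rm -r "$statedir"
```

A real controller runs the same cycle continuously against the API server, reconciling objects such as Deployments instead of files.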
Servers with the node role are responsible for running the workload in Kubernetes. Therefore, there are two essential Kubernetes components required in every node:
- kubelet: kubelet is the management gateway of the control plane in the nodes. kubelet communicates with the API server and implements actions needed on the nodes. For instance, when a new workload is assigned to a node, kubelet creates the container by interacting with the container runtime, such as Docker.
- kube-proxy: Containers run on separate server nodes, but they interact with each other as if they were running in a unified networking setup. kube-proxy makes it possible for containers to communicate with each other even though they are running on different nodes.
The control plane and the roles, such as master and node, are logical groupings of components. For resilience, it is recommended to run a highly available control plane spread over multiple servers with the master role. In addition, servers with the node role are connected to the control plane to create a scalable and cloud-native environment. The relationship and interaction of the control plane and the master and node servers are presented in the following figure:
Figure 4.2: The control plane and the master and node servers in a Kubernetes cluster
In the following exercise, a Kubernetes cluster will be created locally, and Kubernetes components will be checked. Kubernetes clusters are sets of servers with master or worker nodes. On these nodes, both control plane components and user applications are running in a scalable and highly available way. With the help of local Kubernetes cluster tools, it is possible to create single-node clusters for development and testing. minikube is the officially supported and maintained local Kubernetes solution, and it will be used in the following exercise.
Note
You will use minikube in the following exercise as the official local Kubernetes solution, and it runs the Kubernetes components on a hypervisor. Hence, you must install a hypervisor such as VirtualBox, Parallels, VMware Fusion, HyperKit, or VMware. Refer to this link for more information:
https://kubernetes.io/docs/tasks/tools/install-minikube/#install-a-hypervisor
Exercise 10: Starting a Local Kubernetes Cluster
In this exercise, we will install minikube and use it to start a one-node Kubernetes cluster. When the cluster is up and running, it will be possible to check the master and node components.
To complete the exercise, we need to ensure the following steps are executed:
- Install minikube on the local system by running these commands in your Terminal:
# Linux
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
# MacOS
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin
These commands download the minikube binary, make it executable, and move it into the /usr/local/bin folder so that it can be run from the Terminal.
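The same download-then-install pattern can be tried safely with a stand-in script before touching the real binary; the minikube-stub name below is purely hypothetical:

```shell
#!/bin/sh
# Demonstrate the install pattern (create a binary, chmod +x, put its
# directory on the PATH) with a harmless stand-in instead of minikube.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho minikube-stub\n' > "$bindir/minikube"
chmod +x "$bindir/minikube"

# With the directory on the PATH, the command is callable by name.
out=$(PATH="$bindir:$PATH" minikube)
echo "$out"

rm -r "$bindir"
```

The real install differs only in using sudo to move the binary into /usr/local/bin, a directory that is already on the PATH for most systems.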
- Start the minikube cluster by running the following command:
minikube start
This command downloads the images and creates a single-node virtual machine. Following that, it configures the machine and waits until the Kubernetes control plane is up and running, as shown in the following figure:
Figure 4.3: Starting a new cluster in minikube
- Check the status of the Kubernetes cluster:
minikube status
As the output in the following figure indicates, the host system, kubelet, and apiserver are running:
Figure 4.4: Kubernetes cluster status
- Connect to the virtual machine of minikube by running the following command:
minikube ssh
You should see the output shown in the following figure:
Figure 4.5: minikube virtual machine
- Check for the four control-plane components with the following command:
pgrep -l etcd && pgrep -l kube-apiserver && pgrep -l kube-scheduler && pgrep -l controller
This command lists the processes matching the mentioned command names. There are a total of four lines, one for each control plane component, showing its process ID and name, as depicted in the following figure:
Figure 4.6: Control plane components
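If pgrep is new to you, its behavior is easy to try outside the cluster; the background sleep process below is just a stand-in for a control plane component:

```shell
#!/bin/sh
# pgrep -l prints the PID and command name of every matching process.
# Start a throwaway background process so there is something to match.
sleep 30 &
bgpid=$!

# Each output line has the form "<pid> <name>", for example "12345 sleep".
match=$(pgrep -l sleep | head -n 1)
echo "$match"

kill "$bgpid"
```

Note that pgrep matches against the process name, which is why the last pattern in the exercise is controller: the full name kube-controller-manager is longer than the name length some systems report.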
- Check for the node components with the following command:
pgrep -l kubelet && pgrep -l kube-proxy
This command lists two components running in the node role, with their process IDs, as shown in the following figure:
Figure 4.7: Node components
- Exit the terminal started in Step 4 with the following command:
exit
You should see the output shown in the following figure:
Figure 4.8: Exiting the minikube virtual machine
In this exercise, we installed a single-node Kubernetes cluster using minikube. In the next section, we will discuss using the official client tool of Kubernetes to connect to and operate the cluster from the previous exercise.