Introduction
The acronym "K8s" comes from the word Kubernetes itself: there are eight letters between the "K" and the "s", so the word is shortened to K8s.
Kubernetes is an open-source container orchestration system designed to automate the deployment, scaling, and management of containerized applications. It was originally developed by Google and donated to the Cloud Native Computing Foundation (CNCF) in 2015.
Kubernetes is designed to manage and coordinate the work of individual containers running within a cluster. A cluster is a group of individual computers (also known as nodes) that work together to provide a platform for running and managing containerized applications. Kubernetes provides a layer of abstraction between the application and the underlying infrastructure, allowing developers to focus on writing code without worrying about the underlying infrastructure.
Kubernetes provides a range of features, including automated deployment and scaling, service discovery and load balancing, and storage orchestration. It also provides a platform for deploying and managing applications across multiple environments, including on-premises, public cloud, and hybrid cloud environments.
Kubernetes has become increasingly popular in recent years, as more organizations adopt containerization as a way to streamline their application deployment processes. It is also used by many cloud providers, including Google Cloud, Microsoft Azure, and Amazon Web Services, as a way to provide managed Kubernetes services to their customers.
Docker vs K8s
K8s offers four major advantages over Docker:
Auto scaling
Clustered by nature
Auto healing
Enterprise-level standards: advanced load balancing, security, advanced networking, etc.
Kubernetes architecture
There are hundreds of videos out there about Kubernetes architecture, and everybody talks about the control plane and the data plane; I'll cover the same ground. If you read the documentation, there are many components in the control plane and the data plane, and if I explained them directly you might never understand them as a beginner. Every beginner needs real-world comparisons, so I'm going to compare each component to Docker so that you can get a clear idea of what each one is for. In Docker the simplest unit is a container, and in K8s the simplest unit is a Pod. I'll compare what happens when a container is created in Docker with what happens when a Pod is created in K8s; through that comparison you will automatically understand the architecture of K8s, and you will know why K8s needs this many components.
In Docker, if you build a container but there is no container runtime available, nothing happens: that container won't run. Now let's move to K8s.
You create a master node and a worker node. In production there are usually multiple masters and multiple workers, but for now assume one of each. In Kubernetes you never talk to the workers directly; every request goes through the master, via something called the control plane. When a user deploys a pod (the K8s equivalent of a Docker container), the pod gets deployed on a specific worker node. Each worker runs a component called the kubelet, which is responsible for running your pod and keeping it running; if the pod is not running on the worker node, the kubelet will report that back. The pod still needs a container runtime, but in K8s Docker is not mandatory: you can use containerd, CRI-O, dockershim, or any other runtime that implements the Kubernetes Container Runtime Interface (CRI).
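To make this concrete, here is a minimal Pod manifest. The names here (nginx-pod, the nginx image) are placeholders for illustration: when you submit this manifest, the request goes to the control plane, the scheduler picks a worker node, and the kubelet on that node asks the container runtime to start the container.

```yaml
# A minimal Pod: the Kubernetes equivalent of running a single Docker container.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # placeholder name
spec:
  containers:
    - name: nginx
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f pod.yaml` sends the spec through the control plane rather than directly to a worker node.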
In Docker, there is a default networking mode called bridge networking. Networking is mandatory for running your pod, so K8s has a component called kube-proxy. Kube-proxy provides networking for every pod that you create: each pod has to be allocated an IP address and provided with default load balancing. These are the components that make up the worker node, or data plane; there are three of them: the container runtime, the kubelet, and kube-proxy.
Kubernetes components
To explain the architecture, the K8s components are divided into two parts: the control plane and the data plane. The data plane has three components: the kubelet, kube-proxy, and the container runtime.
kubelet:
Kubelet is an agent that runs on each node in a Kubernetes cluster. Its primary responsibility is to ensure that containers are running and healthy on their assigned nodes.
Kubelet works in conjunction with the Kubernetes API server and the container runtime (such as containerd or CRI-O) to manage the containers running on a node. It receives pod specifications from the Kubernetes API server and ensures that the containers specified in the pod specification are running and healthy. It also monitors the state of the containers and reports any changes in the container state back to the API server.
In addition to managing containers, Kubelet also handles node-level tasks such as managing local storage, managing the network configuration, and managing the node's overall health. It communicates with other Kubernetes components, such as the Kubernetes scheduler, to ensure that pods are scheduled onto the appropriate nodes.
Overall, Kubelet is a critical component of a Kubernetes cluster, responsible for managing and maintaining the health of containers running on each node in the cluster.
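Kubelet's health checks are configured per container through probes in the pod spec. A minimal sketch, assuming an HTTP server listening on port 80 inside the container (all names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app       # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25  # illustrative image
      livenessProbe:     # kubelet restarts the container if this check fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```

The kubelet itself runs these probes on the node and reports the results back to the API server.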
Kube-proxy:
Kube-proxy is a network proxy that runs on each node in a Kubernetes cluster. Its primary responsibility is to provide network connectivity to services that are running inside the cluster.
Kube-proxy works by creating virtual IP addresses (also known as ClusterIPs) for each service running in the cluster. These virtual IP addresses are used to route traffic to the appropriate endpoints (i.e., the pods running the containers that make up the service).
Kube-proxy provides three different modes of operation: userspace, iptables, and IPVS. In userspace mode (now legacy), kube-proxy runs as a userspace process and forwards traffic itself. In iptables mode, kube-proxy programs Linux iptables rules to forward traffic. In IPVS mode, kube-proxy uses the Linux IP Virtual Server (IPVS) to perform load balancing.
Kube-proxy also underpins service discovery within the cluster. When a new service is created, the cluster's DNS add-on (typically CoreDNS) publishes a DNS name for it, allowing other pods in the cluster to discover the service by name; kube-proxy then routes traffic sent to the service's virtual IP to the appropriate backend pods.
Overall, Kube-proxy plays a critical role in ensuring that services within a Kubernetes cluster can communicate with each other and that traffic is routed to the appropriate endpoints.
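A ClusterIP Service is a good way to see what kube-proxy actually programs into iptables/IPVS rules. This is a sketch; the service name, label selector, and port numbers are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service       # reachable at my-service.<namespace>.svc via cluster DNS
spec:
  type: ClusterIP        # virtual IP whose traffic kube-proxy routes
  selector:
    app: my-app          # matches pods labeled app: my-app
  ports:
    - port: 80           # the Service's port
      targetPort: 8080   # the port the container actually listens on
```

Traffic sent to the Service's virtual IP on port 80 is load-balanced by kube-proxy across all healthy pods matching the selector.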
container runtime:
A container runtime is software that runs on a worker node in a Kubernetes cluster and is responsible for managing containers. The container runtime is responsible for starting and stopping containers, as well as providing isolation between containers running on the same node.
Kubernetes supports multiple container runtimes, including containerd, CRI-O, and others (Docker was supported via the dockershim adapter until its removal). Each container runtime provides a set of features and capabilities for managing containers.
When a pod is scheduled onto a node in the cluster, Kubernetes communicates with the container runtime to start the containers specified in the pod specification. The container runtime creates the container and sets up the necessary namespaces, cgroups, and other settings to provide isolation between containers.
The container runtime also manages the storage for each container, including providing access to volumes and managing the lifecycle of the container's file system.
In addition, the container runtime sets up networking for the containers, typically by invoking a Container Network Interface (CNI) plugin. The plugin creates a virtual network interface for each pod and configures the routing and network settings necessary for the containers to communicate with each other and with external resources.
Overall, the container runtime is a critical component of a Kubernetes cluster, responsible for managing and maintaining the containers running on each node in the cluster.
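Because the runtime is pluggable, a cluster can even offer more than one runtime side by side through a RuntimeClass object. A sketch, assuming a node already configured with a runtime handler named runsc (the gVisor runtime); the handler name must match what is configured on the node:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor           # the name pods refer to
handler: runsc           # must match a runtime handler configured on the node
```

A pod opts in by setting `runtimeClassName: gvisor` in its spec; pods without that field use the node's default runtime.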
In the control plane, K8s has components like the API server, scheduler, etcd, controller manager, and cloud controller manager. These components run on the master node in the Kubernetes architecture.
API server:
The API server is the primary control plane component of a Kubernetes cluster that runs on the master node. Its primary role is to expose the Kubernetes API, which is used by cluster administrators, developers, and other Kubernetes components to manage the cluster and its resources.
The API server provides a RESTful interface for interacting with the Kubernetes API, allowing users to create, update, and delete Kubernetes resources such as pods, services, and deployments. It also provides authentication and authorization mechanisms to control access to the API.
In addition to exposing the Kubernetes API, the API server performs a number of important functions within the cluster, including:
Serving as the central hub for communication between other control plane components, such as the Kubernetes scheduler, the controller manager, and etcd.
Validating incoming requests to ensure that they meet the Kubernetes API's schema and other validation rules.
Storing the current state of the cluster and its resources in etcd, a distributed key-value store. This state information is used by other control plane components and worker nodes to ensure that the cluster is always in the desired state.
Managing the admission control process, which determines whether incoming requests should be allowed or rejected based on factors such as resource limits, security policies, and other constraints.
Overall, the API server is a critical component of a Kubernetes cluster, responsible for providing access to the Kubernetes API and coordinating communication between other control plane components.
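Authorization at the API server is commonly expressed with RBAC objects. A minimal sketch; the role name and namespace are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader       # illustrative name
rules:
  - apiGroups: [""]      # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only access to pods
```

A RoleBinding then grants this Role to a user or service account; the API server enforces it on every request.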
Scheduler:
The scheduler is a Kubernetes control plane component that runs on the master node. Its primary role is to schedule pods onto worker nodes in the cluster based on resource availability, constraints, and other policies.
When a new pod is created, the scheduler is responsible for selecting an appropriate node on which to run the pod. The scheduler considers factors such as available resources (CPU, memory, storage), node affinity and anti-affinity rules, pod affinity and anti-affinity rules, and other scheduling constraints to make this decision.
The scheduler uses a pluggable architecture, which allows it to support different scheduling algorithms and policies. Users can configure and customize the scheduler by specifying custom scheduling policies and priorities.
Once the scheduler has selected a node for a pod, it updates the Kubernetes API server with the scheduling decision. The kubelet running on the chosen node then starts the container(s) for the pod.
The scheduler continually monitors the cluster and automatically rebalances pods as needed. For example, if a worker node fails or becomes unavailable, the scheduler can reschedule the affected pods onto other nodes in the cluster to ensure that they remain available.
Overall, the scheduler is a critical component of a Kubernetes cluster, responsible for ensuring that pods are scheduled onto worker nodes in a way that optimizes resource utilization and meets various scheduling constraints and policies.
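Scheduling constraints are declared directly in the pod spec. A sketch using a simple nodeSelector; the disktype label is an assumption and must actually exist on some node for the pod to be scheduled:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-pod # placeholder name
spec:
  nodeSelector:
    disktype: ssd        # scheduler only considers nodes carrying this label
  containers:
    - name: app
      image: nginx:1.25  # illustrative image
```

More expressive rules (node affinity, pod affinity/anti-affinity) use the `affinity` field but follow the same idea: the user declares constraints, and the scheduler finds a node that satisfies them.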
etcd:
etcd is a distributed key-value store that is used by Kubernetes to store the cluster's configuration data, state information, and other important data. It is a highly available, fault-tolerant system that is designed to provide reliable storage for the cluster's critical data.
etcd is a separate process that runs on the master nodes of a Kubernetes cluster. It uses a distributed consensus algorithm to maintain a consistent view of the cluster's state across all nodes in the cluster.
When a component of the Kubernetes control plane (such as the API server or scheduler) needs to access or update the cluster's state information, it communicates with etcd to read or modify the relevant data. This ensures that all control plane components are working with the same, up-to-date view of the cluster's state.
etcd also provides a watch API that allows components to receive notifications when specific parts of the cluster's state change. This is used by various Kubernetes components to detect changes in the cluster's state and take appropriate actions.
Overall, etcd is a critical component of a Kubernetes cluster, providing reliable and consistent storage for the cluster's critical data. Its distributed, fault-tolerant architecture ensures that the cluster's state information is always available and up-to-date, even in the face of node failures or other issues.
Controller manager:
The controller manager (kube-controller-manager) is a Kubernetes control plane component that runs on the master node. Its primary role is to ensure that the desired state of the cluster is maintained and to perform cluster-level functions that help achieve this.
The controller manager is composed of several sub-components (controllers), each responsible for a specific aspect of cluster management:
Node Controller: Responsible for monitoring the status of nodes in the cluster and performing actions such as marking nodes as unschedulable if they become unresponsive.
ReplicaSet Controller: Responsible for managing the lifecycle of ReplicaSets, ensuring that the desired number of pod replicas is running at all times.
Endpoints Controller: Responsible for populating the Endpoints resource, which provides information about the network endpoints (IP addresses and ports) of services in the cluster.
Service Account & Token Controllers: Responsible for creating default accounts and access tokens for pods, and rotating those tokens periodically to ensure security.
The controller manager also includes several other controllers that are responsible for managing specific Kubernetes resources, such as Jobs, StatefulSets, and DaemonSets.
Overall, the controller manager is a critical component of a Kubernetes cluster, responsible for ensuring that the desired state of the cluster is maintained and for performing various cluster-level functions that help achieve this. Its sub-components work together to monitor and manage different aspects of the cluster, ensuring that it is always running smoothly and in the desired state.
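The "desired state" loop of the ReplicaSet controller is what you invoke when you create a Deployment. A sketch with illustrative names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # placeholder name
spec:
  replicas: 3            # the controller keeps exactly 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # illustrative image
```

Delete one of the three pods and the controller immediately creates a replacement: this is the auto healing mentioned earlier.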
Cloud Controller Manager (CCM):
The Cloud Controller Manager (CCM) is a Kubernetes control plane component that runs on the master node and is responsible for managing interactions between the cluster and the cloud provider's APIs.
The CCM's primary role is to provide a uniform interface for Kubernetes to interact with different cloud providers, abstracting away the details of the underlying cloud infrastructure. It translates Kubernetes API calls into cloud-specific API calls, such as creating a load balancer or provisioning a storage volume.
The CCM includes cloud-specific controllers that are responsible for managing resources that are specific to a particular cloud provider. For example, it might include a controller for managing load balancers in a cloud provider's load balancer service.
The CCM also manages the lifecycle of cloud resources that are created by Kubernetes, such as persistent volumes and load balancers. When a resource is no longer needed, the CCM deletes it from the cloud provider's API.
In addition, the CCM can also provide features that are specific to a particular cloud provider. For example, it might provide an integration with a cloud provider's managed Kubernetes service or enable Kubernetes to use a cloud provider's native identity and access management (IAM) system.
Overall, the CCM is an important component of a Kubernetes cluster that enables Kubernetes to work seamlessly with different cloud providers. By abstracting away the details of the underlying cloud infrastructure and providing a uniform interface, the CCM makes it easier to deploy and manage Kubernetes clusters on various cloud platforms. These are the components of K8s.
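On a cloud provider, creating a Service of type LoadBalancer is what triggers the CCM to call the provider's API and provision a real load balancer. A sketch; the name, labels, and ports are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: public-web       # illustrative name
spec:
  type: LoadBalancer     # CCM provisions a cloud load balancer for this Service
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

When this Service is deleted, the CCM deprovisions the cloud load balancer through the same provider API, illustrating how it manages the full lifecycle of cloud resources on behalf of the cluster.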