Kubernetes Architecture (2024)

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and operation of containerized applications, such as the elastic web tier of a cloud application. It can also be used for large-scale web hosting or to move data centre workloads out to public cloud service providers.

A working Kubernetes deployment is called a cluster. A Kubernetes cluster is divided into two parts:

  1. Control plane
  2. Compute machines, or nodes
Each node is a separate Linux environment, and it can be either physical or virtual. Each node runs pods, which are made up of one or more containers.

Let's take a closer look at all of the Kubernetes components that make this possible.

  1. Control plane
    Let's start with the control plane, which is at the heart of our Kubernetes cluster. The Kubernetes components that control the cluster, as well as data about the cluster's state and configuration, are all found here. These core components are responsible for ensuring that your containers are running in sufficient numbers and with the resources they require.

    Your compute machines are constantly in contact with the control plane, which ensures that the cluster runs in the way you have configured it.
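
    As a quick way to see these components in practice, the sketch below uses the official Kubernetes Python client (the kubernetes package on PyPI) to list what is running in the kube-system namespace, where many distributions (for example kubeadm-based clusters) run the control plane components as pods. It assumes a reachable cluster and a local kubeconfig; on managed offerings the control plane may be hidden from you.

      from kubernetes import client, config

      config.load_kube_config()      # read credentials the same way kubectl does
      v1 = client.CoreV1Api()

      # On kubeadm-style clusters, kube-apiserver, kube-scheduler,
      # kube-controller-manager and etcd show up here as static pods.
      for pod in v1.list_namespaced_pod(namespace="kube-system").items:
          print(pod.metadata.name, pod.status.phase)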

  2. Kube-apiserver
    The kube-apiserver is the front end of the Kubernetes control plane and handles internal and external requests. The API server checks whether a request is valid and, if it is, processes it. To interact with the Kubernetes cluster, you can access the API through REST calls, the kubectl command-line interface, or other command-line tools such as kubeadm.
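
    To make the "front end" role concrete, here is a minimal sketch using the official Python client: every call it makes is an authenticated HTTPS request to the kube-apiserver, much as kubectl issues REST calls under the hood. The "default" namespace is only an example, and a local kubeconfig is assumed.

      from kubernetes import client, config

      config.load_kube_config()          # same credentials kubectl would use
      v1 = client.CoreV1Api()            # thin wrapper around the REST API

      # Roughly equivalent to: GET /api/v1/namespaces/default/pods
      for pod in v1.list_namespaced_pod(namespace="default").items:
          print(pod.metadata.name, pod.status.phase)
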
  3. Kubernetes Architecture Diagram
    [Figure: the overall Kubernetes architecture, showing the control plane components and the compute nodes described below.]
  4. Kube-scheduler
    The scheduler considers a pod's resource requirements, such as CPU and memory, together with the overall health and capacity of the cluster, and then assigns the pod to an appropriate compute node.
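
    The sketch below (official Python client; the pod name, namespace, and nginx image are placeholder assumptions) creates a pod with explicit CPU and memory requests, which is the information the scheduler weighs when picking a node; once the pod is scheduled, spec.nodeName records the node that was chosen.

      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      pod = client.V1Pod(
          metadata=client.V1ObjectMeta(name="sched-demo"),
          spec=client.V1PodSpec(containers=[client.V1Container(
              name="app",
              image="nginx:1.25",
              resources=client.V1ResourceRequirements(
                  requests={"cpu": "250m", "memory": "128Mi"},  # scheduler fits these onto a node
                  limits={"cpu": "500m", "memory": "256Mi"},
              ),
          )]),
      )
      v1.create_namespaced_pod(namespace="default", body=pod)

      # Scheduling is asynchronous; after a moment, node_name is filled in.
      print(v1.read_namespaced_pod("sched-demo", "default").spec.node_name)
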
  5. Kube-controller-manager
    The controller-manager is in charge of keeping the cluster's actual state in line with its desired state. More precisely, the controller-manager runs multiple controllers that respond to events (e.g., a node going down).

    The Kubernetes controller-manager combines numerous controller functions into a single process. One controller works with the scheduler to ensure that the correct number of pods is running. Another controller notices and responds if a pod goes down. A further controller connects services to pods, ensuring that requests are directed to the appropriate endpoints. There are also controllers for creating accounts and tokens for API access.
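
    As an illustration of that reconciliation loop, the hedged sketch below (official Python client; names and the nginx image are placeholders) creates a Deployment asking for three replicas. The responsible controller keeps recreating pods until the observed count matches the desired count, so deleting one of its pods simply triggers a replacement.

      from kubernetes import client, config

      config.load_kube_config()
      apps = client.AppsV1Api()

      deployment = client.V1Deployment(
          metadata=client.V1ObjectMeta(name="web"),
          spec=client.V1DeploymentSpec(
              replicas=3,                                   # desired state
              selector=client.V1LabelSelector(match_labels={"app": "web"}),
              template=client.V1PodTemplateSpec(
                  metadata=client.V1ObjectMeta(labels={"app": "web"}),
                  spec=client.V1PodSpec(containers=[
                      client.V1Container(name="web", image="nginx:1.25")]),
              ),
          ),
      )
      apps.create_namespaced_deployment(namespace="default", body=deployment)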

  6. etcd
    Kubernetes uses etcd, a distributed key-value store, to hold information about the cluster's overall state. Additionally, when nodes are restored, they can fall back on the global configuration data stored there to set themselves up.
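
    Purely to illustrate the key-value model, the sketch below uses the third-party python-etcd3 client against a standalone test instance of etcd; the endpoint, key, and value are made-up examples. In a real cluster only the API server should talk to etcd, and production etcd additionally requires TLS client certificates.

      import etcd3   # third-party client: pip install etcd3

      etcd = etcd3.client(host="127.0.0.1", port=2379)     # standalone, non-TLS test instance
      etcd.put("/demo/cluster/colour", "blue")             # store a key-value pair
      value, metadata = etcd.get("/demo/cluster/colour")
      print(value.decode())                                # -> "blue"
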
  7. Nodes
    A Kubernetes cluster must have at least one compute node, but most clusters have several. Pods are scheduled and orchestrated to run on nodes. To expand the capacity of the cluster, you add more nodes.
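
    The short sketch below (official Python client, local kubeconfig assumed) lists the cluster's nodes along with the CPU and memory capacity each one contributes, which is the capacity you grow by adding nodes.

      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      for node in v1.list_node().items:
          cap = node.status.capacity    # e.g. {'cpu': '4', 'memory': '16374584Ki', ...}
          print(node.metadata.name, cap.get("cpu"), cap.get("memory"))
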
  8. Pod
    In the Kubernetes object model, a pod is the smallest and most basic unit. It represents a single application instance. Each pod consists of a container or a set of tightly related containers, as well as options that control how the containers operate. To run stateful applications, pods can be attached to persistent storage.
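
    As a sketch of a pod bundling tightly related containers (the names, images, and the sidecar's sleep command are placeholder assumptions; official Python client), the example below creates a pod with an nginx container plus a small sidecar. Both containers share the pod's network namespace and lifecycle.

      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      pod = client.V1Pod(
          metadata=client.V1ObjectMeta(name="pod-demo", labels={"app": "demo"}),
          spec=client.V1PodSpec(containers=[
              client.V1Container(name="web", image="nginx:1.25"),
              client.V1Container(name="sidecar", image="busybox:1.36",
                                 command=["sh", "-c", "sleep 3600"]),
          ]),
      )
      v1.create_namespaced_pod(namespace="default", body=pod)
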
  9. Container runtime engine
    Each compute node has a container runtime engine that runs the containers. Kubernetes supports any runtime that implements its Container Runtime Interface (CRI), such as containerd and CRI-O; direct support for Docker Engine (the dockershim) was removed in Kubernetes 1.24.
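
    You can see which runtime each node is using from the node's status; the sketch below (official Python client) prints it, and on most recent clusters you would expect something like "containerd://1.7.x" or "cri-o://1.28.x".

      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      for node in v1.list_node().items:
          # e.g. "containerd://1.7.13" or "cri-o://1.28.2"
          print(node.metadata.name, node.status.node_info.container_runtime_version)
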
  10. kubelet
    A kubelet, a small application that communicates with the control plane, runs on each compute node. The kubelet ensures that the containers described in a pod's specification are actually running.

    The kubelet monitors a pod's state to verify that all of its containers are running, and it sends a heartbeat to the control plane every few seconds. If the control plane's node controller stops receiving those heartbeats, it marks the node as unhealthy.
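
    The heartbeat is visible in each node's conditions; the sketch below (official Python client, local kubeconfig assumed) prints the Ready condition and the time of the kubelet's last status report.

      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      for node in v1.list_node().items:
          for cond in node.status.conditions:
              if cond.type == "Ready":
                  # lastHeartbeatTime is when the kubelet last reported this condition
                  print(node.metadata.name, cond.status, cond.last_heartbeat_time)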

  11. Kube proxy
    Kube-proxy runs on each compute node and maintains the network rules that make Services reachable: it routes traffic addressed to a Service to the appropriate backend pods and containers.
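
    Kube-proxy's routing is driven by Service objects. The hedged sketch below (official Python client; the name, labels, and ports are placeholders) creates a Service whose selector picks out pods labelled app=web, after which kube-proxy on every node forwards traffic for the Service's port 80 to port 8080 on one of those pods.

      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      service = client.V1Service(
          metadata=client.V1ObjectMeta(name="web"),
          spec=client.V1ServiceSpec(
              selector={"app": "web"},                          # pods backing the Service
              ports=[client.V1ServicePort(port=80, target_port=8080)],
          ),
      )
      v1.create_namespaced_service(namespace="default", body=service)
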
  12. Persistent storage
    In addition to the containers that run an application, Kubernetes can manage the application data associated with a cluster. Users can request storage resources through Kubernetes without needing to understand the underlying storage architecture. Persistent volumes are cluster-specific rather than pod-specific, so they can outlast the life of a pod.
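
    The sketch below (official Python client; the claim name, size, and access mode are placeholder assumptions, and a default StorageClass is assumed to exist) requests 1Gi of storage through a PersistentVolumeClaim without saying anything about the storage backend. Note that very recent client versions may expose the resources field as V1VolumeResourceRequirements instead.

      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      pvc = client.V1PersistentVolumeClaim(
          metadata=client.V1ObjectMeta(name="app-data"),
          spec=client.V1PersistentVolumeClaimSpec(
              access_modes=["ReadWriteOnce"],
              resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
          ),
      )
      v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
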
  13. Container registry
    Kubernetes uses container images that are stored in a container registry. This can be a registry that the user configures and operates, or a third-party registry.
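
    A pod references registry images by name. In the sketch below (official Python client) the registry host, image tag, and the "regcred" pull secret are all made-up examples of pulling from a private registry that requires credentials; the secret would have to be created beforehand.

      from kubernetes import client, config

      config.load_kube_config()
      v1 = client.CoreV1Api()

      pod = client.V1Pod(
          metadata=client.V1ObjectMeta(name="registry-demo"),
          spec=client.V1PodSpec(
              containers=[client.V1Container(
                  name="app",
                  image="registry.example.com/team/app:1.4")],  # private registry image
              image_pull_secrets=[client.V1LocalObjectReference(name="regcred")],
          ),
      )
      v1.create_namespaced_pod(namespace="default", body=pod)
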
  14. Underlying infrastructure
    It's entirely up to you where you run Kubernetes. One of Kubernetes' main advantages is that it can run on a variety of infrastructure: on virtual machines, with public cloud providers, in private clouds, or across hybrid cloud environments, depending on your requirements.