
Kubernetes


Kubernetes is a fantastic container orchestration tool that can manage Docker containers at large scale.

It is an open source project that enables software teams of all sizes, from small startups to Fortune 100 companies, to automate deploying, scaling, and managing applications on a group, or cluster, of server machines.

These applications can include everything from internal-facing web applications, such as a content management system, to marquee web properties like Gmail, to big data processing.

Kubernetes is made up of several distributed components, each of which plays a specific role in the execution of Docker containers. To understand the role of each Kubernetes component, we will follow the life cycle of a Docker container as it is created and managed by Kubernetes: that is, from the moment you execute the command to create the container to the point when it is actually executed on a machine that is part of your Kubernetes cluster.

Kubernetes Features


Multi-Host Container Scheduling

  • Done by the kube-scheduler
  • Assigns pods to nodes at runtime
  • Checks resources, quality of service, policies, and user specifications before scheduling

Scalability && Availability

  • The Kubernetes masters can be deployed in a highly available configuration
  • Multi-region deployments are available

Flexibility && Modularization

  • Plug and play architecture
  • Extend architecture when needed
  • Add-ons: network drivers, service discovery, container runtime, visualization and command

Kubernetes Architecture


Clustering technologies commonly use the architecture shown above. They define two types of nodes: masters and workers.

The master nodes are responsible for managing the cluster and for all operational tasks, according to the instructions received from the administrator.

The worker nodes are responsible for executing the actual workload, based on instructions received from the master(s).

Kubernetes basic building blocks


  • Nodes : A node serves as a worker machine in a K8s cluster. It can be a physical computer or a virtual machine, and it should meet the following requirements:
    • Each node must have a kubelet running
    • Container tooling, such as Docker
    • A kube-proxy process running
    • A process manager, such as supervisord, so containers can be restarted
  • Pods : A pod is the simplest unit that you can interact with. You can create, deploy, and delete pods, and a pod represents one running process on your cluster. A pod contains the following:
    • Your Docker application container(s)
    • Storage resources
    • A unique network IP
    • Options that govern how the container(s) should run
    • A state (Pending, Running, Succeeded, Failed, or Unknown)
  • Controllers
    • ReplicaSets : Ensure that a specified number of replicas of a pod are running at all times
    • Deployments : Provide declarative updates for pods and ReplicaSets
    • DaemonSets : Ensure that all nodes run a copy of a specific pod. As nodes are added to or removed from the cluster, a DaemonSet will add or remove the required pods
    • Jobs : A supervisor process for pods carrying out batch jobs
    • Services : Allow communication between one set of deployments and another
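The relationship between Deployments, ReplicaSets, and pods can be illustrated with a minimal manifest; the names and image tag below are placeholders, not from this document:

```yaml
# A sketch of a Deployment: the Deployment controller creates a ReplicaSet,
# which in turn keeps three replicas of the pod template running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # placeholder name
spec:
  replicas: 3                   # the ReplicaSet maintains this count
  selector:
    matchLabels:
      app: nginx
  template:                     # pod template stamped out for every replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25     # assumed image tag
          ports:
            - containerPort: 80
```

Submitting a manifest like this with `kubectl apply -f deployment.yaml` asks the control plane, not the worker nodes directly, to converge the cluster toward three running replicas.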

Kubernetes Controllers

  • Labels : Labels are key/value pairs that are attached to objects like pods, services, and deployments. Labels are how users identify meaningful attributes of objects
  • Selectors : Match objects by their labels; equality-based (=, !=) and set-based (in, notin, exists) operators are supported
  • Namespaces : Provide a scope for object names, and allow cluster resources to be divided between multiple users or teams
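The selector operators above can be sketched in manifest form; the label keys and values here are illustrative only:

```yaml
# Equality-based selector, as used by a Service:
selector:
  environment: production       # matches pods with environment = production

# Set-based selector, as used by a Deployment or ReplicaSet:
selector:
  matchExpressions:
    - {key: tier, operator: In, values: [frontend, backend]}
    - {key: environment, operator: NotIn, values: [dev]}
    - {key: release, operator: Exists}
```

The same syntax works on the command line, e.g. `kubectl get pods -l 'environment=production,tier in (frontend)'`.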

Understanding the difference between the master and worker nodes


To run Kubernetes, you will require Linux machines, which are called nodes in Kubernetes. A node could be a physical machine or a virtual machine on a cloud provider, such as an EC2 instance. There are two types of nodes in Kubernetes:

  • Master nodes

  • Worker nodes

Master nodes are responsible for maintaining the state of the Kubernetes cluster, whereas worker nodes are responsible for executing your Docker containers.

The good news is that you can also use Windows-based nodes to launch Windows-based containers in your Kubernetes cluster. You can mix Linux and Windows machines in the same cluster and it will work the same way, but you cannot launch a Windows container on a Linux worker node, or vice versa.
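One way to make sure a Windows container lands only on a Windows worker node is the well-known `kubernetes.io/os` node label; the pod name and image below are placeholders:

```yaml
# A sketch of a pod pinned to Windows nodes via a nodeSelector.
apiVersion: v1
kind: Pod
metadata:
  name: windows-sample                # placeholder name
spec:
  nodeSelector:
    kubernetes.io/os: windows         # schedule only onto Windows nodes
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2022  # assumed Windows base image
```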

Kubernetes is a collection of small projects. Each project is written in Go and forms part of the overall project that is Kubernetes.

To get a fully functional Kubernetes cluster, you need to set up each of these components by installing and configuring them separately, and then have them communicate with each other. When these two requirements are met, you can start running your containers using the Kubernetes orchestrator.

By spreading the different components across multiple machines, you gain two benefits:

  • You make your cluster highly available and fault-tolerant.

  • You make your cluster a lot more scalable. Components have their own lifecycles and can be scaled without impacting the others.

Each Kubernetes component has its own clearly defined responsibility. To understand how Kubernetes works overall, it is important to understand each component's responsibility and how it interacts with the other components.

Depending on its role, a component will have to be deployed on a master node or a worker node. While some components are responsible for maintaining the state of a whole cluster and operating the cluster itself, others are responsible for running our application containers by interacting with Docker daemons directly. Therefore, the components of Kubernetes can be grouped into two families:

info

***Components belonging to the Control Plane:*** These components are responsible for maintaining the state of the cluster. They should be installed on a master node. These are the components that will keep the list of containers executed by your Kubernetes cluster or the number of machines that are part of the cluster. As an administrator, when you interact with Kubernetes, you actually interact with the control plane components.

info

***Components belonging to the Worker Nodes:*** These components are responsible for interacting with the Docker daemon in order to launch containers according to the instructions they receive from the control plane components. Worker node components must be installed on a Linux machine running a Docker daemon. You are not supposed to interact with these components directly. It's possible to have hundreds or thousands of worker nodes in a Kubernetes cluster.

Kubernetes works in much the same way. You are not supposed to launch your Docker containers yourself, and therefore, you do not interact directly with the worker nodes. Instead, you send your instructions to the control plane, which delegates the actual container creation and maintenance to the worker nodes on your behalf. You never run a docker command directly:

A typical Kubernetes workflow. The client interacts with the master node/control plane components, which delegate container creation to a worker node. There is no communication between the client and the worker node

When using Kubernetes, you'll notice here and there the concepts of the control plane and the master node. They're almost the same: both expressions designate the Kubernetes components responsible for cluster administration and, by extension, the machines (or nodes) on which these components have been installed. In Kubernetes, we generally try to avoid talking about master nodes; instead, we talk about the control plane.

The reason is that saying "master node" supposes that the components allowing the management of the cluster are installed on the same machine and are strongly coupled to the machine running them. However, due to the distributed nature of Kubernetes, its master node components can actually be spread across multiple machines. This is quite tricky, but there are, in fact, two ways in which to set up the control plane components:

  • You run all of them on the same machine, and you have a master node.

  • You run them on different machines, and you no longer have a master node.

Kubelet && kube-proxy


The kubelet is the Kubernetes node agent that runs on each node.

  • Kubelet roles:
    • Communicates with the API server to see whether pods have been assigned to its node
    • Executes pod containers via a container engine
    • Mounts and runs pod volumes and secrets
    • Executes health checks to identify pod/node status

PodSpec: a YAML or JSON object that describes a pod.

The kubelet takes a set of PodSpecs that are provided by the kube-apiserver and ensures that the containers described in those PodSpecs are running and healthy.

The kubelet only manages containers that were created by the API server.
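A minimal pod manifest of the kind the kubelet acts on might look like this; the names, image, and probe values are illustrative, and the `livenessProbe` is what backs the health checks mentioned above:

```yaml
# A sketch of a single-container pod with a liveness health check.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod               # placeholder name
spec:
  containers:
    - name: web
      image: nginx:1.25         # assumed image
      livenessProbe:            # the kubelet restarts the container if this fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```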

  • Kube-proxy:
    • A process that runs on all worker nodes
    • Reflects services as defined on each node, and can do simple network stream or round-robin forwarding across a set of backends
    • Service cluster IPs and ports are currently found through Docker --link compatible environment variables specifying ports opened by the service proxy

kube-proxy can run in one of three modes:

  • Userspace mode
  • Iptables mode
  • IPVS mode
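The mode can be selected in the kube-proxy configuration file; a sketch with IPVS chosen (on Linux, iptables is the usual default, and userspace is a legacy mode):

```yaml
# Fragment of a kube-proxy configuration selecting the proxy mode.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # or "iptables"; "userspace" is legacy
```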

The kubectl command-line tool and YAML syntax


The Etcd datastore


The kubelet and worker node components


The kube-scheduler component


The kube-controller-manager component


How to make Kubernetes highly available