
Getting started with Kubernetes


Table of contents

01  What is Kubernetes?

02  Main Components & their Role

03  Kubernetes Resources


01  What is Kubernetes?


Kubernetes is an application orchestrator

An orchestrator is a system that deploys and manages applications, dynamically responding to changes such as failures and shifts in demand.

For example, Kubernetes can:

• Deploy your application

• Scale it up and down dynamically based on demand

• Self-heal it when things break

• Perform zero-downtime rolling updates and rollbacks

• Lots more…
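A rough sketch of the scaling and rolling-update points above, assuming a Deployment named web whose container is also called web already exists (the names and image tag are illustrative):

    # Scale the app up or down on demand
    kubectl scale deployment web --replicas=5

    # Roll out a new image with zero downtime, then roll back if needed
    kubectl set image deployment/web web=nginx:1.25
    kubectl rollout status deployment/web
    kubectl rollout undo deployment/web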


What’s with the Name?

The name Kubernetes (koo-ber-net-eez) comes from the Greek word for helmsman – the person who steers a seafaring ship. This theme is reflected in the logo, which is a ship’s wheel (helm). The name is often shortened to “K8s” (pronounced “kates”), where the 8 stands for the eight letters between the “K” and the “s”.


Kubernetes & Docker

Docker is the low-level technology that starts and stops the containerised applications. Kubernetes is the higher-level technology that looks after the bigger picture, such as deciding which nodes to run containers on, deciding when to scale up or down, and executing updates. Kubernetes defines the Container Runtime Interface (CRI), so it can work not just with Docker but with other container runtimes as well, such as containerd and CRI-O.


02  Main Components & their Role


Control Plane

A Kubernetes control plane node is a server running the collection of system services that make up the control plane of the cluster. These nodes are sometimes called Masters, Heads, or Head nodes.

  • For test environments a single control plane node is fine, but production clusters should run 3 or 5 to achieve HA

  • Don’t run user applications on master nodes. This frees them up to concentrate entirely on managing the cluster.
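As a minimal sketch, one way this is enforced on kubeadm-style clusters is a taint that repels ordinary workloads (the node name is illustrative; kubeadm applies this taint automatically):

    # Keep ordinary Pods off a control plane node
    kubectl taint nodes cp-node-1 node-role.kubernetes.io/control-plane:NoSchedule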


Controller & Controller Manager

Kubernetes uses controllers to implement a lot of the cluster intelligence. They all run on the control plane. Controllers ensure the cluster runs what you asked it to run.

  • If you ask for three replicas of an app, a controller ensures three healthy replicas are running and takes corrective action if they aren’t (see the sketch after this list).

  • Kubernetes also runs a controller manager that is responsible for spawning and managing the individual controllers.

  • Examples: Deployment controller, StatefulSet controller, ReplicaSet controller, etc.
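A rough way to watch a controller reconcile desired state, assuming a Deployment named web with three replicas already exists (the Pod name below is an illustrative auto-generated name):

    # Delete one Pod; the ReplicaSet controller notices only two replicas remain
    kubectl delete pod web-6d4f9b7c9d-abcde

    # Watch a replacement Pod appear to restore the desired count of three
    kubectl get pods -l app=web --watch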


The cloud controller manager

If your cluster is on a public cloud, such as AWS, Azure, GCP, or Civo Cloud, it will run a cloud controller manager that integrates the cluster with cloud services, such as instances, load balancers, and storage. For example, if you’re on a cloud and an application requests a load balancer, the cloud controller manager provisions one of the cloud’s load balancers and connects it to your app.
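As a sketch, a Service of type LoadBalancer is what typically triggers that provisioning (names and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: web-lb
    spec:
      type: LoadBalancer    # the cloud controller manager provisions a cloud load balancer
      selector:
        app: web            # traffic is forwarded to Pods carrying this label
      ports:
        - port: 80
          targetPort: 8080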


Scheduler

The scheduler watches the API server for new work tasks and assigns them to healthy worker nodes. It implements the following process: watch the API server for new tasks, identify capable nodes, and assign tasks to nodes.

  • Identifying capable nodes involves predicate checks, filtering, and a ranking algorithm.

  • The scheduler marks tasks as pending if it can’t find a suitable node.

  • If the cluster is configured for node autoscaling, the pending task kicks off a cluster autoscaling event that adds a new node and schedules the task to the new node.
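A minimal sketch of the inputs the scheduler filters and ranks on (the label, image, and resource sizes are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: api
    spec:
      nodeSelector:
        disktype: ssd          # only nodes carrying this label pass filtering
      containers:
        - name: api
          image: example/api:1.0
          resources:
            requests:
              cpu: "500m"      # nodes without this much spare CPU are filtered out
              memory: "256Mi"

If no node qualifies, kubectl describe pod api shows the Pod stuck in Pending together with the scheduler’s reason.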


API Server

The API server is the Grand Central of Kubernetes. All communication, between all components, must go through the API server. Internal system components, as well as external user components, all communicate via the API server – all roads lead to the API Server.

  • It exposes a RESTful API that you POST YAML configuration files to over HTTPS. These YAML files, which we sometimes call manifests, describe the desired state of an application. This desired state includes things like which container image to use, which ports to expose, and how many Pod replicas to run.

  • All requests to the API server are subject to authentication and authorization checks. Once these are done, the config in the YAML file is validated, persisted to the cluster store, and work is scheduled to the cluster.
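A minimal sketch of that flow using kubectl, which wraps the HTTPS calls to the API server (the manifest file name is illustrative):

    # Authenticate, validate, persist the desired state, and schedule the work
    kubectl apply -f web-deployment.yaml

    # The same RESTful API can be reached directly, e.g. via a local authenticated proxy
    kubectl proxy &
    curl http://127.0.0.1:8001/api/v1/namespaces/default/pods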


The cluster store

The cluster store is the only stateful part of the control plane and persistently stores the entire configuration and state of the cluster. As such, it’s a vital component of every Kubernetes cluster – no cluster store, no cluster.

  • Kubernetes uses etcd as its cluster store. For high availability you typically run 3 or 5 replicas, with one replica on each control plane node.

  • Etcd prioritizes consistency over availability, halting updates to maintain consistency but allowing user applications to continue working. It uses the RAFT consensus algorithm to ensure consistency of writes.
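A quick way to see the etcd replicas on a kubeadm-style cluster (a sketch; it assumes etcd runs as static Pods labelled component=etcd, which is the kubeadm default):

    # List the etcd members running in the kube-system namespace
    kubectl get pods -n kube-system -l component=etcd -o wide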


Kubelet

The kubelet is the main Kubernetes agent. It runs on every worker node and handles all communication with the cluster. If a task won’t run, the kubelet reports the problem to the API server and lets the control plane decide what actions to take.

  • Watches the API server for new tasks

  • Instructs the appropriate runtime to execute tasks

  • Reports the status of tasks to the API server
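A rough way to see the kubelet’s reporting in action (the node name is illustrative; the second command assumes a systemd-managed kubelet):

    # Node conditions and events are populated from kubelet reports to the API server
    kubectl describe node worker-1

    # On the node itself, the kubelet typically runs as a systemd service
    systemctl status kubelet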


Runtime

Every worker node has one or more runtimes for executing tasks. Most new Kubernetes clusters pre-install the containerd runtime and use it to execute tasks such as:

  • Pulling container images

  • Managing lifecycle operations such as starting and stopping containers

  • They must adhere to the Container Runtime Interface (CRI), a specification maintained by the Kubernetes project under the CNCF
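On a node you can talk to a CRI-compatible runtime such as containerd with crictl (a sketch; it assumes crictl is installed and pointed at the node’s CRI socket):

    # List running containers and cached images via the CRI
    crictl ps
    crictl images

    # Pull an image through the runtime, much as the kubelet would
    crictl pull nginx:1.25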


Kube-proxy

Every worker node runs a kube-proxy service that implements cluster networking and load balances traffic to tasks running on the node. It monitors the changes that happen to Service objects and their endpoints. If changes occur, it translates them into actual network rules inside the node.

  • When new Services or endpoints are added or removed, the API server communicates these changes to the Kube-Proxy.

  • Kube-Proxy then applies these changes as NAT rules inside the node. These NAT rules are simply mappings of Service IP to Pod IP. When a request is sent to a Service, it is redirected to a backend Pod based on these rules.
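If kube-proxy is running in its iptables mode (an assumption; IPVS and other modes also exist), you can inspect those rules on a node:

    # Service-to-Pod NAT mappings programmed by kube-proxy
    sudo iptables -t nat -L KUBE-SERVICES -n | head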


03  Kubernetes Resources


Pods

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. Kubernetes runs containers but they all need to be wrapped in Pods.

  • Pods are immutable: any update replaces the old Pod with a new one rather than changing it in place. You should never log into a Pod and update it.

  • Pods are mortal — they’re created, they live, and they die. Anytime one dies, Kubernetes replaces it with a new one

  • A Pod can have multiple containers, e.g. an app container plus a proxy sidecar in a service mesh implementation
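A minimal multi-container Pod sketch (names and images are illustrative; in a real service mesh the sidecar is usually injected for you):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
        - name: proxy-sidecar         # second container sharing the Pod's network and volumes
          image: example/mesh-proxy:1.0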


More on Pods…

  • All containers in a Pod are always scheduled to the same node, because a Pod is a shared execution environment and you can’t easily share memory, networking, and volumes across nodes.

  • Pod startup is also atomic: Kubernetes only ever marks a Pod as running when all its containers have started.

  • Pods are the minimum unit of scheduling in Kubernetes. As such, scaling an application up adds more Pods and scaling it down deletes Pods. You do not scale by adding more containers to existing Pods.
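A quick way to see both points, assuming the Pod above is managed by a Deployment named web (names are illustrative):

    # The NODE column is per Pod: all of a Pod's containers land on the same node
    kubectl get pods -o wide

    # Scaling changes the number of Pods, not the number of containers per Pod
    kubectl scale deployment web --replicas=10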


Pod Create Flow


Deployments

Although Kubernetes works with Pods, you’ll almost always deploy them via higher-level controllers such as Deployments.

Deployments add self-healing, scaling, rolling updates, and versioned rollbacks to stateless apps.

  • Rolling updates replace old Pods with new ones that get new IPs

  • Scaling up adds new Pods with new IPs

  • Scaling down deletes existing Pods.
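A minimal Deployment sketch (name, image, and replica count are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                 # desired number of Pod replicas
      selector:
        matchLabels:
          app: web
      template:                   # Pod template stamped out for every replica
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25
              ports:
                - containerPort: 80

Changing the image and re-applying the manifest triggers a rolling update, and kubectl rollout undo deployment/web reverts to the previous version.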


Services

Pods have a lifecycle: they are created and eventually they are destroyed. This makes communicating with Pods directly unreliable because their IP addresses keep changing. Services come into play by providing reliable networking for groups of Pods.

  • A Service can be thought of as having a front end and a back end. The front end has a DNS name, an IP address, and a network port. The back end uses labels to load-balance traffic across a dynamic set of Pods.

  • Services keep a list of healthy Pods as scaling events, rollouts, and failures cause Pods to come and go.

  • The Service also guarantees the name, IP, and port on the front end will never change.
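A minimal Service sketch for the Deployment above (names and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: web                   # stable DNS name: web.default.svc.cluster.local
    spec:
      selector:
        app: web                  # back end: any healthy Pod carrying this label
      ports:
        - port: 80                # stable front-end port
          targetPort: 80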


Ingress

Ingress is a Kubernetes resource that defines routing rules. Kubernetes doesn’t have a built-in Ingress controller, meaning you need to install one. This differs from Deployments, ReplicaSets, Services, and most other resources, which have built-in, pre-configured controllers.

  • Ingress operates at layer 7 of the OSI model, also known as the application layer. This means it can inspect HTTP headers and forward traffic based on hostnames and paths.

  • Ingress classes allow you to run multiple Ingress controllers on a single cluster.

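A minimal Ingress sketch (the host, path, and class name are illustrative; the class must match an installed controller):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      ingressClassName: nginx     # selects which installed controller handles this Ingress
      rules:
        - host: shop.example.com  # layer-7 routing on hostname...
          http:
            paths:
              - path: /           # ...and path
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 80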


The Ingress controller itself is exposed through a single Kubernetes Service, typically of type LoadBalancer, which receives external traffic and routes it according to the Ingress rules.


Thanks!

CREDITS: This presentation template was created by Slidesgo, and includes icons by Flaticon, infographics & images by Freepik and content by Swetha Tandri


Resources