
CWT Digital Kubernetes

On-premise solution

Confidential

Customized for Lorem Ipsum LLC

Version 1.0


TOC

Kubespray

Kubernetes

Kubectl

Helm

Istio

Troubleshooting

Known Issues

Tips & Tricks



Overview

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience running production workloads at Google.


Problems to solve

Container orchestration

Horizontal scaling

Self-healing

Service discovery and load balancing


Kubernetes

Kubernetes is quickly becoming the new standard for deploying and managing software in the cloud. With all the power Kubernetes provides, however, comes a steep learning curve.


Kubespray

Deploy a Production Ready Kubernetes Cluster using Ansible.

  • Can be deployed on AWS, GCE, Azure, OpenStack, vSphere or Baremetal
  • Highly available cluster
  • Composable (Choice of the network plugin for instance)
  • Supports most popular Linux distributions
  • Continuous integration tests

I have forked Kubespray (http://git/Operations/kubespray) so that we can customize the inventory.

In the main project (http://git.mobimate.local/Operations/autobots-k8s-onprem), it is integrated as a Git submodule.


Kubespray

Following is the custom STAGING inventory:

Cluster installation:

$ ansible-playbook -i inventory/stg/hosts.ini cluster.yml -b -u ratchet --private-key=~/.ssh/worldmate/ratchet_rsa -v

Use the scale.yml playbook to add nodes, and remove-node.yml to remove nodes from the cluster.


Kubespray

After installation, Kubespray will have set up all required components:

  • Docker engine
  • Linux users and groups
  • etcd
  • Kubelet process (via systemd)
  • Cluster certificates & roles
  • Calico (network plugin)
  • Kubernetes configuration (/etc/kubernetes)
  • kube-apiserver, kube-scheduler, kube-dns, kube controllers, kube-proxy (as containers)

For more information, browse the docs.


Understanding Kubernetes


Components

Master Components

  • kube-apiserver - Exposes the Kubernetes API
  • etcd - Consistent and highly-available key value store
  • kube-scheduler - Selects nodes for newly created pods
  • kube-controller-manager - Component on the master that runs controllers
  • cloud-controller-manager - Interacts with the underlying cloud provider

Node Components

  • kubelet - An agent that runs on each node in the cluster
  • kube-proxy - Enables the Kubernetes service abstraction
  • Container runtime - Kubernetes supports several runtimes: Docker, rkt, runc


Node


A node is the smallest unit of computing hardware in Kubernetes. It is a representation of a single machine in your cluster.

Thinking of a machine as a “node” allows us to insert a layer of abstraction. Now, instead of worrying about the unique characteristics of any individual machine, we can instead simply view each machine as a set of CPU and RAM resources that can be utilized. In this way, any machine can substitute any other machine in a Kubernetes cluster.


The Cluster


Although working with individual nodes can be useful, it’s not the Kubernetes way. In general, you should think about the cluster as a whole, instead of worrying about the state of individual nodes.

In Kubernetes, nodes pool together their resources to form a more powerful machine. When you deploy programs onto the cluster, it intelligently handles distributing work to the individual nodes for you. It shouldn’t matter to the program, or the programmer, which individual machines are actually running the code.


Persistent Volumes


Because programs running on your cluster aren’t guaranteed to run on a specific node, data can’t be saved to any arbitrary place in the file system. If a program tries to save data to a file for later, but is then relocated onto a new node, the file will no longer be where the program expects it to be.

To store data permanently, Kubernetes uses Persistent Volumes. Persistent Volumes provide a file system that can be mounted to the cluster, without being associated with any particular node.
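As a sketch of that idea, a PersistentVolume and a claim against it might look like the following. This is illustrative only (our cluster currently has no storage plugin, see Known Issues); the names, size, and hostPath are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data          # node-local path, for illustration only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Pods then mount the claim (example-pvc) via spec.volumes, without knowing which node backs it.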


Containers


Programs running on Kubernetes are packaged as Linux containers. Containers are a widely accepted standard.

Containerization allows you to create self-contained Linux execution environments. Any program and all its dependencies can be bundled up into a single file and then shared on the internet.


Pods


Kubernetes doesn’t run containers directly; instead it wraps one or more containers into a higher-level structure called a pod. Any containers in the same pod will share the same resources and local network. Containers can easily communicate with other containers in the same pod as though they were on the same machine while maintaining a degree of isolation from others.

Pods are used as the unit of replication in Kubernetes.
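A minimal sketch of a two-container pod (the names and images below are illustrative, not one of our services). Both containers share the pod's network, so they can reach each other on localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx:1.15             # illustrative image
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.29           # illustrative image
      command: ["sh", "-c", "sleep 3600"]
```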


Deployments


Although pods are the basic unit of computation in Kubernetes, they are not typically launched directly on a cluster. Instead, pods are usually managed by one more layer of abstraction: the deployment.

A deployment’s primary purpose is to declare how many replicas of a pod should be running at a time. When a deployment is added to the cluster, it will automatically spin up the requested number of pods, and then monitor them. If a pod dies, the deployment will automatically re-create it.


Ingress


Using the concepts described above, you can create a cluster of nodes, and launch deployments of pods onto the cluster. There is one last problem to solve, however: allowing external traffic to your application.

By default, Kubernetes provides isolation between pods and the outside world. If you want to communicate with a service running in a pod, you have to open up a channel for communication. This is referred to as ingress.
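The plain Kubernetes way to open such a channel is a Service; later slides cover how we do this with Istio instead. As a sketch, a NodePort Service (all names and ports are illustrative) exposes matching pods on a port of every node:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: example        # routes to pods labeled app=example
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port inside the pod
      nodePort: 30080   # exposed on every node in the cluster
```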


Kubernetes Objects

Kubernetes objects are described to the Kubernetes API using .yaml manifests.

A Kubernetes object is a "record of intent": once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's desired state.


Kubernetes Objects

Most often, you provide the information to kubectl in a .yaml file.

Required fields: apiVersion, kind, metadata, spec.

Example of deployment creation:

$ kubectl create -f application/deployment.yaml
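As a sketch of those required fields, here is an annotated skeleton (not a complete, deployable manifest; the names are placeholders):

```yaml
apiVersion: apps/v1   # which version of the Kubernetes API to use
kind: Deployment      # what kind of object to create
metadata:             # data that uniquely identifies the object
  name: example
  namespace: default
spec:                 # the desired state of the object; its schema depends on kind
  replicas: 1
```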


Kubernetes API

To work with Kubernetes objects, whether to create, modify, or delete them, you'll need to use the Kubernetes API. When you use the kubectl command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you.


kubectl

Let’s install kubectl.

If you are on macOS and using Homebrew package manager, you can install with:

$ brew install kubernetes-cli

Run kubectl version to verify that the version you’ve installed is sufficiently up-to-date.


kubectl

kubectl is a command line interface for running commands against Kubernetes clusters. Use the following syntax:

kubectl [command] [TYPE] [NAME] [flags]

For TYPE, you can use singular, plural, or abbreviated forms, for example:

$ kubectl get pod pod1
$ kubectl get pods pod1
$ kubectl get po pod1

Use -o=<output_format> to select a different output format (wide, yaml, json, etc.).


Kubectl - Contexts

When running kubectl, you are targeting a specific context.

What's a Context? It's a combination of a cluster host and a user.

To create a new context, first define a cluster & user using these commands:

$ kubectl config set-cluster <name> --server=... --certificate-authority=...
$ kubectl config set-credentials <name> --client-certificate=... --client-key=...
$ kubectl config set-context <name> --cluster=... --user=...

Then switch to it with: $ kubectl config use-context <name>

See repository’s README.md on how to add STAGING context


Kubectl - Namespaces

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.

Namespaces provide a scope for names and divide cluster resources.

$ kubectl get namespaces
$ kubectl get deploy
$ kubectl get -n istio-system deploy
$ kubectl get -n kube-system deploy
$ kubectl get --all-namespaces deploy


Kubectl - Proxy

To connect to the Kubernetes API locally, you can open a proxy:

$ kubectl proxy

Now you can access Kubernetes API locally from your computer: http://localhost:8001/

You can access services: http://localhost:8001/api/v1/namespaces/default/services/bplus:3000/proxy/bplus/v3/isAlive

Or login to the Kubernetes dashboard:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Get the login token with: $ kubectl describe secret -n kube-system tiller-token-6bv4d


YAML Examples

Service

Deployment
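The original Service and Deployment examples were slide images; below is a hedged reconstruction of what such a pair looks like. All names, labels, images, and ports are illustrative, not our actual manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example              # routes to pods created by the deployment below
  ports:
    - port: 80                # port the service listens on
      targetPort: 8080        # port the container listens on
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: example:1.0  # illustrative image
          ports:
            - containerPort: 8080
```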


kubectl

Given a resource manifest ./app.yml:

Create a resource from a file:

$ kubectl create -f app.yml

Delete resources by filenames:

$ kubectl delete -f app.yml

Apply a configuration to a resource by filename:

$ kubectl apply -f app.yml

Replace a resource by filename:

$ kubectl replace -f app.yml


kubectl

Usage examples:

$ kubectl create -f boo.yml
$ kubectl explain services
$ kubectl get pods,svc,deploy
$ kubectl describe deploy push-service
$ kubectl delete deploy push-service
$ kubectl cluster-info
$ kubectl api-resources
$ kubectl top node
$ kubectl top pods
$ kubectl logs -f deploy/bplus -c bplus
$ kubectl exec -it bplus-6c6887547d-lhbsb -c bplus -- tail -f \
    /var/log/bplus/wmapi.log


Structure

As more deployments are added to the cluster, you may notice a lot of copy-pasted, duplicated manifests.

How can one reuse Kubernetes object manifests?


Helm

The package manager for Kubernetes.

Helm helps you manage Kubernetes applications: Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

Using Helm charts helps produce reusable, parameterized .yaml templates for installing and upgrading Kubernetes objects.

You can find a lot of community charts on the internet.


Helm

Helm is the package manager (analogous to yum and apt) and Charts are packages (analogous to debs and rpms).

We use Helm to templatize the deployment of our internal CWT services, see: http://git.mobimate.local/Operations/autobots-k8s-onprem/tree/master/helm/cwt-service

Each service is defined in values.yaml and selected with the serviceName parameter passed when running helm, for example:

$ helm template --set serviceName=push-service ./helm/cwt-service
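The mechanism can be sketched like this; the values and template below are hypothetical, not the actual cwt-service chart:

```yaml
# values.yaml (hypothetical sketch)
serviceName: ""          # set on the command line with --set serviceName=...
services:
  push-service:
    port: 8080
  bplus:
    port: 3000

# templates/service.yaml (hypothetical sketch)
# {{- $svc := index .Values.services .Values.serviceName }}
# apiVersion: v1
# kind: Service
# metadata:
#   name: {{ .Values.serviceName }}
# spec:
#   ports:
#     - port: {{ $svc.port }}
```

Running helm template with --set serviceName=push-service would then render a Service named push-service on port 8080 from the shared template.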


Helm

There are two ways to use Helm:

  1. Using Helm only as a template generator
  2. Using Helm install/upgrade mechanisms (using Tiller)

Currently we are using Helm only as a template generator, piping the output to kubectl apply to deploy into Kubernetes, for example:

$ helm template --set serviceName=bplus ./helm/cwt-service | kubectl apply -f -


Network

What about Ingress and traffic routing?


Istio

Connect, secure, control, and observe services.

  • Resiliency & efficiency
  • Traffic control
  • Network visibility
  • Security
  • Policy enforcement

In the main project (http://git.mobimate.local/Operations/autobots-k8s-onprem), Istio is integrated as a Git submodule.

Without any changes to application code!


Istio - Traffic Management

A Gateway configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the mesh to enable ingress traffic for an application.

Unlike Kubernetes Ingress, Istio Gateway only configures the L4-L6 functions (for example, ports to expose, TLS configuration). Users can then use standard Istio rules to control HTTP requests as well as TCP traffic entering a Gateway by binding a VirtualService to it.

See autobots-k8s-onprem common-gateway here.
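As a sketch of what such a Gateway looks like (the name reuses common-gateway from above, but the ports and hosts are illustrative, not the repository's actual manifest):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: common-gateway
spec:
  selector:
    istio: ingressgateway    # bind to Istio's ingress gateway pods
  servers:
    - port:
        number: 80           # L4-L6 only: which port to expose
        name: http
        protocol: HTTP
      hosts:
        - "*"                # accept traffic for any host
```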


Istio - Traffic Management

To configure the corresponding routes, you must define a VirtualService for the same host, bound to the Gateway using the gateways field in its configuration.

A VirtualService defines the rules that control how requests for a service are routed within an Istio service mesh.

See autobots-k8s-onprem common virtual-service here.
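A hedged sketch of such a VirtualService, binding to the common-gateway and routing a URI prefix to the bplus service on port 3000 (the match rule and names are illustrative, not the repository's actual manifest):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: common-virtual-service
spec:
  hosts:
    - "*"                    # same host as the Gateway
  gateways:
    - common-gateway         # bind this routing config to the Gateway
  http:
    - match:
        - uri:
            prefix: /bplus/  # illustrative route
      route:
        - destination:
            host: bplus
            port:
              number: 3000
```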


Istio - Traffic Management

A ServiceEntry is commonly used to enable requests to services outside of an Istio service mesh.

See autobots-k8s-onprem JBoss service-entry here.
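A sketch of what such a ServiceEntry might look like for an external JBoss host; the hostname, port, and resolution mode below are hypothetical, not the repository's actual manifest:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: jboss
spec:
  hosts:
    - jboss.mobimate.local   # hypothetical external host
  location: MESH_EXTERNAL    # the service lives outside the mesh
  ports:
    - number: 8080
      name: http
      protocol: HTTP
  resolution: DNS            # resolve the host via DNS
```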


Istio - How does it work?

Istio uses an extended version of the Envoy proxy. Envoy is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh.

Envoy is deployed as a sidecar to the relevant service in the same Kubernetes pod. This deployment allows Istio to extract a wealth of signals about traffic behavior as attributes.


Istio - Deploy CWT Service

Ensure you have helm, istioctl and kubectl installed.

The deployment syntax:

$ helm template --name release-name --set serviceName=<name> --set serviceVersion=<version> ./helm/cwt-service | istioctl kube-inject -f - | kubectl apply -f -

For example:

$ helm template --name release-name --set serviceName=users --set serviceVersion=3.4.3 ./helm/cwt-service | istioctl kube-inject -f - | kubectl apply -f -


Troubleshooting


Kubernetes - Troubleshooting

Debugging Pods:

$ kubectl describe pods ${POD_NAME}

Look at the state of the containers in the pod. Are they all Running? Have there been recent restarts?

Pod is crashing or otherwise unhealthy?

$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}

$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}

$ kubectl exec -it ${POD_NAME} -c ${CONTAINER_NAME} -- tail -f /var/log/trips/wmapi.log


Istio - Troubleshooting

Debugging service-mesh:

$ istioctl proxy-status
$ istioctl proxy-config clusters <pod-name>
$ istioctl proxy-config listeners <pod-name>

Debugging Istio components:

$ kubectl logs -f --tail 1000 -n istio-system deploy/istio-ingressgateway
$ kubectl logs -f --tail 1000 -n istio-system deploy/istio-pilot -c discovery

Istio Ops Guide: https://istio.io/help/ops/


Known Issues


Known issues

These are the known issues with our cluster implementation:

01 | A new node takes ~25 minutes to add (Kubespray)

02 | No storage plugin for persistent volumes

03 | Service Helm chart isn’t flexible enough

04 | Istio Pilot is slow. A new deployment takes ~1.5m to roll out

05 | Monolithic Istio VirtualService configuration


Vision


Vision

01 | Democratize distributed apps: form a standard for CWT Digital distributed apps infrastructure architecture that fits our needs.

02 | Cattle mentality: visualize machine nodes as cattle and not pets; make the leap to treat nodes as dispensable.

03 | Enable developers: allow devs to take part, and utilize new infrastructure features that can be applied cluster-wide or per service.


Tips & Tricks


Tips & Tricks

Add this to your ~/.bashrc to add auto-completion for kubectl & helm:

source <(kubectl completion bash)
source <(helm completion bash)

kubectl cheat-sheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Helm commands: https://docs.helm.sh/helm

Continuously watch updates: $ kubectl get pod -w
View node/pod metrics: $ kubectl top nodes / $ kubectl top pods
View latest events: $ kubectl get events
View all non-running pods in the cluster: $ kubectl get --all-namespaces pods | grep -v Running


Cool Tools


Thank you.