Kubernetes

Kubernetes

  Initial release: 17 July 2015[1]
  Stable release: 1.0.6 / 10 September 2015[1]
  Repository: github.com/kubernetes/kubernetes
  Written in: Go
  Operating system: Linux
  Type: Cluster management software (container orchestration)
  License: Apache License 2.0
  Website: kubernetes.io

Kubernetes is an open-source system for managing and deploying containerized applications across multiple hosts. It automatically determines which node in a cluster should receive each application (a "pod" in Kubernetes lingo), based on each node's current workload and a given redundancy target (thus also providing self-healing). It also provides a means of discovery and communication between containers.

Kubernetes sits on top of container runtimes such as Docker or rkt, which run the containers that make up a pod. Its goal is to fill the gap between modern container-based cluster infrastructure and the assumptions made by the applications and services themselves, such as instances of computation-heavy services running on different hosts, or small related services staying close together to minimize latency.

Introduction

To understand Kubernetes, we first need to understand what containers and Docker are.

Docker and containers


Docker is an open-source engine that automates the deployment of applications as containers which are independent of hardware, language, framework, packaging system, and hosting provider. Docker implements a high-level API to do this.

By using containers, resources can be isolated, services restricted, and processes provisioned to have an almost completely private view of the operating system, with their own process ID space, file system structure, and network interfaces (via Linux kernel features such as cgroups and namespaces). Multiple containers share the same kernel, but each container can be constrained to use only a defined amount of resources such as CPU, memory, and I/O.

Using Docker to create and manage containers can simplify the creation of highly distributed systems by allowing multiple applications, worker tasks, and other processes to run autonomously on a single physical machine or across multiple virtual machines. This allows nodes to be deployed as resources become available or when more nodes are needed, enabling a platform-as-a-service (PaaS) style of deployment and scaling. Docker also simplifies the creation and operation of task or workload queues and other distributed systems.

Problems

While the increased granularity of containers provides marvelous benefits, it doesn’t substantially increase the efficiency of any running workload, and some workloads require thousands of computers to run the application. Docker today is really only designed to operate on a single computer. How can containers and the workloads they host be coordinated, distributed, and managed as they consume infrastructure resources? How can they operate in multi-tenant network environments? How are they secured?

Google has iteratively tackled these problems, building cluster management, networking, and naming systems (first Borg, and then its successor Omega) to allow container technology to operate at Google scale. Kubernetes is the open-source descendant of the lessons learned from those systems.

Kubernetes has created a layer of abstraction that allows the developer and administrator to work collectively on improving the behavior and performance of the desired service, rather than any of its individual component containers or infrastructure resources.

Concepts


Kubernetes Master


The controlling unit in a Kubernetes cluster is called the master server. It serves as the main management contact point for administrators, and it also provides many cluster-wide systems for the relatively dumb worker nodes.

The master server runs a number of unique services that are used to manage the cluster's workload and direct communications across the system. Below, we will cover the components that are specific to the master server.

  • Etcd - one of the fundamental components Kubernetes needs to function is a globally available configuration store. The etcd project is a lightweight, distributed key-value store that can run across multiple nodes. Kubernetes uses etcd to store configuration data that can be used by each of the nodes in the cluster. It can be used for service discovery, and it represents the state of the cluster that each component can reference to configure or reconfigure itself. Because etcd provides a simple HTTP/JSON API, the interface for setting or retrieving values is very straightforward (a sketch of such an exchange follows this list).
  • API Server - one of the most important services the master server runs is an API server. This is the main management point of the entire cluster, as it allows a user to configure many of Kubernetes' workloads and organizational units. It is also responsible for making sure that the etcd store and the service details of deployed containers are in agreement. The API server implements a RESTful interface, which means that many different tools and libraries can readily communicate with it. A client called kubecfg is packaged along with the server-side tools and can be used from a local computer or by connecting to the master server.
  • Controller Manager Server - handles the replication processes defined by replication tasks. The details of these operations are written to etcd, where the controller manager watches for changes. When a change is seen, the controller manager reads the new information and implements the replication procedure that fulfills the desired state. This can involve scaling the application group up or down.
  • Scheduler Server - the process that actually assigns workloads to specific nodes in the cluster. It reads in a service's operating requirements, analyzes the current infrastructure environment, and places the work on an acceptable node or nodes. The scheduler is responsible for tracking resource utilization on each host to make sure that workloads are not scheduled in excess of the available resources; it must know the total resources available on each server, as well as the resources allocated to existing workloads on each server.
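
As a rough sketch of that HTTP/JSON interface, a configuration value can be written and then read back with plain HTTP requests. The key name, address, port, and index values below are illustrative assumptions; only the general shape follows etcd's v2 API:

PUT http://127.0.0.1:2379/v2/keys/config/db-host   (request body: value=10.0.0.5)

GET http://127.0.0.1:2379/v2/keys/config/db-host
{
  "action": "get",
  "node": {
    "key": "/config/db-host",
    "value": "10.0.0.5",
    "modifiedIndex": 7,
    "createdIndex": 7
  }
}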

Nodes


A node is a physical server (or a VM) inside the cluster; nodes are also known as minions. Minion servers have a few requirements that are necessary to communicate with the master, configure the networking for containers, and run the actual workloads assigned to them. Below are the components that are specific to nodes.

  • Docker Running on a Dedicated Subnet - the first requirement of each individual minion server is Docker. The docker service is used to run encapsulated application containers in a relatively isolated but lightweight operating environment. Each unit of work is, at its basic level, implemented as a series of containers that must be deployed.
  • Kubelet Service - the main contact point for each minion with the cluster group is through a small service called kubelet. This service is responsible for relaying information to and from the master server, as well as interacting with the etcd store to read configuration details or write new values. The kubelet service communicates with the master server to receive commands and work. Work is received in the form of a "manifest" which defines the workload and the operating parameters. The kubelet process then assumes responsibility for maintaining the state of the work on the minion server.
  • Proxy Service - to deal with individual host subnetting and to make services available to external parties, a small proxy service runs on each minion server. This process forwards requests to the correct containers, can do primitive load balancing, and is generally responsible for making sure the networking environment is predictable and accessible, but isolated.

Pod

A pod is the basic building block in Kubernetes. Inside a pod, you can run a set of containers. Pods are groups of containers that interact with each other on a frequent basis. Typically these are placed on the same physical host and operate as one logical group. The containers in a pod share the same host resources (CPU, RAM, network) and typically communicate with each other via localhost connections on the same physical host. There are a few different container platforms, but the most common ones are Docker and Rocket (rkt).

Pods also define the type of application/containers that run in the pod, as well as the shared storage to be used by the containers running in the pod. Pods also facilitate horizontal and vertical scaling for the containers within them; essentially, a pod is just an abstraction layer, which makes it easier to manage applications than it would be to handle individual containers.

Pods are meant to host applications such as a CMS like WordPress, logging systems, snapshot managers, and so on; however, a pod should typically not run multiple instances of the same application. Likewise, you don't want to be running HHVM in the same container that runs MySQL.

Kubernetes Apache Pod Example

This is a basic example of a single pod that runs Apache. This may very well be out of date tomorrow, so be sure to check for updated docs.

{
  "id": "Apache-Node",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "Apache-Node",
      "containers": [{
        "name": "master",
        "image": "dockerfile/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 8080
        }]
      }]
    }
  },
  "labels": {
    "name": "web-01"
  }
}

For example, an nginx web server pod might be defined as follows:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Replication Controller

A replication controller watches the cluster and ensures that a given number of pods is running in the cluster at all times. It can launch new pods and remove existing pods, and we can also change the number of pods assigned to a replication controller. For this to function, we need to define our pod as a template inside the replication controller, as sketched below.

Even though a pod is a very powerful component, it can't handle failures by itself. Say the node (server) running our pod crashes: then our pod is removed from the cluster too. This is not the behavior we want; failures are inevitable, and even in such situations we need to keep providing our service to customers. That is what the replication controller does.
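
A minimal sketch of such a definition, assuming the v1 API and an nginx image (both assumptions here, in the spirit of the pod examples above):

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3                # desired number of pod copies
  selector:
    app: nginx               # pods carrying this label count as replicas
  template:                  # pod template used to launch replacements
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

If a node crashes and a pod is lost, the controller notices that only two labeled pods remain and starts a third from the template.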

Services

As we have seen, pods are constantly added and removed, so we need a way to load-balance traffic across them. A "service" is the solution: it can act as a dynamic load balancer for a set of pods. It is very efficient, using iptables and other techniques to avoid load-balancing overhead, and services also come with basic sticky-session support. A minimal sketch follows.
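
A minimal sketch of such a service, again assuming the v1 API and the app: nginx label from the replication controller sketch above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx               # traffic is balanced across pods with this label
  ports:
  - port: 80                 # port the service listens on
    targetPort: 80           # port on the pods receiving the traffic
  sessionAffinity: ClientIP  # the basic sticky-session support mentioned above

Because the selector matches labels rather than specific pods, the set of endpoints updates automatically as the replication controller adds or removes pods.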

Labels

Labels define and organize loosely coupled pods, using key/value pairs to describe the function of a pod. This metadata is used to define the role and environment, such as "production", "front-end", or "back-end"; it enables users to map their own organizational structures onto system objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined, and each key must be unique for a given object.

The label selector allows the user to identify a set of pods by querying the metadata set by the labels. This makes it easy to find the pods that handle various tasks, and it can be used to identify replicas or shards, pool members, or other peers in a group of containers. There are currently two objects that are supported by label selectors:

  • service: "A configuration unit for the proxies that run on every worker node. It is named and points to one or more pods." Another way to look at it would be as a named load balancer that sends traffic to one or more containers via a proxy. A service finds the containers it should be load-balancing based on the pod labels applied when the pods are initially created: traffic is routed according to the "selector" in the configuration file, which names the label of the pods that should receive the traffic.
  • replicationController: "A replication controller takes a template and ensures that there is a specified number of "replicas" of that template running at any one time. If there are too many, it'll kill some. If there are too few, it'll start more."

You could label pods as Apache, PHP-FPM, HHVM, or whatever makes the most sense for what runs inside the container. You can customize and create your own definitions using labels.

For more details, see github.com.

An example of how to use labels could look something like this (the syntax here is not correct; this is simply a basic illustration):

tier: "frontend" or "backend"
environment: "production" or "staging"
version: "stable"
replication: 10

If you wanted to test out a change on 2 of these 10 pods, you could set the version label to "testing" and apply it to 2 of the 10 pods:

tier: "frontend" or "backend"
environment: "production" or "staging"
version: "testing"
replication: 2

Labels can overlap between different pods. For instance, you can have many pods with a "frontend" label, but with a few different environments, versions, or whatever else you label them with. This makes it easier to query and view the overall layout of the pods and services. Labels are set when a pod is created (and, as noted above, can be modified later). A syntactically correct sketch of pod labels and a selector follows.
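
To make the pseudo-syntax above concrete, here is a hedged sketch of how those labels would actually appear in a pod's metadata under the v1 API, reusing the illustrative key names from above:

metadata:
  labels:
    tier: frontend           # or "backend"
    environment: production  # or "staging"
    version: stable          # switch to "testing" on the pods under test

A label selector can then pick out the matching pods with a query such as environment=production,tier=frontend.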

Creating a Kubernetes Cluster

Kubernetes can run on a range of platforms, from your laptop, to VMs on a cloud provider, to racks of bare-metal servers. The effort required to set up a cluster varies from running a single command to crafting your own customized cluster.

Local-machine Solutions

Local-machine solutions create a single cluster with one or more Kubernetes nodes on a single physical machine. Setup is completely automated and doesn't require a cloud provider account, but their size and availability are limited to that of a single machine.

Hosted Solutions

Use Google services, such as Google Container Engine.

Turn-key Cloud Solutions

These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a few commands, and have active community support.

Custom Solutions

Kubernetes can run on a wide range of Cloud providers and bare-metal environments, and with many base operating systems. You can find a guide below that matches your needs. If you do want to start from scratch, try the Getting Started from Scratch guide.

Cloud

These solutions are combinations of cloud provider and OS not covered by the above solutions.

On-Premises VMs

Bare Metal


References

  1. "GitHub Releases page". github.com. 2015-09-10.