From Bauman National Library
This page was last modified on 19 December 2016, at 13:46.
Developer(s) Docker, Inc.
Initial release 13 March 2013
Written in Go
Operating system Linux
Platform x86-64 with modern Linux kernel
Available in English
Type Operating system-level virtualization
License Apache License 2.0

Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.

The Linux kernel's support for namespaces mostly isolates an application's view of the operating environment, including process trees, network, user IDs and mounted file systems, while the kernel's cgroups provide resource isolation, including the CPU, memory, block I/O and network. Since version 0.9, Docker includes the libcontainer library as its own way to directly use virtualization facilities provided by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC (Linux Containers) and systemd-nspawn.

According to industry analyst firm 451 Research, "Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server. This helps enable flexibility and portability on where the application can run, whether on premises, public cloud, private cloud, bare metal, etc."[1]


Solomon Hykes started Docker in France as an internal project within dotCloud, a platform-as-a-service company, with initial contributions by other dotCloud engineers including Andrea Luzzardi and Francois-Xavier Bourlet. Jeff Lindsay also became involved as an independent collaborator. Docker represents an evolution of dotCloud's proprietary technology, which is itself built on earlier open-source projects such as Cloudlets.

Docker was released as open source in March 2013. On March 13, 2014, with the release of version 0.9, Docker dropped LXC as the default execution environment and replaced it with its own libcontainer library written in the Go programming language. As of October 24, 2015, the project had over 25,600 GitHub stars (making it the 20th most-starred GitHub project), over 6,800 forks, and nearly 1,100 contributors.

A May 2016 analysis showed the following organizations as main contributors to Docker: The Docker team, Cisco, Google, Huawei, IBM, Microsoft, and Red Hat.


  • On September 19, 2013, Red Hat and Docker announced a significant collaboration around Fedora, Red Hat Enterprise Linux, and OpenShift.
  • On October 15, 2014, Microsoft announced integration of the Docker engine into the next (2016) Windows Server release, and native support for the Docker client role in Windows.
  • On December 4, 2014, IBM announced a strategic partnership with Docker that enables enterprises to more efficiently, quickly and cost-effectively build and run the next generation of applications in the IBM Cloud.
  • On June 22, 2015, Docker and several other companies announced that they are working on a new vendor and operating-system-independent standard for software containers.
  • On June 8, 2016, Microsoft announced that Docker now could be used natively on Windows 10 with Hyper-V Containers, to build, ship and run containers utilizing the Windows Server 2016 Technical Preview 5 Nano Server container OS image.
  • On October 4, 2016, Solomon Hykes announced InfraKit as a new self-healing container infrastructure effort for Docker container environments.

Using Docker

Faster delivery of your applications

Docker is well suited to organizing the development cycle. It lets developers run applications and services in local containers, which then feed naturally into a continuous integration and deployment workflow.

For example, developers write code locally and share their development stack (a set of Docker images) with colleagues. When the code is ready, they push it, together with its containers, to a test environment and run any necessary tests. From the test environment, the same code and images can be promoted to production.

Docker is not a replacement for LXC[2]. "LXC" refers to capabilities of the Linux kernel (specifically namespaces and control groups) that allow sandboxing processes from one another and controlling their resource allocations.

On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:

Portable deployment across machines Docker defines a format for bundling an application and all its dependencies into a single object which can be transferred to any docker-enabled machine, and executed there with the guarantee that the execution environment exposed to the application will be the same. Lxc implements process sandboxing, which is an important pre-requisite for portable deployment, but that alone is not enough for portable deployment. If you sent me a copy of your application installed in a custom lxc configuration, it would almost certainly not run on my machine the way it does on yours, because it is tied to your machine's specific configuration: networking, storage, logging, distro, etc. Docker defines an abstraction for these machine-specific settings, so that the exact same docker container can run - unchanged - on many different machines, with many different configurations.

Application-centric Docker is optimized for the deployment of applications, as opposed to machines. This is reflected in its API, user interface, design philosophy and documentation. By contrast, the lxc helper scripts focus on containers as lightweight machines - basically servers that boot faster and need less ram. We think there's more to containers than just that.

Automatic build Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging etc. They are free to use make, maven, chef, puppet, salt, debian packages, rpms, source tarballs, or any combination of the above, regardless of the configuration of the machines.

Versioning Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back etc. The history also includes how a container was assembled and by whom, so you get full traceability from the production server all the way back to the upstream developer. Docker also implements incremental uploads and downloads, similar to "git pull", so new versions of a container can be transferred by only sending diffs.

Component re-use Any container can be used as a "base image" to create more specialized components. This can be done manually or as part of an automated build. For example, you can prepare the ideal python environment and use it as a base for 10 different applications. Your ideal postgresql setup can be re-used for all your future projects. And so on.

Sharing Docker has access to a public registry where thousands of people have uploaded useful containers: anything from redis, couchdb and postgres to irc bouncers, rails app servers, hadoop, and base images for various distros. The registry also includes an official "standard library" of useful containers maintained by the docker team. The registry itself is open-source, so anyone can deploy their own registry to store and transfer private containers, for internal server deployments for example.

Tool ecosystem Docker defines an API for automating and customizing the creation and deployment of containers. There are a huge number of tools integrating with docker to extend its capabilities. PaaS-like deployment (Dokku, Deis, Flynn), multi-node orchestration (maestro, salt, mesos, openstack nova), management dashboards (docker-ui, openstack horizon, shipyard), configuration management (chef, puppet), continuous integration (jenkins, strider, travis), etc. Docker is rapidly establishing itself as the standard for container-based tooling.

Easier deployment and scaling

Docker's container-based platform makes workloads highly portable. Docker containers can run on your local machine, on physical or virtual machines in a data center, or in the cloud.

Docker's portability and lightweight nature also make it easy to manage workloads dynamically. You can use Docker to scale applications and services up or down, and Docker's speed means this can happen in near real time.

Higher density, more workloads

Docker is lightweight and fast. It provides a stable, cost-effective alternative to hypervisor-based virtual machines. It is particularly useful in high-density environments, for example when building your own cloud or platform-as-a-service (PaaS), but it is also useful for small and medium deployments where you want to get more out of your existing resources.

The main components of Docker

  • Docker: the open-source virtualization platform;
  • Docker Hub: a platform-as-a-service for distributing and managing Docker containers.


Docker architecture


As shown in the diagram, the Docker daemon runs on the host machine. The user does not interact with the daemon directly, but through the Docker client.


The Docker client, the docker binary, is the main interface to Docker. It accepts commands from the user and communicates with the Docker daemon.

Inside Docker

To understand what Docker is made of, you need to know about its three components:

  • Docker images: read-only templates. For example, an image may contain Ubuntu with Apache and your application installed on top. Images are used to create containers. Docker makes it easy to build new images and update existing ones, and you can also download images created by other people. Images are the build component of Docker.
  • Docker registries: stores of images. There are public and private registries from which you can download or upload images. The public Docker registry is Docker Hub, which holds a huge collection of images. As noted above, images can be created by you, or you can use images created by others. Registries are the distribution component of Docker.
  • Containers: similar to directories. A container holds everything an application needs to run. Each container is created from an image. Containers can be created, started, stopped, moved, or deleted. Each container is isolated and provides a secure platform for an application. Containers are the run component of Docker.

How Docker works

With Docker we can:

  • Create images that hold our applications;
  • Create containers to run those applications;
  • Distribute images through Docker Hub or another image registry.

How an image works

An image is a read-only template from which containers are created. Each image consists of a series of layers. Docker uses union file systems to combine these layers into a single image: a union file system allows files and directories from separate file systems (different branches) to be transparently overlaid, forming a single coherent file system.

This layering is one of the reasons Docker is so lightweight. When you change an image, for example by upgrading an application, a new layer is created. Instead of replacing or rebuilding the whole image, as you might have to do with a virtual machine, only a layer is added or updated. And you do not need to distribute an entirely new image, only the update, which makes distributing images faster and simpler.
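The layer mechanics described above can be sketched with Python's ChainMap, which overlays several dictionaries the way a union file system overlays directories. This is a toy model only (the paths and values below are invented), not Docker's actual storage driver:

```python
from collections import ChainMap

# Read-only image layers, most specific first (a toy stand-in for a union mount).
app_layer = {"/app/config": "v1"}   # added by a later build step
base_layer = {"/bin/sh": "busybox", "/etc/os-release": "Ubuntu 14.04"}

# Starting a container adds a thin writable layer on top of the image.
container_layer = {}
container_fs = ChainMap(container_layer, app_layer, base_layer)

# Reads fall through to the first layer that contains the file.
print(container_fs["/etc/os-release"])  # comes from the base layer

# Writes land only in the top, writable layer; the image layers stay unchanged.
container_fs["/app/config"] = "v2"
print(app_layer["/app/config"])         # still "v1": copy-on-write in spirit
```

Real Docker storage drivers (AUFS, DeviceMapper, and the others mentioned later in this article) implement the same idea at the file-system level.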

At the heart of every image is a base image: for example ubuntu, the base image for Ubuntu, or fedora, the base image for the Fedora distribution. You can also use your own images as a base for new ones: if you have an Apache image, you can use it as the base image for your web applications.

Note: Docker usually pulls images from the Docker Hub registry.

Docker images are built from these base images by following a series of steps we call instructions. Each instruction creates a new layer in the image. Instructions include:

  • Running a command
  • Adding a file or directory
  • Setting an environment variable
  • Specifying which process to run when a container is started from the image

These instructions are stored in a file called a Dockerfile. When you build an image, Docker reads the Dockerfile, executes the instructions, and returns the final image.
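Each instruction type listed above maps to a Dockerfile directive. Below is a minimal illustrative Dockerfile; the image, paths, and command are placeholder examples, not part of this article's later walkthrough:

```dockerfile
# Base image to build on
FROM ubuntu:14.04
# Run a command (each instruction produces a new layer)
RUN apt-get update
# Add a file or directory from the build context
ADD ./app /app
# Create an environment variable
ENV APP_ENV production
# The instruction run when a container starts from this image
CMD ["/app/start"]
```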

How a Docker registry works

A registry is a repository for Docker images. After building an image, you can publish it to the public Docker Hub registry or to your own private registry.

With the Docker client you can search for already-published images and pull them to your machine in order to create containers from them.

Docker Hub provides both public and private image storage. Searching and pulling from public storage is available to everyone, while the contents of private storage are excluded from search results: only you and your users can pull those images and create containers from them.

How a container works

A container consists of an operating system, user files, and metadata. As we know, each container is created from an image. The image tells Docker what the container holds, which process to launch when the container starts, and various other configuration data. The image itself is read-only: when Docker starts a container, it creates a read-write layer on top of the image (using a union file system, as described earlier) in which the application can run.

When you run a container:

Using the docker binary (or the RESTful API), the Docker client tells the Docker daemon to run a container:

 $ sudo docker run -i -t ubuntu /bin/bash 

The client invokes docker with the run option, telling it to launch a new container. At a minimum, the client must pass the daemon two things in order to run the container: which image to create it from (in our case, ubuntu) and which command to execute when the container starts (in our case, /bin/bash).

Docker does the following:

  1. Pulls the ubuntu image: docker checks whether the ubuntu image is present on the local machine and, if it is not, downloads it from Docker Hub; if the image is already there, docker uses it for the new container;
  2. Creates the container: once the image is available, docker uses it to create the container, allocates a filesystem, and mounts a read-write layer: the container is created in the file system and a read-write layer is added on top of the read-only image layers;
  3. Initializes a network/bridge interface: creates a network interface that lets the container talk to the host machine;
  4. Sets up an IP address: finds and attaches an available IP address;
  5. Executes your process: launches your application;
  6. Captures and provides application output: standard input, output, and error are connected and logged so that you can keep track of how your application is doing.
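As a way to see the control flow of these steps, here is a toy simulation in Python. None of this is Docker's real code: the registry and local image cache are plain dictionaries, and the network details are invented:

```python
# Toy model: a "registry" and a local image cache as plain dictionaries.
REGISTRY = {"ubuntu": {"layers": ["base"], "default_cmd": "/bin/bash"}}
local_images = {}
_next_ip = [2]  # next free host address on the pretend bridge

def docker_run(image_name, command=None):
    # 1. Pull the image only if it is not already on the local machine.
    if image_name not in local_images:
        local_images[image_name] = REGISTRY[image_name]
    image = local_images[image_name]
    # 2. Create the container: read-only image layers plus a writable layer,
    #    and a pretend bridge network interface (step 3).
    container = {"layers": image["layers"] + ["read-write"],
                 "interface": "veth0"}
    # 4. Assign the next free IP address on the bridge.
    container["ip"] = "172.17.0.%d" % _next_ip[0]
    _next_ip[0] += 1
    # 5. Record the process to launch (the requested command, or the default).
    container["cmd"] = command or image["default_cmd"]
    # 6. Real Docker would now attach stdin/stdout/stderr; we just return.
    return container

c = docker_run("ubuntu", "/bin/bash")
print(c["ip"], c["cmd"])
```

The point of the sketch is the ordering: the pull happens only on a cache miss, the writable layer and network are set up before the process starts, and output capture comes last.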

You now have a running container. You can manage it and interact with your application and, when you decide to stop the application, remove the container.

Technologies used

Docker is written in Go and relies on several Linux kernel features to implement the functionality described above.

Namespaces

Docker uses namespaces to provide the isolated workspaces we call containers. When you run a container, docker creates a set of namespaces for it.

This gives the container its own layer of isolation: each aspect of the container runs in its own namespace and has no access to anything outside it.

Some of the namespaces that docker uses:

  • pid: for process isolation;
  • net: for managing network interfaces;
  • ipc: for controlling IPC resources (IPC: Inter-Process Communication);
  • mnt: for managing mount points;
  • uts: for isolating kernel and version identifiers (UTS: Unix Timesharing System).
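On a Linux host you can see these namespaces directly: the kernel exposes one entry per namespace under /proc/&lt;pid&gt;/ns. The helper below simply lists them, returning an empty list on systems without procfs:

```python
import os

def process_namespaces(pid="self"):
    """List the namespaces a process belongs to (pid, net, ipc, mnt, uts, ...).

    On Linux, each entry in /proc/<pid>/ns is a symlink naming one namespace;
    on systems without procfs the directory is absent and we return [].
    """
    ns_dir = "/proc/%s/ns" % pid
    if not os.path.isdir(ns_dir):
        return []
    return sorted(os.listdir(ns_dir))

print(process_namespaces())
```

Running this inside a container and on the host would show the same names but different symlink targets: the container's processes live in a separate set of namespace instances.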

Control groups

Docker also uses cgroups, or control groups. The key to running applications in isolation is giving each application only the resources you want it to have; this ensures that containers are good neighbours on a host. Control groups let you share the available hardware between containers and, when necessary, set limits and constraints, for example capping the memory available to a particular container.

Union File System

A union file system, or UnionFS, is a file system that operates by creating layers, which makes it very lightweight and fast. Docker uses union file systems to provide the building blocks for containers, and can use several UnionFS variants, including AUFS, btrfs, vfs, and DeviceMapper.

Container formats

Docker combines these components into a wrapper we call a container format. The default format is libcontainer. Docker also supports traditional Linux containers using LXC. In the future, Docker may support other container formats as well, for example by integrating with BSD Jails or Solaris Zones.

First steps

Installing Docker[3]

 echo deb docker main | sudo tee /etc/apt/sources.list.d/docker.list 
 sudo apt-key adv --keyserver --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9 
 sudo apt-get update 
  sudo apt-get install -y lxc-docker 

Find and install a container

 docker search tutorial 
 docker pull learn/tutorial 

Execute command inside the container

 docker run learn/tutorial apt-get update 

The same commands can be run with a script:

 curl -sSL 

Basic operations with containers

List containers

  docker ps 

View a container's output

  docker logs 

The same, following the log like tail -f:

 docker logs -f 

View the container's configuration, JSON-formatted:

  docker inspect container_name 

View a specific part of the configuration / a single variable:

  docker inspect -f  '{{ .NetworkSettings.IPAddress }}' container_name 
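The `-f` option applies a Go template to the JSON that `docker inspect` returns. The same extraction can be done by parsing the JSON yourself; the snippet below uses a hand-written sample shaped like the relevant fragment (the name and address are made up), not real daemon output:

```python
import json

# A hand-written sample shaped like (part of) the JSON `docker inspect` prints.
sample = json.loads("""
[{"Name": "/flaskapp",
  "NetworkSettings": {"IPAddress": "", "Ports": {}}}]
""")

# Equivalent of: docker inspect -f '{{ .NetworkSettings.IPAddress }}' flaskapp
ip = sample[0]["NetworkSettings"]["IPAddress"]
print(ip)
```

Note that docker inspect returns a JSON array (one element per inspected container), which is why the code indexes element 0 first.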

Basic commands for working with the containers (Docker cheat sheet)

Life cycle

docker create - create a container but do not start it

docker run - create and start a container

docker stop - stop a container

docker start - start an existing, stopped container

docker restart - restart a container

docker rm - remove a container

docker kill - send SIGKILL to a container

docker attach - connect to a running container

docker wait - block and wait until the container stops

Information on containers

docker ps - show running containers (with an extra option, all containers)

docker inspect - show all information about a container, including IP addresses

docker logs - show a container's log output

docker events - show container events

docker port - show a container's publicly mapped ports

docker top - show the processes running inside a container

docker stats - show a container's resource-usage statistics

docker diff - show changed files in a container's file system


  docker cp copies files or folders out of a container's filesystem. 
  docker export turns a container's filesystem into a tarball archive streamed to STDOUT. 

Executing commands

 docker exec executes a command in a running container. 

Creating a container

We will walk through building a docker container that runs a Flask application, and then make the application available through an nginx server running in a separate container.

Create a Dockerfile

 FROM python:2.7 
 RUN mkdir -p /code 
 COPY . /code 
 VOLUME [ "/code" ] 
 WORKDIR /code 
 RUN pip install -r requirements.txt 
 EXPOSE 5000 
 CMD [ "python", "/code/" ] 

The application code should be placed in the current directory:

 from flask import Flask 
 from redis import Redis 
 import os 

 app = Flask(__name__) 
 redis = Redis(host='redis', port=6379) 

 @app.route('/') 
 def hello(): 
     redis.incr('hits') 
     return 'Hello World! I have been seen %s times.' % redis.get('hits') 

 if __name__ == "__main__": 
     app.run(host="", debug=True) 



Building the image

  docker build -t flaskapp .

Starting a container from the new image

  docker run -d -P --name flaskapp flaskapp 

Check that the container is running, and on which port it is available:

   $ sudo docker ps 
 CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                     NAMES 
 0c8339084e07        flaskapp:latest     "python /code/   5 seconds ago       Up 3 seconds>5000/tcp   flaskapp  

By connecting to port 49154 on the host system, we reach the application running inside the container.

Since our application relies on an external service (redis) for its work, we need a redis container linked to this one:

 docker rm -f flaskapp 
 docker run -d --name redis redis 
 docker run -d -P --name flaskapp --link redis:redis flaskapp 

The Redis server is now available to the application.

Linking in additional images

If necessary, you can link in additional Docker images, building up a group of connected containers.

Let's create an nginx container to act as the frontend for the flask application. First, create an nginx configuration file named flaskapp.conf:

 server { 
     listen 80; 
     location / { 
         proxy_pass http://flaskapp:5000; 
     } 
 } 

Create a Dockerfile:

 FROM nginx:1.7.8 
 COPY flaskapp.conf /etc/nginx/conf.d/default.conf 

To build and run the image:

 docker build -t nginx-flask . 
  docker run --name nginx-flask --link flaskapp:flaskapp -d -p 8080:80 nginx-flask 

Three containers are now running, connected to each other:

      +-------+      +-------+      +-------+ 
  8080|       |  5000|       |      |       | 
      o nginx +----->o flask +----->| redis | 
      |       |      |       |      |       | 
      +-------+      +-------+      +-------+ 

The running containers:

  $ docker ps 
 CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS              PORTS                           NAMES 
 980b4cb3002a        nginx-flask:latest   "nginx -g 'daemon of   59 minutes ago      Up 59 minutes       443/tcp,>80/tcp   nginx-flask         
 ae4320dc419a        flaskapp:latest      "python /code/   About an hour ago   Up About an hour>5000/tcp         flaskapp            
 3ecaab497403        redis:latest         "/ redi   About an hour ago   Up About an hour    6379/tcp                        redis  

Check if the service responds:

 $ curl ; echo 
 Hello World! I have been seen 1 times. 
 $ curl ; echo 
 Hello World! I have been seen 2 times. 
 $ curl ; echo 
 Hello World! I have been seen 3 times. 


There are a couple of articles in Russian to learn more about Docker: Dockerfile and communication between the containers[4]

As Google and Docker launch a "revolution of containers"[5]