This page was last modified on 28 June 2016, at 07:25.
Developer(s): Community project, supported by Odin, Inc.
License: GNU GPL v2
OpenVZ is container-based virtualization technology for Linux. OpenVZ creates multiple secure, isolated Linux containers (also known as VEs or VPSs) on a single physical server, enabling better server utilization and ensuring that applications do not conflict. Each container performs exactly like a stand-alone server: it can be rebooted independently and has its own root access, users, IP addresses, memory, processes, files, applications, system libraries, and configuration files.
OpenVZ is free open source software, available under GNU GPL.
OpenVZ is the basis of Virtuozzo, a virtualization solution offered by Odin. Virtuozzo is optimized for hosters and offers hypervisor (VMs in addition to containers), distributed cloud storage, dedicated support, management tools, and easy installation.
- 1 History
- 2 Advantages and disadvantages
- 3 Kernel
- 4 Template
- 5 Remarks
- 6 Authors
- 7 References
History
SWsoft (now known as Parallels) initially released[note 1] a product for Linux named Virtuozzo back in 2001. Their current product is named Parallels Virtuozzo Containers. In 2005 a version of Virtuozzo was released for Microsoft Windows. Also in 2005, SWsoft created the OpenVZ Project to release under a GPL 2 license the underlying technology upon which Virtuozzo builds.
While OS virtualization has not garnered the press attention and excitement that some machine/hardware virtualization products have received in recent years, Virtuozzo (released in 2001) and OpenVZ (released in 2005) have both proven themselves to be efficient, stable, and secure workhorses on tens of thousands of servers around the world. Linux OS virtualization (which also includes Linux-VServer) is arguably the oldest and most widely deployed Linux virtualization platform to date.
Advantages and disadvantages
Since it is relatively lightweight, OS virtualization offers a number of benefits over machine/hardware virtualization:
- It is much more efficient
- It scales better
- It offers much greater machine density
- It offers a larger number of resource management parameters
- Resource management is dynamic so no container restart is needed
OpenVZ is able to achieve better performance (so close to native that it is hard to measure a difference), scalability, and density because there is a single Linux kernel running on the physical host, with each container taking up only the resources necessary for the processes and services you want to run inside it, without the overhead of a full operating system. A basic container might add only 8-14 processes on the host node. With appropriate resource management configuration, OpenVZ can also handle more demanding workloads, such as huge multi-threaded Java applications with hundreds of threads and processes.
Another advantage of OpenVZ is that it offers a wide range of dynamic resource management parameters, including several for memory usage, number of processes, CPU usage, and disk space usage, all of which may be changed while the container is running. OpenVZ also supports container disk quotas, as well as (optional) user and group disk quotas within the containers.
OpenVZ offers a number of advanced features including checkpointing and container migration from one physical host to another. Migration comes in two forms:
- Live migration minimizes downtime (only a few seconds) and maintains machine uptime and network connections
- Offline migration where the machine is stopped, migrated, and then started back up again.
The migration features of OpenVZ do not require a shared storage solution; rsync is used to copy container directory structures from one physical host to another.
While there are a large number of usage scenarios where you would want to use OS virtualization, there remain a few scenarios where OS virtualization is not suited and machine/hardware virtualization would be preferred:
- When you need to run non-Linux OSes
- When you want to run multiple kernel versions
- When you need a highly customized kernel
Kernel
The OpenVZ kernel is a Linux kernel, modified to add support for OpenVZ containers. The modified kernel provides virtualization, isolation, resource management, and checkpointing. As of vzctl 4.0, OpenVZ can work with unpatched Linux 3.x kernels, with a reduced feature set.
From the point of view of applications and container users, each container is an independent system. This independence is provided by a virtualization layer in the kernel of the host OS. Note that only a negligible part of the CPU resources is spent on virtualization (around 1-2%). The main features of the virtualization layer implemented in OpenVZ are the following:
- A container (CT) looks and behaves like a regular Linux system. It has standard startup scripts; software from vendors can run inside a container without OpenVZ-specific modifications or adjustment;
- A user can change any configuration file and install additional software;
- Containers are completely isolated from each other (file system, processes, Inter Process Communication (IPC), sysctl variables);
- Processes belonging to a container are scheduled for execution on all available CPUs. Consequently, CTs are not bound to only one CPU and can use all available CPU power.
The OpenVZ network virtualization layer is designed to isolate CTs from each other and from the physical network:
- Each CT has its own IP address; multiple IP addresses per CT are allowed;
- Network traffic of a CT is isolated from that of other CTs. In other words, containers are protected from each other in a way that makes traffic snooping impossible;
- Firewalling may be used inside a CT: the user can create rules limiting access to some services using the canonical iptables tool from inside the CT;
- Routing table manipulations and advanced routing features are supported for individual containers. For example, setting different maximum transmission units (MTUs) for different destinations, specifying different source addresses for different destinations, and so on.
OpenVZ resource management controls the amount of resources available for containers. The controlled resources include such parameters as CPU power, disk space, a set of memory-related parameters, etc. Resource management allows OpenVZ to:
- Effectively share available host system resources among CTs
- Guarantee Quality-of-Service (QoS)
- Provide performance and resource isolation and protect from denial-of-service attacks
- Collect usage information for system health monitoring
Resource management is much more important for OpenVZ than for a standalone computer, since resource utilization in an OpenVZ-based system is considerably higher than in a typical system. Because all the CTs share the same kernel, resource management is of paramount importance: each CT must stay within its boundaries and not affect other CTs in any way, and this is exactly what resource management does.
OpenVZ resource management consists of four main components: two-level disk quota, fair CPU scheduler, disk I/O scheduler, and user beancounters. Note that all of these resources can be changed during CT runtime; there is no need to reboot. For example, if you want to give a CT less memory, you just change the appropriate parameters on the fly. This is either very hard to do or not possible at all with other virtualization approaches such as a VM run under a hypervisor.
Two-Level Disk Quota
The host system administrator (HW root) can set up per-container disk quotas, in terms of disk blocks and inodes (roughly, the number of files). This is the first level of disk quota. In addition, a container administrator (CT root) can use the usual quota tools inside the CT to set standard UNIX per-user and per-group disk quotas.
To give a CT more disk space, you just increase its disk quota; there is no need to resize disk partitions.
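The two quota levels can be pictured with a small sketch. This is illustrative Python, not OpenVZ code; the container IDs, quota values, and function names are invented. A write must pass both the container-level quota set by the host admin and the per-user quota set inside the CT:

```python
# Illustrative sketch of a two-level disk quota check (not OpenVZ code).
# Level 1: the per-container quota set by the host administrator.
# Level 2: a standard per-user quota configured inside the container.
from dataclasses import dataclass

@dataclass
class Quota:
    blocks: int   # allowed disk blocks
    inodes: int   # allowed inodes (roughly, number of files)

def within_quota(used_blocks, used_inodes, quota):
    return used_blocks <= quota.blocks and used_inodes <= quota.inodes

def can_allocate(ct_usage, ct_quota, user_usage, user_quota,
                 new_blocks=1, new_inodes=0):
    """A write succeeds only if it passes BOTH quota levels."""
    ct_ok = within_quota(ct_usage[0] + new_blocks,
                         ct_usage[1] + new_inodes, ct_quota)
    user_ok = within_quota(user_usage[0] + new_blocks,
                           user_usage[1] + new_inodes, user_quota)
    return ct_ok and user_ok

# A container with a 1000-block quota; one user inside it limited to 100 blocks.
ct_quota, user_quota = Quota(1000, 500), Quota(100, 50)
print(can_allocate((900, 10), ct_quota, (50, 5), user_quota, new_blocks=40))  # True
print(can_allocate((900, 10), ct_quota, (90, 5), user_quota, new_blocks=40))  # False: user over quota
print(can_allocate((990, 10), ct_quota, (50, 5), user_quota, new_blocks=40))  # False: CT over quota
```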
Fair CPU scheduler
The CPU scheduler in OpenVZ is a two-level implementation of a fair-share scheduling strategy.
On the first level, the scheduler decides which CT to give the CPU time slice to, based on per-CT cpuunits values. On the second level, the standard Linux scheduler decides which process to run in that container, using standard Linux process priorities and so on.
The OpenVZ administrator can assign different cpuunits values to different containers, and CPU time is distributed among them proportionally.
There is also a way to limit CPU time absolutely, e.g. to cap a given container at 10% of the available CPU time.
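The first scheduling level can be illustrated with a short sketch: CPU time is split in proportion to cpuunits, with an optional hard cap per container. This is hypothetical Python, not the in-kernel scheduler; the container IDs and numbers are made up:

```python
# Illustrative sketch (not the actual OpenVZ scheduler): how per-CT cpuunits
# values translate into proportional CPU shares, with an optional hard cap
# (cf. the absolute CPU-time limit mentioned above).

def cpu_shares(cpuunits, limits=None):
    """Distribute 100% of CPU time in proportion to each CT's cpuunits.
    `limits` optionally maps a CT id to a hard cap in percent."""
    total = sum(cpuunits.values())
    shares = {ct: 100.0 * units / total for ct, units in cpuunits.items()}
    if limits:
        for ct, cap in limits.items():
            shares[ct] = min(shares[ct], cap)
    return shares

# Three containers; CT 101 has twice the weight of the others.
print(cpu_shares({101: 2000, 102: 1000, 103: 1000}))
# {101: 50.0, 102: 25.0, 103: 25.0}

# CT 103 additionally capped at 10% of CPU time:
print(cpu_shares({101: 2000, 102: 1000, 103: 1000}, limits={103: 10}))
```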
I/O scheduler
Similar to the fair CPU scheduler described above, the I/O scheduler in OpenVZ is also two-level, utilizing Jens Axboe's CFQ I/O scheduler on its second level.
Each container is assigned an I/O priority, and the I/O scheduler distributes the available I/O bandwidth according to the priorities assigned. Thus no single container can saturate an I/O channel.
User beancounters
User beancounters are a set of per-CT counters, limits, and guarantees. There is a set of about 20 parameters, carefully chosen to cover all aspects of CT operation, so that no single container can abuse any resource that is limited for the whole node and thus harm other CTs.
The resources accounted and controlled are mainly memory and various in-kernel objects such as IPC shared memory segments, network buffers, etc. Each resource can be seen in /proc/user_beancounters and has five values associated with it: current usage, maximum usage (for the lifetime of the container), barrier, limit, and fail counter. The meaning of barrier and limit is parameter-dependent; in short, they can be thought of as a soft limit and a hard limit. If any resource hits the limit, its fail counter is increased, so the CT administrator can see if something bad is happening by analyzing the output of /proc/user_beancounters in her container.
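The /proc/user_beancounters layout described above can be read with a short parsing sketch. The sample text mimics the real format (a version line, a header line, then one row per resource, with the container ID prefixing the first row of its section), but the values themselves are invented:

```python
# Illustrative sketch: parsing the /proc/user_beancounters format.
# The SAMPLE text mimics the real layout; all numbers are made up.

SAMPLE = """Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize        1835514    1916862   11055923   11377049          0
            lockedpages           0          0        256        256          0
            numproc              22         29        240        240          3
"""

def parse_beancounters(text):
    counters, uid = {}, None
    for line in text.splitlines()[2:]:      # skip the version and header lines
        fields = line.split()
        if not fields:
            continue
        if fields[0].endswith(":"):         # a new container section starts here
            uid = int(fields[0].rstrip(":"))
            counters[uid] = {}
            fields = fields[1:]
        name = fields[0]
        held, maxheld, barrier, limit, failcnt = map(int, fields[1:])
        counters[uid][name] = dict(held=held, maxheld=maxheld,
                                   barrier=barrier, limit=limit, failcnt=failcnt)
    return counters

bc = parse_beancounters(SAMPLE)
# A non-zero fail counter means the CT hit the limit for that resource:
print({r: v["failcnt"] for r, v in bc[101].items() if v["failcnt"]})  # {'numproc': 3}
```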
Template
An OS template is a set of packages from some Linux distribution used to populate a container. With OpenVZ, different distributions can co-exist on the same hardware box, so multiple OS templates can be available.
An OS template consists of system programs, libraries, and scripts needed to boot up and run the system (container), as well as some very basic applications and utilities. Applications such as a compiler or an SQL server are usually not included in an OS template.
OS template metadata is a set of a few files containing the following information:
- List of packages that form this OS template
- Locations of package repositories
- Scripts needed to be executed on various stages of template installation
- Public GPG key(s) needed to check signatures of packages
- Additional OpenVZ-specific packages
Using OS template metadata and the vzpkg tools, an OS template cache can be created.
An OS template cache is an OS template installed into a container and then packed into a gzipped tarball. Using such a cache, a new container can be created in a matter of minutes, if not seconds.
An OS template cache can be created either from OS template metadata using the vzpkg tools, or by other means.
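To make the idea concrete, here is a hypothetical Python sketch of using such a cache: a new container's private area is populated simply by unpacking the gzipped tarball, which is why creation is so fast. The paths and file names are invented (the real tooling is vzctl/vzpkg):

```python
# Illustrative sketch (not OpenVZ tooling): creating a container's private
# area from an OS template cache, i.e. a gzipped tarball of an installed
# template. Paths below are hypothetical, not OpenVZ defaults.
import os
import tarfile
import tempfile

def create_container_from_cache(cache_path, private_dir):
    """Unpack the template cache tarball into the container's private area."""
    os.makedirs(private_dir, exist_ok=True)
    with tarfile.open(cache_path, "r:gz") as tar:
        tar.extractall(private_dir)

# Demonstrate with a tiny stand-in cache containing a single file.
with tempfile.TemporaryDirectory() as tmp:
    template_root = os.path.join(tmp, "template")
    os.makedirs(os.path.join(template_root, "etc"))
    with open(os.path.join(template_root, "etc", "issue"), "w") as f:
        f.write("Sample OS template\n")

    cache = os.path.join(tmp, "sample-template.tar.gz")
    with tarfile.open(cache, "w:gz") as tar:
        tar.add(template_root, arcname=".")

    private = os.path.join(tmp, "private", "101")
    create_container_from_cache(cache, private)
    print(os.path.exists(os.path.join(private, "etc", "issue")))  # True
```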
Authors
Sergey Perfilev. Bauman Moscow State Technical University. UI-8. E-mail: email@example.com
References
- The primary site for OpenVZ Linux Containers project
- Installing and using OpenVZ on CentOS 5