Xen Project


Overview

The Xen Project hypervisor is an open-source type-1 (bare-metal) hypervisor, which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (or host). The Xen Project hypervisor is the only type-1 hypervisor that is available as open source. It is used as the basis for a number of different commercial and open source applications, such as server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, and embedded and hardware appliances. The Xen Project hypervisor powers some of the largest clouds in production today.

Here are some of the Xen Project hypervisor's key features:

  • Small footprint and interface (around 1MB in size). Because it uses a microkernel design, with a small memory footprint and limited interface to the guest, it is more robust and secure than other hypervisors.
  • Operating system agnostic: Most installations run with Linux as the main control stack (aka "domain 0"). But a number of other operating systems can be used instead, including NetBSD and OpenSolaris.
  • Driver Isolation: The Xen Project hypervisor has the capability to allow the main device driver for a system to run inside of a virtual machine. If the driver crashes, or is compromised, the VM containing the driver can be rebooted and the driver restarted without affecting the rest of the system.
  • Paravirtualization: Fully paravirtualized guests have been optimized to run as a virtual machine. This allows the guests to run much faster than with hardware extensions (HVM). Additionally, the hypervisor can run on hardware that doesn't support virtualization extensions.

This page will explore the key aspects of the Xen Project architecture that a user needs to understand in order to make the best choices.

  • Guest types: The Xen Project hypervisor can run fully virtualized (HVM) guests, or paravirtualized (PV) guests.
  • Domain 0: The architecture employs a special domain called domain 0 which contains drivers for the hardware, as well as the toolstack to control VMs.
  • Toolstacks: This section covers various toolstack front-ends available as part of the Xen Project stack and the implications of using each.

History

Xen Project originated as a research project at the University of Cambridge, led by Ian Pratt, a senior lecturer at Cambridge who co-founded XenSource, Inc. with Simon Crosby, also of Cambridge University. The first public release of Xen was made in 2003.

Xen Project was originally supported by XenSource, Inc.; since Citrix's acquisition of XenSource in October 2007, it has been supported by Citrix. This organisation supports the development of the free software project and also sells enterprise versions of the software.

On 22 October 2007, Citrix Systems completed its acquisition of XenSource, and the Xen Project moved to the xen.org domain. This move had started some time previously, and made public the existence of the Xen Project Advisory Board (Xen AB), which had members from Citrix, IBM, Intel, Hewlett-Packard, Novell, Red Hat, Sun Microsystems and Oracle. The Xen Advisory Board advises the Xen Project leader and is responsible for the Xen trademark, which Citrix has freely licensed to all vendors and projects that implement the Xen hypervisor.

Citrix has also used the Xen brand itself for some proprietary products unrelated to Xen, including at least "XenApp" and "XenDesktop".

On 15 April 2013, it was announced that the Xen Project was moved under the auspices of the Linux Foundation as a Collaborative Project. The Linux Foundation launched a new trademark for "Xen Project" to differentiate the project from any commercial use of the older "Xen" trademark. A new community website was launched at xenproject.org as part of the transfer. Project members at the time of the announcement included: Amazon, AMD, Bromium, CA Technologies, Calxeda, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon. The Xen project itself is self-governing.

Xen Project Architecture

Below is a diagram of the Xen Project architecture. The Xen Project hypervisor runs directly on the hardware and is responsible for handling CPU, memory, and interrupts. It is the first program running after exiting the bootloader. On top of the hypervisor run a number of virtual machines. A running instance of a virtual machine is called a domain or guest. A special domain, called domain 0, contains the drivers for all the devices in the system. Domain 0 also contains a control stack to manage virtual machine creation, destruction, and configuration.

The Xen Project architecture

Components in detail:

  • The Xen Project Hypervisor is an exceptionally lean (<150,000 lines of code) software layer that runs directly on the hardware and is responsible for managing CPU, memory, and interrupts. It is the first program running after the bootloader exits. The hypervisor itself has no knowledge of I/O functions such as networking and storage.
  • Guest Domains/Virtual Machines are virtualized environments, each running their own operating system and applications. The hypervisor supports two different virtualization modes: Paravirtualization (PV) and Hardware-assisted or Full Virtualization (HVM). Both guest types can be used at the same time on a single hypervisor. It is also possible to use techniques developed for Paravirtualization in an HVM guest, essentially creating a continuum between PV and HVM. This approach is called PV on HVM. Guest VMs are totally isolated from the hardware: in other words, they have no privilege to access hardware or I/O functionality. Thus, they are also called unprivileged domains (or DomU).
  • The Control Domain (or Domain 0) is a specialized Virtual Machine that has special privileges, such as the capability to access the hardware directly; it handles all access to the system's I/O functions and interacts with the other Virtual Machines. It also exposes a control interface to the outside world, through which the system is controlled. The Xen Project hypervisor is not usable without Domain 0, which is the first VM started by the system.
  • Toolstack and Console: Domain 0 contains a control stack (also called Toolstack) that allows a user to manage virtual machine creation, destruction, and configuration. The toolstack exposes an interface that is driven either by a command-line console, by a graphical interface or by a cloud orchestration stack such as OpenStack or CloudStack (a brief command-line example follows this list).
  • Xen Project-enabled operating systems: Domain 0 requires a Xen Project-enabled kernel. Paravirtualized guests require a PV-enabled kernel. Linux distributions based on a recent Linux kernel are Xen Project-enabled and usually include packages that contain the hypervisor and Tools (the default Toolstack and Console). All but legacy Linux kernels are PV-enabled and capable of running PV guests.
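
As a minimal illustration of the command-line console offered by the default toolstack (xl), the commands below list running domains, start a guest from a configuration file and attach to its console; the configuration file path is only an example:

$ sudo xl list                        # show running domains, including Domain-0
$ sudo xl create /etc/xen/guest.cfg   # start a guest from a config file (example path)
$ sudo xl console guest               # attach to the guest console (Ctrl-] to detach)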

Guest Types

The hypervisor supports running two different types of guests: Paravirtualization (PV) and Full or Hardware-assisted Virtualization (HVM). Both guest types can be used at the same time on a single hypervisor. It is also possible to use techniques developed for Paravirtualization in an HVM guest and vice versa, essentially creating a continuum between the capabilities of pure PV and HVM. Different abbreviations are used to refer to these configurations: HVM with PV drivers, PVHVM and PVH.

The evolution of the different virtualization modes in the Xen Project Hypervisor
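
As a rough sketch of how these modes appear in practice, the guest type is selected in the xl guest configuration file: HVM guests set builder = "hvm", while PV guests omit it and instead supply a PV kernel or a bootloader such as pygrub (both variants appear in the installation examples later on this page):

# HVM guest: full virtualization with a QEMU-emulated platform
builder = "hvm"

# PV guest (the default when builder is not set): booted via a PV kernel or pygrub
#kernel = "/path/to/vmlinuz"
#bootloader = "pygrub"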

PV

Paravirtualization (PV) is an efficient and lightweight virtualization technique originally introduced by Xen Project, later adopted by other virtualization platforms. PV does not require virtualization extensions from the host CPU. However, paravirtualized guests require a PV-enabled kernel and PV drivers, so the guests are aware of the hypervisor and can run efficiently without emulation or virtual emulated hardware. PV-enabled kernels exist for Linux, NetBSD, FreeBSD and OpenSolaris. Linux kernels have been PV-enabled from 2.6.24 using the Linux pvops framework. In practice this means that PV will work with most Linux distributions (with the exception of very old versions of distros).

This diagram gives an overview of how Paravirtualization is implemented in the Xen Project Hypervisor
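
As a quick, indicative check of whether a kernel is Xen/PV-enabled, you can inspect its build configuration (option names vary somewhat between kernel versions):

$ grep -E 'CONFIG_XEN=|CONFIG_PARAVIRT=' /boot/config-$(uname -r)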

HVM

Full Virtualization or Hardware-assisted Virtualization (HVM) uses virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. The Xen Project software uses QEMU to emulate PC hardware, including BIOS, IDE disk controller, VGA graphics adapter, USB controller, network adapter, etc. Virtualization hardware extensions are used to boost performance of the emulation. Fully virtualized guests do not require any kernel support. This means that Windows operating systems can be used as Xen Project HVM guests. Fully virtualized guests are usually slower than paravirtualized guests, because of the required emulation.

This figure shows the difference between HVM with and without PV drivers.
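
To check whether a host can run HVM guests, look for the CPU virtualization flags and, once the hypervisor is running, at the capabilities it reports (exact output varies by hardware):

$ egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means Intel VT-x or AMD-V is present
$ sudo xl info | grep virt_caps        # lists hvm when HVM guests are supported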

PVHVM

To boost performance, fully virtualized HVM guests can use special paravirtual device drivers (PVHVM or PV-on-HVM drivers). These are optimized PV drivers for HVM environments that bypass the emulation for disk and network I/O, giving PV-like (or better) performance on HVM systems. This means that you can get optimal performance on guest operating systems such as Windows. Note that Xen Project PV (paravirtual) guests automatically use PV drivers: there is no need for these drivers in that case, as the optimized drivers are already in use. PVHVM drivers are only required for HVM (fully virtualized) guest VMs.

This figure shows the difference between HVM with and without PV and PVHVM drivers.
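
Inside a Linux HVM guest you can check whether the PV-on-HVM drivers are actually in use by looking for the Xen frontend drivers; depending on the kernel they may be modules or built in, so this is only an indicative check:

$ lsmod | grep -E 'xen_blkfront|xen_netfront'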

PVH

Xen Project 4.4 introduced a virtualization mode called PVH for DomUs. Xen Project 4.5 introduced PVH for Dom0 (both Linux and some BSDs). This is essentially a PV guest using PV drivers for boot and I/O; otherwise it uses hardware virtualization extensions, without the need for emulation. PVH is considered experimental in 4.4 and 4.5. It works reasonably well, but additional tuning is needed (probably in the 4.6 release) before it should be used in production. PVH has the potential to combine the best trade-offs of all virtualization modes, while simplifying the Xen architecture. In a nutshell, PVH means less code and fewer interfaces in Linux/FreeBSD: consequently it has a smaller TCB and attack surface, and thus fewer possible exploits. Once hardened and optimised, it should also have better performance and lower latency, in particular on 64-bit hosts.

This figure shows the difference between HVM (and its variants), PV and PVH
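
In the Xen 4.4/4.5 releases discussed here, experimental PVH mode is requested in the guest configuration file (and, for Dom0, via a hypervisor boot parameter); the option names below are taken from the documentation of those releases and should be checked against the xl.cfg man page for your version:

# In a PV guest config file, request PVH mode (experimental in Xen 4.4/4.5)
pvh = 1

# For a PVH Dom0 (Xen 4.5), the hypervisor is booted with dom0pvh=1 on its command line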

Toolstacks, Management APIs and Consoles

Xen Project software employs a number of different toolstacks. Each toolstack exposes an API against which different tools can be run. The figure below gives a very brief overview of the choices you have, which commercial products use which stack, and examples of hosting vendors using specific APIs.

Boxes marked in blue are developed by the Xen Project

The Xen Project software can be run with the default toolstack, with Libvirt and with XAPI. The pairing of the Xen Project hypervisor and XAPI became known as XCP, which has been superseded by open source XenServer. The diagram above shows the various options: all of them have different trade-offs and are optimized for different use cases. However, in general, the further to the right of the picture you are, the more functionality will be on offer.
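
For example, when the Libvirt toolstack is used, the standard virsh client manages guests through libvirt's Xen (libxl) driver; package names vary by distribution, so the following is only a sketch for a Debian/Ubuntu-style system:

$ sudo apt-get install libvirt-bin   # libvirt daemon and client tools
$ sudo virsh -c xen:/// list --all   # list Xen domains via the libxl driver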

Hosts

Xen can be shipped in a dedicated virtualization platform, such as Citrix XenServer Enterprise Edition (formerly XenSource's XenEnterprise).

Alternatively, Xen is distributed as an optional configuration of many standard operating systems. Xen is available for and distributed with:

  • Alpine Linux offers a minimal dom0 system (Busybox, UClibc) that can be run from removable media, like USB sticks.
  • Debian GNU/Linux (since version 4.0 "etch") and many of its derivatives
  • FreeBSD 11 includes experimental host support.
  • Gentoo and Arch Linux both have packages available to support Xen.
  • Mageia (since version 4)
  • NetBSD 3.x includes host support for Xen 2, with host support for Xen 3.0 available from NetBSD 4.0.
  • OpenSolaris-based distributions can function as dom0 and domU from Nevada build 75 onwards.
  • openSUSE 10.x to 12.x; only 64-bit hosts are supported since 12.1
  • Qubes OS for desktop usage
  • SUSE Linux Enterprise Server (since version 10)
  • Solaris
  • Ubuntu 12.04 "Precise Pangolin" and later releases; also 8.04 "Hardy Heron", although no dom0-capable kernel was shipped from 8.10 "Intrepid Ibex" until 12.04

Guests

Unix-like systems as guests

Guest systems can run fully virtualized (which requires hardware support) or paravirtualized (which requires a modified guest operating system).

Most operating systems that can run on a PC can run as a Xen HVM guest.

Additionally the following systems have patches allowing them to operate as paravirtualized Xen guests:

  • FreeBSD
  • GNU/Hurd/Mach (gnumach-1-branch-Xen-branch)
  • Linux, paravirtualization integrated in 2.6.23, patches for other versions exist
  • MINIX
  • NetBSD (NetBSD 2.0 has support for Xen 1.2, NetBSD 3.0 has support for Xen 2.0, NetBSD 3.1 supports Xen 3.0, NetBSD 5.0 features Xen 3.3)
  • NetWare (at Brainshare 2005, Novell showed a port that can run as a Xen guest)
  • OpenSolaris (See The Xen Community On OpenSolaris)
  • OZONE (has support for Xen v1.2)
  • Plan 9 from Bell Labs

Microsoft Windows systems as guests

Xen version 3.0 introduced the capability to run Microsoft Windows as a guest operating system unmodified if the host machine's processor supports hardware virtualization provided by Intel VT-x (formerly codenamed Vanderpool) or AMD-V (formerly codenamed Pacifica).

During the development of Xen 1.x, Microsoft Research, along with the University of Cambridge Operating System group, developed a port of Windows XP to Xen — made possible by Microsoft's Academic Licensing Program. The terms of this license do not allow the publication of this port, although documentation of the experience appears in the original Xen SOSP paper.

James Harper and the Xen open-source community have started developing GPL'd Paravirtualisation drivers for Windows. These provide front-end drivers for the Xen block and network devices, and allow much higher disk and network performance for Windows systems running in HVM mode. Without these drivers all disk and network traffic has to be processed through QEMU-DM.

Install on Ubuntu Server 16.04 LTS

The following describes installing the Xen hypervisor on Ubuntu Server 16.04 LTS and then creating Ubuntu guest virtual machines on top of it.

During installation of Ubuntu

During the install of Ubuntu, for the partitioning method choose "Guided - use the entire disk and setup LVM". Then, when prompted for "Amount of volume group to use for guided partitioning:", enter a value just large enough for the Xen Dom0 system, leaving the rest for virtual disks. Enter a value smaller than the size of your installation drive; for example, 10 GB or even 5 GB should be large enough for a minimal Xen Dom0 system. Entering a percentage of the maximum size (e.g. 25%) is also a reasonable choice.

Installing Xen

Install a 64-bit hypervisor (a 64-bit hypervisor works with a 32-bit dom0 kernel, but also allows you to run 64-bit guests):

$ sudo apt-get install xen-hypervisor-amd64

Modify GRUB so that the Xen hypervisor entry becomes the default boot option, then regenerate the GRUB configuration:

$ sudo dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
$ sudo update-grub
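
You can verify that GRUB now contains Xen boot entries before rebooting (entry titles vary slightly between releases):

$ grep -i menuentry /boot/grub/grub.cfg | grep -i xen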

Reboot into the Xen hypervisor:

$ sudo reboot

Check success:

$ sudo xl list
(use xm list instead of xl on older Xen Project releases)
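
Illustrative output on a freshly booted host (names and numbers will differ):

Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2048     4     r-----      30.5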

Network Configuration

This section describes how to set up linux bridging in Xen. It assumes eth0 is both your primary interface to dom0 and the interface you want your VMs to use. It also assumes you're using DHCP.

$ sudo apt-get install bridge-utils

Change /etc/network/interfaces:

auto lo eth0 xenbr0
iface lo inet loopback

iface xenbr0 inet dhcp
  bridge_ports eth0

iface eth0 inet manual
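
If a static address is used instead of DHCP, the address moves to the bridge stanza rather than eth0; the values below are placeholders for your own network:

auto lo eth0 xenbr0
iface lo inet loopback

iface xenbr0 inet static
  address 192.168.1.10
  netmask 255.255.255.0
  gateway 192.168.1.1
  bridge_ports eth0

iface eth0 inet manual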

Restart networking to bring up the xenbr0 bridge:

$ sudo ifdown eth0 && sudo ifup xenbr0 && sudo ifup eth0

Manually Create a PV Guest VM

List your existing volume groups (VGs) and choose where you'd like to create the new logical volume:

$ sudo vgs

Create the logical volume (LV):

$ sudo lvcreate -L 10G -n lv_vm_ubuntu /dev/<VGNAME>

Confirm that the new LV was successfully created:

$ sudo lvs

Get Netboot Images

Choose an archive mirror from https://launchpad.net/ubuntu/+archivemirrors, then download the Xen netboot kernel and initrd:

$ sudo mkdir -p /var/lib/xen/images/ubuntu-netboot/trusty14LTS
$ cd /var/lib/xen/images/ubuntu-netboot/trusty14LTS
$ wget http://<mirror>/ubuntu/dists/trusty/main/installer-amd64/current/images/netboot/xen/vmlinuz
$ wget http://<mirror>/ubuntu/dists/trusty/main/installer-amd64/current/images/netboot/xen/initrd.gz

Set Up Initial Guest Configuration

$ cd /etc/xen
$ cp xlexample.pvlinux ubud1.cfg
$ vi ubud1.cfg
name = "ubud1"

kernel = "/var/lib/xen/images/ubuntu-netboot/trusty14LTS/vmlinuz"
ramdisk = "/var/lib/xen/images/ubuntu-netboot/trusty14LTS/initrd.gz"
#bootloader = "/usr/lib/xen-4.4/bin/pygrub"

memory = 1024
vcpus = 1

# Attach the guest to the xenbr0 Linux bridge configured earlier
vif = [ 'bridge=xenbr0' ]
# Alternative for an Open vSwitch bridge:
#vif = [ 'script=vif-openvswitch,bridge=ovsbr0' ]

disk = [ '/dev/<VGNAME>/lv_vm_ubuntu,raw,xvda,rw' ]

# You may also consider some other options
# http://xenbits.xen.org/docs/4.4-testing/man/xl.cfg.5.html

Start the VM and connect to the console to perform the install:

$ sudo xl create -c /etc/xen/ubud1.cfg

Once the install has finished and you are back at the command line, modify the guest configuration to use the pygrub bootloader. The following lines change:

$ vi /etc/xen/ubud1.cfg
#kernel = "/var/lib/xen/images/ubuntu-netboot/trusty14LTS/vmlinuz"
#ramdisk = "/var/lib/xen/images/ubuntu-netboot/trusty14LTS/initrd.gz"
bootloader = "/usr/lib/xen-4.4/bin/pygrub"

Now restart the VM with the new bootloader (if the VM did not shut down after the install above, shut it down manually):

$ sudo xl shutdown ubud1
$ sudo xl create -c /etc/xen/ubud1.cfg
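
Once the guest is running, you can detach from its console with Ctrl-] and reattach later:

$ sudo xl list            # the new guest should appear in the domain list
$ sudo xl console ubud1   # reattach to its console (Ctrl-] to detach again)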

Manually installing an HVM Guest VM

Download an install ISO from http://www.ubuntu.com/download/desktop, then list the physical volumes and volume groups:

$ sudo pvs

Choose your VG and create an LV:

$ sudo lvcreate -L 4G -n ubuntu-hvm /dev/<VG>

Create a guest config file /etc/xen/ubuntu-hvm.cfg:

builder = "hvm"
name = "ubuntu-hvm"
memory = "512"
vcpus = 1
vif = ['']
disk = ['phy:/dev/<VG>/ubuntu-hvm,hda,w','file:/root/ubuntu-12.04-desktop-amd64.iso,hdc:cdrom,r']
vnc = 1
boot="dc"

Create the guest and connect to its graphical console over VNC:

$ sudo xl create /etc/xen/ubuntu-hvm.cfg
$ vncviewer localhost:0

After the install you can optionally remove the CDROM from the config and/or change the boot order. For example /etc/xen/ubuntu-hvm.cfg:

builder = "hvm"
name = "ubuntu-hvm"
memory = "512"
vcpus = 1
vif = ['']
#disk = ['phy:/dev/<VG>/ubuntu-hvm,hda,w','file:/root/ubuntu-12.04-server-amd64.iso,hdc:cdrom,r']
disk = ['phy:/dev/<VG>/ubuntu-hvm,hda,w']
vnc = 1
boot="c"
#boot="dc"
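
After editing the configuration, restart the guest so that it boots from its disk (the commands follow the same pattern as for the PV guest above):

$ sudo xl shutdown ubuntu-hvm
$ sudo xl create /etc/xen/ubuntu-hvm.cfg
$ vncviewer localhost:0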