XEN

 General description

Xen is a type-1 (bare-metal) hypervisor that creates logical pools of system resources so that many virtual machines can share the same physical resources. This hypervisor runs directly on the system hardware. Xen inserts a virtualization layer between the system hardware and the virtual machines, turning the system hardware into a pool of logical computing resources that Xen can dynamically allocate to any guest operating system. Xen supports the IA-32, x86-64, Itanium, and ARM architectures. It is an open-source hypervisor available for Linux and Solaris; Citrix XenServer is a commercial, fully supported Xen hypervisor. Xen allows several guest operating systems to execute concurrently on the same computer hardware. Xen systems have a layered structure with the Xen hypervisor as the lowest and most privileged layer.
Xen supports running two different types of guests: Paravirtualization (PV) and Full or Hardware assisted Virtualization (HVM). Both guest types can be used at the same time on a single Xen system.
Xen Paravirtualization (PV)
Paravirtualization is an efficient and lightweight virtualization technique introduced by Xen and later adopted by other virtualization platforms. PV does not require virtualization extensions from the host CPU. However, paravirtualized guests require a Xen-PV-enabled kernel and PV drivers, so the guests are aware of the hypervisor and can run efficiently without emulation or virtual emulated hardware. Xen-PV-enabled kernels exist for Linux, NetBSD, FreeBSD and OpenSolaris. Linux kernels have been Xen-PV enabled since 2.6.24, using the Linux pvops framework. In practice this means that PV works with most Linux distributions (with the exception of very old releases).
Xen Full Virtualization (HVM)
Full Virtualization or Hardware-assisted virtualization uses virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. Xen uses QEMU to emulate PC hardware, including the BIOS, IDE disk controller, VGA graphics adapter, USB controller, network adapter, etc. Virtualization hardware extensions are used to boost the performance of the emulation. Fully virtualized guests do not require any kernel support. This means that Windows operating systems can be used as Xen HVM guests. Fully virtualized guests are usually slower than paravirtualized guests because of the required emulation.
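Whether a host can run HVM guests can be checked from Linux before installing Xen: Intel VT appears as the `vmx` flag and AMD-V as the `svm` flag in /proc/cpuinfo. A minimal sketch (the wording of the messages is ours):

```shell
#!/bin/sh
# Check whether the host CPU advertises the hardware virtualization
# extensions HVM needs: "vmx" = Intel VT, "svm" = AMD-V.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU supports hardware virtualization: HVM guests are possible"
else
    echo "No VT-x/AMD-V flags found: only PV guests can run on this host"
fi
```

Note that the flags only show what the CPU supports; the extensions must also be enabled in the firmware/BIOS for HVM guests to actually start.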
The figure below shows a system with Xen running virtual machines:
Fig. 4.6.6/1: The Xen architecture.
Xen is running three virtual machines. Each virtual machine runs a guest operating system and applications independently of the other virtual machines while sharing the same physical resources. A running instance of a virtual machine in Xen is called a domain or guest. A special domain, called Domain 0, contains the drivers for all the devices in the system. Domain 0 also contains a control stack to manage virtual machine creation, destruction, and configuration.
Fig. 4.6.6/2: Xen Architecture.

 Xen components

Components in detail:
  • The Xen Hypervisor is an exceptionally lean (<150,000 lines of code) software layer that runs directly on the hardware and is responsible for managing CPU, memory, and interrupts. It is the first program running after the bootloader exits. The hypervisor itself has no knowledge of I/O functions such as networking and storage.
  • Guest Domains/Virtual Machines are virtualized environments, each running their own operating system and applications. Xen supports two different virtualization modes: Paravirtualization (PV) and Hardware-assisted or Full Virtualization (HVM). Both guest types can be used at the same time on a single Xen system. It is also possible to use techniques used for Paravirtualization in an HVM guest: essentially creating a continuum between PV and HVM. This approach is called PV on HVM. Xen guests are totally isolated from the hardware: in other words, they have no privilege to access hardware or I/O functionality. Thus, they are also called unprivileged domain (or DomU).
  • The Control Domain (or Domain 0) is a specialized virtual machine with special privileges: it can access the hardware directly, handles all access to the system’s I/O functions, and interacts with the other virtual machines. It also exposes a control interface to the outside world, through which the system is controlled. The Xen hypervisor is not usable without Domain 0, which is the first VM started by the system.
  • Toolstack and Console: Domain 0 contains a control stack (also called Toolstack) that allows a user to manage virtual machine creation, destruction, and configuration. The toolstack exposes an interface that is either driven by a command line console, by a graphical interface or by a cloud orchestration stack such as OpenStack or CloudStack.
  • Xen-enabled operating systems: A Xen Domain 0 requires a Xen-enabled kernel, and paravirtualized guests require a PV-enabled kernel. Linux distributions based on a recent Linux kernel are Xen-enabled and usually provide packages containing the Xen hypervisor and the Xen tools (the default toolstack and console). All but legacy Linux kernels are PV-enabled; in other words, they will run as Xen PV guests.
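As an illustration of the PV/HVM split described above, a guest is defined by a small configuration file consumed by the toolstack. The following is a hypothetical sketch (names, paths, and sizes are placeholders; in recent Xen releases the `type` key selects the virtualization mode):

```
# /etc/xen/example-guest.cfg -- hypothetical xl guest configuration
name   = "example-guest"   # domain name shown by the toolstack
type   = "hvm"             # "hvm" for full virtualization, "pv" for paravirtualization
memory = 1024              # RAM in MiB
vcpus  = 2                 # number of virtual CPUs
disk   = ['phy:/dev/vg0/example,xvda,w']  # block device exported to the guest
vif    = ['bridge=xenbr0']                # virtual NIC attached to bridge xenbr0
```

For a PV guest, the same file would additionally point at a PV-enabled kernel for the guest to boot, since no BIOS or emulated hardware is provided.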

 Key concepts of the Xen architecture

The following are key concepts of the Xen architecture:
  • Full virtualization: most hypervisors are based on full virtualization, which means that they completely emulate all hardware devices to the virtual machines. Guest operating systems do not require any modification and behave as if they each have exclusive access to the entire system. Full virtualization often carries performance drawbacks, because complete emulation usually demands more processing resources (and more overhead) from the hypervisor. Xen is based on paravirtualization; it requires the guest operating systems to be modified to support the Xen operating environment. However, the user-space applications and libraries do not require modification. Operating system modifications are necessary so that Xen can replace the operating system as the most privileged software, and so that Xen can use more efficient interfaces (such as virtual block devices and virtual network interfaces) instead of emulated devices, which increases performance.
  • Xen can run multiple guest OSes, each in its own VM: Xen can run several guest operating systems, each running in its own virtual machine or domain. When Xen is first installed, it automatically creates the first domain, Domain 0 (or dom0). Domain 0 is the management domain and is responsible for managing the system. It performs tasks such as building additional domains (or virtual machines), managing the virtual devices for each virtual machine, suspending virtual machines, resuming virtual machines, and migrating virtual machines. Domain 0 runs a guest operating system and is responsible for the hardware devices.
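The Domain 0 management tasks listed above map onto toolstack commands. With the xl toolstack shipped in current Xen releases, a typical domain lifecycle might look like the following sketch (domain names and paths are placeholders):

```
# Hypothetical domain lifecycle driven from Domain 0 with the xl toolstack
xl list                               # show running domains, starting with Domain-0
xl create /etc/xen/example-guest.cfg  # build a new domain from its config file
xl pause example-guest                # suspend the domain's virtual CPUs
xl unpause example-guest              # resume it
xl save example-guest /var/lib/xen/example.save  # suspend the domain to disk
xl restore /var/lib/xen/example.save             # resume it from disk
xl migrate example-guest otherhost    # migrate the domain to another Xen host
xl destroy example-guest              # tear the domain down
```

These commands must be run in Domain 0; unprivileged guests have no access to the toolstack.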
  • Most management work happens in the Xen daemon, xend, rather than in a driver: the Xen daemon, xend, is a Python program that runs in dom0. It is the central point of control for managing virtual resources across all the virtual machines running on the Xen hypervisor. Most of the command parsing, validation, and sequencing happens in user space in xend and not in a driver. IBM supports the SUSE Linux Enterprise Server (SLES) 10 version of Xen, which supports the following configuration: four virtual machines per processor and up to 64 virtual machines per physical system; SLES 10 guest operating systems (paravirtualized only).

 Deployment and installation

To deploy virtualization for Xen:
  • Install Xen on the system.
  • Create and configure virtual machines (this includes the guest operating system).
Install the Xen software using one of the following methods:
  • Interactive install: Use this procedure to install directly on a dedicated virtual machine on the Xen server. This dedicated virtual machine is referred to as the client computer in the install procedure.
  • Install from CommCell console: Use this procedure to install remotely on a dedicated virtual machine on the Xen server.