Xen Checkpointing Installation

How is the installation structured?

The Space-Efficient Checkpointing Xen installation is a four-step procedure, corresponding to the following chapters.

  I.  After step 1, the user has patched and installed Xen 4.5.
 II.  After step 2, the user has built a Linux kernel with Xen support for domain 0.
III.  After step 3, the user has created a domain U on Xen and installed the patched Linux on it.
 IV.  After step 4, the user knows how to use Space-Efficient Checkpointing Xen.
 

I.  Patch and install Xen 4.5

First, download the Xen 4.5 archive and extract it.
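
For example, with wget (this mirror URL is an assumption; check the Xen Project download page for the current location):

$ wget https://downloads.xenproject.org/release/xen/4.5.0/xen-4.5.0.tar.gz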

$ tar xvf xen-4.5.0.tar.gz
$ cd xen-4.5.0

Xen Project uses several external libraries and tools. Install the build-essential package.
You also need to install additional dependencies.

# apt-get install build-essential
# apt-get build-dep xen 
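
Note that 'apt-get build-dep' only works when deb-src entries are enabled in your APT sources. A minimal sketch, assuming a stock Ubuntu sources.list where the deb-src lines are commented out:

# sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list
# apt-get update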

The software uses the Autoconf tool to provide compile-time configurability of the toolstack. To configure Xen Project, run the provided configure script.

$ ./configure
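
To list the available configuration options (a standard Autoconf feature):

$ ./configure --help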

Before building Xen 4.5, you should apply our optimized Xen patch to enable Space-Efficient Checkpointing Xen. Download our Xen patch file (xen.pagecache.patch), move it to the Xen source directory, and apply the patch.

$ mv xen.pagecache.patch /path/to/xen-4.5.0
$ cd /path/to/xen-4.5.0
$ patch -p1 < xen.pagecache.patch

Your Xen 4.5 source tree is now updated to the Space-Efficient Checkpointing Xen source code.
Next, you can build and install the software.

To build all components (hypervisor, tools, docs, stubdomain, etc.), you can use the dist target.
If you want to rebuild the tree as if from a fresh checkout, use the world target instead; this is effectively the clean target followed by dist.

$ make dist
$ make world   # clean + dist

(It is recommended to pass the '-j[NUMBER OF CORES]' option so that make compiles with multiple parallel jobs.)
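
For example, on a 4-core machine:

$ make -j4 dist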

All of the above targets will build the appropriate components and stage them in the dist subdirectory, but will not actually install them onto the system.
To install onto the local machine, simply call the install target (as root):

# make install

If you want to install onto a remote machine, you can simply copy the dist directory over.
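
For example (the host name and path are placeholders; the dist directory should also include an install.sh script you can run on the target machine):

$ scp -r dist/ root@remote-host:/tmp/xen-dist
# cd /tmp/xen-dist && ./install.sh
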
After installation, rebuild your dynamic linker cache by running:

# /sbin/ldconfig

* For more information about installing Xen, see the Xen Project wiki page
  "Compiling Xen From Source".

This completes the installation of the optimized Xen.

 

II.  Build the Linux kernel with Xen support for domain 0

 Next we'll build the Linux kernel with Xen support. This kernel will be our main running kernel (domain 0).
For the domain 0 kernel you need to select the backend drivers: these are used by the other domains (which use the frontend drivers) to communicate with the hardware.
In practice, you can configure the kernel to provide support for both frontend (guest) and backend (host) drivers.

 We tested with Linux 3.13 and describe that version here.
For other distributions, the package management commands must be adjusted accordingly.

First, download the Linux 3.13 source code and extract it.

$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.13.tar.gz
$ tar xvf linux-3.13.tar.gz

Second, enter the directory and run menuconfig (as root).

# cd linux-3.13
# make menuconfig

Enable general Xen support:

[KERNEL] Xen Base

Processor type and features  --->
     [*] Linux guest support  --->
          [*]   Enable paravirtualization code
          [*]     Xen guest support
          [*]       Support for running as a PVH guest
          [*]     Paravirtualization layer for spinlocks

Add support for paravirtualized console connections:

[KERNEL] PV Console

Device Drivers  --->
     Character devices  --->
          [*] Xen Hypervisor Console support
          [*]   Xen Hypervisor Multiple Consoles support

Facilitates guest access to block and network devices via dom0:

[KERNEL] Disk and Network

Device Drivers  --->
     [*] Block devices  --->
          <*>   Xen virtual block device support
     [*] Network device support  --->
          <*>   Xen network device frontend driver

In some configurations, it can be desirable to provide a guest with direct access to a PCI device. This is known as Xen PCI Passthrough:

[KERNEL] Guest PCI Passthrough

Bus options (PCI etc.)  --->
     [*] Xen PCI Frontend

Keyboard, mouse, and display support via dom0 backend:

[KERNEL] Guest Human Interface

Device Drivers  --->
     Input device support  --->
          [*]   Miscellaneous devices  --->
               <*>   Xen virtual keyboard and mouse support
     Graphics support  --->
          Frame buffer Devices  --->
               <*> Xen virtual frame buffer support

Xen dom0 support depends on ACPI; without it, dom0-related options will be hidden:

[KERNEL] ACPI Support

Power management and ACPI options  --->
     [*] ACPI (Advanced Configuration and Power Interface) Support  --->

Typical network configuration depends on Linux bridge functionality:

[KERNEL] Linux Bridge

[*] Networking support --->
     Networking options  --->
          <*> 802.1d Ethernet Bridging
          [*] Network packet filtering framework (Netfilter)  --->
               [*] Advanced netfilter configuration
               [*]   Bridged IP/ARP packets filtering

The ability to run HVM guests depends on Universal TUN/TAP device driver support:

[KERNEL] TUN and TAP virtual network kernel devices

Device Drivers  --->
     [*] Network device support --->
          <M>   Universal TUN/TAP device driver support

This option is required if you plan to create fully emulated network devices in a dom0/domU configuration.
The remaining drivers flesh out memory management, domain-to-domain communication, and communication with Xen via sysfs interfaces:

[KERNEL] Xen Drivers

Device Drivers  --->
     [*] Block devices  --->
          <*>   Xen block-device backend driver
     [*] Network device support --->
          <*>   Xen backend network device
     Xen driver support  --->
          [*] Xen memory balloon driver
          [*]   Scrub pages before returning them to system
          <*> Xen /dev/xen/evtchn device
          [*] Backend driver support
          <*> Xen filesystem
          [*]   Create compatibility mount point /proc/xen
          [*] Create xen entries under /sys/hypervisor
          <*> userspace grant access device driver
          <*> User-space grant reference allocator driver
          <M> Xen PCI-device backend driver
          <*> Xen ACPI processor
          [*] Xen platform mcelog

With all of the above options enabled, build and install the kernel.
Run this command as root:

# make && make modules_install && make install && reboot

(It is recommended to pass the '-j[NUMBER OF CORES]' option so that make compiles with multiple parallel jobs.)

Your machine should be able to boot as the domain 0 host.
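
If the machine does not boot into Xen automatically, regenerate the GRUB menu so the Xen entry picks up the newly installed hypervisor and kernel, then select the Xen entry at boot (on Ubuntu this entry is generated by /etc/grub.d/20_linux_xen; menu handling varies by distribution):

# update-grub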
 

III.   Launching Xen domain U and installing Linux on the domain U

Launching Xen domain U
 To apply our patch and complete the Space-Efficient Checkpointing Xen installation, you must create and launch a Xen domain U.
First of all, you need a Linux installation image.
(Since we tested on Ubuntu Server, this example uses Ubuntu Server.)
The Ubuntu Server ISO can be downloaded from the Ubuntu download web site.

 After downloading the Ubuntu Server ISO, create a disk image file to install it onto. You can create the image file with QEMU.

$ qemu-img-xen create -f raw [IMAGE FILE NAME].img [IMAGE FILE SIZE]
  # a size larger than 16G is recommended
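
For example, to create a 20 GB raw image (the file name is a placeholder; on systems without the qemu-img-xen wrapper, plain qemu-img accepts the same arguments):

$ qemu-img-xen create -f raw VM01.img 20G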

  Our patch removes the file system dependency inside the domU, so you do not have to consider the file system there. However, since the image file exists outside the domain U, you need to format the image file with the file system you want to install.
`mkfs` is simply a front-end for the various file system builders available under Linux; we use mkfs.

$ mkfs.[FILE SYSTEM. i.e. ext4, ext3, ext2, xfs .. ] [IMAGE FILE NAME].img
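
For example, to format the image with ext4 (the -F flag lets mkfs operate on a regular file rather than a block device):

$ mkfs.ext4 -F VM01.img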

Now write the Xen configuration file, which describes the domain U that will be launched.
Here is a sample configuration with commonly used options.

Example 1 - [CONFIG FILE NAME].conf

builder = "hvm"
name = "VM01"  # it must be unique
# Memory size in MB
memory = 4096
maxmem = 4096
# Number of virtual CPUs
vcpus = 4
# Network information
# Use this MAC address to reach your virtual machine.
vif = [ 'mac=00:16:3e:0e:c5:b5' ]
# Disk information:
# first the image file created with QEMU (path and format),
# then the OS installer image (path and format)
disk = [ '/home/VMs/VM01.img,raw,xvda,rw',
         '/mnt/ISO_Images/ubuntu-14.04.1-server-amd64.iso,raw,xvdd,cdrom' ]

# To connect with a graphical client, you can use SPICE.
# The following options enable SPICE:
spice = 1
spicehost = "0.0.0.0"
spiceport = 5902
spicedisable_ticketing = 1

If you want to use more options, you can refer to this document: XenConfigurationDetails.pdf

After writing your virtual machine configuration, you can start the virtual machine. Run this command as root:

# xl create [CONFIGURATION FILE].conf

Your virtual machine will boot, and you can access it remotely using SPICE or another graphical client.
(The virtual machine network is already set in the configuration file.)
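
If you prefer a text console instead of a graphical client, xl can attach one at creation time with the standard -c option:

# xl create -c [CONFIGURATION FILE].conf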

Connect to your virtual machine, follow the on-screen Ubuntu installation instructions, and install the Ubuntu server.
(WARNING! You must choose the same file system that you formatted the image file with earlier.)

After installing all the packages and rebooting, the virtual machine installation is complete.

Installing Linux on the Xen domU
 Since we modified the guest Linux system, you also need to apply our patch to your 'guest' Linux system.
Our Linux patch was made for Linux 3.13, so the explanation below is based on that version.


First, download the Linux source code.

$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.13.tar.gz

Once you have downloaded the required kernel source code, extract it:

$ tar xvf linux-3.13.tar.gz

After extracting the source code archive, apply our Linux patch.
Download the patch (ubuntu.trusty.pagecache.patch) and apply it to your Linux source tree.

$ mv ubuntu.trusty.pagecache.patch /path/to/linux/source/code
$ cd /path/to/linux/source/code
$ patch -p1 < ubuntu.trusty.pagecache.patch

Compile and install the kernel, then reboot. Run these commands as root:

# make && make modules_install && make install && reboot

(It is recommended to pass the '-j[NUMBER OF CORES]' option so that make compiles with multiple parallel jobs.)

With both the guest Linux and the Xen 4.5 installation finished and the guest rebooted, the setup is complete.
Now you can use Space-Efficient Checkpointing Xen with a couple of commands.

 

IV.  How to Use

You can save and restore your virtual machine using its domain ID.
Running 'xl list' as root shows the domain ID.

# xl list
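
The ID column holds the domain ID you will pass to the save command. The listing below is an illustrative sketch, not literal output:

Name              ID   Mem VCPUs      State   Time(s)
Domain-0           0  4096     4     r-----      40.1
VM01               1  4096     4     -b----      12.3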

Checkpointing
 Saves a running domain to a state file so that it can be restored later. Once saved, the domain is no longer running on the system.
With our patch, checkpointing is much faster than before.
Run this command as root:

# xl save [DOMAIN ID] [CHECKPOINT FILE NAME] 
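
For example, to checkpoint the domain with ID 1 to a file (the path is a placeholder):

# xl save 1 /var/checkpoints/VM01.chk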

Restoring
 Builds a domain from an xl save state file.
Run this command as root:

# xl restore [CHECKPOINTED FILE NAME]
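
For example, to restore the domain saved above:

# xl restore /var/checkpoints/VM01.chk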