Virtualization on ARM with Xen

This is a repost of a tutorial published initially on community.arm.com – Thank you to Andrew Wafaa for allowing us to repost.
With ARM entering the server space, a key technology in this segment is virtualization. Virtualization is not a tool solely for servers and the data center; it is also used in embedded segments such as automotive, and it is starting to appear in mobile.
This is not a new technology: IBM pioneered it in the 1960s, and there are many different hypervisors implementing different methods of virtualization. In the Open Source realm there are two major hypervisors: KVM and Xen. Both interact directly with the Linux kernel; however, KVM is solely in the Linux domain, whereas Xen works with Linux, *BSD and other UNIX variants.
In the past it was generally accepted that there are two types of hypervisor: Type 1 (also known as bare metal or native), where the hypervisor runs directly on the host hardware, controls all aspects of that hardware and manages the guest operating systems; and Type 2 (also known as hosted), where the hypervisor runs within a normal operating system. Under this classification Xen falls into the Type 1 camp and KVM into the Type 2 camp, although modern implementations of both hypervisors have blurred the lines of distinction.
This time round I’ll be taking a look at the Xen Hypervisor, which is now one of the Linux Foundation’s collaborative projects. Here is a brief overview of some of Xen’s features:

  • Small footprint. Based on a microkernel design, it has a limited interface to the guest virtual machine and takes up around 1MB of memory.
  • Operating system agnostic. Xen works well with BSD variants and other UNIX systems, although most deployments use Linux.
  • Driver isolation. In the Xen model the majority of device drivers run in virtual machines rather than in the hypervisor. As well as allowing existing OS driver stacks to be reused, this means a VM containing a driver can be rebooted in the event of a crash or compromise without affecting the host or other guests. Individual drivers can even be run in separate VMs to improve isolation and fault tolerance, or simply to take advantage of differing OS functionality.
  • Paravirtualization (PV). This style of port enables Xen to run on hardware that lacks virtualization extensions, such as Cortex-A5/A8/A9 in ARM’s case. There can also be performance gains for some PV guests, but this requires the guests to be modified and prevents “out of the box” deployment of operating systems.
  • No emulation, no QEMU. Emulated interfaces are slow and insecure. By using hardware virtualization extensions and I/O paravirtualization, Xen removes any need for emulation. As a result you get a smaller code base and better performance.

The Xen hypervisor runs directly on the hardware and is responsible for handling CPU, memory, and interrupts. It is the first program to run after the bootloader exits. Virtual machines then run on top of Xen. A running instance of a virtual machine in Xen is called a DomU or guest. The guest VMs are controlled by a special host VM called Dom0, which contains the drivers for all the devices in the system as well as a control stack to manage virtual machine creation, destruction, and configuration.
[Image: Xen architecture diagram (Xen_Arch_Diagram.png)]
Pieces of the puzzle:

  • The Xen Hypervisor is a lean software layer that runs directly on the hardware and as mentioned is responsible for managing CPU, memory, and interrupts. The hypervisor itself has no knowledge of I/O functions such as networking and storage.
  • Guest Domains/Virtual Machines (DomU) are virtualized environments, each running their own operating system and applications. On other architectures Xen supports two different virtualization modes, Paravirtualization (PV) and Hardware-assisted or Full Virtualization (HVM), and both guest types can be used at the same time on a single Xen system. On ARM there is only one virtualization mode, a hybrid of the two: effectively hardware-based virtualization with paravirtualized extensions, which some call PVHVM. Xen guests are totally isolated from the hardware and have no privilege to access hardware or I/O functionality directly, hence the name DomU: unprivileged domain.
  • The Control Domain (or Dom0) is a specialized Virtual Machine with special privileges: it can access the hardware directly, handles all access to the system’s I/O functions and interacts with the other Virtual Machines. It also exposes a control interface to the outside world, through which the system is managed. Dom0 is the first VM started by the system, and the Xen hypervisor will not function without it.
  • Toolstack and Console: Dom0 contains a control stack (known as the Toolstack) that allows a user to manage virtual machine creation, destruction, and configuration. The toolstack exposes an interface that can be driven by a command line console, by a graphical interface or by a cloud orchestration stack (such as OpenStack or CloudStack).
  • Xen-enabled operating systems: Running an operating system as Dom0 or DomU requires the operating system kernel to be Xen enabled. However porting an operating system to Xen on ARM is simple: it just needs a few new drivers for the Xen paravirtualized IO interfaces. Existing open source PV drivers in Linux and FreeBSD are likely to be reusable. Linux distributions that are based on a recent Linux kernel (3.8+) are already Xen enabled and usually contain packages for the Xen hypervisor and tools.

The latest version of Xen is 4.4.0, which was released in March and has support for both ARMv7 and ARMv8. For this exercise I’ll be looking at using Xen on ARMv8 with the Foundation Model.
Please consult the Xen Wiki for more information on using Xen with Virtualization Extensions and using Xen with Models. For discussion, review, information and help there are mailing lists and IRC.
Development set up:
You can use whichever Linux distribution you prefer, so long as you have a suitable cross-compilation environment set up. I’m using openSUSE 13.1 with the Linaro Cross Toolchain for AArch64.

Typographic explanation:


host$ = run as a regular user on host machine
host# = run as root user on host machine (can use sudo if you prefer)
chroot> = run as root user in chroot environment
model> = run as root user in a running Foundation Model

The first steps are to build Xen and a Linux kernel for use in both Dom0 and DomU machines. We then package Xen and Linux along with a Device Tree together for Dom0 to be used in the model using boot-wrapper.

Build Xen:

If using Linaro’s toolchain, ensure the toolchain’s bin directory is in your $PATH

host$ git clone git://xenbits.xen.org/xen.git xen
host$ cd xen
host$ git checkout RELEASE-4.4.0

There is a small build bug due to the use of older autotools, which will be fixed in the 4.4.1 release. Rather than wait for the next release, we’ll just backport the fix now.

host$ git cherry-pick 0c68ddf3085b90d72b7d3b6affd1fe8fa16eb6be

There is also a small bug in GCC with PSR_MODE; see Launchpad bug #1169164. Download the attached PSR_MODE_workaround.patch and apply it:

host$ patch -i PSR_MODE_workaround.patch -p1
host$ make dist-xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- CONFIG_EARLY_PRINT=fastmodel
host$ cd ..

Build Linux:


host$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
host$ cd linux
host$ git checkout v3.13

Create a new kernel config:

host$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
host$ sed -e 's/.*CONFIG_XEN is not set/CONFIG_XEN=y/g' -i .config
host$ sed -e 's/.*CONFIG_BLK_DEV_LOOP is not set/CONFIG_BLK_DEV_LOOP=y/g' -i .config
host$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- oldconfig

Make sure to answer Y to all Xen config options when prompted.
I have attached a kernel.config with all the required options enabled, for reference.
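To see what those sed substitutions actually do, here is the same edit applied to a throwaway sample config (the file path and contents below are made up purely for the demo):

```shell
# Demo only: a fake .config fragment so the substitution can be inspected
# in isolation before touching the real kernel tree.
cat > /tmp/sample.config <<'CFG'
# CONFIG_XEN is not set
# CONFIG_BLK_DEV_LOOP is not set
CONFIG_EXT3_FS=y
CFG

# The same edits as above: turn the "is not set" comment lines into =y options.
sed -e 's/.*CONFIG_XEN is not set/CONFIG_XEN=y/g' -i /tmp/sample.config
sed -e 's/.*CONFIG_BLK_DEV_LOOP is not set/CONFIG_BLK_DEV_LOOP=y/g' -i /tmp/sample.config

# Prints CONFIG_XEN=y, CONFIG_BLK_DEV_LOOP=y, CONFIG_EXT3_FS=y
cat /tmp/sample.config
```

Running `make oldconfig` afterwards (as above) then fills in any dependent options these edits newly expose.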

host$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image
host$ cd ..

Obtain the Foundation Model:

In a browser go to http://www.arm.com/products/tools/models/fast-models/foundation-model.php
Scroll to the bottom and select “Download Now”
This should provide FM000-KT-00035-r0p8-52rel06.tgz
Extract the tarball

host$ tar xaf FM000-KT-00035-r0p8-52rel06.tgz

Build Boot Wrapper and device tree.

It is common to run the models without real firmware. In this case a boot-wrapper is needed to provide a suitable boot-time environment for Xen: it boots into Non-secure HYP mode, provides boot modules, and so on:

host$ git clone -b xen-arm64 git://xenbits.xen.org/people/ianc/boot-wrapper-aarch64.git
host$ cd boot-wrapper-aarch64
host$ ln -s ../xen/xen/xen Xen
host$ ln -s ../linux/arch/arm64/boot/Image Image

Use the attached foundation-v8.dts to build the device tree blob

host$ dtc -O dtb -o fdt.dtb foundation-v8.dts
host$ make CROSS_COMPILE=aarch64-linux-gnu- FDT_SRC=foundation-v8.dts IMAGE=xen-system.axf
host$ cd ..

Run the Model to make sure the kernel functions; it will panic because we haven’t set up the rootfs yet:

host$ ./Foundation_v8pkg/models/Linux64_GCC-4.1/Foundation_v8 \
--image boot-wrapper-aarch64/xen-system.axf

Chroot Build Environment

Next we create a suitable chroot build environment using the AArch64 port of openSUSE. We will use the qemu-user-static support for AArch64 to run the chroot on the (x86) host.
First we build the qemu binary, then construct the chroot, and finally we build the Xen tools in the chroot environment.

Building qemu-aarch64-user


host$ git clone https://github.com/openSUSE/qemu.git qemu-aarch64
host$ cd qemu-aarch64
host$ git checkout aarch64-work

Install some build dependencies:

host# zypper in glib2-devel-static glibc-devel-static libattr-devel-static libpixman-1-0-devel ncurses-devel pcre-devel-static zlib-devel-static
host$ ./configure --enable-linux-user --target-list=arm64-linux-user --disable-werror --static
host$ make -j4
host$ ldd ./arm64-linux-user/qemu-arm64

        not a dynamic executable

This last step verifies that the resulting binary is indeed static. We will copy it into the chroot later on.
We now need to tell binfmt_misc about AArch64 binaries:
On openSUSE:

host# cp scripts/qemu-binfmt-conf.sh /usr/sbin/
host# chmod +x /usr/sbin/qemu-binfmt-conf.sh
host# qemu-binfmt-conf.sh

On Debian:

host# update-binfmts --install aarch64 /usr/bin/qemu-aarch64-static \
--magic '\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7' \
--mask '\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff'
host$ cd ..
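The --magic/--mask pair registered above is simply the start of an AArch64 ELF header: 0x7f “ELF”, ELFCLASS64, little-endian, with the EM_AARCH64 machine number (0xb7) at byte offset 18. As a sketch, we can fabricate those bytes and dump them; the file below is purely illustrative, but a real binary such as the static qemu build starts the same way:

```shell
# Write the first 20 bytes of an AArch64 little-endian ELF header:
#   \177ELF       magic
#   \002\001\001  ELFCLASS64, little-endian, EV_CURRENT
#   9 x \000      OS/ABI byte plus padding
#   \002\000      e_type    = ET_EXEC (little-endian)
#   \267\000      e_machine = 0xb7   (EM_AARCH64)
printf '\177ELF\002\001\001\000\000\000\000\000\000\000\000\000\002\000\267\000' > /tmp/fake-elf

# Dump it; the two bytes at offset 18 read "b7 00", which is exactly what
# the binfmt_misc registration matches on.
od -A d -t x1 /tmp/fake-elf
```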

Build the chroot environment


host$ wget http://download.opensuse.org/ports/aarch64/distribution/13.1/appliances/openSUSE-13.1-ARM-JeOS.aarch64-rootfs.aarch64-1.12.1-Build32.2.tbz

Note: the file name may change due to continuous image building; if the above does not work, check the download directory for the latest version of the tarball.

host$ mkdir aarch64-chroot

host# tar -C aarch64-chroot -xaf openSUSE-13.1-ARM-JeOS.aarch64-rootfs.aarch64-1.12.1-Build32.2.tbz

Install the qemu binary into the chroot environment

host# cp qemu-aarch64/arm64-linux-user/qemu-arm64 aarch64-chroot/usr/bin/qemu-aarch64-static

host# cp /etc/resolv.conf aarch64-chroot/etc/resolv.conf

Build the Xen tools in the chroot environment (finally)

Copy the Xen sources into the chroot

host# cp -r xen aarch64-chroot/root/xen

Chroot into the aarch64 environment

host# chroot aarch64-chroot /bin/sh

We now need to install some build dependencies

chroot> zypper install gcc make patterns-openSUSE-devel_basis git vim libyajl-devel python-devel wget libfdt1-devel libopenssl-devel

If prompted to trust a repository key, choose whether to trust it permanently or just this once (personally I chose to always trust it).

chroot> cd /root/xen
chroot> ./configure
chroot> make dist-tools
chroot> exit

The Xen tools are now in aarch64-chroot/root/xen/dist/install

Root filesystem and image:

We will create an ext3-formatted filesystem image; we will also use a simplified init script to avoid long waits while running the model.

host$ wget http://download.opensuse.org/ports/aarch64/distribution/13.1/appliances/openSUSE-13.1-ARM-JeOS.aarch64-rootfs.aarch64-1.12.1-Build32.2.tbz

This is the same rootfs tarball as used for the chroot. You can re-use the previously downloaded tarball if you wish.

host$ dd if=/dev/zero bs=1M count=1024 of=rootfs.img
host$ /sbin/mkfs.ext3 rootfs.img

Answer yes; we know it’s not a block device (alternatively, pass -F to mkfs.ext3 to skip the prompt).
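If you want to sanity-check the dd arithmetic first, here is the same recipe at a throwaway size (path and size here are illustrative only):

```shell
# bs=1M count=16 writes 16 * 1048576 = 16777216 bytes of zeros.
dd if=/dev/zero bs=1M count=16 of=/tmp/demo.img 2>/dev/null

# Confirm the resulting file size (GNU stat).
stat -c %s /tmp/demo.img   # prints 16777216
```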

host# mount -o loop rootfs.img /mnt
host# tar -C /mnt -xaf openSUSE-13.1-ARM-JeOS.aarch64-rootfs.aarch64-1.12.1-Build32.2.tbz

Install the Xen tools that we built earlier

host# rsync -aH aarch64-chroot/root/xen/dist/install/ /mnt/

Create the init script:

host# cat > /mnt/root/init.sh <<EOF

#!/bin/sh
set -x
mount -o remount,rw /
mount -t proc none /proc
mount -t sysfs none /sys
mount -t tmpfs none /run
mkdir /run/lock
mount -t devtmpfs dev /dev
/sbin/udevd --daemon
udevadm trigger --action=add
mkdir /dev/pts
mount -t devpts none /dev/pts
mknod -m 640 /dev/xconsole p
chown root:adm /dev/xconsole
/sbin/klogd -c 1 -x
/usr/sbin/syslogd
cd /root
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
exec /bin/bash
EOF


host# chmod +x /mnt/root/init.sh

Get missing runtime dependencies for Xen

host$ wget http://download.opensuse.org/ports/aarch64/distribution/13.1/repo/oss/suse/aarch64/libyajl2-2.0.1-14.1.2.aarch64.rpm


host$ wget http://download.opensuse.org/ports/aarch64/distribution/13.1/repo/oss/suse/aarch64/libfdt1-1.4.0-2.1.3.aarch64.rpm


host# cp libyajl2-2.0.1-14.1.2.aarch64.rpm libfdt1-1.4.0-2.1.3.aarch64.rpm /mnt/root/


host# umount /mnt

Start the model


host$ ./Foundation_v8pkg/models/Linux64_GCC-4.1/Foundation_v8 \
--image boot-wrapper-aarch64/xen-system.axf \
--block-device rootfs.img \
--network=nat

Silence some of the harmless warnings

model> mkdir /lib/modules/$(uname -r)
model> depmod -a

Install the runtime dependencies:

model> rpm -ivh libfdt1-1.4.0-2.1.3.aarch64.rpm libyajl2-2.0.1-14.1.2.aarch64.rpm
model> ldconfig

Start the Xen daemon; you can ignore the harmless message about i386 qemu if it appears.

model> /etc/init.d/xencommons start

If you get an error about a missing file for /etc/init.d/xencommons, re-run ldconfig.
Confirm that Dom0 is up:

model> xl list

Name         ID   Mem   VCPUs  State   Time(s)
Domain-0     0    512   2      r-----     13.9

Congratulations, you now have a working Xen toolstack. You can shut down the model for now.

Creation of a DomU guest

For the guest rootfs we will use a smaller OpenEmbedded-based Linaro image rather than a full openSUSE image, purely for space reasons.

host$ wget http://releases.linaro.org/latest/openembedded/aarch64/linaro-image-minimal-genericarmv8-20140223-649.rootfs.tar.gz


host$ dd if=/dev/zero bs=1M count=128 of=domU.img

host$ /sbin/mkfs.ext3 domU.img
Again, answer yes; we know it’s not a block device.

host# mount -o loop domU.img /mnt
host# tar -C /mnt -xaf linaro-image-minimal-genericarmv8-20140223-649.rootfs.tar.gz
host# umount /mnt

Make the DomU rootfs and kernel available to Dom0

host# mount -o loop rootfs.img /mnt
host# cp domU.img /mnt/root/domU.img
host# cp linux/arch/arm64/boot/Image /mnt/root/Image

Create the config for the guest

host# cat > /mnt/root/domU.cfg <<EOF

kernel = "/root/Image"
name = "guest"
memory = 512
vcpus = 1
extra = "console=hvc0 root=/dev/xvda ro"
disk = [ 'phy:/dev/loop0,xvda,w' ]
EOF


host# umount /mnt
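For reference, here is the same guest config annotated line by line (the comments are mine; the values are exactly those above, and xl config files accept # comments):

```
kernel = "/root/Image"                     # Dom0 path to the DomU kernel image
name   = "guest"                           # domain name shown by `xl list`
memory = 512                               # guest RAM in MiB
vcpus  = 1                                 # number of virtual CPUs
extra  = "console=hvc0 root=/dev/xvda ro"  # appended to the guest kernel command line
disk   = [ 'phy:/dev/loop0,xvda,w' ]       # Dom0's /dev/loop0, seen by the guest as writable xvda
```

console=hvc0 points the guest at the Xen paravirtualized console, which is what `xl console` attaches to later on.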

Start the model again:

host$ ./Foundation_v8pkg/models/Linux64_GCC-4.1/Foundation_v8 \
--image boot-wrapper-aarch64/xen-system.axf \
--block-device rootfs.img \
--network=nat

model> losetup /dev/loop0 domU.img
model> /etc/init.d/xencommons start

Create the DomU using the config:

model> xl create domU.cfg

View the guest’s info from the Dom0 console:

model> xl list

[Screenshot of the Dom0 host: xenhost.png]
Connect to the guest’s console

model> xl console guest

[Screenshot of the DomU guest: xenguest.png]
Now all that’s left is to have a lot of fun!
