Xen 4.1 Released
After 11 months of development and 1,906 commits (an average of 6 a day!), Xen.org is proud to present its new stable Xen 4.1 release. We also want to take this opportunity to thank the 102 individuals and 25 organisations who have contributed to the Xen codebase, and the 60 individuals who made just over 400 commits to the Xen subsystem and drivers in the Linux kernel.
New Xen Features
Xen 4.1 sports the following new features:
- A re-architected XL toolstack that is functionally nearly equivalent to XM/XEND
- Prototype credit2 scheduler designed for latency-sensitive workloads and very large systems
- CPU Pools for advanced partitioning
- Support for large systems (more than 255 processors, 1GB/2MB super pages)
- Support for x86 Advanced Vector Extensions (AVX)
- New Memory Access API enabling integration of 3rd party security solutions into Xen virtualized environments
- Even better stability through our new automated regression tests
Further information can be found in the release notes.
XL Toolstack: Xen 4.1 includes a re-architected toolstack based on the new libxenlight library, which provides a simple and robust API for toolstacks. XL is functionally equivalent to XM and almost entirely backwards compatible with existing XM domain configuration files. The XEND toolstack remains supported in Xen 4.1, but we strongly recommend that users upgrade to XL; for more information see the Migration Guide. Projects are underway to port XCP’s xapi and libvirt to the new libxenlight library.
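To give a concrete feel for that compatibility, here is a minimal domain configuration of the kind XM already understands, which XL accepts unchanged; the file name, kernel paths and device details below are purely illustrative assumptions:

    # /etc/xen/guest1.cfg -- example values only
    name    = "guest1"
    memory  = 1024
    vcpus   = 2
    kernel  = "/boot/vmlinuz-2.6.38"
    ramdisk = "/boot/initrd.img-2.6.38"
    root    = "/dev/xvda1 ro"
    disk    = [ 'phy:/dev/vg0/guest1,xvda,w' ]
    vif     = [ 'bridge=xenbr0' ]

The guest is then started and inspected with the new toolstack much as it would have been with XM, for example with xl create /etc/xen/guest1.cfg followed by xl list.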
Credit2 Scheduler: The credit1 scheduler has served Xen well for many years, but it has several weaknesses, including poor handling of latency-sensitive workloads such as network traffic and audio. The credit2 scheduler is a complete rewrite, designed with latency-sensitive workloads and very large numbers of CPUs in mind. We are still calling it a prototype scheduler because the algorithm needs more work before it is ready to become the main scheduler; however, it is stable and already performs better than credit1 for some workloads.
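For those who want to try it, the scheduler is chosen with a hypervisor boot parameter; assuming a GRUB-style boot entry, the only change needed is on the line that loads the hypervisor (the path is a placeholder):

    kernel /boot/xen.gz sched=credit2

After rebooting, xl info should report the active scheduler in its xen_scheduler field.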
CPU Pools: The default credit scheduler provides only limited mechanisms (pinning VMs to CPUs and using weights) to partition a machine and allocate CPUs to VMs. CPU pools provide a more powerful and easier-to-use way to partition a machine: the physical CPUs are divided into pools, each CPU pool runs its own scheduler, and each running VM is assigned to exactly one pool. This is not only a more robust and user-friendly way to partition a machine, it also allows different schedulers to be used in different pools, depending on which scheduler works best for that workload.
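As a sketch of how this looks in practice with the XL toolstack (the pool name, scheduler and CPU numbers are illustrative, and the exact configuration syntax is described in the xl documentation):

    # /etc/xen/pool1.cfg -- example cpupool definition
    name  = "Pool-1"
    sched = "credit2"
    cpus  = ["4", "5", "6", "7"]

The pool is then created with xl cpupool-create /etc/xen/pool1.cfg, existing pools are shown with xl cpupool-list, and a running VM can be moved into it with xl cpupool-migrate guest1 Pool-1.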
Large Systems: Xen 4.1 has been extended and optimized to take advantage of new hardware features, increasing performance and scalability, in particular on large systems. Xen now supports the Intel x2APIC architecture and can run on systems with more than 255 CPUs. Further, support for EPT/VT-d 1GB/2MB super pages has been added, reducing TLB overhead. EPT/VT-d page table sharing simplifies support for Intel’s IOMMU by allowing the CPU’s Extended Page Tables (EPT) to be used directly by the VT-d IOMMU. Timer drift has been eliminated through TSC-deadline timer support, which provides a per-processor timer tick.
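Whether these features are active on a particular machine can be checked with standard toolstack commands (the grep patterns below are only examples; the exact message text depends on the hardware and the Xen build):

    # number of physical CPUs the hypervisor is managing
    xl info | grep nr_cpus
    # hypervisor boot log, e.g. to look for x2APIC or VT-d messages
    xl dmesg | grep -iE 'x2apic|vt-d|ept'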
Advanced Vector Extensions (AVX): Support for the XSAVE and XRSTOR extended-state save/restore instructions has been added, enabling Xen guests to use the AVX instructions available on newer Intel processors.
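From inside a Linux guest on such hardware, these capabilities show up as ordinary CPU feature flags, so a quick check is simply:

    # both flags should be listed once the hypervisor exposes them to the guest
    grep -oE 'xsave|avx' /proc/cpuinfo | sort -u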
Memory Access API: The mem_access API has been added to enable suitably privileged domains to intercept and handle memory faults. This extends Xen’s security features in a new direction and enables third parties to invoke malware detection software or other security solutions on demand from outside the virtual machine.
Upstreaming
During the development cycle of Xen 4.1, the Xen community worked closely with upstream Linux distributions to ensure that Xen dom0 support and Xen guest support are available in unmodified Linux distributions. This means that installing and using Xen has become much easier than it was in the past.
- Basic dom0 support was added to the Linux kernel, and a vanilla 2.6.38 kernel is now able to boot on Xen as the initial domain (see the boot-entry sketch after this list). There is still some work to do, as the initial domain is not yet able to start any VMs, but this and other improvements have already been submitted to the kernel community or will be soon.
- Xen developers rewrote the Xen PV-on-HVM Linux drivers in 2010 and submitted them for inclusion in the upstream Linux kernel. The PV-on-HVM drivers were merged into upstream kernel.org Linux 2.6.36, and various optimizations were added in Linux 2.6.37. This means that any Linux 2.6.36 or 2.6.37 kernel binary can now boot natively, on Xen as dom0, on Xen as a PV guest, and on Xen as a PV-on-HVM guest. For a full list of supported Linux distributions see here.
- Xen support for upstream Qemu was developed, such that upstream Qemu can be used as the Xen device model. This work has received good feedback from the Qemu community, but has not yet been merged into mainline.
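As mentioned in the first point above, booting such a vanilla kernel as dom0 only requires a boot entry that loads the hypervisor first and passes the Linux kernel and initrd as modules; a GRUB (legacy) sketch with placeholder paths, memory size and root device:

    title Xen 4.1, vanilla Linux 2.6.38 dom0
        kernel /boot/xen.gz dom0_mem=1024M
        module /boot/vmlinuz-2.6.38 root=/dev/sda1 ro console=hvc0
        module /boot/initrd.img-2.6.38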
The Xen development community recognizes that there is still some way to go. We will therefore continue to work with upstream open source projects to ensure that Xen works out of the box with all major operating systems, allowing users to get the benefits of Xen, such as multi-OS support, performance, reliability, security and feature richness, without the burden of having to use custom builds of operating systems.
More Info
Downloads, release notes, data sheet and other information are available from the download page. Links to useful wiki pages and other resources can be found on the Xen support page.