Wei Liu: status update on the Virtio on Xen project

Another guest blog post by Wei Liu, one of our Google Summer of Code students.
Hi everybody, it’s the midterm of Google Summer of Code now, so let me tell you what I’ve done and learned during this period.
I started working on the project in the community bonding period. I took Virtio on Xen HVM as my warm-up phase, which helped me understand QEMU and the Virtio implementation better. Luckily, it did not require much work to get Virtio working on Xen HVM. At the end of the community bonding period, I wrote a patch to enable MSI injection for HVM guests, which has been applied to the tree.
Then I started to work on Virtio for pure PV guests. That was not trivial. I spent a lot of time implementing a Virtio transport layer on top of Xenbus, event channels and grant tables, called virtio_xenbus (the counterpart of the current Virtio transport layer, virtio_pci, which uses a virtual PCI bus). The new transport layer must retain the same behavior as the old one. However, one fundamental difference between evtchn and vpci is that vpci works synchronously, while evtchn is asynchronous by nature. I got inspired by xen-pcifront and xen-pciback and finally solved this problem. Ah, a working transport layer at last.
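To give a rough idea of the trick, here is a minimal sketch of a synchronous config-space read built on top of the asynchronous event channel, in the same spirit as xen-pcifront: write the request into a shared page granted to the backend, kick the backend over the event channel, then wait for a completion flag. This is not the actual virtio_xenbus code; the structures, field names and command code are made up for illustration.

```c
#include <linux/kernel.h>
#include <linux/types.h>
#include <xen/events.h>

enum { XENVIRTIO_CONFIG_READ = 1 };     /* hypothetical command code */

struct xenvirtio_op {                   /* lives in a page granted to the backend */
	u32 cmd;
	u32 offset;                     /* offset into virtio config space */
	u32 len;
	u32 value;                      /* filled in by the backend */
	u32 done;                       /* completion flag set by the backend */
};

struct xenvirtio_info {                 /* hypothetical per-device state */
	struct xenvirtio_op *op;        /* shared request page */
	int evtchn;                     /* event channel port to the backend */
};

/* Make the asynchronous channel look synchronous: publish the request,
 * notify the backend, then spin until it signals completion (a real
 * driver would sleep on a wait queue and have a timeout). */
static u32 virtio_xenbus_config_read(struct xenvirtio_info *info,
				     unsigned int offset, unsigned int len)
{
	struct xenvirtio_op *op = info->op;

	op->cmd    = XENVIRTIO_CONFIG_READ;
	op->offset = offset;
	op->len    = len;
	op->done   = 0;
	wmb();                          /* request must be visible before the kick */
	notify_remote_via_evtchn(info->evtchn);

	while (!op->done)               /* cpu_relax() also acts as a compiler barrier */
		cpu_relax();
	rmb();                          /* read the value only after seeing done */

	return op->value;
}
```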
But porting Virtio to pure PV needs more than a working transport layer. The vring, which is responsible for storing data, also needs some care. The original implementation uses kmalloc() to allocate the ring. In HVM it is fine to use kmalloc() to get physically contiguous memory, but a Xen PV backend needs to access machine-contiguous memory. So we have to enable Xen’s software IOTLB and replace kmalloc() with the DMA API. The physical addresses in the scatter-gather list also have to be replaced with machine addresses. With that we get a vring implementation that works for pure PV guests.
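As a rough illustration of that allocation change, here is a sketch that swaps kmalloc() for the DMA API, so that with Xen’s software IOTLB (swiotlb-xen) enabled the ring ends up in machine-contiguous memory. Only dma_alloc_coherent() and dma_map_sg() are real kernel interfaces; the little wrapper functions are just for this post.

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Allocate the vring through the DMA API instead of kmalloc(): on a PV
 * guest with swiotlb-xen enabled this returns memory that is contiguous
 * in machine address space, and *ring_dma is the (machine) address that
 * can be handed to the backend. */
static void *alloc_vring(struct device *dev, size_t ring_size,
			 dma_addr_t *ring_dma)
{
	return dma_alloc_coherent(dev, ring_size, ring_dma, GFP_KERNEL);
}

/* Likewise, buffers put into the ring are mapped with the DMA API so
 * that the addresses written into the descriptors are machine addresses
 * rather than guest-physical ones. */
static int map_buffers(struct device *dev, struct scatterlist *sg,
		       int nents)
{
	return dma_map_sg(dev, sg, nents, DMA_BIDIRECTIONAL);
}
```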
Is that all? No. One feature we need to disable is indirect buffer support, because it causes individual drivers to allocate buffers with kmalloc() at a much higher level. I tangled with this problem for some time and decided I would rather leave those drivers alone than break them, so I have disabled the feature for now. It does matter for good performance, though, so I may try to enable it someday.
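Disabling it basically means masking the feature bit before the virtio core ever sees it, roughly like this. Only VIRTIO_RING_F_INDIRECT_DESC is real here; the surrounding function and the read_backend_features() helper are hypothetical.

```c
#include <linux/virtio.h>
#include <linux/virtio_ring.h>          /* VIRTIO_RING_F_INDIRECT_DESC */

static u32 read_backend_features(struct virtio_device *vdev); /* stand-in for
						however the backend advertises
						its feature bits */

/* Mask out indirect descriptor support when the transport reports the
 * device features, so no driver ever tries to build kmalloc()ed
 * indirect buffers on top of the PV vring. */
static u32 virtio_xenbus_get_features(struct virtio_device *vdev)
{
	u32 features = read_backend_features(vdev);

	features &= ~(1u << VIRTIO_RING_F_INDIRECT_DESC);
	return features;
}
```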
Good, we finally have our foundation ready! Let’s start to tangle with specific drivers. I chose the Virtio net driver as a start. Every driver has its own features, and as mentioned above we should avoid allocating buffers with kmalloc() at the driver level, so the CTRL_VQ feature needs to be disabled. In fact, I have no driver features enabled at the moment. What makes me really happy is that Virtio net almost works out of the box. I just want to make sure things work for now; premature optimization is evil.
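For the net driver this comes down to trimming the feature table it registers with the virtio core. An illustrative (cut-down, not the upstream) table could look like the following; in my current tree even the remaining entry is masked off.

```c
#include <linux/kernel.h>
#include <linux/virtio.h>
#include <linux/virtio_net.h>

/* Illustrative feature table for the PV port: VIRTIO_NET_F_CTRL_VQ and
 * the CTRL_* features that depend on it are left out, because the
 * control virtqueue would kmalloc() its command buffers inside the
 * driver. */
static unsigned int features[] = {
	VIRTIO_NET_F_MAC,
	/* VIRTIO_NET_F_CTRL_VQ, VIRTIO_NET_F_CTRL_RX, ... omitted */
};

static struct virtio_driver virtio_net_driver = {
	.feature_table      = features,
	.feature_table_size = ARRAY_SIZE(features),
	/* .probe / .remove and the id table as in the normal driver */
};
```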
What to do next? Virtio blk is my next goal. Hopefully it will not take too long, because I’m a bit behind schedule. After that I will start to port SPICE to Xen, and then try to enable more features in Virtio net/blk to get better performance.
That’s the plan. Time is very limited, but I’m excited.
I’ve learned a lot during this period. I worked together with the community, and the interaction worked out quite well. I discussed a lot with Xen developers and got a better understanding of Xen and QEMU, as well as of Virtio itself.
Last but not least, I want to thank Stefano Stabellini, Ian Campbell, Konrad Wilk and those who helped me through my project in this hot summer. I would not have come so far without your help.
– Wei Liu
