XenSource University

Work has finally calmed down enough that I’m able to write a post about this. Last week I flew up to San Jose to attend the first XenSource University. This was a two-day event: the first day had a series of business and technical presentations from XenSource and some of their strategic partners (Intel, Entisys, etc.). The second day was split between business/strategy one-on-one meetings and a full-day technical training course on the Xen Enterprise product, culminating in an exam for accreditation.

The technical components of the event were towards the lighter side of a technical forum, but there was a wide range of people there: from those who have been using Xen directly for a long time (like myself), to those who are comfortable installing and managing VMware but have never touched Linux, to those who are purely on the sales/demo side of their companies. There was a good presentation from Intel discussing the VT enhancements and the future of VT, and the XenSource roadmap covered some aspects in quite a bit of detail. There were a couple of interesting announcements too, which will be coming out later in the month.

I think the two most important aspects of this event for me were networking with other users of Xen and XenSource products (and meeting more of the XenSource team), and discovering some limitations that are inherent in Xen itself. Xen apparently doesn’t support more than 4 NICs on the host, which is a major concern to anyone used to deploying VMware ESX / Enterprise, which apparently needs about 8 GigE NICs just to operate. This will be ameliorated somewhat by the better network performance you can expect to see under Linux/Xen; however, there are still enough situations in which you might want more than 4 NICs. XenSource doesn’t support bonding or VLANs just yet either, although both Linux and Xen do – it’s just not in the UI. This will be fixed later.
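
In the meantime, because the Linux side already supports both, you can build the bond and VLAN interfaces by hand and attach the result to the bridge Xen uses. Here’s a rough sketch of that idea in Python, shelling out to the usual Linux tools; the interface names, bridge name and VLAN tag (eth0, eth1, bond0, xenbr0, 100) are only placeholder assumptions for illustration, not anything the XenSource tools will set up for you.

    #!/usr/bin/env python
    # Rough sketch: hand-build a bonded, VLAN-tagged interface and attach it
    # to a Xen bridge, since the XenSource UI doesn't expose bonding/VLANs yet.
    # All interface names and the VLAN tag below are placeholders.
    import subprocess

    def run(cmd):
        # Run a shell command and bail out if it fails.
        print("+ " + cmd)
        subprocess.check_call(cmd, shell=True)

    # Enslave two physical NICs to bond0 (assumes the bonding module is
    # already loaded, e.g. via /etc/modprobe.conf).
    run("ifconfig bond0 up")
    run("ifenslave bond0 eth0 eth1")

    # Tag VLAN 100 on top of the bond, then bridge it for the guests.
    run("vconfig add bond0 100")
    run("ifconfig bond0.100 up")
    run("brctl addbr xenbr0")
    run("brctl addif xenbr0 bond0.100")
    run("ifconfig xenbr0 up")

Guests attached to xenbr0 then sit on the tagged, bonded network without the UI knowing anything about it.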

We also identified an efficiency problem within the bridging system. You should be able to send data between Xen guests at relatively high speeds over an internal-only bridge; however, we didn’t have much luck making this happen. This could be related to memory bandwidth issues, as the boxes we were using were not particularly powerful machines. Throughput also dropped almost linearly as the MTU was increased. These bugs might be in Xen, in the Linux bridging code, or in the PV Ethernet driver being used.
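
For anyone who wants to reproduce this on their own hardware, the kind of test we were doing is nothing fancier than pushing bytes over a TCP socket between two guests on the internal-only bridge and repeating it at different MTUs. A minimal sketch (the port, buffer size and transfer size are arbitrary example values):

    # Quick inter-guest throughput check: run 'recv' in one guest and
    # 'send <guest-ip>' in the other, both attached to the internal bridge.
    # Port, chunk size and total transfer size are arbitrary example values.
    import socket, sys, time

    PORT = 5001
    CHUNK = 64 * 1024          # 64 KiB per send
    TOTAL = 256 * 1024 * 1024  # push 256 MiB per run

    def receiver():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        got = 0
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            got += len(data)
        print("received %d bytes" % got)

    def sender(host):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((host, PORT))
        buf = b"x" * CHUNK
        sent = 0
        start = time.time()
        while sent < TOTAL:
            sock.sendall(buf)
            sent += len(buf)
        sock.close()
        print("%.1f MB/s" % (sent / (time.time() - start) / 1e6))

    if __name__ == "__main__":
        if sys.argv[1] == "recv":
            receiver()
        else:
            sender(sys.argv[2])

Bumping the guest interfaces to a larger MTU between runs (e.g. ifconfig eth0 mtu 9000 inside each guest) is enough to see the drop-off we saw.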

Patrick Naubert from Xelerance, the custodians of the OpenS/WAN project, also pointed out that entropy is basically non-existent inside a Xen guest – a guest has almost no real hardware interrupt sources feeding the kernel’s entropy pool. This is a problem for anyone wanting to do crypto, of course. This shouldn’t be hard to fix if you are running Xen-aware kernels, so hopefully we’ll see a fix for this soon.
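
You can see the problem for yourself from inside a guest: the kernel’s entropy pool sits at or near zero, and a blocking read from /dev/random stalls almost immediately. A tiny sketch that just reports both (the 16-byte read size is arbitrary):

    # Peek at the kernel entropy pool inside a guest and time a blocking
    # read from /dev/random; with no real hardware interrupt sources the
    # pool tends to sit near zero and the read stalls.
    import time

    pool = open("/proc/sys/kernel/random/entropy_avail").read().strip()
    print("entropy_avail: %s bits" % pool)

    start = time.time()
    data = open("/dev/random", "rb").read(16)   # 16 bytes is arbitrary
    print("read %d bytes in %.1f seconds" % (len(data), time.time() - start))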
