advocacy linux

Further bugs in switches

Further to my {{post id="opennms-and-bugg-switches" text="previous post"}} on the topic of uncovering bugs in switches with network monitoring systems, I have two more bits of news:

The first is that the vendor has given me a new firmware for one of the switch models which fixes the bug. Apparently it was a known but undocumented bug. Still waiting on a firmware for the other switches.

The second is that I have found another bug in the switches – this time with HTTPS. It’s not triggering during autodiscovery this time, and the bug takes a bit longer to manifest (2-4 weeks, it seems), so it’s slightly harder to track down. I’ve got to set up a test rig to hammer some of the switches with HTTPS connection attempts and see what shakes loose.
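A test rig for this sort of thing doesn't need to be fancy. A minimal sketch in Python, assuming the bug is triggered by repeated TLS handshakes (the switch address, attempt count and delay below are placeholders, not values from my setup):

```python
# Repeatedly attempt TLS handshakes against a switch's management
# interface and count successes/failures.
import socket
import ssl
import time

def try_https(host, port=443, timeout=5):
    """Attempt a TLS handshake; True on success, False on any failure."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # switch certs are typically self-signed
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False

def hammer(host, attempts, delay=1.0, connect=try_https):
    """Run repeated connection attempts; return (successes, failures)."""
    ok = failed = 0
    for _ in range(attempts):
        if connect(host):
            ok += 1
        else:
            failed += 1
        if delay:
            time.sleep(delay)
    return ok, failed

# Example run against a switch (hypothetical address):
# print(hammer("192.0.2.10", attempts=1000, delay=2.0))
```

Leave something like this running for a few weeks and a jump in the failure count should show roughly when the switch falls over.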


Xen partially merged into linux kernel

It’s been a long time coming, but the first parts of Xen have finally been merged into the upstream codebase for the linux 2.6.23 kernel.  Announcements here, here and here.  It’s worth reading Simon Crosby’s announcement, as it actually has some information about what is and isn’t being merged. Specifically, this is the merge of the paravirt_ops part of Xen, something that VMware has had in the mainstream kernel for a while now, and it’s only 32-bit at this stage. From Simon’s blog:

“The effort for XenSource has been led by Jeremy Fitzhardinge, who has tirelessly tracked the developing kernel versions, while adding the Xen guest support for SMP guests, with fast paravirtualized block and network I/O. Next up is 64 bit support, according to Jeremy, who is also working on Dom0 support.”

While this isn’t a full merge of the entire Xen codebase into the kernel, it should still make it a lot easier to build xen-aware kernels.

Also merged into 2.6.23 is lguest, Rusty Russell’s linux-only paravirtualised hypervisor. I’ve been following Rusty’s blog posts on his development of this, although I was never sure of his reasoning for starting work on it. It may just have been a “let’s see if I can make one” sort of approach.  Between kvm, lguest, uml and xen, there is now a lot of choice for virtualisation under the linux kernel. Not all approaches are the same – xen and lguest are paravirtualised while kvm is full virtualisation, and UML just runs a new linux kernel in user-space.  Xen and KVM will support windows (and other OS) guests, while lguest and UML will only support linux guests. KVM requires VT or AMD-V chips, Xen requires them to install windows (but not linux), and lguest and UML don’t make use of them at all.
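If you're wondering which of these your hardware can run, the CPU flags tell you. A quick check, assuming Linux's /proc/cpuinfo format: 'vmx' means Intel VT, 'svm' means AMD-V; KVM needs one of them, and Xen needs them only for windows guests.

```python
# Look for hardware virtualisation support in /proc/cpuinfo contents.
def hw_virt_flag(cpuinfo_text):
    """Return 'vmx', 'svm', or None given the contents of /proc/cpuinfo."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a live system:
# print(hw_virt_flag(open("/proc/cpuinfo").read()) or "no hardware virt")
```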


XenSource ‘Simply Virtualize’

The XenSource ‘Simply Virtualize’ tour made its way to NZ this week, with a 3-hour set of presentations at Microsoft House yesterday afternoon. We had a good catch-up with John Glendenning from XenSource on Monday, but I can’t talk about most of that.

The presenters at the event were XenSource, Sun Microsystems, Microsoft, Platespin and ExpressData. The tour seems to be done in conjunction with IBM elsewhere, but was done with Sun here in NZ.

John Glendenning from XenSource gave a brief introduction to XenSource and Xen Enterprise, and also some points about where development is heading. One point is the planned interoperability with Viridian, the Microsoft virtualization stack. The Microsoft presentation followed, which focussed entirely on MS’s various virtualization technologies – presentation, application, server, etc. It had some interesting aspects, but was a tiny bit out of place, I thought.  Sun had a good set of slides on their AMD-V platforms, including their new blade infrastructure which will support opteron, xeon (when they come out later in the year) and ultrasparc blades. Platespin then gave a fairly quick but comprehensive overview of their P2V / V2V and capacity planning and management tools. James Johnstone from ExpressData finished up with a demo of Xen Enterprise and Windows XP guests running on a SunFire x4200 M2.

Things that I got from the various presentations:

  • Microsoft’s SoftGrid Application Virtualization software suite looks damn useful, and I can think of at least one site we could have used it this year.
  • Sun are coming out with Xeon-based servers soon, with AMD’s quad-core range also on the horizon
  • Sun have a blade range that has a fully modular IO system – the PCI infrastructure is abstracted away from the blade.
  • Platespin’s PowerConvert and PowerRecon products look very useful, and they are aggressively adding new features.

From Platespin’s presentation and some of the points that MS came out with, it is very clear that the virtualization technology you choose for server virtualization is not the final decision you’ll make, nor should it be. The virtualization ecosystem is massive already – mostly due to the number of ISVs VMware has on board. These ISVs are now targeting Xen Enterprise as a platform as well, and are bringing their already mature technology to focus on the alternative platforms. This gives Xen Enterprise quite a bit of credibility, as the management tools don’t have to be rebuilt – vendors like Platespin, Leostream, Mountain View Data, Marathon etc can target Xen with relative ease.

advocacy NSP Xen

Xensource and VMWare performance comparison

I was discussing Xensource with a potential client a few weeks ago, and was fairly surprised when they pulled out a performance comparison of VMware and Xen, which showed VMware massively outperforming Xen in several tests. On further inspection, it was fairly obvious that VMware’s tests used the open-source version of Xen, and were running windows-based tests on it. This might be a fairly typical enterprise environment, but they weren’t really playing a fair game – Xensource’s product range includes a PV driver set for windows which drastically improves performance. This driver set isn’t available under the open source version of Xen.

The comparison the client had been given also had some other data included, some of which was misleading, and some of which was just plain wrong. It included statements such as ‘Xen does not support live migration’ when it does (and what’s more, the open source version supports it natively, so it’s not a bolt-on to the product), and a point stating that Xen had no management consoles available on one page, and a price comparison of VMware, Xensource and VirtualIron on the next. Xensource provide a commercial management console for Xen. Huh.
After a bit of digging, I found the original VMware published report that this comparison was drawn from. Yes, VMware didn’t run a fair test, and yes, given that unfair test, Windows under VMWare ESX massively outperforms Xen in some areas, primarily I/O related.

We mentioned to Xensource at about that time that this report was being circulated. They must have been getting the same heads-up elsewhere, because within a few days they had published a performance comparison of Xen Enterprise and VMware ESX themselves, and even gotten approval from VMware to publish it! Roger Klorese links to the report from his blog. The report is here.
The report shows that the gap between VMware ESX and Xen Enterprise performance is negligible in most cases, and Xen Enterprise outperforms VMware ESX considerably in some areas. It’s definitely a much closer race than VMware’s report would have you believe.

advocacy General NSP WLUG

Puppet – a system configuration tool

I saw a couple of blog posts about puppet recently. I’ve been meaning to investigate cfengine for a while now, and puppet was a new angle on the same problem. From the intro:
Puppet is a system configuration tool. It has a library for managing the system, a language for specifying the configuration you want, and a set of clients and servers for communicating the configuration and other information.

The library is entirely responsible for all action, and the language is entirely responsible for expressing configuration choices. Everything is developed so that the language operations can take place centrally on a single server (or bank of servers), and all library operations will take place on each individual client. Thus, there is a clear demarcation between language operations and library operations, as this document will mention.

It’s very new still, and is under active development. It seems to have been designed with fixing some of the hassles of cfengine in mind. It is written in ruby and has a reasonably powerful config language, and you can use embedded ruby templates for dynamically building up content to deploy. I have no particular preference for ruby – in fact, this is the first time I’ve used the language. Configuration is stored in a manifest on the puppetmaster server, and is based on the notions of classes and nodes. A node can inherit from multiple classes, or can merely include a specific class if certain criteria are met. Subclasses can override specific details of a parent class.
It makes use of a library called facter (also written by Reductive Labs) to pull information (‘facts’) from the client hosts, and these can be used in the manifests to control configuration. For example, it will work out the linux distribution you are running and store this in a variable, and you can use this to determine which classes to run.  It is fairly easy to extend facter to support additional facts – so I added support for working out the Debian and Ubuntu release number and codename – e.g., 3.1 and sarge, or 6.10 and edgy.
There is a dependency system in place, so you can specify a rule to ensure that a service is running, which depends on the package being installed. If you use puppet to manage the config file for the service, you can set a subscription on the file for the service, so that if a change to that file is pushed out via puppet, it will restart the service for you as well.
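Roughly what that package → config file → service chain looks like in a manifest – a sketch only, with the class name, file source and node name made up for illustration:

```puppet
# Install ntp, manage its config file, and keep the service running.
class ntp {
    package { "ntp": ensure => installed }

    file { "/etc/ntp.conf":
        source  => "puppet://puppetmaster/files/ntp.conf",
        require => Package["ntp"],
    }

    service { "ntp":
        ensure    => running,
        # subscribe means a pushed change to the file restarts the service
        subscribe => File["/etc/ntp.conf"],
    }
}

node "web1.example.com" {
    include ntp
}
```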

Installing packages is handled well, with the option for seeding debconf if appropriate. Puppet understands several package management formats, including apt, rpm and yum.
I’m by no means an expert with cfengine, but this feels a lot nicer to use. After my initial testing, I see no reason so far not to deploy this at work. I’ll try a test deployment on some systems, and if that works out I’ll push it the whole way out.

advocacy WLUG

2.6.16 sensibleness

From a post on Ian McDonald’s Blog:
Is it me or is it crazy the pace of point releases at the moment? Do we really need four kernels in three days?

Actually it’s just you, and perhaps a few others, who for some reason still don’t see any point in the 2.6.x.y release of kernels.

We haven’t had 4 new kernels in 3 days. We’ve had 4 small patchsets covering regressions and security flaws. In each case, the patches are less than 100 lines of code, and are considered “simple”. The fact that there are 4 of them represents the high level of effort that the stable tree maintainer, and the people submitting patches, are putting into the kernel.

I’d much rather have 4 security releases in 3 days than be told to wait until the next stable kernel. Or have to track down and find the patch myself, only to discover that it doesn’t apply cleanly. I’m capable of doing these things, but there are plenty of people who aren’t, and the 2.6.x.y stable series provides a great infrastructure for the announcement and dissemination of timely security and regression patches.
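For anyone who hasn't used the stable series: each 2.6.x.y release is just a small unified diff that applies on top of the base 2.6.x tree with `patch -p1`. A toy walkthrough of the mechanics – the "tree" and "patch" here are tiny stand-ins, not the real kernel sources:

```shell
# Stand-in for an unpacked 2.6.16 source tree.
mkdir -p linux-2.6.16
printf 'SUBLEVEL = 16\nEXTRAVERSION =\n' > linux-2.6.16/Makefile

# Stand-in for a stable patch: the real ones are small diffs like this,
# bumping EXTRAVERSION plus the actual fixes.
cat > patch-2.6.16.1 <<'EOF'
--- a/Makefile
+++ b/Makefile
@@ -1,2 +1,2 @@
 SUBLEVEL = 16
-EXTRAVERSION =
+EXTRAVERSION = .1
EOF

# Apply it on top of the base tree.
patch -d linux-2.6.16 -p1 < patch-2.6.16.1
grep EXTRAVERSION linux-2.6.16/Makefile
```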

At the end of the day, the chances are fairly good you’ll never need any of the patches in the stable series. What’s more, the 2.6.x.y patches all get rolled into 2.6.x+1 anyway, so you’ll get them all then. So feel free to ignore these “crazy” “kernel releases”.