
Exporting Tape Autoloaders via iSCSI

A while ago I posted about {{post id="iscsi-for-scsi-device-passthrough-under-xen-enterprise" text="exporting a tape drive via iSCSI"}} to enable Windows VMs to back up to a SCSI tape drive under Citrix Xenserver. I spent a couple of hours googling for whether or not you could do the same thing with a tape autoloader, and didn't find a lot of useful information.

So, I just dived in and tried it, and it turns out exactly the same process works fine for a tape autoloader as well, as long as you are slightly careful with your configuration file.

First of all, find your HCIL numbers with lsscsi:
[code]
[4:0:0:0]   tape     HP    Ultrium 4-SCSI    U24W   /dev/st0
[4:0:0:1]   mediumx  HP    1x8 G2 AUTOLDR    1.70   -
[/code]

So, we've got an HP Ultrium 4 tape drive on 4:0:0:0, and a 1x8 G2 Autoloader on 4:0:0:1. Let's configure IETd:

[code]
Target iqn.2007-04.com.example:changer0
    Lun 0 H=4,C=0,I=0,L=0,Type=rawio
    Type 1
    InitialR2T No
    ImmediateData Yes
    xMaxRecvDataSegmentLength 262144

    Lun 1 H=4,C=0,I=0,L=1,Type=rawio
    Type 1
[/code]

A couple of points to note:

  • I've named the target changer0; you don't have to.
  • You do have to make sure both the tape drive device(s) (in this case, 4:0:0:0) and the changer device (4:0:0:1) are exported as different LUNs under the same target.
  • The other options (InitialR2T, ImmediateData etc.) may or may not suit you; consult the IETd documentation for what you actually need and want.

Once you've restarted the iSCSI target, you can load up an initiator and connect to it, and you should see both devices being exported under the one target. If you accidentally use a different target for the changer and the tape drive, you'll find that your backup software can probably see the changer device, but will tell you there are no available drives.
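
If you want to sanity-check the export from a Linux box before pointing your backup VM's initiator at it, something along these lines should do (the init script name and the target address 192.0.2.10 are assumptions that will vary with your distro and network):

[code]
# on the target box: restart IET so it picks up the new configuration
/etc/init.d/iscsi-target restart

# on a Linux initiator: discover the target, log in, and check that both devices show up
iscsiadm -m discovery -t st -p 192.0.2.10
iscsiadm -m node -T iqn.2007-04.com.example:changer0 -p 192.0.2.10 --login
lsscsi
[/code]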


HP AiO iSCSI and Citrix Xenserver

A couple of our clients have HP AiO1200 iSCSI systems. These are nice enough units, especially for entry-level iSCSI SANs. They're in a slightly modified HP DL320s chassis, and run Windows 2003 Storage Server as well as some custom-built HP management tools.

I've never had an easy run when dealing with their iSCSI target and the open-iscsi stack used by Citrix Xenserver. The first problem I had was that the management tools don't support multiple initiators connecting to the same target LUN. They don't actually stop you, but it seems you need to hold your tongue just right to allow multiple initiators to connect to the one target. If you don't hold it just right, the admin tools will let you do it, but it won't actually work.[1]

The second problem I had was that Xenserver just refused to connect, saying "Your Target is probably misconfigured". There's not really a lot of configuration you can do with an iSCSI target, so I was perplexed. Digging deeper:

[code]

# iscsiadm -m discovery -t st -p 1.2.3.4
1.2.3.4:3260,1 iqn.1991-05.com.microsoft:storage-iqn.xen-osdata-target
[/code]

It seems that iscsiadm can see everything fine! I tried adding the target via the CLI:

[code]
# xe sr-create host-uuid=f3b260ab-f8b9-4b52-980d-7b7e93ab8dcf content-type=user name-label=AIO_OSDATA shared=true type=lvmoiscsi device-config-targetIQN=iqn.1991-05.com.microsoft:storage-iqn.xen-osdata-target device-config-target=1.2.3.4

Error code: SR_BACKEND_FAILURE_107
Error parameters: , The SCSIid parameter is missing or incorrect, \
<?xml version="1.0" ?>
<iscsi-target>
    <LUN>
        <vendor>HP</vendor>
        <LUNid>0</LUNid>
        <size>42949672960</size>
        <SCSIid>360003ff646c289389ea2e31c1d419930</SCSIid>
    </LUN>
</iscsi-target>
[/code]

Unlike the error messages you get in the GUI, that one is quite helpful.[2] It tells us we're missing a SCSIid parameter, and then lists the SCSIid to try:

[code]
xe sr-create host-uuid=f3b260ab-f8b9-4b52-980d-7b7e93ab8dcf content-type=user name-label=AIO_OSDATA shared=true type=lvmoiscsi device-config-targetIQN=iqn.1991-05.com.microsoft:storage-iqn.xen-osdata-target device-config-target=1.2.3.4 device-config-LUNid=1 device-config-SCSIid=360003ff646c289389ea2e31c1d419930
[/code]

And our iSCSI target was happily added. I'm not entirely sure the LUNid parameter is required; this post suggests it isn't. I found a couple of other posts on the forums which suggest that the CLI should be your first port of call for these tasks.
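
If you want to confirm the result from the CLI afterwards, something like the following should show the new SR and its plugs on each host (the UUID is a placeholder):

[code]
xe sr-list name-label=AIO_OSDATA params=uuid,name-label,type,shared
xe pbd-list sr-uuid=<sr-uuid> params=host-uuid,currently-attached
[/code]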

[1] Where “work” means “actually let you connect more than one initiator at a time”
[2] Although, being full of less-than and greater-than signs, it doesn't display nicely in WordPress, so it's been tidied up a bit.


Linux iSCSI stacks and multiple initiators per target LUN

I've used a few hardware-based iSCSI stacks for Xenserver shared storage backends, but never had spare hardware to run up software-based stacks. This is rather backwards from the usual way people would test things, I guess, but it's how it worked out for us.

However, we're now getting some new hardware for internal use – a couple of frontend servers and a storage server, apparently – and we're going to use a software-based iSCSI stack on it. We've had a look at some of the commercial offerings (SANmelody, Open-E etc.), but I'd much rather not spend money where it's not needed. This iSCSI backend is going to have one or two LUNs, shared to a static number of hosts, and that's it.

I'd steered away from the various open-source iSCSI target stacks, because it wasn't clear whether they supported multiple initiators accessing a single LUN concurrently. This surprised me somewhat – it seemed like it should just work – however we kept getting caught by people asking about the "MaxConnections" parameter for IETd, which sounds like it means "maximum number of initiators to this LUN", and which has a rather depressing note beside it stating that the only valid value is "1" at this stage.

This didn't sit right with me though – surely there are lots of people using fully open-source iSCSI systems. All the talk about iSCSI being a cheap (free!) SAN alternative can't just be referring to people consolidating disks but still allocating single-use LUNs. I found lots of references to people talking about using software iSCSI targets with Xen as a shared storage backend.

And, of course, it's not right.[1] The IETd "MaxConnections" parameter refers to the number of connections a single initiator can make within a single session to a target, which boils down to whether multipath IO is supported by the iSCSI stack or not. And it isn't, as far as IETd is concerned. This post to iscsi.iscsi-target.devel clears things up quite nicely, but it took me a damned long time to find. So, hopefully, this will help someone else answer the same question.

1) multiple ini access different targets on same iet box at same time.
no data concurrency issue. the performance totally depends on your HW.
of course, IET can be improved to support large # of ini better

2) multiple ini access same targets on same iet box at same time. has
data concurrency issue here, so need a clsuter file system or similar
system at client side to coordinate.

3) one ini access different targets on one iet box. it will create
multiple sessions and no data concurrency issue here. performance issue
depends on HW.

all these are MS&OC/S (Multiple Sessions& One Connection per Session)

4) one ini access same target on one iet box.

it might try to use multiple connection in one session (MC/S, Multiple
Connection per Session), but iet doesnot support it and in parameter
negotiation, iet stick to MaxConn=1.

it might try to create multiple sessions with same target (still one
connection per session), which is allowed. usually this is controlled by
client software, for example, linux multi-path.

I read "multiple ini[tiator] access same targets on same iet box at same time" to mean exactly the problem I'm looking at, and the only caveat is the filesystem issue, which Xenserver deals with. And it clarifies the point about MaxConnections too.
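
To make that concrete, a minimal IETd configuration for this scenario is just a single target with a single LUN that every Xenserver host logs in to; the target name and backing device below are made up for illustration:

[code]
Target iqn.2008-01.com.example:xen-sr0
    # one big LUN shared by all hosts in the pool; Xenserver's SR layer
    # (not IET) handles the data concurrency between initiators
    Lun 0 Path=/dev/vg0/xen-sr0,Type=fileio
[/code]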

[1] That said, I haven't tested this properly yet. I ran up IETd on a test box and connected OS X and Linux to it concurrently, but while I could format the disk via Linux I couldn't mount it for some reason. OS X saw it fine. I'm not sure whether this was just some transient weirdness on my test boxes.

UPDATE: Matt Purvis emailed me to confirm that it does all work as expected. Thankfully. I hope other people find this post useful – if only because it means I'm not the only one who spent hours trying to find a definitive answer on this topic.


Week of Xenserver Bugs

Over the last week I've been required to fix five different bugs relating to Xenserver. Not all were major bugs, and not all were even Xenserver's fault.

DVD drive missing

The first bug, and actually one that first showed itself several months ago, is that the option to attach the server's DVD drive to a VM was not present. This originally happened because the DVD drive in the HP C3000 blade chassis died and was replaced. Even after it was replaced, it wouldn't show up in Xencenter. There are forum notes around on recreating the VBD and so on; in this case that wasn't even required. After reattaching the DVD drive via the blade chassis ILO to the individual blades and confirming that the correct CD device appeared in dmesg output, I ran the command xe-toolstack-restart. This command, as you might guess, restarts the Xenserver toolstack. The DVD drive now shows up in Xencenter. I'd actually logged a bug report with Citrix for this a while back, so credit is due to the Citrix engineer who called me back on this issue and suggested trying xe-toolstack-restart before doing anything else.

Xencenter not connecting

The same day as fixing the above bug, I had another customer call to say they couldn't connect via XenCenter to their Xenserver Enterprise host. I'd had a similar issue several months ago when someone changed the networking configuration on the host, and the fix then was, as above, to run the xe-toolstack-restart command. All fixed! Well, in this case the symptoms were fixed; we still don't know what caused the underlying problem.

VMs not starting, ISO SR failing after upgrade

This one came through on the same day as well. One of our customers had run an upgrade from 4.0.1 to 4.1.0 on their own internal evaluation Xenserver Enterprise system, which actually had a couple of production hosts in it. After the upgrade the ISO storage repository failed to reconnect, and a couple of VMs that had previously had ISO images mounted out of that SR failed to boot. Sadly, xe-toolstack-restart didn't solve anything for me here.

There is a lot of functionality exposed via the CLI, however, so I was able to force-detach the ISO images from the VMs in question. The VMs were stuck in a suspended state, so I had to manually force-reset them as well. Once I had these fixed, I looked at what had caused the ISO SR to die.
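
For anyone hitting the same thing, the sort of CLI incantations involved look roughly like this (the UUIDs and VM name are placeholders, not the exact commands from this incident):

[code]
# find the CD VBD that still references the missing ISO
xe vbd-list vm-name-label=MyVM type=CD params=uuid,vdi-name-label,currently-attached

# force-detach it, clear the stuck power state, then start the VM again
xe vbd-destroy uuid=<cd-vbd-uuid>
xe vm-reset-powerstate uuid=<vm-uuid> force=true
xe vm-start uuid=<vm-uuid>
[/code]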

One of the things that a lot of people misunderstand about Xenserver is that it is effectively an appliance. It runs CentOS as the dom0 (privileged domain), but that doesn't mean you should consider it to be a useful CentOS server. The upgrade process for a Xenserver system is to duplicate the primary partition into a backup partition (copy /dev/sda1 into /dev/sda2, for example). Once this is done, it basically performs a full install of the new version of Xenserver into /dev/sda1, and migrates the settings it knows about – all the Xenserver state, your networking configuration (in theory, anyway), and so on. Things it misses include any custom software you might have installed (iSCSI initiators for tape access, monitoring tools, any custom scripts) – these all get "deleted". They're still actually in the backup partition, just not in the active one.
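
The silver lining is that those files aren't really gone. Assuming the stock /dev/sda1 / /dev/sda2 layout described above, you can mount the backup partition read-only from dom0 and copy anything you need back out:

[code]
mkdir -p /mnt/backup
mount -o ro /dev/sda2 /mnt/backup
# e.g. recover a hand-edited hosts file or a custom script
cp /mnt/backup/etc/hosts /root/hosts.pre-upgrade
umount /mnt/backup
[/code]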

The upshot of this is that if you connect your ISO SR to a CIFS share and use a hostname to refer to the server rather than an IP address, don't "make it work" by adding an entry to /etc/hosts. If you want to use hostnames, make sure they resolve via DNS, and make sure DNS is set up correctly on your Xenserver host.

I think there's a lot Xenserver could have done to prevent this bug from happening, so hopefully they'll add some smarts to auto-detach VDIs from ISO SRs if the SR doesn't reconnect properly. I'm not sure there's a nice way to auto-migrate all of a user's settings (e.g. do an in-place upgrade rather than an overwrite upgrade) – there's too much scope for stuff to change.

Upgrade loses network settings on Xenserver

And now for my final bugs, and the most annoying ones. We have a customer with a Xen Enterprise 3.2 host, with a Win2k3 terminal server and a Win2k3 SBS server on it, running their core business infrastructure. We'd scheduled an outage for the upgrade from 3.2 to 4.0.1 to 4.1.0, and it all looked good, except…

The Xenserver network settings failed to migrate. I'm not sure why this happened; it definitely doesn't seem to happen every time. In Xenserver 4.1.0 the xe pif-reconfigure-ip command is used to reconfigure the IP stack on the host, followed by a xe-toolstack-restart. My favourite command!
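
For reference, the general shape of the fix looks like this (the PIF UUID and addresses are placeholders; substitute your own):

[code]
# find the PIF for the management interface on the affected host
xe pif-list host-name-label=<hostname> device=eth0 params=uuid,device,IP

# put the IP configuration back, then kick the toolstack
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=192.168.1.10 netmask=255.255.255.0 gateway=192.168.1.1
xe-toolstack-restart
[/code]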

Xentools won’t install in 4.1.0 system upgraded from 3.2.0

This one took up basically my entire day yesterday. After the upgrade from 3.2.0 through 4.0.1 and into 4.1.0, the VMs booted, but were running the old version of Xentools. The technician doing the upgrade attempted to install the new Xentools; however, on both servers it got as far as uninstalling the 3.2.0 Xentools and then failed completely to install the 4.1.0 version. We spent a lot of time going back and forth uninstalling and attempting to reinstall the drivers, before eventually completely uninstalling them and leaving the systems running without Xentools for the afternoon. I then spent most of my evening on the phone to Citrix support in Australia, with both of us looking at the site in question over a very laggy GoToAssist connection. We finally went through another complete uninstall of Xentools, including removing all the hidden device drivers (see here for details), and then installed an internal release of Xentools for 4.0.1, which at least resolved the issue.

The bug appears to be within the Xentools, but it could also be within Windows itself, or that's what I understood from the Citrix engineer I was talking to. We are apparently the second documented occurrence of this bug, and Citrix is working on a final resolution. The Citrix engineer in question had managed to replicate the bug on one of his test systems, which is reassuring – they can prove they've fixed it, at least for some permutation of the problem.

Summary

It feels like I'm painting a bad picture of Xenserver here, and maybe I am. You can take what you like from what I've written, I guess :). I'm not sure that any company could push through as many major changes as quickly as Xensource/Citrix have and not end up with some showstopper bugs, but I think some of the smaller ones should have been avoidable. Others, like the Xentools bug I mentioned last, only seem to affect older systems being upgraded, and even then not every time, and I don't really think you can test for that sort of edge case very easily, especially if you don't know it happens. I'll post an update when Citrix resolve this last bug, so if anyone is reading this and is put off upgrading their XE 3.2 system, check back for an update!


Xenserver Enterprise + Fibrechannel storage = live migration

I finally got round to booking time at the IBM demo center to have a fiddle with Xen Enterprise on shared storage. They’ve got a good range of entry-level kit at the demo center, but the important bits (for me anyway) were a Bladecenter H chassis with some HS21 blades fitted with Fibrechannel HBAs, and a DS3400 Fibrechannel storage array.

The IBM DS3000 series of arrays looks really promising. There are three current variants – SAS, iSCSI and Fibrechannel attached – along with an EXP3000 expansion shelf. All three arrays will scale to 48 SAS drives (14.4 TB) over four shelves, and an IBM firmware announcement I read online the other day strongly suggests they will support SATA drives in the very near future (36 TB using 750 GB disks). And the DS3000 series is cheap. Performance is pretty good, but you can only fit the controller with 1 GB of cache, which really highlights the "entry level" bit of these SANs. The SAS-attached option is compelling as well: you get SAN functionality, very close to FC levels of performance (3 Gbps peak HBA throughput for SAS compared to 4 Gbps for FC), at a fraction of the cost. The DS3200 allows 6 single-connect or 3 dual-connect hosts, and as soon as IBM get round to releasing a SAS switch, that limit will disappear.

I'd used the DS3400 management software in the past, so setting up a LUN within the existing array took about 5 minutes, including creating the host-to-WWN mappings; and I had two Xenserver Enterprise installs already set up on a matched pair of blades from a previous attempt at this with the DS3300.

Xenserver Enterprise supports shared storage, but as of version 4.0.1 it only officially supports NFS or iSCSI shared storage. I'd had problems getting the open-iscsi software stack that Xenserver ships with to communicate successfully with the DS3300, however, and ran out of time. FC shared storage, on the other hand, is just not supported at all yet. There's a forum article explaining Xensource's position on the support, which also links to a knowledge base article describing how shared storage works in Xenserver. The article was pulled with the release of 4.0.1:

“We took it down because, with the level of testing and integration we could do by initial release of 4.0, we couldn’t be any further along than a partial beta. There are business reasons we couldn’t ship a product as released while describing some of the features as “beta”, and it is hypocritical for us to officially describe a way of using the product yet describe that as “unsupported.” For that reason, until we are ready to release supported shared Fibre Channel SRs, we’re not going to put the KB article back up.”

The article describes the overall setup you have to have in place for FC shared storage to work – namely array, LUN and zoning management on the SAN and HBA side, and locating the device node that maps to the corresponding LUN. Then there are two commands to run on one node of your Xenserver pool, and your shared storage system is up and running.

At this point it took about another 5 minutes to create a Debian Etch domU, and verify that live migration between physical hosts was indeed working.
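
From the CLI the equivalent test is a one-liner; the VM and host names here are just examples:

[code]
xe vm-migrate vm=etch-test host=blade2 live=true
[/code]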

I set up a slightly more complicated scenario, which I'm planning on using with some other software to demonstrate an HA / DR environment, but which also enabled me to do one of those classic cheesy live migration "but it still works" tests – I connected a Wyse thin client through to a terminal server domU, migrated the TS between physical hosts, and demonstrated that the terminal session stayed up. A ping left running during this time showed a brief bump of about 100 ms during the final cutover, but that's possibly attributable to forwarding-path updates in the bridge devices on the Xenserver hosts. Either way it works well. Moving a win2k3 terminal server took about 15 seconds on the hardware I was working with.

Overall, I'm quietly encouraged by this functionality and its ease of setup. I'm perhaps a bit underwhelmed by it too – it ended up being such a non-event to get it working that I'm annoyed I didn't get round to setting up the demonstration months ago.


VMWare responds to XenSource, MS Threat?

Five weeks or so ago VMWare announced massive price cuts for their existing products. Today I read that VMWare are redefining their product range completely.

The article has more detailed information, but the basic gist is that VMWare is releasing a new entry-level SKU, not manageable via VirtualCenter (and thus not suited for the enterprise, but entirely suited for sites that just want one or two hosts), with SATA support. SATA support has traditionally been missing from VMWare's products, and its presence was one of the selling points for other virtualisation vendors such as Xensource and VirtualIron.

I asked in my last post, "Has VMWare seen the writing on the wall?" I guess it has. It's not end-game yet, of course: Gartner analysts peg virtualisation use at under 10% of the server market (I don't have an exact reference for that, but there are lots of references to 6% or so during late 2006). This is merely the point where VMWare realises that it's not enough to be the big fish in the pond – Citrix and Microsoft are now in there too.


iSCSI for SCSI device passthrough under Xen Enterprise

I recently had to add a SCSI tape drive to a Xen Enterprise server, and needed to use BackupExec under one of the Windows domUs as the backup software. Luckily, Greig did this a few months ago using the iSCSI Enterprise Target, and put his notes up on the WLUG Wiki here.

I hit one problem, however: when using NTBackup to test the system, it would write about 20 MB to tape, then fail. Greig pointed out he'd only ever used BackupExec, which was the software that was going to be used in the end anyway, so I installed that and it worked fine.

Also, going one step further, it's possible to use the same technique to push USB mass storage devices over iSCSI to domUs. As Xen Enterprise doesn't yet have a nice way of passing USB mass storage devices through to domUs, this is a very good interim solution:

[code]

Target iqn.2007-04.com.example:tape0
    Lun 0 H=4,C=0,I=0,L=0,Type=rawio
    Type 1
    InitialR2T No
    ImmediateData Yes
    xMaxRecvDataSegmentLength 262144

Target iqn.2007-04.com.example:usb0
    Lun 1 Path=/dev/sdb,Type=fileio
[/code]

As the config refers to the SCSI device node (/dev/sdb here), which will change if you unplug and re-insert a USB block device, it makes a lot of sense to use udev to map your USB mass storage device to a stable /dev entry.
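
As a sketch of what that udev rule might look like (the serial string is a placeholder; pull the real one from udevinfo or udevadm info, and the rule file name is arbitrary):

[code]
# /etc/udev/rules.d/90-backup-usb.rules
# give the backup USB disk a stable name, so the IET config can use Path=/dev/backup_usb
KERNEL=="sd?", ENV{ID_SERIAL}=="Kingston_DataTraveler_ABC123", SYMLINK+="backup_usb"
[/code]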

I don’t have a feeling for how robust this is yet.


Has VMWare seen the writing on the wall?

As seen here, VMWare is set to slash the pricing for VMWare ESX and VirtualCenter for the SME market – three copies of ESX and a limited copy of VirtualCenter, for the nice round price of $3k US – a significant price cut. This is hot on the heels of XenSource’s new pricing and acquisition by Citrix (XenSource also has a current promotion of “buy one, get three” as an introduction to the new XE 4.0 pricing model.)

Has VMWare seen the writing on the wall? They look a bit defensive at times. Or maybe VMWare has just realised they’ve neglected the SME market for too long. In NZ especially, most businesses are in the SME bracket, and just can’t afford VMWare’s prices – it’s cheaper to buy a new server machine in most cases.

Xen Enterprise's new price tag of $2.5k US may be outside their reach as well, but most SMEs don't need the enterprise features present in XenSource's flagship product. XenServer, at $750 US, fits right in the sweet spot, along with VirtualIron's flagship offering.


Resizing domU partitions with Xen Enterprise 3.2

I recently had to resize the root partition of a sarge domU under Xen Enterprise 3.2. The various xe tools and the Xen Enterprise console let you resize the disk image, but they don't resize the filesystem for you. If it's not the root filesystem you can unmount it and resize it yourself, but you can't resize a mounted filesystem, and the root filesystem is always mounted while the domU is running.

There are a couple of XenSource forum posts dealing with this, but they seem to be oriented around Xen Enterprise 3.1, as they don't work with 3.2. The basic premise is still fine – find the device relating to the disk image on the Xen Enterprise dom0, and resize it – however the mechanics have changed.

The block devices you get access to under the XE dom0 are actually whole disk images, with an intact partition table. Attempting to work on them directly will fail, because the standard partition resizing tools expect a partition, not a disk image. So you have to use losetup with an offset to bind the partition inside the image to a new loopback device; then you can start working on it.

It’s possible there is a much easier way to do this than below, and I’m sure it’s probably changed with the new storage manager codebase under XE 4.0, but I need to document this somewhere.

First up, shut down the domU, then resize xvda, either through the XE 3.2 console or via the xe command:

[code] xe vm-disk-resize vm-name=Test disk-name=xvda disk-size=11000[/code]

Next, get the right UUIDs from XE:

[code]

# xe host-vm-list
NAME: Test
uuid: 184bea51-9733-43b0-af02-fa6493235218
state: DOWN

# sm info

…..

——> VDI ID: [184bea51-9733-43b0-af02-fa6493235218.xvda]
Name: [NULL]
Descr: [DESCR]
Device: [/dev/VG_XenStorage-1ae909da-1302-45db-939b-1a2e26e78573/
LV-184bea51-9733-43b0-af02-fa6493235218.xvda]
Shareable: [0]
Virtsize: [11000 MB]
Parent UUID: [1ae909da-1302-45]
State: [0]
Type: [0]
[/code]

The "Device" path in there gives us the appropriate device on the dom0 host. In order to keep this displaying nicely, I'm going to refer to it as /dev/path/to/xvda from now on. This is a disk image, not a partition image, which we can prove with the following command:

[code]
# fdisk /dev/path/to/xvda -ul
Disk /dev/path/to/xvda: 11.5 GB, 11534336000 bytes
255 heads, 63 sectors/track, 1402 cylinders, total 22528000 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot            Start        End     Blocks   Id  System
/dev/mapper/xvda1         63   10474379   5237158+   83  Linux
[/code]

Note that /dev/mapper/xvda1 above is probably a long string with some UUIDs embedded in it; I've shortened it for readability.

This fdisk output shows us that at offset 63 there is a partition, which has 5237158 blocks (i.e. the 5 GiB partition configured by default). We can use this offset with the losetup command to bind a device node to the partition inside this disk image. I use /dev/loop99 below because I'm sure it's not already in use. Disk sectors are 512 bytes, and this partition starts at sector 63, so we need an offset of 63 * 512 bytes:

[code]
# losetup /dev/loop99 -o $((63 * 512)) /dev/path/to/xvda
[/code]

Done! We now have a device at /dev/loop99 that corresponds to the xvda1 partition inside the xvda block device. We can run a filesystem check on this:

[code]
# e2fsck -f /dev/loop99
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/loop99: 35722/655360 files (0.3% non-contiguous), 132621/1309289 blocks
[/code]

The partition xvda1 is still only 5 GB in size, however, so you'll need to resize the partition using parted:

[code]# parted /dev/path/to/xvda
GNU Parted 1.6.19
….

Using /dev/path/to/xvda
(parted) print
Disk geometry for /dev/path/to/xvda: 0.000-11000.000 megabytes
Disk label type: msdos
Minor Start End Type Filesystem Flags
1 0.031 5114.443 primary ext3
(parted) resize
Partition number? 1
Start? [0.0308]?
End? [10997.6216]?
(parted) p
Disk geometry for /dev/path/to/xvda: 0.000-11000.000 megabytes
Disk label type: msdos
Minor Start End Type Filesystem Flags
1 0.031 10997.622 primary ext3
(parted) q
[/code]

And now we finally run another fsck and then resize2fs, before tearing down the loop device mapping:
[code]

# e2fsck -f /dev/loop99
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/loop99: 35722/1409024 files (0.3% non-contiguous), 156364/2815383 blocks
# resize2fs /dev/loop99
resize2fs 1.35 (28-Feb-2004)
Resizing the filesystem on /dev/loop99 to 2815992 (4k) blocks.
The filesystem on /dev/loop99 is now 2815992 blocks long.
# losetup -d /dev/loop99

[/code]

Now you can restart the domU, log in and check your disk space:

[code]
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 11084028 448004 10636024 5% /
[/code]

If you boot the domU and get an fsck failure, and running fsck tells you the filesystem is larger than the partition, make a note of how many blocks fsck thinks the filesystem should be, destroy the domU, rerun losetup and e2fsck, then run resize2fs again specifying the number of blocks fsck reported. Run e2fsck once more, losetup -d, then start the domU again.
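
In other words, the recovery pass is just a repeat of the steps above with an explicit size handed to resize2fs; the block count below is only an example (use whatever fsck reports):

[code]
losetup /dev/loop99 -o $((63 * 512)) /dev/path/to/xvda
e2fsck -f /dev/loop99
resize2fs /dev/loop99 2815992    # size in filesystem blocks, as reported by fsck
e2fsck -f /dev/loop99
losetup -d /dev/loop99
[/code]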


Xensource and VMWare performance comparison

I was discussing Xensource with a potential client a few weeks ago, and was fairly surprised when they pulled out a performance comparison of VMWare and Xen which showed VMware massively outperforming Xen in several tests. On further inspection, it was fairly obvious that VMWare's tests used the open-source version of Xen, and were running Windows-based tests on it. This might be a fairly typical enterprise environment, but they weren't really playing a fair game – Xensource's product range includes a PV driver set for Windows which drastically improves performance. This driver set isn't available under the open-source version of Xen.

The comparison the client had been given also had some other data included, some of which was misleading, and some of which was just plain wrong. It included statements such as ‘Xen does not support live migration’ when it does (and what’s more, the open source version supports it natively, so it’s not a bolt-on to the product), and a point stating that Xen had no management consoles available on one page, and a price comparison of VMware, Xensource and VirtualIron on the next. Xensource provide a commercial management console for Xen. Huh.

After a bit of digging, I found the original VMware-published report that this comparison was drawn from. Yes, VMware didn't run a fair test, and yes, given that unfair test, Windows under VMWare ESX massively outperforms Xen in some areas, primarily I/O-related ones.

We mentioned to Xensource at about that time that this report was being circulated. They must have been getting the same heads-up elsewhere, because within a few days they had published a performance comparison of Xen Enterprise and VMWare ESX themselves, and even gotten approval from VMWare to publish it! Roger Klorese links to the report from his blog. The report is here.

The report shows that the gap between VMWare ESX and Xen Enterprise performance is negligible in most cases, and that Xen Enterprise outperforms VMWare ESX considerably in some areas. It's definitely a much closer race than VMWare's report would have you believe.