Linux QOS and monitoring

I implemented QoS for inter-office phone calls for a client today using tc and DiffServ. The phones and phone systems were configured by the supplier to set “Diffserv 46”, as their technician called it, which is also known as the EF PHB, or Expedited Forwarding Per-Hop Behaviour. This was made slightly trickier by having to re-apply the DSCP on outbound packets due to tunnel traversal. In the end I decided it was easier to use iptables to do this, rather than trying to get tc to do it via dsmark:

[code]
/sbin/iptables -t mangle -A OUTPUT -d a.b.c.d -j DSCP --set-dscp-class EF
[/code]
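To confirm the re-marking is actually taking effect, the DSCP can be checked on the wire. This is just a quick sketch (assuming the same a.b.c.d destination as above and an eth2 egress interface), matching the TOS byte value for EF (46 << 2 = 0xb8):
[code]
# Show only outbound packets to the far end whose DSCP field is EF
tcpdump -ni eth2 'dst host a.b.c.d and (ip[1] & 0xfc) == 0xb8'
[/code]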
Actually applying the shaping is relatively straightforward using dsmark and tcindex:
[code]
#!/bin/sh
# Create root DiffServ qdisc, attach to proper network interface
# This also uses any existing DSCP flags within the packet as the tcindex
tc qdisc add dev eth2 handle 1:0 root dsmark indices 64 set_tc_index
tc filter add dev eth2 parent 1:0 protocol ip prio 1 tcindex mask 0xfc shift 2
#
# Create class-based queuing discipline to hold the two classes
tc qdisc add dev eth2 parent 1:0 handle 2:0 cbq bandwidth 10Mbit cell 8 avpkt 1000 mpu 64
#
# Create EF class, create queuing discipline for EF, create filters
tc class add dev eth2 parent 2:0 classid 2:1 cbq bandwidth 10Mbit rate 5Mbit avpkt 40000 prio 1 bounded isolated allot 1514 weight 1 maxburst 30
tc qdisc add dev eth2 parent 2:1 tbf rate 5Mbit burst 2Mbit limit 5Mbit
tc filter add dev eth2 parent 2:0 protocol ip prio 1 handle 0x2e tcindex classid 2:1 pass_on
#
# Create BE class, create queuing discipline for BE, create filters
tc class add dev eth2 parent 2:0 classid 2:2 cbq bandwidth 10Mbit rate 3Mbit avpkt 1000 prio 7 allot 1514 weight 1 maxburst 21 borrow split 2:0 defmap 0xffff
tc qdisc add dev eth2 parent 2:2 red limit 50Kbit min 10Kbit max 30Kbit burst 20 avpkt 1000 bandwidth 3Mbit probability 0.4
tc filter add dev eth2 parent 2:0 protocol ip prio 2 handle 0 tcindex mask 0 classid 2:2 pass_on
[/code]
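For a quick one-off check that traffic is landing in the right class, the kernel's own counters can be read directly with tc (using the same eth2 interface as above):
[code]
# Per-class and per-qdisc statistics, including bytes/packets sent and drops
tc -s class show dev eth2
tc -s qdisc show dev eth2
[/code]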

I then decided I needed a way to monitor, on an ongoing basis, whether this was actually working. A quick Google search turned up http://www.docum.org/docum.org/monitor/, which has a couple of different tc monitors. The author states he is no longer working on them, but they work well enough, and the iproute2/tc suite hasn’t exactly changed much lately anyway.
[code]
./monitor_tc_top_bis.pl
18:52:18 up 30 min, 3 users, load average: 0.10, 0.08, 0.08
                           Interval    Monitor    Monitor     Total
Dev   Classid  Priority    Speed       Bytes      Speed       Bytes     Comment
--------------------------------------------------------------------------------
eth2  2:       N/A         64.18Kbps   2.47MB     36.69Kbps   N/A
eth2  2:1      1           6.03Kbps    86.27KB    1.25Kbps    N/A
eth2  2:2      7           64.18Kbps   2.47MB     36.69Kbps   N/A
[/code]


XenSource release Xen Server, Xen Express

XenSource have announced a couple of new commercial offerings to go along with their Xen Enterprise release. While Xen itself is open source, XenSource have decided to offer commercial packages with a GUI management console, more advanced management APIs, and, perhaps most importantly, PV drivers for Windows guests.

The full suite of products now looks like:

  • Xen Enterprise. Unlimited guests, multi OS. Pricing starts at $498 US for a dual-socket system
  • Xen Server. 8 Windows guests, 8 GB of RAM. $99 US annual subscription, dual-socket systems only
  • Xen Express. 4 guests, multi OS, 4 GB of RAM. Free.

All of these products, including the free version, ship with the PV drivers for Windows. There is also a seamless upgrade path between the products, so you can do a test deployment with Xen Express, then purchase Xen Server or Xen Enterprise as needed.

There are some differences other than those listed above. Xen Express will not allow multi-host management; the other two products will. This means you can log into multiple servers from the same console at the same time and get a holistic view of your virtualised servers. Also, while XenSource has yet to release any HA/DR, live-migration, or integrated backup plugins, it is unlikely that these will run on Xen Express.

http://www.xensource.com/


XenSource University

Work has finally calmed down enough that I’m able to write a post about this. Last week I flew up to San Jose to attend the first XenSource University. This was a two-day event: the first day had a series of business and technical presentations from XenSource and some of their strategic partners (Intel, Entisys, etc.); the second was split between business/strategy one-on-one meetings and a full-day technical training course on the Xen Enterprise product, culminating in an exam for accreditation.

The technical components of the event were towards the lighter side of a technical forum, but there was a wide range of people there: from those who have been using Xen directly for a long time (like myself), to those who are comfortable installing and managing VMware but have never touched Linux, to those who are merely on the sales/demo teams of their companies. There was a good presentation from Intel discussing the VT enhancements and the future of VT, and the XenSource roadmap covered some aspects in quite a bit of detail. There were a couple of interesting announcements too, which will be coming out later in the month.

I think the two most important aspects of this event for me were the networking with other users of Xen and XenSource products (as well as meeting more of the XenSource team), and discovering some limitations that are inherent in Xen itself. Xen apparently doesn’t support more than 4 NICs on the host, which is a major concern to anyone used to deploying VMware ESX / Enterprise, which apparently needs about 8 GigE NICs just to operate. This will be ameliorated somewhat by the better performance you can expect to see under Linux/Xen, but there are still enough situations in which you might want more than 4 NICs. XS doesn’t support bonding or VLANs just yet either, although both Linux and Xen do – it’s just not in the UI. This will be fixed later.
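In the meantime, VLANs can be set up by hand in dom0 and presented to guests as ordinary bridges. A rough sketch (the NIC name, VLAN ID, and bridge name below are only illustrative):
[code]
# Tag VLAN 100 on the physical NIC, then bridge it so guest vifs can attach
vconfig add eth0 100
brctl addbr xenbr100
brctl addif xenbr100 eth0.100
ip link set eth0.100 up
ip link set xenbr100 up
[/code]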

We also identified an efficiency problem within the bridging system. You should be able to send data between Xen guests at relatively high speeds over an internal-only bridge, but we didn’t have much luck making this happen. This could be related to memory bandwidth issues, as the boxes we were using were not particularly powerful systems. Performance also dropped almost linearly as the MTU was increased. These bugs might be in Xen, in the Linux bridging code, or in the PV ethernet driver being used.
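A rough way to reproduce that kind of inter-guest throughput test (the guest address and interface name here are hypothetical) is with iperf across the internal-only bridge, re-running at different MTUs:
[code]
# On guest A: start an iperf server
iperf -s

# On guest B: measure throughput to guest A, then bump the MTU and compare
iperf -c 192.168.100.1 -t 30
ifconfig eth1 mtu 9000
iperf -c 192.168.100.1 -t 30
[/code]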

Patrick Naubert from Xelerance, the custodians of the Openswan project, also pointed out that entropy is basically non-existent inside a Xen guest. This is a problem for anyone wanting to do crypto, of course. It shouldn’t be hard to fix if you are running Xen-aware kernels, so hopefully we’ll see a fix soon.
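The shortage is easy to see from inside a guest by checking the kernel’s entropy pool; a value near zero means reads from /dev/random will block:
[code]
# Available entropy in bits; on an idle Xen guest this tends to sit near zero
cat /proc/sys/kernel/random/entropy_avail
[/code]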