Categories
advocacy General NSP WLUG

Puppet – a system configuration tool

I saw a couple of blog posts about puppet recently. I’ve been meaning to investigate cfengine for a while now, and puppet was a new angle on the same problem. From the intro:
Puppet is a system configuration tool. It has a library for managing the system, a language for specifying the configuration you want, and a set of clients and servers for communicating the configuration and other information.

The library is entirely responsible for all action, and the language is entirely responsible for expressing configuration choices. Everything is developed so that the language operations can take place centrally on a single server (or bank of servers), and all library operations will take place on each individual client. Thus, there is a clear demarcation between language operations and library operations, as this document will mention.

It’s still very new and under active development, and seems to have been designed with fixing some of the hassles of cfengine in mind. It is written in Ruby and has a reasonably powerful config language, and you can use embedded Ruby templates to dynamically build up content to deploy. I have no particular preference for Ruby – in fact, this is the first time I’ve used the language. Configuration is stored in a manifest on the puppetmaster server, and is based on the notions of classes and nodes. A node can inherit from multiple classes, or can merely include a specific class if certain criteria are met. Subclasses can override specific details of a parent class.
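
As a rough sketch of what a manifest looks like (the class, package, and node names here are hypothetical, and the syntax is as I understand it from the current docs):

[code]
# site.pp – a minimal sketch, not a tested config
class ntp {
    package { "ntp": ensure => installed }
    service { "ntp": ensure => running }
}

node "web1.example.com" {
    include ntp
}
[/code]
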
It makes use of a library called facter (also written by Reductive Labs) to pull ‘facts’ about the client hosts, and these can be used in the manifests to control configuration. For example, it will work out the Linux distribution you are running and store this in a variable, and you can use this to determine which classes to apply. It is fairly easy to extend facter to support additional facts – so I added support for working out the Debian and Ubuntu release number and codename – e.g., 3.1 and sarge, or 6.10 and edgy.
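
Facts are easy to inspect from the shell, which makes debugging manifests much simpler. For example (the second fact is one of my additions, so its name here is purely illustrative):

[code]
$ facter operatingsystem
Debian
$ facter debian_codename
sarge
[/code]
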
There is a dependency system in place, so you can specify a rule to ensure that a service is running which depends on the package being installed. If you use puppet to manage the config file for the service, you can have the service subscribe to that file, so that if a change to the file is pushed out via puppet, it will restart the service for you as well.
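
A hedged sketch of how those pieces fit together (the file paths, fileserver URL, and service names are illustrative):

[code]
# assumes package { "apache2": } is declared elsewhere
file { "/etc/apache2/apache2.conf":
    source => "puppet://puppetmaster/files/apache2.conf",
}
service { "apache2":
    ensure    => running,
    require   => Package["apache2"],
    subscribe => File["/etc/apache2/apache2.conf"],
}
[/code]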

Installing packages is handled well, with the option for seeding debconf if appropriate. Puppet understands several package management formats, including apt, rpm and yum.
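
For example, something along these lines should install a package with its debconf answers preseeded – a sketch assuming the apt provider supports the responsefile parameter, with illustrative names and paths:

[code]
package { "sun-java5-jdk":
    ensure       => installed,
    responsefile => "/var/cache/debconf/sun-java5-jdk.seed",
}
[/code]
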
I’m by no means an expert with cfengine, but this feels a lot nicer to use. After my initial testing, I see no reason so far not to deploy this at work. I’ll try a test deployment on some systems, and if that works out I’ll push it the whole way out.

Categories
Uncategorized

mp3gain

I had a couple of albums of mp3s that were encoded with really low gain. Rather than re-encode the mp3s (which wouldn’t have taken too long), I had a look for tools that would let me normalise these files. I didn’t have a lot of luck, and then yesterday I saw a link to mp3gain pop up in my aggregator.

Mp3gain works by analysing the mp3s passed to it, then tweaking the mp3 metadata to adjust the gain. It doesn’t re-encode the mp3s. It can normalise the gain on a single mp3, or work out the ‘normalised’ gain on an entire album (or repository) and tweak each mp3 to bring it in line with the others.  To be honest, I didn’t even know mp3s had metadata you could tweak to do this, so it didn’t occur to me this was an option.

I ran it across the albums in question, and it decided they were consistent within themselves. Rather than run it across the entire repository, I increased the gain by a set 3dB, and then after listening to the resulting output, another 3dB. Maybe one day I’ll back up my mp3s and run it across the entire repository. This works pretty well for now.
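
For reference, the invocations were along these lines (hedged from memory – check mp3gain’s help output, as the -g option works in 1.5dB steps rather than whole decibels):

[code]
# analyse a whole album and normalise the tracks relative to each other
mp3gain -a album/*.mp3

# or skip the analysis and apply a flat gain change:
# each step is 1.5dB, so -g 2 gives the 3dB boost used here
mp3gain -g 2 album/*.mp3
[/code]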

Categories
Uncategorized

Linux QOS and monitoring

I implemented QOS for inter-office phone calls for a client today using tc and diffserv. The phones and phone systems were configured by the supplier to set “Diffserv 46”, as their technician called it, which is also known as the EF PHB, or Expedited Forwarding Per-Hop Behaviour. This was made slightly trickier by having to re-apply the DSCP on outbound packets due to tunnel traversal. In the end I decided it was easier to use iptables to do this, rather than trying to get tc to do it via dsmark:

[code]
/sbin/iptables -t mangle -A OUTPUT -d a.b.c.d -j DSCP --set-dscp-class EF
[/code]
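
To verify the rule is actually matching traffic, the packet counters in the mangle table are the easiest check:

[code]
/sbin/iptables -t mangle -nvL OUTPUT
[/code]
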
Actually applying the shaping is relatively straightforward using dsmark and tcindex:
[code]
#!/bin/sh
# Create root DiffServ qdisc, attach to proper network interface
# This also uses any existing DSCP flags within the packet as the tcindex
tc qdisc add dev eth2 handle 1:0 root dsmark indices 64 set_tc_index
tc filter add dev eth2 parent 1:0 protocol ip prio 1 tcindex mask 0xfc shift 2
#
# Create class-based queuing discipline to hold the two classes
tc qdisc add dev eth2 parent 1:0 handle 2:0 cbq bandwidth 10Mbit cell 8 avpkt 1000 mpu 64
#
# Create EF class, create queuing discipline for EF, create filters
tc class add dev eth2 parent 2:0 classid 2:1 cbq bandwidth 10Mbit rate 5Mbit avpkt 40000 prio 1 bounded isolated allot 1514 weight 1 maxburst 30
tc qdisc add dev eth2 parent 2:1 tbf rate 5Mbit burst 2Mbit limit 5Mbit
tc filter add dev eth2 parent 2:0 protocol ip prio 1 handle 0x2e tcindex classid 2:1 pass_on
#
# Create BE class, create queuing discipline for BE, create filters
tc class add dev eth2 parent 2:0 classid 2:2 cbq bandwidth 10Mbit rate 3Mbit avpkt 1000 prio 7 allot 1514 weight 1 maxburst 21 borrow split 2:0 defmap 0xffff
tc qdisc add dev eth2 parent 2:2 red limit 50Kbit min 10Kbit max 30Kbit burst 20 avpkt 1000 bandwidth 3Mbit probability 0.4
tc filter add dev eth2 parent 2:0 protocol ip prio 2 handle 0 tcindex mask 0 classid 2:2 pass_on
[/code]
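
For a quick one-off check that traffic is landing in the right classes, tc itself can dump per-qdisc and per-class counters:

[code]
tc -s qdisc show dev eth2
tc -s class show dev eth2
[/code]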

I then decided I needed a way to monitor whether this was actually working on an ongoing basis. A quick Google search turned up http://www.docum.org/docum.org/monitor/, which has a couple of different tc monitors. The author states he is no longer working on them, but they work well enough, and the iproute2+tc suite hasn’t exactly changed much lately anyway:
[code]
./monitor_tc_top_bis.pl
18:52:18 up 30 min, 3 users, load average: 0.10, 0.08, 0.08
                         Interval             Monitor     Monitor   Total
Dev   Classid  Priority  Speed      Bytes     Speed       Bytes     Comment
--------------------------------------------------------------------------------
eth2  2:       N/A       64.18Kbps  2.47MB    36.69Kbps   N/A
eth2  2:1      1         6.03Kbps   86.27KB   1.25Kbps    N/A
eth2  2:2      7         64.18Kbps  2.47MB    36.69Kbps   N/A
[/code]

Categories
Uncategorized

XenSource release Xen Server, Xen Express

XenSource have announced a couple of new commercial offerings to go along with their Xen Enterprise release. While Xen itself is open source, XenSource have decided to make commercial packages offering a GUI management console, more advanced management APIs, and perhaps most importantly, PV drivers for Windows guests.

The full suite of products now looks like:

  • Xen Enterprise. Unlimited guests, multi-OS. Pricing starts at $498 US for a dual-socket system.
  • Xen Server. 8 Windows guests, 8 GB of RAM. $99 US annual subscription, dual-socket systems only.
  • Xen Express. 4 guests, multi-OS, 4 GB of RAM. Free.

All of these products, including the free version, have the PV drivers for Windows. There is also a seamless upgrade path between the products, so you can do a test deployment with Xen Express, then purchase Xen Server or Xen Enterprise as you need.

There are some differences other than those listed above. Xen Express will not allow multi-host management. The other two products will – this means you can log into multiple servers from the same console at the same time, and get a holistic view of your virtualised servers. Also, while XenSource has yet to release any HA/DR, live migration or integrated backup plugins, it is unlikely that these will be able to run on Xen Express.

http://www.xensource.com/

Categories
Uncategorized

XenSource University

Work has finally calmed down enough that I’m able to write a post about this. Last week I flew up to San Jose to attend the first XenSource University. This was a two-day event, the first day of which had a series of business and technical presentations from XenSource and some of their strategic partners (Intel, Entisys, etc.). The second was a split between business/strategy one-on-one meetings and a full-day technical training course on the Xen Enterprise product, culminating in an exam for accreditation.

The technical components of the event were towards the lighter side of a technical forum, but there was a wide range of people there, from those who have been using Xen directly for a long time (like myself), to those who are comfortable installing and managing VMware but have never touched Linux, to those who are merely on the sales/demo teams of their companies. There was a good presentation from Intel discussing the VT enhancements and the future of VT, and the XenSource roadmap covered some aspects in quite a bit of detail. There were a couple of interesting announcements too, which will be coming out later in the month.

I think the two most important aspects of this event for me were the networking with other users of Xen and XenSource products, as well as meeting more of the XenSource team; and discovering some limitations that are inherent in Xen itself. Xen apparently doesn’t support more than 4 NICs on the host, which is of major concern to anyone used to deploying VMware ESX / Enterprise, which apparently needs about 8 GigE NICs just to operate. This will be ameliorated somewhat by the better performance you can expect to see under Linux/Xen, however there are still enough situations in which you might want more than 4 NICs. XS doesn’t support bonding or VLANs just yet either, although both Linux and Xen do – it’s just not in the UI. This will be fixed later.

We also identified an efficiency problem within the bridging system. You should be able to send data between Xen guests at relatively high speeds with an internal-only bridge, however we didn’t have much luck making this happen. This could be related to memory bandwidth issues, as the boxes we were using were not overly flash systems. Performance also dropped almost linearly as the MTU increased. These bugs might be in Xen, in the Linux bridging code, or in the PV ethernet driver being used.
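
A rough way to quantify this, assuming iperf is installed in both guests (the hostnames here are illustrative):

[code]
# on one guest
iperf -s

# on the other, across the internal bridge
iperf -c guest1

# bump the MTU on the guest interface and re-test
ifconfig eth0 mtu 9000
iperf -c guest1
[/code]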

Patrick Naubert from Xelerance, the custodians of the Openswan project, also pointed out that entropy is basically non-existent inside a Xen guest. This is a problem for anyone wanting to do crypto, of course. It shouldn’t be hard to fix if you are running Xen-aware kernels, so hopefully we’ll see a fix for this soon.
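
You can see the problem from inside a guest easily enough, as the kernel exposes the size of its entropy pool via proc:

[code]
cat /proc/sys/kernel/random/entropy_avail
[/code]

On hardware with real interrupt sources this figure is usually comfortably above zero; inside a guest it sits at or near zero.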

Categories
Uncategorized

Mailserver upgrades

I have been upgrading our MTA infrastructure at work from qmail and vpopmail to a more robust system built using exim4, cyrus, and OpenLDAP for authentication and configuration data. I’ve been running a similar setup for ages on meta.net.nz, so I took the opportunity to do some work on the codebase as well.

SOAP API

The backend has had a SOAP API for a while now, but it was pretty basic. I just used NuSOAP and PHP to create the SOAP server, and didn’t bother with WSDL. I decided it would be a good idea to get NuSOAP to provide decent WSDL so I could do introspection at the client end, and in doing so realised just how much work NuSOAP does for you if you let it. It’ll automatically marshal PHP arrays into the right things so they appear as you expect on the other side.
I’m still having problems getting some complex types working, but otherwise it’s going well. As well as a bunch of single-purpose Python scripts (using SOAPpy this time), I have an API wrapper script which lets you call any of the functions exposed by the API from the command line. With WSDL providing function arguments, return values and function descriptions, it even provides useful help. I foresee this being most useful for debugging or quick modifications, or maybe inside a wrapper to do more complicated tasks, though in that case it’s probably better to call the SOAP functions directly.

Secure Replicated LDAP

I’ve been doing replicated LDAP inside a Xen multiple-virtual-server network, but I decided that with a mail infrastructure it is worth using SSL to secure the replication between hosts, so I set up a CA for this purpose. Replication over SSL is no harder than normal replication, which I’ve done often enough now that it’s pretty easy to handle. Having this infrastructure in place means I can host a backup MX offsite and export my entire configuration to it via LDAP, so it can be just as effective as my onsite MXes.
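
A quick way to confirm a replica is answering over SSL (the hostname and base DN here are illustrative):

[code]
ldapsearch -x -H ldaps://mx2.example.com -b "dc=example,dc=com" -s base
[/code]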

Spam / Malware scanning

I’m also taking the opportunity to work out some “best practices” for SpamAssassin and so on. Greylisting is something that comes up fairly often, so I’m trying to find a decent greylisting implementation that will scale across multiple hosts, potentially including offsite hosts, and will work sensibly within exim. A lot of them seem fairly immature, or rely on exim talking directly to a database. The latter point might not be a huge concern, but I’d rather have a system I can submit an email to – or better still a greylist tuple – and have it return succeed or fail. There are a large number of implementations, however, so this bit is taking a while to work through. Tools like AWGL (don’t have a link handy) or IMMDT.pm (Perry’s original concept for AWGL) are interesting too.

Exim

And of course, I get to go over my exim configuration, which started out as an exim 3.3 config and has been upgraded over the years to a 4.6 config, pull out any quirks, and add in all the new features people are using. Even fairly trivial things such as recipient verification callouts (checking with the destination server, possibly local, whether the username exists – if it doesn’t, reject the email at SMTP time) have made a huge difference already.
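
The exim side of that is essentially a one-liner in the RCPT ACL – a sketch along these lines, with the timeout and the defer_ok option being a matter of taste:

[code]
acl_check_rcpt:
  # verify the recipient exists (with a callout to the destination
  # server if it is remote); unknown users are rejected at SMTP time
  require verify = recipient/callout=30s,defer_ok
[/code]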

So far

My new server is currently only running as a secondary MX for a couple of domains, and during that time the primary hasn’t gone offline at all. This means that approximately all of the mail it is seeing is spam. It’s dropped about 70% so far, and about another 10% of the email it has processed has been locally generated from various things happening on the system. That figure is quite high, so I’ll have to look at it and work out why it’s being sent (and where it’s going, as it seems it’s not ending up in the public folder I thought it would).

Still to go

I have to rebuild the IMAP/POP infrastructure somewhat, and that’s the worst bit of the job as it involves changing passwords for the hundred or so clients who connect directly to our server. My overall infrastructure will end up with a couple of inbound MX servers, a POP/IMAP server, possibly a separate server for spam/virus scanning (although I might look at having these services local to each MX and maintaining configs/databases between them), and an outbound MTA. This will hopefully alleviate some issues we’ve had where a lot of outbound email has effectively stopped inbound email due to load on the MTA.

Categories
MetaNET Tool of the Week WLUG

Restricting ssh password auth by group or shell

Matt Brown asked if I could think of any way to allow a certain group of users to scp into a host and use a password, while requiring a valid key pair for most other users. Perry suggested a solution to this a while ago, so I sat down and had a quick look at it, and got it working.

I configured sshd such that:

[code]

PasswordAuthentication no
ChallengeResponseAuthentication yes
UsePAM yes
[/code]

This bypasses direct /etc/passwd auth, but allows standard PAM-based auth via the ChallengeResponseAuthentication mechanism. This will allow everyone to log in with a password if possible, so we need to configure PAM. For this, I used the pam_listfile module, checking that the user has a particular shell, /usr/bin/scponly:

[code]

echo "/usr/bin/scponly" > /etc/scpshells

[/code]

I then edited /etc/pam.d/sshd:

[code]

auth required pam_env.so
# the key line: only users whose shell is listed in /etc/scpshells
# pass this check; everyone else fails the auth stack and must use a key
auth required pam_listfile.so item=shell sense=allow file=/etc/scpshells onerr=fail
auth sufficient pam_unix.so likeauth nullok
auth required pam_deny.so
auth required pam_nologin.so

session required pam_limits.so
session required pam_unix.so

account required pam_unix.so

password required pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
password sufficient pam_unix.so nullok md5 shadow use_authtok
password required pam_deny.so

[/code]

I probably don’t need all of that in the sshd PAM snippet, but I just dumped the contents of the included files into it to make editing easier.

To test this I added /bin/bash to /etc/scpshells, and verified that I could ssh in using a password. I then removed it, and verified that I could no longer ssh in with a password. Combine this with a suitable shell (/usr/bin/scponly), and I can create users that can scp in with a password – or with a key if they care – but cannot get a local shell; all other users cannot authenticate via PAM, and so must provide a valid key.
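
When testing, it’s handy to force the ssh client down the password path so a cached key doesn’t mask the result:

[code]
ssh -o PubkeyAuthentication=no -o PreferredAuthentications=keyboard-interactive user@host
[/code]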

Categories
General WLUG

Backspace in Firefox 2

As part of my upgrade to Edgy the other day, Firefox was upgraded to 2.0. It’s been upgraded every day since then, and I think it’s finally running a real 2.0 build:

[code]

$ apt-cache show firefox | grep Version
Version: 2.0+0dfsg-0ubuntu3
[/code]

The biggest interface changes I’ve noticed in Firefox 2 so far are some cosmetic changes to the tab panel layout, which I’m mostly used to now, and the fact that the Backspace key no longer steps backwards through your history.

This behaviour is controllable via about:config, however. Setting the following will revert to the old behaviour:
[code]

browser.backspace_action = 0
[/code]

Categories
General WLUG

Edgy Eft RC1 announced

After seeing this announcement for the Edgy Eft RC1 release, I decided to upgrade my Dapper laptop to Edgy. Thanks to the NZ mirror already being up to date, it didn’t take long to download the 700MB of packages that I needed.

I’d like to say the upgrade went smoothly, but it didn’t. Part of that is my own fault – I accidentally used apt-get instead of aptitude to handle the upgrade, so a lot of packages were missed and some dependency resolution was fumbled, which meant the upgrade process broke hard along the way.

After manually removing a bunch of packages to get the upgrade to restart, repeating “aptitude dist-upgrade” about six times after it thought it’d finished (each time installing a couple of new packages), and finally rebooting one more time because I couldn’t get X to start again, it all looked good.

Except that when I logged in, GNOME didn’t appear to start. I killed X and added a new user, then logged in as them – worked fine. Tried my user – no go. I spent a long time trying to move various GNOME configs out of the way, and eventually resorted to creating a new blank homedir for myself – still wouldn’t work. So I rebooted one more time and it started working after that. Very strange.

I’d suggest waiting for the final release to upgrade, but if you do go ahead, make absolutely sure you use aptitude and not apt-get. It may also work better if you boot from the CD into its upgrade mode, but I can’t comment on that.

I would file a bug, but I’m not sure it’ll help. I can’t pin down what was wrong because I used the wrong tool to upgrade. I have a Dapper install on my desktop at home, and I’ll try upgrading that next week when I get some free time, however it’ll probably “just work” by then anyway.

New things noticed in Edgy Eft so far:

  • Firefox 2
  • Network-manager-applet has a dialup account plugin.

Yeah. It looks the same. Edgy does have new features under the hood, but I haven’t looked into those yet.

Update: Yeah, it’s called Edgy Eft, not Efty Edge.

Categories
General NSP WLUG

XenSource Xen Enterprise

I’ve been following the XenSource Xen Enterprise product for a couple of months at work. The current release ships with an install CD which preps a barebones server. It installs Linux with a Xen kernel and the Xen toolset, but doesn’t ask you many questions – the dom0 is really only there to support the hypervisor, after all. There are no options for software RAID in the installer, but that might be because software RAID isn’t considered an “enterprise” tool by some people.

Once it’s installed, you can run a Java-based console from your desktop. This connects to the Xen Enterprise server and lets you run some of the hypervisor commands as well as provision and configure domUs. XE ships with support for installing a Debian server from a template, and for installing RHEL from a network install server. Apparently it’s fairly straightforward to modify the templates or to create your own; I haven’t looked into that yet.

The console provides some monitoring of the dom0 and the domUs – network, CPU, disk and memory utilisation. The console will connect to multiple XE hosts, letting you monitor and configure your domUs across your entire network.

One other neat tool that ships with XE is a P2V migration tool. That’s Physical to Virtual migration – you run a program on your existing physical machine, and XE will create a domU suitable for it and migrate the filesystem into the new host. However, I’ve yet to use this to see how well it works.
The kicker is, of course, the pricing. XE’s pricing is available online, and it starts at $750 + $150 annual maintenance for a 2-CPU server. The big benefits of XE come in when you have multiple servers in use, so start to scale that price up accordingly. XE is also a bit limited in that you can’t do anything outside of the box yet, which means that if you want to, for example, pass a PCI device (e.g., a network card or SCSI controller) through to a specific domU, you are out of luck. This may not come up very often, or at all, but it does make it somewhat less useful.

Overall, it’s a nice enough tool. If you are looking at managing a large number of densely packed Xen servers and want to be able to quickly provision new servers, clone existing servers, and migrate guests easily between hosts, it’s probably spot on.