Linux iSCSI stacks and multiple initiators per target LUN

I’ve used a few hardware-based iSCSI stacks as XenServer shared storage backends, but never had spare hardware to run up software-based stacks. This is rather backwards from the usual way people test things, I guess, but it’s how it worked out for us.

However, we’re now getting some new hardware for internal use – a couple of frontend servers and a storage server, apparently – and we’re going to run a software-based iSCSI stack on it. We’ve had a look at some of the commercial offerings – SANmelody, Open-E and so on – but I’d much rather not spend money where it’s not needed. This iSCSI backend is going to serve one or two LUNs, shared to a static number of hosts, and that’s it.

I’d steered away from the various open-source iSCSI target stacks because it wasn’t clear whether they supported multiple initiators accessing a single LUN concurrently. This surprised me somewhat – it seemed like it should just work – but we kept getting caught out by people asking about the “MaxConnections” parameter for IETd, which sounds like it means “maximum number of initiators to this LUN”, and which has a rather depressing note beside it stating that the only valid value is “1” at this stage.
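For concreteness, a minimal IETd target definition looks something like the sketch below – the IQN, backing device and CHAP credentials are placeholders I’ve made up. Note how MaxConnections sits right alongside the LUN definition, which is what makes that note look so alarming:

    # /etc/ietd.conf – hypothetical example; IQN, path and credentials are made up
    Target iqn.2008-01.com.example:storage.lun0
        # One LUN backed by a block device
        Lun 0 Path=/dev/vg0/xenlun,Type=fileio
        # Optional CHAP credentials initiators must present
        IncomingUser xenuser secretpassword1
        # The parameter in question – only a value of 1 is accepted
        MaxConnections 1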

This didn’t sit right with me, though – surely there are lots of people using fully open-source iSCSI systems. All the talk about iSCSI being a cheap (free!) SAN alternative can’t just be referring to people consolidating disks but still allocating single-use LUNs. I’ve found plenty of references to people using software iSCSI targets with Xen as a shared storage backend.

And, of course, it’s not right[1]. The IETd “MaxConnections” parameter refers to the number of connections a single initiator can make within a single session to a target – in other words, whether the iSCSI layer itself can do multipath I/O over several connections in one session. IETd doesn’t support that, but it has nothing to do with how many initiators can log in to a LUN. This post to iscsi.iscsi-target.devel clears things up quite nicely, but it took me a damned long time to find. So, hopefully, this will help someone else answer this question:

1) multiple ini access different targets on same iet box at same time.
no data concurrency issue. the performance totally depends on your HW.
of course, IET can be improved to support large # of ini better

2) multiple ini access same targets on same iet box at same time. has
data concurrency issue here, so need a cluster file system or similar
system at client side to coordinate.

3) one ini access different targets on one iet box. it will create
multiple sessions and no data concurrency issue here. performance issue
depends on HW.

all these are MS&OC/S (Multiple Sessions & One Connection per Session)

4) one ini access same target on one iet box.

it might try to use multiple connections in one session (MC/S, Multiple
Connections per Session), but IET does not support it and in parameter
negotiation, IET sticks to MaxConn=1.

it might try to create multiple sessions with same target (still one
connection per session), which is allowed. usually this is controlled by
client software, for example, linux multi-path.

I read “multiple ini[tiator] access same targets on same iet box at same time” to mean exactly the scenario I’m looking at, and the only caveat is the filesystem issue, which XenServer deals with. And it clarifies the point about MaxConnections too.
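In practice that scenario just means each host logs in to the same target independently, each with its own session. A rough sketch with open-iscsi, run on each host – the portal address and IQN are placeholders matching the hypothetical config above:

    # Discover targets exported by the storage box
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10

    # Log in – each host gets its own session to the same target
    iscsiadm -m node -T iqn.2008-01.com.example:storage.lun0 -p 192.168.0.10 --login

Each login is a separate session (MS&OC/S in the post’s terms), so IETd negotiating MaxConn=1 never gets in the way; the data concurrency problem is left to whatever sits on top, which here is XenServer’s shared storage layer.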

[1] That said, I haven’t tested this properly yet. I ran up IETd on a test box and connected OS X and Linux to it concurrently, but while I could format the disk from Linux I couldn’t mount it for some reason. OS X saw it fine. I’m not sure whether that was just some transient weirdness on my test boxes.
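For what it’s worth, a quick sanity check that two Linux initiators really are attached to the same LUN, rather than two different ones, is to compare the stable SCSI identifiers udev exposes – assuming a reasonably modern udev with /dev/disk/by-id support:

    # Run on each host: the same LUN should present the same scsi-* identifier
    ls -l /dev/disk/by-id/ | grep scsi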

UPDATE: Matt Purvis emailed me to confirm that it does all work as expected. Thankfully. I hope other people find this post useful – if only because it means I’m not the only one who spent hours trying to find a definitive answer on this topic.

8 replies on “Linux iSCSI stacks and multiple initiators per target LUN”

I had the same concerns, so I tried testing it for myself…but I still wanted to know I wasn’t going to bork everything…your blog gave me exactly the confirmation I needed. Thanks.

7 years later and this still pointed me in the right direction… the default packages included with CentOS don’t seem to support multi-initiator access.

Thanks
