
Benchmarks of an Intel SRCS16 RAID controller

One of our clients gave us an Intel server with an Intel SRCS16 SATA RAID controller and a 500 GB RAID1 array (plus hot spare) set up on it, on which to install XenServer Express 4.0.1. While building the system up for him, I noticed abysmal write performance: it was taking around 29 minutes to install a guest from a template, a process which basically involves creating and formatting a filesystem and unpacking a .tar.bz2 file into it. Inspection of the system revealed that the controller lacked a battery backup unit (BBU), and thus its write-back cache was disabled. The controller firmware had disabled the on-disk cache as well, and the controller listed the disk access speed as 1.5 Gb/s, which I presume means it was operating in SATA-1 mode, so no NCQ either. The controller has 64 MB of cache.

I persuaded the customer to buy the BBU for the system, and then ran some quick bonnie++ benchmarks. I know bonnie++ isn't the best benchmark in the world, but it gives a good indication of relative performance gains. Results are as follows:

Note: I didn’t run the tests quite right either – not specifying the number (and size) of files for the create tests means they complete too quickly for bonnie++ to report a figure. So the output below only really shows the throughput tests, as the sequential create/random create tests all finished too soon. Changing the disk cache setting requires a reboot into the BIOS-mode configuration tool, so I avoided doing that too many times; the controller cache settings can be changed on the fly.
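For reference, an invocation along these lines would have given the create tests enough work to produce numbers; the mount point and user here are hypothetical:

[code]
# 512 MB of sequential I/O (bonnie++ wants -s to be at least twice RAM,
# and the domU had 256 MB), plus 16*1024 files of 1-100 KB for the
# create tests, instead of the default zero-length files
bonnie++ -d /mnt/test -s 512 -r 256 -n 16:102400:1024 -u nobody
[/code]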

[code]

RAID controller writeback cache disabled, disk cache disabled:

Version  1.03      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.loca 512M  1947   4  2242   0  1113   0 10952  18 36654   0 169.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
localhost.locald 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++

RAID controller writeback cache enabled, disk cache disabled:

Version  1.03      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.locald 512M  7938  19  9195   1  4401   0 28823  50 41961   0 227.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
localhost.locald 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++

RAID controller writeback cache disabled, disk cache enabled:

Version  1.03      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.loca 512M 19861  47 17094   1  9870   0 28484  47 41167   0 243.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
localhost.loca  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++

RAID controller writeback cache enabled, disk cache enabled:

Version  1.03      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.locald 512M 38633  95 40436   4 15547   0 32045  54 42946   0 261.4   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
localhost.locald  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
[/code]

Enabling only the controller write-back cache (64 MB in this case) roughly quadrupled write throughput in all three write tests (e.g. block writes went from 2242 KB/s to 9195 KB/s). Enabling only the disk cache provided nearly 8 times the performance on its own (17094 KB/s block writes). And enabling both together increased write throughput by a factor of somewhere between 14 and 20, depending on the test (40436 KB/s block writes). I suspect the tests weren’t large enough to actually tax the caches on the disks or the controller, however, as I was running them in a Xen domU with only 256 MB of RAM, and really just wanted some quick results.
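If I ever re-run these properly, a larger working set would be needed to blow through the caches; something along these lines, where the 2 GB figure is just my guess at “big enough” against a 64 MB controller cache:

[code]
# 2 GB of data comfortably exceeds the 64 MB controller cache and the
# disks' on-board caches; -r tells bonnie++ the machine's RAM in MB
bonnie++ -d /mnt/test -s 2048 -r 256 -u nobody
[/code]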

I know the bonnie++ numbers aren’t really representative of anything, but here’s a test that is semi-representative: installing another copy of the Xen domU from a template took 2 minutes 55 seconds with just the disk cache enabled, and 2 minutes 30 seconds with both the disk cache and controller cache enabled (I didn’t test with just the controller cache enabled, as that would have required a reboot and manual intervention, and I wasn’t onsite at that point). Prior to enabling the disk cache and controller cache, this was taking nearly 30 minutes.
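For the record, the install I was timing amounts to something like this with the XenServer CLI; the template and guest names here are made up:

[code]
# time a guest install from a built-in template; names are hypothetical
time xe vm-install template="Debian Etch 4.0" new-name-label=etch-test
[/code]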

While the above shows that the combination of the controller write-back cache and the disk cache gives the best improvement, merely enabling the disk cache on its own had the biggest single effect. Of course, the disk cache isn’t backed up by a battery, so there’s the risk of losing whatever data is in the disk cache at the time of a failure. The Intel documentation for the controller implied that this is limited to the sector being written at the point of power failure.
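Incidentally, on a SATA disk attached directly to a host (rather than hidden behind this controller, where the BIOS tool is the only way in), the on-disk write cache can be inspected and toggled with hdparm; the device name here is hypothetical:

[code]
hdparm -W /dev/sda      # report the current write-cache setting
hdparm -W1 /dev/sda     # enable the on-disk write cache
hdparm -W0 /dev/sda     # disable it (safer across a power failure)
[/code]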

When I get some free time and a SCSI or SAS server to play with, I’ll do some similar benchmarks for that.
