
A look into SSD RAID performance with hardware RAID and software RAID


anthony


Hey all, I’ve been working on something, and I figured I would share the results here for the sake of technical interest. There has been quite a bit of discussion about the cost benefit of hardware RAID over, say, motherboard chipset RAID. Of course, it goes without saying that hardware RAID will be faster, but I would reckon there aren’t exactly solid numbers from which to draw real conclusions.

 

What follows is a massive set of pictures, so CTRL+F for the titles ;)

 

What I did here was run through a battery of tests: sequential and random throughput, IO performance, CPU utilization, read/write response time, and a mixture of more elementary hardware-level tests along with file-system-level stuff. I apologize for the lack of written description, but the graphs should tell the story. I’m God-awful at gettin’ things done, but I might come back and update this post later on.
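
For reference, “sequential vs. random throughput” boils down to where the next offset lands. Here’s a minimal Python sketch of the idea; the file name and sizes are made up, and a real tool like Iometer bypasses the OS cache with direct I/O, which this does not:

```python
import os
import random
import time

# Hypothetical scratch file -- needs to be at least COUNT * BLOCK bytes
# (40 MB here). Note the OS page cache will inflate these numbers; real
# benchmarks use direct I/O to bypass it.
PATH = "testfile.bin"
BLOCK = 4096      # 4 KB, a common random-I/O block size
COUNT = 10000     # reads per pass

fd = os.open(PATH, os.O_RDONLY)
total_blocks = os.fstat(fd).st_size // BLOCK

def run(offsets, label):
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)          # positioned read (POSIX only)
    elapsed = time.perf_counter() - start
    print(f"{label}: {COUNT * BLOCK / elapsed / 2**20:.1f} MB/s")

# Sequential = consecutive offsets; random = offsets scattered over the file.
run([i * BLOCK for i in range(COUNT)], "sequential read")
run([random.randrange(total_blocks) * BLOCK for _ in range(COUNT)], "random read")
os.close(fd)
```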

 

Anyway, let’s get started. System specs used are in my signature (fairly accurate); hard drives used: 2x OCZ Vertex 30GB, courtesy of OCZ ;). The RAID controller used is a mid-range Highpoint 3510. I had a Highpoint 4320, but unfortunately I didn’t have the drives at the time; still, the RR 3510 should give a fairly good picture of RAID card performance ;). The OS runs from an IDE drive. Just a note: the messier graphs are followed up with ‘averages’, single-value comparisons.

 

And a table of contents:

 

1) Iometer Throughput

2) Iometer IO performance with Intel’s File System access spec

3) SiS Throughput, physical disk

4) CPU Utilization

5) Read/ Write service time

6) ‘Real world’



2) Iometer IO performance with Intel’s File System access spec

 

Fairly industry-standard for comparing storage and storage-related products, the file system access spec defines an access pattern in which the distribution of file sizes, the randomness, and the read/write mix are all preset.
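
A rough sketch of what such a spec amounts to. The weights and percentages below are made up for illustration; Intel’s actual spec ships with its own block-size mix:

```python
import random

# Illustrative access spec in the Iometer style: (block_size_bytes, weight).
# These numbers are NOT Intel's -- they just show the mechanism.
BLOCK_MIX = [(512, 10), (4096, 60), (16384, 20), (65536, 10)]
PCT_READ = 80      # read/write mix, illustrative
PCT_RANDOM = 100   # fully random placement, illustrative

SIZES, WEIGHTS = zip(*BLOCK_MIX)

def next_io(disk_bytes, last_end):
    """Draw one I/O request (op, offset, size) from the preset distributions."""
    size = random.choices(SIZES, weights=WEIGHTS)[0]
    op = "read" if random.uniform(0, 100) < PCT_READ else "write"
    if random.uniform(0, 100) < PCT_RANDOM:
        offset = random.randrange(disk_bytes // size) * size
    else:
        offset = last_end  # continue sequentially from the previous I/O
    return op, offset, size

print(next_io(disk_bytes=30 * 10**9, last_end=0))
```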

 

intelm.jpg

 

iomiopsall.gif


3) SiS Throughput, physical disk

 

Something simple :)

 

sisphyswriter01.gif

 

sisphysreadr01.gif

 

sisphysaverager01.gif

 

4) CPU Utilization

 

ipeakcpua3510r0.gif

 

ipeakcpua3510r1.gif

 

ipeakcpuaich9rr0.gif

 

ipeakcpuaich9rr1.gif

 

5) Read/ Write service time

 

The X axis had to be lopped off at 30% to better represent the results.
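
Service time is just the wall-clock latency of each individual I/O. A minimal sketch of how such a distribution is collected (the scratch-file name is made up, and IPEAK itself works from traces rather than synthetic reads like this):

```python
import os
import random
import time

# Per-I/O service-time sampling against a hypothetical scratch file; the
# long tail of slow I/Os is exactly what the clipped axis above hides.
fd = os.open("testfile.bin", os.O_RDONLY)
total_blocks = os.fstat(fd).st_size // 4096

samples = []
for _ in range(5000):
    off = random.randrange(total_blocks) * 4096
    t0 = time.perf_counter()
    os.pread(fd, 4096, off)
    samples.append((time.perf_counter() - t0) * 1000)  # milliseconds
os.close(fd)

samples.sort()
print(f"median:   {samples[len(samples) // 2]:.3f} ms")
print(f"99th pct: {samples[int(len(samples) * 0.99)]:.3f} ms")
```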

 

ipeakst3510r0.gif

 

ipeakst3510r1.gif

 

ipeakstich9rr0.gif

 

ipeakstich9rr1.gif


6) ‘Real world’

 

This next part uses Intel’s NASPT, which was initially designed for NAS units, but conveniently it is somewhat based off the IPEAK SPT suite, and it combines awesome testing methods with more user-level, file-system-level testing to get a better picture of what a user will encounter daily. Back to the access specifications: NASPT comes with trace files, so the workload stays consistent throughout all tests.
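
Trace replay is conceptually simple: the trace pins down the operation, offset, and size of every I/O, so every run issues the identical workload. A sketch of the idea; the plain-text trace format and file names here are hypothetical, not NASPT’s actual format:

```python
import os

# Hypothetical trace format: one I/O per line, "op offset size". NASPT's
# real trace files are richer than this -- the point is only that a fixed
# trace makes every run issue the identical sequence of I/Os.
def replay(trace_path, target_path):
    fd = os.open(target_path, os.O_RDWR)
    try:
        with open(trace_path) as trace:
            for line in trace:
                op, offset, size = line.split()
                offset, size = int(offset), int(size)
                if op == "read":
                    os.pread(fd, size, offset)
                else:
                    os.pwrite(fd, b"\x00" * size, offset)
    finally:
        os.close(fd)

replay("office_productivity.trace", "testfile.bin")  # hypothetical file names
```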

 

Copy From, Copy To, HD Video and Office productivity

 

To get a better idea of how these different types of drive access affect performance, have a look at this next picture.

 

esth.th.jpg

 

It’s a visual representation of hard drive access, the x axis being time and the y axis being location on disk. Copy To and From are the rather straight blue lines, HD Video is the somewhat jagged grey line, and Office Productivity is the blotch of crap :P. Really, a visual representation of sequential versus random disk access.
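
If you want to make one of these pictures yourself, it’s just a scatter plot of the trace. A sketch with matplotlib, assuming the trace has already been parsed into (time, offset) pairs (that layout is my assumption, not a fixed format):

```python
import matplotlib.pyplot as plt

# Assumes a parsed trace: a list of (timestamp_seconds, byte_offset) pairs.
# Sequential access shows up as straight lines, random access as scatter.
def plot_access(trace, title):
    times = [t for t, _ in trace]
    lbas = [off // 512 for _, off in trace]  # byte offset -> 512 B LBA
    plt.scatter(times, lbas, s=1)
    plt.xlabel("time (s)")
    plt.ylabel("location on disk (LBA)")
    plt.title(title)
    plt.show()
```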

 

3510

 

RAID 0

 

naspt3510r0copyfrom.gif

 

naspt3510r0copyto.gif

 

naspt3510r0hd.gif

 

naspt3510r0office.gif

 

RAID 1

 

naspt3510r1copyfrom.gif

 

naspt3510r1copyto.gif

 

naspt3510r1hd.gif

 

naspt3510r1office.gif

 

ICH9R

 

RAID 0

 

nasptich9rr0copyfrom.gif

 

nasptich9rr0copyto.gif

 

nasptich9rr0hd.gif

 

nasptich9rr0office.gif

 

RAID 1

 

nasptich9rr1copyfrom.gif

 

nasptich9rr1copyto.gif

 

nasptich9rr1hd.gif

 

officem.gif

 

And averages

 

nasptall.gif

And yes that is all! Questions? Comments? Suggestions?



You've got some substantial numbers there, road runner! Between my Iometer block-size-to-throughput results and yours, my 512KB and 4KB block transfer rates come out a bit lower, but that's just Iometer's method of measurement: it's geared more towards a multi-user environment where locality is usually negligible. Unfortunately, this also means Iometer doesn't do a great job of simulating single-user disk access, which is probably already reflected well enough when comparing against SiS and (ironically) NASPT.

 

When you get your drives, road runner, feel free to add some numbers here :) Maybe we can put together a pretty good info thread. Unfortunately, I'll have to be sending one of the Vertex drives back to OCZ.


Roadrunner, do you have before and after ATTOdb numbers?

 

@TheConqueror: considering that a hardware RAID card would run anywhere from about $200 up to maybe $800 (or more), yes, it is expensive. It really depends on your application; for simply booting into an OS or loading up a game, it probably doesn't make too much sense, given that SSDs on onboard (chipset) RAID are plenty fast, and already expensive.

