anthony Posted June 11, 2009

Hey all, I've been working on something, and I figured I would share the results here for the sake of technical interest. There has been quite a bit of discussion about the cost/benefit of hardware RAID over, say, motherboard chipset RAID. Of course, it goes without saying that hardware RAID will be faster, but I would reckon there aren't exactly solid numbers from which to draw real conclusions. What follows is a massive set of pictures, so CTRL+F for the section titles.

What I did here was run through a battery of tests: sequential and random throughput, IO performance, CPU utilization, read/write response time, and a mixture of more elementary hardware-level tests along with file-system-level stuff. I apologize for the lack of written description, but the graphs should tell the story. I'm God awful at gettin' things done, but I might come back and update this post later on.

Anyway, let's get started. System specs are in my signature (fairly accurate). Hard drives used: 2x OCZ Vertex 30GB, courtesy of OCZ. The RAID controller used is a mid-range Highpoint 3510. I had a Highpoint 4320, but unfortunately I didn't have the drives at the time; still, the RR3510 should give a fairly good picture of RAID card performance. The OS runs from a separate IDE drive. Just a note: the messier graphs are followed up with 'averages', single-value comparisons.

Table of contents:
1) Iometer Throughput
2) Iometer IO performance with Intel's File System access spec
3) SiS Throughput, physical disk
4) CPU Utilization
5) Read/write service time
6) 'Real world'
anthony Posted June 11, 2009

1) Iometer Throughput

Averages
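(Not from the original post, but as a rough sanity check on throughput numbers like these without Iometer, here is a minimal Python sketch that times sequential versus random reads against a large test file. The file name and sizes are placeholders, and the OS page cache will inflate the results unless the file is much larger than RAM or caches are dropped first.)

```python
import os
import random
import time

# Hypothetical test file; should be much larger than RAM to dodge the page cache.
PATH = "testfile.bin"
BLOCK = 1024 * 1024          # 1 MB per request
TOTAL = 256 * 1024 * 1024    # read 256 MB per pass

def throughput(sequential: bool) -> float:
    """Return read throughput in MB/s for a sequential or random access pattern."""
    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    done, offset = 0, 0
    start = time.perf_counter()
    while done < TOTAL:
        if not sequential:
            offset = random.randrange(0, size - BLOCK, BLOCK)
        elif offset + BLOCK > size:
            offset = 0                      # wrap around at end of file
        os.pread(fd, BLOCK, offset)
        offset += BLOCK
        done += BLOCK
    elapsed = time.perf_counter() - start
    os.close(fd)
    return (TOTAL / (1024 * 1024)) / elapsed

print(f"sequential read: {throughput(True):.1f} MB/s")
print(f"random read:     {throughput(False):.1f} MB/s")
```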
anthony Posted June 11, 2009

2) Iometer IO performance with Intel's File System access spec

Fairly industry standard for comparing storage and storage-related products, the file system access spec defines an access pattern in which the distribution of request sizes, randomness, and read/write mix is preset.
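(As a rough illustration of what a preset access spec like this encodes, here is a small Python sketch that generates a stream of IO requests from a weighted mix of request sizes, a read percentage, and a randomness percentage. The numbers below are made up for illustration; they are not Intel's actual spec values.)

```python
import random

# Hypothetical access spec: (request size in bytes, weight) pairs.
# These weights are invented for illustration, not taken from Intel's spec.
SIZE_MIX = [(512, 10), (4096, 60), (65536, 30)]
PCT_READ = 80        # 80% reads, 20% writes
PCT_RANDOM = 100     # fully random offsets
DISK_SIZE = 30 * 10**9   # roughly one 30 GB Vertex

def next_request(prev_end=0):
    """Pick one IO request (op, offset, size) according to the preset mix."""
    size = random.choices([s for s, _ in SIZE_MIX],
                          weights=[w for _, w in SIZE_MIX])[0]
    op = "read" if random.uniform(0, 100) < PCT_READ else "write"
    if random.uniform(0, 100) < PCT_RANDOM:
        offset = random.randrange(0, DISK_SIZE - size, 512)  # sector-aligned
    else:
        offset = prev_end   # sequential: continue where the last IO ended
    return op, offset, size

if __name__ == "__main__":
    end = 0
    for _ in range(5):
        op, offset, size = next_request(end)
        end = offset + size
        print(op, offset, size)
```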
anthony Posted June 11, 2009

3) SiS Throughput, physical disk

Something simple.

4) CPU Utilization

5) Read/write service time

The x axis had to be lopped off at 30% to better represent the results.
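(Not part of the original post, but for anyone curious how service time numbers like these can be sampled outside a benchmark suite, here is a minimal Python sketch that times individual 4 KB random reads against a large file and reports the average and 95th percentile. The file name is a placeholder, and the OS page cache will absorb repeat hits unless you use direct IO or a file far larger than RAM.)

```python
import os
import random
import statistics
import time

# Hypothetical target: any large existing file (or readable block device).
PATH = "testfile.bin"
BLOCK = 4096
SAMPLES = 1000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
latencies = []

for _ in range(SAMPLES):
    offset = random.randrange(0, size - BLOCK, BLOCK)
    t0 = time.perf_counter()
    os.pread(fd, BLOCK, offset)                       # one 4 KB random read
    latencies.append((time.perf_counter() - t0) * 1000.0)  # milliseconds

os.close(fd)
latencies.sort()
print(f"avg service time: {statistics.mean(latencies):.3f} ms")
print(f"95th percentile:  {latencies[int(0.95 * SAMPLES)]:.3f} ms")
```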
anthony Posted June 11, 2009

6) 'Real world'

This next part uses Intel's NASPT, which was initially designed for NAS units, but conveniently it is somewhat based on the IPEAK SPT suite, and it combines awesome trace-based testing methods with more user-level, file-system-level testing to get a better picture of what a user will encounter daily. As with the access specifications above, NASPT comes with trace files, so the workload stays consistent across all tests.

Copy From, Copy To, HD Video and Office Productivity

To get a better idea of how these different types of drive access affect performance, have a look at this next picture. It's a visual representation of hard drive access, the x axis being time and the y axis being location on disk. Copy To and Copy From are the rather straight blue lines, HD Video is the somewhat jagged grey line, and Office Productivity is the blotch of crap. Really, a visual representation of sequential versus random disk access.

3510 RAID 0

RAID 1

ICH9R RAID 0

RAID 1

And averages

And yes, that is all! Questions? Comments? Suggestions?
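(Not from the original post, but if anyone wants to reproduce that time-versus-location picture from their own data, here is a rough matplotlib sketch. It assumes the trace has been exported to CSV with one row per IO; the file name and column names are made up, since NASPT/IPEAK traces are not natively CSV.)

```python
import csv
import matplotlib.pyplot as plt

# Assumed CSV layout: one row per IO with "time_s" and "offset_bytes" columns.
times, offsets = [], []
with open("trace.csv", newline="") as f:
    for row in csv.DictReader(f):
        times.append(float(row["time_s"]))
        offsets.append(int(row["offset_bytes"]) / 1e9)   # GB for readability

plt.scatter(times, offsets, s=1)
plt.xlabel("Time (s)")
plt.ylabel("Location on disk (GB)")
plt.title("Disk access pattern: sequential access plots as straight lines,\n"
          "random access shows up as scatter")
plt.show()
```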
Jump4h Posted June 12, 2009

Wow. Awesome post; how much time did this benchmarking consume?
anthony Posted June 13, 2009

Thank you Jump4h, I guess there isn't too much interest in SSDs though. Took me about a day, but I was damn entertained doing it.
road-runner Posted June 13, 2009

Nice post. I have 2 of the 30GB Vertex in RAID 0 on the EVGA Classified, and I have 2 more coming so I can run 4 in RAID 0.
anthony Posted June 13, 2009

You've got some substantial numbers there, road-runner! Between my Iometer block-size-to-throughput results and yours, my 512KB and 4KB block transfer rates come out a bit lower, but that's just Iometer's method of measurement: it's geared more towards a multi-user environment where locality is usually negligible. Unfortunately, this also means Iometer doesn't do a great job of simulating single-user disk access, which is probably already reflected when comparing against SiS and (ironically) NASPT. When you get your drives, road-runner, feel free to add some numbers here. Maybe we can put together a pretty good thread for info. Unfortunately I'll have to be sending one of the Vertex drives back to OCZ.
road-runner Posted June 13, 2009

Just upgraded the firmware this morning from 1.10 to 1.30, not sure if I really needed to, but did it anyway. I just ran ATTO bench...
TheConqueror Posted June 13, 2009

Cool, nice to know... But I don't see enough of an increase to buy hardware RAID.
anthony Posted June 13, 2009

road-runner, do you have before and after ATTO numbers?

@TheConqueror, considering that hardware RAID would run anywhere from about $200 up to maybe $800 (or more), yes, it is expensive. It really depends on your application: for simply booting into an OS or loading up a game, it probably doesn't make too much sense, given that SSDs on onboard RAID are plenty fast, and already expensive.