anthony Posted June 11, 2009 (edited)

Hey all, I've been working on something, and I figured I would share the results here for the sake of technical interest. There has been quite a bit of discussion about the cost/benefit of hardware RAID versus, say, motherboard chipset RAID. Of course, it goes without saying that hardware RAID will be faster, but I would reckon there aren't exactly solid numbers from which to draw real conclusions. What follows is a massive set of pictures, so CTRL+F for the titles.

What I did here was run through a battery of tests: sequential and random throughput, IO performance, CPU utilization, read/write response time, and a mixture of more elementary hardware-level tests along with file-system-level stuff. I apologize for the lack of written description, but the graphs should tell the story. I'm God awful at gettin' things done, but I might come back and update this post later on. Anyways, let's get started.

System specs used are in my signature (fairly accurate). Hard drives used: 2x OCZ Vertex 30GB, courtesy of OCZ. RAID controller used is a mid-range Highpoint 3510. I had a Highpoint 4320, but unfortunately I didn't have the drives at the time; still, the RR3510 should give a fairly good picture of RAID card performance. The OS was run from an IDE drive. Just a note: the messier graphs are followed up with 'averages', single-value comparisons.

And table of contents:
1) Iometer Throughput
2) Iometer IO performance with Intel's File System access spec
3) SiS Throughput, physical disk
4) CPU Utilization
5) Read/Write service time
6) 'Real world'

Edited June 11, 2009 by roadkill
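For anyone who wants a quick-and-dirty repro without firing up Iometer, here's a rough sketch in Python of how the three headline numbers from the graphs (IOPS, throughput, mean service time) fall out of a random-read loop. To be clear, this isn't what I ran — Iometer produced the graphs — it's just a hypothetical `micro_bench` helper for illustration. Also note it hits the OS page cache, so expect far rosier numbers than raw disk unless you bypass caching.

```python
import os
import random
import time

def micro_bench(path, block_size=4096, ops=1000):
    """Time `ops` random block reads from `path` and derive the three
    headline numbers the graphs report: IOPS, throughput, service time.

    NOTE: reads are served from the OS page cache, so this measures the
    software stack, not raw disk, unless caching is bypassed (O_DIRECT).
    """
    size = os.path.getsize(path)
    latencies = []
    with open(path, "rb", buffering=0) as f:
        for _ in range(ops):
            # Random 4KB-aligned-ish offsets, like an Iometer random-read spec
            f.seek(random.randrange(0, size - block_size))
            start = time.perf_counter()
            f.read(block_size)
            latencies.append(time.perf_counter() - start)
    total = sum(latencies)
    return {
        "iops": ops / total,                        # operations per second
        "throughput_mb_s": ops * block_size / total / 1e6,
        "mean_service_ms": total / ops * 1000.0,    # avg response time
    }
```

Point it at a big file on the volume you care about and compare the RAID array against a single drive; swap the random seeks for sequential offsets and you get the sequential-throughput number instead.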