
Corsair Force GT SATA 3 RAID 0 Performance


Speedway


You must be looking at different numbers than I am... the 4 KB results for a pair of drives are essentially double at any queue depth higher than 1.

 

Yes, I was looking at the 4K reads/writes at queue depth 1. What you said may be true (I say "may" as it isn't always double). But realistically, everyday use will be closer to queue depth 1 than 32. And even though performance does increase at higher queue depths, the seek times do not improve--at least as far as I know they do not.



Actually, queue depths are usually above 1 by a pretty decent margin--3-4 is common in normal usage whenever doing any real disk access. I'll try to capture some normal use (including gaming and such) tonight if I have time.

 

As for the response times - they are faster by up to a factor of 2 since both drives can service requests simultaneously.

 

Note that all of the above is strictly about read access. For writes, a pair of drives in RAID 0 almost always doubles in speed even at queue depths around 1, unless you have all disk caching disabled. This is because most RAID controllers batch write requests to the drives to increase bandwidth, and this scales very well with SSDs.
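To put the read-scaling argument in concrete terms, here's a toy model (an illustration only, not a benchmark; the function name and numbers are invented for this sketch): with small random reads, only as many drives as there are outstanding requests can be busy at once, which is why QD1 results barely move while higher queue depths roughly double on a two-drive stripe.

```python
# Toy model of RAID 0 small-random-read scaling. Illustrative only;
# real controllers and drives will not scale perfectly like this.

def raid0_read_speedup(n_drives: int, queue_depth: int) -> int:
    """Best-case speedup over a single drive for small random reads:
    only as many drives as there are outstanding requests can be busy."""
    return min(n_drives, queue_depth)

# At queue depth 1 a two-drive stripe reads no faster than one drive,
# which is why QD1 benchmark numbers show little gain...
assert raid0_read_speedup(2, 1) == 1
# ...while at QD4 both drives can service requests in parallel.
assert raid0_read_speedup(2, 4) == 2
```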

 

 

I'm not saying you should run RAID 0 for the speed (I did it for the capacity), but there is a significant increase in speed with two drives versus one.



 

From what I have read, the access/response times will not improve with RAID 0. You do get an increase in IOPS with RAID, but will those who don't work with large files often even notice it in the real world?

 

It just seems to me that the cons outweigh the pros for those who do not work with large files on a daily basis, yet everywhere I go I see people setting up their SSDs in RAID 0. What am I missing here?

 

Edit: In terms of capacity, wouldn't it be more beneficial for those who do not run VMs or work with large files to use them as two separate drives?

Edited by PremiumAcc


Access/response times won't improve with RAID 0 as much as they would in RAID 1 (or RAID 10). You will get an increase in IOPS and sequential reads/writes with RAID 0. However, faster response times are largely meaningless, as a single SSD is already fast as it is.

 

@ PremiumAcc I use SSDs as separate drives.



The minimum response time won't change, but the average does. IOPS matter most with small files, not large files.
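A crude way to see why the average response time drops while the minimum doesn't is a toy queueing simulation (hypothetical, not measured drive data; the service time and arrival rate below are invented): with two drives, each incoming request goes to whichever drive frees up first, so requests spend less time waiting in the queue, but no request can ever finish faster than one drive's service time.

```python
import random

def avg_response(n_drives: int, n_requests: int = 10_000, seed: int = 1) -> float:
    """Toy queueing model: average response time (ms) for small random
    requests when each request goes to the drive that is free first.
    All numbers are illustrative, not measured."""
    random.seed(seed)
    service = 0.1               # each request takes 0.1 ms to service
    free_at = [0.0] * n_drives  # when each drive next becomes idle
    now, total = 0.0, 0.0
    for _ in range(n_requests):
        now += random.expovariate(15)  # arrivals faster than one drive can absorb
        drive = min(range(n_drives), key=free_at.__getitem__)
        done = max(now, free_at[drive]) + service
        free_at[drive] = done
        total += done - now            # response = queue wait + service
    return total / n_requests

# Two striped drives cut the *average* response time under load...
assert avg_response(2) < avg_response(1)
# ...but nothing can finish faster than a single drive's service time.
assert avg_response(2) >= 0.1
```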

 

I can't say that I notice a huge difference in single-task performance but I literally cannot load my SSDs down to the point where I'm waiting on them.



 

Thanks Waco, I learned something new.


Well, Waco and Wevs pretty much summed up why I use them in RAID 0: capacity, and because I can ;) Also, wasn't it you, Waco, who did the test showing that after an extended period of use the "loss of TRIM" really didn't have much effect on the drives? If not, I know somebody here on OCC showed that they didn't experience a performance drop when losing TRIM over time. Either way, I OC quite a bit, and when you are constantly messing with BIOS settings to achieve higher OCs, you face the reality that it is much easier to corrupt something and have to wipe and reinstall Windows :cry: So for me the TRIM support issue is somewhat moot, since I am reinstalling Windows every few months anyway :pfp::lol:


I've seen lots of posts and performance comparisons all over the web on the subject of TRIM and RAID 0 drive performance degradation. From everything I've read and researched, with the very effective hardware garbage collection on today's SSDs, you don't have much to worry about even without TRIM functioning.



 

Exactly! I remember at first everyone making a huge deal out of TRIM or no TRIM support. I haven't had any issues without it, and I ran my Intel X18s in RAID 0 for probably close to 2 years :thumbsup:


Yeah, TRIM doesn't seem quite as important with newer drives. Even my first-generation Indilinx drives hardly showed any degradation at all over the course of a year.

 

I'll still update my rig to use TRIM in RAID when Intel releases the new option ROM and software.

