
Need a reliable SSD


Darth_Tom


Very long post...

Most of what you said is true, with the exception of access times - they typically go down the more drives you have. Be it SSDs or HDDs, more drives means more IOPS. Sure, it gets harder to actually keep all of the drives active, but especially with SSDs your IOPS tend to scale pretty linearly with additional drives.

 

There's a pretty noticeable difference in loading times when putting even two lower-end SSDs into a striped array.
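To put rough numbers on the scaling, here's a minimal sketch; the per-drive figures are assumptions made up for the example, not measurements of any particular SSD:

    # Ideal-case scaling of a striped (RAID 0) array: aggregate IOPS and
    # throughput grow roughly linearly while the controller and workload
    # can keep every member drive busy. Per-drive numbers are assumptions.

    def striped_estimate(drive_iops: int, drive_mb_s: int, num_drives: int):
        """Best-case aggregate IOPS and MB/s for a striped array."""
        return drive_iops * num_drives, drive_mb_s * num_drives

    # Example: two budget SATA SSDs, assumed 50,000 IOPS / 450 MB/s each.
    iops, mb_s = striped_estimate(50_000, 450, 2)
    print(f"~{iops:,} IOPS, ~{mb_s} MB/s in the ideal case")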


When I compared my system using dual SSDs in RAID 0 vs. a single SSD, the difference was not enough to matter in system start or app/game loads. I mean, if I had a stopwatch it might have been a bit quicker, but not enough for me to notice in real-world usage.


BTW, in case you did not know this, RAID 0 also increases the chance of hardware failure

 

I would like to understand this more. Why would your hardware be more at risk of failure because of this? Does RAID 0 make your hardware operate outside its intended operating specs? It seems like the data would still be read and written to the disks the same way, only spread across several disks at once, which seems to me like it should increase the life of each drive, since each drive works for less time to do the same work one drive would do.

 

and the risk to your data.

 

Most definitely - as you increase the number of drives, the greater your chances of losing data, since a single disk failure in a RAID 0 destroys the entire array. Which is in fact why they call it RAID 0: the zero stands for zero redundancy.


I would like to understand this more. Why would your hardware be more at risk of failure because of this? Does RAID 0 make your hardware operate outside its intended operating specs? It seems like the data would still be read and written to the disks the same way, only spread across several disks at once, which seems to me like it should increase the life of each drive, since each drive works for less time to do the same work one drive would do.

It doesn't affect when your drives will die - but it increases your chance of losing data. The increased chance of failure comes from the increased number of drives and controllers - if you're already using 4 drives, then going to RAID won't increase the chance of anything, but if you go from a single drive to a 4-drive RAID 0 you're increasing your chance of a hardware failure several times over.
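To put a number on that, here's a minimal sketch of the math, assuming independent drives and an illustrative 3% annual failure rate per drive (the rate is a made-up example, not a spec):

    # RAID 0 loses data if ANY member drive fails, so with independent
    # drives: P(array loses data) = 1 - (1 - p) ** n.
    # The 3% annual failure rate is an illustrative assumption.

    def raid0_loss_probability(p_drive: float, num_drives: int) -> float:
        """Chance that at least one of num_drives independent drives fails."""
        return 1 - (1 - p_drive) ** num_drives

    p = 0.03  # assumed annual failure probability of a single drive
    for n in (1, 2, 4):
        print(f"{n} drive(s): {raid0_loss_probability(p, n):.1%} per year")
    # 1 drive: 3.0%, 2 drives: 5.9%, 4 drives: 11.5% - roughly n times the risk.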



It doesn't affect when your drives will die - but it increases your chance of losing data. The increased chance of failure comes from the increased number of drives and controllers - if you're already using 4 drives, then going to RAID won't increase the chance of anything, but if you go from a single drive to a 4-drive RAID 0 you're increasing your chance of a hardware failure several times over.

Not to mention that if one goes, all the rest are inaccessible until formatted.


It doesn't affect when your drives will die - but it increases your chance of losing data. The increased chance of failure comes from the increased number of drives and controllers - if you're already using 4 drives, then going to RAID won't increase the chance of anything, but if you go from a single drive to a 4-drive RAID 0 you're increasing your chance of a hardware failure several times over.

 

If it doesn't affect when the drives are going to die, then why do they say it is going to increase my chance of them failing? This makes no sense to me. I completely understand that the chance of the RAID array failing is increased, as a single drive failure kills the entire array and puts the data at greater risk of loss. Let's look at it mathematically.

 

Let's say you buy 4 brand new HDDs, each with a 100,000-hour MTTF, and you assemble these 4 drives into a RAID 0 array. Your MTTF should still be 100,000 hours, as the RAID 0 array has no effect on the drive failure rate.

 

If you use just 1 drive alone with an MTTF of 100,000 hours, it too should run for 100,000 hours.

 

I realize that if, say, 1 of those 4 drives in RAID 0 was at the bottom of the lifespan average - say 20,000 hours - then your array is broken and data loss can occur. But it didn't go bad because it was in a RAID 0 array; the very same drive could just as easily have failed by itself. This is why I don't understand why people keep saying that RAID 0 increases your chance of drive failures or hardware failures.

 

And just to add from my personal experience: if you have 1 drive fail in a RAID 0 and you take the other 3 drives and put them into another PC, you can usually just drag your files off the drives and recover most if not all of your data. Of course, a simple backup method would make that whole process pointless anyway, but just saying, I have recovered files this way before.


If it doesn't affect when the drives are going to die, then why do they say it is going to increase my chance of them failing? This makes no sense to me. I completely understand that the chance of the RAID array failing is increased, as a single drive failure kills the entire array and puts the data at greater risk of loss. Let's look at it mathematically.

Let's say you buy 4 brand new HDDs, each with a 100,000-hour MTTF, and you assemble these 4 drives into a RAID 0 array. Your MTTF should still be 100,000 hours, as the RAID 0 array has no effect on the drive failure rate.

If you use just 1 drive alone with an MTTF of 100,000 hours, it too should run for 100,000 hours.

I realize that if, say, 1 of those 4 drives in RAID 0 was at the bottom of the lifespan average - say 20,000 hours - then your array is broken and data loss can occur. But it didn't go bad because it was in a RAID 0 array; the very same drive could just as easily have failed by itself. This is why I don't understand why people keep saying that RAID 0 increases your chance of drive failures or hardware failures.

And just to add from my personal experience: if you have 1 drive fail in a RAID 0 and you take the other 3 drives and put them into another PC, you can usually just drag your files off the drives and recover most if not all of your data. Of course, a simple backup method would make that whole process pointless anyway, but just saying, I have recovered files this way before.

Yeah, but if you have four 500GB drives in RAID 0 and one fails, then you're screwed out of far more than one drive's worth of data.

 

If just the one drive fails on its own, you only lose at most 500GB.


Yeah, but if you have four 500GB drives in RAID 0 and one fails, then you're screwed out of far more than one drive's worth of data.

If just the one drive fails on its own, you only lose at most 500GB.

 

Actually, if you have 4x500GB drives in RAID 0 and one fails, then even IF all 4 drives were 100% full you would only lose 500GB of data, as the other 3 drives are fine and the data is easily recovered. Technically you CAN recover the other missing 500GB if you desire; however, this may cost you a significant amount of money.


Actually, if you have 4x500GB drives in RAID 0 and one fails, then even IF all 4 drives were 100% full you would only lose 500GB of data, as the other 3 drives are fine and the data is easily recovered. Technically you CAN recover the other missing 500GB if you desire; however, this may cost you a significant amount of money.

How?? I thought the data would be cut up into even quarters and not be complete?


If it doesn't affect when the drives are going to die, then why do they say it is going to increase my chance of them failing? This makes no sense to me. I completely understand that the chance of the RAID array failing is increased, as a single drive failure kills the entire array and puts the data at greater risk of loss. Let's look at it mathematically.

Let's say you buy 4 brand new HDDs, each with a 100,000-hour MTTF, and you assemble these 4 drives into a RAID 0 array. Your MTTF should still be 100,000 hours, as the RAID 0 array has no effect on the drive failure rate.

If you use just 1 drive alone with an MTTF of 100,000 hours, it too should run for 100,000 hours.

I realize that if, say, 1 of those 4 drives in RAID 0 was at the bottom of the lifespan average - say 20,000 hours - then your array is broken and data loss can occur. But it didn't go bad because it was in a RAID 0 array; the very same drive could just as easily have failed by itself. This is why I don't understand why people keep saying that RAID 0 increases your chance of drive failures or hardware failures.

Reread my post. The increased chance of failure comes from having more drives and overall system complexity. It doesn't mean your drives will die any faster - nobody was implying that. You're also calculating the MTTF incorrectly. With 4 drives the array's MTTF is 1/4 the MTTF of a single drive (not counting the MTTF of the drive controller itself).
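For anyone following the arithmetic, a quick sketch under the standard simplifying assumption that drive lifetimes are independent and exponentially distributed:

    # The array dies when the FIRST of n drives dies. For independent,
    # exponentially distributed lifetimes, the minimum of n lifetimes has
    # mean MTTF_drive / n - hence 1/4 the MTTF with 4 drives.

    def raid0_mttf(mttf_drive_hours: float, num_drives: int) -> float:
        """Expected hours until the first drive failure in the array."""
        return mttf_drive_hours / num_drives

    print(raid0_mttf(100_000, 1))  # 100000.0 hours for a single drive
    print(raid0_mttf(100_000, 4))  # 25000.0 hours for the 4-drive array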

 

Actually, if you have 4x500GB drives in RAID 0 and one fails, then even IF all 4 drives were 100% full you would only lose 500GB of data, as the other 3 drives are fine and the data is easily recovered. Technically you CAN recover the other missing 500GB if you desire; however, this may cost you a significant amount of money.

This is incorrect.

 

With RAID 0 there is NO redundancy. If you lose a single drive all of your data is lost, permanently, unless you pay for data recovery on the dead drive.
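To see why, here's a toy sketch of how RAID 0 striping distributes data; the chunk size and file layout are invented purely for illustration:

    # RAID 0 writes consecutive chunks of each file round-robin across the
    # member drives, so chunk i lands on drive i % NUM_DRIVES.
    # All values here are invented for illustration.

    NUM_DRIVES = 4
    file_chunks = [f"chunk{i:02d}" for i in range(12)]  # one file, 12 chunks

    dead_drive = 2  # suppose drive 2 fails
    surviving = [c for i, c in enumerate(file_chunks)
                 if i % NUM_DRIVES != dead_drive]
    print(surviving)
    # Every 4th chunk is gone: almost any file larger than one stripe is
    # left full of holes, so the surviving 75% is not usable data.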



Reread my post. The increased chance of failure comes from having more drives and overall system complexity. It doesn't mean your drives will die any faster - nobody was implying that. You're also calculating the MTTF incorrectly. With 4 drives the array's MTTF is 1/4 the MTTF of a single drive (not counting the MTTF of the drive controller itself).

So your drives won't die faster, but the chance of a drive failing has gone up, and the array's MTTF is 1/4 that of any single drive? That sounds to me like you're saying your HDDs will fail faster when you RAID 0 them. And since the increased chance comes from having more drives and more complexity, does that mean that if you have your controller in AHCI mode with 4 drives attached, your MTTF is 1/4 as well? Wouldn't this same logic also mean that 4 drives in RAID 1 would have 1/4 the lifespan and a lower MTTF?
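For what it's worth, the time to the first drive failure is indeed the same for any 4-drive setup; what differs is whether that first failure costs you data. A sketch of the comparison, reusing the illustrative 3% per-drive annual failure rate from above:

    # Annual data-loss probability for 4 independent drives, assuming an
    # illustrative 3% per-drive failure rate. RAID 0 loses data on the
    # FIRST failure; a 4-way RAID 1 mirror only if ALL four drives fail.

    p, n = 0.03, 4
    raid0_loss = 1 - (1 - p) ** n  # any single failure kills the array
    raid1_loss = p ** n            # every mirror must fail

    print(f"RAID 0: {raid0_loss:.1%} chance of data loss per year")   # ~11.5%
    print(f"RAID 1: {raid1_loss:.6%} chance of data loss per year")   # ~0.000081%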

