
hard drives & raid - benchmark and compare!


Angry_Games


But dude, isn't the new Raptor 150 SATA2? My WD 250GB SATA2 is posting a burst of around 175MB/s...

 

And his averages are in the 70s MB/s, while my average is about 90MB/s.

 

How could a fat single-drive WD2500KS 250GB SATAII/3.0Gb/s with 16MB cache beat the crap out of the brand-new Raptor 150 with its 10,000 RPM???

 

90MB/s on a WD 250GB... you got a screenshot of that?

 

I see. But even my older 160GB WD SATA1 can reach about a 135MB/s burst and a 95MB/s average.

 

How come 10,000 RPM doesn't help at all?

 

And one for this? I don't mean to doubt you, but 95MB/s on that drive?

 

Btw people, the two programs you are using are not the best, HDTach being the better one. Also, for people running RAID arrays, "real world" performance comes with a stripe size of 64KB; for RAID 5, smaller is probably better.

 

The things that affect performance the most (see the sketch after this list):

 

firmware

drive speed (RPM)

access times

STR (sustained transfer rate)

and the interface
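
To expand on two of those: RPM alone doesn't set throughput, because sectors per track (platter density) matters just as much; that's how a dense 7200 RPM drive can out-run a 10,000 RPM Raptor on sequential reads. A minimal sketch of the relationship, with made-up illustrative numbers (not from any datasheet):

```python
# Rough upper bound on STR: at best a drive reads one full track per revolution.
# The sectors-per-track figures below are hypothetical, for illustration only.

def max_str_mb_s(rpm: int, sectors_per_track: int, bytes_per_sector: int = 512) -> float:
    """Upper bound on sustained transfer rate: one full track per revolution."""
    revs_per_sec = rpm / 60
    return revs_per_sec * sectors_per_track * bytes_per_sector / 1e6

# Hypothetical: a dense 7200 RPM drive vs. a 10k RPM drive with smaller platters.
print(f"7200 RPM, 1400 sectors/track: {max_str_mb_s(7200, 1400):.0f} MB/s")
print(f"10000 RPM, 900 sectors/track: {max_str_mb_s(10000, 900):.0f} MB/s")
```

With those numbers the 7200 RPM drive wins on STR (~86 vs. ~77 MB/s), while the 10k drive still wins on access time.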

 

just some info for you :)



90MB/s on a WD 250GB... you got a screenshot of that?
All the benchmarks I have seen would have me believe that above 70MB/s is rare among non-SCSI, non-RAID drives... and for a "storage" drive like the WD 250GB, hitting 90MB/s as an average... well, screenshot please ;)


All the benchmarks I have seen would have me believe that above 70MB/s is rare among non-SCSI, non-RAID drives... and for a "storage" drive like the WD 250GB, hitting 90MB/s as an average... well, screenshot please ;)

 

Which is why I asked for one :) (the new Raptor will do about 75MB/s... 85, I think, if you use the Windows XP benchmark program)

 

I have a WD 200GB SATA2 drive and it's about 55MB/s.


This thread has been a great help to me. I have been attempting to set up my NVRAID for months, several hours at a time, each time finding some failure point. Initially I thought it was my Maxtor Ultra16 PATA drives with SATA converters. I could initialize the RAID, set up my stripe, and load Windows XP, but the RAID would either disappear or break, so I went back to my PCI controller card.

 

Once I decided to go SATA 300, I also bought a PCI card with SATA 300 and NCQ capabilities. How dumb is that? A set of 300MB/s drives on a 133MB/s bus. It took me a while to realize what I had done.
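
For anyone wondering how hard that caps things, here's a minimal sketch of the arithmetic, assuming the nominal 133MB/s PCI figure and a typical ~110MB/s practical ceiling (the practical number is an assumption, not a measurement):

```python
# The SATA link rate on the drives stops mattering once everything funnels
# through one shared 32-bit / 33 MHz PCI bus.

PCI_THEORETICAL = 33.33e6 * 4 / 1e6   # ~133 MB/s: 33 MHz x 4 bytes wide
PCI_PRACTICAL = 110                   # MB/s after protocol overhead (assumed)

print(f"PCI bus ceiling: {PCI_THEORETICAL:.0f} MB/s theoretical, ~{PCI_PRACTICAL} MB/s real")
for drives in (1, 2, 4):
    print(f"{drives} striped drive(s): <= {PCI_PRACTICAL / drives:.0f} MB/s each, "
          f"{PCI_PRACTICAL} MB/s total, regardless of SATA 150 vs. 300")
```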

 

Ultimately I got the RAID set up, which was much easier than I had imagined: starting from an existing RAID, I cloned to a spare drive with the NVRAID drivers already in place.

 

ATTO4-1-2006.png

sisandranvidiaraid3-31-06.jpg.png

 

Not only are the benches nice (which is really what I'm enjoying), but boot time, DVD encoding, and file access are perceptibly faster.

 

I wanted to let the DFI Street community know how much the input is appreciated.


I have decided to upgrade to RAID 0 for my hard drives, and I would like some guidance.

 

Currently I am running the Raptor drive in my sig alongside a WD Caviar 40GB drive that I have been using to back up the primary drive once per week.

 

I am planning on building another machine in the near future, so here is my question: should I buy two 80GB Hitachis to replace the Raptor drive and save it for the new build, or buy a second Raptor for my RAID 0 array? Looking for the best performance here; this is primarily a gaming rig.

 

If I buy a second Raptor, is there a way to save all of my files and software to my backup drive and reinstall after setting up the array?

 

If you have seen other threads or posts on this, just point me in the right direction; my eyes hurt from searching and reading. :sad:


Has anyone here experimented with the diskpar / diskpart options for track-aligning your partitions? It comes down to where the MBR ends and the partition begins. More info on this subject can be found here. I don't suspect it will have an impact on HDTach readings, since I believe those benches are more drive-specific than partition-specific (HDTach runs even when no partitions exist). It should, however, have an impact on real system performance, and it should be especially significant for those using larger cluster sizes: on a misaligned partition, larger clusters equate to a higher frequency of double I/Os for a particular access (if I understand correctly). Larger clusters would equate to 1 of every 8/4/2 accesses requiring 2 I/Os instead of 1.
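
To put numbers on that 8/4/2 claim, here's a minimal sketch, assuming the classic LBA-63 partition start, a 64KiB stripe, and 8/16/32KiB clusters (the 128-sector start below stands in for a diskpar-aligned partition):

```python
# A cluster that straddles a stripe boundary costs two IOs instead of one.
# Legacy MBR partitions start at LBA 63 (63 * 512 = 31.5 KiB), which no
# power-of-two stripe size divides evenly.

def straddle_fraction(start_lba: int, cluster_kib: int, stripe_kib: int,
                      sample_clusters: int = 4096) -> float:
    """Fraction of clusters whose bytes cross a stripe boundary."""
    start = start_lba * 512
    cluster = cluster_kib * 1024
    stripe = stripe_kib * 1024
    hits = 0
    for k in range(sample_clusters):
        first = start + k * cluster
        last = first + cluster - 1
        if first // stripe != last // stripe:
            hits += 1
    return hits / sample_clusters

for ckib in (8, 16, 32):                       # cluster sizes in KiB
    old = straddle_fraction(63, ckib, 64)      # legacy LBA-63 partition start
    new = straddle_fraction(128, ckib, 64)     # 64 KiB-aligned start
    print(f"{ckib:2d} KiB clusters, 64 KiB stripe: "
          f"misaligned {old:.3f} vs aligned {new:.3f} of accesses need 2 IOs")
```

This reproduces the 1-in-8 / 1-in-4 / 1-in-2 double-IO rates for the misaligned case and zero for the aligned one.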

 

Any other insight into this may be helpful to all...


OK, after having read through this entire thread (as suggested in the previous thread I had going on this subject), I am going to see if anyone here can make heads or tails of the oddball results I have observed with some of these RAID setups...

 

First of all, the disk speed results from HDTach appear not to mesh perfectly with various stripe sizes, which causes a sort of interference pattern in the results. Sometimes you get a nice sine-wave-looking pattern, sometimes you get a funky sawtooth-looking line. The best examples of such interference patterns are demonstrated here and here, and especially here.
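
For what it's worth, here's a toy model of how that interference could arise when the benchmark's fixed sample size isn't a multiple of the full stripe width; all numbers are made up (2 drives, 64KiB stripes, 60MB/s per drive, 48KiB samples):

```python
# Toy model: a fixed-size benchmark read "beats" against RAID 0 striping.
STRIPE = 64 * 1024
DRIVES = 2
PER_DRIVE_RATE = 60e6          # bytes/sec each member drive sustains (assumed)
SAMPLE = 48 * 1024             # benchmark read size (not stripe-aligned)

def apparent_rate(offset: int) -> float:
    """Throughput of one read: the drives work in parallel, so the read
    finishes when the busiest drive finishes."""
    per_drive = [0] * DRIVES
    for b in range(offset, offset + SAMPLE, 4096):   # walk in 4 KiB chunks
        per_drive[(b // STRIPE) % DRIVES] += 4096
    return SAMPLE / (max(per_drive) / PER_DRIVE_RATE)

for i in range(8):
    off = i * SAMPLE
    print(f"offset {off // 1024:4d} KiB -> {apparent_rate(off) / 1e6:5.1f} MB/s")
```

Depending on where each sample lands relative to the stripe boundaries, the modeled rate cycles between 60 and 90 MB/s, which is exactly the kind of repeating sawtooth the screenshots show.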

 

In most cases, the line will either be roughly horizontal (limited by bus/interface speed), or it will ramp off (limited by absolute drive throughput), or in some cases you get a combination of both (the first part capped by the bus, the second part capped by the drives). Sometimes things 'sync up' just right and you get nice smooth curves that show off both cases, such as here (bandwidth-limited) and here (drive-limited).

 

Most posts have revolved around the on-board controllers of these motherboards. The PCI-bussed parts cap at around 110-120MB/s, while the NVIDIA parts cap at either ~150 or ~300MB/s, varying with how many drives are used (I guess each controller on the nForce can handle roughly 150MB/s of absolute throughput).

 

OK, with that said, some have jumped in with heavy-hitting controllers (Areca). And while the burst speed was insane, the sustained transfer rate just did not seem what it should have been for a 0-stripe. The related post is here; notice how the average transfer rate is relatively low compared to the total bandwidth of those drives. The graph actually looks bandwidth-limited rather than drive-limited.

 

And then we come to my situation. I went after the Promise EX8350 controller after observing some very good benchmarks here (check the 1st and 4th entries). With essentially identical hardware, I am getting results equivalent to (if not worse than) a standard PCI controller. All the results I have achieved are posted in my original thread here.

 

Basically, instead of getting this:

 

size.jpg

 

I'm getting this:

 

2tb-r5-s.png

 

Which makes absolutely no sense at all. I've tried all the usual checks and troubleshooting, and I just cannot figure out why a PCI-E card is behaving at or below PCI-32 speeds.
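
For reference, the ceilings involved, assuming the EX8350 sits in a first-generation PCIe x4 slot (nominal figures, before protocol overhead):

```python
# Back-of-the-envelope bus ceilings: a PCIe 1.x x4 card should have roughly
# 7-8x the headroom of the shared PCI bus, which is why PCI-32-like results
# from one make no sense. Real-world throughput lands somewhat lower.

PCI_32_33 = 33.33 * 4          # MB/s: 32-bit @ 33 MHz, shared by all PCI cards
PCIE1_PER_LANE = 250           # MB/s each way per PCIe 1.x lane
LANES = 4                      # assuming an x4 slot

print(f"PCI 32/33: {PCI_32_33:6.0f} MB/s (shared)")
print(f"PCIe x{LANES}  : {PCIE1_PER_LANE * LANES:6.0f} MB/s (dedicated)")
```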

 

Any input on this is not only welcome, but much appreciated.

 

Thanks in advance for any possible assistance.


I've just installed some new HDDs: 4x Hitachi Deskstar T7K250 250GB SATA2 8MB 7200RPM.

All in RAID 0 on the NF4 chip.

Using a 16KB stripe and 4KB clusters.

NCQ and read cache disabled.

3.0Gb/s link and spread spectrum enabled with ftool.

 

I had some trouble with my RAID that I wrote about in this post, but I think I have managed to get it right now.

So I will post my results for the db instead.

 

hemsidaatto2nm.jpg

 

hemsidahdtach9up.jpg

 

A Sandra bench as well:

sandra3gbejncq4x2507bv.th.jpg

 

Do you guys think this is the most I can get out of these disks???

I would like to know if there's anything more to do before I start copying stuff to it.



Hi guys, I was wondering if my results are good for my setup. It's a new machine I just built, and as a first-time RAIDer I was winging it. Just wanted the pros' input and maybe some recommendations on how to make it faster, if possible. Thanks.

 

ATTOTest1.jpg

 

HdtestQuick.jpg

 

HdtestLong.jpg


patrik_e:

 

Your benchmarks look good. I don't have any experience with your drives so I'm not sure, but I think you have reached your limit.

 

[PORSCHE]911:

 

Looks good. If you're planning to use the array as an OS boot drive, you might consider a smaller stripe size.

I'm using a 64KB stripe and 4KB clusters, but next time I install Windows I will make it 32/8 or 16/4.
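
A toy sketch of the reasoning, assuming a 2-drive RAID 0 and stripe-aligned reads (the request sizes are hypothetical): smaller stripes let even modest OS-sized reads span both spindles, while larger stripes keep each one on a single drive.

```python
# Illustration of the stripe-size tradeoff for an OS/boot array: how many
# member drives a single aligned read touches at various stripe sizes.
import math

def drives_touched(read_kib: int, stripe_kib: int, n_drives: int = 2) -> int:
    """Number of RAID 0 members one stripe-aligned read of read_kib spans."""
    return min(math.ceil(read_kib / stripe_kib), n_drives)

for stripe in (16, 32, 64, 128):
    row = ", ".join(f"{read:3d} KiB read -> {drives_touched(read, stripe)} drive(s)"
                    for read in (4, 64, 512))
    print(f"stripe {stripe:3d} KiB: {row}")
```

The flip side, not modeled here, is that larger stripes let independent concurrent requests land on different drives, which can be the better tradeoff for multi-user or server loads.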

