hard drives & raid - benchmark and compare!


Angry_Games


Just got an 80GB Raptor from Dell... WD800GD

 

Any good for this? Better or worse than the 74GB?

 

Same thing; there is no such thing as an 80GB Raptor. There are only 36GB, 74GB, and 150GB models.

 

So more than likely you got the 74GB model, and they are very good for RAID arrays.

 

Edit: apparently Dell and other OEMs get the 80GB Raptors. The only difference is that the 74GB model has two platters with 3GB unusable on each, while the 80GB model uses the full 40GB on both platters.
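
To make that capacity math concrete, here is a minimal sketch (it assumes two 40GB platters per drive, as described above; Python is used just for the arithmetic):

# Rough capacity math for the two Raptor variants described above
# (assumes two 40GB platters per drive, as stated in the post)
platters = 2
per_platter_gb = 40        # full capacity of each platter
locked_out_gb = 3          # area left unused per platter on the 74GB model

retail_74gb = platters * (per_platter_gb - locked_out_gb)   # 2 * 37 = 74
oem_80gb = platters * per_platter_gb                        # 2 * 40 = 80
print(retail_74gb, oem_80gb)                                # 74 80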


That's not entirely correct. WD offers those sizes to retail and "most" customers.

WD OEMs it for Dell. My understanding is that it is identical.

A different naming scheme to give Dell a little something to talk about.

It would be interesting to see some tests.

 

EDIT: I type too slow. Delerious has the correct information.


My spontaneous thought about the 80GB Raptors is that you will probably see a small drop in sustained transfer rate on the last part of each platter. That would be my guess as to why it was not enabled in the first place.

 

But other than that, you have an 80GB drive instead of a 74GB drive :)
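
A crude way to picture why the last (innermost) part of a platter is slower, if that guess is right: at a constant 10,000 RPM, a track with fewer sectors delivers fewer bytes per revolution. The sector counts below are invented for illustration, not actual Raptor specs.

# Why the inner zone of a platter has a lower sustained transfer rate:
# constant RPM, but inner tracks hold fewer sectors per revolution.
# (Sector counts are invented for illustration, not Raptor specs.)
rpm = 10000
revs_per_sec = rpm / 60.0
sector_bytes = 512

for zone, sectors_per_track in (("outer zone", 840), ("inner zone", 480)):
    mb_per_sec = sectors_per_track * sector_bytes * revs_per_sec / 1e6
    print(zone, round(mb_per_sec, 1), "MB/s")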


Oh yeah, you could be right soundx98... I actually have an 18GB Seagate ST-118202LC Ultra2 SCSI Wide (LVD) drive that is OEM. It is custom made, so Seagate cannot disclose what kind of firmware it has or what its specifications are. But as a single drive, it performs worse than the regular version they sell.

 

They won't even tell you which company it was sold to :)

 

I have to get that information from overstock.com.


Hehehe,

 

I'm working on puttin' together a little report on HDD benchmarks (cause I'm a nerd)

 

I'm testing a 3-year-old WD160 PATA, 2x 74GB Raptors (2 years old), and some brand-new WD2500KS drives (250GB with SE16 and SATA 2).

Wanted to see how they would do single and in RAID, but also to compare the different benchmark tests - the latest ATTO, HD Tach, HD Tune and Everest Ultimate Edition.

Ran all the benchmarks at least 3 times and took the average.
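
For anyone who wants to average their own runs the same way, a minimal sketch; the figures below are made-up placeholders, not my results:

# Average several benchmark runs per setup (placeholder numbers, not real results)
runs_mb_s = {
    "WD160 PATA single": [58.1, 57.6, 58.4],
    "2x 74GB Raptor RAID-0": [118.9, 120.3, 119.5],
}
for setup, runs in runs_mb_s.items():
    avg = sum(runs) / len(runs)
    print("%s: %.1f MB/s average over %d runs" % (setup, avg, len(runs)))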

 

Thought it might be kewl to see how technology has progressed from the WD folks, and the differences between the benchmarks. Gonna be a real bandwidth eater (but of course it will comply with the forum rules).

Got lots of screenshots already and waiting on a 3rd WD2500KS to arrive (cause I'm goofy in da haid)

 

But I couldn't help but wonder what would happen if I threw the Raptors, with their super-fast access time, and the WD2500KS, with SATA 2 and a 16MB cache, into one array

(cause I'm insane in the membrane)

 

Muhahahaha, hehehehe

 

[Attached screenshots: HD-Tach-4xHDD.JPG, ATTO-4xHDD.JPG, Ev-4xHDD.JPG]

 

Stay tuned for more details

 

IITM


aah, you have that kind of benchmark in Everest? I didn't see that.. I need to start Everest right away and check mine :sweat:

 

I've only ever started Everest to do a latency test on my memory.. never did anything else :shake:

 

That array looks good btw. You have good read/write above 8KB in ATTO, but a little lower on smaller sizes. Guess access time comes into play here?
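
If that guess about access time is right, a back-of-the-envelope model shows the shape: when every small request pays a fixed per-request penalty, the effective rate collapses at small block sizes. The access time and streaming rate below are rough assumptions, not measured values, and ATTO's queued sequential reads will soften the effect in practice.

# Crude model: each request pays an access/overhead penalty once, so small
# blocks are dominated by it. (Assumed numbers, not measurements.)
access_time_s = 0.008        # ~8 ms assumed per-request penalty
seq_rate_mb_s = 200.0        # assumed streaming rate of the array

for size_kb in (4, 8, 64, 1024):
    size_mb = size_kb / 1024.0
    t = access_time_s + size_mb / seq_rate_mb_s
    print("%4d KB -> %6.1f MB/s effective" % (size_kb, size_mb / t))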


I'm not sure if the Ultimate Trial does benchmarks, as the trial does have some limitations (software keys, etc.), but it's probably worth trying to see if you like it.

 

Everest does seem to make major upgrades/updates with each new release. It still misreports the names of the CPU, PWMIC, and chipset.

Many point out that there are free apps that can accomplish 95% of what Ultimate does. To me it was worth the money to get everything in one package, plus official support.


I also purchased the Ultimate Edition of Everest and I like it very much. Now, off my commentary and on to my observations and questions.

 

I've built systems using both add-in-card and onboard RAID controllers for about three years now. I've always followed the 16/4 stripe/cluster arrangement because my earliest benchmarking showed that combination to have the best performance in those particular rigs.

 

However, for the last two systems I've built I decided to just go with the optimal (64) stripe and the Windows NTFS default (4) cluster. I've observed over this course of time that read benchmarks (the majority, anyway) tend to favor 16/4 arrangements, but I've also noticed that there is a larger range between minimum and maximum read speeds. On the latest 64/4 systems I've built, the read speeds are lower (usually by about 20-27 MB/s) but there isn't as much variation between the minimum and maximum read speeds. I've also noticed that CPU utilization is usually lower in the 64/4 arrangement vs. the 16/4 arrangement.

 

So, for all of the RAID experts in here:

 

1. Other than reducing disk slack, what other real-world benefits result from the x4 divisor rule, i.e. 16/4, 32/8, 64/16? (A small slack calculation is sketched at the end of this post.)

 

2. An earlier forum member made a blanket statement that 64KB is the best "real world" stripe size. I'd like some evidence or proof that a one-size-fits-all approach yields the best array performance, because in my experience stripe and cluster size should be based on the intended use of the PC, i.e. workstation, server, gaming, business, etc.

 

3. Even if the 16/4 arrangement usually yields the best hard disk benchmarks, when considering "overall" PC performance, do you think that the additional CPU usage and overhead negates the gain in read performance at 16/4?

 

4. Why in the world do all of the RAID BIOSes I've seen set 64 as the optimal stripe size if 32 or 16 are actually better? I mean, face it, the majority of users aren't going to load Windows to a spare drive, set up their array, change the cluster size, format the drive, choose a custom stripe size and then load the OS to the array. Are they? It just seems that if 32 or 16 were the optimal stripe sizes, the RAID BIOS would default to those stripe sizes when you chose "optimal" (maybe that's more of a gripe than a question).

 

Any thoughts, links, opinions, facts, etc. are welcome. You are also free to challenge or debunk any of the observations I've noted above, as long as you keep it constructive and respectful.
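
For readers unfamiliar with the "disk slack" mentioned in question 1, here is a minimal sketch of how cluster size turns into wasted space; the file sizes are made up purely for illustration.

# Disk slack: space wasted because files are allocated in whole clusters.
# (File sizes below are hypothetical.)
import math

def slack_bytes(file_size, cluster_size):
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

file_sizes = [700, 5000, 130000]            # bytes
for cluster_kb in (4, 16, 64):
    cluster = cluster_kb * 1024
    wasted = sum(slack_bytes(f, cluster) for f in file_sizes)
    print("%2d KB clusters waste %d bytes on these three files" % (cluster_kb, wasted))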


@wevsspot

 

The only way to know which stripe/cluster works best is to test on your rig with your applications. Drive hardware, interface, number of drives and OS all have an effect on performance.

 

In general, for a two-drive RAID-0 array, 16/4 works well for the Windows XP OS partition, and 64/16 works well for large files like audio and video.
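
One way to see what stripe size changes is to count how many drives a single request touches: with a 16KB stripe a 64KB read spans both drives, while with a 64KB stripe it stays on one. The request sizes below are just illustrative, and whether spanning more drives actually helps depends on the workload (one big sequential stream vs. many small parallel requests).

# How many drives a single request touches in a 2-drive RAID-0,
# assuming the request starts on a stripe boundary (illustrative only).
def drives_touched(request_kb, stripe_kb, drives=2):
    stripes = -(-request_kb // stripe_kb)   # ceiling division
    return min(stripes, drives)

for stripe_kb in (16, 64):
    for request_kb in (4, 64, 512):
        print("stripe %2d KB, request %3d KB -> %d drive(s)"
              % (stripe_kb, request_kb, drives_touched(request_kb, stripe_kb)))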

