
allyn

Members
  • Content Count: 16
  • Joined
  • Last visited

About allyn
  • Rank: New Member
  1. Happy, I'm guessing Darth edited / resized after your reply post, but I figured I would ask for clarification: Is it ok to have thumbs linked to full size pics, even if the linked pics are greater than 800x600 (as the post currently reflects)?
  2. Voltages all checked out very close to expected. Side note: I discovered my RAID boot problem was due to my using a slipstreamed XP SP2 disc for the install. Thank you for suggesting that hard drive benchmark thread, ExRoadie, it will no doubt save me some major headaches in the future. As for this RAM thing, I can't help but think I'm missing something simple. It is currently running fine overclocked (the lower multiplier slightly underclocks the RAM). My other RAM was pretty old, and perhaps it is a tad picky about this board. I need to nail the problem down so I can buy different RAM if necessary (or RMA the board - ugh). I also noticed that I seemed to hit a stability brick wall at around 280 MHz FSB, and changing vcore had no impact at all. Just another data point. I'll do more testing tonight.
  3. OK, after having read through this entire thread (as suggested in the previous thread I had going on this subject), I am going to see if anyone here can make heads or tails of the oddball results I have observed with some of these RAID setups...

     First of all, the disk speed results from HDTach don't appear to mesh perfectly with the various stripe sizes, which causes a sort of interference pattern in the results. Sometimes you get a nice sine-wave-looking pattern, sometimes you get a funky sawtooth-looking line. The best examples of such interference patterns are demonstrated here and here, and especially here. In most cases, the line will either be roughly horizontal (limited by bus / interface speed), or it will ramp off (limited by absolute drive throughput), or in some cases you get a combo of both (1st part capped by bus, 2nd part capped by drive). Sometimes things 'sync up' just right and you get nice smooth curves that show off both cases nicely, such as here (bandwidth limited) and here (drive limited).

     Most posts have revolved around the on-board controllers for these motherboards. The PCI-bussed parts cap at around 110-120 meg/sec, while the nVidia parts cap at either 150ish or 300ish, varying with how many drives are used (I guess each controller on the nForce can handle roughly 150 meg/sec absolute throughput).

     OK, with that said, some have jumped in with heavy-hitting controllers (Areca). And while the burst speed was insane, the sustained transfer rate just did not seem as it should have been for a RAID-0 stripe. The related post is here; notice how the average transfer rate is relatively low compared to the total bandwidth of those drives. The graph actually looks bandwidth limited vs. drive limited.

     And then we come to my situation. I went after the Promise EX8350 controller after observing some very good benchmarks here (check the 1st and 4th entries). With essentially identical hardware, I am getting results equivalent to (if not worse than) a standard PCI controller. All results I have achieved are posted in my original thread here. Basically, instead of getting this: I'm getting this: Which makes absolutely no sense at all. I've tried all the usual checks / troubleshooting, and just cannot figure out why a PCI-E card is behaving at/below PCI-32 speeds. Any input on this is not only welcome, but much appreciated. Thanks in advance for any possible assistance.
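     To put some rough numbers on the bus-limited vs. drive-limited distinction, here's a quick sketch (the per-drive and bus figures are my own ballpark assumptions, not measurements from any of the linked posts):

        def raid0_read_estimate(drives, per_drive_mbs, bus_cap_mbs):
            # Sequential read is roughly capped by whichever is lower: the summed
            # drive throughput or the controller's bus/interface ceiling.
            drive_limit = drives * per_drive_mbs
            return min(drive_limit, bus_cap_mbs)

        # 4 drives at ~60 meg/sec each behind a 32-bit/33 MHz PCI controller
        # (PCI tops out around 133 meg/sec theoretical, ~110-120 in practice):
        print(raid0_read_estimate(4, 60, 115))   # ~115 -> flat, bus-limited line
        # 2 drives on one of the nForce on-board controllers (~150 meg/sec cap):
        print(raid0_read_estimate(2, 60, 150))   # ~120 -> ramps off, drive-limited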
  4. Has anyone here experimented with the diskpar / diskpart options for track-aligning your partitions? It is based on where the MBR ends and the partition begins. More info on this subject can be found here. I don't suspect it will have an impact on HDTach readings, since I believe those benches are more drive-specific than partition-specific (and it runs even when no partitions exist). It should, however, have an impact on real system performance. It should be especially significant for those using larger sector sizes, as larger sector sizes equate to a higher frequency of double IOs for a particular sector access (if I understand correctly). Larger sectors would equate to 1 of every 8/4/2 accesses requiring 2 IOs vs. 1. Any other insight into this may be helpful to all...
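     A quick illustration of the double-IO idea (the 31.5 KB figure is the classic sector-63 MBR partition start; the rest is my own toy example):

        STRIPE_KB = 64            # array stripe size
        MBR_OFFSET_KB = 31.5      # classic partition start at sector 63 (63 x 512 bytes)

        def crosses_stripe(offset_kb, io_kb, stripe_kb=STRIPE_KB):
            # True if an access starting at offset_kb spans a stripe boundary,
            # i.e. it turns into two disk operations instead of one.
            first = int(offset_kb // stripe_kb)
            last = int((offset_kb + io_kb - 0.001) // stripe_kb)
            return first != last

        print(crosses_stripe(MBR_OFFSET_KB, 64))   # True  - unaligned start, 2 IOs
        print(crosses_stripe(64.0, 64))            # False - track-aligned start, 1 IO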
  5. Good idea, I'll do that this evening. An extra check couldn't hurt. If only I had the necessary hardware to see exactly what is going wrong to cause those errors in Memtest.
  6. Umm, I could probably RMA all of my hardware before getting through that 72-page thread. I did notice that someone here got HDTach graphs that looked similar, and it was with a non-nVidia controller. What keeps confusing me is that I have seen this same RAID card + mobo combo do much better. Other posts in the HD benchmark thread mention "flatlining" at ~100-120 meg/sec, which makes sense for a PCI controller, but is rather confusing when a multi-lane PCI-E controller gives the same result. I haven't run ATTO, but I can guess it will give the same ~120 meg/sec results that everyone else seems to get with their on-board controllers.

     Side note: I underclocked the heck out of the RAM and upped the RAM voltage to 2.80, which got it stable enough to do some overclocking runs on the CPU. I ended up settling on 3.0x HT with 280 FSB (CPU at 2.5 GHz, stock multiplier). That all worked out with no change in CPU voltage. Also, I put different RAM in there (the 1 GB of Geil RAM from the other PC) and tested it with all other settings returned to normal (no overclocking). I again got errors in Memtest, which is starting to make me wonder if this board might have an electrical problem with the RAM address/data lines...
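     For anyone wondering about the 3.0x HT choice, the arithmetic is just this (the 1000 MHz stock link figure is the usual socket-939 spec, not something I measured here):

        STOCK_HT_LINK = 1000   # MHz, i.e. 5 x 200 at stock

        def ht_link(htt_mhz, ht_mult):
            return htt_mhz * ht_mult

        print(ht_link(280, 5))  # 1400 MHz - way over the stock link speed
        print(ht_link(280, 4))  # 1120 MHz - still over 1000 MHz
        print(ht_link(280, 3))  # 840 MHz  - comfortably under, hence 3.0x

        # CPU clock is just HTT x CPU multiplier: 280 x 9 = 2520 MHz (~2.5 GHz here).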
  7. So I recalled some issues from a couple of years back where people would have poor performance from their Promise cards - but only if the system was booting off of an array on the card itself. So... I moved the 2 Raptors back to the motherboard (nForce), set up RAID-0, set up the various BIOS settings (as per Angry's guide), and did the F6 thing installing Windows. After the initial file copy, the system would just re-enter setup (from the CD, not the drives). No matter what I did, the system would simply not boot from the array. I tried many things, including changing RAID settings, trying the SI controller instead, and physically removing the Promise controller. No dice. I finally had to disable RAID entirely and install Windows on a single Raptor.

     Once installed, I found the Promise array (2 TB) still only transferred at 100 meg/sec, while the single Raptor had the expected 70 meg/sec tapering off to ~50 meg/sec near the end of the drive. Since it's going on a full week of troubleshooting, I have thrown in the towel, putting the Raptor RAID-0 array back on the Promise card and completely disabling on-board RAID. I will just have to deal with the relatively poor transfer speeds until the next upgrade cycle. I have not even tried any overclocking as of yet, and I still have to tackle this OCZ RAM that doesn't seem to play well at all with this board (it worked fine in the PC-DL). These DFI boards may be sweet from an overclocking standpoint, but it appears they still have a very long way to go on the maturity of their BIOS and other system compatibility. :mad:
  8. I'm using the most recent firmware and driver (updated both before installing Windows). I may just have to live with the 100 meg/sec. I need to run some benches to figure out write speed in RAID-5/6/etc. and just go with what works well...
  9. If I can't get this board/controller combo above 100 meg/sec, my "new sig" will be very short-lived. Besides, my current rig remains the PC-DL, and I'm swapping hardware (RAM, etc.) around so much in the DFI rig that updating the sig will likely confuse more than help...
  10. The graph along the top is the data transfer rate from the start to the end of the array/drive. The bottom left shows the burst transfer rate to the controller/drive. With an add-on RAID controller, the burst rate will depend on the interface from the motherboard to the controller. With a single hard drive, the burst rate will indicate the transfer rate of the drive's interface itself (i.e. 150 meg/sec or 300 meg/sec for SATA).
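     To put rough numbers on it (standard theoretical ceilings, not anything measured here), the burst figure should approach the slowest link between the controller/drive cache and the host:

        LINK_MBS = {
            "PCI 32-bit/33MHz": 133,
            "PCIe x1 (1.0)":    250,
            "PCIe x4 (1.0)":    1000,
            "SATA 150":         150,
            "SATA 300":         300,
        }

        def expected_burst(*links):
            # Burst is roughly capped by the weakest link in the chain.
            return min(LINK_MBS[l] for l in links)

        print(expected_burst("PCIe x4 (1.0)", "SATA 300"))    # ~300 for a single SATA-300 drive on a PCI-E card
        print(expected_burst("PCI 32-bit/33MHz", "SATA 300")) # ~133 for the same drive behind plain PCI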
  11. The RAID drivers are actually for the RAID card, which is an 8-port Promise card. Windows is on the 2 Raptors, while the benchmarks are being run on an array of 6 400 GB drives, which are also on that controller. There are no drivers for any particular drive, just the RAID controller. This is similar to what happens when you use on-board RAID, as you have to do the "F6" thing during the Windows install and use a driver disk. Al
  12. 32 KB stripes are the smallest I can select with the PAM software. Also, changing from 64 to 32 only smoothed out the line and had no impact on burst speeds. BTW, burst has nothing to do with stripe size - it indicates how fast the controller is communicating with the motherboard. I strongly suspect that if I find the cause of the burst speed being so low, the rest will fall into place. Here is what this setup should be doing: Notice the significantly higher burst speed - this is to be expected with PCI-E, but it's just not happening here for some odd reason.
  13. OK, I went through the Promise array management utility stuff. The write cache was disabled for the 400 GB drives, but enabled for the 72 GB drives. I enabled the write cache for the 400s and re-ran the test with the same result. The controller cache was already enabled, and there are no SATA-300/NCQ options that I could find (and they shouldn't really impact a sequential read test from 6 drives anyhow). I then recreated the RAID-0 with 32 KB vs. 64 KB stripes: smoother transfer, but still the same limit. It really has me stumped... Al
  14. OK, finally got all the parts in and did the build. Everything is going decently, but my array performance is not up to snuff for some odd reason...

     Rough specs: Opteron dual core 1.8 GHz, eVGA 7800GT, Promise 8350 8-port PCI-E SATA controller, 2x 74 GB WD Raptors, 6x 400 GB WD RE. Build pics: here.

     After coming to the realization that my 2 GB of OCZ RAM absolutely refused to work properly with the board, even after trying these settings, I threw in the towel and put in a 512 MB stick of Kingston. I'll worry about the other RAM later. Then I went through a fiasco where the Windows install kept restarting from the CD after every reboot (i.e. after the first file copy). I finally gave up on the on-board RAID controllers and moved the pair of Raptors over to the Promise board - with the other 6 drives.

     OK, now for the fun. I got into Windows, using all recent firmware and drivers. I had set up the other 6 drives (6x400) as RAID-5. I fired up HDTach and this is what I saw: which seemed kind of odd to me. I thought perhaps RAID-5 might be taxing the controller, so I switched the array over to RAID-0: hmm, same result, but the speed seems to be bouncing around at a different rate this time.

     I then dug around online and found some benchmarks that were very similar to my setup (same card / motherboard). Check out entries 1 and 4 on that chart (click the right column for the HDTach output). Those guys are hitting 450 meg/sec burst speed, not to mention the much faster transfer from the drives. Those benches were done with a stripe size of 64 KB, which matched my choice.

     So... I figured it had to do with my setup, so I tried the following:
     - switching to/from SLI mode (8-4-N-8 / 2-1-1-16)
     - moving the controller to various slots (8x / 4x)
     - reinstalling XP with no other drivers except the Promise controller
     - installing the 64-bit Vista beta
     - disabling all possible extras in the BIOS
     - varying PCI-E bus speed (100/105/110)

     Every one of the above gave the exact same results. Something is capping the transfers to about 150 meg/sec burst and 100 meg/sec sustained. I got very similar results from the RAID-0 stripe of my pair of Raptors. So... I'm stumped. Any suggestions on what else I can check? I really don't get why my bench is nowhere near what those other guys are getting with the same hardware... TIA for any assistance. Al
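     For context, here's the back-of-the-envelope math on what I'd expect from this array (assuming each 400 GB RE drive manages roughly 60 meg/sec sustained, in the same ballpark as the single-Raptor trace - my assumption, not a measured figure):

        def raid_sequential_read(drives, per_drive_mbs, level=0):
            # Very rough sequential-read estimate: RAID-0 sums all members;
            # RAID-5 loses roughly one drive's worth of bandwidth to parity.
            if level == 5:
                return (drives - 1) * per_drive_mbs
            return drives * per_drive_mbs

        print(raid_sequential_read(6, 60, level=0))  # ~360 meg/sec expected for the 6x400 RAID-0
        print(raid_sequential_read(6, 60, level=5))  # ~300 meg/sec expected for RAID-5
        print(raid_sequential_read(2, 70, level=0))  # ~140 meg/sec for the Raptor pair
        # ...versus the ~100 meg/sec flat line actually observed on all of them.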
  15. I don't get it. The Newegg site says 2-3-2-5, which I admit could be wrong, but the OCZ site says the same thing for that exact model. Links: Newegg: http://www.newegg.com/Product/Product.asp?...N82E16820227210 OCZ: http://www.ocztechnology.com/products/memo...hannel_platinum What am I missing? (Mainly asking because I have this same RAM, bought from Newegg as well, and it's running at a tRAS of 5, not 8.) I did up the CAS, but only because my tired PC-DL doesn't like CAS 2 anymore (with any RAM). 'Tis OK though - my LANParty board is on the way.
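     For reference, a "2-3-2-5" rating reads in the standard CL-tRCD-tRP-tRAS order:

        timings = {
            "CL (CAS latency)": 2,
            "tRCD":             3,
            "tRP":              2,
            "tRAS":             5,
        }
        print(timings)

        # Boards usually program timings from the module's SPD at boot, and SPD
        # defaults are often more conservative than the advertised rating - which
        # might explain a tRAS of 8 showing up even though the spec sheet says 5
        # (just my guess, not something I've verified for this exact module).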