
HD 4870X2 From Sapphire Reviewed


razor


I'm a bit disappointed; I don't think this card lived up to the hype, or at least mine. ;>

But... it's still early, and more tests will come. Hopefully they will release better drivers.

I think I can make do with a 1GB 4870, overclock it a bit, and maybe some mods (pencil/volt). B:)

 

EDIT: sweet 666th post :ph34r:




Oh for the love...

 

Read all the reviews....

http://forums.overclockersclub.com/index.p...t=0&start=0

 

You'll see...

 

4870 X2 is god..

 

I'm not saying it's not; that thing's a beast.

I guess I was just disappointed in some of the games in the OCC review.

 

 

ATI = driver lag.

Everyone hang back and chill for a while.

 

QFT



If the improvement I have seen with the 3870X2 as the drivers matured is any indication, then this card will get even better. When you overclock the CPU this thing is just a beast: 22,520 in 3DMark06, and almost 7,000 in the Extreme preset in 3DMark Vantage with stock CPU clocks. With the processor at 3.6GHz, the 9800GX2 struggles to get close to 19,000 in 3DMark06 with a massive overclock. The potential of the 4870 X2 is massive when the eye candy is turned up.


@Puck

 

You really defended these results? What the heck, $500-550? You're better off with CrossFired 4850s.

 

Actually, for that price can't you get three 4850s? lol

 

I'm REALLY disappointed. I expected a lot higher scores on... everything.

I'm going to re-read it because I'm just shocked.

 

 

 

:edit:

 

@Puck

 

"qoute"

Also, the HD 4870X2 uses two 256-bit memory interfaces instead of just one, since each core has its own memory. The memory for the Sapphire HD 4870X2 is made by Hynix, uses modules listed as H5GQ1H24MJR TOC, and are rated for 4.0GB/sec.

 

I thought they had shared memory, and not a CrossFire bridge?
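
(Quick aside on the quoted memory specs, just to put numbers on them. This is only a rough sketch of my own, not from the review, and it assumes the "4.0" rating is the per-pin GDDR5 data rate in Gb/s, which is how these Hynix chips are normally specified:)

```python
# Back-of-the-envelope memory bandwidth from the quoted specs.
# Assumption: the "4.0" rating is the per-pin data rate in Gb/s
# (the usual way GDDR5 modules are rated), not GB/s per module.

bus_width_bits = 256    # memory interface per GPU, per the review
data_rate_gbps = 4.0    # rated per-pin transfer rate (assumption)

per_gpu_gb_s = bus_width_bits / 8 * data_rate_gbps   # bits -> bytes
print(f"Per-GPU theoretical bandwidth: {per_gpu_gb_s:.0f} GB/s")       # 128 GB/s
print(f"Combined across both GPUs:     {2 * per_gpu_gb_s:.0f} GB/s")   # 256 GB/s
```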

Are you seriously going to start this again?

 

First of all, do you want to know why performance between low and high res was so similar? It was tested with a 2.6GHz chip! That is a bottleneck for a single 4870, let alone an X2. I know that is something that is done for repeatability and comparison's sake, but it is just not nearly enough computing power to back up a card that fast.

 

Second, if I hear Crysis used as a comparison again I am going to punch a baby. That game is SCREWED. It favors Nvidia just like Call of Juarez favors ATI (which should also not be used anymore). Is that the card's fault? No, that is due to a mix of horrible coding on the developer's part and bad coding from the driver team. Using your logic I should bring up how it DESTROYED anything from Nvidia in the CoJ tests. Please, allow me to take part in my own blind fanboy immaturity:

 

[cojbenchyz2.jpg: Call of Juarez benchmark chart]

(I think I'm supposed to say something like "GG" after that :shrugs:)

 

Third, if you read my last post, I never said all the memory was shared. That would be ridiculous and cause all sorts of problems with each core having to keep from addressing the other core's memory locations. I clearly stated that although the "bridge chip" is still there, it is a totally new version with twice the bandwidth, eliminating it on paper as a communications bottleneck. It is shared through a hub accessible by both cores. I do not know where you got shared memory from, since I have always referred to it as a hub or cache... even though cache may not be the 100% correct term for it.

 

Read a few other reviews (and re-reading this one like you said sounds like a great idea as well) before you voice your negative opinions so openly. There is no doubt that it is not only the fastest card out right now, but it also releases at a price point significantly lower than Nvidia's $650 mistake of a card, the 280.


ATI's drivers do seem to improve month to month since AMD picked them up, so I would expect to see better performance soon.

There is also apparently an "XSP" (CrossFire SidePort) bridge between the GPUs allowing an additional bi-directional 5GB/s of bandwidth that, get this, is not even enabled yet, since the cards do not need it for current-gen titles. It will be enabled in the future through a driver update (or, *cough*, hacked drivers).

 

It will scale nicely with driver updates, in both current and future titles.

 

::edit:: More sideport info. Amazing.

The Sideport is a GPU-to-GPU CrossFire interconnect that should give the HD4870X2 an edge that the HD3870X2 did not have. For those of you that have had two or more video cards running in a CrossFire setup, you might have wondered how much data could be passed through the flimsy and thin-looking CrossFire bridge(s). The answer is, relatively speaking, not that much. While the CrossFire bridges were capable of passing 0.9 GB/second, the new Sideport interconnect allows for a theoretical limit of 5 GB/second, which, according to AMD, should increase HD4870X2 performance over two HD4870 cards in CrossFire by 10-15%.

It will actually be faster than two CrossFired 4870s.
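
(For what it's worth, here's a tiny sketch putting the quoted interconnect numbers side by side. The 0.9 GB/s, 5 GB/s, and 10-15% figures come straight from the quote above; the baseline frame rate is made up purely for illustration:)

```python
# Interconnect bandwidth figures as quoted above.
crossfire_bridge_gb_s = 0.9   # old external CrossFire bridge
sideport_gb_s = 5.0           # new GPU-to-GPU Sideport (XSP) link

print(f"Raw interconnect speed-up: {sideport_gb_s / crossfire_bridge_gb_s:.1f}x")  # ~5.6x

# AMD's claimed gain over two discrete HD4870s in CrossFire: 10-15%.
baseline_fps = 60.0   # hypothetical dual-HD4870 CrossFire frame rate
for gain in (0.10, 0.15):
    print(f"X2 at +{gain:.0%}: {baseline_fps * (1 + gain):.1f} fps")
```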


Did you guys also notice how COD4 *usually* heavily favors Nvidia? And the 4870X2 still wiped the floor.

 

Now imagine a 280 GX2... starting price being $1,399.99 (after a $100.00 mail-in rebate), but the performance...



Not going to happen, at least not on its current process.

 

The die is physically too large, the power draw is too high, the price to produce (and therefore sell) would be outrageous, and Nvidia is still "stuck" on dual-PCB designs for multi-GPU setups, so the heat output would not be controllable with a small heatsink sandwiched between two dies facing each other.

 

They would need a die shrink ASAP, while switching to a single-PCB manufacturing process... and the power draw would still probably be too high. A "9800GX2+" using the GTX+ die shrink may work, but that would put Nvidia in a place where their last-gen tech matches and beats their current stuff, which is not good publicity when trying to push the GT200 series cores.


