
Operation Skynet has begun...


cjloki


Over 14,336 Xeons and 7,168 NVIDIA Tesla M2050s. The Tesla M2050 is a passively cooled, kick-butt workstation version of the GTX 470. That is just insane.
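Quick back-of-the-envelope on what that adds up to. The per-part numbers below are my assumptions (commonly quoted figures, not from this thread): roughly 515 GFLOPS double precision per M2050, and about 70 GFLOPS per 2.93 GHz six-core Xeon (6 cores x 4 DP flops/cycle x 2.93 GHz).

```python
# Rough theoretical-peak estimate from the part counts above.
# Per-part throughput numbers are my assumptions, not from the thread:
#   M2050:  ~515 GFLOPS double precision
#   Xeon:   ~70 GFLOPS (2.93 GHz six-core, 4 DP flops/cycle/core)

XEONS, GPUS = 14_336, 7_168
XEON_GFLOPS, GPU_GFLOPS = 70.3, 515.0

cpu_pf = XEONS * XEON_GFLOPS / 1e6   # GFLOPS -> PFLOPS
gpu_pf = GPUS * GPU_GFLOPS / 1e6

print(f"CPUs: {cpu_pf:.2f} PFLOPS, GPUs: {gpu_pf:.2f} PFLOPS, "
      f"total: {cpu_pf + gpu_pf:.2f} PFLOPS peak")
# -> roughly 1.0 + 3.7 = ~4.7 PFLOPS theoretical peak,
#    with the GPUs supplying almost 80% of it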

They should take the 3DMark record with this once, just for kicks and giggles. O.O Of course it would be cheating, but who cares? 7,000 non-gaming GPUs probably wouldn't scale so well, but the CPU mark would be in the millions.

This could do more folding in a day than all of OCC can do in a year.

I can't even begin to fathom what a pain in the butt this was to set up.


Btw, does 0.2 petaflops actually make much difference?

Considering it's Linpack? No, not really. Linpack is one of those "trivially easy" to parallelize problems that scales with pretty much everything you throw at it.
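A quick sketch of why it scales so easily: Linpack is a dense LU factorization, so it does about (2/3)n^3 flops on only 8n^2 bytes of matrix data. The flops-per-byte ratio grows with n, so you can keep adding hardware and just grow the matrix until the machine is compute-bound again. Illustrative numbers only, under that simple one-pass-over-the-matrix assumption:

```python
# Why Linpack scales: dense LU does ~(2/3)*n^3 flops on ~8*n^2 bytes
# of matrix data, so arithmetic intensity grows linearly with n.
# The "one pass over the matrix" byte count is an optimistic floor,
# but it shows the trend.

def lu_intensity(n: int) -> float:
    """Flops per byte for dense LU on an n x n double-precision matrix."""
    flops = (2 / 3) * n**3
    bytes_moved = 8 * n**2  # 8 bytes per double, one pass
    return flops / bytes_moved

for n in (10_000, 100_000, 1_000_000):
    print(f"n = {n:>9,}: ~{lu_intensity(n):,.0f} flops/byte")
# Bigger machine -> bigger n -> higher intensity -> easy to keep busy.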

 

Until it's actually on the Top500 list I won't believe it though. :P I wonder if they even used the GPUs to get that score - if they did - it's still FAR slower than Jaguar for pretty much any real HPC workload right now.

 

 

EDIT: They did use the GPUs to get that score - which means it's nowhere near as fast as a conventional machine for established workloads. It's pretty difficult to get Tesla cards to run efficiently on real work. :P For comparison, Jaguar has 224,000 Opteron cores with no GPUs. This Chinese machine only has 14,000 Xeons. :P

 

Even Linpack is difficult on GPUs - look at the Top500 list for the other Chinese machine - it's sitting at less than 50% efficiency in terms of sustained FLOPs versus peak FLOPs.
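For concreteness, that efficiency figure is just Rmax / Rpeak (sustained Linpack over theoretical peak). Plugging in the June 2010 Top500 numbers as I remember them for Nebulae, the other Chinese GPU machine (take the exact figures as my recollection, not gospel):

```python
# Linpack efficiency = sustained FLOPS (Rmax) / theoretical peak (Rpeak).
# Nebulae figures below are my recollection of the June 2010 Top500
# list, not numbers from this thread.

rmax_pflops = 1.271   # sustained Linpack score
rpeak_pflops = 2.984  # theoretical peak

efficiency = rmax_pflops / rpeak_pflops
print(f"Nebulae efficiency: {efficiency:.1%}")  # ~42.6%, under 50%
```

Pure-CPU machines like Jaguar typically sustain 75%+ of peak on the same benchmark, which is the gap I'm talking about.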

Edited by Waco

