
Intel Dunnington


momoceio


Since it replaces Harpertown and is optimized for multi-processor communication, it's unlikely we will see a desktop version of this.

 

Larrabee is the chip that holds promise for desktop use. Wonder if nVidia is starting to feel the pressure.


Wonder if nVidia is starting to feel the pressure.

 

I think those worries are vastly blown out of proportion. Game and app developers are feeling far more pressure to integrate multi-core support than nVidia is feeling about losing market share, at least in the short term.

 

Can't help but think this is a waste; it's like they're just slapping on more cores and speeding up processors out of habit now. Not that some people don't need it; by all means, some people will need more speed. But man, they're just spitting out new stuff every few months now.

 

Pretty impressive to watch though.


Right now the constant improvements and evolutions seem unnecessary, but before long they'll enable impressive AI, voice recognition, and video recognition, to name a few. Imagine being able to order your AI teammates in Rainbow Six vocally, or to direct commands at different planets/fleets in SOASE without having to change views, select, move to a new screen view, and so on. R6 Las Vegas pt 5 already includes issuing kill orders through Dragon NaturallySpeaking.

 

Many small multicore chips everywhere. You'll never spot the nutters then as everyone will be talking to inanimate stuff.


Once software devs and compilers start to mature in an SMP environment, you'll see the type of evolution (revolution?) in computing that we did when the Pentium II came out... the Pentium was a hardware revolution for sure, but the software still didn't add up. By the Pentium II we had Windows 95 maturing and Windows 98 in its infancy, and that's about the time the Internet took hold... a rare but proper combination of events that blew up, and now look where we are.

 

The next step in the evolution is not faster speed but more independence (i.e. a core assigned to each task, and software written to run on it properly).


Yeah, I heard the next step is a chip with about 100 cores, each running at a much lower clock speed. Each core would then run a single process, or a few cores could be bundled together for more intensive tasks.
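That scheduling model (one independent process per core) is roughly how multi-process workloads get spread across a chip today. As a sketch of the idea only, and assuming nothing from this thread beyond "one task per core" (the worker function and task counts below are made up for illustration), a standard Python process pool does exactly this:

import os
from concurrent.futures import ProcessPoolExecutor

def crunch(task_id: int) -> int:
    # Stand-in for a CPU-bound job, e.g. one tile of a render.
    total = 0
    for i in range(1_000_000):
        total += (task_id * i) % 7
    return total

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    # One worker process per core; the OS scheduler spreads them
    # across the chip, so each core effectively runs its own task.
    with ProcessPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(crunch, range(cores * 4)))
    print(f"{len(results)} tasks finished across {cores} cores")

On a hypothetical 100-core chip the same code would simply fan out a hundred workers, and "bundling" cores for a heavy task just means giving one job several workers instead of one.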


  • 2 weeks later...
Can't help but think this is a waste; it's like they're just slapping on more cores and speeding up processors out of habit now. Not that some people don't need it; by all means, some people will need more speed. But man, they're just spitting out new stuff every few months now.

This is a waste from the viewpoint of home users and paper-pushing businesses. But for the companies that actually need the processing power, the CPU will probably always be a step or two behind.

 

I've linked a couple of pictures that I rendered with V-Ray in 3ds Max. The first one is one of the scenes I use for benchmarking different systems. The quality isn't what it could be because of all the modding I've done to it to meet the completion time I set. Using an Intel dual core at 4.0GHz and 2GB of RAM, it takes 12 to 14 minutes to render, depending on the operating system used. This time frame is pretty much ideal for benchmarking: slower systems won't take hours, and the fastest quad-core systems still take a matter of minutes.

 

 

[image: lounge_night_benchmark_800.jpg]

12 minutes doesn't seem like much time until things are put into perspective. Imagine having to rerender the scene to see the effect of every change: twenty revisions at 12 minutes each is four hours of nothing but waiting. Even a file as small as this one could turn into days of work, after it is drawn, before a final product would be considered finished.

 

I rendered this one from a commercially available wireframe. While quite a bit more complex than the scene above, it's nowhere near the complexity of what companies are working with every day. With an E8400 at 4.0GHz, 2GB of RAM, and an 8800GTS (G92), this scene took 8h31m16s to render.

 

 

[image: house.jpg]

Now imagine doing 20, 30, or more renders of a complex scene in order to reach a finished product. That time is substantially reduced through the use of render farms, and the more machines in the farm, the faster the render. But from a cost standpoint it is less expensive to use more capable processors than to add more machines.
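To put rough numbers on that (a back-of-the-envelope sketch only; the node counts and the near-linear scaling assumption are mine, not measurements), here's how the 8h31m16s render above would shrink as farm machines are added:

# Back-of-the-envelope render-farm scaling for the 8h31m16s scene above.
# Assumes near-linear scaling, which real farms only approximate.
single_node_seconds = 8 * 3600 + 31 * 60 + 16  # 8h31m16s

for nodes in (1, 2, 4, 8, 16):
    t = single_node_seconds / nodes
    hours, rem = divmod(int(t), 3600)
    minutes, seconds = divmod(rem, 60)
    print(f"{nodes:2d} machine(s): ~{hours}h{minutes:02d}m{seconds:02d}s per render")

Doubling the machines roughly halves the time but also doubles the hardware to buy, power, and maintain, which is exactly the cost trade-off that makes a more capable processor per node the cheaper option.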

 

For us average users, these new processors are a waste. But in industry, not only are they needed, they are long overdue.

