Physics: What It's All About


Zertz

Introduction

 

Before delving right into it, let's figure out what physics simulation actually does to enhance our gaming experience. The things that benefit most from advanced physics simulation are environmental details, such as cloth flowing in the wind, collisions and explosions. It also allows players not only to directly interact with their environment, but to visually modify it. Throwing an object onto a bunch of others now provides life-like feedback. A missile could blow through a wall and leave a ton of debris flying all over the place, instead of simply a burnt wall.
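
To give an idea of what a physics engine actually computes, here is a minimal sketch of the kind of integration step it runs every frame. Everything here is hypothetical and for illustration only; real engines pile collision detection, constraint solving and much more on top of this.

    #include <vector>

    // Hypothetical minimal body state; real engines also track orientation,
    // angular velocity, collision shapes, and so on.
    struct Body {
        float px, py, pz;   // position (m)
        float vx, vy, vz;   // velocity (m/s)
        float invMass;      // 1/mass; 0 marks static geometry
    };

    // One semi-implicit Euler step under gravity: velocity is updated first,
    // then position, which stays stable at typical game time steps.
    void step(std::vector<Body>& bodies, float dt) {
        const float g = -9.81f;              // gravity along y (m/s^2)
        for (Body& b : bodies) {
            if (b.invMass == 0.0f) continue; // static geometry never moves
            b.vy += g * dt;                  // v += a * dt
            b.px += b.vx * dt;               // p += v * dt
            b.py += b.vy * dt;
            b.pz += b.vz * dt;
        }
    }

Running thousands of these updates, plus collision checks between every moving object, sixty times per second is exactly the workload the rest of this article is about.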

 

The hard part is accomplishing this in real time. Complex cinematics have always had an edge over in-game graphics because they are pre-rendered: a short cinematic may have taken hours to render at the studio and only has to be played back on your computer. For this reason, much more advanced techniques could, and still can, be used.

 

Before the introduction of Ageia's physics processing unit back in 2006, physics acceleration was hardly known even to the enthusiast crowd. Since then, for various more or less valid reasons, it has been at the heart of some rather heated discussions. Ageia's implementation consisted of a software layer that ran on their proprietary hardware physics processing unit, PPU for short. The chip was designed specifically to run their PhysX API, and it was actually pretty good at it; since that's all it could do, it had better be.

 

The venture was undoubtedly risky, and the chances of wide success ranged from slim to none. A hefty price tag led to an abysmal adoption rate among customers and, thus, developers. Programming for such a minority of the market just wasn't worth the effort, time and money. Only ASUS and BFG sold cards based on Ageia's PPU, which only added to the sense that the hardware was bound to fail.

 

However, the concept itself certainly made sense. General purpose processors such as those designed by AMD and Intel are mediocre at running massively parallel code such as graphics and physics simulation. ATI and NVIDIA, who had basically never talked about physics before, were suddenly interested in running physics simulation on their GPUs; they couldn't let a small player steal a part of this potentially lucrative market. Then chaos ensued.
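
The reason GPUs fit this workload so well is that most of a physics step is the same small computation repeated over thousands of independent bodies. As a rough sketch (a hypothetical kernel, not actual PhysX code), the integration loop shown earlier maps almost one-to-one onto CUDA, with one GPU thread per body:

    // Hypothetical CUDA version of the integration step: one thread per body.
    __global__ void stepKernel(float* py, float* vy, const float* invMass,
                               int n, float dt) {
        int i = blockIdx.x * blockDim.x + threadIdx.x; // global thread index
        if (i >= n || invMass[i] == 0.0f) return;      // skip static bodies
        vy[i] += -9.81f * dt;                          // v += a * dt
        py[i] += vy[i] * dt;                           // p += v * dt
    }

    // Launch enough 256-thread blocks to cover all n bodies:
    // stepKernel<<<(n + 255) / 256, 256>>>(d_py, d_vy, d_invMass, n, dt);

Since no thread depends on any other, a GPU with hundreds of cores can chew through the whole scene far faster than a handful of CPU cores iterating over it.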

 

One Technology, Three Companies, Three APIs

 

The world's largest graphics and processor manufacturers, AMD, Intel and NVIDIA, each headed in a different direction when it came to implementing physics.

In September 2007, Intel bought Havok, a software physics engine designed to run on processors, unlike Ageia's implementation. The takeover made sense considering Intel's business and their then-secret Larrabee project. Speaking of Larrabee, expect to see final silicon in the first half of 2010: a 45nm part most likely consisting of 24 to 32 cores and totaling over 2 billion transistors.

 

In February 2008, less than two years after the initial introduction of the PhysX hardware, NVIDIA bought Ageia and its associated physics technologies. Literally days after the acquisition, they announced that the PhysX engine was being ported to CUDA so it would run on the G80 and future architectures. That was the final blow to the standalone PPU.

 

Of course, AMD wasn't to be left behind and, in June 2008, surprisingly announced a partnership with Intel-owned Havok to bring the Havok physics toolkit to ATI GPUs. Last March, AMD showcased OpenCL-powered Havok physics running on an ATI GPU, but we haven't heard much since then.

 

Right out of the oven is AMD's announcement of a joint venture with Pixelux Entertainment around their open-source Bullet Physics engine, also built upon OpenCL. Until recently, Bullet was mostly aimed at the professional industry. Just like the Havok engine, this one will run on any GPU, provided it supports OpenCL, which ATI's HD 4000 and newer series, NVIDIA's GTX family and Intel's upcoming Larrabee do.
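
Since Bullet is open source, it's easy to see what using such an engine looks like from a developer's side. Here's a minimal, hedged sketch based on Bullet 2.x's well-known C++ setup (the class names come from Bullet's own HelloWorld sample; treat the details as illustrative, with cleanup and error handling omitted):

    #include <btBulletDynamicsCommon.h>

    int main() {
        // Standard Bullet 2.x world setup, as in the HelloWorld demo.
        btDefaultCollisionConfiguration config;
        btCollisionDispatcher dispatcher(&config);
        btDbvtBroadphase broadphase;
        btSequentialImpulseConstraintSolver solver;
        btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
        world.setGravity(btVector3(0, -9.81f, 0));

        // A 1 kg sphere of radius 0.5 m dropped from 10 m up.
        btSphereShape sphere(0.5f);
        btVector3 inertia(0, 0, 0);
        sphere.calculateLocalInertia(1.0f, inertia);
        btDefaultMotionState state(
            btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
        btRigidBody body(1.0f, &state, &sphere, inertia);
        world.addRigidBody(&body);

        // Step the simulation at 60 Hz for one second.
        for (int i = 0; i < 60; ++i)
            world.stepSimulation(1.0f / 60.0f);

        world.removeRigidBody(&body);
        return 0;
    }

The point of AMD's announcement is that the heavy lifting behind a call like stepSimulation() could eventually be dispatched to any OpenCL-capable GPU instead of the CPU.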

 

Basically, the big three are PhysX, Havok and Bullet. The last two can run on any GPU that supports OpenCL, while PhysX is implemented through CUDA, which is only supported on NVIDIA GPUs, for now at least. A multitude of other APIs and in-house toolkits grab the rest of the pie.
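
For the OpenCL-based engines, "any GPU" concretely means any device the OpenCL runtime reports. As a hedged sketch using the standard OpenCL 1.x host API, this is roughly how a game or engine could check what it has to work with:

    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        // Enumerate OpenCL platforms (AMD, NVIDIA and Intel drivers each expose one).
        cl_platform_id platforms[8];
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(8, platforms, &numPlatforms);

        for (cl_uint p = 0; p < numPlatforms; ++p) {
            // Ask each platform for its GPU devices.
            cl_device_id devices[8];
            cl_uint numDevices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &numDevices);
            for (cl_uint d = 0; d < numDevices; ++d) {
                char name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
                printf("OpenCL GPU found: %s\n", name);
            }
        }
        return 0;
    }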

 

Conclusion

 

Whether any of these APIs is destined for massive success or utter failure is a matter of opinion, but with the advent of OpenCL technologies that work on any GPU from AMD, NVIDIA and Intel, some developers are likely to be pulled away from the NVIDIA-exclusive PhysX. Unless NVIDIA ports their engine to OpenCL... who knows? CUDA's code structure is fairly similar (warning: PDF) to OpenCL's, so I wouldn't be too surprised to see that happen.
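
To show how close the two models really are, here is the hypothetical kernel from earlier with its OpenCL equivalents noted in the comments; the translation is nearly mechanical:

    // CUDA (hypothetical kernel)                  // OpenCL equivalent
    __global__ void stepKernel(                    // __kernel void stepKernel(
        float* py, float* vy,                      //     __global float* py, __global float* vy,
        const float* invMass, int n, float dt) {   //     __global const float* invMass, int n, float dt)
        int i = blockIdx.x * blockDim.x
              + threadIdx.x;                       // int i = get_global_id(0);
        if (i >= n || invMass[i] == 0.0f) return;  // identical
        vy[i] += -9.81f * dt;                      // identical
        py[i] += vy[i] * dt;                       // identical
    }
    // stepKernel<<<blocks, 256>>>(...);           // clEnqueueNDRangeKernel(queue, kernel, ...);

The memory model, work decomposition and kernel structure line up almost term for term, which is why such a port looks more tedious than difficult.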

 

I will conclude with some statistics courtesy of the August 2009 issue of Game Developer Magazine. Take them with a grain of salt, as the survey was conducted on a relatively small sample of just over 100 so-called senior developers. However, the numbers match what both Havok and NVIDIA advertise on their websites, so the figures seem fairly accurate.

 

Engine           Market Share   PC Games
PhysX (CUDA)     26.8%          Over 75 (incomplete list)
Havok (OpenCL)   22.7%          65
Bullet (OpenCL)  10.3%          About 29 (no official list)

 

As you most likely noticed, the numbers don't add up to 100%: the three engines together hold 59.8%, and the remaining 40.2% of the market is divided among a bunch of other APIs and in-house tools, each with less than 4% of the market.

 

For years now, physics has seemed like the next big thing in graphics, but the technology is still in its infancy and the best remains to be seen. Can three APIs remain competitive and survive side by side? Possibly. However, the real question is this: is NVIDIA's PhysX technology compelling enough for developers to totally ignore ATI and upcoming Intel GPUs?

 

Finding reliable market share numbers for the discrete GPU market isn't the easiest task, although the numbers generally favor NVIDIA. However, as a developer, would you aim for the majority of the market, or the whole market?

 

I've brought you the facts (let me know if anything is wrong) along with some mild opinions scattered here and there; now feel free to voice your own!

 

 

 

TL;DR: Read the last 6 lines at least!


Good information, Zertz, and about time someone started an appropriate thread for all of the physics engine talk and the advantages/disadvantages between ATI and NVIDIA. Funny how nobody who was arguing/discussing it elsewhere has seen fit to post comments here.


It'll be interesting to see how it plays out. I very seriously doubt NVIDIA's (CUDA) PhysX will go away any time soon, though it may not last long in the gaming market. I think it will likely always have a market in other industries.

 

EDIT: Another thought: I think NVIDIA's PhysX can run on CPUs, at least in the Vantage benchmark; everyone's CPU runs the PhysX test, just not with as good a score as if you ran it on the GPU.


Thanks guys! I may do this more often if there's interest ;)

 

EDIT: Another thought: I think NVIDIA's PhysX can run on CPUs, at least in the Vantage benchmark; everyone's CPU runs the PhysX test, just not with as good a score as if you ran it on the GPU.

 

It can indeed; it's just ridiculously slow. I don't think CUDA is going anywhere in the near future, but looking at Tesla sales, it doesn't look like NVIDIA has convinced the world how amazing CUDA is.


I believe everyone would benefit from developers focusing more on an open standard. Not only could it lead to more competitive pricing on the NVIDIA side, but it would also allow (as you mentioned) a whole-market approach.


Good information, Zertz, and about time someone started an appropriate thread for all of the physics engine talk and the advantages/disadvantages between ATI and NVIDIA. Funny how nobody who was arguing/discussing it elsewhere has seen fit to post comments here.

 

bla, bla, blah, lol

 

But do you think it's really not appropriate to bring up PhysX when a person is trying to decide between a GTX-series card and ATI right now? I just don't see how anyone could think it's not relevant, but then again, most of those who bash it haven't even used it, because they can't. Yet they'll get involved and say it's not relevant, no big deal, or a gimmick. I just can't see how people who haven't run it can dismiss it so harshly; it's kind of wrong for them to jump in, in my opinion, because PhysX is just as much part of the decision if a person isn't sure what card to get right now. It's not going away, and there are some great titles coming this year that are going to implement PhysX more and more, advancing the technology.

 

The reason I don't start a whole new discussion is that I'm just trying to help people when they need help deciding. I am not 100% behind PhysX, not in its current state, but I am 100% behind advanced physics in games; it's a very neat direction, and it's still in its infancy. I see PhysX changing within a year, or another PC standard being born. But even then, there will still be a few NVIDIA-only PhysX titles, though they will be fewer, at least until the next round of gaming consoles comes out. GPU PhysX is an advancement of the same free PhysX that the game consoles use, which is why most PhysX titles on PC are console ports where the PC gets more bells and whistles. That's why I can't see a PhysX-capable card as a bad investment; owners will get some good use out of it for the next year or more. Also, if there is a new standard on the horizon that isn't PhysX, you can bet the NVIDIA card will still run it as well. But I don't see PhysX going away without a fight.

 

Everyone must remember PhysX is free for all developers right now; not only that, their support is free as well. NVIDIA is working with developers like nobody else has, all for free. What company out of nowhere can afford to give away its work? NVIDIA makes its money back on the cards, or so they intend to. That's an attractive proposition that's going to be hard to match. I'm not saying PhysX will become the standard because of this, but I am saying that's why it's getting so popular so quickly. I'm also convinced it's going to stick around for a while even if a good alternative open to all GPUs pops up; if that alternative isn't free for developers, PhysX can still remain popular.

 

That said, I think NVIDIA has made some bad moves with PhysX. There are a few things they are doing that are slowing adoption down, and I am not a fan of those tactics. I cannot see why they aren't pushing separate cheap NVIDIA cards as a PPU for ATI systems. This seems really stupid and will slow them down; just think of all the money they could be making. I don't get these politics at all. I actually thought this was going to be one of the selling points of a separate card for PhysX; it seemed like a genius plan that couldn't lose. Look how many boards have multiple PCIe slots these days. It only makes sense to try to sell in that market: a cheap PPU for PhysX for anybody, kind of like Ageia had, but actually cheap. Then PhysX would have much more to offer everyone.

 

Either way, PhysX is no more than a step right now; it's not the end or the beginning. NVIDIA has made some ground here, a major step in the right direction, and no matter where it goes from here, they have pushed this hard and the ball will keep rolling, quicker and quicker. PhysX isn't the standard and may never be, but it will be around for some time, and there will be some good alternatives in the near future. PhysX has a head start on them and will remain attractive as long as it remains free. But it is in no condition to be a standard; that's a given.

 

As for CUDA, it was and is a big deal, but it is just a step too. OpenCL is here now and seems to be a much better alternative, but I see the OpenCL movement as coming out of CUDA: if CUDA hadn't been pushed as hard, we wouldn't see OpenCL like we do today. I don't think it was a waste, nor will PhysX be, even if it is 100% replaced by an open standard. Someone had to push in these directions, and I don't blame NVIDIA for trying to make money off their investments while they can. So as much as it may get replaced, it has served its purpose, and getting there matters just as much as how we get there.

