Sunday, August 17, 2008

Beyond Programmable Shading

An interesting presentation there from id Software, discussing some of the theory behind their next-generation engine (id Tech 6; note that id Tech 5 is what's being used in Rage). It also agrees with, and expounds upon in great detail, the voxel-based approach we see in Cinema 2.0. Very interesting. It does not fully resolve the issues of animating voxel structures, though, suggesting instead that dynamic geometry be rendered polygonally. On the other hand, the procedural generation of voxel surfaces for infinite detail is an interesting prospect I had not considered.
http://s08.idav.ucdavis.edu/olick-cu...m-in-games.pdf
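Just to make the "infinite detail" idea concrete for myself: if the voxel surface is defined by a procedural function rather than by stored data, you can query it at any resolution you like. Here's a minimal C++ sketch of that idea; it's purely my own illustration (not from Olick's slides), and every name and constant in it is made up.

```cpp
// Minimal sketch (my own illustration): a voxel "surface" defined procedurally
// as a signed distance function, so detail is generated on demand instead of
// being stored. Names and constants are hypothetical.
#include <cmath>
#include <cstdio>

// A crude value-noise stand-in; a real engine would use gradient/simplex noise.
float hashNoise(float x, float y, float z) {
    float n = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
    return n - std::floor(n);                       // pseudo-random in [0,1)
}

// Procedural signed distance: a sphere perturbed by noise. Negative = inside.
float surfaceDistance(float x, float y, float z) {
    float base = std::sqrt(x * x + y * y + z * z) - 1.0f;   // unit sphere
    float detail = 0.05f * hashNoise(x * 8.0f, y * 8.0f, z * 8.0f);
    return base + detail;
}

// Query whether a voxel at an arbitrary resolution is solid -- no storage needed.
bool voxelIsSolid(int ix, int iy, int iz, float voxelSize) {
    float cx = (ix + 0.5f) * voxelSize;
    float cy = (iy + 0.5f) * voxelSize;
    float cz = (iz + 0.5f) * voxelSize;
    return surfaceDistance(cx, cy, cz) < 0.0f;
}

int main() {
    // The same function answers queries at any resolution ("infinite" detail).
    std::printf("coarse: %d  fine: %d\n",
                voxelIsSolid(3, 0, 0, 0.25f), voxelIsSolid(300, 0, 0, 0.0025f));
}
```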

It's hard to imagine that we've gone from fully fixed-function GPU architectures to fully programmable ones in less than a decade.

Thursday, August 14, 2008

AMD's new Cinema 2.0 Initiative

Seems pretty cool; some of the stuff coming out of it is mind-blowing.

Cinema 2.0: The Next Chapter in the Ultimate Visual Experience?

Excerpts from the new Ruby 2.0 demo and the scorpion one:
http://download.amd.com/Corporate/AMD_RUBY_S04.mov
http://download.amd.com/Corporate/Ci...FINCHER_HD.mov

Stills from Ruby 2.0:
PCGH - Ruby 2.0: Screenshots and video of the new Radeon technology demo - 2008/08/Ruby_new_demo_000.jpg

Keep in mind this is all running in real time on a quad-core Phenom X4 9850 / ATI Radeon HD 4870 X2 platform.

Apparently this technology also ties into a new "cloud computing" rendering platform from a company called OTOY, and it is being used in a new online world, LivePlace/CitySpace, created by one of the MySpace founders. There's more info on them here, and yes, it's very, very pretty!
http://www.techcrunch.com/2008/07/09...ng-technology/
http://www.techcrunch.com/2008/08/11...-in-the-cloud/
It's interesting to note that the same technology behind OTOY was used to render some of the Transformers TV commercials in real time, as demonstrated in the videos on the sites I linked to. So it is absolutely cinema quality.

Another technology, apparently developed in partnership with OTOY for the LivePlaces platform's avatars, is a new 3D human model capture and real-time rendering system. The results speak for themselves; they look real.
http://www.joystiq.com/2008/08/12/am...-human-models/

Now, if they can deliver this level of photorealism in the gaming world, I will be a very happy person. And considering the recent strategic partnership between AMD and Blizzard, which will include the bundling of ATI GPUs with WoW, and noting that Activision Blizzard is now the largest gaming entity on Earth (market cap of $18 billion vs. EA's $16 billion), this may actually be a possibility.

Update:

http://www.tomshardware.com/news/Lar...cing,5769.html

Apparently, a company called JulesWorld is behind OTOY, and OTOY is just a technology product they developed. My bad. Interestingly, they have been using it to do real time raytracing and global illumination (photon mapping in particular) using voxel data sets since the 2900XT came out. That technology is the basis of the new Ruby demos as well. It's only with the 4870 that they've been able to do 60fps with AA though.
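For anyone who hasn't seen how a voxel data set gets ray cast in the first place, here's a toy C++ sketch: march a ray through an occupancy grid until it hits a solid cell. This is only a brute-force, fixed-step illustration of the general idea, not JulesWorld's renderer; the grid contents, names and step size are all invented, and a real implementation would use a proper grid traversal (DDA) or an octree.

```cpp
// Toy illustration of ray casting into a dense voxel occupancy grid with a
// fixed-step march. Grid contents and names are made up for the example.
#include <array>
#include <cstdio>

constexpr int N = 64;                        // grid resolution per axis
std::array<bool, N * N * N> grid{};          // true = solid voxel

bool solidAt(float x, float y, float z) {    // world space roughly [0,1)^3
    int ix = int(x * N), iy = int(y * N), iz = int(z * N);
    if (ix < 0 || iy < 0 || iz < 0 || ix >= N || iy >= N || iz >= N) return false;
    return grid[(iz * N + iy) * N + ix];
}

// March the ray until it hits a solid voxel; returns hit distance or -1.
float castRay(float ox, float oy, float oz, float dx, float dy, float dz) {
    const float step = 0.5f / N;             // half a voxel per step
    for (float t = 0.0f; t < 2.0f; t += step)
        if (solidAt(ox + dx * t, oy + dy * t, oz + dz * t)) return t;
    return -1.0f;
}

int main() {
    // Fill a small solid block in the middle of the grid.
    for (int z = 24; z < 40; ++z)
        for (int y = 24; y < 40; ++y)
            for (int x = 24; x < 40; ++x)
                grid[(z * N + y) * N + x] = true;

    float t = castRay(0.5f, 0.5f, 0.0f, 0.0f, 0.0f, 1.0f);
    std::printf("hit at t = %.3f\n", t);     // ~0.375 (front face of the block)
}
```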

The trouble with mapping this over to games is that voxel data sets are not always the easiest thing to animate, and they take up a LOT of storage space and memory. On the size front, though, the work of Ben Houston has helped a bit (see here). Also, in a demonstration video, the people from JulesWorld say they have a novel compression method developed in partnership with AMD (see here). And if you watch the video, there is clearly a great deal of animation and destruction going on. The question is, is that baked animation, or generated dynamically? For games it must be the latter, but I remain hopeful.
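To illustrate why run-length style compression (the kind of thing Houston's hierarchical RLE level set work exploits) helps so much with voxel storage: most rows of a voxel volume are long runs of "empty" or "solid", which collapse to a handful of pairs. A minimal sketch, not AMD/OTOY's actual scheme:

```cpp
// Not AMD/OTOY's scheme -- just a minimal illustration of why run-length
// encoding helps voxel storage: long empty/solid runs collapse to a few pairs.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Run { bool solid; uint32_t length; };

// Compress one row of voxels into (value, run length) pairs.
std::vector<Run> rleEncode(const std::vector<bool>& row) {
    std::vector<Run> runs;
    for (bool v : row) {
        if (!runs.empty() && runs.back().solid == v) ++runs.back().length;
        else runs.push_back({v, 1});
    }
    return runs;
}

int main() {
    std::vector<bool> row(1024, false);
    for (int i = 400; i < 420; ++i) row[i] = true;   // a thin slab of surface
    auto runs = rleEncode(row);
    std::printf("%zu voxels -> %zu runs\n", row.size(), runs.size());  // 1024 -> 3
}
```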

It's not as though voxel-based approaches to physics, animation and destruction don't exist. Just look at Digital Molecular Matter (OK, more voxel-like than voxels exactly; it splits objects into tetrahedral volumes), as used in the upcoming Star Wars: The Force Unleashed game. So it's entirely possible that this will work out. If this is really how the next generation of gaming is going to be, I can tell you I never would have predicted it. Looking at recent presentations on DirectX 11, it clearly leans towards subdivision surfaces and Bezier patches with displacement mapping. I thought it was either that, or further advancement of relief mapping (which performs a local, per-surface ray trace into the displacement map to find the correct location/depth of the current pixel). But ray-traced voxels? I never seriously considered it until now.
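For reference, the core of the relief-mapping trick I just mentioned looks roughly like this: step the view ray through the heightfield in texture space until it falls below the surface, then use that point's texture coordinate. The C++ below is just a CPU stand-in for what would normally be a pixel shader, with an invented analytic heightfield; a real shader would follow the linear search with a binary-search refinement.

```cpp
// Rough sketch of the relief-mapping idea: march a view ray through a
// heightfield in texture space until it drops below the surface. Plain C++
// stand-in for a pixel shader; the heightfield is a made-up function.
#include <cmath>
#include <cstdio>

float heightAt(float u, float v) {                  // depth in [0,1], 0 = top
    return 0.5f + 0.3f * std::sin(10.0f * u) * std::cos(10.0f * v);
}

// Step along the view direction (in tangent space) until the ray is below
// the heightfield; returns the (u, v) where the intersection was found.
void reliefIntersect(float u, float v, float dirU, float dirV, float dirDepth,
                     float* hitU, float* hitV) {
    const int steps = 64;
    float depth = 0.0f;
    for (int i = 0; i < steps; ++i) {
        if (depth >= heightAt(u, v)) break;         // ray passed under surface
        u += dirU / steps;
        v += dirV / steps;
        depth += dirDepth / steps;
    }
    *hitU = u; *hitV = v;                           // a real shader would also
}                                                   // binary-search to refine

int main() {
    float hu, hv;
    reliefIntersect(0.2f, 0.2f, 0.3f, 0.1f, 1.0f, &hu, &hv);
    std::printf("parallax-corrected texcoord: (%.3f, %.3f)\n", hu, hv);
}
```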

I mean, yeah, I could see voxels for the physical simulation of destructible objects, which is what DMM does, and definitely for fluid/smoke simulation (it's the only way). However, in all those cases, either the voxel object is substituted with a polygonal one for rendering (DMM), or a polygonal iso-surface is generated (fluid simulation), and only in rare cases is the voxel field directly ray traced/cast (smoke, certain variants of relief mapping). But I guess technology has improved faster than my personal imagination.
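The "directly ray cast" case for smoke is worth spelling out, since it's the one place voxels already get rendered without polygons: you just accumulate opacity along the ray instead of extracting an iso-surface. A bare-bones sketch with an invented density function:

```cpp
// Bare-bones illustration of directly ray casting a density field (the smoke
// case): accumulate opacity along the ray instead of extracting a polygonal
// iso-surface. The density function is invented for the example.
#include <cmath>
#include <cstdio>

float density(float x, float y, float z) {           // soft spherical puff
    float r2 = x * x + y * y + z * z;
    return std::fmax(0.0f, 1.0f - r2);
}

// Integrate absorption along the ray and return the resulting opacity.
float marchOpacity(float ox, float oy, float oz, float dx, float dy, float dz) {
    const float step = 0.02f;
    float transmittance = 1.0f;
    for (float t = 0.0f; t < 4.0f; t += step) {
        float d = density(ox + dx * t, oy + dy * t, oz + dz * t);
        transmittance *= std::exp(-d * step);         // Beer-Lambert absorption
    }
    return 1.0f - transmittance;
}

int main() {
    // Ray through the middle of the puff vs. one that grazes its edge.
    std::printf("center: %.3f  edge: %.3f\n",
                marchOpacity(0.0f, 0.0f, -2.0f, 0, 0, 1),
                marchOpacity(0.9f, 0.0f, -2.0f, 0, 0, 1));
}
```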

There are still more questions. How do they deal with the aliasing inherent in voxel representations? Are they using level sets? How does their custom AA system work? Is it simply the Monte Carlo method of casting random rays through the pixel (see the sketch below)? When will we see production-quality fluid simulations, like RealFlow, since clearly the renderer can handle it? Is it possible to blend voxel data with polygonal data and maintain performance? What kind of global illumination algorithms are they using exactly? Are they programming in pure Direct3D 9.0/10.0, or are they also using the CTM/Stream SDK to program the GPUs directly? Are voxels stuck onto a regular grid, or are they more free-form "volumes" as in DMM? Mainly, can they stretch and deform so that dynamically animated bodies remain contiguous and visually pleasing?
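If their AA really is the simple Monte Carlo approach, it would look something like this: average several jittered rays per pixel instead of a single centered one. Pure speculation on my part, with a dummy one-edge "scene" standing in for the ray tracer.

```cpp
// Speculative illustration of stochastic supersampling AA: average several
// jittered rays per pixel. traceRay() is a dummy black/white edge, not a
// real renderer.
#include <cstdio>
#include <random>

// Stand-in scene: everything left of x = 0.493 is white, the rest black.
float traceRay(float x, float /*y*/) { return x < 0.493f ? 1.0f : 0.0f; }

// Average jittered samples inside one pixel footprint.
float shadePixel(int px, int py, float pixelSize, int samples, std::mt19937& rng) {
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    float sum = 0.0f;
    for (int s = 0; s < samples; ++s) {
        float x = (px + jitter(rng)) * pixelSize;
        float y = (py + jitter(rng)) * pixelSize;
        sum += traceRay(x, y);
    }
    return sum / samples;                   // edge pixels land between 0 and 1
}

int main() {
    std::mt19937 rng(42);
    // Pixel 49 straddles the x = 0.493 edge, so it averages to a gray value;
    // pixel 10 is entirely on the white side.
    std::printf("edge pixel: %.2f  interior pixel: %.2f\n",
                shadePixel(49, 0, 0.01f, 16, rng),
                shadePixel(10, 0, 0.01f, 16, rng));
}
```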