
Friday, June 20, 2008

Terrain and Atmospheric Scattering Source



I figured I would release the source for my terrain rendering demo. There aren't many terrain and atmospheric scattering examples on the net, so I figured I might as well post mine for anyone who would like to see it.
It's still a work in progress (one I paused about a year ago), so it's kind of immature, but it is the same code that produced the images I posted a while back. You'll probably need at least a 6-series Nvidia graphics card (or the ATI equivalent). For the terrain, the demo uses a simple quad tree with frustum culling, so it's not the greatest performer for anything above 512x512 heightmaps. Also, it is in Managed DirectX, but it shouldn't be too hard to port to XNA.



Anyway, enough of that. If you didn't know, the Creature Creator for Spore has been released, and it's an absolute blast to mess around with. The possibilities for what you can create are pretty much endless. Here are a couple that I made:
[Images: a spider creature and a Graptilosaurus creature]

Tuesday, June 3, 2008

Post Process Framework Sample

[Image: depth-of-field screenshot]

Today I'm posting a post processing framework sample. For those that don't know, post processing is any manipulation of the frame buffer after the scene has been rendered (but usually before the UI), hence the "post" in the name. It could be a simple blur, or a motion blur or depth-of-field technique. But what if you have many post processes? For instance, maybe we have bloom, motion blur, heat haze, depth of field, and other post processes that we need to apply after we render our scene. Connecting all these post processes together can get a little hairy, and that's where this sample comes in.

So let's talk about how we're going to structure our framework. At the very bottom of the hierarchy we have a PostProcessComponent (component from here on out). This class represents a single atomic process, that is, one that cannot be broken up into smaller pieces/objects. Each component has an input and an output in the form of textures. A component is also able to update the backbuffer. It is the simplest object in the hierarchy, and components are combined to form a PostProcessEffect (effect from here on out).

An effect contains a chain of components that implement a single post process, such as bloom or motion blur. The effect also handles routing the output of one component to the input of the next. This keeps the components independent of one another: the effect handles linking all the components it owns, so each component can stay very simple. Like components, effects have inputs and outputs in the form of textures, and they can be enabled and disabled dynamically at runtime.
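To make the hierarchy concrete, here is a rough sketch of what the component and effect shapes might look like. This is a hypothetical outline that mirrors the description above rather than the sample's exact code; the property names and the XNA types (Texture2D, GameTime) are assumptions.

```csharp
// Sketch only: names mirror the description above, not the sample's exact code.
public abstract class PostProcessComponent
{
    public Texture2D Input { get; set; }    // texture the component reads from
    public Texture2D Output { get; set; }   // texture the component renders to

    public bool RequiresSceneTexture { get; protected set; }
    public bool UpdatesSceneTexture { get; protected set; }
    public bool RequiresBackbuffer { get; protected set; }
    public bool RequiresDepthBuffer { get; protected set; }

    public abstract void LoadContent();
    public abstract void Draw(GameTime gameTime);
}

public class PostProcessEffect
{
    private List<PostProcessComponent> components = new List<PostProcessComponent>();

    public bool Enabled { get; set; }
    public bool IsFinal { get; set; }  // the final effect renders straight to the backbuffer

    public void AddComponent(PostProcessComponent component)
    {
        components.Add(component);
    }

    public void Draw(GameTime gameTime)
    {
        // link each component's input to the previous component's output,
        // so the components themselves never need to know about each other
        for (int i = 1; i < components.Count; i++)
            components[i].Input = components[i - 1].Output;

        foreach (PostProcessComponent c in components)
            c.Draw(gameTime);
    }
}
```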

Next we have the PostProcessManager (manager from here on out). The manager holds a chain of effects and handles linking the output of one effect to the input of the next. Just as with components, this keeps each effect independent of the others. The manager also takes care of linking the backbuffer and depth buffer to the components that need them.

Because each component is not dependent on other components, each one is very simple to implement. This not only helps in understanding the code, but also makes it more robust and makes it easier to track down errors in a component. Another nice feature of this framework is that you do not need a separate class deriving from PostProcessEffect for each effect, so you can create your effects dynamically at run time. Most of the "magic" occurs in the LoadContent() function inside PostProcessManager, so let's have a look at it:

public void LoadContent()
{
    #region Create common textures to be used by the effects
    PresentationParameters pp = mGraphicsDevice.PresentationParameters;
    int width = pp.BackBufferWidth;
    int height = pp.BackBufferHeight;

    SurfaceFormat format = pp.BackBufferFormat;

    resolveTexture = new ResolveTexture2D(mGraphicsDevice, width, height, 1, format);
    #endregion

    int i = 0;
    foreach (PostProcessEffect effect in effects)
    {
        effect.LoadContent();

        int j = 0;
        //if a component requires a backbuffer, add its function to the event handler
        foreach (PostProcessComponent component in effect.Components)
        {
            //if the component updates/modifies the "scene texture",
            //find all the components that need an up-to-date scene texture
            if (component.UpdatesSceneTexture)
            {
                int k = 0;
                foreach (PostProcessEffect e in effects)
                {
                    int l = 0;
                    foreach (PostProcessComponent c in e.Components)
                    {
                        //skip previous components and ourself
                        if (k < i)
                        {
                            continue;
                        }
                        else if (k == i && l <= j)
                        {
                            l++;
                            continue;
                        }
                        else if (c.RequiresSceneTexture)
                        {
                            component.OnUpdateSceneTexture += new UpdateSceneTextureEventHandler(c.UpdateSceneTexture);
                        }

                        l++;
                    }

                    k++;
                }
            }

            //add the component's UpdateBackbuffer method to the event handler
            if (component.RequiresBackbuffer ||
                (effect == effects[0] && component == effect.Components[0]))
                OnBackBufferResolve += new BackBufferResolveEventHandler(component.UpdateBackbuffer);

            if (component.RequiresDepthBuffer)
                OnDepthBufferResolve += new DepthBufferResolveEventHandler(component.UpdateDepthBuffer);

            j++;
        } //components foreach

        i++;
    } //effects foreach

    if (effects.Count > 0)
    {
        //ensure the last component renders to the backbuffer
        effects[effects.Count - 1].IsFinal = true;

        if (OnDepthBufferResolve != null)
        {
            depthBuffer = new BuildZBufferComponent(mContent, mGraphicsDevice);
            depthBuffer.Camera = camera;
            depthBuffer.Models = models;
            depthBuffer.LoadContent();
        }
    }
}

Here we first set up the resolve texture that will resolve the backbuffer every frame. Then we search through the components, looking for ones that update the backbuffer (scene texture). If a component does this, we need to find all the components after it that need the most recent version of the backbuffer, and attach their update functions to its OnUpdateSceneTexture event. Next, we search through all the effects, looking for components that need the backbuffer or the depth buffer, and attach their update functions to the OnBackBufferResolve and OnDepthBufferResolve events.
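Based on the events wired up in LoadContent(), the manager's per-frame flow might look something like the sketch below. This is a hypothetical outline of how the pieces fit together at draw time, not the sample's exact Draw() method; the Input/Output properties on the effect are assumed.

```csharp
// Sketch of the manager's per-frame flow implied by LoadContent() above.
public void Draw(GameTime gameTime)
{
    // grab a copy of the freshly rendered scene
    mGraphicsDevice.ResolveBackBuffer(resolveTexture);

    // hand the resolved scene to every component that asked for it
    if (OnBackBufferResolve != null)
        OnBackBufferResolve(resolveTexture);

    // render the depth buffer and hand it to the components that asked for it
    if (OnDepthBufferResolve != null)
    {
        depthBuffer.Draw(gameTime);
        OnDepthBufferResolve(depthBuffer.Output);
    }

    // run each enabled effect, linking effect outputs to inputs;
    // the final effect (IsFinal == true) renders straight to the backbuffer
    Texture2D current = resolveTexture;
    foreach (PostProcessEffect effect in effects)
    {
        if (!effect.Enabled)
            continue;

        effect.Input = current;
        effect.Draw(gameTime);
        current = effect.Output;
    }
}
```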

Below is what a sample post process effect chain could look like:

[Diagram: sample post process effect chain]

Here we have three post process effects: bloom, motion blur, and depth of field. Each contains its own components that perform operations on the texture handed to it. Some components need to blend with the backbuffer, and therefore need to update the backbuffer in turn so that later components have the most recent copy of it (these are the components linked with red lines).

Here is an example of how we could set up the above effect chain:

PostProcessEffect bloom = new PostProcessEffect(Content, Game.GraphicsDevice);
bloom.AddComponent(new BrightPassComponent(Content, Game.GraphicsDevice));
bloom.AddComponent(new GaussBlurComponent(Content, Game.GraphicsDevice));
bloom.AddComponent(new BloomCompositeComponent(Content, Game.GraphicsDevice));

PostProcessEffect motionblur = new PostProcessEffect(Content, Game.GraphicsDevice);
motionblur.AddComponent(new MotionVelocityComponent(Content, Game.GraphicsDevice));
motionblur.AddComponent(new MotionBlurHighComponent(Content, Game.GraphicsDevice));

PostProcessEffect depthOfField = new PostProcessEffect(Content, Game.GraphicsDevice);
depthOfField.AddComponent(new DownFilterComponent(Content, Game.GraphicsDevice));
depthOfField.AddComponent(new PoissonBlurComponent(Content, Game.GraphicsDevice));
depthOfField.AddComponent(new DepthOfFieldComponent(Content, Game.GraphicsDevice));

postManager.AddEffect(bloom);
postManager.AddEffect(motionblur);
postManager.AddEffect(depthOfField);

In the sample, I have included three post processes: bloom, depth of field, and a distortion process. I've documented the classes and functions pretty well, so those, combined with this write-up, should be enough to help you understand how everything works. Should you have any questions, just post in the comments.

Homework for the reader:

In this sample, I create a new render target for every component that needs one. This is bad. In the implementation I use for my own projects, I create a render target pool, which stores and fetches render targets. When a component requests a render target, the pool checks whether one with the specified dimensions and format has already been stored. If it has, the pool returns it; if it hasn't, the pool returns a new render target. This way we don't create a new render target for every component. The pool allows us to reuse render targets, which saves GPU memory and improves performance. So if we have 5 effects with 3 components each, we might only create 5 render targets (depending on how many share similar formats and dimensions) instead of 15.
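A minimal pool along those lines could be sketched as follows. The class and the dictionary key scheme are my own illustration, not code from the sample, and the RenderTarget2D constructor arguments are abbreviated from XNA's actual signature.

```csharp
// Sketch of a render target pool keyed on dimensions and surface format.
public class RenderTargetPool
{
    private GraphicsDevice device;
    private Dictionary<string, Stack<RenderTarget2D>> free =
        new Dictionary<string, Stack<RenderTarget2D>>();

    public RenderTargetPool(GraphicsDevice device)
    {
        this.device = device;
    }

    private static string Key(int width, int height, SurfaceFormat format)
    {
        return width + "x" + height + ":" + format;
    }

    // Fetch a target of the requested size/format, reusing a stored one if possible.
    public RenderTarget2D Fetch(int width, int height, SurfaceFormat format)
    {
        string key = Key(width, height, format);
        Stack<RenderTarget2D> stack;
        if (free.TryGetValue(key, out stack) && stack.Count > 0)
            return stack.Pop();

        return new RenderTarget2D(device, width, height, 1, format);
    }

    // Return a target to the pool once a component is done with it.
    public void Store(RenderTarget2D target)
    {
        string key = Key(target.Width, target.Height, target.Format);
        if (!free.ContainsKey(key))
            free[key] = new Stack<RenderTarget2D>();

        free[key].Push(target);
    }
}
```

A component would Fetch a target before drawing and Store it back once its output has been consumed by the next component in the chain, so targets with matching dimensions and formats get recycled across the whole effect chain.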




Edit (6/6/2008): Added support for cards that do not support the R32F (Single) SurfaceFormat (hopefully :) ). Removed in-progress auto-focusing code in the depth-of-field effect.

Edit (6/5/2008): Added some new functionality, namely being able to specify the states and color channel modulation of the SpriteBatch.