
Thursday, November 13, 2008

Water Game Component



I had some requests a while back for a more independent version of the water effect I provided in my camera animation tutorials, and just the other day I remembered that I had totally forgotten about it (doh!). So here is a DrawableGameComponent for the water effect. Have a look here to see it in action.

To setup the component we need to fill out a WaterOptions object that will be passed to the water component.

WaterOptions options = new WaterOptions();
options.Width = 257;
options.Height = 257;
options.CellSpacing = 0.5f;
options.WaveMapAsset0 = "Textures/wave0";
options.WaveMapAsset1 = "Textures/wave1";
options.WaveMapVelocity0 = new Vector2(0.01f, 0.03f);
options.WaveMapVelocity1 = new Vector2(-0.01f, 0.03f);
options.WaveMapScale = 2.5f;
options.WaterColor = new Vector4(0.5f, 0.79f, 0.75f, 1.0f);
options.SunColor = new Vector4(1.0f, 0.8f, 0.4f, 1.0f);
options.SunDirection = new Vector3(2.6f, -1.0f, -1.5f);
options.SunFactor = 1.5f;
options.SunPower = 250.0f;

mWaterMesh = new Water(this);
mWaterMesh.Options = options;
mWaterMesh.EffectAsset = "Shaders/Water";
mWaterMesh.World = Matrix.CreateTranslation(Vector3.UnitY * 2.0f);
mWaterMesh.RenderObjects = DrawObjects;

Here we fill out various options such as the width and height, cell spacing, the normal map asset names, etc. We then create the water component and assign it the options object, provide the filename of the Water.fx shader and the water's position, and finally assign its RenderObjects delegate a function that will be used to draw the objects in your scene.

The component tries to be relatively independent of how you represent your game objects. All it asks is that you provide a function that takes a reflection matrix. This function should go through the objects you want reflected/refracted and combine the reflection matrix with each object's world matrix.

Here's an example of what your DrawObjects() function might look like.
private void DrawObjects(Matrix reflMatrix)
{
    //assumes your components expose a World matrix property
    foreach (DrawableGameComponent mesh in Components)
    {
        Matrix oldWorld = mesh.World;
        mesh.World = oldWorld * reflMatrix;

        mesh.Draw(mGameTime);

        mesh.World = oldWorld;
    }
}
mWaterMesh.RenderObjects is a delegate with the signature: public delegate void RenderObjects(Matrix reflectionMatrix);

Basically this function should just go through your game objects and render them.

Lastly, before you draw the objects in your scene, you need to send the water component the ViewProjection matrix and the camera's position using WaterMesh.SetCamera(), and then call WaterMesh.UpdateWaterMaps() to update the reflection and refraction maps. After this, you can clear your framebuffer and draw your objects. To see how this effect looks, take a look at my camera animation tutorials.
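To make the ordering concrete, here is a sketch of what a per-frame Draw method might look like with the component in place. The mCamera and DrawObjects names are just assumptions for this example; substitute whatever your game uses.

```csharp
//Sketch only: assumes mCamera exposes View, Projection and Position,
//and that mWaterMesh and DrawObjects are set up as shown above.
protected override void Draw(GameTime gameTime)
{
    //give the water the current camera before rendering anything
    mWaterMesh.SetCamera(mCamera.View * mCamera.Projection, mCamera.Position);

    //render the reflection and refraction maps first
    mWaterMesh.UpdateWaterMaps(gameTime);

    //now draw the visible frame
    GraphicsDevice.Clear(Color.CornflowerBlue);
    DrawObjects(Matrix.Identity); //scene geometry, no reflection transform

    base.Draw(gameTime); //the Water component draws itself here
}
```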

Water GameComponent:

using System;
using System.Collections.Generic;
using System.Text;

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Content;

namespace WaterSample
{
//delegate that the water component calls to render the objects in the scene
public delegate void RenderObjects(Matrix reflectionMatrix);

/// <summary>
/// Options that must be passed to the water component before Initialization
/// </summary>
public class WaterOptions
{
//width and height must be of the form 2^n + 1
public int Width = 257;
public int Height = 257;
public float CellSpacing = .5f;

public float WaveMapScale = 1.0f;

public int RenderTargetSize = 512;

//offsets for the texcoords of the wave maps updated every frame
public Vector2 WaveMapOffset0;
public Vector2 WaveMapOffset1;

//the direction to offset the texcoords of the wave maps
public Vector2 WaveMapVelocity0;
public Vector2 WaveMapVelocity1;

//asset names for the normal/wave maps
public string WaveMapAsset0;
public string WaveMapAsset1;

public Vector4 WaterColor;
public Vector4 SunColor;
public Vector3 SunDirection;
public float SunFactor;
public float SunPower;
}

/// <summary>
/// Drawable game component for water rendering. Renders the scene to reflection and refraction
/// maps that are projected onto the water plane and are distorted based on two scrolling normal
/// maps.
/// </summary>
public class Water : DrawableGameComponent
{
#region Fields
private RenderObjects mDrawFunc;

//vertex and index buffers for the water plane
private VertexBuffer mVertexBuffer;
private IndexBuffer mIndexBuffer;
private VertexDeclaration mDecl;

//water shader
private Effect mEffect;
private string mEffectAsset;

//camera properties
private Vector3 mViewPos;
private Matrix mViewProj;
private Matrix mWorld;

//maps to render the refraction/reflection to
private RenderTarget2D mRefractionMap;
private RenderTarget2D mReflectionMap;

//scrolling normal maps that we will use as a
//a normal for the water plane in the shader
private Texture mWaveMap0;
private Texture mWaveMap1;

//user specified options to configure the water object
private WaterOptions mOptions;

//tells the water object if it needs to update the refraction
//map itself or not. Since refraction just needs the scene drawn
//regularly, we can:
// --Draw the objects we want refracted
// --Resolve the back buffer and send it to the water
// --Skip computing the refraction map in the water object
private bool mGrabRefractionFromFB = false;

private int mNumVertices;
private int mNumTris;
#endregion

#region Properties

public RenderObjects RenderObjects
{
set { mDrawFunc = value; }
}

/// <summary>
/// Name of the asset for the Effect.
/// </summary>
public string EffectAsset
{
get { return mEffectAsset; }
set { mEffectAsset = value; }
}

/// <summary>
/// The render target that the refraction is rendered to.
/// </summary>
public RenderTarget2D RefractionMap
{
get { return mRefractionMap; }
set { mRefractionMap = value; }
}

/// <summary>
/// The render target that the reflection is rendered to.
/// </summary>
public RenderTarget2D ReflectionMap
{
get { return mReflectionMap; }
set { mReflectionMap = value; }
}

/// <summary>
/// Options to configure the water. Must be set before
/// the water is initialized. Should be set immediately
/// following the instantiation of the object.
/// </summary>
public WaterOptions Options
{
get { return mOptions; }
set { mOptions = value; }
}

/// <summary>
/// The world matrix of the water.
/// </summary>
public Matrix World
{
get { return mWorld; }
set { mWorld = value; }
}

#endregion

public Water(Game game) : base(game)
{

}

public override void Initialize()
{
base.Initialize();

//build the water mesh
mNumVertices = mOptions.Width * mOptions.Height;
mNumTris = (mOptions.Width - 1) * (mOptions.Height - 1) * 2;
VertexPositionTexture[] vertices = new VertexPositionTexture[mNumVertices];

Vector3[] verts;
int[] indices;

GenTriGrid(mOptions.Height, mOptions.Width, mOptions.CellSpacing, mOptions.CellSpacing,
Vector3.Zero, out verts, out indices);

//copy the verts into our PositionTextured array
for (int i = 0; i < mOptions.Width; ++i)
{
for (int j = 0; j < mOptions.Height; ++j)
{
int index = i * mOptions.Width + j;
vertices[index].Position = verts[index];
vertices[index].TextureCoordinate = new Vector2((float)j / mOptions.Width, (float)i / mOptions.Height);
}
}

mVertexBuffer = new VertexBuffer(Game.GraphicsDevice,
VertexPositionTexture.SizeInBytes * mOptions.Width * mOptions.Height,
BufferUsage.WriteOnly);
mVertexBuffer.SetData(vertices);

mIndexBuffer = new IndexBuffer(Game.GraphicsDevice, typeof(int), indices.Length, BufferUsage.WriteOnly);
mIndexBuffer.SetData(indices);

mDecl = new VertexDeclaration(Game.GraphicsDevice, VertexPositionTexture.VertexElements);
}

protected override void LoadContent()
{
base.LoadContent();

mWaveMap0 = Game.Content.Load<Texture2D>(mOptions.WaveMapAsset0);
mWaveMap1 = Game.Content.Load<Texture2D>(mOptions.WaveMapAsset1);

PresentationParameters pp = Game.GraphicsDevice.PresentationParameters;
SurfaceFormat format = pp.BackBufferFormat;
MultiSampleType msType = pp.MultiSampleType;
int msQuality = pp.MultiSampleQuality;

mRefractionMap = new RenderTarget2D(Game.GraphicsDevice, mOptions.RenderTargetSize, mOptions.RenderTargetSize,
1, format, msType, msQuality);
mReflectionMap = new RenderTarget2D(Game.GraphicsDevice, mOptions.RenderTargetSize, mOptions.RenderTargetSize,
1, format, msType, msQuality);

mEffect = Game.Content.Load<Effect>(mEffectAsset);

//set the parameters that shouldn't change.
//Some of these might need to change every once in a while;
//move them to UpdateEffectParams if you need that functionality.
if (mEffect != null)
{
mEffect.Parameters["WaveMap0"].SetValue(mWaveMap0);
mEffect.Parameters["WaveMap1"].SetValue(mWaveMap1);

mEffect.Parameters["TexScale"].SetValue(mOptions.WaveMapScale);

mEffect.Parameters["WaterColor"].SetValue(mOptions.WaterColor);
mEffect.Parameters["SunColor"].SetValue(mOptions.SunColor);
mEffect.Parameters["SunDirection"].SetValue(mOptions.SunDirection);
mEffect.Parameters["SunFactor"].SetValue(mOptions.SunFactor);
mEffect.Parameters["SunPower"].SetValue(mOptions.SunPower);

mEffect.Parameters["World"].SetValue(mWorld);
}
}

public override void Update(GameTime gameTime)
{
float timeDelta = (float)gameTime.ElapsedGameTime.TotalSeconds;

mOptions.WaveMapOffset0 += mOptions.WaveMapVelocity0 * timeDelta;
mOptions.WaveMapOffset1 += mOptions.WaveMapVelocity1 * timeDelta;

if (mOptions.WaveMapOffset0.X >= 1.0f || mOptions.WaveMapOffset0.X <= -1.0f)
mOptions.WaveMapOffset0.X = 0.0f;
if (mOptions.WaveMapOffset1.X >= 1.0f || mOptions.WaveMapOffset1.X <= -1.0f)
mOptions.WaveMapOffset1.X = 0.0f;
if (mOptions.WaveMapOffset0.Y >= 1.0f || mOptions.WaveMapOffset0.Y <= -1.0f)
mOptions.WaveMapOffset0.Y = 0.0f;
if (mOptions.WaveMapOffset1.Y >= 1.0f || mOptions.WaveMapOffset1.Y <= -1.0f)
mOptions.WaveMapOffset1.Y = 0.0f;
}

public override void Draw(GameTime gameTime)
{
UpdateEffectParams();

Game.GraphicsDevice.Indices = mIndexBuffer;
Game.GraphicsDevice.Vertices[0].SetSource(mVertexBuffer, 0, VertexPositionTexture.SizeInBytes);
Game.GraphicsDevice.VertexDeclaration = mDecl;

mEffect.Begin(SaveStateMode.None);

foreach (EffectPass pass in mEffect.CurrentTechnique.Passes)
{
pass.Begin();
Game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, mNumVertices, 0, mNumTris);
pass.End();
}

mEffect.End();
}

/// <summary>
/// Set the ViewProjection matrix and position of the Camera.
/// </summary>
/// <param name="viewProj"></param>
/// <param name="pos"></param>
public void SetCamera(Matrix viewProj, Vector3 pos)
{
mViewProj = viewProj;
mViewPos = pos;
}

/// <summary>
/// Updates the reflection and refraction maps. Called
/// on update.
/// </summary>
/// <param name="gameTime"></param>
public void UpdateWaterMaps(GameTime gameTime)
{
/*------------------------------------------------------------------------------------------
* Render to the Reflection Map
*/
//clip objects below the water line, and render the scene upside down
GraphicsDevice.RenderState.CullMode = CullMode.CullClockwiseFace;

GraphicsDevice.SetRenderTarget(0, mReflectionMap);
GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, mOptions.WaterColor, 1.0f, 0);

//reflection plane in local space
Vector4 waterPlaneL = new Vector4(0.0f, -1.0f, 0.0f, 0.0f);

Matrix wInvTrans = Matrix.Invert(mWorld);
wInvTrans = Matrix.Transpose(wInvTrans);

//reflection plane in world space
Vector4 waterPlaneW = Vector4.Transform(waterPlaneL, wInvTrans);

Matrix wvpInvTrans = Matrix.Invert(mWorld * mViewProj);
wvpInvTrans = Matrix.Transpose(wvpInvTrans);

//reflection plane in homogeneous space
Vector4 waterPlaneH = Vector4.Transform(waterPlaneL, wvpInvTrans);

GraphicsDevice.ClipPlanes[0].IsEnabled = true;
GraphicsDevice.ClipPlanes[0].Plane = new Plane(waterPlaneH);

Matrix reflectionMatrix = Matrix.CreateReflection(new Plane(waterPlaneW));

if (mDrawFunc != null)
mDrawFunc(reflectionMatrix);

GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;

GraphicsDevice.SetRenderTarget(0, null);


/*------------------------------------------------------------------------------------------
* Render to the Refraction Map
*/

//if the application is going to send us the refraction map
//exit early. The refraction map must be given to the water component
//before it renders
if (mGrabRefractionFromFB)
{
GraphicsDevice.ClipPlanes[0].IsEnabled = false;
return;
}

//update the refraction map, clip objects above the water line
//so we don't get artifacts
GraphicsDevice.SetRenderTarget(0, mRefractionMap);
GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, mOptions.WaterColor, 1.0f, 1);

//offset the clip plane so it sits slightly above the water plane
waterPlaneL.W = 2.5f;

//if we're below the water line, don't perform clipping.
//this allows us to see the distorted objects from under the water
if (mViewPos.Y < mWorld.Translation.Y)
{
GraphicsDevice.ClipPlanes[0].IsEnabled = false;
}

if (mDrawFunc != null)
mDrawFunc(Matrix.Identity);

GraphicsDevice.ClipPlanes[0].IsEnabled = false;

GraphicsDevice.SetRenderTarget(0, null);
}

/// <summary>
/// Updates effect parameters related to the water shader.
/// </summary>
private void UpdateEffectParams()
{
//update the reflection and refraction textures
mEffect.Parameters["ReflectMap"].SetValue(mReflectionMap.GetTexture());
mEffect.Parameters["RefractMap"].SetValue(mRefractionMap.GetTexture());

//normal map offsets
mEffect.Parameters["WaveMapOffset0"].SetValue(mOptions.WaveMapOffset0);
mEffect.Parameters["WaveMapOffset1"].SetValue(mOptions.WaveMapOffset1);

mEffect.Parameters["WorldViewProj"].SetValue(mWorld * mViewProj);

mEffect.Parameters["EyePos"].SetValue(mViewPos);
}

/// <summary>
/// Generates a grid of vertices to use for the water plane.
/// </summary>
/// <param name="numVertRows">Number of rows. Must be 2^n + 1. Ex. 129, 257, 513.</param>
/// <param name="numVertCols">Number of columns. Must be 2^n + 1. Ex. 129, 257, 513.</param>
/// <param name="dx">Cell spacing in the x dimension.</param>
/// <param name="dz">Cell spacing in the z dimension.</param>
/// <param name="center">Center of the plane.</param>
/// <param name="verts">Outputs the constructed vertices for the plane.</param>
/// <param name="indices">Outputs the constructed triangle indices for the plane.</param>
private void GenTriGrid(int numVertRows, int numVertCols, float dx, float dz,
Vector3 center, out Vector3[] verts, out int[] indices)
{
int numVertices = numVertRows * numVertCols;
int numCellRows = numVertRows - 1;
int numCellCols = numVertCols - 1;

int mNumTris = numCellRows * numCellCols * 2;

float width = (float)numCellCols * dx;
float depth = (float)numCellRows * dz;

//===========================================
// Build vertices.

// We first build the grid geometry centered about the origin and on
// the xz-plane, row-by-row and in a top-down fashion. We then translate
// the grid vertices so that they are centered about the specified
// parameter 'center'.

//verts.resize(numVertices);
verts = new Vector3[numVertices];

// Offsets to translate grid from quadrant 4 to center of
// coordinate system.
float xOffset = -width * 0.5f;
float zOffset = depth * 0.5f;

int k = 0;
for (int i = 0; i < numVertRows; ++i)
{
for (int j = 0; j < numVertCols; ++j)
{
// Negate the depth coordinate to put in quadrant four.
// Then offset to center about coordinate system.
verts[k] = new Vector3(0, 0, 0);
verts[k].X = j * dx + xOffset;
verts[k].Z = -i * dz + zOffset;
verts[k].Y = 0.0f;

Matrix translation = Matrix.CreateTranslation(center);
verts[k] = Vector3.Transform(verts[k], translation);

++k; // Next vertex
}
}

//===========================================
// Build indices.

//indices.resize(mNumTris * 3);
indices = new int[mNumTris * 3];

// Generate indices for each quad.
k = 0;
for (int i = 0; i < numCellRows; ++i)
{
for (int j = 0; j < numCellCols; ++j)
{
indices[k] = i * numVertCols + j;
indices[k + 1] = i * numVertCols + j + 1;
indices[k + 2] = (i + 1) * numVertCols + j;

indices[k + 3] = (i + 1) * numVertCols + j;
indices[k + 4] = i * numVertCols + j + 1;
indices[k + 5] = (i + 1) * numVertCols + j + 1;

// next quad
k += 6;
}
}
}
}
}

Water.fx shader:

//Water effect shader that uses reflection and refraction maps projected onto the water.
//These maps are distorted based on the two scrolling normal maps.

float4x4 World;
float4x4 WorldViewProj;

float4 WaterColor;
float3 SunDirection;
float4 SunColor;
float SunFactor; //the intensity of the sun specular term.
float SunPower; //how shiny we want the sun specular term on the water to be.
float3 EyePos;

// Texture coordinate offset vectors for scrolling
// normal maps.
float2 WaveMapOffset0;
float2 WaveMapOffset1;

// Two normal maps and the reflection/refraction maps
texture WaveMap0;
texture WaveMap1;
texture ReflectMap;
texture RefractMap;

//scale used on the wave maps
float TexScale;

static const float R0 = 0.02037f;

sampler WaveMapS0 = sampler_state
{
Texture = <WaveMap0>;
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = WRAP;
AddressV = WRAP;
};

sampler WaveMapS1 = sampler_state
{
Texture = <WaveMap1>;
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = WRAP;
AddressV = WRAP;
};

sampler ReflectMapS = sampler_state
{
Texture = <ReflectMap>;
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = CLAMP;
AddressV = CLAMP;
};

sampler RefractMapS = sampler_state
{
Texture = <RefractMap>;
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = CLAMP;
AddressV = CLAMP;
};

struct OutputVS
{
float4 posH : POSITION0;
float3 toEyeW : TEXCOORD0;
float2 tex0 : TEXCOORD1;
float2 tex1 : TEXCOORD2;
float4 projTexC : TEXCOORD3;
float4 pos : TEXCOORD4;
};

OutputVS WaterVS( float3 posL : POSITION0,
float2 texC : TEXCOORD0)
{
// Zero out our output.
OutputVS outVS = (OutputVS)0;

// Transform vertex position to world space.
float3 posW = mul(float4(posL, 1.0f), World).xyz;
outVS.pos.xyz = posW;
outVS.pos.w = 1.0f;

// Compute the vector from the eye to the vertex (normalized in the pixel shader).
outVS.toEyeW = posW - EyePos;

// Transform to homogeneous clip space.
outVS.posH = mul(float4(posL, 1.0f), WorldViewProj);

// Scroll texture coordinates.
outVS.tex0 = (texC * TexScale) + WaveMapOffset0;
outVS.tex1 = (texC * TexScale) + WaveMapOffset1;

// Generate projective texture coordinates from camera's perspective.
outVS.projTexC = outVS.posH;

// Done--return the output.
return outVS;
}

float4 WaterPS( float3 toEyeW : TEXCOORD0,
float2 tex0 : TEXCOORD1,
float2 tex1 : TEXCOORD2,
float4 projTexC : TEXCOORD3,
float4 pos : TEXCOORD4) : COLOR
{
projTexC.xyz /= projTexC.w;
projTexC.x = 0.5f*projTexC.x + 0.5f;
projTexC.y = -0.5f*projTexC.y + 0.5f;
projTexC.z = .1f / projTexC.z;

toEyeW = normalize(toEyeW);
SunDirection = normalize(SunDirection);

// Light vector is opposite the direction of the light.
float3 lightVecW = -SunDirection;

// Sample normal map.
float3 normalT0 = tex2D(WaveMapS0, tex0);
float3 normalT1 = tex2D(WaveMapS1, tex1);

//unroll the normals retrieved from the normalmaps
normalT0.yz = normalT0.zy;
normalT1.yz = normalT1.zy;

normalT0 = 2.0f*normalT0 - 1.0f;
normalT1 = 2.0f*normalT1 - 1.0f;

float3 normalT = normalize(0.5f*(normalT0 + normalT1));
float3 n1 = float3(0,1,0); //we'll just use the y unit vector for spec reflection.

//get the reflection vector from the eye
float3 R = normalize(reflect(toEyeW,normalT));

float4 finalColor;
finalColor.a = 1;

//compute the fresnel term to blend reflection and refraction maps
float ang = saturate(dot(-toEyeW,n1));
float f = R0 + (1.0f-R0) * pow(1.0f-ang,5.0);

//also blend based on distance
f = min(1.0f, f + 0.007f * EyePos.y);

//compute the reflection from sunlight, hacked in color, should be a variable
float sunFactor = SunFactor;
float sunPower = SunPower;

if(EyePos.y < pos.y)
{
sunFactor = 7.0f; //these could also be sent to the shader
sunPower = 55.0f;
}
float3 sunlight = sunFactor * pow(saturate(dot(R, lightVecW)), sunPower) * SunColor;

float4 refl = tex2D(ReflectMapS, projTexC.xy + projTexC.z * normalT.xz);
float4 refr = tex2D(RefractMapS, projTexC.xy - projTexC.z * normalT.xz);

//only use the refraction map if we're under water
if(EyePos.y < pos.y)
f = 0.0f;

//interpolate the reflection and refraction maps based on the fresnel term and add the sunlight
finalColor.rgb = WaterColor * lerp( refr, refl, f) + sunlight;

return finalColor;
}

technique WaterTech
{
pass Pass1
{
// Specify the vertex and pixel shader associated with this pass.
vertexShader = compile vs_2_0 WaterVS();
pixelShader = compile ps_2_0 WaterPS();

CullMode = None;
}
}
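As an aside, the R0 constant at the top of the shader is the base reflectance of the Schlick fresnel approximation for an air-water interface: R0 = ((n1 - n2) / (n1 + n2))^2, with n of roughly 1.0 for air and 1.333 for water. A quick check of the arithmetic (plain C#, just to verify the constant):

```csharp
//verifies the shader's R0 constant from the indices of refraction
double nAir = 1.0;
double nWater = 1.333;
double r0 = Math.Pow((nAir - nWater) / (nAir + nWater), 2.0);
//r0 is approximately 0.0204, matching the R0 in Water.fx
```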

Edit: A demo of the water component is now available.

Friday, November 7, 2008

Scientific Visualization

This semester I've been taking a class in scientific visualization. It's pretty interesting, covering a lot of techniques: color visualization, human vision and color perception, contours, isosurfacing, volume rendering, and flow visualization such as stream lines, stream surfaces, and texture-based methods.

There are good and bad parts to this course. The bad part is that it is a fairly new graduate course with no prerequisites; the professor is just trying to build up interest in it. This means we aren't actually coding the different algorithms, but using a visualization framework, VTK, with C++/Java/Python/Tcl to implement the techniques. The good part is that this semester is one of the busiest I've had, so minimizing my workload is a good thing :)

Now on to some pictures.

Contours, Heightmaps:

brain1 brainHM2

Isosurfacing:

25

2_0 6

7

Volume Rendering:

dist_0_25 vol1_inv

mip1

Glyphs, Stream lines, stream surfaces:

glyph2 glyph0

streamlines0 streamlines3

streamtubes0 streamtubes5

streamsurface0 streamsurface5

streamsurface3

Tuesday, October 7, 2008

Interviews, research, and future posts

I don't have a new sample/tutorial today. I have been very busy with interviews, research, and school as of late.

As this is my last year of school, I've been interviewing with various companies and trying to get an on-line portfolio together. I recently had a great time in Portland meeting with Intel.

Last spring I was involved in research on using non-pinhole impostors for reflections and refractions. We submitted to Eurographics (specifically EGSR) but unfortunately didn't get accepted, so this semester we are looking to address the shortcomings that some of the reviewers noted and resubmit to I3D. Until I get that out of the way, there probably won't be any new samples/tutorials for a couple of weeks.

As for future posts, I've been working on rain as a particle system and as a post process (see Tatarchuk's AMD/ATI paper on rain). Besides this, I've been wanting to do a series of samples/tutorials on different lighting methods. We all know Gouraud and [Blinn-]Phong shading, but I want to cover other methods such as Cook-Torrance, Oren-Nayar, Ward, and others. I also might do a tutorial on Depth Impostors that would build off of the Billboard Impostor tutorial I wrote earlier in the year.

For now, here are a couple of teaser images that we submitted to EGSR. You're looking through a glass bunny's ear. You can see that with regular depth impostors, you are missing a significant amount of data for the teapot lid.

Planar Pinhole Camera Depth Impostor


Non-Pinhole Camera Depth Impostor

Wednesday, July 30, 2008

Other recent blog posts

There are a few posts I have seen on other blogs/sites that I think are interesting, and I would like to share them with anyone who reads my blog.

Andy Patrick has a series of useful "efficient development" posts regarding speeding up and making game development easier. Check it out:

Efficient Development, Part I
Efficient Development, Part II
Efficient Development, Part III
Efficient Development, Part IV


Next up there were a couple of articles on Gamasutra related to 2D fluid dynamics that are also pretty interesting.

Fluid Dynamics, Part I
Fluid Dynamics, Part II


Christer Ericson has an interesting post on using cellular automata for path finding. And one of the guys over at XNAInfo already has a working XNA demo implementing the idea.

Path finding with cellular automata
Game of Life on the GPU

Got any interesting links? Post 'em in the comments!

Friday, July 18, 2008

Dual-Paraboloid Variance Shadow Mapping

dp_vsm

Edit: Added the video that I recently made

I have to say, I really like variance shadow mapping. It's such a simple (yet ingenious) technique to implement, and it provides such nice looking results. I hadn't had the need to implement the technique before, but I'm glad I did. Last post we implemented dual-paraboloid shadow mapping, and those of you with a PS 3.0 graphics card were able to get semi-soft shadows with percentage closer filtering. Now, by replacing the PCF filter with variance shadow mapping, we can fit all the code inside the PS 2.0 standard. Anyway, on to the code.

Variance Shadow Mapping Paper + Demo

Building the shadow maps:

Variance shadow mapping is really simple to implement. The first thing we need to change is the surface format of our front and rear shadow maps: we create them as RG32F or RG16F instead of R32F/R16F. This lets us store the depth of the pixel in the red channel and the squared depth in the green channel. So our new pixel shader for building the depth/shadow maps is this:

return float4(z, z * z, 0, 1);
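For reference, creating the two-channel floating-point render targets in XNA might look like the sketch below. The size and field names are placeholders, not part of the original demo.

```csharp
//Sketch: two-channel floating-point shadow maps for (depth, depth * depth).
//SurfaceFormat.HalfVector2 is RG16F; use SurfaceFormat.Vector2 for RG32F.
int shadowMapSize = 512; //placeholder size
mShadowMapFront = new RenderTarget2D(GraphicsDevice, shadowMapSize, shadowMapSize,
                                     1, SurfaceFormat.HalfVector2);
mShadowMapBack = new RenderTarget2D(GraphicsDevice, shadowMapSize, shadowMapSize,
                                    1, SurfaceFormat.HalfVector2);
```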

Blurring the shadow maps:

Variance shadow mapping improves upon standard shadow mapping by storing a distribution of depths at each pixel (z and z * z) instead of a single depth. And because it stores a distribution of depths, we can blur the shadow maps, something that would produce funky/incorrect results if we were just doing standard shadow mapping with a PCF filter.

So, after we have created our depth maps, we blur them with a separable Gaussian blur. This performs two passes on each shadow map: the first is a horizontal blur and the second a vertical blur. There is a wealth of information on the internet on how to do this, so I won't explicitly cover it. Here's what our front shadow map looks like after being blurred:

depth_front
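If you want to compute the blur kernel yourself, the normalized weights for a small separable Gaussian can be generated like this (plain C#; the function name and sigma value are just assumptions for illustration):

```csharp
//computes normalized 1D Gaussian weights for one separable blur pass
float[] GaussianWeights(int taps, float sigma)
{
    float[] weights = new float[taps];
    int mid = taps / 2;
    float sum = 0.0f;
    for (int i = 0; i < taps; ++i)
    {
        float x = i - mid;
        weights[i] = (float)Math.Exp(-x * x / (2.0f * sigma * sigma));
        sum += weights[i];
    }
    //normalize so the weights sum to 1 and the blur doesn't darken the map
    for (int i = 0; i < taps; ++i)
        weights[i] /= sum;
    return weights;
}

//e.g. GaussianWeights(5, 1.0f) for the horizontal and vertical 5-tap passes
```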

Variance shadow mapping:

We build our texture coordinates exactly as in the previous method of shadow mapping, but the depth comparison is a little different. You can refer to the VSM paper for an in-depth discussion, but here is the gist of it. Since we filtered our shadow maps with a Gaussian blur, we need to recover the moments over that filter region. The moments are simply the depth and squared depth we stored in the texture. From these we can compute the mean depth and the variance at the pixel, and the variance can be interpreted as a quantitative measure of the width of the distribution (Donnelly/Lauritzen). This measure places a bound on the distribution, expressed by Chebyshev's inequality.

float depth;
float mydepth;
float2 moments;
if(alpha >= 0.5f)
{
moments = tex2D(ShadowFrontS, P0.xy).xy;
depth = moments.x;
mydepth = P0.z;
}
else
{
moments = tex2D(ShadowBackS, P1.xy).xy;
depth = moments.x;
mydepth = P1.z;
}

float lit_factor = (mydepth <= moments[0]);

float E_x2 = moments.y;
float Ex_2 = moments.x * moments.x;
float variance = min(max(E_x2 - Ex_2, 0.0) + SHADOW_EPSILON, 1.0);
float m_d = (moments.x - mydepth);
float p = variance / (variance + m_d * m_d); //Chebyshev's inequality

texColor.xyz *= max(lit_factor, p + .2f); //lighten the shadow just a bit (with the + .2f)

return texColor;

5x5 Gaussian Blur

dp_vsm2


9x9 Gaussian Blur

dp_vsm3

And there you go. Nice looking dual-paraboloid soft shadows thanks to variance shadow mapping.

As before, your card needs to support either RG16F or RG32F formats (sorry again Charles :) ). You can refer to the VSM paper and demo on how to map 2 floats to a single ARGB32 pixel if your card doesn't support the floating point surface formats.


Thursday, July 17, 2008

Dual-Paraboloid Shadow Maps

DPShadow2

Last time I introduced using dual-paraboloid environment mapping for reflections. Well now we're going to apply the same process to shadows. So if you haven't looked at my previous post, read it over before going on.

Creating the depth/shadow maps is exactly the same as creating the reflection maps, with one exception: instead of outputting a color in the pixel shader, we output the depth of the pixel, like so:

return depth.x / depth.y;

Where depth.x is the depth of the pixel and depth.y is the w component. And here is the resulting depth/shadow map for the front hemisphere.

depth_f

Now, to map the shadows the process is also very similar to how we generated the reflections. We follow a similar process in the pixel shader:

  • Generate the texture coordinates for the front and rear paraboloids
  • Generate the depth of the pixel
  • Test to see if the pixel is in shadow

We generate the texture coordinates exactly as when we generated the reflection texture coordinates. To generate the depth of the pixel, we take the length of the vector from the vertex to the origin of the paraboloid (0, 0, 0) and divide it by the light attenuation. To check which hemisphere we are in, we calculate an alpha value from the z component of the transformed vertex, offset by .5f:

float L = length(pos);
float3 P0 = pos / L;

float alpha = .5f + pos.z / LightAttenuation;
//generate texture coords for the front hemisphere
P0.z = P0.z + 1;
P0.x = P0.x / P0.z;
P0.y = P0.y / P0.z;
P0.z = L / LightAttenuation;

P0.x = .5f * P0.x + .5f;
P0.y = -.5f * P0.y + .5f;

float3 P1 = pos / L;
//generate texture coords for the rear hemisphere
P1.z = 1 - P1.z;
P1.x = P1.x / P1.z;
P1.y = P1.y / P1.z;
P1.z = L / LightAttenuation;

P1.x = .5f * P1.x + .5f;
P1.y = -.5f * P1.y + .5f;

Now that we have generated our texture coordinates, we need to test the depth of the pixel against the depth stored in the shadow map. To do this we index either the front or rear shadow map with the texture coordinates we generated, fetch the stored depth, and compare it to our depth. If the stored depth is less than our depth, the pixel is in shadow.

float depth;
float mydepth;
if(alpha >= 0.5f)
{
depth = tex2D(ShadowFrontS, P0.xy).x;
mydepth = P0.z;
}
else
{
depth = tex2D(ShadowBackS, P1.xy).x;
mydepth = P1.z;
}

//lighten the shadow just a bit so it isn't completely black
if((depth + SHADOW_EPSILON) < mydepth)
texColor.xyz *= 0.3f;

return texColor;

DPShadow

And that's it; now we have dual-paraboloid shadow mapping. If you have a pixel shader 3.0 graphics card, the shadows also have a percentage closer filter applied to them. You may also notice seams in the shadows. This is because the paraboloids are split along the z = 0 plane (they look down the +/- z-axis), and this is one of the problems of using paraboloid mapping for shadows: you have to be careful where you place the split plane to avoid this situation. Pixels near the center of either hemisphere suffer little distortion. But this is just a tutorial, so I didn't worry too much about it.

Also, your graphics card must support R32F or R16F surface formats to run the demo out of the box (sorry Charles ;) ). Otherwise, you must use the ARGB32 format and pack the depth values into the 4 channels. Here is some code to pack/unpack to/from an ARGB32 surface format: you pass the depth value to the pack method when you render to the shadow maps, and you pass the float4 color to the unpack method when you fetch from the shadow maps. I decided not to implement this so the code wouldn't be complicated by something that doesn't add to the tutorial.

//pack the depth in a 32-bit rgba color
float4 mapDepthToARGB32(const float value)
{
const float4 bitSh = float4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
const float4 mask = float4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
float4 res = frac(value * bitSh);
res -= res.xxyz * mask;
return res;
}

//unpack the depth from a 32-bit rgba color
float getDepthFromARGB32(const float4 value)
{
const float4 bitSh = float4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
return(dot(value, bitSh));
}
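As a sanity check on the packing math, here is a quick Python sketch of the same pack/unpack, with the HLSL swizzle spelled out scalar-by-scalar (the function names mirror the shader; the sketch itself is not part of the demo code):

```python
import math

def frac(x):
    return x - math.floor(x)

def map_depth_to_argb32(value):
    """Pack a [0,1) depth across four channels, as in the HLSL version."""
    bit_sh = (256.0**3, 256.0**2, 256.0, 1.0)
    mask = (0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0)
    res = [frac(value * s) for s in bit_sh]
    swiz = (res[0], res[0], res[1], res[2])   # res.xxyz
    return [r - s * m for r, s, m in zip(res, swiz, mask)]

def get_depth_from_argb32(rgba):
    """Unpack the depth by dotting with the reciprocal shift vector."""
    bit_sh = (1.0/256.0**3, 1.0/256.0**2, 1.0/256.0, 1.0)
    return sum(c * s for c, s in zip(rgba, bit_sh))
```

Round-tripping a depth through pack then unpack recovers it to well within 8-bit channel precision.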

Next time I'll introduce using variance shadow mapping with our dual-paraboloid shadow mapping to give nice soft shadows that we can still use with pixel shader 2.0 cards.


Wednesday, July 16, 2008

Dual-Paraboloid Reflections

dragon

I recently had to investigate dual-paraboloid reflections at work for an unnamed console. What are these, you ask? Great question! :) Let's start with some background.

The standard way of calculating reflections is to use an environment map, more specifically a cube map. In my last tutorial on reflections, this basic type of reflection mapping was used to compare against billboard impostor reflections. Now, cubemaps are great for static scenes, and are relatively cheap to fetch from in a shader. However, if you have a dynamic scene, you have to update all 6 sides of the cubemap (not technically true: aggressive culling and other optimizations can guarantee at most 5 sides). Holy crap, now we have to render our scene 6 times!

This is where dual-paraboloid reflections come in. They are a view-independent method of rendering reflections just like cubemaps. Except you only have to update 2 textures, not 6! The downside is that you are going to lose quality for speed, but unless you have to have high-quality reflections, paraboloid reflections will probably provide sufficient results.

Reference articles:

View-Independent Environment Maps

Shadow Mapping for Hemispherical and Omnidirectional Light Sources

Dual Paraboloid Mapping in the Vertex Shader

In the interest of keeping this post from getting too long, I won't go into great detail on the mathematical process. I suggest you refer to the first and third papers for an in-depth discussion on the details.

Now let's move on to what exactly paraboloid mapping is, starting with the paraboloid itself.

paraboloid

The basic idea is that for any origin O, we can divide the scene into two hemispheres, front and rear. For each of these hemispheres there exists a paraboloid surface that will focus any ray traveling in the direction of O into the direction of the hemisphere. Here is a 2D picture demonstrating the idea:

paraboloid2d

A brief math overview:

What we need to find is the intersection point where the incident ray intersects the paraboloid surface. To do this we need to know the incident ray and the reflected ray. Now because the paraboloid reflects rays in the same direction, it is easy to compute the reflection vector: it's the forward direction of the hemisphere! So the front hemisphere's reflection vector will always be <0, 0, 1> and the rear hemisphere's reflection vector will always be <0, 0, -1>. Easy! And the incident ray is calculated the same as with environment mapping by reflecting the ray from the pixel position to the eye across the normal of the 3D pixel.

Now all we have to do is find the normal of the intersection which we will use to map our vertices into paraboloid space. To find the normal, we add the incident and reflected vectors and divide the x and y components by the z value.
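A quick way to convince yourself this works: build N = I + R for the front hemisphere (where R is always <0, 0, 1>), then mirror the incident direction across that normal; you should get the hemisphere direction back. A hypothetical Python sketch (all names here are mine, not from the demo):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def paraboloid_normal(incident):
    """Normal at the point where an incident ray hits the front
    paraboloid, whose reflected direction is always (0, 0, 1)."""
    i = normalize(incident)
    return normalize((i[0], i[1], i[2] + 1.0))

def reflect(i, n):
    """Mirror direction i across unit normal n."""
    d = sum(a * b for a, b in zip(i, n))
    return tuple(2.0 * d * b - a for a, b in zip(i, n))
```

Reflecting any incident direction across the normal computed this way lands on <0, 0, 1>, which is exactly the focusing property the paraboloid gives us.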

Generating the Paraboloid maps:

What we are basically going to do, in the vertex shader, is place each vertex ourselves after distorting it by the paraboloid. First we need to transform the vertex into the view space of the paraboloid 'camera'. We don't apply the projection matrix, since we're going to place the point ourselves.

output.Position = mul(input.Position, WorldView); // view space only, no projection

Next we find the distance from the vertex to the origin of the paraboloid, and use it to normalize the position:

float L = length( output.Position.xyz );
output.Position = output.Position / L;

Now we need to find the x and y coordinates of the point where the incident ray intersects the paraboloid surface.

output.Position.z = output.Position.z + 1;
output.Position.x = output.Position.x / output.Position.z;
output.Position.y = output.Position.y / output.Position.z;

Finally we set the z value as the distance from the vertex to the origin of the paraboloid, scaled and biased by the near and far planes of the paraboloid 'camera'.

output.Position.z = (L - NearPlane) / (FarPlane - NearPlane);
output.Position.w = 1;

And the only thing we need to add in the pixel shader is to clip pixels that fall behind the viewpoint, using HLSL's intrinsic clip() function.
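Putting the generation steps together, here is a hypothetical Python sketch of the whole mapping for one view-space vertex in the front hemisphere (function and parameter names are illustrative, not from the demo shader):

```python
import math

def paraboloid_project(p, near_plane, far_plane):
    """Map a view-space point p = (x, y, z) in the front hemisphere
    into paraboloid clip space, mirroring the vertex shader steps."""
    length = math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)
    x, y, z = (c / length for c in p)      # normalize by distance L
    x /= z + 1.0                           # paraboloid distortion
    y /= z + 1.0
    depth = (length - near_plane) / (far_plane - near_plane)
    return (x, y, depth, 1.0)              # w = 1
```

A point straight down the axis lands at the center of the map, while a point at the hemisphere's rim (z = 0) lands at the edge.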


front

frontWF

Reflections with paraboloid maps:

In the reflection pixel shader we will: generate the reflection vector the same way as cube mapping, generate texture coordinates for both the front and rear paraboloids' textures, and blend the samples taken from the textures.

The texture coordinates are generated exactly as in the generation step. We also scale and bias them to correctly index a D3D texture. Then we take a sample from each map and pick the sample with the greater color value:

// calculate the front paraboloid map texture coordinates
float2 front;
front.x = R.x / (R.z + 1);
front.y = R.y / (R.z + 1);
front.x = .5f * front.x + .5f; //bias and scale to correctly sample a d3d texture
front.y = -.5f * front.y + .5f;

// calculate the back paraboloid map texture coordinates
float2 back;
back.x = R.x / (1 - R.z);
back.y = R.y / (1 - R.z);
back.x = .5f * back.x + .5f; //bias and scale to correctly sample a d3d texture
back.y = -.5f * back.y + .5f;

float4 forward = tex2D( FrontTex, front ); // sample the front paraboloid map
float4 backward = tex2D( BackTex, back ); // sample the back paraboloid map

float4 finalColor = max(forward, backward);
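The same math in a hypothetical Python sketch, which also shows why the max blend works: a reflection vector in the front hemisphere produces in-range front coordinates, while the corresponding back coordinates fall outside [0, 1] (and so, with clamped borders, sample a darker value). Names here are illustrative:

```python
def paraboloid_texcoords(r):
    """Front and back paraboloid texture coordinates for a normalized
    reflection vector r, with the scale/bias and y-flip needed to
    sample a D3D texture."""
    rx, ry, rz = r
    front = (0.5 * (rx / (rz + 1.0)) + 0.5,
             -0.5 * (ry / (rz + 1.0)) + 0.5)
    back = (0.5 * (rx / (1.0 - rz)) + 0.5,
            -0.5 * (ry / (1.0 - rz)) + 0.5)
    return front, back
```

For a forward-facing reflection vector, the front coordinates stay inside the texture while the back coordinates run off it, so the front sample wins the max.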

ss1

Optimizations:

If you align the paraboloid 'camera' such that it is always facing down the +/- z axis, you don't need to transform the vertices by the view matrix of the camera. You only need to do a simple translation of the vertex by the camera position.

Conclusion:

As you can see, paraboloid maps give pretty good results. They won't give you the quality of cubemaps, but they are faster to update and require less memory. And in the console world, requiring less is almost reason enough to pick this method over cubemaps.

One drawback of paraboloid maps is that the environment geometry has to be sufficiently tessellated or we will have noticeable artifacts on our reflector. Another drawback is that on spherical objects we will see seams. However, with objects that are reasonably complex (such as the Stanford bunny or dragon) and are not simple shapes, the seams will not be as noticeable.

Next time I will present dual-paraboloid mapping for use with real-time omnidirectional shadow mapping of point lights.

Friday, June 20, 2008

Terrain and Atmospheric Scattering Source



I figured I would release my source for the terrain rendering demo. There aren't that many terrain and atmospheric scattering examples on the net, so I figured I might as well post mine for anyone that would like to see it.
It's still a work in progress (that I paused about a year ago), so it's kind of immature. But it is the same code that produced the images I posted awhile back. You'll probably need at least a 6-series Nvidia graphics card (or ATI equivalent). For the terrain, the demo uses a simple quad tree with frustum culling, so it's not the greatest performer for anything above 512x512 heightmaps. Also, it is in Managed DirectX, but it shouldn't be too hard to port to XNA.



Anyway, enough of that. If you didn't know, the Creature Creator for Spore has been released, and it's an absolute blast to mess around with. The possibilities on what you can create are pretty much endless. Here's a couple that I made:
CRE_spider-0685c592_sml
CRE_Graptilosaurus-0685c593_sml

Tuesday, June 3, 2008

Post Process Framework Sample

dof

Today I'm posting a post processing framework sample. For those that don't know, post processing is any manipulation of the frame buffer after the scene has been rendered (but usually before the UI), hence the word "post" in the name. This process could be a simple blurring, or it could be a motion blur or depth of field technique. But what if you have many post processes? For instance, maybe we have bloom, motion blur, heat haze, depth of field, and other post processes that we need to apply after we render our scene. Connecting all these post processes together can get a little hairy, and that's what this sample helps us do.

So let's talk about how we're going to structure our framework. At the very bottom of the hierarchy we have a PostProcessComponent (component from here on out). This class represents a single atomic process, that is, one that cannot be broken up into smaller pieces/objects. Each component has an input and an output in the form of textures. A component is also able to update the backbuffer. It is the simplest object in the hierarchy and is combined with other components to form a PostProcessEffect (effect from here on out).

An effect contains a chain of components that implements a single post process such as bloom or motion blur. An effect also routes the output of one component to the input of the next. In this way the components can be independent of one another, with the effect handling the linking of all the components it owns. Because of this, each component is very simple. Also, like components, effects have inputs and outputs in the form of textures. Effects can also be enabled/disabled dynamically at runtime.

Next we have the PostProcessManager(aka manager). The manager is comprised of a chain of effects, and it handles linking the output of one effect to the input of the next effect. And just like with components, this enables each effect to be independent of the next. The manager also takes care of linking the backbuffer and depth buffer to components that need it.
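At its core, the manager's linking idea is just a fold over the effect chain: each enabled effect consumes the previous effect's output. A hypothetical Python sketch (the real manager also wires up backbuffer and depth buffer events, which are omitted here; the dict-based effects are stand-ins):

```python
def run_chain(effects, scene_texture):
    """Feed each enabled effect the output of the previous one;
    the final output is what reaches the screen."""
    texture = scene_texture
    for effect in effects:
        if effect.get("enabled", True):
            texture = effect["process"](texture)
    return texture
```

Because each effect only sees its input texture, disabling one simply drops it out of the chain without the others noticing.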

Because each component is not dependent on other components, each one is very simple in its implementation. This not only helps in understanding the code, but also in robustness and in tracking down any possible errors in a component. Another nice feature of this framework is that you do not need to create a separate class deriving from PostProcessEffect for each effect, so you can dynamically create your effects at run time. Most of the "magic" occurs in the LoadContent() function inside PostProcessManager, so let's have a look at it:

public void LoadContent()
{
    #region Create common textures to be used by the effects
    PresentationParameters pp = mGraphicsDevice.PresentationParameters;
    int width = pp.BackBufferWidth;
    int height = pp.BackBufferHeight;

    SurfaceFormat format = pp.BackBufferFormat;

    resolveTexture = new ResolveTexture2D(mGraphicsDevice, width, height, 1, format);
    #endregion

    int i = 0;
    foreach (PostProcessEffect effect in effects)
    {
        effect.LoadContent();

        int j = 0;
        //if a component requires a backbuffer, add its function to the event handler
        foreach (PostProcessComponent component in effect.Components)
        {
            //if the component updates/modifies the "scene texture",
            //find all the components that need an up-to-date scene texture
            if (component.UpdatesSceneTexture)
            {
                int k = 0;
                foreach (PostProcessEffect e in effects)
                {
                    int l = 0;
                    foreach (PostProcessComponent c in e.Components)
                    {
                        //skip previous components and ourself
                        if (k < i)
                        {
                            continue;
                        }
                        else if (k == i && l <= j)
                        {
                            l++;
                            continue;
                        }
                        else if (c.RequiresSceneTexture)
                        {
                            component.OnUpdateSceneTexture += new UpdateSceneTextureEventHandler(c.UpdateSceneTexture);
                        }

                        l++;
                    }

                    k++;
                }
            }

            //add the component's UpdateBackBuffer method to the event handler
            if (component.RequiresBackbuffer ||
                (effect == effects[0] && component == effect.Components[0]))
                OnBackBufferResolve += new BackBufferResolveEventHandler(component.UpdateBackbuffer);

            if (component.RequiresDepthBuffer)
                OnDepthBufferResolve += new DepthBufferResolveEventHandler(component.UpdateDepthBuffer);

            j++;
        } //components foreach

        i++;
    } //effects foreach

    if (effects.Count > 0)
    {
        //ensure the last component renders to the backbuffer
        effects[effects.Count - 1].IsFinal = true;

        if (OnDepthBufferResolve != null)
        {
            depthBuffer = new BuildZBufferComponent(mContent, mGraphicsDevice);
            depthBuffer.Camera = camera;
            depthBuffer.Models = models;
            depthBuffer.LoadContent();
        }
    }
}

Here we first set up the resolve texture that will resolve the backbuffer every frame. Then we search through the components, looking for ones that update the backbuffer (scene texture). If a component does this, we find all the components after it that need the most recent version of the backbuffer and assign their update functions to the OnUpdateSceneTexture event. Next, we search through all the effects looking for components that need the backbuffer and depth buffer, and attach their update functions to the OnBackBufferResolve and OnDepthBufferResolve events.

Below is what a sample post process effect chain could look like:

PostProcessDiagram

Here we have three post process effects: bloom, motion blur, and depth of field. Each contains its own components that perform operations on the texture given to it. Some components need to blend with the backbuffer, and therefore need to update the backbuffer in turn so that other components are able to have the most recent copy of the backbuffer (components linked with red lines).

Here is an example of how we could setup the above effect chain:

PostProcessEffect bloom = new PostProcessEffect(Content, Game.GraphicsDevice);
bloom.AddComponent(new BrightPassComponent(Content, Game.GraphicsDevice));
bloom.AddComponent(new GaussBlurComponent(Content, Game.GraphicsDevice));
bloom.AddComponent(new BloomCompositeComponent(Content, Game.GraphicsDevice));

PostProcessEffect motionblur = new PostProcessEffect(Content, Game.GraphicsDevice);
motionblur.AddComponent(new MotionVelocityComponent(Content, Game.GraphicsDevice));
motionblur.AddComponent(new MotionBlurHighComponent(Content, Game.GraphicsDevice));

PostProcessEffect depthOfField = new PostProcessEffect(Content, Game.GraphicsDevice);
depthOfField.AddComponent(new DownFilterComponent(Content, Game.GraphicsDevice));
depthOfField.AddComponent(new PoissonBlurComponent(Content, Game.GraphicsDevice));
depthOfField.AddComponent(new DepthOfFieldComponent(Content, Game.GraphicsDevice));

postManager.AddEffect(bloom);
postManager.AddEffect(motionblur);
postManager.AddEffect(depthOfField);

In the sample, I have included 3 post processes: bloom, depth of field, and a distortion process. I've documented the classes and functions pretty well, so those combined with this write-up should be enough to help you understand how everything works. Should you have any questions, just post in the comments.

Homework for the reader:

In this sample, I create a new render target for every component that needs one. This is bad. In the implementation I use for my own projects, I create a render target pool. The render target pool stores and fetches render targets. When a component requests a render target, the pool checks whether one with the specified dimensions and format has already been stored. If it has, it returns it; if it hasn't, it returns a new render target. Now we won't create a new render target for every component. The pool allows us to reuse render targets, which helps save GPU memory and increase performance. So if we have 5 effects with 3 components each, we might only create 5 render targets (depending on how many share similar formats and dimensions) instead of 15.
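As a rough illustration of this homework, here is a hypothetical Python sketch of such a pool, keyed on dimensions and format (all names and the factory callback are mine, not from the sample):

```python
class RenderTargetPool:
    """Reuses released render targets that match a requested spec."""

    def __init__(self, create):
        self._create = create   # factory: (width, height, fmt) -> new target
        self._free = {}         # (width, height, fmt) -> list of free targets

    def acquire(self, width, height, fmt):
        key = (width, height, fmt)
        if self._free.get(key):
            return self._free[key].pop()            # reuse a matching target
        return self._create(width, height, fmt)     # nothing free: make one

    def release(self, target, width, height, fmt):
        self._free.setdefault((width, height, fmt), []).append(target)
```

Components acquire a target before rendering and release it when done; only requests with no matching free target actually allocate.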




Edit (6/6/2008):Added support for cards that do not support the R32F (Single) SurfaceFormat (hopefully :) ). Removed in-progress auto focusing code in depth-of-field effect.

Edit (6/5/2008): Added some new functionality, namely being able to specify the states and color channel modulation of the spritebatch.


Tuesday, May 20, 2008

New XNA site and other happenings

If you didn't know there's a new version of the creators site up, go check it out.

I've been pretty busy lately. This summer I have an internship with a local game developer, Gabriel Entertainment. They have me working with a Wii dev kit to help bring one of their PC games to the Wii. It's pretty cool stuff.

I'm also working on a remake of the Missile Game 3D flash game in XNA in my free time. I'll be posting the post process manager that I'm using for this project once I port it from XNA 1.0 to 2.0. So stay tuned for this. I'll probably include with this motion blur and depth-of-field post processes.

Saturday, May 3, 2008

Camera Animation, Part II

Camera Animation with a cubic spline:





So last time I lied a little bit about our camera path. I showed this picture:


to describe the path that we wanted. But with linear interpolation, what we really ended up with was something like:


But today, I'll introduce cubic spline interpolation, and this will produce a truly smooth curve. Most of the code is the same, the only thing that has changed is that I have added a cubic option to the build function and have changed the animation to be reliant on time.

Building the natural cubic spline is more math intensive than linear interpolation, so to start out here are a few links on cubic splines:

//a point on a cubic spline is affected by every other point,
//so we need to build the cubic equations for each control point
//by looking at the entire curve. This is what calculateCubicSpline does.
Vector3[] looks = new Vector3[mKeyFrames.Count];
Vector3[] positions = new Vector3[mKeyFrames.Count];
Vector3[] ups = new Vector3[mKeyFrames.Count];

for (int i = 0; i < mKeyFrames.Count; i++)
{
    looks[i] = mKeyFrames[i].Look;
    positions[i] = mKeyFrames[i].Position;
    ups[i] = mKeyFrames[i].Up;
}

Cubic[] pos_cubic = calculateCubicSpline(mKeyFrames.Count - 1, positions);
Cubic[] look_cubic = calculateCubicSpline(mKeyFrames.Count - 1, looks);
Cubic[] up_cubic = calculateCubicSpline(mKeyFrames.Count - 1, ups);

for (int i = 0; i < mKeyFrames.Count - 1; i++)
{
    for (int j = 0; j < mPathSteps; j++)
    {
        float k = (float)j / (float)(mPathSteps - 1);

        Vector3 center = pos_cubic[i].GetPointOnSpline(k);
        Vector3 up = up_cubic[i].GetPointOnSpline(k);
        Vector3 look = look_cubic[i].GetPointOnSpline(k);

        Vector3 r = Vector3.Cross(up, look);
        Vector3 u = Vector3.Cross(look, r * -1f);

        Camera cam = new Camera();
        cam.SetLens(mFOV, mAspect, mNearZ, mFarZ);
        cam.Place(center, look, u);

        cam.BuildView();

        mCameraPath.Add(cam);
    }
}
So the first thing we do is build the cubic polynomials for each control point. Because each point influences the shape of the spline, we have to consider every point when building the polynomials. This is very different from linear interpolation: we need information about the whole curve to perform a cubic interpolation, rather than just the current and next control point. You might be thinking, "Well, Vector3 already contains a SmoothStep function that performs cubic interpolation." And you're right, it does. But interpolating two points without knowledge of the whole curve still produces segmented (i.e. not smooth) animation. Once we have the polynomials for each control point, we can perform a cubic interpolation by looking at just the current and next control point, in the same fashion as with linear interpolation.

This is what the calculateCubicSpline function does: it builds polynomials for each control point, successively building off of the last control point's polynomial. Let's look at it:

Vector3[] gamma = new Vector3[n + 1];
Vector3[] delta = new Vector3[n + 1];
Vector3[] D = new Vector3[n + 1];
int i;
/* We need to solve the equation
   (taken from: http://mathworld.wolfram.com/CubicSpline.html)

   [2 1      ] [D[0]]   [3(v[1] - v[0])  ]
   [1 4 1    ] [D[1]]   [3(v[2] - v[0])  ]
   [  1 4 1  ] [ .  ] = [       .        ]
   [   ...   ] [ .  ]   [       .        ]
   [    1 4 1] [ .  ]   [3(v[n] - v[n-2])]
   [      1 2] [D[n]]   [3(v[n] - v[n-1])]

   by converting the matrix to upper triangular.
   The D[i] are the derivatives at the control points.
*/

//this builds the coefficients of the left matrix
gamma[0] = Vector3.One / 2.0f;
for (i = 1; i < n; i++)
{
    gamma[i] = Vector3.One / ((4 * Vector3.One) - gamma[i - 1]);
}
gamma[n] = Vector3.One / ((2 * Vector3.One) - gamma[n - 1]);

delta[0] = 3 * (v[1] - v[0]) * gamma[0];
for (i = 1; i < n; i++)
{
    delta[i] = (3 * (v[i + 1] - v[i - 1]) - delta[i - 1]) * gamma[i];
}
delta[n] = (3 * (v[n] - v[n - 1]) - delta[n - 1]) * gamma[n];

D[n] = delta[n];
for (i = n - 1; i >= 0; i--)
{
    D[i] = delta[i] - gamma[i] * D[i + 1];
}

// now compute the coefficients of the cubics
Cubic[] C = new Cubic[n];
for (i = 0; i < n; i++)
{
    C[i] = new Cubic(v[i], D[i], 3 * (v[i + 1] - v[i]) - 2 * D[i] - D[i + 1],
                     2 * (v[i] - v[i + 1]) + D[i] + D[i + 1]);
}
return C;
Once we find the cubic equation for each control point (key frame), we can use GetPointOnSpline(), which evaluates the cubic equation to calculate the intermediate point along the spline.

public Vector3 GetPointOnSpline(float s)
{
return (((d * s) + c) * s + b) * s + a;
}
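To make the solve concrete, here is a scalar Python sketch of the same algorithm (the real code does this per Vector3 component; the names are mine). Each resulting segment interpolates its two control points, and the second derivative vanishes at the endpoints, which is exactly the "natural" boundary condition of the system above:

```python
def natural_cubic_coeffs(v):
    """Return per-segment cubic coefficients (a, b, c, d) such that
    segment i evaluates to a + b*s + c*s^2 + d*s^3 for s in [0, 1]."""
    n = len(v) - 1
    gamma = [0.0] * (n + 1)
    delta = [0.0] * (n + 1)
    d = [0.0] * (n + 1)   # derivatives D[i] at the control points

    # forward sweep: reduce the tridiagonal matrix to upper triangular
    gamma[0] = 0.5
    for i in range(1, n):
        gamma[i] = 1.0 / (4.0 - gamma[i - 1])
    gamma[n] = 1.0 / (2.0 - gamma[n - 1])

    delta[0] = 3.0 * (v[1] - v[0]) * gamma[0]
    for i in range(1, n):
        delta[i] = (3.0 * (v[i + 1] - v[i - 1]) - delta[i - 1]) * gamma[i]
    delta[n] = (3.0 * (v[n] - v[n - 1]) - delta[n - 1]) * gamma[n]

    # back substitution for the derivatives
    d[n] = delta[n]
    for i in range(n - 1, -1, -1):
        d[i] = delta[i] - gamma[i] * d[i + 1]

    # cubic coefficients per segment, as in the Cubic constructor
    return [(v[i], d[i],
             3.0 * (v[i + 1] - v[i]) - 2.0 * d[i] - d[i + 1],
             2.0 * (v[i] - v[i + 1]) + d[i] + d[i + 1]) for i in range(n)]

def point_on_spline(coeffs, s):
    """Horner evaluation, matching GetPointOnSpline()."""
    a, b, c, d = coeffs
    return ((d * s + c) * s + b) * s + a
```

Because adjacent segments share the derivative D[i], the curve is smooth (C1 continuous, in fact C2) across every key frame, which is what the per-segment SmoothStep approach cannot guarantee.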
And that's it. Now we have a smooth animation of our camera. There are many more ways to calculate the curve, including Hermite curves, B-splines, Bézier curves, Catmull-Rom splines, NURBS (Non-Uniform Rational B-Splines), and others. Microsoft also has a sample on creating a camera path that appears to use Hermite curves. But they hide all the details from you, and it's just more interesting to know the math behind it, isn't it ;) ?