
Thursday, April 17, 2008

Reflections with Billboard Impostors

Last time I mentioned that I would post a tutorial on generating reflections using billboard impostors, so here it is. Amidst the craziness of school right now, I've found time (or rather, worked on this since it's more fun) to write up this tutorial. So let's get started.

Some prerequisites for this tutorial:

  • Shader Model 3.0 compliant graphics card
  • Experience with RenderTargets
  • Experience with Custom Content Processors (not too important, I just won't go over these specifically)

Anyways, Shawn Hargreaves posted about using environment maps for reflections a few weeks ago. We're going to take it one step further and use impostors in addition to the environment map. The main problem with environment-mapped reflections is that everything is assumed to be infinitely far away, so objects appear distorted and don't intersect with geometry correctly. Billboarded impostors alleviate some of this by taking the reflected rays and intersecting them with billboards of the scene geometry.

Here are a couple of images to demonstrate the limitations of environment mapping.

Environment mapping:


As you can see, the sphere looks like it's floating above the quad when it is actually intersecting it, and the dragon appears much farther away in the reflection than it really is.


Impostors:


With the impostors, the reflection of the dragon is much more accurate, and the sphere now correctly intersects the floor quad.

The details:
First we want to render each diffuse object from the reflector's point of view. We get the bounding box corners of the impostor geometry and set up the camera so that it is positioned at the center of the reflector and looking at the center of the diffuse geometry.


// Get the 8 corners of the bounding box and transform by the world matrix
Vector3[] corners = new Vector3[8];
Vector3.Transform(mBoundBox.GetCorners(), ref mWorld, corners);

// Get the transformed center of the mesh
// alternatively, we could use mBoundingSphere.Center as sometimes it works better
Vector3 meshCenter = Center;

// The quad is a special case. Since it's so much bigger than the reflector,
// we have to make sure that our camera is looking straight at the quad and
// not its center, or we'll get a distorted view of the quad.
if (mMeshType == MeshType.Quad)
{
    meshCenter.X = reflectorCenter.X;
    meshCenter.Z = reflectorCenter.Z;
}

// Construct a camera to render our impostor
Camera impostorCam = new Camera(camera);

// Set the camera to be at the center of the reflector and to look at the diffuse mesh
impostorCam.LookAt(reflectorCenter, meshCenter);
impostorCam.BuildView();
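
The Camera class used above isn't included in the post. Here's a minimal sketch of one way it could look, based purely on how it's used in the snippets (Position, Look, Projection, ViewProj, LookAt, BuildView); the demo's actual class will certainly differ. One thing to watch: the reflection shader later expects the impostor quad's normal to face back toward the reflector, so make sure your Look convention matches how mNormal is set.

using Microsoft.Xna.Framework;

// Hypothetical, minimal XNA-style camera exposing only the members used
// in the snippets. This is a sketch, not the demo's actual class.
public class Camera
{
    public Vector3 Position;
    public Vector3 Target;
    public Vector3 Look;        // camera forward direction

    public Matrix View;
    public Matrix Projection;

    public Matrix ViewProj { get { return View * Projection; } }

    public Camera(Camera other)
    {
        // start from the scene camera's projection; the impostor code
        // replaces it with an orthographic projection later anyway
        Projection = other.Projection;
    }

    public void LookAt(Vector3 position, Vector3 target)
    {
        Position = position;
        Target = target;
        Look = Vector3.Normalize(target - position);
    }

    public void BuildView()
    {
        // note: a real implementation should pick a different up vector
        // when Look is (nearly) parallel to Vector3.Up, e.g. when the
        // reflector looks straight down at the floor quad
        View = Matrix.CreateLookAt(Position, Target, Vector3.Up);
    }
}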



Next we want to project these corners to screen space and find the bounding box that fits around these projected corners:

// Now we project the vertices to screen space, so we can find the AABB of
// the screen space vertices. The corners were already transformed to world
// space above, so we pass Matrix.Identity as the world matrix here.
Vector3[] screenVerts = new Vector3[8];
for (int i = 0; i < 8; i++)
{
    screenVerts[i] = mGraphicsDevice.Viewport.Project(corners[i],
        impostorCam.Projection, impostorCam.View, Matrix.Identity);
}

// compute the screen space AABB
Vector3 min, max;
ComputeBoundingBoxFromPoints(screenVerts, out min, out max);

// construct the quad that will represent our diffuse mesh
Vector3[] screenQuadVerts = new Vector3[4];
screenQuadVerts[0] = new Vector3(min.X, min.Y, min.Z);
screenQuadVerts[1] = new Vector3(max.X, min.Y, min.Z);
screenQuadVerts[2] = new Vector3(max.X, max.Y, min.Z);
screenQuadVerts[3] = new Vector3(min.X, max.Y, min.Z);
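
ComputeBoundingBoxFromPoints isn't shown in the post; it just takes the component-wise min and max of the projected points. A quick sketch of such a helper (my own, assuming that's all it does):

// Hypothetical helper: finds the axis-aligned min/max of a set of points.
private static void ComputeBoundingBoxFromPoints(Vector3[] points, out Vector3 min, out Vector3 max)
{
    min = points[0];
    max = points[0];

    for (int i = 1; i < points.Length; i++)
    {
        min = Vector3.Min(min, points[i]);
        max = Vector3.Max(max, points[i]);
    }
}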


Now we want to unproject these screen space vertices into world space so that we can form a 3D impostor quad for our geometry. We will use this quad later in the reflection shader when rendering the reflective object.

We also render our diffuse object to a RenderTarget that will be the texture for our impostor quad. To do this, we set up an orthographic projection so that we don't have any of the perspective distortion that comes with a regular perspective projection matrix. We clear the render target with zero alpha and render the impostor with full alpha, so that we capture only the diffuse object and nothing else.


// now unproject the screen space quad into world space and save the
// vertices for when we render the impostor quad (again with Matrix.Identity
// as the world matrix, since we want world-space positions back)
for (int i = 0; i < 4; i++)
{
    mImpostorVerts[i] = mGraphicsDevice.Viewport.Unproject(screenQuadVerts[i],
        impostorCam.Projection, impostorCam.View, Matrix.Identity);
}

// compute the center of the impostor quad
mImpostorCenter = (mImpostorVerts[0] + mImpostorVerts[1] +
                   mImpostorVerts[2] + mImpostorVerts[3]) * 0.25f;

// calculate the width and height of the impostor quad, padded by 20%
float width = (mImpostorVerts[1] - mImpostorVerts[0]).Length() * 1.2f;
float height = (mImpostorVerts[3] - mImpostorVerts[0]).Length() * 1.2f;

// We construct an Orthographic projection to get rid of the projection distortion
// which we don't want for our impostor texture
impostorCam.Projection = Matrix.CreateOrthographic(width, height, .1f, 100);
impostorCam.BuildView();

// save the view-projection matrix so we can use it as the quad's WVP matrix
// in the shader (the impostor vertices are already in world space)
mWorldViewProj = impostorCam.ViewProj;

mGraphicsDevice.SetRenderTarget(0, mRenderTarget);
mGraphicsDevice.Clear(ClearOptions.Target, new Color(1, 0, 0, 0), 1.0f, 1);

Draw(impostorCam);

mGraphicsDevice.SetRenderTarget(0, null);


// finally, compute the normal for the impostor quad, and push the vertices
// back to the center of the diffuse mesh
Vector3 trans = meshCenter - mImpostorCenter;
for (int i = 0; i < 4; ++i)
{
    mImpostorVerts[i] += trans;
}

mNormal = impostorCam.Look;
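
The Draw(impostorCam) call above isn't shown in the post. The important bit is the render state: the render target was cleared with zero alpha, and the object is drawn with full alpha, so the texture's alpha channel ends up being a mask for the diffuse object. Here's a rough sketch of what it might look like; mEffect, its parameter names, and DrawMeshGeometry() are my own placeholders, not the demo's actual code.

// Rough sketch of rendering the impostor into the render target.
private void Draw(Camera cam)
{
    // write the alpha output straight through, no blending
    mGraphicsDevice.RenderState.AlphaBlendEnable = false;
    mGraphicsDevice.RenderState.DepthBufferEnable = true;

    mEffect.Parameters["World"].SetValue(mWorld);
    mEffect.Parameters["View"].SetValue(cam.View);
    mEffect.Parameters["Projection"].SetValue(cam.Projection);

    mEffect.Begin();
    foreach (EffectPass pass in mEffect.CurrentTechnique.Passes)
    {
        pass.Begin();
        // draw the mesh; the pixel shader should output alpha = 1 for the object
        DrawMeshGeometry();
        pass.End();
    }
    mEffect.End();
}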


Here's our resulting impostor image:


Now that we have constructed our impostor quad, it's time to render our reflective object. Here's the pixel shader code that we will use to do this:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    input.NormalW = normalize(input.NormalW);

    float3 fromEyeW = normalize(input.PosW - EyePos);

    // find the reflected ray by reflecting the fromEyeW vector across the normal
    float3 reflectedRay = reflect(fromEyeW, input.NormalW);

    float4 finalColor = texCUBE(EnvTex, reflectedRay);

    if(UseImpostors)
    {
        for(int i = 0; i < 2; i++)
        {
            // take the dot product of the negated reflected ray and the impostor
            // quad's normal to see how orthogonal the ray is to the quad:
            // if a = 0, the ray is parallel to the quad
            // if a = 1, the ray is orthogonal to the quad (aligned with its normal)
            float a = dot(-reflectedRay, Impostors[i].Normal);

            // skip the quad if the ray is nearly parallel to it (or facing away)
            if(a > 0.001f)
            {
                // construct the vector from the pixel position to a point on the quad
                float3 vec = Impostors[i].Vertex - input.PosW;

                // the signed distance from the pixel position to the quad's plane is
                // given by the negative dot product of the quad normal and vec
                float b = -dot(vec, Impostors[i].Normal);

                // divide b by a to get the distance along the reflected ray
                // from the pixel position to the quad
                float r = b / a;

                if(r >= 0)
                {
                    // Using the ray equation P(t) = S + tV, where S is the origin,
                    // V is the direction, and t is the distance along V:
                    // we find the intersection by starting at the origin (PosW) and
                    // walking along reflectedRay by r, the distance to the quad
                    float3 intersection = input.PosW + r * reflectedRay;

                    float2 texC;

                    // project the intersection point with the WVP matrix used to render the quad
                    float4 projIntersect = mul(float4(intersection, 1.0), Impostors[i].WVP);

                    // perform the perspective divide and map from NDC [-1, 1]
                    // to texture coordinates [0, 1]
                    texC = projIntersect.xy / projIntersect.w * .5 + .5;

                    // make sure the intersection is inside the bounds of the image [0, 1]
                    if((texC.x <= 1 && texC.y <= 1) &&
                       (texC.x >= 0 && texC.y >= 0))
                    {
                        float4 color = tex2D(ImpostorSamplers[i], float2(texC.x, 1 - texC.y));

                        // blend based on the alpha of the sampled color
                        finalColor = lerp(finalColor, color, color.a);
                    }
                }
            }
        }
    }

    finalColor.rgb *= MaterialColor;
    return finalColor;
}
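
As a side note (my addition, not from the original post), the a, b, and r values computed above are just a standard ray-plane intersection written out by hand. With pixel position P, reflected direction R, any vertex V on the impostor quad, and quad normal N:

\[
N \cdot (P + tR - V) = 0
\quad\Longrightarrow\quad
t = \frac{N \cdot (V - P)}{N \cdot R}
  = \frac{-N \cdot (V - P)}{(-R) \cdot N}
  = \frac{b}{a}
\]

which is exactly b = -dot(vec, Normal) divided by a = dot(-reflectedRay, Normal) from the shader.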



So, we iterate over each impostor in the Impostors array. We find the reflected ray as with environment mapping, and we use this ray to see if it intersects any of the impostor quads in our scene. If we find an intersection, we get the color from the impostor texture and blend with the environment map color based on the alpha component of the color from the impostor texture (the alpha will be zero for any part of the texture that isn't part of the diffuse object).
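
On the application side, the shader parameters above (EyePos, UseImpostors, EnvTex, MaterialColor, and the Impostors array) need to be filled in with the data we computed while building the impostors. Here's a rough sketch of what that per-frame setup could look like; mReflectEffect, impostors, and their member names are my own placeholders, not the demo's code.

// Hypothetical per-frame parameter setup for the reflection effect.
mReflectEffect.Parameters["EyePos"].SetValue(camera.Position);
mReflectEffect.Parameters["UseImpostors"].SetValue(true);
mReflectEffect.Parameters["EnvTex"].SetValue(mEnvironmentMap);
mReflectEffect.Parameters["MaterialColor"].SetValue(new Vector3(1, 1, 1));

for (int i = 0; i < 2; i++)
{
    EffectParameter p = mReflectEffect.Parameters["Impostors"].Elements[i];
    p.StructureMembers["Normal"].SetValue(impostors[i].Normal);
    p.StructureMembers["Vertex"].SetValue(impostors[i].Verts[0]);   // any vertex on the quad
    p.StructureMembers["WVP"].SetValue(impostors[i].WorldViewProj);

    // the impostor render target's texture also has to be bound to
    // ImpostorSamplers[i], however the effect declares its textures
}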

That's pretty much all there is to it. Not too complicated, and still fast: the demo runs at ~400 FPS with 16x MSAA at 1024x768 on my 8800GT.

Limitations:
One of the problems with billboard impostors is that there is no motion parallax, and correct intersections with complex objects aren't possible. This is where depth impostors and non-pinhole impostors come in. Depth impostors allow correct intersections and motion parallax, and non-pinhole impostors build on depth impostors by letting you see almost the entire diffuse object from one viewpoint (whereas depth impostors suffer from occlusion errors because they can only capture the front faces of objects).

3 comments:

Dev said...

Hello Kyle, I have a question for you ! :)


"First we want to render each diffuse object from the reflector's point of view. So we first get the bounding box corners of the impostor geometry and setup the camera to be positioned at the center of the reflector, and looking at the center of the diffuse geometry"


Ques 1: How would the imposter rendering handle a very tall cylindrical mirror?

In the example you have a squashed reflecting sphere, the (approximate) center of which is easy to figure out. However, for a mirrored cylinder we would likely have to place the camera somewhere on the major axis and then "look at" the diffuse geometry. The problem is 'where' do we actually place the camera. In your scene, if I place the camera at the base of the mirror cylinder (at the floor) and the dragon is also sat on the floor, things work out. In this case the reflection of the floor and the dragon are consistent. However, the moment I displace the dragon upwards along the major axis of the cylinder the reflections start getting inconsistent.

Do you have any recommendations on how to deal with generic reflectors ?

I'll try and upload an image if that provides some clarity.

Thanks for your time!

Kyle Hayward said...

Ah, yes this method does have a problem with very large reflecting surfaces.

Have you tried placing the camera at the center of the cylinder? What results do you get with that? I'd imagine if you have a very large perfect reflecting surface you would have to break it up into separate models (along the largest axis).

Dev said...

Thanks for your reply. Indeed that's what I ended up doing after I wrote to you. Breaking up the cylinder into about 5 'slices' seems to work. There are of course a few distortions, especially at the slice seams where there are some abrupt changes in reflection. However, the cylinder's surface is not exactly a perfect mirror in my case; it's glass. Such low frequency distortions are acceptable in my case.

So yeah, it worked! Cheers! :)