Monday, March 31, 2008
Since the beginning of the semester I've been researching reflections using Occlusion Camera Depth Impostors. The occlusion camera was developed in the same graphics lab a few years ago, and my task was to apply it to depth impostors. The idea behind depth impostors is that you can render a diffuse object to a buffer and then use the color and depth information to correctly intersect reflected rays from a reflective surface with the depth map. The problem is that we can only see so much from the viewpoint of the reflector, meaning some rays that would intersect the real diffuse object won't intersect our depth map, simply because we don't have enough information.
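To give a rough idea of what intersecting reflected rays with a depth map looks like, here's a minimal HLSL sketch. Every name in it is a placeholder of mine, and the fixed-step march is just a stand-in for whatever search the real implementation uses:

    sampler2D ImpostorColor;    // diffuse object rendered to a buffer
    sampler2D ImpostorDepth;    // its depth map
    float4x4  ImpostorViewProj; // maps world space into the impostor's view

    // March a reflected ray through the impostor and return the color at
    // the first sample that falls behind the stored depth.
    float4 IntersectImpostor(float3 origin, float3 dir)
    {
        const int   STEPS = 32;
        const float MAX_DIST = 10.0;

        for (int i = 1; i <= STEPS; i++)
        {
            float3 p = origin + dir * (MAX_DIST * i / STEPS);

            // Project the sample point into the impostor's image space
            float4 proj = mul(float4(p, 1.0), ImpostorViewProj);
            float2 texC = proj.xy / proj.w * float2(0.5, -0.5) + 0.5;
            float  rayDepth = proj.z / proj.w;

            // Hit: the ray has passed behind the stored surface
            if (rayDepth >= tex2Dlod(ImpostorDepth, float4(texC, 0, 0)).r)
                return tex2Dlod(ImpostorColor, float4(texC, 0, 0));
        }
        return float4(0, 0, 0, 0); // the ray missed the impostor
    }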
What the occlusion camera brings to the table is the ability to store more depth information about the diffuse object. It does this by distorting rays so that we can sample the top and bottom of the diffuse object, areas a regular camera wouldn't capture. Yes, that's kind of vague and hard to visualize without pictures, but those will come in due time. It's a really cool topic of research that I'll discuss in more depth once we submit our paper to the Eurographics Symposium.
But for now I'll leave you with some pictures of early work that I did as a prerequisite for the research. Also, in the coming days (read: after our paper deadline) I will post a tutorial on using billboard impostors for reflections in XNA.
A reflected bunny with correct intersections
3rd order and 2nd order reflections showing the impostor normals
2nd order reflections
Tuesday, March 25, 2008
Volumetric Clouds
Last summer I did some experiments with an unusual way of rendering volumetric clouds: mega particles. The usual way to render volumetric clouds is with a bunch of quads/voxels/slices, but this approach is much simpler and, at least for the cloud's exterior, gives really good results.
The problem came when rendering multiple clouds in a field. Due to the way each cloud is rendered, overlapping clouds produce artifacts, and short of rendering each cloud by itself I couldn't figure out how to eliminate them. Then again, I didn't spend too much time on the problem either.
The general approach is as follows:
- Draw your scene as usual
- Then render a low poly cloud mesh - this could be created in Max/Maya with a group of spheres/ellipsoids in the shape of a cloud
- Copy the backbuffer to a render target, and clear the alpha channel
- Render the clouds with full alpha and your lighting of choice (I only implemented simple Phong shading; a better lighting model would have helped). We'll call this the cloud map.
- Blur the cloud map using a Gaussian filter, or another filter if you like.
- After blurring we need to distort the blurred cloud map. To do this, we place a quad at the center of each cloud and billboard each vertex to the camera (make sure the billboard covers the entire cloud from any angle).
- Now we render this quad to distort the cloud map. In the pixel shader for the quad we use the projected position as the texture coordinates and shift them based on the angles to the X and Y axes. We then sample a two-channel fractal/noise texture with these shifted coordinates to obtain our distortion offset. Next, we sample the blurred cloud map at the texture coordinates distorted by the offset, scaled by the distance from the quad to the camera (a fuller shader sketch follows this list):
- float4 distortedColor = tex2D(BlurredCloudSampler, texC + offset / dist);
- Optionally, after we have distorted our cloud map, we can perform a radial blur for a softer look.
- Finally we merge our render target with the back buffer.
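To make the distortion pass concrete, here's a minimal HLSL sketch of the quad's pixel shader. Apart from BlurredCloudSampler, which appears above, the names are my own placeholders, and the angle-based shift is just one plausible reading of the step described in the list, not the article's exact code:

    sampler2D NoiseSampler;        // two-channel fractal/noise texture
    sampler2D BlurredCloudSampler; // the blurred cloud map
    float2 AngleShift;      // shift derived from the angles to the X and Y axes
    float  DistortionScale; // overall strength of the distortion

    float4 DistortPS(float4 projPos : TEXCOORD0, float dist : TEXCOORD1) : COLOR0
    {
        // Projected position -> screen-space texture coordinates
        float2 texC = projPos.xy / projPos.w * float2(0.5, -0.5) + 0.5;

        // Shift the coordinates by the camera angles and sample the noise
        float2 offset = tex2D(NoiseSampler, texC + AngleShift).rg * 2 - 1;
        offset *= DistortionScale;

        // Distort the blurred cloud map; the offset falls off with distance
        return tex2D(BlurredCloudSampler, texC + offset / dist);
    }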
A really simple algorithm that produces a pretty good result, and as you can see, it is essentially a post-processing technique. However, this approach seems better suited to large volcanic or nuclear plumes (i.e. a single large cloud model built from many mega particles, as seen in the author's images) than to a dense field of many cumulus clouds.
My experiment:
Author's images:
There's an article on this technique in Shader X5, and you can find the slides here. If you're thinking of trying this technique out, you really don't need the book; the slides are more than enough to implement it. The only detail is that you don't actually use a fractal cube as the slides say; you really just use a billboarded quad.
Labels:
C#,
DirectX,
managed,
Post Processing,
Volumetric Clouds
Saturday, March 22, 2008
Fluid Dynamics
Fluid dynamics has always been an interest of mine. I've just never found the time to research any of the techniques. Hopefully sometime this summer I will have time...
Anyways, the work by Ron Fedkiw is just amazing. Take a look at his website to see for yourself.
http://physbam.stanford.edu/~fedkiw/
Thursday, March 20, 2008
More Software Rendering...
Quick update. Just found a video I had made of the software renderer.
Not perfect, but pretty good I'd say.
Labels:
C#,
normal mapping,
parallax mapping,
shadow mapping,
Software Rendering
Terrain Rendering and Atmospheric Scattering
Terrain has always been an interest of mine, ever since I loaded my first 8-bit raw heightmap. And last summer I decided to dive into Atmospheric Scattering after reading Ysaneya's developer journal over on gamedev for quite some time.
So I read ATi's paper, looked at their demo, and set out to work. Now, I'm no math genius (having only taken calculus and linear algebra), and implementing atmospheric scattering is not for the faint of heart. However, after about a month I had a pretty good working implementation based on the Hoffman and Preetham paper.
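For the curious, the core of the Hoffman and Preetham model comes down to two phase functions plus an extinction term. Here's that math as a minimal HLSL sketch; the parameter names are mine, and this is the paper's basic formulation rather than my original shader:

    float3 BetaRayleigh; // Rayleigh scattering coefficients (RGB)
    float3 BetaMie;      // Mie scattering coefficients (RGB)
    float3 SunColor;     // sun irradiance
    float  G;            // Henyey-Greenstein eccentricity, e.g. ~0.75

    // cosTheta: cosine of the angle between the view and sun directions
    // s: distance the view ray travels through the atmosphere
    void ComputeScattering(float cosTheta, float s,
                           out float3 extinction, out float3 inscatter)
    {
        const float PI = 3.14159265;

        // Rayleigh phase: 3/(16*pi) * (1 + cos^2(theta))
        float phaseR = 3.0 / (16.0 * PI) * (1.0 + cosTheta * cosTheta);

        // Mie phase (Henyey-Greenstein approximation)
        float phaseM = (1.0 - G * G) /
            (4.0 * PI * pow(1.0 + G * G - 2.0 * G * cosTheta, 1.5));

        float3 betaSum = BetaRayleigh + BetaMie;
        extinction = exp(-betaSum * s);
        inscatter  = (BetaRayleigh * phaseR + BetaMie * phaseM) / betaSum
                   * SunColor * (1.0 - extinction);
    }

    // Aerial perspective: attenuate the surface color and add inscattering
    float3 ApplyAerialPerspective(float3 surfaceColor, float cosTheta, float s)
    {
        float3 extinction, inscatter;
        ComputeScattering(cosTheta, s, extinction, inscatter);
        return surfaceColor * extinction + inscatter;
    }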
Then I started on the water rendering component. There were two articles in Shader X3/4 that were of great help when it came to getting the color just right.
After a couple of months I had a pretty good looking demo. The terrain wasn't anything special; it was just broken down into a quadtree, so I was only able to render a 2048x2048 terrain. To manage this I set the water at half the height of the terrain and culled the non-visible areas depending on whether the camera was above or below the water line, so I was effectively only rendering half of the terrain at any time.
Anyways, on to the pictures:
Scene details:
- 1024x1024 terrain with multi-texturing and aerial perspective
- Sky dome with sun and skylight scattering
- 2048x2048 water plane (size, not number of vertices)
- Bloom post processing
- Written in C# and Managed DirectX
And a video of it in action:
The water has realistic coastal coloring, soft edges where it intersects the terrain, underwater fogging, and depth fogging.
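I won't try to reconstruct my original water shader here, but the soft edges and coastal coloring essentially fall out of a depth comparison between the water surface and the terrain behind it. A rough HLSL sketch, with every name and constant a placeholder assumption:

    sampler2D TerrainDepthSampler; // scene depth rendered before the water pass
    float3 ShallowColor;  // greenish coastal water
    float3 DeepColor;     // deep blue water
    float  DepthFalloff;  // how quickly shallow water fades to deep

    float4 WaterPS(float2 screenC : TEXCOORD0, float waterDepth : TEXCOORD1) : COLOR0
    {
        // Depth of the terrain behind this water pixel
        float terrainDepth = tex2D(TerrainDepthSampler, screenC).r;

        // How much water the eye ray passes through before hitting terrain
        float thickness = saturate((terrainDepth - waterDepth) * DepthFalloff);

        // Coastal coloring: shallow water picks up the greener tint
        float3 color = lerp(ShallowColor, DeepColor, thickness);

        // Soft edges: alpha fades to zero right where water meets terrain
        float alpha = saturate(thickness * 4.0);

        return float4(color, alpha);
    }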
This summer I plan to rewrite the whole application in C++. I also want to extend the terrain rendering with geomipmapping, fix the huge sun, and add volumetric clouds. I had tried implementing volumetric clouds using mega particles, but it didn't turn out too well in a dense cloud field. It worked really well for volcanic or nuclear plumes though. More on this in a later post.
Labels:
Atmospheric Scattering,
Bloom,
C#,
DirectX,
managed,
Post Processing,
Terrain,
Water
Wednesday, March 19, 2008
Software Rendering
About a year ago, I took a class on software rasterization. It really helped me understand everything that DirectX/OpenGL does under the hood. We started out by rasterizing 2D lines and images and then moved on to a full 3D renderer, all done in software (i.e. on the CPU). It was a pretty cool class. The focus was mainly on implementation rather than performance, so it isn't as fast as other software renderers.
I wrote the rasterizer using C# and the Tao OpenGL framework (used only for sending the pixel information to the graphics card with glDrawPixels()). I posted this on Image of the Day at gamedev: http://www.gamedev.net/community/forums/topic.asp?topic_id=446142
Here's what I had accomplished by the end of the semester:
- Gouraud shading
- Phong shading
- Blinn and Phong specular reflection
- Directional and Point lights
- Perspective correct texture mapping (see the sketch after this list)
- Normal Mapping
- Parallax Mapping
- Projective Texturing
- Shadow Mapping
- Environment Mapping - for skybox and distant reflections
- Distance Fog
- Depth of Field
- Bilinear Filtering
- Camera Interpolation, for movie creation
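Perspective correct texture mapping deserves a quick note, since it's the step that trips most people up: you can't interpolate u and v linearly in screen space; you have to interpolate u/w, v/w, and 1/w, then divide per pixel. The rasterizer does this on the CPU in C#, but here's the math as a small HLSL function, just to stay consistent with the other snippets on this page:

    // b holds the pixel's barycentric weights; uv0-uv2 and w0-w2 are the
    // vertices' texture coordinates and clip-space w values.
    float2 PerspectiveCorrectUV(float3 b,
                                float2 uv0, float2 uv1, float2 uv2,
                                float w0, float w1, float w2)
    {
        // Interpolate u/w and v/w linearly in screen space
        float2 uvOverW = b.x * uv0 / w0 + b.y * uv1 / w1 + b.z * uv2 / w2;

        // Interpolate 1/w the same way
        float invW = b.x / w0 + b.y / w1 + b.z / w2;

        // Divide to recover the perspective-correct coordinate
        return uvOverW / invW;
    }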
perspective correct texture mapping:
depth of field:
distance based fog:
parallax mapping:
shadow mapping with percentage closer filtering:
Labels:
C#,
depth of field,
fog,
normal mapping,
parallax mapping,
PCF,
reflections,
shadow mapping,
Software Rendering
Yes... another blog
Why am I starting this blog, you ask? Well, after reading other graphics-related blogs for some time now, I figured it would be a good place for me to document not only current work but also previous work.
I've been developing 3D demos/applications for a couple of years now. It all started when I took a class on game development for the Sony line of mobile phones. After that I started working with DirectX and haven't stopped since.
My interests lie mainly in terrain and atmospheric effects, but more recently I've become interested in post-processing techniques as well. This semester I began researching reflections with my professor. We're developing a method to produce accurate reflections of objects using non-pinhole camera depth impostors. This will allow reflected rays to intersect the depth map in cases where they couldn't with a regular depth map impostor (more on this to come).
The first few posts will most likely be historical. I'll mainly be posting these previous few projects as a sort of documentation for them.