Update: the source of the previous demo as well as the new demo will be released over at the three.js forum soon. You can follow the discussion here. Happy deferred lighting!
It took a while, but I finally found the time to play around with WebGL and three.js in particular. I was deeply impressed by Florian Bösch’s WebGL demo showing an implementation of deferred irradiance volumes, so I decided to write my own deferred renderer for three.js. You can check out the live demo in the demo section at redPlant.
It currently supports point light sources as well as deferred shadow maps. Spot lights, point light shadows, and other features are not supported yet; there’s still a lot to do.
The pipeline works roughly as follows:
- In a first pass, store depth as post-projection z/w in a floating point render target.
- In a second pass, store view-space normals in a render target.
- In a third pass, render a shadow map for a simple directional light source. This is just plain old shadow mapping.
- Next, render a proxy sphere geometry for each point light in the scene. Inside the fragment shader for this sphere, sample the depth buffer and reconstruct the pixel’s view-space position by unprojecting z/w and multiplying by the inverse projection matrix.
- Figure out the light’s position in view space and calculate its attenuation with respect to the pixel’s view-space position. I’m using the attenuation formula from this guy.
- Write the result into the framebuffer. Repeat this step for each light source, accumulating every light’s contribution.
- In the last pass, sample the light buffer using the UVs of a fullscreen quad.
- The pixel’s view-space position is again reconstructed as described above.
- The occlusion from the shadow map is determined by projecting the reconstructed view-space position into light space: multiplying the view-space position by the inverse view matrix yields the world position, and multiplying that by the light’s viewProjectionMatrix yields the light clip-space position.
- The depth of the projected position is compared with the corresponding pixel from the shadow map to determine the pixel’s occlusion. Standard deferred shadow mapping, so to say.
- In a last step, compute the directional light’s contribution from its position and the view-space normal in the second render target.
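The core per-pixel math from the steps above can be sketched in plain JavaScript. This is only an illustration of the reconstruction and projection steps, not the renderer’s actual shader code; the function names and the constant/linear/quadratic attenuation formula are my own placeholders (the post links to a different attenuation formula), and matrices are assumed column-major as in three.js and OpenGL.

```javascript
// Multiply a column vector [x, y, z, w] by a 4x4 matrix stored column-major
// (the layout three.js and OpenGL use).
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    out[row] = m[row] * v[0] + m[4 + row] * v[1] +
               m[8 + row] * v[2] + m[12 + row] * v[3];
  }
  return out;
}

// Reconstruct a pixel's view-space position from its stored depth (z/w),
// its screen UV, and the inverse projection matrix: build the clip-space
// position, unproject, then divide by w.
function reconstructViewPosition(uv, depth, invProjection) {
  const clip = [uv[0] * 2 - 1, uv[1] * 2 - 1, depth * 2 - 1, 1];
  const v = transform(invProjection, clip);
  return [v[0] / v[3], v[1] / v[3], v[2] / v[3]];
}

// Placeholder distance-based attenuation with constant/linear/quadratic
// terms; the renderer uses a different formula (see the link above).
function attenuate(lightPos, pixelPos, kc, kl, kq) {
  const dx = lightPos[0] - pixelPos[0];
  const dy = lightPos[1] - pixelPos[1];
  const dz = lightPos[2] - pixelPos[2];
  const d = Math.sqrt(dx * dx + dy * dy + dz * dz);
  return 1 / (kc + kl * d + kq * d * d);
}

// Project a view-space position into light clip space for the shadow
// lookup: the inverse view matrix takes it to world space, then the
// light's view-projection matrix takes it to light clip space.
function toLightClipSpace(viewPos, invView, lightViewProjection) {
  const world = transform(invView, [viewPos[0], viewPos[1], viewPos[2], 1]);
  const clip = transform(lightViewProjection, world);
  return [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
}
```

In a real shader these would run per fragment; the depth value would come from the first pass’s render target, and the resulting light clip-space z would be compared against the shadow map sample.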
The performance is not necessarily jaw-dropping, but it can handle 1,500 point lights on a GTX 560. The demo renders around 440 point lights plus one directional light.
I’m sure I will release the source some day, but first I need to clean up a couple of things. There is still a lot of room for improvement.
In this context, I would also like to point out another demo, which shows how WebGL can be used for realtime product configuration and visualization. It is a realtime configurator for furniture that lets the user freely switch between different materials for pillows, seats and other components.
That’s it for now.
Edit: I reduced the number of lights in the demo, as 440 seemed a bit too heavy for some mobile cards.