This time it’s not a topic about physics, but about rendering. One important thing that gives the user a feeling for how a body is positioned is shadows. Since shadows are a nature-given “feature” of reality, everyone can at least guess the position of an object from the size and perspective of its shadow. A very small shadow usually tells you that an object is far above the shadowed surface, and the bigger the shadow gets, the closer the object is to the shadow receiver. So in my opinion an application like the NewtonPlayGround, which deals exclusively with dynamic objects, must have shadows. It makes things much easier: without them, it’s often hard to tell where an object currently is.
I already implemented shadows some months ago using shadow volumes. It’s not the best method, so a few days ago I evaluated another approach that uses shadow mapping. Both methods have their strengths and weaknesses, so I’m currently thinking about which of the two to use in the release version (maybe I’ll give the user the possibility to switch between them).
But as this blog is geared towards programmers and not end-users, I decided to do a small write-up on the two techniques, to maybe give coders who have never implemented them a hint on deciding what to use.
Technique 1 : Shadow Volumes
This was the first technique I implemented, and I had already used it in other projects. It’s physically correct because (as the name suggests) it treats shadows as volumes. But generating the shadow volumes involves heavy math. You first need to get the silhouette of an object: its outline as seen from the light source. This also involves generating connectivity information for all the triangles in a mesh, since you need to iterate over the object’s edges to find that outline. Once you have the outline of the object as seen from the light’s point of view, the next step is to extrude those edges away from the light source, ideally towards infinity. After that math is done you use the stencil buffer (a common feature since nVidia’s RivaTNT) to mask out the volume of the shadows and then render the scene a second time.
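The silhouette-and-connectivity step described above can be sketched like this. This is a minimal Python illustration, not code from the playground; all names are mine, and the mesh is assumed to be closed with consistent outward winding:

```python
# Hypothetical sketch of silhouette-edge extraction for shadow volumes.
# `tris` holds index triples into `verts`; the mesh is assumed closed,
# so every edge is shared by exactly two triangles.

def face_normal(verts, tri):
    ax, ay, az = verts[tri[0]]
    bx, by, bz = verts[tri[1]]
    cx, cy, cz = verts[tri[2]]
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    # cross product of the two edge vectors gives the face normal
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def silhouette_edges(verts, tris, light_pos):
    # 1) classify every face as lit (facing the light) or unlit
    lit = []
    for tri in tris:
        nx, ny, nz = face_normal(verts, tri)
        px, py, pz = verts[tri[0]]
        lx, ly, lz = light_pos[0] - px, light_pos[1] - py, light_pos[2] - pz
        lit.append(nx * lx + ny * ly + nz * lz > 0.0)
    # 2) build edge -> adjacent-faces connectivity (this is the extra
    #    per-mesh data shadow volumes need)
    edge_faces = {}
    for fi, (a, b, c) in enumerate(tris):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    # 3) a silhouette edge separates a lit face from an unlit one
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and lit[fs[0]] != lit[fs[1]]]
```

For a closed mesh the returned edges always form closed loops around the object as seen from the light, which is exactly the outline you then extrude.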
The good :
- Sharp and correct shadows. No aliasing, no matter what geometry and what light position.
- Works for all kinds of light. No matter if directional (e.g. sun) or omni-directional.
The bad :
- Heavy on math and therefore CPU-bound. The calculations could be offloaded to the GPU using shaders, but the playground is also aimed at lower-end systems, so that’s not an option.
- Also heavy on the GPU, because the volumes burn a huge amount of stencil fillrate. Especially with AA enabled they cause a big performance hit.
- Objects need to be closed. Non-closed objects cause shadow errors, and there is no easy way to fix this. So either tell the user to import only closed objects, or just disable shadows when a non-closed object is rendered.
- To make them robust (e.g. when the viewer is inside a shadow volume) the volume must be capped at both the front and the back, which costs additional fillrate and calculations.
- Increased memory usage. For each object you need to store additional information, like a list that stores connectivity information.
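To make the extrusion step concrete: “towards infinity” is usually done with homogeneous coordinates, where a w of 0 turns a point into a pure direction. A tiny sketch (my own naming, not the playground’s code):

```python
# Hypothetical sketch: extruding one silhouette edge away from a point
# light. A vertex (x, y, z, 1) pushed to infinity away from light L
# becomes the direction (x - Lx, y - Ly, z - Lz, 0).

def extrude_edge(v0, v1, light_pos):
    """Return the four corners of the shadow-volume quad for one edge."""
    inf0 = tuple(v0[i] - light_pos[i] for i in range(3)) + (0.0,)
    inf1 = tuple(v1[i] - light_pos[i] for i in range(3)) + (0.0,)
    near0 = tuple(v0) + (1.0,)
    near1 = tuple(v1) + (1.0,)
    # quad: the near edge plus its projection to infinity
    return (near0, near1, inf1, inf0)
```

Doing this per silhouette edge every frame, for every light, is where the CPU cost mentioned above comes from.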
Technique 2 : Shadow mapping
I implemented this only recently, but already did a demo on it two years ago. This is a so-called “image-space” shadowing technique, which basically means you don’t need to know anything about the geometry of the scene (so no silhouette determination, no connectivity, no edge extrusion). This makes it very easy and fast to implement: first you render only the depth information of your scene into a separate texture (I do this using an offscreen buffer, because the shadow map is usually bigger than your viewport). Then you render your scene and project this texture onto it from the light’s point of view using projective texturing. You also need to set up some depth-compare functions, and then the hardware does the rest for you.
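The depth comparison the hardware performs can be sketched on the CPU like this. This is only an illustration of the principle under my own naming; in the real renderer the texture lookup and compare happen per fragment on the GPU:

```python
# Minimal CPU sketch of the shadow-map depth test. `depth_map[y][x]`
# holds the nearest depth seen from the light (a square map is assumed);
# (u, v) are the point's projected texture coordinates in [0, 1), and
# `point_depth` its depth from the light. `bias` is the depth bias
# needed to avoid self-shadowing ("shadow acne").

def in_shadow(depth_map, u, v, point_depth, bias=0.005):
    size = len(depth_map)
    if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
        return False  # outside the light's frustum: no shadow information
    x = int(u * size)
    y = int(v * size)
    # something closer to the light was recorded at this texel
    return depth_map[y][x] < point_depth - bias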
The good :
- Very easy to implement.
- No knowledge of the geometry required. So objects don’t need to be closed.
The bad :
- Depending on the size of the depth texture and the position of the light source, it produces heavy artifacts and often a low shadow resolution.
- Big shadow maps are needed (2048 in the above screenshot), which means that copying depth into this texture is heavy on the GPU.
- You need to configure a depth bias to get rid of self-shadowing artifacts. This is very hard (nearly impossible) for a tool like the NewtonPlayGround, where scenes can be any size and contain any number of objects.
- The normal approach only allows directional lights. Omni-directional lights need e.g. a shadow-mapping cubemap.
- Objects leaving (or outside of) the light’s frustum show graphical errors as they reach the edge of the shadow map.
Note : The first point can be mitigated using modern techniques like Light Space Perspective Shadow Maps, but the other drawbacks remain.
So what do I use?
Well, as it looks right now I’ll stay with the shadow volumes. As I already stated in the “bad” section for shadow maps, I can’t determine the size of a scene in advance, since the user has absolute freedom here: he can make a scene that extends 10 units in all directions, but also one that extends 1000 units in all directions. So tweaking the texture size of the depth map, the polygon offset for removing artifacts, and the light setup so that the whole scene falls into the light’s frustum would be very hard to get right.
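A quick back-of-the-envelope calculation shows why scene size matters so much here. With a directional light whose frustum has to cover the whole scene, one shadow-map texel covers scene extent divided by map size in world units (numbers below are illustrative, not measured from the playground):

```python
# Rough sketch: world-space size of one shadow-map texel when the
# light's frustum is stretched to cover the whole scene.

def texel_world_size(scene_extent, map_size):
    return scene_extent / map_size

# A 20-unit scene with a 2048 map gives roughly 0.01 units per texel:
# shadow edges look fine.
small_scene = texel_world_size(20.0, 2048)

# A 2000-unit scene with the same map gives almost 1 unit per texel:
# shadows turn into big blocks, and the depth bias that worked for the
# small scene no longer fits.
large_scene = texel_world_size(2000.0, 2048)
```

Since the user can build either scene, there is no single map size and bias that works for both, which is exactly the problem described above.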
That’s it for today. I hope some of you out there who weren’t sure which shadow technique to use now have a clue what to go for. And if you’d like to discuss my thoughts here, please don’t forget that the NewtonPlayGround is also aimed at lower-end systems. This means I can’t make use of the newest features out there (like using shaders to smooth the shadow maps). Please keep this in mind before posting something like “Why don’t you just use shaders to fix negative point X of technique Y?”.