Viewpoint
Andrew Woo and Pierre Poulin
Volume 35, Issue 6 (Oct/Nov 2012)

Andrew Woo is the co-author of Shadow Algorithms Data Miner, a digital shadow generation resource. He is also the chief technology officer for NGRAIN, provider of interactive 3D simulation software and solutions.

Pierre Poulin is a professor in the Computer Science and Operations Research department of the Université de Montréal, where he teaches and supervises research in computer graphics.

Digital shadow generation is an important aspect of visualization and visual effects in film, games, simulations, engineering, and scientific applications. Shadows produce some of the highest-intensity contrasts in images; they provide strong clues about the shapes, relative positions, and surface characteristics of objects; they can indicate the approximate location, intensity, shape, size, and distribution of the light source; and they are an integral part of the sunlight (or lack thereof) effect in the architecture of many buildings.

In the context of this article, a shadow algorithm describes how to digitally compute the appearance of shadows in a 3D rendered output. Many capabilities need to be supported in shadow computations, and several of them are not yet completely solved. They include:

  • Shadow types, including hard shadows, soft shadows, and semitransparent shadows.
  • Geometry representations other than polygons, including voxels, point clouds, higher-order surfaces, and height fields.
  • Rendering capability support, including complex thin materials (for instance, hair), atmospheric shadows, motion blur, and global illumination.

The sheer volume of resources devoted to shadow algorithms can be daunting for software developers trying to decide which approaches to consider. In fact, more than 700 research papers on shadow algorithms have been published to date. In addition, many rigid views persist, based on personal preference, about which approaches to take, rather than objectively selecting the best approaches for the user’s needs.

The Approaches/Considerations

While no two development projects are exactly alike, there are some common high-level recommendations to consider, as most algorithms fall into one of four approaches: planar shadows (shadows cast onto a few large planar surfaces only), shadow volumes, shadow depth maps, and raytracing. This article describes each approach and then provides the top three considerations for determining which algorithms are best suited to particular situations.

Planar shadows: Because this category computes only shadows that fall on a few large planar surfaces, such as a floor or walls (and nothing else), the object’s vertices can be projected from the light onto these surfaces. The projected vertices form shadow polygons that are rendered in black or an attenuated surface color to indicate shadow (the left diagram in Figure 1). If the planar-receiver assumption is good enough, this is usually the approach of choice.
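To make the projection concrete, here is a minimal sketch in Python of projecting a vertex away from a point light onto a ground plane at y = 0; the plane choice, the coordinate convention, and all names are illustrative assumptions rather than anything prescribed above.

```python
# A minimal sketch of planar shadow projection onto the ground plane y = 0,
# assuming a point light and triangles given as vertex lists (these names
# and the plane choice are illustrative, not from the article).

def project_onto_ground(light, vertex):
    """Project 'vertex' away from 'light' onto the plane y = 0."""
    lx, ly, lz = light
    vx, vy, vz = vertex
    # Parameter t along the ray light -> vertex at which y reaches 0
    # (assumes the light is above the vertex, i.e. ly > vy >= 0).
    t = ly / (ly - vy)
    return (lx + t * (vx - lx), 0.0, lz + t * (vz - lz))

# Example: one shadow polygon for one triangle.
light = (2.0, 5.0, 1.0)
triangle = [(0.0, 1.0, 0.0), (1.0, 1.5, 0.0), (0.5, 1.0, 1.0)]
shadow_polygon = [project_onto_ground(light, v) for v in triangle]
print(shadow_polygon)  # rendered as black or attenuated surface color
```

In practice this projection is often expressed as a 4×4 matrix so that whole meshes can be projected onto the receiving plane in one transform.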

Shadow volumes: Shadow polygons are projected from the original polygons in the direction away from the light and then placed into the rendering data structure as invisible polygons. To compute shadow determination, an initial shadow count is calculated by counting the number of shadow volumes that contain the viewing position (C in Figure 1). The shadow count is then incremented by one whenever a front-facing shadow polygon (that is, one entering a shadow umbra) crosses in front of the nearest visible surface, and decremented by one whenever a back-facing shadow polygon (that is, one exiting a shadow umbra) crosses in front of the nearest visible surface. If the final shadow count is zero, the visible surface does not lie in shadow; if it is positive, the surface is in shadow. In the right diagram of Figure 1, the initial shadow count is one, and it is decremented and incremented to zero, one, and two by the time it reaches the shading point (P).
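The counting logic itself is small. The sketch below reproduces the example traced in the text; the input representation (an explicit list of crossings) is purely illustrative, since real renderers typically do this bookkeeping per pixel in the stencil buffer.

```python
# A minimal sketch of the shadow-count bookkeeping described above, assuming
# we already know (a) how many shadow volumes contain the viewing position C
# and (b) the facing of each shadow polygon crossed between C and the nearest
# visible surface point P (this input representation is illustrative only).

def in_shadow_via_count(initial_count, crossings):
    """crossings: list of 'front'/'back' facings, ordered from C toward P."""
    count = initial_count
    for facing in crossings:
        if facing == "front":   # entering a shadow umbra
            count += 1
        else:                   # 'back': exiting a shadow umbra
            count -= 1
    return count > 0            # zero means P is lit

# Example matching the text: initial count 1, then counts 0, 1, 2.
print(in_shadow_via_count(1, ["back", "front", "front"]))  # True: P is in shadow
```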

Shadow depth map: A Z-buffer approach is used to determine visibility and depth with respect to the camera, and the same process is repeated for the light source. A depth buffer is created from the viewpoint of the light source (L), containing the smallest Z-depth values (Zn). During rendering of the camera view, each point (P) to be shaded is projected toward the light and lands in a shadow depth-map pixel. If the distance ∥P − L∥ is larger than the Z-depth value Zn stored at that pixel (as seen in the left diagram in Figure 2), then P lies in shadow; otherwise, it is fully lit.
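A minimal sketch of that depth comparison follows, assuming the light-view buffer is a plain 2D array of nearest distances Zn and that a caller-supplied project_to_light_pixel function (a hypothetical helper) maps a world-space point into that buffer.

```python
# A minimal sketch of the shadow depth-map test; the buffer layout, the
# projection helper, and the bias value are illustrative assumptions.
import math

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def in_shadow_depth_map(P, L, depth_map, project_to_light_pixel, bias=1e-3):
    i, j = project_to_light_pixel(P)   # pixel of the light-view buffer P falls into
    Zn = depth_map[i][j]               # nearest occluder distance stored for that pixel
    return distance(P, L) > Zn + bias  # farther than the stored depth => in shadow

# Tiny usage with a trivial stand-in "projection" for illustration only.
L = (0.0, 10.0, 0.0)
depth_map = [[4.0, 9.0], [9.0, 9.0]]   # pretend 2x2 light-view buffer of Zn values
proj = lambda P: (0, 0)                # every point lands in pixel (0, 0) here
print(in_shadow_depth_map((0.0, 1.0, 0.0), L, depth_map, proj))  # True: 9 > 4
```

The small bias term is a common practical addition to avoid self-shadowing artifacts ("shadow acne") caused by limited depth precision.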

Raytracing: A shadow ray is shot from the point to be shaded (P) toward the light source (L). If the shadow ray intersects any object between P and L, then P lies in shadow; otherwise, it is fully lit. In the right diagram of Figure 2, P is in shadow because the shadow ray hits the occluder.
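The shadow-ray test is easy to sketch. The Python below checks a segment from P toward L against a list of sphere occluders; the primitive type and names are illustrative assumptions, since a real renderer would reuse whatever intersection code and acceleration structures it already has.

```python
# A minimal sketch of a raytraced shadow query against sphere occluders.
import math

def shadow_ray_blocked(P, L, spheres, eps=1e-4):
    """Return True if any sphere intersects the segment from P toward L."""
    dx, dy, dz = (L[0] - P[0], L[1] - P[1], L[2] - P[2])
    dist_to_light = math.sqrt(dx * dx + dy * dy + dz * dz)
    d = (dx / dist_to_light, dy / dist_to_light, dz / dist_to_light)
    for center, radius in spheres:
        oc = (P[0] - center[0], P[1] - center[1], P[2] - center[2])
        # Quadratic t^2 + b*t + c = 0 for a unit-length ray direction.
        b = 2.0 * (oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2])
        c = oc[0] ** 2 + oc[1] ** 2 + oc[2] ** 2 - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue
        t = (-b - math.sqrt(disc)) / 2.0   # nearest intersection along the ray
        if eps < t < dist_to_light:        # occluder strictly between P and L
            return True
    return False

# Example: a unit sphere sitting between the shading point and the light.
print(shadow_ray_blocked((0, 0, 0), (0, 10, 0), [((0, 5, 0), 1.0)]))  # True: P in shadow
```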

Consideration 1: Implications of the Capabilities

It is critical for the software developer to understand what capabilities are needed from the shadow algorithms, because a given approach may come with limitations. Figure 3 contains a simple table matching capabilities to the key approaches. For example, if motion blur is required, as is often the case for special effects in films, then shadow volumes should not be a first choice; there has been no attempt to create motion blur for shadows using shadow volumes.

Consideration 2: Code Maintenance Benefits

In some cases, reusing the code for visibility determination in shadow computations is attractive because it not only reduces the initial coding effort but also decreases future code maintenance. This may be an important consideration for visibility approaches such as raytracing (the basis for shadow raytracing) and the Z-buffer (the basis for shadow depth maps).

Similarly, it’s important to determine whether there are opportunities for reduced or shared code maintenance when combining different capabilities (as listed above) of shadows. Using the same approach for the different capabilities means that there is some consistency in behaviors. For example, if hard shadows and soft shadows are both needed, it makes more sense to consider shadow depth-map or raytracing algorithms because there is much shared code between the support of hard and soft shadows.

Consideration 3: Implicit Key Requirements

After the above two criteria have been taken into account, a simple decision tree, such as the one outlined in Figure 4, can be used to determine the top-level approach. The figure demonstrates how polygon/other geometry support and performance requirements influence the overall approach.
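Since Figure 4 is not reproduced here, the following Python sketch is only a hypothetical reconstruction of such a decision tree, assembled from the considerations and trends discussed in this article; the branch order and criteria are assumptions, not the figure's actual content.

```python
# A hypothetical decision sketch in the spirit of Figure 4; the inputs and
# branches are assumptions drawn from this article's own discussion.

def choose_shadow_approach(receivers_are_few_planes, real_time,
                           polygons_only, small_dataset, needs_motion_blur):
    if receivers_are_few_planes:
        return "planar shadows"
    if real_time:
        if polygons_only and small_dataset and not needs_motion_blur:
            return "shadow volumes"
        return "shadow depth maps (cascaded for large scenes)"
    # Offline rendering: depth maps for speed, raytracing when they struggle
    # or when the scene is highly complex.
    return "shadow depth maps, with raytracing as the alternative/fallback"

print(choose_shadow_approach(False, True, True, True, False))   # shadow volumes
print(choose_shadow_approach(False, False, True, False, True))  # offline choice
```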

Trends

Having examined the above three considerations at a high level, we can go a bit deeper into the algorithms in the context of entertainment-industry trends.

For real-time performance, shadow-volume and shadow depth-map algorithms are the most common approaches. GPU-based shadow volumes are a good fit if only (usually well-formed) polygons and small datasets are involved. Otherwise, GPU-based shadow depth maps tend to be used. When dealing with large scenes (an entire world), cascaded shadow depth maps (and, to a lesser extent, perspective shadow depth maps) are the norm. Shadow effects for static parts of the scene are also often baked, which can employ offline rendering.

For offline rendering, shadow volumes are almost never offered. The main contenders are shadow depth maps and raytracing. Standard shadow depth maps are favored for performance (they are faster than raytracing). However, raytracing performance has gained momentum and can even prove faster for highly complex scenes; moreover, it remains available when shadow depth maps become troublesome or fail.

The image set illustrates an example of semitransparent shadows due to a semitransparent object (left) and soft shadows due to motion blur (right).

Never that Easy…

This article gives a short summary of three critical considerations for choosing a shadow algorithm/approach. However, this is just the tip of the iceberg. It is also important to evaluate other factors, such as:

  • Real-time versus offline rendering needs.
  • The deployed platforms’ constraints (for instance, tablets are constrained in GPU capabilities, performance, and memory).
  • Whether the chosen algorithms have been patented.
  • The limitations and adoption trends of the specific algorithms that fall within the above high-level approaches (for instance, cascaded or perspective shadow depth maps under the high-level approach of shadow depth maps).

With a solid understanding of the above areas, developers working in the entertainment field and other industries can determine which approach will yield the most efficient process and the best results.