The Rendering Pipeline
 
In the pre-processing phase, the RayCaster:

  1. Takes face, patch, and polygon statements from the SDL file.
  2. Transforms the control vertices of each object according to the SDL hierarchy of transformations.
  3. Converts the geometry into triangles. The number of triangles depends on the tessellation control settings. Each triangle vertex stores surface U and V parameters, tangent vectors, the surface normal, and a position in world space.
  4. Applies the perspective viewing transformation, which includes the camera location, viewpoint, field of view, and twist. This step converts the triangles from world space to perspective (screen) space.
  5. "Scans out" the triangles to the triangle buffer, identifying each pixel inside the two-dimensional boundary of each screen-space triangle.
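
The perspective step above can be sketched as a small function. This is a deliberately simplified model, assuming the camera at `eye` looks straight down the negative Z axis with no twist; the function and parameter names are illustrative, not the RayCaster's actual API.

```python
import math

def perspective_transform(vertex, eye, fov_deg, image_size):
    """Project a world-space point into screen space (simplified:
    the camera at `eye` looks down the -Z axis, with no twist)."""
    x, y, z = (vertex[i] - eye[i] for i in range(3))
    # Focal length derived from the field of view.
    focal = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    depth = -z  # points in front of the camera have negative z
    # Perspective divide, then map [-1, 1] to pixel coordinates.
    sx = (focal * x / depth + 1.0) * 0.5 * image_size[0]
    sy = (1.0 - (focal * y / depth + 1.0) * 0.5) * image_size[1]
    return sx, sy, depth
```

A point one unit straight ahead of the camera projects to the image centre: `perspective_transform((0.0, 0.0, -1.0), (0.0, 0.0, 0.0), 90.0, (640, 480))` gives `(320.0, 240.0, 1.0)`.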

In the rendering phase, the RayCaster loops through each pixel of the new image and calculates the light reflected toward the camera by each surface:

  1. U and V parameter values, tangent vectors, the surface normal, and the position of each intersection point with a triangle, interpolated from the information stored on the intersected triangle in the triangle buffer.
  2. Color and transparency values for each intersected triangle, based on the geometry at the intersection point, the lights in the scene, and the surface shading model.
  3. The final color (RGB) and transparency (alpha) of the pixel, based on the colors and transparencies of the intersected triangle(s).
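
The interpolation of vertex attributes at an intersection point can be illustrated with barycentric weights: each attribute is a weighted average of the three vertex values. This is a minimal sketch, not the renderer's actual code, and the names are assumptions.

```python
def interpolate_attributes(bary, vertex_values):
    """Interpolate per-vertex attribute tuples (for example (u, v))
    at a point inside a triangle, given its barycentric weights."""
    w0, w1, w2 = bary
    return tuple(
        w0 * a + w1 * b + w2 * c
        for a, b, c in zip(*vertex_values)
    )

# At the triangle's centroid (equal weights), the UV parameters are
# the average of the three vertices' values.
uv = interpolate_attributes((1/3, 1/3, 1/3), [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```

The same weighted average applies to tangent vectors, the normal, and the world-space position, channel by channel.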

The final step, combining all the intersected triangles into a single pixel color, is complex.

If the ray intersects more than one triangle, the list of intersections is sorted by distance from the camera and processed from back to front. The renderer builds the final color and transparency by combining the colors and transparencies of each triangle, along with the atmospheric effects between the triangles. See the Procedural Textures and Natural Phenomena section of the SDL Reference Manual for more information on how these colors are calculated and combined.
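
The back-to-front combination can be sketched as a simple "over" compositing loop. This is a minimal single-channel model assuming each surface sample is a (color, transparency) pair, with transparency 1.0 fully transparent and 0.0 opaque; it omits the atmospheric effects between surfaces.

```python
def composite_back_to_front(background, surfaces):
    """Combine surface samples over a background, farthest surface first.
    Each sample is a (color, transparency) pair for one channel."""
    color, transparency = background
    for surf_color, surf_transparency in surfaces:
        # The nearer surface contributes its own color and passes the
        # accumulated color through in proportion to its transparency.
        color = surf_color * (1.0 - surf_transparency) + color * surf_transparency
        transparency *= surf_transparency
    return color, transparency

# A half-transparent far surface over a black background, then an
# opaque near surface that completely obscures it.
print(composite_back_to_front((0.0, 1.0), [(1.0, 0.5), (0.8, 0.0)]))  # -> (0.8, 0.0)
```

Note how the opaque near surface discards everything accumulated behind it, exactly as described below for transparency 0.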

This illustration shows the complex process by which the renderer combines the colors of the intersected triangles. It does not show the additional complexities introduced by motion blur.

The following illustration shows an example of how both the surfaces and ray segments are used to calculate the color and transparency of a pixel.

To calculate the ray illustrated above, the renderer starts with the color and transparency of the background. Then it calculates the atmospheric effects of the rightmost ray segment (the segment between the farthest surface and the background) and adds them to the color and transparency of the background. Then it adds the color and transparency of the farthest surface, then the next closest, and so on. Note that if any of the surfaces are opaque (transparency 0), the color information of farther surfaces is lost, since the opaque surface completely obscures the surfaces behind it.
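
The full walk along the ray, alternating atmospheric segments and surfaces, might be sketched as follows. The linear fog model, the `far` background distance, and all parameter names are assumptions made for illustration; the real atmospheric calculation is described in the SDL Reference Manual.

```python
def shade_ray(background, hits, far=100.0, fog_density=0.01, fog_color=0.5):
    """Composite a single channel along one ray, back to front.
    `background` is (color, transparency); `hits` is a list of
    (distance, color, transparency) surface intersections.
    Transparency 1.0 lets everything through; 0.0 is opaque."""
    ordered = sorted(hits, key=lambda h: h[0], reverse=True)  # farthest first
    color, transparency = background
    segment_start = far  # the background is assumed to sit at `far`
    for distance, surf_color, surf_transparency in ordered:
        # Atmospheric effects over the ray segment behind this surface:
        # a toy linear fog that blends toward the fog color with length.
        fog = min(1.0, fog_density * (segment_start - distance))
        color = fog_color * fog + color * (1.0 - fog)
        # Composite the surface over everything behind it. An opaque
        # surface (transparency 0) discards the accumulated color.
        color = surf_color * (1.0 - surf_transparency) + color * surf_transparency
        transparency *= surf_transparency
        segment_start = distance
    return color, transparency
```

A single opaque surface at distance 10 hides the background and fog entirely: `shade_ray((0.0, 1.0), [(10.0, 1.0, 0.0)])` returns `(1.0, 0.0)`.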