8 Reflection Models
This chapter defines a set of classes for describing the way that light scatters at surfaces. Recall that in Section 5.6.1 we introduced the bidirectional reflectance distribution function (BRDF) abstraction to describe light reflection at a surface, the BTDF to describe transmission at a surface, and the BSDF to encompass both of these effects. In this chapter, we will start by defining a generic interface to these surface reflection and transmission functions.
Scattering from many surfaces is often best described as a spatially varying mixture of multiple BRDFs and BTDFs; in Chapter 9, we will introduce a BSDF object that combines multiple BRDFs and BTDFs to represent overall scattering from the surface. The current chapter sidesteps the issue of reflection and transmission properties that vary over the surface; the texture classes of Chapter 10 will address that problem. BRDFs and BTDFs explicitly only model scattering from light that enters and exits a surface at a single point. For surfaces that exhibit meaningful subsurface light transport, we will introduce the BSSRDF class, which models subsurface scattering, in Section 11.4 after some of the related theory is introduced in Chapter 11.
Surface reflection models come from a number of sources:
- Measured data: Reflection distribution properties of many real-world surfaces have been measured in laboratories. Such data may be used directly in tabular form or to compute coefficients for a set of basis functions.
- Phenomenological models: Equations that attempt to describe the qualitative properties of real-world surfaces can be remarkably effective at mimicking them. These types of BSDFs can be particularly easy to use, since they tend to have intuitive parameters that modify their behavior (e.g., “roughness”).
- Simulation: Sometimes, low-level information is known about the composition of a surface. For example, we might know that a paint is composed of colored particles of some average size suspended in a medium or that a particular fabric is composed of two types of threads, each with known reflectance properties. In these cases, light scattering from the microgeometry can be simulated to generate reflection data. This simulation can be done either during rendering or as a preprocess, after which the data may be fit to a set of basis functions for use during rendering.
- Physical (wave) optics: Some reflection models have been derived using a detailed model of light, treating it as a wave and computing the solution to Maxwell’s equations to find how it scatters from a surface with known properties. These models tend to be computationally expensive, however, and for rendering applications they usually aren’t appreciably more accurate than models based on geometric optics.
- Geometric optics: As with simulation approaches, if the surface’s low-level scattering and geometric properties are known, then closed-form reflection models can sometimes be derived directly from these descriptions. Geometric optics makes modeling light’s interaction with the surface more tractable, since complex wave effects like polarization can be ignored.
The “Further Reading” section at the end of this chapter gives pointers to a variety of such reflection models.
Before we define the relevant interfaces, a brief review of how they fit into the overall system is in order. If a SamplerIntegrator is used, the SamplerIntegrator::Li() method implementation is called for each ray. After finding the closest intersection with a geometric primitive, it calls the surface shader that is associated with the primitive. The surface shader is implemented as a method of Material subclasses and is responsible for deciding what the BSDF is at a particular point on the surface; it returns a BSDF object that holds BRDFs and BTDFs that it has allocated and initialized to represent scattering at that point. The integrator then uses the BSDF to compute the scattered light at the point, based on the incoming illumination at the point. (The process is broadly similar when a BDPTIntegrator, MLTIntegrator, or SPPMIntegrator is used rather than a SamplerIntegrator.)
Basic Terminology
In order to be able to compare the visual appearance of different reflection models, we will introduce some basic terminology for describing reflection from surfaces.
Reflection from surfaces can be split into four broad categories: diffuse, glossy specular, perfect specular, and retro-reflective (Figure 8.1). Most real surfaces exhibit reflection that is a mixture of these four types. Diffuse surfaces scatter light equally in all directions. Although a perfectly diffuse surface isn’t physically realizable, examples of near-diffuse surfaces include dull chalkboards and matte paint. Glossy specular surfaces such as plastic or high-gloss paint scatter light preferentially in a set of reflected directions—they show blurry reflections of other objects. Perfect specular surfaces scatter incident light in a single outgoing direction. Mirrors and glass are examples of perfect specular surfaces. Finally, retro-reflective surfaces like velvet or the Earth’s moon scatter light primarily back along the incident direction. Images throughout this chapter will show the differences between these various types of reflection when used in rendered scenes.
Given a particular category of reflection, the reflectance distribution function may be isotropic or anisotropic. Most objects are isotropic: if you choose a point on the surface and rotate it around its normal axis at that point, the distribution of light reflected doesn’t change. In contrast, anisotropic materials reflect different amounts of light as you rotate them in this way. Examples of anisotropic surfaces include brushed metal, many types of cloth, and compact disks.
Geometric Setting
Reflection computations in pbrt are evaluated in a reflection coordinate system where the two tangent vectors and the normal vector at the point being shaded are aligned with the x, y, and z axes, respectively (Figure 8.2). All direction vectors passed to and returned from the BRDF and BTDF routines will be defined with respect to this coordinate system. It is important to understand this coordinate system in order to understand the BRDF and BTDF implementations in this chapter.
The shading coordinate system also gives a frame for expressing directions in spherical coordinates (θ, φ); the angle θ is measured from the given direction to the z axis, and φ is the angle formed with the x axis after projection of the direction onto the xy plane. Given a direction vector ω in this coordinate system, it is easy to compute quantities like the cosine of the angle that it forms with the normal direction:

cos θ = (n · ω) = ((0, 0, 1) · ω) = ωz.
We will provide utility functions to compute this value and some useful variations; their use helps clarify BRDF and BTDF implementations.
The value of sin θ can be computed using the trigonometric identity sin²θ + cos²θ = 1, though we need to be careful to avoid taking the square root of a negative number in the rare case that 1 - Cos2Theta(w) is less than zero due to floating-point round-off error.
The tangent of the angle θ can be computed via the identity tan θ = sin θ / cos θ.
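As a concrete sketch of these θ utilities, assuming a bare-bones Vector3f stand-in rather than pbrt’s actual vector class, the helpers might look like the following (the function names match pbrt’s, but this fragment is a self-contained approximation, not the renderer’s code verbatim):

```cpp
#include <algorithm>
#include <cmath>

// Minimal stand-in for pbrt's Vector3f; directions are expressed in the
// shading frame, where the surface normal is the +z axis.
struct Vector3f { float x, y, z; };

// cos θ is just the z component, since n = (0, 0, 1).
inline float CosTheta(const Vector3f &w) { return w.z; }
inline float Cos2Theta(const Vector3f &w) { return w.z * w.z; }

// Clamp to zero before the square root so that floating-point round-off
// in 1 - Cos2Theta(w) can't produce a NaN.
inline float Sin2Theta(const Vector3f &w) {
    return std::max(0.f, 1.f - Cos2Theta(w));
}
inline float SinTheta(const Vector3f &w) { return std::sqrt(Sin2Theta(w)); }

// tan θ = sin θ / cos θ; the squared variant avoids the square root entirely.
inline float TanTheta(const Vector3f &w) { return SinTheta(w) / CosTheta(w); }
inline float Tan2Theta(const Vector3f &w) {
    return Sin2Theta(w) / Cos2Theta(w);
}
```

Note that the shading frame is what makes these one-liners possible: no dot products with an arbitrary normal are ever needed.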
We can similarly use the shading coordinate system to simplify the calculations for the sine and cosine of the angle φ (Figure 8.3). In the plane of the point being shaded, the vector ω has coordinates (x, y), which are given by r cos φ and r sin φ, respectively. The radius r is sin θ, so

cos φ = x / r = x / sin θ,
sin φ = y / r = y / sin θ.
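A sketch of the corresponding φ helpers, again using a minimal Vector3f stand-in and repeating the SinTheta helper so the fragment stands alone: when sin θ is zero the direction is parallel to the normal and φ is undefined, so (following pbrt’s convention) we return cos φ = 1 and sin φ = 0, and the clamp guards against quotients slightly outside [-1, 1] from round-off.

```cpp
#include <algorithm>
#include <cmath>

struct Vector3f { float x, y, z; };  // stand-in for pbrt's Vector3f

inline float Sin2Theta(const Vector3f &w) {
    return std::max(0.f, 1.f - w.z * w.z);
}
inline float SinTheta(const Vector3f &w) { return std::sqrt(Sin2Theta(w)); }

// cos φ = x / sin θ and sin φ = y / sin θ in the shading frame.
inline float CosPhi(const Vector3f &w) {
    float sinTheta = SinTheta(w);
    return (sinTheta == 0) ? 1 : std::clamp(w.x / sinTheta, -1.f, 1.f);
}
inline float SinPhi(const Vector3f &w) {
    float sinTheta = SinTheta(w);
    return (sinTheta == 0) ? 0 : std::clamp(w.y / sinTheta, -1.f, 1.f);
}
```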
The cosine of the angle Δφ between two vectors in the shading coordinate system can be found by zeroing the z coordinate of the two vectors to get 2D vectors and then normalizing them. The dot product of these two vectors gives the cosine of the angle between them. The implementation below rearranges the terms a bit for efficiency so that only a single square root operation needs to be performed.
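A sketch of that single-square-root rearrangement (self-contained, with the Vector3f stand-in repeated; the guard against zero-length projected vectors is an addition for safety and may differ from pbrt’s exact code):

```cpp
#include <algorithm>
#include <cmath>

struct Vector3f { float x, y, z; };  // stand-in for pbrt's Vector3f

// Cosine of the angle Δφ between wa and wb after dropping their z
// components. Instead of normalizing each 2D vector separately, fold the
// two squared lengths into one product so a single sqrt serves both.
inline float CosDPhi(const Vector3f &wa, const Vector3f &wb) {
    float waxy = wa.x * wa.x + wa.y * wa.y;
    float wbxy = wb.x * wb.x + wb.y * wb.y;
    if (waxy == 0 || wbxy == 0) return 1;  // degenerate: Δφ undefined
    return std::clamp((wa.x * wb.x + wa.y * wb.y) / std::sqrt(waxy * wbxy),
                      -1.f, 1.f);
}
```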
There are important conventions and implementation details to keep in mind when reading the code in this chapter and when adding BRDFs and BTDFs to pbrt:
- The incident light direction ωi and the outgoing viewing direction ωo will both be normalized and outward facing after being transformed into the local coordinate system at the surface.
- By convention in pbrt, the surface normal n always points to the “outside” of the object, which makes it easy to determine if light is entering or exiting transmissive objects: if the incident light direction ωi is in the same hemisphere as n, then light is entering; otherwise, it is exiting. Therefore, one detail to keep in mind is that the normal may be on the opposite side of the surface than one or both of the ωi and ωo direction vectors. Unlike many other renderers, pbrt does not flip the normal to lie on the same side as ωo.
- The local coordinate system used for shading may not be exactly the same as the coordinate system returned by the Shape::Intersect() routines from Chapter 3; they can be modified between intersection and shading to achieve effects like bump mapping. See Chapter 9 for examples of this kind of modification.
- Finally, BRDF and BTDF implementations should not concern themselves with whether ωi and ωo lie in the same hemisphere. For example, although a reflective BRDF should in principle detect if the incident direction is above the surface and the outgoing direction is below and always return no reflection in this case, here we will expect the reflection function to instead compute and return the amount of light reflected using the appropriate formulas for their reflection model, ignoring the detail that they are not in the same hemisphere. Higher-level code in pbrt will ensure that only reflective or transmissive scattering routines are evaluated as appropriate. The value of this convention will be explained in Section 9.1.
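The hemisphere test that this higher-level code relies on is cheap in the shading frame, since the normal is (0, 0, 1): two directions lie in the same hemisphere exactly when their z components share a sign. A sketch of such a helper (with the Vector3f stand-in repeated so it compiles on its own):

```cpp
struct Vector3f { float x, y, z; };  // stand-in for pbrt's Vector3f

// True if w and wp are on the same side of the surface in the shading
// frame, i.e. their z components have the same sign. Code that evaluates
// BSDFs can use this to dispatch to reflective vs. transmissive components.
inline bool SameHemisphere(const Vector3f &w, const Vector3f &wp) {
    return w.z * wp.z > 0;
}
```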