10.5 Material Interface and Implementations
With a variety of textures available, we will turn to materials, first introducing the material interface and then a few material implementations. pbrt’s materials all follow a similar form, evaluating textures to get parameter values that are used to initialize their particular BSDF model. Therefore, we will only include a few of their implementations in the text here.
The Material interface is defined by the Material class, which can be found in the file base/material.h. pbrt includes the implementations of 11 materials; there are enough of them that we have collected all of their type names in a fragment that is not included in the text.
One of the most important methods that Material implementations must provide is GetBxDF(). It has the following signature:
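The declaration has roughly the following shape (a sketch based on the description here and in Section 10.5.1; the exact parameter list may differ slightly):

```cpp
// Sketch of the GetBxDF() signature; ConcreteBxDF stands for the material's
// own BxDF type (e.g., DiffuseBxDF for DiffuseMaterial).
template <typename TextureEvaluator>
ConcreteBxDF GetBxDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                     SampledWavelengths &lambda) const;
```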
There are a few things to notice in its declaration. First, it is templated based on a type TextureEvaluator. This class is used by materials to, unsurprisingly, evaluate their textures. We will discuss it further in a page or two, as well as MaterialEvalContext, which serves a similar role to TextureEvalContext.
Most importantly, note the return type, ConcreteBxDF. This type is specific to each Material and should be replaced with the actual BxDF type that the material uses. (For example, the DiffuseMaterial returns a DiffuseBxDF.) Different materials thus have different signatures for their GetBxDF() methods. This is unusual for an interface method in C++ and is not possible with regular C++ virtual functions, whose return types cannot vary in this way across implementations; we will see shortly how pbrt handles this variety of return types.
Each Material is also responsible for defining the type of BxDF that it returns from its GetBxDF() method with a local type definition for the type BxDF. For example, DiffuseMaterial has
in the body of its definition.
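That declaration is presumably a single type alias along these lines:

```cpp
using BxDF = DiffuseBxDF;
```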
The value of defining the interface in this way is that doing so makes it possible to write generic BSDF evaluation code that is templated on the type of material. Such code can then allocate storage for the BxDF on the stack, for whatever type of BxDF the material uses. pbrt’s wavefront renderer, which is described in Chapter 15, takes advantage of this opportunity. (Further details and discussion of its use there are in Section 15.3.9.) A disadvantage of this design is that materials cannot return different BxDF types depending on their parameter values; they are limited to the one that they declare.
The Material class provides a GetBSDF() method that handles the variety of material BxDF return types. It requires some C++ arcana, though it centralizes the complexity of handling the diversity of types returned from the GetBxDF() methods.
Material::GetBSDF() has the same general form as most of the dynamic dispatch method implementations in pbrt. (We have elided almost all of them from the text since most of them are boilerplate code.) Here we define a lambda function, getBSDF, and call the Dispatch() method that Material inherits from TaggedPointer. Recall that Dispatch() uses type information encoded in a 64-bit pointer to determine which concrete material type the Material points to before casting the pointer to that type and passing it to the lambda.
getBSDF is a C++ generic lambda: when it is called, the auto mtl parameter will have a concrete type, that of a pointer to one of the materials enumerated in the <<Material Types>> fragment. Given mtl, then, we can find the concrete type of its material and thence the type of its BxDF. If a material does not return a BxDF, it should use void for its BxDF type definition. In that case, an unset BSDF is returned. (The MixMaterial is the only such Material in pbrt.)
The provided ScratchBuffer is used to allocate enough memory to store the material’s BxDF; using it is much more efficient than using C++’s new and delete operators here. That memory is then initialized with the value returned by the material’s GetBxDF() method before the complete BSDF is returned to the caller.
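A sketch of the overall dispatch pattern follows; the helper types and member names are assumptions based on the description above rather than a verbatim copy of pbrt's implementation.

```cpp
template <typename TextureEvaluator>
BSDF Material::GetBSDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                       SampledWavelengths &lambda,
                       ScratchBuffer &scratchBuffer) const {
    // Generic lambda: mtl has a concrete material pointer type after Dispatch().
    auto getBSDF = [&](auto mtl) -> BSDF {
        using ConcreteMtl = std::remove_reference_t<decltype(*mtl)>;
        using ConcreteBxDF = typename ConcreteMtl::BxDF;
        if constexpr (std::is_same_v<ConcreteBxDF, void>)
            // Materials like MixMaterial declare void and return no BxDF.
            return BSDF();
        else {
            // Allocate storage for the BxDF from the scratch buffer and
            // initialize it with the value returned by GetBxDF().
            ConcreteBxDF *bxdf = (ConcreteBxDF *)scratchBuffer.Alloc(
                sizeof(ConcreteBxDF), alignof(ConcreteBxDF));
            *bxdf = mtl->GetBxDF(texEval, ctx, lambda);
            return BSDF(ctx.ns, ctx.dpdus, bxdf);
        }
    };
    return Dispatch(getBSDF);
}
```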
Materials that incorporate subsurface scattering must define a GetBSSRDF() method that follows a similar form. They must also include a using declaration in their class definition that defines a concrete BSSRDF type. (The code for rendering BSSRDFs is included only in the online edition.)
The Material class provides a corresponding GetBSSRDF() method that uses the provided ScratchBuffer to allocate storage for the material-specific BSSRDF.
The MaterialEvalContext that GetBxDF() and GetBSSRDF() take plays a similar role to other *EvalContext classes: it encapsulates only the values that are necessary for material evaluation. These values are a superset of those used for texture evaluation, so MaterialEvalContext inherits from TextureEvalContext. Doing so has the added advantage that MaterialEvalContexts can be passed directly to the texture evaluation methods.
As before, there is not only a constructor that initializes a MaterialEvalContext from a SurfaceInteraction but also a constructor that takes the values for the members individually (not included here).
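A sketch of the structure; the specific members beyond those inherited from TextureEvalContext are assumptions based on what BSDF construction needs.

```cpp
struct MaterialEvalContext : public TextureEvalContext {
    MaterialEvalContext() = default;
    MaterialEvalContext(const SurfaceInteraction &si);  // not shown here
    Vector3f wo;     // outgoing direction at the shading point
    Normal3f ns;     // shading normal
    Vector3f dpdus;  // dp/du for the shading frame
};
```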
A TextureEvaluator is a class that is able to evaluate some or all of pbrt's texture types. One of its methods takes a set of textures and reports whether it is capable of evaluating them, while others actually evaluate textures. On the face of it, there is no obvious need for such a class: why not allow Materials to call the textures' Evaluate() methods directly? This additional layer of abstraction aids performance with the wavefront integrator; it makes it possible to separate materials into those that have lightweight textures and those with heavyweight textures and to process them separately. Doing so is beneficial to performance on the GPU; see Section 15.3.9 for further discussion.
For now we will only define the UniversalTextureEvaluator, which can evaluate all textures. In practice, the indirection it adds is optimized away by the compiler such that it introduces no runtime overhead. It is used with all of pbrt’s integrators other than the one defined in Chapter 15.
TextureEvaluators must provide a CanEvaluate() method that takes lists of FloatTextures and SpectrumTextures. They can then examine the types of the provided textures to determine if they are able to evaluate them. For the universal texture evaluator, the answer is always the same.
TextureEvaluators must also provide operator() method implementations that evaluate a given texture. Thus, given a texture evaluator texEval, material code should use the expression texEval(tex, ctx) rather than tex.Evaluate(ctx). The implementation of this method is again trivial for the universal evaluator. (A corresponding method for spectrum textures is effectively the same and not included here.)
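Putting the two requirements together, the universal evaluator is essentially a pass-through; the following sketch shows its assumed shape.

```cpp
class UniversalTextureEvaluator {
  public:
    // The universal evaluator can evaluate every texture type.
    bool CanEvaluate(std::initializer_list<FloatTexture>,
                     std::initializer_list<SpectrumTexture>) const {
        return true;
    }
    // Evaluation simply forwards to the texture's own Evaluate() method.
    Float operator()(FloatTexture tex, TextureEvalContext ctx) const {
        return tex.Evaluate(ctx);
    }
    SampledSpectrum operator()(SpectrumTexture tex, TextureEvalContext ctx,
                               SampledWavelengths lambda) const {
        return tex.Evaluate(ctx, lambda);
    }
};
```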
Returning to the Material interface, all materials must provide a CanEvaluateTextures() method that takes a texture evaluator. They should return the result of calling its CanEvaluate() method with all of their textures provided. Code that uses Materials is then responsible for ensuring that a Material’s GetBxDF() or GetBSSRDF() method is only called with a texture evaluator that is able to evaluate its textures.
Materials also may modify the shading normals of objects they are bound to, usually in order to introduce the appearance of greater geometric detail than is actually present. The Material interface has two ways that they may do so, normal mapping and bump mapping.
pbrt’s normal mapping code, which will be described in Section 10.5.3, takes an image that specifies the shading normals. A nullptr value should be returned by this interface method if no normal map is included with a material.
Alternatively, shading normals may be specified via bump mapping, which takes a displacement function that specifies surface detail with a FloatTexture. A nullptr value should be returned if no such displacement function has been specified.
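Assumed signatures for these two interface methods (a sketch; the return types follow the descriptions above):

```cpp
const Image *GetNormalMap() const;    // nullptr if the material has no normal map
FloatTexture GetDisplacement() const; // nullptr if no displacement texture is set
```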
What should be returned by HasSubsurfaceScattering() method implementations should be obvious; this method is used to determine for which materials in a scene it is necessary to do the additional processing to model that effect.
10.5.1 Material Implementations
With the preliminaries covered, we will now present a few material implementations. All the Materials in pbrt are fairly basic bridges between Textures and BxDFs, so we will focus here on their basic form and some of the unique details of one of them.
Diffuse Material
DiffuseMaterial is the simplest material implementation and is a good starting point for understanding the material requirements.
These are the BxDF and BSSRDF type definitions for DiffuseMaterial. Because this material does not include subsurface scattering, BSSRDF can be set to be void.
The constructor initializes the following member variables with provided values, so it is not included here.
The CanEvaluateTextures() method is easy to implement; the various textures used for BSDF evaluation are passed to the given TextureEvaluator. Note that the displacement texture is not included here; if present, it is handled separately by the bump mapping code.
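For DiffuseMaterial, which uses a single reflectance texture for its BSDF, the method presumably reduces to a one-liner like this (the member name reflectance is assumed):

```cpp
template <typename TextureEvaluator>
bool CanEvaluateTextures(TextureEvaluator texEval) const {
    // No FloatTextures are used for the BSDF; only the reflectance SpectrumTexture.
    return texEval.CanEvaluate({}, {reflectance});
}
```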
There is also not very much to GetBxDF(); it evaluates the reflectance texture, clamping the result to the range of valid reflectances before passing it along to the DiffuseBxDF constructor and returning a DiffuseBxDF.
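A sketch of that method, assuming a Clamp() utility that clamps each spectrum sample to [0, 1]:

```cpp
template <typename TextureEvaluator>
DiffuseBxDF GetBxDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                    SampledWavelengths &lambda) const {
    // Evaluate the reflectance texture and clamp it to valid reflectances.
    SampledSpectrum r = Clamp(texEval(reflectance, ctx, lambda), 0, 1);
    return DiffuseBxDF(r);
}
```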
GetNormalMap() and GetDisplacement() return the corresponding member variables and the remaining methods are trivial; see the source code for details.
Dielectric Material
DielectricMaterial represents a dielectric interface.
It returns a DielectricBxDF and does not include subsurface scattering.
DielectricMaterial has a few more parameters than DiffuseMaterial. The index of refraction is specified with a Spectrum so that it may vary with wavelength. Note also that two roughness values are stored, which allows the specification of an anisotropic microfacet distribution. If the distribution is isotropic, storing separate values is slightly redundant and, as we will see shortly, both roughness textures are always evaluated, which is a minor inefficiency.
GetBxDF() follows a similar form to DiffuseMaterial, evaluating various textures and using their results to initialize the returned DielectricBxDF.
If the index of refraction is the same for all wavelengths, then all wavelengths will follow the same path if a ray is refracted. Otherwise, they will go in different directions—this is dispersion. In that case, pbrt only follows a single ray path according to the first wavelength in SampledWavelengths rather than tracing multiple rays to track each of them, and a call to SampledWavelengths::TerminateSecondary() is necessary. (See Section 4.5.4 for more information.)
DielectricMaterial therefore calls TerminateSecondary() unless the index of refraction is known to be constant, as determined by checking if eta’s Spectrum type is a ConstantSpectrum. This check does not detect all cases where the sampled spectrum values are all the same, but it catches most of them in practice, and unnecessarily terminating the secondary wavelengths affects performance but not correctness. A bigger shortcoming of the implementation here is that there is no dispersion if light is reflected at a surface and not refracted. In that case, all wavelengths could still be followed. However, how light paths will be sampled at the surface is not known at this point in program execution.
It can be convenient to specify a microfacet distribution's roughness with a scalar parameter in $[0, 1]$, where values close to zero correspond to near-perfect specular reflection, rather than by specifying $\alpha$ values directly. The RoughnessToAlpha() method performs a mapping that gives a reasonably intuitive control for surface appearance.
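In pbrt-v4 this remapping is, to the best of our knowledge, a simple square root; the following sketch reflects that assumption.

```cpp
// Map a perceptually more uniform roughness parameter in [0, 1] to the
// microfacet distribution's alpha parameter.
static Float RoughnessToAlpha(Float roughness) { return std::sqrt(roughness); }
```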
The GetBxDF() method then evaluates the roughness textures and remaps the returned values if required.
Given the index of refraction and microfacet distribution, it is easy to pull the pieces together to return the final BxDF.
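Assembling the pieces described in the last few paragraphs, DielectricMaterial::GetBxDF() plausibly looks like the following sketch (the member names eta, uRoughness, vRoughness, and remapRoughness are assumptions):

```cpp
template <typename TextureEvaluator>
DielectricBxDF GetBxDF(TextureEvaluator texEval, MaterialEvalContext ctx,
                       SampledWavelengths &lambda) const {
    // Sample the index of refraction at the first wavelength; terminate the
    // secondary wavelengths unless eta is spectrally constant.
    Float sampledEta = eta(lambda[0]);
    if (!eta.Is<ConstantSpectrum>())
        lambda.TerminateSecondary();

    // Evaluate the roughness textures, remapping roughness to alpha if needed.
    Float urough = texEval(uRoughness, ctx), vrough = texEval(vRoughness, ctx);
    if (remapRoughness) {
        urough = TrowbridgeReitzDistribution::RoughnessToAlpha(urough);
        vrough = TrowbridgeReitzDistribution::RoughnessToAlpha(vrough);
    }
    TrowbridgeReitzDistribution distrib(urough, vrough);

    return DielectricBxDF(sampledEta, distrib);
}
```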
Mix Material
The final material implementation that we will describe in the text is MixMaterial, which stores two other materials and uses a Float-valued texture to blend between them.
MixMaterial does not cleanly fit into pbrt’s Material abstraction. For example, it is unable to define a single BxDF type that it will return, since its two constituent materials may have different BxDFs, and may themselves be MixMaterials, for that matter. Thus, MixMaterial requires special handling by the code that uses materials. (For example, there is a special case for MixMaterials in the SurfaceInteraction::GetBSDF() method described in Section 10.5.2.)
This is not ideal: as a general point of software design, it would be better to have abstractions that make it possible to provide this functionality without requiring special-case handling in calling code. However, we were unable to find a clean way to do this while still being able to statically reason about the type of BxDF a material will return; that aspect of the Material interface offers enough of a performance benefit that we did not want to change it.
Therefore, when a MixMaterial is encountered, one of its constituent materials is randomly chosen, with probability given by the floating-point amount texture. Thus, a 50/50 mix of two materials is not represented by the average of their respective BSDFs and so forth, but instead by each of them being evaluated half the time. This is effectively the material analog of the stochastic alpha test that was described in Section 7.1.1. The ChooseMaterial() method implements the logic.
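A sketch of the selection logic; using a hash of the shading point as the deterministic "random" value is our assumption about how the sample might be obtained.

```cpp
template <typename TextureEvaluator>
Material ChooseMaterial(TextureEvaluator texEval, MaterialEvalContext ctx) const {
    Float amt = texEval(amount, ctx);
    // Handle the cases where one material is always chosen.
    if (amt <= 0) return materials[0];
    if (amt >= 1) return materials[1];
    // Otherwise, stochastically select between the two constituent materials.
    Float u = uint32_t(Hash(ctx.p, ctx.wo)) * 0x1p-32f;
    return (amt < u) ? materials[0] : materials[1];
}
```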
Stochastic selection of materials can introduce noise in images at low sampling rates; see Figure 10.20. However, a few tens of samples are generally plenty to resolve any visual error. Furthermore, this approach does bring benefits: sampling and evaluation of the resulting BSDF is more efficient than if it was a weighted sum of the BSDFs from the constituent materials.
MixMaterial provides an accessor that makes it possible to traverse all the materials in the scene, including those nested inside a MixMaterial, so that it is possible to perform operations such as determining which types of materials are and are not present in a scene.
A fatal error is issued if the GetBxDF() method is called. A call to GetBSSRDF() is handled similarly, in code not included here.
10.5.2 Finding the BSDF at a Surface
Because pbrt’s Integrators use the SurfaceInteraction class to collect the necessary information associated with each intersection point, we will add a GetBSDF() method to this class that handles all the details related to computing the BSDF at its point.
This method first calls the SurfaceInteraction’s ComputeDifferentials() method to compute information about the projected size of the surface area around the intersection on the image plane for use in texture antialiasing.
As described in Section 10.5.1, if there is a MixMaterial at the intersection point, it is necessary to resolve it to be a regular material. A while loop here ensures that nested MixMaterials are handled correctly.
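A sketch of that loop, using the universal texture evaluator and a MaterialEvalContext built from the SurfaceInteraction:

```cpp
// Resolve (possibly nested) MixMaterials to a concrete material.
while (material.Is<MixMaterial>()) {
    MixMaterial *mix = material.Cast<MixMaterial>();
    material = mix->ChooseMaterial(UniversalTextureEvaluator(),
                                   MaterialEvalContext(*this));
}
```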
If the final material is nullptr, it represents a non-scattering interface between two types of participating media. In this case, a default uninitialized BSDF is returned.
Otherwise, normal or bump mapping is performed before the BSDF is created.
The appropriate utility function for normal or bump mapping is called, depending on which technique is to be used.
With differentials both for texture filtering and for shading geometry now settled, the Material::GetBSDF() method can be called. Note that the universal texture evaluator is used both here and previously in the method, as there is no need to distinguish between different texture complexities in this part of the system.
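That call plausibly amounts to a single line (a sketch; lambda and scratchBuffer are assumed to be parameters of SurfaceInteraction::GetBSDF()):

```cpp
BSDF bsdf = material.GetBSDF(UniversalTextureEvaluator(),
                             MaterialEvalContext(*this), lambda, scratchBuffer);
```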
pbrt provides an option to override all the materials in a scene with equivalent diffuse BSDFs; doing so can be useful for some debugging problems. In this case, the hemispherical–directional reflectance is used to initialize a DiffuseBxDF.
The SurfaceInteraction::GetBSSRDF() method, not included here, follows a similar path before calling Material::GetBSSRDF().
10.5.3 Normal Mapping
Normal mapping is a technique that maps tabularized surface normals stored in images to surfaces and uses them to specify shading normals in order to give the appearance of fine geometric detail.
With normal maps, one must choose a coordinate system for the stored normals. While any coordinate system may be chosen, one of the most useful is the local shading coordinate system at each point on a surface, where the $z$ axis is aligned with the surface normal and the tangent vectors are aligned with $\partial p/\partial u$ and $\partial p/\partial v$. (This is the same as the reflection coordinate system described in Section 9.1.1.) When that coordinate system is used, the approach is called tangent-space normal mapping. With tangent-space normal mapping, a given normal map can be applied to a variety of shapes, while choosing a coordinate system like object space would closely couple a normal map's encoding to a specific geometric object.
Normal maps are traditionally encoded in RGB images, where red, green, and blue respectively store the $x$, $y$, and $z$ components of the surface normal. When tangent-space normal mapping is used, normal map images are typically predominantly blue, reflecting the fact that the $z$ component of the surface normal has the largest magnitude unless the normal has been substantially perturbed. (See Figure 10.21.)
This RGB encoding brings us to an unfortunate casualty from the adoption of spectral rendering in this version of pbrt: while pbrt’s SpectrumTextures previously returned RGB colors, they now return point-sampled spectral values. If an RGB image map is used for a spectrum texture, it is not possible to exactly reconstruct the original RGB colors; there will unavoidably be error in the Monte Carlo estimator that must be evaluated to find RGB. Introducing noise in the orientations of surface normals is unacceptable, since it would lead to systemic bias in rendered images. Consider a bumpy shiny object: error in the surface normal would lead to scattered rays intersecting objects that they would never intersect given the correct normals, which could cause arbitrarily large error.
We might avoid that problem by augmenting the SpectrumTexture interface to include a method that returned RGB color, by introducing a separate RGBTexture interface and texture implementations, or by introducing a NormalTexture that returned normals directly. Any of these could cleanly support normal mapping, though all would require a significant amount of additional code.
Because the capability of directly looking up RGB values is only needed for normal mapping, the NormalMap() function takes an Image to specify the normal map. It assumes that the first three channels of the image represent red, green, and blue. With this approach we have lost the benefits of being able to scale and mix textures as well as the ability to apply a variety of mapping functions to compute texture coordinates. While that is unfortunate, those capabilities are less often used with normal maps than with other types of textures, and so we prefer not to make the Texture interfaces more complex purely for normal mapping.
Both NormalMap() and BumpMap() take a NormalBumpEvalContext to specify the local geometric information for the point where the shading geometry is being computed.
As usual, it has a constructor, not included here, that performs initialization given a SurfaceInteraction.
It also provides a conversion operator to TextureEvalContext, which only needs a subset of the values stored in NormalBumpEvalContext.
The first step in the normal mapping computation is to read the tangent-space normal vector from the image map. The image wrap mode is hard-coded here since Repeat is almost always the desired mode, though it would be easy to allow the wrap mode to be set via a parameter. Note also that the $v$ coordinate is inverted, again following the image texture coordinate convention discussed in Section 10.4.2.
Normal maps are traditionally encoded in fixed-point image formats with pixel values that range from 0 to 1. This encoding allows the use of compact 8-bit pixel representations as well as compressed image formats that are supported by GPUs. Values read from the image must therefore be remapped to the range $[-1, 1]$ to reconstruct an associated normal vector. The normal vector must be renormalized, as both the quantization in the image pixel format and the bilinear interpolation may have caused it to be non-unit-length.
In order to transform the normal to rendering space, a Frame can be used to specify a coordinate system where the original shading normal is aligned with the $z$ axis. Transforming the tangent-space normal into this coordinate system gives the rendering-space normal.
This function returns partial derivatives of the surface that account for the shading normal rather than the shading normal itself. Suitable partial derivatives can be found in two steps. First, a call to GramSchmidt() with the original $\partial p/\partial u$ and the new shading normal $\mathbf{n}_s$ gives the closest vector to $\partial p/\partial u$ that is perpendicular to $\mathbf{n}_s$. $\partial p/\partial v$ is then found by taking the cross product of $\mathbf{n}_s$ and the new $\partial p/\partial u$, giving an orthogonal coordinate system. Both of these vectors are respectively scaled to have the same length as the original $\partial p/\partial u$ and $\partial p/\partial v$ vectors.
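The following sketch puts the three steps together; the Image lookup method (BilerpChannel) and the NormalBumpEvalContext member names are assumptions.

```cpp
void NormalMap(const Image &normalMap, const NormalBumpEvalContext &ctx,
               Vector3f *dpdu, Vector3f *dpdv) {
    // Read the tangent-space normal, remapping [0,1] channel values to [-1,1].
    WrapMode2D wrap(WrapMode::Repeat);
    Point2f uv(ctx.uv[0], 1 - ctx.uv[1]);
    Vector3f ns(2 * normalMap.BilerpChannel(uv, 0, wrap) - 1,
                2 * normalMap.BilerpChannel(uv, 1, wrap) - 1,
                2 * normalMap.BilerpChannel(uv, 2, wrap) - 1);
    ns = Normalize(ns);

    // Transform the tangent-space normal to rendering space.
    Frame frame = Frame::FromZ(Vector3f(ctx.shading.n));
    ns = frame.FromLocal(ns);

    // Compute partial derivatives that correspond to the new shading normal.
    Float ulen = Length(ctx.shading.dpdu), vlen = Length(ctx.shading.dpdv);
    *dpdu = Normalize(GramSchmidt(ctx.shading.dpdu, ns)) * ulen;
    *dpdv = Normalize(Cross(ns, *dpdu)) * vlen;
}
```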
10.5.4 Bump Mapping
Another way to define shading normals is via a FloatTexture that defines a displacement at each point on the surface: each point $p$ has a displaced point $p'$ associated with it, defined by $p' = p + d(p)\,\mathbf{n}(p)$, where $d(p)$ is the offset returned by the displacement texture at $p$ and $\mathbf{n}(p)$ is the surface normal at $p$ (Figure 10.22). We can use this texture to compute shading normals so that the surface appears as if it actually had been offset by the displacement function, without modifying its geometry. This process is called bump mapping. For relatively small displacement functions, the visual effect of bump mapping can be quite convincing.
An example of bump mapping is shown in Figure 10.23, which shows part of the San Miguel scene rendered with and without bump mapping. There, the bump map gives the appearance of a substantial amount of detail in the walls and floors that is not actually present in the geometric model. Figure 10.24 shows one of the image maps used to define the bump function in Figure 10.23.
The BumpMap() function is responsible for computing the effect of bump mapping at the point being shaded given a particular displacement texture. Its implementation is based on finding an approximation to the partial derivatives $\partial p/\partial u$ and $\partial p/\partial v$ of the displaced surface and using them in place of the surface's actual partial derivatives to compute the shading normal. Assume that the original surface is defined by a parametric function $p(u, v)$, and the bump offset function is a scalar function $d(u, v)$. Then the displaced surface is given by
$$p'(u, v) = p(u, v) + d(u, v)\,\mathbf{n}(u, v),$$
where $\mathbf{n}(u, v)$ is the surface normal at $(u, v)$.
The partial derivatives of $p'$ can be found using the chain rule. For example, the partial derivative in $u$ is
$$\frac{\partial p'(u, v)}{\partial u} = \frac{\partial p(u, v)}{\partial u} + \frac{\partial d(u, v)}{\partial u}\,\mathbf{n}(u, v) + d(u, v)\,\frac{\partial \mathbf{n}(u, v)}{\partial u}.$$
We have already computed the value of $\partial p(u, v)/\partial u$; it is $\partial p/\partial u$ and is available in the TextureEvalContext structure, which also stores the surface normal $\mathbf{n}(u, v)$ and the partial derivative $\partial \mathbf{n}(u, v)/\partial u$. The displacement function $d(u, v)$ can be readily evaluated, which leaves $\partial d(u, v)/\partial u$ as the only remaining term.
There are two possible approaches to finding the values of $\partial d(u, v)/\partial u$ and $\partial d(u, v)/\partial v$. One option would be to augment the FloatTexture interface with a method to compute partial derivatives of the underlying texture function. For example, for image map textures mapped to the surface directly using its parameterization, these partial derivatives can be computed by subtracting adjacent texels in the $u$ and $v$ directions. However, this approach is difficult to extend to complex procedural textures like some of the ones defined earlier in this chapter. Therefore, pbrt directly computes these values with forward differencing, without modifying the FloatTexture interface.
Recall the definition of the partial derivative:
$$\frac{\partial d(u, v)}{\partial u} = \lim_{\Delta u \to 0} \frac{d(u + \Delta u, v) - d(u, v)}{\Delta u}.$$
Forward differencing approximates the value using a finite value of $\Delta u$ and evaluating $d(u, v)$ at two positions. Thus, the final expression for $\partial p'/\partial u$ is the following (for simplicity, we have dropped the explicit dependence on $(u, v)$ for some of the terms):
$$\frac{\partial p'}{\partial u} \approx \frac{\partial p}{\partial u} + \frac{d(u + \Delta u, v) - d(u, v)}{\Delta u}\,\mathbf{n} + d(u, v)\,\frac{\partial \mathbf{n}}{\partial u}. \tag{10.12}$$
Interestingly enough, most bump-mapping implementations ignore the final term under the assumption that $d(u, v)$ is expected to be relatively small. (Since bump mapping is mostly useful for approximating small perturbations, this is a reasonable assumption.) The fact that many renderers do not compute the values $\partial \mathbf{n}/\partial u$ and $\partial \mathbf{n}/\partial v$ may also have something to do with this simplification. An implication of ignoring the last term is that the magnitude of the displacement function then does not affect the bump-mapped partial derivatives; adding a constant value to it globally does not affect the final result, since only differences of the bump function affect it. pbrt computes all three terms since it has $\partial \mathbf{n}/\partial u$ and $\partial \mathbf{n}/\partial v$ readily available, although in practice this final term rarely makes a visually noticeable difference.
One remaining issue is how to choose the offsets $\Delta u$ and $\Delta v$ for the finite differencing computations. They should be small enough that fine changes in $d(u, v)$ are captured but large enough so that available floating-point precision is sufficient to give a good result. Here, we will choose $\Delta u$ and $\Delta v$ values that lead to an offset that is about half the image-space pixel sample spacing and use them to update the appropriate member variables in the TextureEvalContext to reflect a shift to the offset position.
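A sketch of the $u$ offset computation; the context member names (dudx, dudy, shading.dpdu) are assumptions.

```cpp
// Choose du as roughly half a pixel's worth of change in u and shift the
// texture evaluation context accordingly.
TextureEvalContext shiftedCtx = ctx;
Float du = .5f * (std::abs(ctx.dudx) + std::abs(ctx.dudy));
if (du == 0) du = .0005f;  // fallback when no differentials are available
shiftedCtx.p = ctx.p + du * ctx.shading.dpdu;
shiftedCtx.uv = ctx.uv + Vector2f(du, 0.f);
Float uDisplace = texEval(displacement, shiftedCtx);
```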
The <<Shift shiftedCtx dv in the $v$ direction>> fragment is nearly the same as the fragment that shifts du, so it is not included here.
Given the new positions and the displacement texture’s values at them, the partial derivatives can be computed directly using Equation (10.12):
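A sketch of that computation, assuming displace, uDisplace, vDisplace, du, and dv have been computed as described above:

```cpp
// Equation (10.12), applied in both parametric directions.
Float displace = texEval(displacement, ctx);
*dpdu = ctx.shading.dpdu +
        (uDisplace - displace) / du * Vector3f(ctx.shading.n) +
        displace * Vector3f(ctx.shading.dndu);
*dpdv = ctx.shading.dpdv +
        (vDisplace - displace) / dv * Vector3f(ctx.shading.n) +
        displace * Vector3f(ctx.shading.dndv);
```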