1.3 pbrt: System Overview

pbrt is structured using standard object-oriented techniques: abstract base classes are defined for important entities (e.g., a Shape abstract base class defines the interface that all geometric shapes must implement, the Light abstract base class acts similarly for lights, etc.). The majority of the system is implemented purely in terms of the interfaces provided by these abstract base classes; for example, the code that checks for occluding objects between a light source and a point being shaded calls the Shape intersection methods and doesn’t need to consider the particular types of shapes that are present in the scene. This approach makes it easy to extend the system, as adding a new shape only requires implementing a class that implements the Shape interface and linking it into the system.

pbrt is written using a total of 10 key abstract base classes, summarized in Table 1.1. Adding a new implementation of one of these types to the system is straightforward; the implementation must inherit from the appropriate base class, be compiled and linked into the executable, and the object creation routines in Appendix B must be modified to create instances of the object as needed as the scene description file is parsed. Section B.4 discusses extending the system in this manner in more detail.
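
To give a sense of what such an extension involves, the outline below sketches a hypothetical new shape. It is schematic rather than taken from pbrt: the actual Shape interface, including the exact signatures of these methods, is the subject of Chapter 3.

// Schematic outline of a user-added shape (hypothetical; see Chapter 3 for
// the real Shape interface and the precise method signatures).
class Torus : public Shape {
  public:
    Torus(const Transform *ObjectToWorld, const Transform *WorldToObject,
          bool reverseOrientation, Float majorRadius, Float minorRadius);
    // Bounding box in object space, used by acceleration structures.
    Bounds3f ObjectBound() const;
    // Ray-shape intersection tests that every Shape must provide.
    bool Intersect(const Ray &ray, Float *tHit, SurfaceInteraction *isect,
                   bool testAlphaTexture) const;
    bool IntersectP(const Ray &ray, bool testAlphaTexture) const;
    Float Area() const;
    // ... remaining Shape methods and the torus's private data ...
};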

Table 1.1: Main Interface Types. Most of pbrt is implemented in terms of 10 key abstract base classes, listed here. Implementations of each of these can easily be added to the system to extend its functionality.

Base class    Directory       Section
Shape         shapes/         3.1
Aggregate     accelerators/   4.2
Camera        cameras/        6.1
Sampler       samplers/       7.2
Filter        filters/        7.8
Material      materials/      9.2
Texture       textures/       10.3
Medium        media/          11.3
Light         lights/         12.2
Integrator    integrators/    1.3.3

The pbrt source code distribution is available from pbrt.org. (A large collection of example scenes is also available as a separate download.) All of the code for the pbrt core is in the src/core directory, and the main() function is contained in the short file main/pbrt.cpp. Implementations of the various abstract base classes are in separate directories: src/shapes has implementations of the Shape base class, src/materials has implementations of Material, and so forth.

Throughout this section are a number of images rendered with extended versions of pbrt. Of them, Figures 1.11 through 1.14 are notable: not only are they visually impressive but also each of them was created by a student in a rendering course where the final class project was to extend pbrt with new functionality in order to render an interesting image. These images are among the best from those courses.

Figure 1.11: Guillaume Poncin and Pramod Sharma extended pbrt in numerous ways, implementing a number of complex rendering algorithms, to make this prize-winning image for Stanford’s cs348b rendering competition. The trees are modeled procedurally with L-systems, a glow image processing filter increases the apparent realism of the lights on the tree, snow was modeled procedurally with metaballs, and a subsurface scattering algorithm gave the snow its realistic appearance by accounting for the effect of light that travels beneath the snow for some distance before leaving it.

Figure 1.12: Abe Davis, David Jacobs, and Jongmin Baek rendered this amazing image of an ice cave to take the grand prize in the 2009 Stanford CS348b rendering competition. They first implemented a simulation of the physical process of glaciation, the process where snow falls, melts, and refreezes over the course of many years, forming stratified layers of ice. They then simulated erosion of the ice due to melted water runoff before generating a geometric model of the ice. Scattering of light inside the volume was simulated with volumetric photon mapping; the blue color of the ice is entirely due to modeling the wavelength-dependent absorption of light in the ice volume.

Figure 1.13: Lingfeng Yang implemented a bidirectional texture function to simulate the appearance of cloth, adding an analytic self-shadowing model, to render this image that took first prize in the 2009 Stanford CS348b rendering competition.

1.3.1 Phases of Execution

pbrt can be conceptually divided into two phases of execution. First, it parses the scene description file provided by the user. The scene description is a text file that specifies the geometric shapes that make up the scene, their material properties, the lights that illuminate them, where the virtual camera is positioned in the scene, and parameters to all of the individual algorithms used throughout the system. Each statement in the input file has a direct mapping to one of the routines in Appendix B; these routines comprise the procedural interface for describing a scene. The scene file format is documented on the pbrt Web site, pbrt.org.
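
For concreteness, here is a small example of the kind of input pbrt accepts. This particular file is illustrative rather than taken from the book or the example scene collection; the statement names correspond to the procedural interface routines mentioned above.

# A minimal pbrt scene: a matte sphere lit by an infinite area light.
LookAt 3 4 1.5   .5 .5 0   0 0 1        # eye point, look-at point, up vector
Camera "perspective" "float fov" [45]
Film "image" "string filename" ["sphere.exr"]
     "integer xresolution" [400] "integer yresolution" [400]
Sampler "halton" "integer pixelsamples" [16]
Integrator "whitted"

WorldBegin
LightSource "infinite" "rgb L" [1 1 1]
AttributeBegin
    Material "matte" "rgb Kd" [.5 .5 .8]
    Shape "sphere" "float radius" [1]
AttributeEnd
WorldEnd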

The end results of the parsing phase are an instance of the Scene class and an instance of the Integrator class. The Scene contains a representation of the contents of the scene (geometric objects, lights, etc.), and the Integrator implements an algorithm to render it. The integrator is so-named because its main task is to evaluate the integral from Equation (1.1).

Once the scene has been specified, the second phase of execution begins, and the main rendering loop executes. This phase is where pbrt usually spends the majority of its running time, and most of this book describes code that executes during this phase. The rendering loop is performed by executing an implementation of the Integrator::Render() method, which is the focus of Section 1.3.4.

This chapter will describe a particular Integrator subclass named SamplerIntegrator, whose Render() method determines the light arriving at a virtual film plane for a large number of rays that model the process of image formation. After the contributions of all of these film samples have been computed, the final image is written to a file. The scene description data in memory are deallocated, and the system then resumes processing statements from the scene description file until no more remain, allowing the user to specify another scene to be rendered, if desired.

Figure 1.14: Jared Jacobs and Michael Turitzin added an implementation of Kajiya and Kay’s texel-based fur rendering algorithm (Kajiya and Kay 1989) to pbrt and rendered this image, where both the fur on the dog and the shag carpet are rendered with the texel fur algorithm.

Figures 1.15 and 1.16 were rendered with LuxRender, a GPL-licensed rendering system originally based on the pbrt source code from the first edition of the book. (See www.luxrender.net for more information about LuxRender.)

Figure 1.15: This contemporary indoor scene was modeled and rendered by Florent Boyer. The image was rendered using LuxRender, a GPL-licensed physically-based rendering system originally based on pbrt’s source code. Modeling and texturing were done using Blender.

Figure 1.16: Martin Lubich modeled this scene of the Austrian Imperial Crown and rendered it using LuxRender, an open source fork of the pbrt codebase. The scene was modeled in Blender and consists of approximately 1.8 million vertices. It is illuminated by six area light sources with emission spectra based on measured data from a real-world light source and was rendered with 1280 samples per pixel in 73 hours of computation on a quad-core CPU. See Martin’s Web site, www.loramel.net, for more information, including downloadable Blender scene files.

1.3.2 Scene Representation

pbrt’s main() function can be found in the file main/pbrt.cpp. This function is quite simple; it first loops over the provided command-line arguments in argv, initializing values in the Options structure and storing the filenames provided in the arguments. Running pbrt with --help as a command-line argument prints all of the options that can be specified on the command line. The fragment that parses the command-line arguments, <<Process command-line arguments>>, is straightforward and therefore not included in the book here.

The options structure is then passed to the pbrtInit() function, which does systemwide initialization. The main() function then parses the given scene description(s), leading to the creation of a Scene and an Integrator. After all rendering is done, pbrtCleanup() does final cleanup before the system exits.

The pbrtInit() and pbrtCleanup() functions appear in a mini-index in the page margin, along with the number of the page where they are actually defined. The mini-indices have pointers to the definitions of almost all of the functions, classes, methods, and member variables used or referred to on each page.

<<Main program>>= 
int main(int argc, char *argv[]) {
    Options options;
    std::vector<std::string> filenames;
    <<Process command-line arguments>>
    pbrtInit(options);
    <<Process scene description>>
    pbrtCleanup();
    return 0;
}

If pbrt is run with no input filenames provided, then the scene description is read from standard input. Otherwise it loops through the provided filenames, processing each file in turn.

<<Process scene description>>= 
if (filenames.size() == 0) {
    <<Parse scene from standard input>>
} else {
    <<Parse scene from input files>>
}

The ParseFile() function parses a scene description file, either from standard input or from a file on disk; it returns false if it was unable to open the file. The mechanics of parsing scene description files will not be described in this book; the parser implementation can be found in the lex and yacc files core/pbrtlex.ll and core/pbrtparse.y, respectively. Readers who want to understand the parsing subsystem but are not familiar with these tools may wish to consult Levine, Mason, and Brown (1992).

We use the common UNIX idiom that a file named “-” represents standard input:

<<Parse scene from standard input>>= 
ParseFile("-");

If a particular input file can’t be opened, the Error() routine reports this information to the user. Error() uses the same format string semantics as printf().

<<Parse scene from input files>>= 
for (const std::string &f : filenames)
    if (!ParseFile(f))
        Error("Couldn't open scene file \"%s\"", f.c_str());

As the scene file is parsed, objects are created that represent the lights and geometric primitives in the scene. These are all stored in the Scene object, which is created by the RenderOptions::MakeScene() method in Section B.3.7 in Appendix B. The Scene class is declared in core/scene.h and defined in core/scene.cpp.

<<Scene Declarations>>= 
class Scene {
  public:
    <<Scene Public Methods>>
    bool Intersect(const Ray &ray, SurfaceInteraction *isect) const;
    bool IntersectP(const Ray &ray) const;
    bool IntersectTr(Ray ray, Sampler &sampler, SurfaceInteraction *isect,
                     Spectrum *transmittance) const;
    <<Scene Public Data>>
  private:
    <<Scene Private Data>>
};

<<Scene Public Methods>>= 
Scene(std::shared_ptr<Primitive> aggregate,
      const std::vector<std::shared_ptr<Light>> &lights)
    : lights(lights), aggregate(aggregate) {
    <<Scene Constructor Implementation>>
}

Each light source in the scene is represented by a Light object, which specifies the shape of a light and the distribution of energy that it emits. The Scene stores all of the lights using a vector of shared_ptr instances from the C++ standard library. pbrt uses shared pointers to track how many times objects are referenced by other instances. When the last instance holding a reference (the Scene in this case) is destroyed, the reference count reaches zero and the Light can be safely freed, which happens automatically at that point.
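
The mechanism relied on here is just that of the C++ standard library; the following stand-alone fragment (plain C++, unrelated to pbrt’s classes) illustrates the reference counting behavior.

#include <memory>

int main() {
    auto a = std::make_shared<int>(42);  // reference count is 1
    {
        std::shared_ptr<int> b = a;      // count is 2: a and b share ownership
    }                                    // b is destroyed; count drops back to 1
    a.reset();                           // count reaches 0; the int is freed here
    return 0;
}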

While some renderers support separate light lists per geometric object, allowing a light to illuminate only some of the objects in the scene, this idea does not map well to the physically based rendering approach taken in pbrt, so pbrt only supports a single global per-scene list. Many parts of the system need access to the lights, so the Scene makes them available as a public member variable.

<<Scene Public Data>>= 
std::vector<std::shared_ptr<Light>> lights;

Each geometric object in the scene is represented by a Primitive, which combines two objects: a Shape that specifies its geometry, and a Material that describes its appearance (e.g., the object’s color, whether it has a dull or glossy finish). All of the geometric primitives are collected into a single aggregate Primitive in the Scene member variable Scene::aggregate. This aggregate is a special kind of primitive that itself holds references to many other primitives. Because it implements the Primitive interface it appears no different from a single primitive to the rest of the system. The aggregate implementation stores all the scene’s primitives in an acceleration data structure that reduces the number of unnecessary ray intersection tests with primitives that are far away from a given ray.

<<Scene Private Data>>= 
std::shared_ptr<Primitive> aggregate;

The constructor caches the bounding box of the scene geometry in the worldBound member variable.

<<Scene Constructor Implementation>>= 
worldBound = aggregate->WorldBound();

<<Scene Private Data>>+= 
Bounds3f worldBound;

The bound is made available via the WorldBound() method.

<<Scene Public Methods>>+= 
const Bounds3f &WorldBound() const { return worldBound; }

Some Light implementations find it useful to do some additional initialization after the scene has been defined but before rendering begins. The Scene constructor calls their Preprocess() methods to allow them to do so.

<<Scene Constructor Implementation>>+= 
for (const auto &light : lights) light->Preprocess(*this);

The Scene class provides two methods related to ray–primitive intersection. Its Intersect() method traces the given ray into the scene and returns a Boolean value indicating whether the ray intersected any of the primitives. If so, it fills in the provided SurfaceInteraction structure with information about the closest intersection point along the ray. The SurfaceInteraction structure is defined in Section 4.1.

<<Scene Method Definitions>>= 
bool Scene::Intersect(const Ray &ray, SurfaceInteraction *isect) const {
    return aggregate->Intersect(ray, isect);
}

A closely related method is Scene::IntersectP(), which checks for the existence of intersections along the ray but does not return any information about those intersections. Because this routine doesn’t need to search for the closest intersection or compute any additional information about intersections, it is generally more efficient than Scene::Intersect(). This routine is used for shadow rays.

<<Scene Method Definitions>>+=  
bool Scene::IntersectP(const Ray &ray) const {
    return aggregate->IntersectP(ray);
}

1.3.3 Integrator Interface and SamplerIntegrator

Rendering an image of the scene is handled by an instance of a class that implements the Integrator interface. Integrator is an abstract base class that defines the Render() method that must be provided by all integrators. In this section, we will define one Integrator implementation, the SamplerIntegrator. The basic integrator interfaces are defined in core/integrator.h, and some utility functions used by integrators are in core/integrator.cpp. The implementations of the various integrators are in the integrators directory.

<<Integrator Declarations>>= 
class Integrator {
  public:
    virtual ~Integrator();
    <<Integrator Interface>>
};

The method that Integrators must provide is Render(); it is passed a reference to the Scene to use to compute an image of the scene or, more generally, a set of measurements of the scene lighting. This interface is intentionally kept very general to permit a wide range of implementations—for example, one could implement an Integrator that takes measurements only at a sparse set of positions distributed through the scene rather than generating a regular 2D image.

<<Integrator Interface>>= 
virtual void Render(const Scene &scene) = 0;

In this chapter, we’ll focus on SamplerIntegrator, which is an Integrator subclass, and the WhittedIntegrator, which implements the SamplerIntegrator interface. (Implementations of other SamplerIntegrators will be introduced in Chapters 14 and 15; the integrators in Chapter 16 inherit directly from Integrator.) The name of the SamplerIntegrator derives from the fact that its rendering process is driven by a stream of samples from a Sampler; each such sample identifies a point on the image at which the integrator should compute the arriving light to form the image.

<<SamplerIntegrator Declarations>>= 
class SamplerIntegrator : public Integrator {
  public:
    <<SamplerIntegrator Public Methods>>
    void Render(const Scene &scene);
    Spectrum SpecularReflect(const RayDifferential &ray,
        const SurfaceInteraction &isect, const Scene &scene,
        Sampler &sampler, MemoryArena &arena, int depth) const;
    Spectrum SpecularTransmit(const RayDifferential &ray,
        const SurfaceInteraction &isect, const Scene &scene,
        Sampler &sampler, MemoryArena &arena, int depth) const;
  protected:
    <<SamplerIntegrator Protected Data>>
  private:
    <<SamplerIntegrator Private Data>>
};

The SamplerIntegrator stores a pointer to a Sampler. The role of the sampler is subtle, but its implementation can substantially affect the quality of the images that the system generates. First, the sampler is responsible for choosing the points on the image plane from which rays are traced. Second, it is responsible for supplying the sample positions used by integrators for estimating the value of the light transport integral, Equation (1.1). For example, some integrators need to choose random points on light sources to compute illumination from area lights. Generating a good distribution of these samples is an important part of the rendering process that can substantially affect overall efficiency; this topic is the main focus of Chapter 7.

<<SamplerIntegrator Private Data>>= 
std::shared_ptr<Sampler> sampler;

The Camera object controls the viewing and lens parameters such as position, orientation, focus, and field of view. A Film member variable inside the Camera class handles image storage. The Camera classes are described in Chapter 6, and Film is described in Section 7.9. The Film is responsible for writing the final image to a file and possibly displaying it on the screen as it is being computed.

<<SamplerIntegrator Protected Data>>= 
std::shared_ptr<const Camera> camera;

The SamplerIntegrator constructor stores pointers to these objects in member variables. The SamplerIntegrator is created in the RenderOptions::MakeIntegrator() method, which is in turn called by pbrtWorldEnd(), which is called by the input file parser when it is done parsing a scene description from an input file and is ready to render the scene.

<<SamplerIntegrator Public Methods>>= 
SamplerIntegrator(std::shared_ptr<const Camera> camera,
                  std::shared_ptr<Sampler> sampler)
    : camera(camera), sampler(sampler) { }

SamplerIntegrator implementations may optionally implement the Preprocess() method. It is called after the Scene has been fully initialized and gives the integrator a chance to do scene-dependent computation, such as allocating additional data structures that are dependent on the number of lights in the scene, or precomputing a rough representation of the distribution of radiance in the scene. Implementations that don’t need to do anything along these lines can leave this method unimplemented.

<<SamplerIntegrator Public Methods>>+=  
virtual void Preprocess(const Scene &scene, Sampler &sampler) { }

1.3.4 The Main Rendering Loop

After the Scene and the Integrator have been allocated and initialized, the Integrator::Render() method is invoked, starting the second phase of pbrt’s execution: the main rendering loop. In the SamplerIntegrator’s implementation of this method, at each of a series of positions on the image plane, the method uses the Camera and the Sampler to generate a ray into the scene and then uses the Li() method to determine the amount of light arriving at the image plane along that ray. This value is passed to the Film, which records the light’s contribution. Figure 1.17 summarizes the main classes used in this method and the flow of data among them.

Figure 1.17: Class Relationships for the Main Rendering Loop in the SamplerIntegrator::Render() Method in core/integrator.cpp. The Sampler provides a sequence of sample values, one for each image sample to be taken. The Camera turns a sample into a corresponding ray from the film plane, and the Li() method implementation computes the radiance along that ray arriving at the film. The sample and its radiance are given to the Film, which stores their contribution in an image. This process repeats until the Sampler has provided as many samples as are necessary to generate the final image.

<<SamplerIntegrator Method Definitions>>= 
void SamplerIntegrator::Render(const Scene &scene) {
    Preprocess(scene, *sampler);
    <<Render image tiles in parallel>>
    <<Save final image after rendering>>
}

So that rendering can proceed in parallel on systems with multiple processing cores, the image is decomposed into small tiles of pixels. Each tile can be processed independently and in parallel. The ParallelFor() function, which is described in more detail in Section A.6, implements a parallel for loop, where multiple iterations may run in parallel. A C++ lambda expression provides the loop body. Here, a variant of ParallelFor() that loops over a 2D domain is used to iterate over the image tiles.
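
As a rough sketch of how this facility is used, a parallel loop over a 1D range looks like the following; the exact signatures are given in Appendix A.6, and results and DoExpensiveWork() are hypothetical names used only for illustration.

// 1D parallel loop: each iteration index is handed to the lambda,
// potentially on a different worker thread.
ParallelFor(
    [&](int64_t i) {
        // Work for iteration i; must be safe to run concurrently with
        // other iterations.
        results[i] = DoExpensiveWork(i);
    }, itemCount);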

<<Render image tiles in parallel>>= 
<<Compute number of tiles, nTiles, to use for parallel rendering>>
ParallelFor2D(
    [&](Point2i tile) {
        <<Render section of image corresponding to tile>>
    }, nTiles);

There are two factors to trade off in deciding how large to make the image tiles: load-balancing and per-tile overhead. On one hand, we’d like to have significantly more tiles than there are processors in the system: consider a four-core computer with only four tiles. In general, it’s likely that some of the tiles will take less processing time than others; the ones that are responsible for parts of the image where the scene is relatively simple will usually take less processing time than parts of the image where the scene is relatively complex. Therefore, if the number of tiles was equal to the number of processors, some processors would finish before others and sit idle while waiting for the processor that had the longest running tile. Figure 1.18 illustrates this issue; it shows the distribution of execution time for the tiles used to render the shiny sphere scene in Figure 1.7. The longest running one took 151 times longer than the shortest one.

Figure 1.18: Histogram of Time Spent Rendering Each Tile for the Scene in Figure 1.7. The horizontal axis measures time in seconds. Note the wide variation in execution time, illustrating that different parts of the image required substantially different amounts of computation.

On the other hand, having tiles that are too small is also inefficient. There is a small fixed overhead for a processing core to determine which loop iteration it should run next; the more tiles there are, the more times this overhead must be paid.

For simplicity, pbrt always uses 16 × 16 tiles; this granularity works well for almost all images, except for very low-resolution ones. We implicitly assume that the small image case isn’t particularly important to render at maximum efficiency. The Film’s GetSampleBounds() method returns the extent of pixels over which samples must be generated for the image being rendered. The addition of tileSize - 1 in the computation of nTiles results in a number of tiles that is rounded to the next higher integer when the sample bounds along an axis are not exactly divisible by 16. This means that the lambda function invoked by ParallelFor() must be able to deal with partial tiles containing some unused pixels.
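
As a worked example (with an assumed image resolution, not one taken from the book): for a 1920 × 1080 sample extent,

    nTiles.x = (1920 + 15) / 16 = 120
    nTiles.y = (1080 + 15) / 16 = 68

using integer division. The x extent divides evenly, but the last row of tiles starts at y = 67 * 16 = 1072 and so covers only 1080 - 1072 = 8 of its 16 pixel rows; that is exactly the partial-tile case the lambda must handle.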

<<Compute number of tiles, nTiles, to use for parallel rendering>>= 
Bounds2i sampleBounds = camera->film->GetSampleBounds();
Vector2i sampleExtent = sampleBounds.Diagonal();
const int tileSize = 16;
Point2i nTiles((sampleExtent.x + tileSize - 1) / tileSize,
               (sampleExtent.y + tileSize - 1) / tileSize);

When the parallel for loop implementation that is defined in Appendix A.6.4 decides to run a loop iteration on a particular processor, the lambda will be called with the tile’s coordinates. It starts by doing a little bit of setup work, determining which part of the film plane it is responsible for and allocating space for some temporary data before using the Sampler to generate image samples, the Camera to determine corresponding rays leaving the film plane, and the Li() method to compute radiance along those rays arriving at the film.

<<Render section of image corresponding to tile>>= 
<<Allocate MemoryArena for tile>>
<<Get sampler instance for tile>>
<<Compute sample bounds for tile>>
<<Get FilmTile for tile>>
<<Loop over pixels in tile to render them>>
<<Merge image tile into Film>>

Implementations of the Li() method will generally need to temporarily allocate small amounts of memory for each radiance computation. The large number of resulting allocations can easily overwhelm the system’s regular memory allocation routines (e.g., malloc() or new), which must maintain and synchronize elaborate internal data structures to track sets of free memory regions among processors. A naive implementation could potentially spend a fairly large fraction of its computation time in the memory allocator.

To address this issue, we will pass an instance of the MemoryArena class to the Li() method. MemoryArena instances manage pools of memory to enable higher performance allocation than what is possible with the standard library routines.

The arena’s memory pool is always released in its entirety, which removes the need for complex internal data structures. Instances of this class can only be used by a single thread—concurrent access without additional synchronization is not permitted. We create a unique MemoryArena for each loop iteration that can be used directly, which also ensures that the arena is only accessed by a single thread.
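
The fragment that follows simply declares the per-tile arena. The usage pattern this enables looks roughly like the sketch below, which assumes the MemoryArena interface described in Appendix A (Alloc() and Reset(); ARENA_ALLOC is the related macro pbrt uses elsewhere to placement-construct objects in an arena).

MemoryArena arena;
// Pool-allocate temporary working memory; no matching free call is needed.
Float *samples = arena.Alloc<Float>(256);
// ... use the memory while computing the current sample's radiance ...
arena.Reset();   // releases everything allocated from the arena at once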

<<Allocate MemoryArena for tile>>= 
MemoryArena arena;

Most Sampler implementations find it useful to maintain some state, such as the coordinates of the current pixel being sampled. This means that multiple processing threads cannot use a single Sampler concurrently. Therefore, Samplers provide a Clone() method to create a new instance of a given Sampler; it takes a seed that is used by some implementations to seed a pseudo-random number generator so that the same sequence of pseudo-random numbers isn’t generated in every tile. (Note that not all Samplers use pseudo-random numbers; those that don’t just ignore the seed.)

<<Get sampler instance for tile>>= 
int seed = tile.y * nTiles.x + tile.x;
std::unique_ptr<Sampler> tileSampler = sampler->Clone(seed);

Next, the extent of pixels to be sampled in this loop iteration is computed based on the tile indices. Two issues must be accounted for in this computation: first, the overall pixel bounds to be sampled may not be equal to the full image resolution. For example, the user may have specified a “crop window” of just a subset of pixels to sample. Second, if the image resolution isn’t an exact multiple of 16, then the tiles at the right and bottom edges of the image won’t be full 16 × 16 tiles.

<<Compute sample bounds for tile>>= 
int x0 = sampleBounds.pMin.x + tile.x * tileSize;
int x1 = std::min(x0 + tileSize, sampleBounds.pMax.x);
int y0 = sampleBounds.pMin.y + tile.y * tileSize;
int y1 = std::min(y0 + tileSize, sampleBounds.pMax.y);
Bounds2i tileBounds(Point2i(x0, y0), Point2i(x1, y1));

Finally, a FilmTile is acquired from the Film. This class provides a small buffer of memory to store pixel values for the current tile. Its storage is private to the loop iteration, so pixel values can be updated without worrying about other threads concurrently modifying the same pixels. The tile is merged into the film’s storage once the work for rendering it is done; serializing concurrent updates to the image is handled then.

<<Get FilmTile for tile>>= 
std::unique_ptr<FilmTile> filmTile = camera->film->GetFilmTile(tileBounds);

Rendering can now proceed. The implementation loops over all of the pixels in the tile using a range-based for loop that automatically uses iterators provided by the Bounds2 class. The cloned Sampler is notified that it should start generating samples for the current pixel, and samples are processed in turn until StartNextSample() returns false. (As we’ll see in Chapter 7, taking multiple samples per pixel can greatly improve final image quality.)

<<Loop over pixels in tile to render them>>= 
for (Point2i pixel : tileBounds) {
    tileSampler->StartPixel(pixel);
    do {
        <<Initialize CameraSample for current sample>>
        <<Generate camera ray for current sample>>
        <<Evaluate radiance along camera ray>>
        <<Add camera ray’s contribution to image>>
        <<Free MemoryArena memory from computing image sample value>>
    } while (tileSampler->StartNextSample());
}

The CameraSample structure records the position on the film for which the camera should generate the corresponding ray. It also stores time and lens position sample values, which are used when rendering scenes with moving objects and for camera models that simulate non-pinhole apertures, respectively.

<<Initialize CameraSample for current sample>>= 
CameraSample cameraSample = tileSampler->GetCameraSample(pixel);
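
For reference, CameraSample is a small structure along the following lines; this is an approximate sketch, and its definitive declaration appears with the Camera interface in Chapter 6.

struct CameraSample {
    Point2f pFilm;   // point on the film to which the generated ray carries radiance
    Point2f pLens;   // point on the lens, used by cameras that model apertures
    Float time;      // time at which to sample the scene, for motion blur
};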

The Camera interface provides two methods to generate rays: Camera::GenerateRay(), which returns the ray for a given image sample position, and Camera::GenerateRayDifferential(), which returns a ray differential, which incorporates information about the rays that the Camera would generate for samples that are one pixel away on the image plane in both the x and y directions. Ray differentials are used to get better results from some of the texture functions defined in Chapter 10, making it possible to compute how quickly a texture varies with respect to the pixel spacing, a key component of texture antialiasing.

After the ray differential has been returned, the ScaleDifferentials() method scales the differential rays to account for the actual spacing between samples on the film plane for the case where multiple samples are taken per pixel.

The camera also returns a floating-point weight associated with the ray. For simple camera models, each ray is weighted equally, but camera models that more accurately model the process of image formation by lens systems may generate some rays that contribute more than others. Such a camera model might simulate the effect of less light arriving at the edges of the film plane than at the center, an effect called vignetting. The returned weight will be used later to scale the ray’s contribution to the image.

<<Generate camera ray for current sample>>= 
RayDifferential ray;
Float rayWeight = camera->GenerateRayDifferential(cameraSample, &ray);
ray.ScaleDifferentials(1 / std::sqrt(tileSampler->samplesPerPixel));

Note the capitalized floating-point type Float: depending on the compilation flags of pbrt, this is an alias for either float or double. More detail on this design choice is provided in Section A.1.
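
Concretely, the alias is chosen at compile time along these lines (a sketch of the definition in core/pbrt.h; the controlling flag name is an assumption here, and Section A.1 gives the details):

#ifdef PBRT_FLOAT_AS_DOUBLE
typedef double Float;
#else
typedef float Float;
#endif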

Given a ray, the next task is to determine the radiance arriving at the image plane along that ray. The Li() method takes care of this task.

<<Evaluate radiance along camera ray>>= 
Spectrum L(0.f);
if (rayWeight > 0) L = Li(ray, scene, *tileSampler, arena);
<<Issue warning if unexpected radiance value is returned>>

Li() is a pure virtual method that returns the incident radiance at the origin of a given ray; each subclass of SamplerIntegrator must provide an implementation of this method. The parameters to Li() are the following:

  • ray: the ray along which the incident radiance should be evaluated.
  • scene: the Scene being rendered. The implementation will query the scene for information about the lights and geometry, and so on.
  • sampler: a sample generator used to solve the light transport equation via Monte Carlo integration.
  • arena: a MemoryArena for efficient temporary memory allocation by the integrator. The integrator should assume that any memory it allocates with the arena will be freed shortly after the Li() method returns and thus should not use the arena to allocate any memory that must persist for longer than is needed for the current ray.
  • depth: the number of ray bounces from the camera that have occurred up until the current call to Li().

The method returns a Spectrum that represents the incident radiance at the origin of the ray:

<<SamplerIntegrator Public Methods>>+= 
virtual Spectrum Li(const RayDifferential &ray, const Scene &scene,
                    Sampler &sampler, MemoryArena &arena,
                    int depth = 0) const = 0;

A common side effect of bugs in the rendering process is that impossible radiance values are computed. For example, division by zero results in radiance values equal either to the IEEE floating-point infinity or “not a number” value. The renderer looks for this possibility, as well as for spectra with negative contributions, and prints an error message when it encounters them. Here we won’t include the fragment that does this, <<Issue warning if unexpected radiance value is returned>>. See the implementation in core/integrator.cpp if you’re interested in its details.

After the radiance arriving at the ray’s origin is known, the image can be updated: the FilmTile::AddSample() method updates the pixels in the tile’s image given the results from a sample. The details of how sample values are recorded in the film are explained in Sections 7.8 and 7.9.

<<Add camera ray’s contribution to image>>= 
filmTile->AddSample(cameraSample.pFilm, L, rayWeight);

After processing a sample, all of the allocated memory in the MemoryArena is freed together when MemoryArena::Reset() is called. (See Section 9.1.1 for an explanation of how the MemoryArena is used to allocate memory to represent BSDFs at intersection points.)

<<Free MemoryArena memory from computing image sample value>>= 
arena.Reset();

Once radiance values for all of the samples in a tile have been computed, the FilmTile is handed off to the Film’s MergeFilmTile() method, which handles adding the tile’s pixel contributions to the final image. Note that the std::move() function is used to transfer ownership of the unique_ptr to MergeFilmTile().

<<Merge image tile into Film>>= 
camera->film->MergeFilmTile(std::move(filmTile));

After all of the loop iterations have finished, the SamplerIntegrator’s Render() method calls the Film’s WriteImage() method to write the image out to a file.

<<Save final image after rendering>>= 
camera->film->WriteImage();

1.3.5 An Integrator for Whitted Ray Tracing

Chapters 14 and 15 include the implementations of many different integrators, based on a variety of algorithms with differing levels of accuracy. Here we will present an integrator based on Whitted’s ray-tracing algorithm. This integrator accurately computes reflected and transmitted light from specular surfaces like glass, mirrors, and water, although it doesn’t account for other types of indirect lighting effects like light bouncing off a wall and illuminating a room. The WhittedIntegrator class can be found in the integrators/whitted.h and integrators/whitted.cpp files in the pbrt distribution.

<<WhittedIntegrator Declarations>>= 
class WhittedIntegrator : public SamplerIntegrator {
  public:
    <<WhittedIntegrator Public Methods>>
    Spectrum Li(const RayDifferential &ray, const Scene &scene,
                Sampler &sampler, MemoryArena &arena, int depth) const;
  private:
    <<WhittedIntegrator Private Data>>
};

<<WhittedIntegrator Public Methods>>= 
WhittedIntegrator(int maxDepth, std::shared_ptr<const Camera> camera,
                  std::shared_ptr<Sampler> sampler)
    : SamplerIntegrator(camera, sampler), maxDepth(maxDepth) { }

The Whitted integrator works by recursively evaluating radiance along reflected and refracted ray directions. It stops the recursion at a predetermined maximum depth, WhittedIntegrator::maxDepth. By default, the maximum recursion depth is five. Without this termination criterion, the recursion might never terminate (imagine, e.g., a hall-of-mirrors scene). This member variable is initialized in the WhittedIntegrator constructor based on parameters set in the scene description file.

<<WhittedIntegrator Private Data>>= 
const int maxDepth;

As a SamplerIntegrator implementation, the WhittedIntegrator must provide an implementation of the Li() method, which returns the radiance arriving at the origin of the given ray. Figure 1.19 summarizes the data flow among the main classes used during integration at surfaces.

Figure 1.19: Class Relationships for Surface Integration. The main rendering loop in the SamplerIntegrator computes a camera ray and passes it to the Li() method, which returns the radiance along that ray arriving at the ray’s origin. After finding the closest intersection, it computes the material properties at the intersection point, representing them in the form of a BSDF. It then uses the Lights in the Scene to determine the illumination there. Together, these give the information needed to compute the radiance reflected back along the ray at the intersection point.

<<WhittedIntegrator Method Definitions>>= 
Spectrum WhittedIntegrator::Li(const RayDifferential &ray, const Scene &scene,
        Sampler &sampler, MemoryArena &arena, int depth) const {
    Spectrum L(0.);
    <<Find closest ray intersection or return background radiance>>
    <<Compute emitted and reflected light at ray intersection point>>
    return L;
}

The first step is to find the first intersection of the ray with the shapes in the scene. The Scene::Intersect() method takes a ray and returns a Boolean value indicating whether it intersected a shape. For rays where an intersection was found, it initializes the provided SurfaceInteraction with geometric information about the intersection.

If no intersection was found, radiance may be carried along the ray due to light sources that don’t have associated geometry. One example of such a light is the InfiniteAreaLight, which can represent illumination from the sky. The Light::Le() method allows such lights to return their radiance along a given ray.

<<Find closest ray intersection or return background radiance>>= 
SurfaceInteraction isect;
if (!scene.Intersect(ray, &isect)) {
    for (const auto &light : scene.lights) L += light->Le(ray);
    return L;
}

Otherwise a valid intersection has been found. The integrator must determine how light is scattered by the surface of the shape at the intersection point, determine how much illumination is arriving from light sources at the point, and apply an approximation to Equation (1.1) to compute how much light is leaving the surface in the viewing direction. Because this integrator ignores the effect of participating media like smoke or fog, the radiance leaving the intersection point is the same as the radiance arriving at the ray’s origin.

<<Compute emitted and reflected light at ray intersection point>>= 
<<Initialize common variables for Whitted integrator>>
<<Compute scattering functions for surface interaction>>
<<Compute emitted light if ray hit an area light source>>
<<Add contribution of each light source>>
if (depth + 1 < maxDepth) {
    <<Trace rays for specular reflection and refraction>>
}

Figure 1.20 shows a few quantities that will be used frequently in the fragments to come. n is the surface normal at the intersection point, and the normalized direction from the hit point back to the ray origin is stored in wo; Cameras are responsible for normalizing the direction component of generated rays, so there’s no need to renormalize it here. Normalized directions are denoted by the symbol ω in this book, and in pbrt’s code we will use wo to represent ω_o, the outgoing direction of scattered light.

Figure 1.20: Geometric Setting for the Whitted Integrator. p is the ray intersection point and n is its surface normal. The direction in which we’d like to compute reflected radiance is ω_o; it is the vector pointing in the opposite direction of the incident ray.

<<Initialize common variables for Whitted integrator>>= 
Normal3f n = isect.shading.n;
Vector3f wo = isect.wo;

If an intersection was found, it’s necessary to determine how the surface’s material scatters light. The ComputeScatteringFunctions() method handles this task, evaluating texture functions to determine surface properties and then initializing a representation of the BSDF (and possibly BSSRDF) at the point. This method generally needs to allocate memory for the objects that constitute this representation; because this memory only needs to be available for the current ray, the MemoryArena is provided for it to use for its allocations.

<<Compute scattering functions for surface interaction>>= 
isect.ComputeScatteringFunctions(ray, arena);

In case the ray happened to hit geometry that is emissive (such as an area light source), the integrator accounts for any emitted radiance by calling the SurfaceInteraction::Le() method. This gives the first term of the light transport equation, Equation (1.1). If the object is not emissive, this method returns a black spectrum.

<<Compute emitted light if ray hit an area light source>>= 
L += isect.Le(wo);

For each light, the integrator calls the Light::Sample_Li() method to compute the radiance from that light falling on the surface at the point being shaded. This method also returns the direction vector from the point being shaded to the light source, which is stored in the variable wi (denoting an incident direction ω_i).

The spectrum returned by this method does not account for the possibility that some other shape may block light from the light and prevent it from reaching the point being shaded. Instead, it returns a VisibilityTester object that can be used to determine if any primitives block the surface point from the light source. This test is done by tracing a shadow ray between the point being shaded and the light to verify that the path is clear. pbrt’s code is organized in this way so that it can avoid tracing the shadow ray unless necessary: this way it can first make sure that the light falling on the surface would be scattered in the direction ω_o if the light isn’t blocked. For example, if the surface is not transmissive, then light arriving at the back side of the surface doesn’t contribute to reflection.

The Sample_Li() method also returns the probability density for the light to have sampled the direction wi in the pdf variable. This value is used for Monte Carlo integration with complex area light sources where light is arriving at the point from many directions even though just one direction is sampled here; for simple lights like point lights, the value of pdf is one. The details of how this probability density is computed and used in rendering are the topic of Chapters 13 and 14; in the end, the light’s contribution must be divided by pdf, so this is done by the implementation here.

If the arriving radiance is nonzero and the BSDF indicates that some of the incident light from the direction ω_i is in fact scattered to the outgoing direction ω_o, then the integrator multiplies the radiance value L_i by the value of the BSDF f and the cosine term. The cosine term is computed using the AbsDot() function, which returns the absolute value of the dot product between two vectors. If the vectors are normalized, as both wi and n are here, this is equal to the absolute value of the cosine of the angle between them (Section 2.2.1).

This product represents the light’s contribution to the light transport equation integral, Equation (1.1), and it is added to the total reflected radiance L. After all lights have been considered, the integrator has computed the total contribution of direct lighting—light that arrives at the surface directly from emissive objects (as opposed to light that has reflected off other objects in the scene before arriving at the point).
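
Written out, the quantity accumulated for each light in the fragment below can be read as the single-sample estimate of that light’s contribution to Equation (1.1); the formal treatment of estimators of this form is the subject of Chapters 13 and 14:

    L += f(p, ω_o, ω_i) L_i(p, ω_i) |cos θ_i| / pdf(ω_i),

where |cos θ_i| = |ω_i · n| is the value that AbsDot(wi, n) computes in the code.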

<<Add contribution of each light source>>= 
for (const auto &light : scene.lights) {
    Vector3f wi;
    Float pdf;
    VisibilityTester visibility;
    Spectrum Li = light->Sample_Li(isect, sampler.Get2D(), &wi, &pdf,
                                   &visibility);
    if (Li.IsBlack() || pdf == 0) continue;
    Spectrum f = isect.bsdf->f(wo, wi);
    if (!f.IsBlack() && visibility.Unoccluded(scene))
        L += f * Li * AbsDot(wi, n) / pdf;
}

This integrator also handles light scattered by perfectly specular surfaces like mirrors or glass. It is fairly simple to use properties of mirrors to find the reflected directions (Figure 1.21) and to use Snell’s law to find the transmitted directions (Section 8.2). The integrator can then recursively follow the appropriate ray in the new direction and add its contribution to the reflected radiance at the point originally seen from the camera. The computation of the effect of specular reflection and transmission is handled in separate utility methods so these functions can easily be reused by other SamplerIntegrator implementations.

Figure 1.21: Reflected rays due to perfect specular reflection make the same angle with the surface normal as the incident ray.

<<Trace rays for specular reflection and refraction>>= 
L += SpecularReflect(ray, isect, scene, sampler, arena, depth);
L += SpecularTransmit(ray, isect, scene, sampler, arena, depth);

<<SamplerIntegrator Method Definitions>>+= 
Spectrum SamplerIntegrator::SpecularReflect(const RayDifferential &ray,
        const SurfaceInteraction &isect, const Scene &scene,
        Sampler &sampler, MemoryArena &arena, int depth) const {
    <<Compute specular reflection direction wi and BSDF value>>
    <<Return contribution of specular reflection>>
}

In the SpecularReflect() and SpecularTransmit() methods, the BSDF::Sample_f() method returns an incident ray direction for a given outgoing direction and a given mode of light scattering. This method is one of the foundations of the Monte Carlo light transport algorithms that will be the subject of the last few chapters of this book. Here, we will use it to find only outgoing directions corresponding to perfect specular reflection or refraction, using flags to indicate to BSDF::Sample_f() that other types of reflection should be ignored. Although BSDF::Sample_f() can sample random directions leaving the surface for probabilistic integration algorithms, the randomness is constrained to be consistent with the BSDF’s scattering properties. In the case of perfect specular reflection or refraction, only one direction is possible, so there is no randomness at all.

The calls to BSDF::Sample_f() in these functions initialize wi with the chosen direction and return the BSDF’s value for the directions (ω_o, ω_i). If the value of the BSDF is nonzero, the integrator uses the SamplerIntegrator::Li() method to get the incoming radiance along ω_i, which in this case will in turn resolve to the WhittedIntegrator::Li() method.

<<Compute specular reflection direction wi and BSDF value>>= 
Vector3f wo = isect.wo, wi;
Float pdf;
BxDFType type = BxDFType(BSDF_REFLECTION | BSDF_SPECULAR);
Spectrum f = isect.bsdf->Sample_f(wo, &wi, sampler.Get2D(), &pdf, type);

In order to use ray differentials to antialias textures that are seen in reflections or refractions, it is necessary to know how reflection and transmission affect the screen-space footprint of rays. The fragments that compute the ray differentials for these rays are defined later, in Section 10.1.3. Given the fully initialized ray differential, a recursive call to Li() provides incident radiance, which is scaled by the value of the BSDF and the cosine term and divided by the PDF, as per Equation (1.1).

<<Return contribution of specular reflection>>= 
const Normal3f &ns = isect.shading.n;
if (pdf > 0 && !f.IsBlack() && AbsDot(wi, ns) != 0) {
    <<Compute ray differential rd for specular reflection>>
    return f * Li(rd, scene, sampler, arena, depth + 1) * AbsDot(wi, ns) / pdf;
} else
    return Spectrum(0.f);

The SpecularTransmit() method is essentially the same as SpecularReflect() but just requests the BSDF_TRANSMISSION specular component of the BSDF, if any, rather than the BSDF_REFLECTION component used by SpecularReflect(). We therefore won’t include its implementation in the text of the book here.