## 16.3 Bidirectional Path Tracing

The path-tracing algorithm described in Section 14.5 was the first fully general light transport algorithm in computer graphics, handling a wide variety of geometric representations, lights, and BSDF models. Although it works well for many scenes, path tracing can exhibit high variance under particularly tricky lighting conditions. For example, consider the setting shown in Figure 16.12: a light source is illuminating a small area on the ceiling such that the rest of the room is only illuminated by indirect lighting bouncing from that area. If we only trace paths starting from the camera, we will almost never happen to sample a path vertex in the illuminated region on the ceiling before we trace a shadow ray to the light. Most of the paths will have no contribution, while a few of them, the ones that happen to hit the small region on the ceiling, will have a large contribution. The resulting image will have high variance.

Difficult lighting settings like this can be handled more effectively by
constructing paths that start from the camera on one end and from the light on
the other end and are connected in the middle with a visibility ray. The
resulting *bidirectional path-tracing* algorithm (henceforth referred to as BDPT)
is a generalization of the standard path-tracing algorithm that can be much
more efficient. In contrast to
stochastic progressive photon mapping, BDPT is unbiased and does not blur the
scene illumination.

BDPT first incrementally constructs a *camera subpath* starting with a
point $p_0$ on the camera. The next vertex, $p_1$, is found by
computing the first intersection along the camera ray. Another vertex is
found by sampling the BSDF at $p_1$ and tracing a ray to find a point
$p_2$, and so forth. The resulting path of $t$ vertices is
$p_0, p_1, \ldots, p_{t-1}$. Following the same process starting from a
point $q_0$ on a light source (and then using adjoint BSDFs at each vertex)
creates a *light subpath* of $s$ vertices, $q_0, q_1, \ldots, q_{s-1}$.

Given the two subpaths, a complete light-carrying path can be found by connecting a pair of vertices, one from each subpath:

$$\bar{p} = q_0, \ldots, q_{s'-1}, p_{t'-1}, \ldots, p_0,$$

where $s' \le s$ and $t' \le t$. (Our notation orders the vertices in $\bar{p}$ according to the propagation of light.) If a visibility ray between $q_{s'-1}$ and $p_{t'-1}$ is unoccluded, then the path contribution can be found by evaluating the BSDFs at the connecting vertices (see Figure 16.13). More generally, these subpaths can be combined using the theory of path-space integration from Section 14.4.4.
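The ordering convention can be made concrete with a small sketch; the `std::string` labels stand in for `Vertex` data, and `AssemblePath` is a hypothetical helper rather than part of pbrt:

```cpp
#include <string>
#include <vector>

// Assemble a complete path from a light subpath q[0..s-1] and a camera
// subpath p[0..t-1], ordered along the direction of light propagation:
// q_0, ..., q_{s-1}, p_{t-1}, ..., p_0.
std::vector<std::string> AssemblePath(const std::vector<std::string> &q, int s,
                                      const std::vector<std::string> &p, int t) {
    std::vector<std::string> path(q.begin(), q.begin() + s);
    for (int i = t - 1; i >= 0; --i)
        path.push_back(p[i]);
    return path;
}
```

Note how the camera subpath is appended in reverse, since its vertices were generated in the direction opposite to the light flow.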

Superficially, this approach bears some resemblance to the two phases of the photon mapper; a key difference is that BDPT computes an unbiased estimate without density estimation. The two approaches also differ significantly in how they handle the fact that a given light path could have been constructed in multiple different ways.

There are three refinements to the algorithm that we’ve described so far that improve its performance in practice. The first two are analogous to improvements made to path tracing, and the third is a powerful variance reduction technique.

- First, subpaths can be reused: given a camera subpath and a light subpath, transport can be evaluated over all of the paths obtained by connecting the various combinations of prefixes of the two subpaths. If the two subpaths have $s$ and $t$ vertices, respectively, then a variety of unique paths can be constructed from them, ranging in length from 2 to $s + t$ vertices long. Figure 16.14 illustrates these strategies for the case of direct illumination.
- The second optimization is not to try to connect paths that use only a single vertex from one of the subpaths. It is preferable to generate those paths using optimized sampling routines provided by the camera and light sources; for light sources, these are the direct lighting techniques that were introduced in Section 14.3.
- The third optimization weights the various strategies for generating paths of a given length more carefully than just averaging all of the strategies that construct paths of the same length. BDPT’s approach of connecting subpaths means that a path containing $n$ scattering events can be generated by a number of different strategies, one for each way of splitting the path into light and camera subpaths. We can expect that some strategies will be a good choice for producing certain types of paths while being quite poor for others. Multiple importance sampling can be applied to combine the set of connection strategies into a single estimator that uses each strategy where it is best. This application of MIS is crucial to BDPT’s efficiency.
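The counting can be made concrete with a toy helper (not pbrt code): a complete path with $n$ scattering events has $n + 2$ vertices including the endpoints on the light and the camera, and each split point between the subpaths yields one strategy. Requiring $t \ge 1$ mirrors the restriction, discussed below, that light subpaths never intersect the camera lens:

```cpp
// Count the ways a complete path with n scattering events (n + 2 vertices,
// counting the endpoints on the light and the camera) can be split into a
// light subpath of s vertices and a camera subpath of t = n + 2 - s vertices.
int NumStrategies(int n, bool allowCameraIntersection = false) {
    int k = n + 2;  // total number of path vertices
    int count = 0;
    for (int s = 0; s <= k; ++s) {
        int t = k - s;
        if (t == 0 && !allowCameraIntersection)
            continue;  // light subpath would have to hit the camera lens
        ++count;
    }
    return count;
}
```

For example, a direct-lighting path ($n = 1$) admits three supported strategies: $(s, t) = (0, 3)$, $(1, 2)$, and $(2, 1)$.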

One of BDPT’s connection strategies is to directly connect light subpath
vertices to the camera: these paths almost always create a contribution
whose raster coordinates differ from the current pixel being rendered,
which violates the expectations of the `SamplerIntegrator`
interface. Thus, `BDPTIntegrator` derives from the more general
`Integrator` interface so that it can have more flexibility in how it
updates the image. Its implementation is in the files
`integrators/bdpt.h` and
`integrators/bdpt.cpp`.

The `BDPTIntegrator` constructor, which is straightforward and not
included here, initializes member variables with the provided camera,
sampler, and the maximum path depth.

All subpath creation and connection steps are performed in a nested parallel
loop over pixels in `BDPTIntegrator::Render()`. The overall structure
of this method is very similar to `SamplerIntegrator::Render()`:

- The image is subdivided into tiles of pixels, which are processed in parallel.
- For each tile, the method declares a `MemoryArena`, `arena`, and acquires a `Sampler` instance from `BDPTIntegrator::sampler` via a call to `Sampler::Clone()`.
- It then loops over the pixels in each tile, taking samples from each one until `Sampler::StartNextSample()` returns `false`, at which point it advances to the next pixel.

We won’t include this code here, as the details should be familiar now. Instead, we’ll move forward to the fragment responsible for generating and connecting the subpaths for a pixel sample.

Generating a single BDPT sample involves sampling a pixel position in the image, generating camera and light subpaths, and then connecting them using a variety of specialized connection strategies.

<<Generate a single sample using BDPT>>=

The `Vertex` class, which will be introduced in
Section 16.3.1, represents a vertex along a subpath. We
start by allocating two arrays for vertices for the two subpaths. In
addition to a vertex on a surface, a `Vertex` can represent a vertex
for a scattering event in participating media, a vertex on a light source,
or a vertex on the camera lens.

For each subpath, one more vertex than the maximum path length must be allocated to store the starting vertex on the light or camera. Camera subpaths get one additional vertex beyond that, which allows camera paths to randomly intersect light sources; this strategy is important for rendering area lights seen via perfectly specular reflections, for example. (The corresponding strategy of allowing a light subpath to randomly intersect the camera lens is less useful in practice.)
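The resulting allocation sizes can be sketched as follows (the function names are illustrative, not pbrt's):

```cpp
// The light subpath stores at most one endpoint vertex plus maxDepth
// scattering vertices; the camera subpath gets one more so that camera
// paths may additionally intersect a light source directly.
int LightSubpathCapacity(int maxDepth) { return maxDepth + 1; }
int CameraSubpathCapacity(int maxDepth) { return maxDepth + 2; }
```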

The `GenerateCameraSubpath()` and `GenerateLightSubpath()`
functions, which generate these two subpaths, will be defined in
Section 16.3.2, after some preliminaries
related to the `Vertex` representation.

After the subpaths have been generated, a nested `for` loop iterates
over all pairs of vertices from the two subpaths and attempts to connect
them. In these loops, $s$ and $t$ correspond to the number of vertices to
use from the light and camera subpath, respectively; an index of 0 means that no
scattering events are used from the corresponding subpath. In our implementation, this strategy
is only supported for the $s = 0$ case, which furthermore requires
`cameraVertices[t]` to be a surface intersection involving a light source.
Because the dual case, intersecting a camera with $t = 0$, is not supported, the
loop over camera subpaths starts at $t = 1$.
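The loop structure can be sketched as follows, assuming a caller-supplied `connect` callback and a `maxDepth` bound; the skipped $s = t = 1$ case is explained next:

```cpp
#include <functional>

// Iterate over all supported (s, t) connection strategies for subpaths with
// nCamera and nLight vertices, skipping s = t = 1 and out-of-range depths.
int ForEachStrategy(int nCamera, int nLight, int maxDepth,
                    const std::function<void(int s, int t)> &connect) {
    int attempts = 0;
    for (int t = 1; t <= nCamera; ++t) {
        for (int s = 0; s <= nLight; ++s) {
            int depth = t + s - 2;
            if ((s == 1 && t == 1) || depth < 0 || depth > maxDepth)
                continue;
            connect(s, t);
            ++attempts;
        }
    }
    return attempts;
}
```

With two vertices on each subpath, for instance, four strategies are attempted: $(2, 1)$, $(0, 2)$, $(1, 2)$, and $(2, 2)$.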

A subpath length of 1 ($s = 1$ or $t = 1$) corresponds to connecting a point on the camera lens
or a light source to the other subpath. For light endpoints,
this is identical to the standard light sampling approach provided by
`Light::Sample_Li()` and first used in
Section 14.3.1; our implementation reuses this
existing functionality. For camera endpoints, we will rely on the symmetric
analog `Camera::Sample_Wi()`. Since `Camera::Sample_Wi()` and
`Light::Sample_Li()` cannot both be used at the same time, we skip
the $s = t = 1$ case.

<<Execute the $(s, t)$ connection strategy and update `L`>>

The `ConnectBDPT()` function attempts to connect the two subpaths with
the given number of vertices; it returns the weighted contribution of the
radiance carried along the resulting path. (It will be defined shortly, in
Section 16.3.3.) In most cases, this contribution is
accumulated into the variable `L` that will be provided to the
`FilmTile` after all of the subpath connections have been attempted.
However, the $t = 1$ strategy connects a vertex of the light subpath
directly to the camera and thus produces different raster positions in
every iteration; in this case, the implementation calls
`Film::AddSplat()` to immediately record its sample contribution.

<<Execute the $(s, t)$ connection strategy and update `L`>>=

### 16.3.1 Vertex Abstraction Layer

A general advantage of path-space rendering techniques is their ability to create paths in a large number of different ways, but this characteristic often leads to cluttered and hard-to-debug implementations. Establishing connections between pairs of vertices on the light and camera subpaths is a simple operation when only surface interactions are involved but quickly becomes unwieldy if one or both of the vertices may represent a scattering event in participating media, for example.

Instead of an inconveniently large number of conditional statements in the
core BDPT code, we’ll define the `Vertex` type, which can represent
any kind of path vertex. All of the necessary conditional logic to handle
various cases that occur throughout the BDPT implementation will be
encapsulated in its methods.


Altogether, four different types of path vertices are supported in
`pbrt`.

The `beta` member variable is analogous to the $\beta$
variable in the volumetric path tracer
(Section 15.3.1) or the weight carried by particles
in the `SPPMIntegrator`: it contains the product of the
BSDF or phase function values, transmittances, and cosine terms for the
vertices in the path generated so far, divided by their respective sampling
PDFs. For the light subpath, `beta` also includes the emitted radiance divided by
the density of the emission position and direction. For the camera subpath,
radiance is replaced by importance.
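For a single bounce, the accumulation has the following shape (a scalar stand-in for pbrt's `Spectrum` arithmetic; the function name is illustrative):

```cpp
#include <cmath>

// One step of throughput accumulation: multiply by the scattering function
// value and |cos theta| and divide by the direction sampling PDF.
double UpdateBeta(double beta, double f, double cosTheta, double pdf) {
    if (pdf == 0) return 0;
    return beta * f * std::abs(cosTheta) / pdf;
}
```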

Instances of various types of `Interaction`s represent type-specific
data about the vertex. This information is arranged as a space-efficient
C++ union since only one of the entries is used at a time.

`EndpointInteraction` is a new interaction implementation that is used only by
BDPT. It records the position of a path endpoint—i.e., a position on a
light source or the lens of the camera—and stores a pointer to the camera
or light in question.

There are multiple constructors that initialize the
`EndpointInteraction` contents using a pointer and either an existing
`Interaction` or a sampled ray. For brevity, we only show the
constructors for light endpoints.

A range of static helper functions create `Vertex` instances for the various types of
path vertices. We’ll only include their declarations here, as their
implementations are all
straightforward. We could instead have provided a range of overloaded
constructors that took various `Interaction` types as parameters, but
we think that having the name of the type of vertex being created explicit
in a function call makes the following code easier to read.

It is often necessary to access the core fields in `Interaction` that
are common to all types of vertices; the `Vertex::GetInteraction()`
method extracts this shared part. Since `Vertex::mi`,
`Vertex::si`, and `Vertex::ei` all derive from `Interaction`
and are members of the same union, their base `Interaction`s are
at the same location in memory, and the conditional logic below should be
optimized away by the compiler.
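The layout can be sketched with simplified stand-ins for pbrt's `Interaction` types (a minimal sketch, not the actual class definitions):

```cpp
#include <cstring>

struct Interaction { float p[3]; };
struct SurfaceInteraction : Interaction { float uv[2]; };
struct MediumInteraction : Interaction { float phaseG; };

// Minimal sketch of the Vertex union: both members begin with the common
// Interaction base, so GetInteraction() returns the same bytes either way.
struct VertexSketch {
    enum class Type { Surface, Medium } type;
    union {
        SurfaceInteraction si;
        MediumInteraction mi;
    };
    VertexSketch() : type(Type::Surface) { std::memset(&si, 0, sizeof(si)); }
    const Interaction &GetInteraction() const {
        if (type == Type::Medium) return mi;
        return si;
    }
};

// Hypothetical helper used to exercise the accessor.
float FirstCoord() {
    VertexSketch v;
    v.si.p[0] = 1.0f;
    return v.GetInteraction().p[0];
}
```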

The convenience function `Vertex::p()` returns the vertex
position. We omit definitions of
`Vertex::time()`,
`Vertex::ng()`, and
`Vertex::ns()`,
which are defined analogously, and return the time, geometric normal, and
shading normal, respectively, of the vertex.

The `delta` attribute is only used by surface interactions and records
whether a Dirac delta function was sampled (e.g., when light is scattered by
a perfectly specular material).

A simple way to find out whether a vertex (including endpoints) is located on a
surface is to check whether `Vertex::ng()` returns a nonzero
result.

`Vertex::f()` evaluates the portion of the measurement equation,
(16.1), associated with a vertex. This method only needs to
handle surface and medium vertices since the BDPT implementation only
invokes it in those cases. Note that the next vertex in the path is the only one
passed to this method: though the direction to the predecessor vertex is
needed to evaluate the BRDF or phase function, this information is already
available in `Interaction::wo` from when the `Vertex` was first
created.

The `Vertex::IsConnectible()` method returns a Boolean value that
indicates whether a connection
strategy involving the current vertex can in principle succeed. If, for
example, the vertex is a surface interaction whose BSDF only consists of
Dirac delta components, then we can never successfully connect it to a
subpath vertex in the other path: there’s zero probability of choosing a
direction where the delta distribution is nonzero. The implementation assumes
that medium and camera vertices are always connectible (the latter
assumption would have to be modified if support for orthographic cameras is added).

A few helper methods are useful for working with lights—these are
necessary to deal with the considerable variety of light sources
supported by `pbrt`.

For instance, when the `Primitive` underlying a surface interaction
vertex is itself an area light, the vertex can take on different roles
depending on the BDPT connection strategy: it can be re-interpreted as a
light source and used as a path endpoint, or it can serve as a normal
scattering event to generate paths of greater length. The
`Vertex::IsLight()` method therefore provides a comprehensive test for
whether a vertex can be interpreted as a light source.

Light sources that have an emission profile that contains a Dirac delta
distribution must be treated specially in the computation of multiple
importance sampling weights; `Vertex::IsDeltaLight()` checks for this
case.

The `Vertex::IsInfiniteLight()` method indicates whether a vertex is
associated with an infinite area light. Such vertices can be created by
sampling an emitted ray from an `InfiniteAreaLight` or by tracing a
ray from the camera that escapes into the environment. In the latter case,
the vertex is marked with the type `VertexType::Light`, but
`ei.light` stores `nullptr` since no specific light source was
intersected.

Finally, `Le()` can be used to find emitted radiance from
an intersected light source toward another vertex.

#### Probability Densities

BDPT’s multiple importance sampling code requires detailed information
about the probability density of light-carrying paths with respect to a
range of different path sampling strategies. It is crucial that these
densities are expressed in the same probability measure so that ratios of
their probabilities are meaningful. The implementation here uses the
*area product measure* for path probabilities. It expresses the
density of a path as the product of the densities of its individual
vertices, which are in turn given in a simple common (and consistent)
measure: *probability per unit area*.
This is the same measure as was initially used to derive the surface form
of the LTE in Section 14.4.3.

Recall from Section 5.5 that the Jacobian of
the mapping from solid angles to surface area involves the inverse squared
distance and the cosine of the angle between the geometric normal at
`next` and the direction $\omega$ between the two vertices (assuming `next` is a surface vertex; if it
is a point in a participating medium, there is no cosine term
(Section 15.1.1)). The
`ConvertDensity()` method returns the product of this Jacobian
(computed from the vertex attributes) and the `pdf` parameter,
which should express a solid angle density at the vertex.
(Infinite area light sources need special handling here; this case is
discussed later, in Section 16.3.5.)
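A sketch of this conversion with minimal vector types follows (pbrt's actual implementation operates on `Vertex` objects; the names here are illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static double Dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Convert a solid angle density at vertex p into an area density at pNext by
// multiplying with the Jacobian |cos theta| / d^2; the cosine is dropped if
// the next vertex lies in a medium rather than on a surface.
double ConvertDensitySketch(double pdfSolidAngle, const Vec3 &p,
                            const Vec3 &pNext, const Vec3 &nNext,
                            bool nextOnSurface) {
    Vec3 w{pNext.x - p.x, pNext.y - p.y, pNext.z - p.z};
    double dist2 = Dot(w, w);
    if (dist2 == 0) return 0;
    double pdf = pdfSolidAngle / dist2;
    if (nextOnSurface) {
        double invDist = 1 / std::sqrt(dist2);
        pdf *= std::abs(Dot(nNext, Vec3{w.x * invDist, w.y * invDist, w.z * invDist}));
    }
    return pdf;
}
```

For two surface vertices two units apart with the receiving normal facing the sender, a unit solid angle density becomes an area density of $1/4$.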

<<Return solid angle density if `next` is an infinite area light>>

Each vertex stores two densities: the first, `pdfFwd`, records the *forward* density
of the current vertex, which is the probability per unit area of the
current vertex as generated by the path sampling algorithm.
The second density, `pdfRev`, is the hypothetical probability density of the
vertex if the direction of light transport were reversed, that is, if
radiance transport were used in place of importance transport for the camera
path and vice versa for the light path. This reverse density will be
crucial for computing MIS weights in Section 16.3.4.

The `Vertex::Pdf()` method returns the probability per unit area of the
sampling technique associated with a given vertex. Given a preceding vertex
`prev`, it evaluates the density for sampling the vertex `next` for rays
leaving the vertex `*this`. The `prev` argument may be
equal to `nullptr` for path endpoints (i.e., cameras or light sources), which
have no predecessor. Light sources require some extra care and are handled
separately via the `PdfLight()` method that will be discussed shortly.

<<Return probability per unit area at vertex `next`>>

For all other vertex types, the function first computes normalized directions to the preceding and next vertex (if present).

Depending on the vertex type, `Pdf()` invokes the appropriate PDF method and
stores the probability per unit solid angle for sampling the direction to
`next` in the variable `pdf`.

Finally, the solid angle density is converted to a probability per unit
area at `next`.

<<Return probability per unit area at vertex `next`>>=

Light-emitting vertices can be created in two different ways: by using a
sampling routine like `Light::Sample_Le()`, or by intersecting an
emissive surface via ray tracing. To be able to compare these different
strategies as part of a multiple importance sampling scheme, it is
necessary to know the corresponding probability per unit area for a light
vertex. This task is handled by the `PdfLight()` method.

Its definition resembles that of `Vertex::Pdf()`: it computes the
direction from the current vertex to the provided vertex and invokes
`Light::Pdf_Le()` to retrieve the solid angle density of the underlying
sampling strategy, which is subsequently converted into a density per unit
area at `v`. In contrast to `Vertex::Pdf()`, this method also
treats surface vertices located on area lights as if they were light source
vertices. Once more, there is a special case for infinite area lights,
which we postpone until Section 16.3.5.

<<Retrieve pointer `light` to the light source at the vertex>>

Depending on the vertex type, the pointer to the light source implementation must be obtained from one of two different locations.

<<Retrieve pointer `light` to the light source at the vertex>>=

By symmetry, we would now expect a dual routine `Vertex::PdfCamera()`
that applies to camera endpoints. However, cameras in `pbrt` are never
represented using explicit geometry: thus, they cannot be reached by ray
intersections, which eliminates the need for a dedicated query function.
If desired, a perfectly symmetric implementation could be achieved by
instantiating scene geometry that is tagged with an “area camera”
analogous to area lights. This would increase the set of possible BDPT
connection strategies, though their benefit is negligible in most scenes
due to the low probability of intersecting the camera.

Note that the `Pdf()` and `PdfLight()` methods use the
directional probability density of the importance strategy implemented at
the current vertex as measured at the location of another given
vertex. However, this is not enough to fully characterize the behavior of
path endpoints, whose sampling routines generate rays from a 4D distribution. An
additional `PdfLightOrigin()` method fills the gap by providing
information about the spatial distribution of samples on the light sources
themselves. For the same reason as before, a dedicated
`PdfCameraOrigin()` method for camera endpoints is not needed.

<<Retrieve pointer `light` to the light source at the vertex>>

<<Compute the discrete probability of sampling `light`, `pdfChoice`>>

To determine the discrete probability of choosing `light` among the
available light sources, we must find the pointer to the light source and
look up the corresponding entry in `lightDistr`. If there are very
many light sources, the linear search here will be inefficient; in that
case, this computation could be implemented more efficiently by storing
this probability directly in the light source class.
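The lookup has the following shape (a sketch with power-proportional sampling assumed and illustrative names, not pbrt's `Distribution1D` interface):

```cpp
#include <vector>

// Linearly search the scene's light list for the given light and return its
// share of the sampling distribution.
struct LightRef { int id; double power; };

double PdfChoice(const std::vector<LightRef> &lights, int lightId) {
    double total = 0, chosen = 0;
    for (const LightRef &l : lights) {
        total += l.power;
        if (l.id == lightId) chosen = l.power;  // the linear search
    }
    return total > 0 ? chosen / total : 0;
}
```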

<<Compute the discrete probability of sampling `light`, `pdfChoice`>>=

### 16.3.2 Generating the Camera and Light Subpaths

A symmetric pair of functions, `GenerateCameraSubpath()` and
`GenerateLightSubpath()`, generates the two corresponding types of
subpaths. Both do some initial work to get the path started and then call
out to a second function, `RandomWalk()`, which takes care of sampling
the following vertices and initializing the `path` array. Both of
these functions return the number of vertices in the subpath.

A camera path starts with a camera ray from
`Camera::GenerateRayDifferential()`. As in the `SamplerIntegrator`,
the ray’s differentials are scaled so that they reflect the actual pixel
sampling density.

The vertex at position `path[0]` is initialized with a special
endpoint vertex on the camera
lens (for cameras with finite apertures) or pinhole.
The `RandomWalk()` function then
takes care of generating the rest of the vertices. `TransportMode`
reflects the quantity that is carried back to the origin of the
path—hence `TransportMode::Radiance` is used here. Since the first
element of `path` was already used for the endpoint vertex,
`RandomWalk()` is invoked such that it writes sampled vertices starting
at position `path[1]` with a maximum depth of `maxDepth - 1`.
The function returns the total number of sampled vertices.
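The overall structure can be illustrated with a toy sketch in which strings stand in for `Vertex` objects and `RandomWalkStub` replaces the real sampling loop (both names are hypothetical):

```cpp
#include <string>
#include <vector>

// Stand-in for RandomWalk(): append up to maxDepth scattering vertices and
// return how many were added.
int RandomWalkStub(std::vector<std::string> &path, int maxDepth) {
    for (int i = 0; i < maxDepth; ++i) path.push_back("scattering");
    return maxDepth;
}

// Sketch of GenerateCameraSubpath()'s structure: place the endpoint in
// path[0], then let the random walk append up to maxDepth - 1 more vertices.
int GenerateCameraSubpathSketch(std::vector<std::string> &path, int maxDepth) {
    if (maxDepth == 0) return 0;
    path.push_back("camera endpoint");              // path[0]
    return RandomWalkStub(path, maxDepth - 1) + 1;  // total vertex count
}

// Hypothetical helper used to exercise the sketch.
int CountFor(int maxDepth) {
    std::vector<std::string> p;
    return GenerateCameraSubpathSketch(p, maxDepth);
}
```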

The function `GenerateLightSubpath()` works in a similar fashion, with
some minor differences corresponding to the fact that the path starts from
a light source.

<<Set spatial density of `path[1]` for infinite area light>>

<<Set spatial density of `path[0]` for infinite area light>>

As usual in this integrator, a specific light is chosen by sampling from
the provided `Distribution1D`. Next, an emitted ray is sampled via
the light’s implementation of `Light::Sample_Le()`.

The `beta` variable is initialized with the associated sampling
weight, which is given by the emitted radiance multiplied by a cosine
factor from the light transport equation and divided by the probability of
the sample in ray-space. This step is analogous to
Equation (16.15) and the approach implemented in the
fragment <<Generate `photonRay` from light source and
initialize `beta`>> from the particle tracing step of the SPPM
integrator.
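The initialization has the following shape (scalar stand-ins for pbrt's `Spectrum`; the function name is illustrative):

```cpp
#include <cmath>

// Initial weight of the light subpath: emitted radiance times |cos theta|
// between the emitted ray and the light normal, over the joint density of
// choosing the light and sampling the ray's origin and direction.
double InitialLightBeta(double Le, double cosTheta, double pdfChoice,
                        double pdfPos, double pdfDir) {
    double pdf = pdfChoice * pdfPos * pdfDir;
    if (pdf == 0) return 0;
    return Le * std::abs(cosTheta) / pdf;
}
```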

<<Set spatial density of `path[1]` for infinite area light>>=

<<Set spatial density of `path[0]` for infinite area light>>=

`RandomWalk()` traces paths starting at an initial vertex. It assumes
that a position and an outgoing direction at the corresponding path
endpoint were previously sampled and that this information is provided via
the input arguments `ray`, a path throughput weight `beta`, and
a parameter `pdf` that gives the probability of sampling the
ray per unit solid angle of `ray.d`. The parameter `mode` selects
between importance and radiance transport
(Section 16.1). The path vertices are stored in
the provided `path` array up to a maximum number of `maxDepth`
vertices, and the actual number of generated vertices is returned at the end.

<<Attempt to create the next subpath vertex in `path`>>

The two variables `pdfFwd` and `pdfRev` are updated during
every loop iteration and satisfy the following invariants: at the beginning of
each iteration, `pdfFwd` records the probability per unit solid angle of
the sampled ray direction `ray.d`. On the other hand, `pdfRev`
denotes the *reverse* probability at the end of each iteration—that is,
the density of the opposite light transport mode per unit solid angle along the
same ray segment.
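A toy sketch of this bookkeeping (simplified: densities are stored directly rather than converted to area measure, and the names are illustrative):

```cpp
#include <vector>

// Each new vertex records the forward density of the ray that produced it,
// while the reverse density computed in the same iteration is written back
// onto the preceding vertex.
struct VertexPdfs { double pdfFwd = 0, pdfRev = 0; };

void AppendVertex(std::vector<VertexPdfs> &path, double pdfFwd,
                  double pdfRevAtPrev) {
    if (!path.empty()) path.back().pdfRev = pdfRevAtPrev;
    VertexPdfs v;
    v.pdfFwd = pdfFwd;
    path.push_back(v);
}

// Hypothetical helper: after two steps, the first vertex holds the reverse
// density produced while sampling the second.
double RevOfFirst() {
    std::vector<VertexPdfs> p;
    AppendVertex(p, 0.5, 0.0);
    AppendVertex(p, 0.25, 0.125);
    return p[0].pdfRev;
}
```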

<<Attempt to create the next subpath vertex in `path`>>=

<<Record medium interaction in `path` and compute forward density>>

<<Compute scattering functions for `mode` and skip over medium boundaries>>

<<Initialize `vertex` with surface intersection information>>

The loop body begins by intersecting the current ray against the scene
geometry. If the ray is passing through a participating medium, the call
to `Medium::Sample()` possibly samples a scattering event between the
ray and the surface. It returns the medium sampling weight, which is
incorporated into the path contribution weight `beta`.

When `Medium::Sample()` generates a medium scattering event, the
corresponding `Interaction` is stored in a `Vertex`
and appended at the end of the `path` array.
The `Vertex::CreateMedium()` method converts the solid angle
density in `pdfFwd` to a probability
per unit area and stores the result in `Vertex::pdfFwd`.

<<Record medium interaction in `path` and compute forward density>>=

If the maximum path depth has not yet been exceeded, a scattered direction is sampled from the phase function and used to spawn a new ray that will be processed by the next loop iteration.

At this point, we could evaluate the phase function with swapped arguments
to obtain the sampling density at the preceding vertex for a hypothetical
random walk that would have produced the same scattering interactions in
reverse order. Since phase functions are generally symmetric with respect
to their arguments, instead we simply reuse the value computed for `pdfFwd`.

For surfaces, the overall structure is similar, though some extra care is required to deal with non-symmetric scattering and surfaces that mark transitions between media.

<<Compute scattering functions for `mode` and skip over medium boundaries>>

<<Initialize `vertex` with surface intersection information>>

The fragment <<Capture escaped rays when tracing from the camera>>
is necessary to support infinite area lights. It will be discussed in
Section 16.3.5. The following fragment, <<Compute
scattering functions for `mode` and skip over medium boundaries>>, is
analogous to <<Compute scattering functions and skip over medium
boundaries>> from the basic path tracer except that scattering functions are
requested for the current light transport mode (radiance or importance
transport) using the `mode` parameter.

<<Compute scattering functions for `mode` and skip over medium boundaries>>=

Given a valid intersection, the current path vertex is initialized with the
corresponding surface intersection vertex,
where, again, the solid angle density
`pdfFwd` is converted to an area density before being stored in
`Vertex::pdfFwd`.

<<Initialize `vertex` with surface intersection information>>=

If the maximum path depth has not yet been exceeded,
a scattered direction is sampled from the BSDF and the path contribution in
`beta` is updated. For the surface case, we can’t generally assume that
`BSDF::Pdf()` is symmetric; hence we must re-evaluate the sampling density with
swapped arguments to obtain `pdfRev`. In case of a specular sampling
event, we mark the vertex using the flag `Vertex::delta` and set
`pdfFwd` and `pdfRev` to 0 to indicate that the underlying
interaction has no continuous density function. Finally, we correct for
non-symmetry related to the use of shading normals (see
Section 16.1.3 for details).

The loop wraps up by converting the reverse density `pdfRev` to a
probability per unit area and storing it in the `Vertex` data
structure of the preceding vertex.

### 16.3.3 Subpath Connections

The `ConnectBDPT()` function takes the light and camera subpaths and
the number of vertices $s$ and $t$ to use from each one, respectively.
It returns the corresponding
strategy’s contribution.

The connection strategy with $t = 1$ uses only a
single camera vertex, the camera’s position; the raster position of the
path’s contribution is then based on which pixel the last vertex of the
light subpath is visible in (if any). In this case,
the resulting position is returned via the `pRaster`
argument.

<<Perform connection and write contribution to `L`>>

A number of cases must be considered when handling connections; special
handling is needed for those involving short subpaths with only zero or one
vertex. Some strategies dynamically sample an additional vertex, which is
stored in the temporary variable `sampled`.

<<Perform connection and write contribution to `L`>>=

The first case ($s = 0$) applies when no vertices on the light subpath are
used and can only succeed when the camera subpath is already a complete path, that is, when vertex
$p_{t-1}$ can be interpreted as a light source. In this case, `L` is set to the
product of the path throughput weight and the emission at $p_{t-1}$.

The second case applies when $t = 1$, that is, when a prefix of the light subpath
is directly connected to the camera (Figure 16.15). To
permit optimized importance sampling strategies analogous to direct
illumination routines for light sources, we will ignore the actual camera
vertex and sample a new one using
`Camera::Sample_Wi()`; this optimization corresponds to the second
bullet listed at the beginning of Section 16.3.
This type of connection can only succeed if the light subpath vertex
$q_{s-1}$ supports sampled connections; otherwise the BSDF at $q_{s-1}$
will certainly return 0 and there’s no reason to attempt
a connection.

`Camera::Sample_Wi()` samples a point on the camera lens corresponding to a ray leaving the camera toward the light subpath vertex $q_{s-1}$ (if there is such a ray that intersects the film).

<<Initialize dynamically sampled vertex and `L` for $t = 1$ case>>

If the camera vertex was generated successfully, `pRaster` is initialized
and `vis` holds the connection segment. Following Equation (16.1),
we can compute
the final contribution as the product of the subpath weights, the
transmittance over the connecting segment, the BRDF or phase function, and
a cosine factor when $q_{s-1}$ is a surface vertex.
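The shape of this product can be sketched with scalar stand-ins (the name is illustrative; the transmittance and cosine factors would multiply in as well):

```cpp
// t = 1 contribution: light subpath weight times the scattering function
// toward the sampled camera vertex times the camera weight Wi / pdf.
double ConnectToCameraSketch(double lightBeta, double f, double Wi, double pdf) {
    if (pdf == 0) return 0;
    return lightBeta * f * Wi / pdf;
}
```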

<<Initialize dynamically sampled vertex and `L` for $t = 1$ case>>=

We omit the next case, $s = 1$, here. It corresponds to performing a direct
lighting calculation at the last vertex of the camera subpath. Its
implementation is similar to the $t = 1$ case; the main differences are that the
roles of lights and cameras are exchanged and that a light source must be
chosen using `lightDistr` before a light sample can be generated.

The last case, <<Handle all other bidirectional connection cases>>, is responsible for most types of connections: it applies whenever the camera and light subpath prefixes are long enough so that no special cases are triggered (i.e., when $s, t \ge 2$). If we consider the generalized path contribution equation from Section 15.1.1, we have constructed camera and light subpaths with the incremental path construction approach used in Section 14.5.3 for regular path tracing. Given the throughput of these paths up to the current vertices, $\beta(\bar{q})$ and $\beta(\bar{p})$, respectively, where

$$\beta(\bar{q}) = \frac{L_e(q_0 \rightarrow q_1)\,|\cos\theta_{q_0}|}{p(q_0)\,p_\omega(q_0 \rightarrow q_1)} \prod_{i=1}^{s-2} \frac{\hat{f}(q_{i-1} \rightarrow q_i \rightarrow q_{i+1})\,|\cos\theta_{q_i}|}{p_\omega(q_i \rightarrow q_{i+1})}$$

and similarly for $\beta(\bar{p})$, we can find that the contribution of a path of $s$ light vertices and $t$ camera vertices is given by

$$\hat{P}(\bar{p}_{s,t}) = \beta(\bar{q}) \left[\, \hat{f}(q_{s-2} \rightarrow q_{s-1} \rightarrow p_{t-1})\; \hat{G}(q_{s-1} \leftrightarrow p_{t-1})\; \hat{f}(q_{s-1} \rightarrow p_{t-1} \rightarrow p_{t-2}) \,\right] \beta(\bar{p}).$$

The first and last products, involving the emission, importance, and
generalized throughput terms ($\beta(\bar{q})$ for the light path and
$\beta(\bar{p})$ for the camera path), are already available in the
`Vertex::beta` fields of the connection vertices, so we only need to
compute the term in brackets to find the path’s overall contribution. The
symmetric nature of BDPT is readily apparent: the final contribution is
equal to the product of the subpath weights, the BRDF or phase functions,
and a (symmetric) generalized geometry term. Note that this strategy cannot
succeed when one of the connection vertices is marked as not
connectible; in this case, no connection attempt is made.

The product of subpath weights and the two BSDFs is often 0; this
happens, for example, if the connecting segment requires that light be
transmitted through one of the two surfaces but the corresponding surface
isn’t transmissive. In this case, it’s worth avoiding the unnecessary call to
the `G()` function, which traces a shadow ray to test visibility.

The generalized geometry term, Equation (15.5), is
computed in a separate function `G()`.
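For reference, a minimal sketch of such a geometry term between two surface points might look as follows; visibility (and any medium transmittance of the full generalized term) is collapsed into a boolean flag, and the point and normal types are hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-vector helpers for the sketch.
struct P3 { float x, y, z; };
static P3 sub(P3 a, P3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(P3 a, P3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Surface-only sketch of the geometry term: |cos| terms at both endpoints
// divided by squared distance, gated by a visibility flag.
float GeometryTerm(P3 p0, P3 n0, P3 p1, P3 n1, bool visible) {
    if (!visible) return 0;
    P3 d = sub(p1, p0);
    float dist2 = dot(d, d);
    if (dist2 == 0) return 0;
    float invDist = 1 / std::sqrt(dist2);
    float cos0 = std::fabs(dot(n0, d)) * invDist;  // |cos theta| at p0
    float cos1 = std::fabs(dot(n1, d)) * invDist;  // |cos theta| at p1
    return cos0 * cos1 / dist2;
}
```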

The computation of the multiple importance sampling weight for the connected
path is implemented as a separate function `MISWeight()`, which we
discuss next.

### 16.3.4 Multiple Importance Sampling

Recall the example of a light pointed up at the ceiling, indirectly
illuminating a room. Even without multiple importance sampling,
bidirectional path tracing will do much better than path tracing by
reducing the number of paths with no contribution, since the paths from the
light provide camera path vertices more light-carrying targets to hit with
connection segments (see Figure 16.17, which shows the effectiveness of
various types of bidirectional connections).
However, the image will still suffer from variance caused by paths with unexpectedly
large contributions due to vertices on the camera subpaths that happen to
find the bright spot on the ceiling. MIS can be applied to address this
issue; it automatically recognizes that connection strategies that involve
at least one scattering event on the light subpath lead to superior
sampling strategies in this case.
This ability comes at the cost of having to know the probabilities for
constructing a path according to all available strategies and is the reason for
caching the `Vertex::pdfFwd` and `Vertex::pdfRev` values earlier.

In this section, we will explain the `MISWeight()` function that
computes the multiple importance sampling weight associated with a
particular BDPT sampling strategy. It takes a light and camera subpath and
an integer pair $(s, t)$ identifying the prefixes used by a successful BDPT
connection attempt, producing a complete path $\bar{x}$. It iterates over all
alternative strategies that could hypothetically have generated the same
input path but with an earlier or later crossover point between the light
and camera subpaths. Figure 16.16 shows the basic idea
and Figure 16.17 shows images from the various
strategies individually.

Figure 16.16: `MISWeight()` considers other strategies that could have produced the same path (bottom). The $t = 0$ case is omitted for simplicity. (It only makes sense in systems where the sensor of the camera can be intersected by rays.)

The function then reweights the path contribution using the balance heuristic from Section 13.10.1, taking all of these possible sampling strategies into account. It is straightforward to switch to other MIS variants (e.g., based on the power heuristic) if desired.

Weighted versions of the images in Figure 16.17 are shown in Figure 16.18.

Let $(s, t)$ denote the currently considered connection strategy, which concatenates a prefix $q_0, \ldots, q_{s-1}$ from the light subpath and a (reversed) prefix $p_{t-1}, \ldots, p_0$ from the camera subpath, producing a path of length $n = s + t$ with vertices that we will refer to as $x_0, \ldots, x_{n-1}$:

$$\bar{x} = (x_0, \ldots, x_{n-1}) = (q_0, \ldots, q_{s-1}, p_{t-1}, \ldots, p_0).$$

Suppose that the probability per unit area of vertex $x_i$ is given by $p^{\rightarrow}(x_i)$ and $p^{\leftarrow}(x_i)$ for sampling strategies based on importance and radiance transport, respectively. Then the area product density of the current path is simply the product of the importance transport densities up to vertex $x_{s-1}$ and the radiance transport densities for the remainder:

$$p_s(\bar{x}) = \left[ \prod_{i=0}^{s-1} p^{\rightarrow}(x_i) \right] \left[ \prod_{i=s}^{n-1} p^{\leftarrow}(x_i) \right].$$
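This product form is easy to evaluate directly; the following sketch (hypothetical per-vertex density arrays in path order, not pbrt's `Vertex` representation) computes the area product density for a given light prefix length `s`:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the area product density: multiply importance transport
// densities for the first s vertices of the path x0..x(n-1) and radiance
// transport densities for the rest. Both arrays are hypothetical inputs.
double PathDensity(const std::vector<double> &pdfImportance,  // p->(x_i)
                   const std::vector<double> &pdfRadiance,    // p<-(x_i)
                   int s) {
    int n = (int)pdfImportance.size();
    double p = 1;
    for (int i = 0; i < s; ++i) p *= pdfImportance[i];  // light-side factors
    for (int i = s; i < n; ++i) p *= pdfRadiance[i];    // camera-side factors
    return p;
}
```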
Implementation-wise, the above expression is straightforward to evaluate:
the importance transport densities $p^{\rightarrow}$ are already cached in
the `Vertex::pdfFwd` fields of the light subpath, and the same holds true
for the radiance transport densities $p^{\leftarrow}$ on the camera subpath.

More generally, we are also interested in the path density according to
other connection strategies that could *in theory* have created
this path. This requires that they generate paths of a compatible length—i.e.,
that their light and camera prefix lengths sum to the same $n = s + t$. Their corresponding path density is given by

$$p_i(\bar{x}) = \left[ \prod_{j=0}^{i-1} p^{\rightarrow}(x_j) \right] \left[ \prod_{j=i}^{n-1} p^{\leftarrow}(x_j) \right],$$

where $0 \le i \le n$. Evaluating these will also involve the reverse
probabilities in `Vertex::pdfRev`.

Recall from Section 13.10.1 that the balance heuristic weight for strategy $s$ out of a set of $n + 1$ sampling strategies with uniform sample allocation was given by

$$w_s(\bar{x}) = \frac{p_s(\bar{x})}{\sum_{i=0}^{n} p_i(\bar{x})}.$$
This is the expression we would like to evaluate in `MISWeight()`, though
there are two practical issues that must first be addressed.
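To make those issues tangible, here is the naive evaluation of this weight, given precomputed path densities for every compatible strategy (a sketch, not the book's `MISWeight()`):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Naive balance heuristic weight for strategy s, given the area product
// densities p_i of the same path under every compatible strategy
// (uniform sample allocation assumed).
double BalanceWeight(const std::vector<double> &p, int s) {
    double denom = 0;
    for (double pi : p) denom += pi;   // sum over all strategies
    return denom > 0 ? p[s] / denom : 0;
}
```

Each `p[i]` is itself a product over all path vertices, which is exactly where the overflow and run-time problems discussed below originate.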

First, path densities can easily under- or overflow the range of representable values in single or even double precision. Area densities of individual vertices can be seen to be inversely proportional to the square of the scene dimensions: for instance, uniformly scaling the scene to half its size will quadruple the vertex densities. When computing the area product density of a path with 10 vertices, the same scaling operation will cause an increase in path density of $4^{10} \approx 10^6$ times. When working with very small or large scenes (compared to a box of unit volume), the floating point exponent of the densities $p_i(\bar{x})$ can quickly exceed the valid range.

Second, a naive MIS implementation has time complexity of $O(n^4)$, where $n$ is the maximum path length. Evaluation of $p_i$ based on Equation (16.17) involves a linear sweep over up to $n$ vertices, and the MIS weight in Equation (16.18) requires another sweep over up to $n$ strategies. Given that this must be done once per connection strategy for a number of strategies that is proportional to the square of the subpath length, we are left with an algorithm of quartic complexity.

We will avoid both of these issues by using a more efficient incremental computation that works with ratios of probability densities to achieve better numerical and run-time behavior.

Dividing both the numerator and denominator of Equation (16.18) by $p_s(\bar{x})$ yields

$$w_s(\bar{x}) = \frac{1}{\sum_{i=0}^{s-1} \dfrac{p_i(\bar{x})}{p_s(\bar{x})} + 1 + \sum_{i=s+1}^{n} \dfrac{p_i(\bar{x})}{p_s(\bar{x})}}.$$

The two sums above consider alternative strategies that would have taken additional steps on the camera or light subpath, respectively. Let us define a more concise notation for the individual summand terms:

$$r_i(\bar{x}) = \frac{p_i(\bar{x})}{p_s(\bar{x})}.$$

These satisfy the following recurrence relations:

$$r_i(\bar{x}) = \frac{p_i(\bar{x})}{p_{i+1}(\bar{x})}\, r_{i+1}(\bar{x}) \quad (i < s), \qquad r_i(\bar{x}) = \frac{p_i(\bar{x})}{p_{i-1}(\bar{x})}\, r_{i-1}(\bar{x}) \quad (i > s),$$

with $r_s(\bar{x}) = 1$. The recurrence weights in the above equations are ratios of path densities of two adjacent sampling strategies, which differ only in how a single vertex is generated. Thus, they can be reduced to probability ratios of the affected vertex:

$$\frac{p_i(\bar{x})}{p_{i+1}(\bar{x})} = \frac{p^{\leftarrow}(x_i)}{p^{\rightarrow}(x_i)}, \qquad \frac{p_i(\bar{x})}{p_{i-1}(\bar{x})} = \frac{p^{\rightarrow}(x_{i-1})}{p^{\leftarrow}(x_{i-1})}.$$
The main portion of the `MISWeight()` function accumulates these
probability ratios in a temporary variable `sumRi` using an incremental
evaluation scheme based on Equation (16.21). The last line returns the
reciprocal of the sum of all $r_i$ terms according to
Equation (16.19). There is also a special case at the
beginning, which directly returns a weight of 1 for paths with two
vertices, which can only be generated by a single strategy.
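This scheme can be illustrated with a small standalone function that mirrors the incremental evaluation (hypothetical per-vertex density arrays in path order; delta vertices, `remap0()`, and the skipped $t = 0$ strategy are left out for brevity):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Incremental balance heuristic: walk outward from strategy s, updating the
// density ratio r_i one vertex at a time instead of recomputing full
// products, then return 1 / (1 + sum of the ratios).
double MISWeightSketch(const std::vector<double> &pFwd,  // p->(x_i)
                       const std::vector<double> &pRev,  // p<-(x_i)
                       int s) {
    int n = (int)pFwd.size();
    double sumRi = 0;
    // Strategies i < s: each step converts vertex x_i from p-> to p<-.
    double ri = 1;
    for (int i = s - 1; i >= 0; --i) {
        ri *= pRev[i] / pFwd[i];          // p_i / p_{i+1}
        sumRi += ri;
    }
    // Strategies i > s: each step converts vertex x_{i-1} from p<- to p->.
    ri = 1;
    for (int i = s + 1; i <= n; ++i) {
        ri *= pFwd[i - 1] / pRev[i - 1];  // p_i / p_{i-1}
        sumRi += ri;
    }
    return 1 / (1 + sumRi);
}
```

Because only ratios of adjacent densities are ever formed, the huge products that can overflow the floating point range never appear, and each weight costs only $O(n)$ operations.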

<<Define helper function `remap0` that deals with Dirac delta functions>>

A helper function `remap0()` returns its
argument while mapping 0-valued arguments to 1. It is used to handle the
special case of Dirac delta functions in the path, which have a
continuous density of 0. Such degenerate vertices cannot be joined using any
deterministic connection strategy, and their discrete probability cancels when
iterating over the remaining set of strategies because it occurs both in the
numerator and denominator of the summands in
Equation (16.19). The purpose of the helper function
is to temporarily map these densities to a nonzero value to make sure that this
cancellation occurs without causing a division by 0.

<<Define helper function `remap0` that deals with Dirac delta functions>>=

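A sketch of such a helper is a one-liner that implements exactly the mapping described above:

```cpp
#include <cassert>

// remap0: return the argument unchanged, but map a 0-valued density (a
// Dirac delta vertex) to 1 so that the factor cancels in the ratio instead
// of producing a division by 0.
inline float remap0(float f) { return f != 0 ? f : 1; }
```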
To avoid an excessively large number of calls to the various `Vertex`
PDF functions, the weight computation uses the cached probabilities
in `Vertex::pdfFwd` and `Vertex::pdfRev`. Since these values only
capture information about the original camera and light subpaths, they must
still be updated to match the full path configuration near the crossover
point—specifically $q_{s-1}$ and $p_{t-1}$ and their
predecessors. This is implemented in the somewhat technical fragment
<<Temporarily update vertex properties for current strategy>>, which
we discuss last.

We iterate over hypothetical strategies that would have taken
additional steps from the light direction, using a temporary variable
`ri` to store the current iterate $r_i$. The fragment name makes
reference to the camera subpath, since these extra steps involve vertices
that were in reality sampled from the camera side. All vertex densities are passed
through the function `remap0()`, and the ratio is only added to the running sum
when the endpoints of the current hypothetical connection strategy are marked as
non-degenerate. The loop terminates before reaching the $t = 0$ strategy,
which shouldn’t be considered since the camera cannot be intersected.

The next step considers additional steps along the light subpath and largely resembles the previous case. A special case arises when the current strategy would involve intersecting a light source (i.e., when $s = 0$): this will fail when the endpoint involves a Dirac delta distribution, hence the additional test below.

Finally, we will define the missing fragment <<Temporarily update vertex
properties for current strategy>>, which modifies `Vertex` attributes
with new values specific to the current connection strategy . To reduce
the amount of code needed for both the update and the subsequent cleanup
operations, we will introduce a helper class `ScopedAssignment` that
temporarily modifies a given variable and then reverts its change when
program execution leaves the scope it was defined in. It
stores a pointer `ScopedAssignment::target` to a memory location of
arbitrary type (specified via the `Type` template parameter) and a
snapshot of the original value in `ScopedAssignment::backup`.

The `ScopedAssignment` constructor takes a pointer to a target memory
location and overwrites it with the `value` parameter after making a
backup copy. The destructor simply reverts any changes.
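A minimal variant of such a class, following the description above, might look like this (the copy operations are disabled in this sketch so that the backup is restored exactly once):

```cpp
#include <cassert>

// ScopedAssignment sketch: record the original value at a target location,
// overwrite it, and restore the backup when the object leaves scope.
template <typename Type> class ScopedAssignment {
  public:
    ScopedAssignment(Type *target = nullptr, Type value = Type())
        : target(target) {
        if (target) {
            backup = *target;   // snapshot of the original value
            *target = value;    // temporary override
        }
    }
    ~ScopedAssignment() { if (target) *target = backup; }  // revert on scope exit
    ScopedAssignment(const ScopedAssignment &) = delete;
    ScopedAssignment &operator=(const ScopedAssignment &) = delete;

  private:
    Type *target;
    Type backup;
};
```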

The main update operation then consists of finding the connection vertices
and their predecessors and updating vertex probabilities and other
attributes so that the two `Vertex` arrays reflect the chosen
connection strategy.

We begin by obtaining pointers to the affected connection vertices $q_{s-1}$ and $p_{t-1}$ and their predecessors $q_{s-2}$ and $p_{t-2}$.

Recall that strategies with $s = 1$ or $t = 1$ perform light source and camera
sampling, respectively, and thus generate a new endpoint. The implementation
accounts for this by temporarily overriding `*qs` or `*pt` with the sampled
vertex provided via the `sampled` argument of `MISWeight()`.

Certain materials in `pbrt` (e.g., the `UberMaterial`) instantiate both
specular and non-specular `BxDF` lobes, which requires some additional
consideration at this point: suppose the specular lobe of such a material is
sampled while generating the camera or light subpath. In this case, the
associated `Vertex` will have its `Vertex::delta` flag set to
`true`, causing `MISWeight()` to (correctly) ignore it as a
hypothetical connection vertex when comparing the densities of different
strategies.
On the other hand, it is possible that a BDPT strategy later connects this
degenerate vertex to a vertex on the other subpath using its non-specular
component. In this case, its “personality” must temporarily change to that of
a non-degenerate vertex. We always force the `Vertex::delta` attribute of
the connection vertices to `false` to account for this possibility.

Next, we will update the reverse sampling densities of the connection
vertices and their predecessors, starting with $p_{t-1}$. This vertex
was originally sampled on the camera subpath, but it could also have been
reached using an extra step from the light side
($q_{s-2} \to q_{s-1} \to p_{t-1}$ in three-point form); the resulting
density at $p_{t-1}$ is evaluated using `Vertex::Pdf()`.

The $s = 0$ case is special: here, $p_{t-1}$ is an intersection with a
light source found on the camera subpath. The alternative reverse sampling
strategy generates a light sample using `Light::Sample_Le()`, and we
evaluate its spatial density with the help of
`Vertex::PdfLightOrigin()`.

The next fragment initializes the `pdfRev` field of $p_{t-2}$
with the density of the reverse strategy
$q_{s-1} \to p_{t-1} \to p_{t-2}$. Once more, there is a special case for
$s = 0$, where the alternative reverse strategy samples an emitted ray via
`Light::Sample_Le()` and intersects it against the scene geometry; the
corresponding density is evaluated using `Vertex::PdfLight()`.

The last fragment, <<Update reverse density of vertices $q_{s-1}$ and $q_{s-2}$>>, is not included here. It is analogous except that it does not require a similar special case for $t = 0$.

### 16.3.5 Infinite Area Lights and BDPT

The infinite area light, first introduced in Section 12.6, provides a convenient way of illuminating scenes with realistic captured illumination. Unfortunately, its definition as an infinitely distant directional source turns out to be rather difficult to reconcile with BDPT’s path integral formulation, which expresses probabilities in terms of area densities, which in turn requires finite-sized emitting surfaces.

Through some gymnastics, we could represent infinite area lights with
finite shapes. For example, an infinite area light’s radiance emission
distribution could be described by a large emitting sphere that surrounded
the scene, where the directional distribution of emitted radiance at each
point on the interior of the sphere was the same as the infinite area
light’s emitted radiance for the same direction. This approach would
require a significantly more complex implementation of
`InfiniteAreaLight` with no practical benefits apart from BDPT
compatibility.

Instead of changing the functionality of `InfiniteAreaLight`, we will
instead make infinite area lights a special case in BDPT. Since
illumination from these lights is most naturally integrated over solid angles, our
approach will be to add support for solid angle integration to the vertex
abstraction layer. In practice, scenes may contain multiple infinite area
lights; we will follow the convention of treating them as one combined
light when evaluating the emitted radiance or determining sample
probabilities.

First, we will create a special endpoint vertex any time a camera path ray escapes
the scene. The <<Capture escaped rays when tracing from the
camera>> fragment is invoked by `RandomWalk()` whenever no surface
intersection could be found while generating the camera subpath. In the
implementation of the `Vertex::CreateLight()` method called here,
the `pdfFwd` variable recording the probability per unit solid angle
is stored directly in `Vertex::pdfFwd` without conversion by
`Vertex::ConvertDensity()`.

The existence of light vertices on the camera subpath leads to certain
nonsensical connection strategies. For instance, we couldn’t possibly connect a
light vertex on the camera subpath to another vertex on the light subpath. The
following check in `ConnectBDPT()` detects and ignores
such connection attempts.

Some parts of the code may still attempt to invoke `ConvertDensity()`
with a `next` `Vertex` that refers to an infinite area light. The
following fragment detects this case at the beginning of
`ConvertDensity()` and directly returns the supplied solid angle
density without conversion in that case.

<<Return solid angle density if `next` is an infinite area light>>=

Next, we need to adapt the light subpath sampling routine to correct the
probability values returned by the ray sampling function
`InfiniteAreaLight::Sample_Le()`. This case is detected in an additional
fragment at the end of `GenerateLightSubpath()`.

<<Set spatial density of `path[1]` for infinite area light>>

<<Set spatial density of `path[0]` for infinite area light>>

Recall that `InfiniteAreaLight::Sample_Le()` samples a ray direction (with a
corresponding density `pdfDir`) and a ray origin on a perpendicular disk
(with a corresponding density `pdfPos`) that touches the scene’s bounding
sphere. Due to foreshortening, the resulting ray has a corresponding spatial
density of at its first intersection with the
scene geometry, where is the angle between and the
geometric normal.

<<Set spatial density of `path[1]` for infinite area light>>=
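The conversion just described amounts to a one-line computation; as an illustrative stand-in for the elided fragment body (the helper name is hypothetical):

```cpp
#include <cassert>
#include <cmath>

// A ray origin sampled with area density pdfPos on a disk perpendicular to
// the ray direction arrives at a surface whose normal makes angle theta
// with that direction; foreshortening turns the disk density into a
// surface-area density of pdfPos * |cos theta|.
float InfiniteLightSpatialDensity(float pdfPos, float cosTheta) {
    return pdfPos * std::fabs(cosTheta);
}
```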

Following our new convention, the spatial density of infinite area light
endpoints is now expressed as a probability per unit solid angle. We will
create a helper function `InfiniteLightDensity()` that determines this
value while also accounting for the presence of other infinite area lights.

<<Set spatial density of `path[0]` for infinite area light>>=

This function performs a weighted sum of the directional densities of all
infinite area lights using the light probabilities in `lightDistr`.
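Such a weighted sum might be sketched as follows, with the light list and `lightDistr` replaced by plain arrays of hypothetical per-light selection probabilities and directional densities:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the combined solid angle density over all infinite area
// lights: each light's directional pdf for the query direction is weighted
// by its selection probability from the light distribution.
double InfiniteLightDensitySketch(const std::vector<double> &selectionPdf,
                                  const std::vector<double> &directionPdf) {
    double pdf = 0;
    for (size_t i = 0; i < selectionPdf.size(); ++i)
        pdf += selectionPdf[i] * directionPdf[i];  // weighted directional density
    return pdf;
}
```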

The remaining two changes are analogous and address the probability
computation in the `PdfLightOrigin()` and `PdfLight()`
methods. For the former, we similarly return the combined solid angle
density when an infinite area light is detected.

In `PdfLight()`, we compute the probability of sampling a ray origin on a
disk whose radius is equal to the scene’s bounding sphere. The remainder of
`PdfLight()` already accounts for the necessary cosine foreshortening
factor; hence we do not need to multiply by it here.
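As a small illustration of that origin density (a sketch with a hypothetical helper name, not pbrt's actual code): a uniformly sampled point on a disk of radius equal to the scene's bounding sphere radius has area density one over the disk area.

```cpp
#include <cassert>
#include <cmath>

// Uniform area density of a ray origin sampled on a disk whose radius
// equals the scene bounding sphere radius: 1 / (pi * r^2). The cosine
// foreshortening factor is applied elsewhere in PdfLight().
double DiskOriginPdf(double worldRadius) {
    const double Pi = 3.14159265358979323846;
    return 1 / (Pi * worldRadius * worldRadius);
}
```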