## 16.1 The Path-Space Measurement Equation

In light of the path integral form of the LTE from Equation (14.16), it’s useful to go back and formally describe the quantity that is being estimated when we compute pixel values for an image. Not only does this let us see how to apply the LTE to a wider set of problems than just computing 2D images (e.g., to precomputing scattered radiance distributions at the vertices of a polygonal model), but this process also leads us to a key theoretical mechanism for understanding the bidirectional path tracing and photon mapping algorithms in this chapter. For simplicity, we’ll use the basic path integral for surfaces rather than the generalized variant from Section 15.1.1, though the conclusions are applicable to both versions of the LTE.

The *measurement equation* describes the value of an abstract
measurement that is found by integrating over some set of rays carrying
radiance.
For example, when computing the value of a pixel in the image, we want to
integrate over rays starting in the neighborhood of the pixel, with
contributions weighted by the image reconstruction filter. Ignoring depth
of field for now (so that each point on the film plane corresponds to a
single outgoing direction from the camera), we can write the pixel’s value
as an integral over points on the film plane of a weighting function times
the incident radiance along the corresponding camera rays:

$$I_j = \int_{A_{\mathrm{film}}} \int_{\mathcal{S}^2} W_e^{(j)}(p, \omega)\, L_i(p, \omega)\, |\cos\theta|\, d\omega\, dA(p),$$

where $I_j$ is the measurement for the $j$th pixel and $p$ is a point on the film. In this setting, the $W_e^{(j)}$ term is the product of the filter function around the pixel, $f_j$, and a delta function that selects the appropriate camera ray direction of the sample from $p$, $\omega_{\mathrm{camera}}(p)$:

$$W_e^{(j)}(p, \omega) = f_j(p)\,\delta(\omega - \omega_{\mathrm{camera}}(p)).$$

This formulation may initially seem gratuitously complex, but it leads us to an important insight. If we expand the terms of the LTE sum, we have

$$I_j = \sum_{i=1}^{\infty} \int_{A^{i+1}} W_e^{(j)}(p_0 \to p_1)\, T(\bar{p}_i)\, L_e(p_i \to p_{i-1})\, dA(p_0) \cdots dA(p_i), \tag{16.1}$$

where $T(\bar{p}_i)$ is the path throughput function introduced in Equation (14.18).

Note the nice symmetric way in which the emitted radiance $L_e$ (quantifying the light source emission profile) and the weighting function $W_e^{(j)}$ (quantifying the camera’s sensitivity profile for pixel $j$) appear in the above equation: neither term is treated specially, and from this we can infer that the concepts of emission and measurement are mathematically interchangeable.

The implications of this symmetry are important: it says that we can think of the rendering process in two different ways. The first interpretation is that light could be emitted from light sources, bounce around the scene, and arrive at a sensor where describes its contribution to the measurement. Alternatively, we can think of the sensor as emitting an imaginary quantity that creates a measurement when it reaches a light source. This idea isn’t just a theoretical construct: it can be applied in practice. A good example is the Dual Photography work of Sen et al. (2005) that showed that it was possible to take photographs from the viewpoint of a video projector by processing input photographs taken with a separate camera—this could be interpreted as turning the projector into a camera, while using the original camera as the “light source” to illuminate the scene.

By simply swapping the role of cameras and light sources in this way, we can
create a method known as *particle tracing,* which traces rays from the
light sources to recursively estimate the incident importance arriving on
surfaces. This is not a particularly useful rendering technique on its own, but
it constitutes an essential ingredient of other methods such as bidirectional
path tracing and photon mapping.

The value described by the $W_e^{(j)}(p_0 \to p_1)$ term is known as the
*importance* for the ray between $p_0$ and $p_1$ in the scene.
When the
measurement equation is used to compute pixel measurements, the importance
will often be partially or fully described by delta distributions, as it
was in the previous example. Many other types of measurements besides image
formation can be described by appropriately constructed importance
functions, and thus the formalisms described here can be used to show how
the integral over paths described by the measurement equation is also the
integral that must be estimated to compute them.

### 16.1.1 Sampling Cameras

Bidirectional light transport algorithms require the ability to evaluate the
value of the importance function for arbitrary points in the scene; this is useful,
for example, for computing the importance of a point along a path that started at
a light source. The `Camera::We()` method takes
a ray with origin $p$ and direction $\omega$
and evaluates the importance emitted from the point $p$
on the camera in the direction $\omega$. If provided, the `pRaster2`
parameter is used to return the raster position associated with the ray on the
film; conceptually this can be understood as the discrete index $j$ such that
$W_e^{(j)}(p, \omega)$ attains its maximum value; in practice, the function
returns fractional values to specify the raster positions more accurately.

The default implementation of this method generates an error message; it’s
currently only implemented for the perspective camera model in `pbrt`.
Implementing it for other `Camera` models is saved for
Exercise 16.1 at the end of the chapter.

Given the camera-to-world transformation for the provided time, the method
checks that the direction $\omega$ points into the hemisphere that the
camera is facing by transforming the camera-space viewing direction
$(0, 0, 1)$ to world space and computing the cosine of the angle between
the two. If these directions are more than 90 degrees apart, then the
camera would never return a ray in this direction from its
`GenerateRay()` method, and so an importance value of 0 can be
returned immediately.

A slightly more involved test next checks if the ray corresponds to one starting from the film area. If its origin is outside of the film’s extent, then the point is outside of the camera’s viewing volume, and again a zero importance value can be returned.

For a camera with a finite aperture, we have a point $p$ on the lens and a ray direction $\omega$ (Figure 16.1). We don’t yet know the point on the film that this ray corresponds to, but we do know that all rays leaving that point are in focus at the plane $z = \texttt{focalDistance}$. Therefore, if we compute the ray’s intersection with the plane of focus, then transforming that point with the perspective projection matrix gives us the corresponding point on the film. For pinhole apertures, we compute the intersection with a plane arbitrarily set at $z = 1$ to get a point along the ray leaving the camera before performing the projection.

Figure 16.1: Computing the point `pFocus` for a ray leaving the lens. To compute the value of the importance function for a ray, we need to find the point it corresponds to on the film plane. To do so, we first compute `pFocus`, the point where the ray intersects the plane of focus. This point can in turn be projected by the camera’s perspective projection matrix to find the corresponding raster-space point on the film.

Given the raster-space point, it’s easy to check if it’s inside the image extent.

The perspective camera in `pbrt` is an ideal sensor in the sense that it
generates samples with a uniform distribution over the film area. We will now
use this fact to derive the corresponding directional sampling distribution.
We’ll start by defining a camera-space image rectangle that all camera rays
pass through and (arbitrarily) choose the one on the plane $z = 1$. The following
fragment that is part of the `PerspectiveCamera` constructor uses the
`RasterToCamera` transformation and divides by the $z$ coordinate to
compute the rectangle’s corner points, which in turn give the rectangle’s area
$A$.

<<Compute image plane bounds at z=1 for `PerspectiveCamera`>>=

Figure 16.2: The directional sampling density of the `PerspectiveCamera`. Given a point on the visible region of the image plane at $z = 1$, sampled with area-measured PDF $p(A) = 1/A$, we can compute the directional PDF at a point on the lens (filled circle) by applying Equation (5.6) to account for the distance $d$ to the point on the lens and $\theta$, the angle between the vector from the image plane point to the point on the lens and the surface normal of the image plane.

The importance function doesn’t have to obey any normalization constraints
(just like emitted radiance from an area light). However, we’ll define the
`PerspectiveCamera`’s importance function as a normalized PDF on ray space,
both for convenience in the following and also to be consistent with the weight values of 1
returned from `PerspectiveCamera::GenerateRay()`.

The importance function of the `PerspectiveCamera` varies smoothly over its
support (the set of rays within the viewing frustum); this variation is defined
so that it cancels the vignetting that a real pinhole camera would have and
ensures that pixels record values in units of radiance (this is another reason
why `PerspectiveCamera::GenerateRay()` returns weight values of 1).

The camera uniformly generates samples over the image plane area $A$; thus, the area-measured PDF for points on the image plane is $p(A) = 1/A$. Consider now the directional PDF at a point on the lens (or the camera’s pinhole) corresponding to a differential area on the image plane (Figure 16.2); applying Equation (5.6) to transform to a directional density gives

$$p(\omega) = \frac{d^2}{A\,\cos\theta},$$

where $\theta$ is the angle that $\omega$ makes with the image rectangle normal and $d$ is the distance between the point on the lens and the ray’s intersection with the $z = 1$ plane. The distance $d$ to the point on the image plane is

$$d = \frac{1}{\cos\theta},$$

as $1$ is the $z$ coordinate of the intersection point in the local camera coordinate system and the lens lies at $z = 0$. Hence,

$$p(\omega) = \begin{cases} \dfrac{1}{A \cos^3\theta}, & \text{if } \omega \text{ is within the frustum,} \\[1ex] 0, & \text{otherwise.} \end{cases} \tag{16.2}$$

By construction, the above density function is normalized when integrated over directions—however, we initially set out to create a normalized importance function $W_e(p, \omega)$ that is defined on the space of camera rays, with $p \in A_{\mathrm{lens}}$, where $A_{\mathrm{lens}}$ is the surface region associated with the perspective camera’s lens element. This function must satisfy the ray-space normalization criterion

$$\int_{A_{\mathrm{lens}}} \int_{\mathcal{H}^2} W_e(p, \omega)\, |\cos\theta|\, d\omega\, dA(p) = 1. \tag{16.3}$$

We can’t directly set $W_e$ equal to $p(\omega)$ due to the extra integration over areas and the additional cosine factor in the above integral.

Note that the lens area of a perspective camera is equal to $\pi r^2$, where $r$ is the lens radius. For a pinhole camera, the lens area is set to 1 and interpreted as a Dirac delta function.

We then define $W_e$ on ray space as

$$W_e(p, \omega) = \begin{cases} \dfrac{p(\omega)}{A_{\mathrm{lens}} \cos\theta} = \dfrac{1}{A\, A_{\mathrm{lens}} \cos^4\theta}, & \text{if } \omega \text{ is within the frustum,} \\[1ex] 0, & \text{otherwise,} \end{cases} \tag{16.4}$$

which divides by the lens area and a $\cos\theta$ term to cancel out the cosine factor from Equation (16.3). At this point, the implementation has already ensured that $\omega$ is within the frustum, hence the only thing left to do is to return the importance value according to the first case of Equation (16.4).
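As a sanity check on this definition, the normalization of Equation (16.3) can be verified numerically. The following standalone sketch (not pbrt source; all names here are made up for illustration) evaluates the importance of Equation (16.4) over a quadrature grid on the $z = 1$ image rectangle and confirms that the ray-space integral is 1:

```cpp
#include <cassert>
#include <cmath>

// Sketch: verify the ray-space normalization of the perspective camera's
// importance function, Equation (16.3), using Equation (16.4):
//   We(p, w) = 1 / (A * lensArea * cos^4(theta))   for w inside the frustum.
// The directional integral is carried out over the z=1 image rectangle: a
// differential area dA' there subtends the solid angle
//   d_omega = cos(theta) dA' / d^2 = cos^3(theta) dA',  since d = 1/cos(theta).
double IntegrateImportance(double halfX, double halfY, double lensArea, int n) {
    double A = (2 * halfX) * (2 * halfY);  // image rectangle area
    double dAImg = A / (n * n);
    double sum = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double x = -halfX + (i + 0.5) * (2 * halfX / n);
            double y = -halfY + (j + 0.5) * (2 * halfY / n);
            double cosTheta = 1 / std::sqrt(x * x + y * y + 1);
            double c3 = cosTheta * cosTheta * cosTheta;
            double We = 1 / (A * lensArea * c3 * cosTheta);  // Equation (16.4)
            // Integrand of Equation (16.3): We * |cos(theta)|, integrated over
            // solid angle (c3 * dAImg); the integrand is constant over the
            // lens, so the lens integral contributes a factor of lensArea.
            sum += We * cosTheta * (c3 * dAImg) * lensArea;
        }
    return sum;  // should be 1 for any rectangle and lens size
}
```

Note that the $\cos^4\theta$ falloff of $W_e$ exactly cancels the solid-angle compression toward the rectangle’s edges, which is why the result is independent of the rectangle’s dimensions.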

In the light of a definition for the perspective camera’s importance function,
we can now reinterpret the ray generation function `Camera::GenerateRay()`
as an importance sampling technique for `We()`. As such, it’s appropriate
to define a `Camera` method that separately returns the spatial and
directional PDFs for sampling a particular ray leaving the camera, analogous to
the `Light::Pdf_Le()` method for light sources that will be introduced in
Section 16.1.2. As before, this method is currently only
implemented for the `PerspectiveCamera` and the default implementation
generates an error message.

The directional density of the ideal sampling strategy implemented in the
method `PerspectiveCamera::GenerateRay()` was already discussed and is
equal to $p(\omega)$ as defined in Equation (16.2). The spatial density is
the reciprocal of the lens area. Due to this overlap, the first four fragments
of `PerspectiveCamera::Pdf_We()` are therefore either identical or almost
identical to the similarly named fragments from `PerspectiveCamera::We()`
with the exception that they return zero probabilities via `*pdfPos` and
`*pdfDir` upon failure.

One last additional `Camera` method completes the symmetry between
light sources and cameras: it samples a point on the camera lens and
computes an incident direction along which importance is arriving at the
given reference position in the scene; it is thus the camera equivalent of
`Light::Sample_Li()`.

Like `Sample_Li()`, the PDF value this method returns is defined with
respect to solid angle at the reference point.

The `PerspectiveCamera` implementation of this method samples a point
on the lens to compute the incident importance at the reference point.

<<Uniformly sample a lens interaction `lensIntr`>>


We can compute an `Interaction` for a point on the lens by using
`u` to sample the lens and then transforming this point to world
space. For pinhole cameras, `lensRadius` is 0 and `pLens`
always ends up being at the origin.

<<Uniformly sample a lens interaction `lensIntr`>>=

Given the point on the lens, most of the output parameters of
`Sample_Wi()` can be initialized in a straightforward manner.


The PDF of the sample is the probability of sampling a point on the lens
(`1 / lensArea`), converted into a probability per unit solid angle at
the reference point. For pinhole cameras, there is an implied delta
distribution in both the PDF and the importance function value that cancel
out later. (This is following the same convention as was used for BSDFs
and light sources in Sections 14.1.3
and 14.2.1).
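The density conversion described here follows the standard change of variables from Equation (5.6). A standalone sketch (illustrative stand-ins, not the pbrt implementation) of converting the lens-area density $1/A_{\mathrm{lens}}$ into a solid-angle density at the reference point might look like:

```cpp
#include <cassert>
#include <cmath>

// Sketch: convert the areal density of a sampled lens point,
// p(A) = 1 / lensArea, into a density per unit solid angle at a reference
// point, via p(w) = p(A) * dist^2 / cos(theta), where theta is measured
// against the lens normal. All names here are illustrative.
struct Vec3 { double x, y, z; };

double SolidAnglePdfFromLens(const Vec3 &ref, const Vec3 &pLens,
                             const Vec3 &lensNormal, double lensArea) {
    Vec3 d{pLens.x - ref.x, pLens.y - ref.y, pLens.z - ref.z};
    double dist2 = d.x * d.x + d.y * d.y + d.z * d.z;
    double dist = std::sqrt(dist2);
    // Cosine of the angle between the connecting direction and the lens normal
    double cosTheta = std::fabs((d.x * lensNormal.x + d.y * lensNormal.y +
                                 d.z * lensNormal.z) / dist);
    return dist2 / (cosTheta * lensArea);
}
```

For a pinhole camera, the implied delta distribution in position cancels against the matching delta in the importance value, just as described in the text.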

<<Compute PDF for importance arriving at `ref`>>=

### 16.1.2 Sampling Light Rays

For bidirectional light transport algorithms, it’s also necessary to add a
light sampling method, `Sample_Le()`, that samples a ray from a
distribution of rays *leaving* the light, returning the ray in
`*ray` and the surface normal at the point on the light source in
`*nLight` (effectively, the analog of `Camera::GenerateRay()`).
A total of four sample values are passed to this method in the `u1`
and `u2` parameters so that two are available to sample the ray’s
origin and two are available for its direction. Not all light
implementations need all of these values—for example, the origin of all
rays leaving a point light is the same.

This method returns two PDF values: the ray origin’s probability density with respect to surface area on the light and its direction’s probability density with respect to solid angle. The joint probability of sampling the ray is the product of these two probabilities.

So that multiple importance sampling can be applied, there is also a method to return the position and direction PDFs for a given ray.

#### Point Lights

The sampling method for generating rays leaving point lights is
straightforward. The origin of the ray must be the light’s position; this
part of the density is described by a delta distribution. Directions are
uniformly sampled over the sphere, and the overall sampling density is the
product of these two densities. As usual, we’ll ignore the delta
distribution that is in the actual PDF because it is canceled out by a
(missing) corresponding delta term in the radiance value in the
`Spectrum` returned by the sampling routine.
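The sampling scheme just described can be sketched as follows (an illustrative standalone version, not the pbrt source; the function names are made up):

```cpp
#include <cassert>
#include <cmath>

// Sketch of ray sampling for an idealized point light: the origin is fixed
// at the light's position (a delta distribution, represented as pdfPos = 1
// by the convention described in the text), and the direction is sampled
// uniformly over the sphere with pdfDir = 1/(4*pi).
struct Vec3 { double x, y, z; };
const double Pi = 3.14159265358979323846;

Vec3 UniformSampleSphere(double u1, double u2) {
    double z = 1 - 2 * u1;  // z uniform in [-1, 1]
    double r = std::sqrt(std::fmax(0.0, 1 - z * z));
    double phi = 2 * Pi * u2;
    return Vec3{r * std::cos(phi), r * std::sin(phi), z};
}

void SamplePointLightRay(const Vec3 &lightPos, double u1, double u2,
                         Vec3 *origin, Vec3 *dir,
                         double *pdfPos, double *pdfDir) {
    *origin = lightPos;           // delta distribution in position
    *dir = UniformSampleSphere(u1, u2);
    *pdfPos = 1;                  // delta convention (see text)
    *pdfDir = 1 / (4 * Pi);       // uniform over the sphere of directions
}
```

The joint density of the sampled ray is then the product `*pdfPos * *pdfDir`, matching the description above.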

#### Spotlights

The method for sampling an outgoing ray with a reasonable distribution for the spotlight is more interesting. While it could just sample directions uniformly on the sphere as was done for the point light, this distribution is likely to be a bad match for the spotlight’s actual distribution. For example, if the light has a very narrow beam angle, many samples will be taken in directions where the light doesn’t cast any illumination. Instead, we will sample from a uniform distribution over the cone of directions in which the light casts illumination. Although the sampling distribution does not try to account for the falloff toward the edges of the beam, this is only a minor shortcoming in practice.

The PDF for the spotlight’s illumination distribution is
separable, with $p(\omega) = p(\theta)\,p(\phi)$ and $p(\phi) = 1/(2\pi)$.
We thus just need to find a sampling
distribution for $\theta$. The `UniformSampleCone()` function from
Section 13.6.4 provides this functionality.
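An illustrative standalone reimplementation in the spirit of `UniformSampleCone()` (a sketch, not the pbrt source) samples $\cos\theta$ uniformly in $[\cos\theta_{\max}, 1]$, giving a constant density over the cone:

```cpp
#include <cassert>
#include <cmath>

// Sketch: uniformly sample a direction inside a cone around +z with half
// angle thetaMax. cos(theta) is drawn uniformly from [cosThetaMax, 1], so
// the density is constant: pdf = 1 / (2*pi*(1 - cosThetaMax)), the
// reciprocal of the cone's solid angle.
struct Vec3 { double x, y, z; };
const double Pi = 3.14159265358979323846;

Vec3 UniformSampleConeSketch(double u1, double u2, double cosThetaMax) {
    double cosTheta = (1 - u1) + u1 * cosThetaMax;
    double sinTheta = std::sqrt(std::fmax(0.0, 1 - cosTheta * cosTheta));
    double phi = 2 * Pi * u2;
    return Vec3{sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta};
}

double UniformConePdfSketch(double cosThetaMax) {
    return 1 / (2 * Pi * (1 - cosThetaMax));
}
```

Multiplying the PDF by the cone’s solid angle $2\pi(1 - \cos\theta_{\max})$ yields 1, confirming normalization.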

The `SpotLight`’s `Pdf_Le()` method for sampled rays must check to
see if the direction is inside the cone of illuminated directions before
returning the cone sampling PDF.

The sampling routines for `ProjectionLight`s and
`GonioPhotometricLight`s are essentially the same as the ones for
`SpotLight`s and `PointLight`s, respectively. For sampling
outgoing rays, `ProjectionLight`s sample uniformly from the cone that
encompasses their projected image map (hence the need to compute
`ProjectionLight::cosTotalWidth` in the constructor), and those for
`GonioPhotometricLight`s sample uniformly over the unit sphere.
Exercise 16.2 at the end of this chapter discusses improvements to these
sampling methods that better account for the directional variation of these
lights.

#### Area Lights

The method for sampling a ray leaving an area light is also easily implemented in terms of the shape sampling methods from Section 14.2.2.

<<Sample a point on the area light's `Shape`, `pShape`>>

<<Sample a cosine-weighted outgoing direction `w` for area light>>

The surface area–based variant of `Shape::Sample()` is used to find the ray
origin, sampled from some density over the surface.

<<Sample a point on the area light's `Shape`, `pShape`>>=

The ray’s direction is sampled from a cosine-weighted distribution about
the surface normal at the sampled point. Incorporating this cosine
weighting means that rays leaving the light carry uniform differential
power, which is preferable for bidirectional light transport
algorithms. Because the direction returned by
`CosineSampleHemisphere()` is in the canonical coordinate system, it
must be transformed to the coordinate system about the surface normal at
the sampled point here.

<<Sample a cosine-weighted outgoing direction `w` for area light>>=
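The cosine-weighted sampling and change of frame described above can be sketched as follows (a standalone illustration with made-up helper names, not the pbrt source):

```cpp
#include <cassert>
#include <cmath>

// Sketch: draw a cosine-weighted direction in the canonical hemisphere via
// Malley's method (uniform disk sample projected up to the hemisphere), then
// express it in the coordinate system of the surface normal n. The sampled
// density is pdf = cos(theta) / pi.
struct Vec3 { double x, y, z; };
const double Pi = 3.14159265358979323846;

Vec3 CosineSampleHemisphereSketch(double u1, double u2) {
    double r = std::sqrt(u1), phi = 2 * Pi * u2;  // uniform point on unit disk
    double x = r * std::cos(phi), y = r * std::sin(phi);
    return Vec3{x, y, std::sqrt(std::fmax(0.0, 1 - x * x - y * y))};
}

// Build an orthonormal basis (t, b, n) around the normalized n and express
// the canonical-space vector w in it; w.z becomes the component along n.
Vec3 ToNormalFrame(const Vec3 &w, const Vec3 &n) {
    Vec3 t = std::fabs(n.x) > 0.9 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    double d = t.x * n.x + t.y * n.y + t.z * n.z;      // Gram-Schmidt step
    t = Vec3{t.x - d * n.x, t.y - d * n.y, t.z - d * n.z};
    double len = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = Vec3{t.x / len, t.y / len, t.z / len};
    Vec3 b{n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x};
    return Vec3{w.x * t.x + w.y * b.x + w.z * n.x,
                w.x * t.y + w.y * b.y + w.z * n.y,
                w.x * t.z + w.y * b.z + w.z * n.z};
}
```

By construction, the returned direction always lies in the hemisphere around the normal, since its component along `n` equals the (nonnegative) canonical `w.z`.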

#### Distant Lights

Sampling a ray from the `DistantLight`’s distribution of outgoing rays
is a more interesting problem. The ray’s direction is determined in
advance by a delta distribution; it must be the same as the light’s negated
direction. For its origin, there are an infinite number of 3D points where
it could start. How should we choose an appropriate one, and how do we
compute its density?

The desired property is that rays intersect points in the scene that are
illuminated by the distant light with uniform probability. One way to do this
is to construct a disk that has the same radius as the scene’s bounding
sphere and has a normal that is oriented with the light’s direction and then choose
a random point on this disk, using the `ConcentricSampleDisk()` function
(Figure 16.3). Once this point has been chosen, if the
point is displaced along the light’s direction by the scene’s bounding sphere
radius and used as the origin of the light ray, the ray origin will be outside
the bounding sphere of the scene.

Figure 16.3: The `DistantLight::Sample_Le()` method finds the disk oriented in the light’s direction that is large enough so that the entire scene can be intersected by rays leaving the disk in the light’s direction. Ray origins are sampled uniformly by area on this disk, and ray directions are given directly by the light’s direction.

This is a valid sampling approach, since by construction it has nonzero probability of sampling all incident rays into the sphere due to the directional light. The area component of the sampling density is uniform and therefore equal to the reciprocal of the area of the disk that was sampled. The directional density is given by a delta distribution based on the light’s direction.

Choosing the point on the oriented disk is a simple application of vector algebra. We construct a coordinate system with two vectors perpendicular to the disk’s normal (the light’s direction); see Figure 16.4. Given a random point on the canonical unit disk, computing the offsets from the disk’s center with respect to its coordinate vectors gives the corresponding point.

Finally, the point is offset along the light direction and the ray can
be initialized. Recall from Section 12.4 that
`DistantLight`s can’t be embedded in any medium other than a vacuum.
Therefore, no medium needs to be specified for the ray.
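The construction described in the last two paragraphs can be sketched as a standalone helper (illustrative names, not the pbrt source), which builds a frame perpendicular to the light direction, offsets within the disk, and then displaces against the direction of travel:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the distant-light ray-origin construction: given the light's
// travel direction w (normalized) and the scene bounding sphere (center,
// radius), sample a point on a disk of that radius perpendicular to w and
// push it back along -w by the radius, so the origin lies outside the
// scene's bounding sphere.
struct Vec3 { double x, y, z; };

static Vec3 Add(Vec3 a, Vec3 b) { return Vec3{a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 Scale(Vec3 a, double s) { return Vec3{a.x * s, a.y * s, a.z * s}; }

// Build two unit vectors v1, v2 perpendicular to w.
static void MakeCoordinateSystem(const Vec3 &w, Vec3 *v1, Vec3 *v2) {
    if (std::fabs(w.x) > std::fabs(w.y)) {
        double inv = 1 / std::sqrt(w.x * w.x + w.z * w.z);
        *v1 = Vec3{-w.z * inv, 0, w.x * inv};
    } else {
        double inv = 1 / std::sqrt(w.y * w.y + w.z * w.z);
        *v1 = Vec3{0, w.z * inv, -w.y * inv};
    }
    *v2 = Vec3{w.y * v1->z - w.z * v1->y, w.z * v1->x - w.x * v1->z,
               w.x * v1->y - w.y * v1->x};
}

// (dx, dy) is a sample on the canonical unit disk (e.g., from a concentric
// disk mapping); the areal PDF of the resulting origin is 1 / (pi * radius^2).
Vec3 DistantLightRayOrigin(const Vec3 &w, const Vec3 &center, double radius,
                           double dx, double dy) {
    Vec3 v1, v2;
    MakeCoordinateSystem(w, &v1, &v2);
    Vec3 pDisk = Add(center, Scale(Add(Scale(v1, dx), Scale(v2, dy)), radius));
    return Add(pDisk, Scale(w, -radius));  // displace against the light direction
}
```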

#### Infinite Area Lights

Generating a random ray leaving an infinite light source can be done by
sampling a direction with the same approach as the earlier
`InfiniteAreaLight::Sample_Li()` method. The sampled ray’s origin is
then set using the same approach as was used for `DistantLight`s, where
a disk that covers the scene’s bounding sphere is oriented along the ray’s
direction (recall Figure 16.3). We therefore
won’t include the <<Compute direction for infinite light sample
ray>> or <<Compute origin for infinite light sample ray>> fragments
here.

<<Compute `InfiniteAreaLight` ray PDFs>>

The PDFs for these rays are the PDF for sampling the direction (as derived in Section 14.2.4) and the PDF for sampling a point on the disk.

<<Compute `InfiniteAreaLight` ray PDFs>>=

The `Pdf_Le()` method applies the same formulas, so we won’t include
its implementation here.

### 16.1.3 Non-symmetric Scattering

Certain aspects in the input scene specification of materials and geometry can lead to non-symmetric behavior in light transport simulations, where incident radiance and importance are scattered in different ways at a point. If these differences aren’t accounted for, rendering algorithms based on radiance and importance transport will produce different and inconsistent results when rendering the same input scene. Bidirectional techniques that combine radiance and importance transport are particularly affected, since their design is fundamentally based on the principle of symmetry.

In this section, we will briefly enumerate cases that give rise to non-symmetry and explain how they can be addressed to arrive at a consistent set of bidirectional estimators.

Recall the path throughput term $T(\bar{p}_i)$ from Equation (16.1), which was defined as

$$T(\bar{p}_i) = \prod_{j=1}^{i-1} f(p_{j+1} \to p_j \to p_{j-1})\, G(p_{j+1} \leftrightarrow p_j).$$

The vertices are ordered such that $p_j$ denotes the $j$th scattering event as seen from the camera.

Sampling techniques based on finding importance-carrying paths trace rays
starting at the light sources to estimate the incident importance at the
light, which means that the vertices will be generated in reverse compared
to the above ordering. As such, the incident and outgoing direction
arguments of the BSDFs will be (incorrectly) reversed unless special
precautions are taken. We thus define the *adjoint BSDF* $f^*$ at
vertex $p_j$, whose only role is to evaluate the original BSDF with
swapped arguments:

$$f^*(p_j, \omega_o, \omega_i) = f(p_j, \omega_i, \omega_o).$$

All sampling steps based on importance transport will then use the adjoint
form of the BSDF rather than its original version. Most BSDFs in `pbrt` are
symmetric so that there is no actual difference between $f$ and
$f^*$. However, certain cases related to shading normals and
light refracting into media with a different index of refraction require
additional attention.

The `TransportMode` enumeration is used to inform such non-symmetric
BSDFs about the transported quantity so that they can correctly switch
between the adjoint and non-adjoint forms.
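The mechanism can be sketched with an illustrative stand-in (not the pbrt interfaces); the point is simply that importance transport evaluates the BSDF with its direction arguments swapped:

```cpp
#include <cassert>

// Sketch of how a TransportMode-style flag selects between a BSDF and its
// adjoint: f*(wo, wi) = f(wi, wo). For a symmetric BSDF the two coincide.
enum class TransportMode { Radiance, Importance };

// A deliberately non-symmetric toy "BSDF" (scalar arguments stand in for
// directions) so the argument swap is observable.
double f(double wo, double wi) { return 2 * wo + wi; }

double EvalBSDF(double wo, double wi, TransportMode mode) {
    // Importance transport traces paths from the lights, so the incident and
    // outgoing arguments arrive reversed; evaluate the adjoint instead.
    return mode == TransportMode::Radiance ? f(wo, wi) : f(wi, wo);
}
```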

#### Non-symmetry Due to Refraction

When light refracts into a material with a higher index of refraction than that of the incident medium, the energy is compressed into a smaller set of angles. This is easy to see for yourself, for instance, by looking at the sky from underwater in a quiet outdoor swimming pool. Because no light can be refracted above the critical angle (approximately $48.6^\circ$ for water), the incident hemisphere of light is squeezed into a considerably smaller subset of the hemisphere, which covers the remaining set of angles. Radiance along rays that do refract must thus increase so that energy is preserved when light passes through the interface. More precisely, the incident and transmitted radiance are related by

$$L_t = \frac{\eta_t^2}{\eta_i^2} \left(1 - F_r(\omega_i)\right) L_i, \tag{16.5}$$

where $\eta_i$ and $\eta_t$ are the refractive indices on the incident and transmitted sides, respectively, and $F_r$ is the Fresnel reflectance. The symmetry relationship satisfied by a BTDF is

$$\frac{f(p, \omega_o, \omega_i)}{\eta_o^2} = \frac{f(p, \omega_i, \omega_o)}{\eta_i^2},$$

and from it we can obtain the adjoint BTDF

$$f^*(p, \omega_o, \omega_i) = f(p, \omega_i, \omega_o) = \frac{\eta_i^2}{\eta_o^2}\, f(p, \omega_o, \omega_i), \tag{16.6}$$

which effectively cancels out the scale factor in Equation (16.5).
With these equations, we can now define the last missing piece in the
implementation of `SpecularTransmission::Sample_f()`. Whenever radiance is
transported over a refractive boundary, we apply the scale factor from
Equation (16.5). For importance transport, we use the
adjoint BTDF, which lacks the scaling factor due to the combination of
Equations (16.5) and (16.6).
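The scale factor selection can be summarized in a small helper (an illustrative sketch, not the pbrt source; here `etaI` and `etaT` denote the indices of refraction on the sides the light arrives from and exits into, as in Equation (16.5)):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the non-symmetry correction for specular transmission. When
// radiance crosses a refractive boundary from a medium with index etaI into
// one with index etaT, the transmitted value picks up the (etaT/etaI)^2
// factor of Equation (16.5); with the adjoint BTDF used for importance
// transport, Equation (16.6), the factor drops out and the scale is 1.
enum class TransportMode { Radiance, Importance };

double TransmissionScale(double etaI, double etaT, TransportMode mode) {
    if (mode == TransportMode::Radiance)
        return (etaT * etaT) / (etaI * etaI);
    return 1;  // adjoint BTDF: no scaling for importance transport
}
```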

A similar adjustment is also needed for the `FourierBSDF::f()` method
in the case of refraction. In this case, `FourierBSDFTable::eta`
provides the relative index of refraction. Recall that this model uses a
convention where the sign of $\mu_i$ is flipped, hence the
expression `muI * muO > 0` can be used to check if light
is being refracted rather than reflected.

<<Update `scale` to account for adjoint light transport>>=

Finally, the transmissive term of the `SeparableBSSRDF` requires a
similar correction when light leaves the medium after a second refraction (the
first one being handled by the material’s `BSDF`).

#### Non-symmetry Due to Shading Normals

Shading normals are another cause of non-symmetric scattering. As previously discussed in Section 3.6.3, shading normals are mainly used to make polygonal surfaces appear smoother than their actual discretization. This entails replacing the “true” geometric normal $n_g$ with an interpolated shading normal $n_s$ whenever the BSDF or the cosine term in the light transport equation is evaluated. Bump or normal mapping can be interpreted as another kind of shading normal, where $n_s$ is obtained from a texture map.

This kind of modification to the normal of a surface interaction causes a corresponding change in the underlying reflectance model, producing an effective BSDF that is generally non-symmetric. Without additional precautions, this non-symmetry can lead to visible artifacts in renderings based on adjoint techniques, including discontinuities in shading effects resembling flat-shaded polygons that interpolated normals were originally meant to avoid.

Recall the light transport equation, (14.13), which relates the incident and outgoing radiance on surfaces:

$$L_o(p, \omega_o) = L_e(p, \omega_o) + \int_{\mathcal{S}^2} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, |\omega_i \cdot n_g|\, d\omega_i.$$

Here, the cosine factor is expressed as an inner product involving $\omega_i$ and the true normal $n_g$ of the underlying geometry. Suppose now that we’d like to replace $n_g$ with the shading normal $n_s$. Instead of modifying the scattering equation, another mathematically equivalent way of expressing this change entails switching to a new BSDF $f_{\mathrm{shade}}$ defined as

$$f_{\mathrm{shade}}(p, \omega_o, \omega_i) = \frac{|\omega_i \cdot n_s|}{|\omega_i \cdot n_g|}\, f(p, \omega_o, \omega_i).$$

The first factor in the above expression makes this BSDF non-symmetric with respect to the arguments $\omega_i$ and $\omega_o$. To avoid artifacts and inconsistencies in bidirectional rendering algorithms, the adjoint BSDF $f^*_{\mathrm{shade}}$ should be used in simulations whenever importance transport is used. It is given by

$$f^*_{\mathrm{shade}}(p, \omega_o, \omega_i) = f_{\mathrm{shade}}(p, \omega_i, \omega_o) = \frac{|\omega_o \cdot n_s|}{|\omega_o \cdot n_g|}\, f(p, \omega_i, \omega_o).$$

Rather than integrating this special case into all `BxDF` subclasses,
we find it cleaner to detect this case in the integrator and apply a
correction factor

$$\frac{|\omega_o \cdot n_s|\,|\omega_i \cdot n_g|}{|\omega_o \cdot n_g|\,|\omega_i \cdot n_s|},$$

which corrects the normal dependence of the non-adjoint version into that of
the adjoint when importance transport is indicated by the `mode`
parameter. This adjustment is implemented by the helper function
`CorrectShadingNormal()` below.
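A standalone sketch of such a helper, patterned after the description above (the vector type and dot product are local stand-ins rather than the pbrt classes), might look like:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the shading-normal correction factor: for importance transport,
// multiply the non-adjoint BSDF by
//   |wo . ns| |wi . ng| / (|wo . ng| |wi . ns|)
// to obtain the adjoint's normal dependence; for radiance transport the
// factor is 1.
enum class TransportMode { Radiance, Importance };
struct Vec3 { double x, y, z; };

static double AbsDot(const Vec3 &a, const Vec3 &b) {
    return std::fabs(a.x * b.x + a.y * b.y + a.z * b.z);
}

double CorrectShadingNormalSketch(const Vec3 &ns, const Vec3 &ng,
                                  const Vec3 &wo, const Vec3 &wi,
                                  TransportMode mode) {
    if (mode != TransportMode::Importance) return 1;
    double denom = AbsDot(wo, ng) * AbsDot(wi, ns);
    if (denom == 0) return 0;  // guard against a grazing degenerate case
    return (AbsDot(wo, ns) * AbsDot(wi, ng)) / denom;
}
```

When the shading and geometric normals coincide, the factor reduces to 1, so the correction only takes effect where interpolated or mapped normals actually disagree with the true geometry.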