10.1 Sampling and Antialiasing

The sampling task from Chapter 7 was a frustrating one since the aliasing problem was known to be unsolvable from the start. The infinite frequency content of geometric edges and hard shadows guarantees aliasing in the final images, no matter how high the image sampling rate. (Our only consolation is that the visual impact of this remaining aliasing can be reduced to unobjectionable levels with a sufficient number of well-placed samples.)

Fortunately, for textures things are not this difficult from the start: either there is a convenient analytic form of the texture function available, which makes it possible to remove excessively high frequencies before sampling it, or it is possible to be careful when evaluating the function so as not to introduce high frequencies in the first place. When this problem is carefully addressed in texture implementations, as is done through the rest of this chapter, there is usually no need for more than one sample per pixel in order to render an image without texture aliasing.

Two problems must be addressed in order to remove aliasing from texture functions:

  1. The sampling rate in texture space must be computed. The screen space sampling rate is known from the image resolution and pixel sampling rate, but here we need to determine the resulting sampling rate on a surface in the scene in order to find the rate at which the texture function is being sampled.
  2. Given the texture sampling rate, sampling theory must be applied to guide the computation of a texture value that doesn’t have higher frequency variation than can be represented by the sampling rate (e.g., by removing excess frequencies beyond the Nyquist limit from the texture function).

These two issues will be addressed in turn throughout the rest of this section.

10.1.1 Finding the Texture Sampling Rate

Consider an arbitrary texture function that is a function of position, $T(p)$, defined on a surface in the scene. If we ignore the complications introduced by visibility issues—the possibility that another object may occlude the surface at nearby image samples or that the surface may have a limited extent on the image plane—this texture function can also be expressed as a function over points $(x, y)$ on the image plane, $T(f(x, y))$, where $f(x, y)$ is the function that maps image points to points on the surface. Thus, $T(f(x, y))$ gives the value of the texture function as seen at image position $(x, y)$.

As a simple example of this idea, consider a 2D texture function $T(s, t)$ applied to a quadrilateral that is perpendicular to the $z$ axis and has corners at the world space points $(0, 0, 0)$, $(1, 0, 0)$, $(1, 1, 0)$, and $(0, 1, 0)$. If an orthographic camera is placed looking down the $z$ axis such that the quadrilateral precisely fills the image plane and if points $p$ on the quadrilateral are mapped to 2D $(s, t)$ texture coordinates by

$$s = p_x \qquad t = p_y,$$

then the relationship between $(s, t)$ and screen $(x, y)$ pixels is straightforward:

$$s = \frac{x}{x_r} \qquad t = \frac{y}{y_r},$$

where the overall image resolution is $(x_r, y_r)$ (Figure 10.2). Thus, given a sample spacing of one pixel in the image plane, the sample spacing in $(s, t)$ texture parameter space is $(1/x_r, 1/y_r)$, and the texture function must remove any detail at a higher frequency than can be represented at that sampling rate.

Figure 10.2: If a quadrilateral is viewed with an orthographic camera such that the quadrilateral precisely fills the image plane, it’s easy to compute the relationship between the sampling rate in $(x, y)$ pixel coordinates and the texture sampling rate.

This relationship between pixel coordinates and texture coordinates, and thus the relationship between their sampling rates, is the key bit of information that determines the maximum frequency content allowable in the texture function. As a slightly more complex example, given a triangle with $(s, t)$ texture coordinates at the vertices and viewed with a perspective projection, it’s possible to analytically find the differences in $s$ and $t$ across the sample points on the image plane. This is the basis of basic texture map antialiasing in graphics processors.

For more complex scene geometry, camera projections, and mappings to texture coordinates, it is much more difficult to precisely determine the relationship between image positions and texture parameter values. Fortunately, for texture antialiasing, we don’t need to be able to evaluate $f(x, y)$ for arbitrary $(x, y)$ but just need to find the relationship between changes in pixel sample position and the resulting change in texture sample position at a particular point on the image. This relationship is given by the partial derivatives of this function, $\partial f/\partial x$ and $\partial f/\partial y$. For example, these can be used to find a first-order approximation to the value of $f$,

$$f(x', y') \approx f(x, y) + (x' - x)\frac{\partial f}{\partial x} + (y' - y)\frac{\partial f}{\partial y}.$$

If these partial derivatives are changing slowly with respect to the distances $x' - x$ and $y' - y$, this is a reasonable approximation. More importantly, the values of these partial derivatives give an approximation to the change in texture sample position for a shift of one pixel in the $x$ and $y$ directions, respectively, and thus directly yield the texture sampling rate. For example, in the previous quadrilateral example, $\partial s/\partial x = 1/x_r$, $\partial s/\partial y = 0$, $\partial t/\partial x = 0$, and $\partial t/\partial y = 1/y_r$.
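The quadrilateral example can be carried through numerically. The following standalone sketch (plain C++ rather than pbrt’s types; the resolution values are arbitrary) maps a pixel position to $(s, t)$ for the orthographic quad and recovers the texture-space derivatives by one-pixel forward differences:

```cpp
#include <cassert>
#include <cmath>

// Orthographic quad example: (s, t) = (x / xr, y / yr).
struct ST { double s, t; };

ST PixelToST(double x, double y, double xr, double yr) {
    return { x / xr, y / yr };
}

// Forward-difference estimates of ds/dx and dt/dy for a one-pixel offset,
// matching the ray-differential idea of tracing offset rays.
double DsDx(double x, double y, double xr, double yr) {
    return PixelToST(x + 1, y, xr, yr).s - PixelToST(x, y, xr, yr).s;
}
double DtDy(double x, double y, double xr, double yr) {
    return PixelToST(x, y + 1, xr, yr).t - PixelToST(x, y, xr, yr).t;
}
```

Because this mapping is linear, the forward differences are exact; for general mappings they are only a first-order approximation, which is precisely what the ray differential machinery estimates.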

The key to finding the values of these partial derivatives in the general case lies in the RayDifferential structure, which was defined in Section 2.5.1. This structure is initialized for each camera ray by the Camera::GenerateRayDifferential() method; it contains not only the ray actually being traced through the scene but also two additional rays, one offset horizontally one pixel sample from the camera ray and the other offset vertically by one pixel sample. All of the geometric ray intersection routines use only the main camera ray for their computations; the auxiliary rays are ignored (this is easy to do because RayDifferential is a subclass of Ray).

Here we will use the offset rays to estimate the partial derivatives of the mapping $p(x, y)$ from image position to world space position and the partial derivatives of the mappings $u(x, y)$ and $v(x, y)$ from $(x, y)$ to $(u, v)$ parametric coordinates, giving the partial derivatives of world space position $\partial p/\partial x$ and $\partial p/\partial y$ and the partial derivatives of $(u, v)$ parametric coordinates $\partial u/\partial x$, $\partial v/\partial x$, $\partial u/\partial y$, and $\partial v/\partial y$. In Section 10.2, we will see how these can be used to compute the screen space derivatives of arbitrary quantities based on $p$ or $(u, v)$ and consequently the sampling rates of these quantities. The values of these partial derivatives at the intersection point are stored in the SurfaceInteraction structure. They are declared as mutable, since they are set in a method that takes a const instance of that object.

<<SurfaceInteraction Public Data>>+=
mutable Vector3f dpdx, dpdy;
mutable Float dudx = 0, dvdx = 0, dudy = 0, dvdy = 0;

The SurfaceInteraction::ComputeDifferentials() method computes these values. It is called by SurfaceInteraction::ComputeScatteringFunctions() before the Material’s ComputeScatteringFunctions() method is called so that these values will be available for any texture evaluation routines that are called by the material. Because ray differentials aren’t available for all rays traced by the system (e.g., rays starting from light sources traced for photon mapping or bidirectional path tracing), the hasDifferentials field of the RayDifferential must be checked before these computations are performed. If the differentials are not present, then the derivatives are all set to zero (which will eventually lead to unfiltered point sampling of textures).

<<SurfaceInteraction Method Definitions>>+=
void SurfaceInteraction::ComputeDifferentials(
        const RayDifferential &ray) const {
    if (ray.hasDifferentials) {
        <<Estimate screen space change in p and (u,v)>>
    } else {
        dudx = dvdx = 0;
        dudy = dvdy = 0;
        dpdx = dpdy = Vector3f(0, 0, 0);
    }
}

The key to computing these estimates is the assumption that the surface is locally flat with respect to the sampling rate at the point being shaded. This is a reasonable approximation in practice, and it is hard to do much better. Because ray tracing is a point-sampling technique, we have no additional information about the scene in between the rays we have traced. For highly curved surfaces or at silhouette edges, this approximation can break down, though this is rarely a source of noticeable error in practice.

For this approximation, we need the plane through the point $p$ intersected by the main ray that is tangent to the surface. This plane is given by the implicit equation

$$ax + by + cz + d = 0,$$

where $a = n_x$, $b = n_y$, $c = n_z$, and $d = -(\mathbf{n} \cdot p)$. We can then compute the intersection points $p_x$ and $p_y$ between the auxiliary rays $r_x$ and $r_y$ and this plane (Figure 10.3). These new points give an approximation to the partial derivatives of position on the surface, $\partial p/\partial x$ and $\partial p/\partial y$, based on forward differences:

$$\frac{\partial p}{\partial x} \approx p_x - p, \qquad \frac{\partial p}{\partial y} \approx p_y - p.$$

Figure 10.3: By approximating the local surface geometry at the intersection point with the tangent plane through $p$, approximations to the points at which the auxiliary rays $r_x$ and $r_y$ would intersect the surface can be found by computing their intersection points $p_x$ and $p_y$ with the tangent plane.

Because the differential rays are offset one pixel sample in each direction, there’s no need to divide these differences by a $\Delta$ value, since $\Delta = 1$.

<<Estimate screen space change in p and (u,v)>>=
<<Compute auxiliary intersection points with plane>>
dpdx = px - p;
dpdy = py - p;
<<Compute (u,v) offsets at auxiliary points>>

The ray–plane intersection algorithm described in Section 3.1.2 gives the $t$ value where a ray with origin $\mathrm{o}$ and direction $\mathbf{d}$ intersects a plane described by $ax + by + cz + d = 0$:

$$t = \frac{-((a, b, c) \cdot \mathrm{o}) - d}{(a, b, c) \cdot \mathbf{d}}.$$

To compute this value for the two auxiliary rays, the plane’s $d$ coefficient is computed first. It isn’t necessary to compute the $a$, $b$, and $c$ coefficients, since they’re available in n. We can then apply the formula directly.

<<Compute auxiliary intersection points with plane>>=
Float d = -Dot(n, Vector3f(p.x, p.y, p.z));
Float tx = (-Dot(n, Vector3f(ray.rxOrigin)) - d) / Dot(n, ray.rxDirection);
Point3f px = ray.rxOrigin + tx * ray.rxDirection;
Float ty = (-Dot(n, Vector3f(ray.ryOrigin)) - d) / Dot(n, ray.ryDirection);
Point3f py = ray.ryOrigin + ty * ray.ryDirection;

Using the positions $p_x$ and $p_y$, an approximation to their respective $(u, v)$ coordinates can be found by taking advantage of the fact that the surface’s partial derivatives $\partial p/\partial u$ and $\partial p/\partial v$ form a (not necessarily orthogonal) coordinate system on the plane and that the coordinates of the auxiliary intersection points in terms of this coordinate system are their coordinates with respect to the $(u, v)$ parameterization (Figure 10.4).

Figure 10.4: An estimate of the difference in $(u, v)$ parametric coordinates from $p$ to $p'$ can be found by finding the coordinates of $p'$ with respect to the coordinate system defined by $p$, $\partial p/\partial u$, and $\partial p/\partial v$.

Given a position $p'$ on the plane, we can compute its position with respect to the coordinate system by

$$p' = p + \Delta_u \frac{\partial p}{\partial u} + \Delta_v \frac{\partial p}{\partial v},$$

or, equivalently,

$$\begin{pmatrix} p'_x - p_x \\ p'_y - p_y \\ p'_z - p_z \end{pmatrix} = \begin{pmatrix} \partial p_x/\partial u & \partial p_x/\partial v \\ \partial p_y/\partial u & \partial p_y/\partial v \\ \partial p_z/\partial u & \partial p_z/\partial v \end{pmatrix} \begin{pmatrix} \Delta_u \\ \Delta_v \end{pmatrix}.$$

A solution to this linear system of equations for one of the auxiliary points $p' = p_x$ or $p' = p_y$ gives the corresponding screen space partial derivatives $(\partial u/\partial x, \partial v/\partial x)$ or $(\partial u/\partial y, \partial v/\partial y)$, respectively.

This linear system has three equations with two unknowns—that is, it’s overconstrained. We need to be careful since one of the equations may be degenerate—for example, if $\partial p/\partial u$ and $\partial p/\partial v$ are in the $xy$ plane such that their $z$ components are both zero, then the third equation will be degenerate. Therefore, we’d like to solve the system using two equations that don’t give a degenerate system. An easy way to do this is to take the cross product of $\partial p/\partial u$ and $\partial p/\partial v$, see which coordinate of the result has the largest magnitude, and use the other two. Their cross product is already available in n, so using this approach is straightforward. Even after all this, it may happen that the linear system has no solution (usually due to the partial derivatives not forming a coordinate system on the plane). In that case, all that can be done is to return arbitrary values.

<<Compute (u,v) offsets at auxiliary points>>=
<<Choose two dimensions to use for ray offset computation>>
<<Initialize A, Bx, and By matrices for offset computation>>
if (!SolveLinearSystem2x2(A, Bx, &dudx, &dvdx))
    dudx = dvdx = 0;
if (!SolveLinearSystem2x2(A, By, &dudy, &dvdy))
    dudy = dvdy = 0;

<<Choose two dimensions to use for ray offset computation>>=
int dim[2];
if (std::abs(n.x) > std::abs(n.y) && std::abs(n.x) > std::abs(n.z)) {
    dim[0] = 1; dim[1] = 2;
} else if (std::abs(n.y) > std::abs(n.z)) {
    dim[0] = 0; dim[1] = 2;
} else {
    dim[0] = 0; dim[1] = 1;
}

<<Initialize A, Bx, and By matrices for offset computation>>=
Float A[2][2] = { { dpdu[dim[0]], dpdv[dim[0]] },
                  { dpdu[dim[1]], dpdv[dim[1]] } };
Float Bx[2] = { px[dim[0]] - p[dim[0]], px[dim[1]] - p[dim[1]] };
Float By[2] = { py[dim[0]] - p[dim[0]], py[dim[1]] - p[dim[1]] };

10.1.2 Filtering Texture Functions

It is necessary to remove frequencies in texture functions that are past the Nyquist limit for the texture sampling rate. The goal is to compute, with as few approximations as possible, the result of the ideal texture resampling process, which says that in order to evaluate $T(f(x, y))$ without aliasing, we must first band-limit it, removing frequencies beyond the Nyquist limit by convolving it with the sinc filter:

$$T'_b(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \operatorname{sinc}(x')\, \operatorname{sinc}(y')\, T'(f(x - x', y - y'))\, \mathrm{d}x'\, \mathrm{d}y'.$$

The band-limited function in turn should then be convolved with the pixel filter $g(x, y)$ centered at the $(x, y)$ point on the screen at which we want to evaluate the texture function:

$$T'_f(x, y) = \int_{-\mathrm{yWidth}/2}^{\mathrm{yWidth}/2} \int_{-\mathrm{xWidth}/2}^{\mathrm{xWidth}/2} g(x', y')\, T'_b(x - x', y - y')\, \mathrm{d}x'\, \mathrm{d}y'.$$

This gives the theoretically perfect value for the texture as projected onto the screen.

In practice, there are many simplifications that can be made to this process, with little reduction in visual quality. For example, a box filter may be used for the band-limiting step, and the second step is usually ignored completely, effectively acting as if the pixel filter were a box filter, which makes it possible to do the antialiasing work completely in texture space and simplifies the implementation significantly. The EWA filtering algorithm in Section 10.4.5 is a notable exception in that it assumes a Gaussian pixel filter.

The box filter is easy to use, since it can be applied analytically by computing the average of the texture function over the appropriate region. Intuitively, this is a reasonable approach to the texture filtering problem, and it can be computed directly for many texture functions. Indeed, through the rest of this chapter, we will often use a box filter to average texture function values between samples and informally use the term filter region to describe the area being averaged over. This is the most common approach when filtering texture functions.

Even the box filter, with all of its shortcomings, gives acceptable results for texture filtering in many cases. One factor that helps is the fact that a number of samples are usually taken in each pixel. Thus, even if the filtered texture values used in each one are sub-optimal, once they are filtered by the pixel reconstruction filter, the end result generally doesn’t suffer too much.
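To make box filtering by averaging concrete, the following standalone sketch (not pbrt code; the sinusoidal texture and its frequency parameter are hypothetical) filters a simple 1D texture analytically over the filter region using its antiderivative:

```cpp
#include <cassert>
#include <cmath>

const double Pi = 3.14159265358979323846;

// A simple 1D texture with frequency k cycles per unit of s.
double Texture(double s, double k) { return std::sin(2 * Pi * k * s); }

// Box-filtered texture value: the average of Texture over the filter
// region [s - w/2, s + w/2], computed exactly from the antiderivative
// F(x) = -cos(2 pi k x) / (2 pi k).
double BoxFiltered(double s, double k, double w) {
    auto F = [k](double x) { return -std::cos(2 * Pi * k * x) / (2 * Pi * k); };
    return (F(s + w / 2) - F(s - w / 2)) / w;
}
```

As the filter width reaches a full period of the texture, the average goes to zero, the texture’s mean value; for widths much smaller than a period, the filtered value approaches the point-sampled one.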

An alternative to using the box filter to filter texture functions is to use the observation that the effect of the ideal sinc filter is to let frequency components below the Nyquist limit pass through unchanged but to remove frequencies past it. Therefore, if we know the frequency content of the texture function (e.g., if it is a sum of terms, each one with known frequency content), then if we replace the high-frequency terms with their average values, we are effectively doing the work of the sinc prefilter. This approach is known as clamping and is the basis for antialiasing in the textures based on the noise function in Section 10.6.
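A minimal sketch of clamping, assuming a hypothetical texture built as a sum of sinusoidal octaves (pbrt’s actual clamping applies to the noise-based textures of Section 10.6): each term whose frequency exceeds the Nyquist limit implied by the local texture sampling rate is replaced by its average value, which is zero for a zero-mean sine.

```cpp
#include <cassert>
#include <cmath>

const double Pi = 3.14159265358979323846;

// Sum of octaves at frequencies 1, 2, 4, ...; terms past the Nyquist
// limit for the given texture-space sample spacing are replaced by
// their average value (0 for a sine)--i.e., clamped.
double ClampedTexture(double s, int nOctaves, double sampleSpacing) {
    double nyquist = 1 / (2 * sampleSpacing);
    double value = 0;
    for (int i = 0; i < nOctaves; ++i) {
        double freq = std::pow(2.0, i);
        if (freq > nyquist) break;  // this octave would alias; drop it
        value += std::sin(2 * Pi * freq * s) / freq;
    }
    return value;
}
```

A smoother variant fades the highest surviving octave out gradually rather than cutting it off abruptly, which avoids visible transitions where the sampling rate changes across a surface.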

Finally, for texture functions where none of these techniques is easily applied, a final option is supersampling—the function is evaluated and filtered at multiple locations near the main evaluation point, thus increasing the sampling rate in texture space. If a box filter is used to filter these sample values, this is equivalent to averaging the value of the function. This approach can be expensive if the texture function is costly to evaluate, and, as with image sampling, a very large number of samples may be needed to remove aliasing. Although this is a brute-force solution, it is still more efficient than increasing the image sampling rate, since it doesn’t incur the cost of tracing more rays through the scene.
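Supersampling with a box filter amounts to averaging point samples of the texture over the filter region. A standalone sketch (the texture is passed as a callable; stratum-center samples stand in for pbrt’s jittered samplers):

```cpp
#include <cassert>
#include <cmath>

// Average n x n stratified point samples of a texture over a filter
// region of extent (wx, wy) centered at (s, t); with a box filter this
// is simply the mean of the point samples.
template <typename TexFn>
double Supersample(TexFn tex, double s, double t,
                   double wx, double wy, int n) {
    double sum = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            // Sample at stratum centers; jittering the positions within
            // each stratum would further reduce structured aliasing.
            double ds = ((i + 0.5) / n - 0.5) * wx;
            double dt = ((j + 0.5) / n - 0.5) * wy;
            sum += tex(s + ds, t + dt);
        }
    return sum / (n * n);
}
```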

10.1.3 Ray Differentials for Specular Reflection and Transmission

Given the effectiveness of ray differentials for finding filter regions for texture antialiasing for camera rays, it is useful to extend the method to make it possible to determine texture space sampling rates for objects that are seen indirectly via specular reflection or refraction; objects seen in mirrors, for example, should have no more texture aliasing than directly visible objects. Igehy (1999) developed an elegant solution to the problem of how to find the appropriate differential rays for specular reflection and refraction, which is the approach used in pbrt.

Figure 10.5 illustrates the difference that proper texture filtering for specular reflection and transmission can make: it shows a glass ball and a mirrored ball on a plane with a texture map containing high-frequency components. Ray differentials ensure that the images of the texture seen via reflection and refraction from the balls are free of aliasing artifacts. Here, ray differentials eliminate aliasing without excessively blurring the texture.

Figure 10.5: (1) Tracking ray differentials for reflected and refracted rays ensures that the image map texture seen in the balls is filtered to avoid aliasing. The left ball is glass, exhibiting reflection and refraction, and the right ball is a mirror, just showing reflection. Note that the texture is well filtered over both of the balls. (2) shows the aliasing artifacts that are present if ray differentials aren’t used.

In order to compute the reflected or transmitted ray differentials at a surface intersection point, we need an approximation to the rays that would have been traced at the intersection points for the two offset rays in the ray differential that hit the surface (Figure 10.6). The new ray for the main ray is computed by the BSDF, so here we only need to compute the outgoing rays for the r Subscript x and r Subscript y differentials.

Figure 10.6: The specular reflection formula gives the direction of the reflected ray at a point on a surface. An offset ray for a ray differential r prime (dashed line) will generally intersect the surface at a different point and be reflected in a different direction. The new direction is affected by both the different surface normal at the point as well as the offset ray’s different incident direction. The computation to find the reflected direction for the offset ray in pbrt estimates the change in reflected direction as a function of image space position and approximates the ray differential’s direction with the main ray’s direction added to the estimated change in direction.

For both reflection and refraction, the origin of each differential ray is easily found. The SurfaceInteraction::ComputeDifferentials() method previously computed approximations for how much the surface position changes with respect to $(x, y)$ position on the image plane, $\partial p/\partial x$ and $\partial p/\partial y$. Adding these offsets to the intersection point of the main ray gives approximate origins for the new rays. If the incident ray doesn’t have differentials, then it’s impossible to compute reflected ray differentials and this step is skipped.

<<Compute ray differential rd for specular reflection>>=
RayDifferential rd = isect.SpawnRay(wi);
if (ray.hasDifferentials) {
    rd.hasDifferentials = true;
    rd.rxOrigin = isect.p + isect.dpdx;
    rd.ryOrigin = isect.p + isect.dpdy;
    <<Compute differential reflected directions>>
}

Finding the directions of these rays is slightly trickier. Igehy (1999) observed that if we know how much the reflected direction $\omega_i$ changes with respect to a shift of a pixel sample in the $x$ and $y$ directions on the image plane, we can use this information to approximate the direction of the offset rays. For example, the direction for the ray offset in $x$ is

$$\omega \approx \omega_i + \frac{\partial \omega_i}{\partial x}.$$

Recall from Equation (8.5) that for a general world space surface normal and outgoing direction, the direction for perfect specular reflection is

$$\omega_i = -\omega_o + 2(\omega_o \cdot \mathbf{n})\,\mathbf{n}.$$

Fortunately, the partial derivatives of this expression are easily computed:

$$\begin{aligned} \frac{\partial \omega_i}{\partial x} &= \frac{\partial}{\partial x}\left( -\omega_o + 2(\omega_o \cdot \mathbf{n})\,\mathbf{n} \right) \\ &= -\frac{\partial \omega_o}{\partial x} + 2\left( (\omega_o \cdot \mathbf{n}) \frac{\partial \mathbf{n}}{\partial x} + \frac{\partial (\omega_o \cdot \mathbf{n})}{\partial x}\,\mathbf{n} \right). \end{aligned}$$

Using the properties of the dot product, it can be shown that

$$\frac{\partial (\omega_o \cdot \mathbf{n})}{\partial x} = \frac{\partial \omega_o}{\partial x} \cdot \mathbf{n} + \omega_o \cdot \frac{\partial \mathbf{n}}{\partial x}.$$

The value of $\partial \omega_o/\partial x$ can be found from the difference between the direction of the ray differential’s main ray and the direction of the $r_x$ offset ray, and all of the other necessary quantities are readily available from the SurfaceInteraction, so the implementation of this computation for the partial derivatives in $x$ and $y$ is straightforward.

<<Compute differential reflected directions>>=
Normal3f dndx = isect.shading.dndu * isect.dudx +
                isect.shading.dndv * isect.dvdx;
Normal3f dndy = isect.shading.dndu * isect.dudy +
                isect.shading.dndv * isect.dvdy;
Vector3f dwodx = -ray.rxDirection - wo, dwody = -ray.ryDirection - wo;
Float dDNdx = Dot(dwodx, ns) + Dot(wo, dndx);
Float dDNdy = Dot(dwody, ns) + Dot(wo, dndy);
rd.rxDirection = wi - dwodx +
    2.f * Vector3f(Dot(wo, ns) * dndx + dDNdx * ns);
rd.ryDirection = wi - dwody +
    2.f * Vector3f(Dot(wo, ns) * dndy + dDNdy * ns);
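The dot-product derivative identity used in this computation can be checked numerically. The following standalone sketch (hypothetical direction and normal functions of $x$, not pbrt code) compares the product-rule expansion against a central finite difference:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
double Dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Hypothetical outgoing direction and normal as functions of screen x,
// with hand-computed derivatives.
Vec3 Omega(double x)  { return { std::cos(x), std::sin(x), 1 }; }
Vec3 dOmega(double x) { return { -std::sin(x), std::cos(x), 0 }; }
Vec3 N(double x)      { return { x, 1, x * x }; }
Vec3 dN(double x)     { return { 1, 0, 2 * x }; }

// Analytic derivative of Omega(x) . N(x) from the product rule.
double dDotAnalytic(double x) {
    return Dot(dOmega(x), N(x)) + Dot(Omega(x), dN(x));
}

// Central finite difference of the same quantity.
double dDotNumeric(double x, double h = 1e-5) {
    return (Dot(Omega(x + h), N(x + h)) - Dot(Omega(x - h), N(x - h))) / (2 * h);
}
```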

A similar process of differentiating the equation for the direction of a specularly transmitted ray, Equation (8.8), gives the equation to find the differential change in the transmitted direction. We won’t include the derivation or our implementation here, but refer the interested reader to the original paper and to the pbrt source code, respectively.