14.2 Sampling Light Sources

Being able to take a point and sample the directions around it where direct illumination may be incident is another important sampling operation for rendering. Consider a diffuse surface illuminated by a small spherical area light source (Figure 14.7): sampling directions using the BSDF’s sampling distribution is likely to be very inefficient because the light is only visible along a small cone of directions from the point. A much better approach is to instead use a sampling distribution that is based on the light source. For example, the sampling routine could choose from among only those directions where the sphere is potentially visible. This section introduces the Light::Sample_Li() method, which allows this operation to be implemented for the lights in pbrt.

Figure 14.7: An effective sampling strategy for choosing an incident direction from a point $p$ for direct lighting computations is to use the light source to define a distribution of directions with respect to solid angle at the point. Here, a small spherical light source is illuminating the point. The cone of directions that the sphere subtends is a much better sampling distribution to use than a uniform distribution over the hemisphere, for example.

There are two sampling methods that Lights must implement. The first, Sample_Li(), samples an incident direction at a point in the scene along which illumination from the light may be arriving. The second, Light::Sample_Le(), will be defined in Section 16.1.2; it returns a light-carrying ray leaving the light source. Both have corresponding methods that return the PDF for an incident direction or a ray, respectively.

The Light::Sample_Li() method was first introduced in Section 12.2. For reference, here is its declaration:

virtual Spectrum Sample_Li(const Interaction &ref, const Point2f &u,
                           Vector3f *wi, Float *pdf,
                           VisibilityTester *vis) const = 0;

We can now understand the meaning of its u and pdf parameters: u provides a 2D sample value for sampling the light source, and the PDF for sampling the chosen direction is returned in *pdf.

The Light’s Pdf_Li() method returns the probability density with respect to solid angle for the light’s Sample_Li() method to sample the direction wi from the reference point ref.

<<Light Interface>>+=  
virtual Float Pdf_Li(const Interaction &ref, const Vector3f &wi) const = 0;

14.2.1 Lights with Singularities

Just as with perfect specular reflection and transmission, light sources that are defined in terms of delta distributions fit naturally into this sampling framework, although they require care on the part of the routines that call their sampling methods, since there are implicit delta distributions in the radiance and PDF values that they return. For the most part, these delta distributions naturally cancel out when estimators are evaluated, although multiple importance sampling code must be aware of this case, just as was the case with BSDFs. (The IsDeltaLight() utility function can be used to see if a light source is described by a delta distribution.)

Point lights are described by a delta distribution such that they only illuminate a receiving point from a single direction. Thus, the sampling problem is trivial. The PointLight::Sample_Li() method was already implemented in Section 12.3, where we glossed over the nuance that the method performs Monte Carlo sampling of a delta distribution and thus always returns a single direction and doesn’t need the random sample values.

Due to the delta distribution, the PointLight::Pdf_Li() method returns 0. This value reflects the fact that there is no chance for some other sampling process to randomly generate a direction that would intersect an infinitesimal light source.

<<PointLight Method Definitions>>+=  
Float PointLight::Pdf_Li(const Interaction &, const Vector3f &) const {
    return 0;
}

The Sample_Li() methods for SpotLights, ProjectionLights, GonioPhotometricLights, and DistantLights were also previously implemented in Sections 12.3.1, 12.3.2, 12.3.3, and 12.4, respectively. All also return 0 from their Pdf_Li() methods.

14.2.2 Sampling Shapes

In pbrt, area lights are defined by attaching an emission profile to a Shape. Therefore, in order to sample incident illumination from such light sources, it is useful to be able to generate samples over the surface of shapes. To make this possible, we will add sampling methods to the Shape class that sample points on their surfaces. The AreaLight sampling methods to be defined shortly will in turn call these methods.

There are two shape sampling methods, both named Shape::Sample(). The first chooses points on the surface of the shape using a sampling distribution with respect to surface area and returns the local geometric information about the sampled point in an Interaction. In addition to initializing the position p and normal n of the sampled point, the Sample() method should set Interaction::pError with bounds on the floating-point rounding error in the computed p value. pError is used to compute robust origins for rays leaving the surface of the light—see Section 3.9.5.

<<Shape Interface>>+=  
virtual Interaction Sample(const Point2f &u) const = 0;

Shapes almost always sample uniformly by area over their surface. Therefore, we provide a default implementation of the Shape::Pdf() method that returns the PDF for this sampling approach: one over the surface area.

<<Shape Interface>>+=  
virtual Float Pdf(const Interaction &) const { return 1 / Area(); }

The second shape sampling method takes the point from which the surface of the shape is being integrated over as a parameter. This method is particularly useful for lighting, since the caller can pass in the point to be lit and allow shape implementations to ensure that they only sample the portion of the shape that is potentially visible from that point. The default implementation ignores the additional point and calls the earlier sampling method.

<<Shape Interface>>+= 
virtual Interaction Sample(const Interaction &ref, const Point2f &u) const {
    return Sample(u);
}

Unlike the first Shape sampling method, which generates points on the shape according to a probability density with respect to surface area on the shape, the second one uses a density with respect to solid angle from the reference point ref. This difference stems from the fact that the area light sampling routines evaluate the direct lighting integral as an integral over directions from the reference point—expressing these sampling densities with respect to solid angle at the point is more convenient. Therefore, the standard implementation of the second Pdf() method here transforms the density from one defined over area to one defined over solid angle.

<<Shape Method Definitions>>+= 
Float Shape::Pdf(const Interaction &ref, const Vector3f &wi) const {
    <<Intersect sample ray with area light geometry>>
    <<Convert light sample weight to solid angle measure>>
    return pdf;
}

Given a reference point and direction $\omega_i$, the Pdf() method determines if the ray from the point in direction $\omega_i$ intersects the shape. If the ray doesn't intersect the shape at all, the probability that the shape would have chosen the direction $\omega_i$ can be assumed to be 0 (an effective sampling algorithm wouldn't generate such a sample, and in any case the light will not contribute to such directions, so using a zero probability density is fine).

Note that this ray intersection test is only between the ray and the single shape under consideration. The rest of the scene geometry is ignored, and thus the intersection test is fairly efficient.

<<Intersect sample ray with area light geometry>>= 
Ray ray = ref.SpawnRay(wi);
Float tHit;
SurfaceInteraction isectLight;
if (!Intersect(ray, &tHit, &isectLight, false))
    return 0;

To compute the value of the PDF with respect to solid angle from the reference point, the method starts by computing the PDF with respect to surface area. Conversion from a density with respect to area to a density with respect to solid angle requires division by the factor

$$\frac{\mathrm{d}\omega_i}{\mathrm{d}A} = \frac{\cos\theta_o}{r^2},$$

where $\theta_o$ is the angle between the direction of the ray from the point on the light to the reference point and the light's surface normal, and $r$ is the distance between the point on the light and the point being shaded (recall the discussion about transforming between area and directional integration domains in Section 5.5).
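This conversion can be sketched as a small standalone helper. The Vec3 struct and function names below are illustrative stand-ins, not pbrt's types:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for a 3D point/vector type; illustration only.
struct Vec3 { double x, y, z; };

static double Dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static double DistanceSquared(const Vec3 &a, const Vec3 &b) {
    Vec3 d{a.x - b.x, a.y - b.y, a.z - b.z};
    return Dot(d, d);
}

// Convert a PDF with respect to surface area on the light into a PDF with
// respect to solid angle at the reference point: divide by dw/dA = cos(theta_o)/r^2,
// i.e., multiply by r^2 / cos(theta_o).
double AreaToSolidAnglePdf(double areaPdf, const Vec3 &ref, const Vec3 &pLight,
                           const Vec3 &nLight, const Vec3 &wi) {
    double cosThetaO = std::abs(Dot(nLight, Vec3{-wi.x, -wi.y, -wi.z}));
    if (cosThetaO == 0) return 0;  // grazing direction: treat as unsampleable
    return areaPdf * DistanceSquared(ref, pLight) / cosThetaO;
}
```

For a light point two units away along the axis with its normal facing the reference point, an area PDF of 1/4 converts to a solid-angle PDF of (1/4)·4/1 = 1.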

<<Convert light sample weight to solid angle measure>>= 
Float pdf = DistanceSquared(ref.p, isectLight.p) /
            (AbsDot(isectLight.n, -wi) * Area());

Sampling Disks

The Disk sampling method uses the concentric disk sampling function to find a point on the unit disk and then scales and offsets this point to lie on the disk of a given radius and height. Note that this method does not account for partial disks due to Disk::innerRadius being nonzero or Disk::phiMax being less than $2\pi$. Fixing this bug is left for an exercise at the end of the chapter.

Because the object space z value of the sampled point is equal to Disk::height, zero-extent bounds can be used for the error bounds for rays leaving the sampled point, just as with ray–disk intersections. (These bounds may later be expanded by the object to world transformation, however.)

<<Disk Method Definitions>>+= 
Interaction Disk::Sample(const Point2f &u) const {
    Point2f pd = ConcentricSampleDisk(u);
    Point3f pObj(pd.x * radius, pd.y * radius, height);
    Interaction it;
    it.n = Normalize((*ObjectToWorld)(Normal3f(0, 0, 1)));
    if (reverseOrientation) it.n *= -1;
    it.p = (*ObjectToWorld)(pObj, Vector3f(0, 0, 0), &it.pError);
    return it;
}
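The concentric mapping that Disk::Sample() relies on can be sketched standalone. This is a simplified version under the usual formulation of Shirley's square-to-disk warp; names here are illustrative, not pbrt's:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Sketch of concentric disk sampling: map the unit square to [-1,1]^2, then
// warp square "wedges" onto disk sectors so that area is preserved and the
// resulting points are uniformly distributed over the unit disk.
std::pair<double, double> ConcentricSampleDiskSketch(double u0, double u1) {
    const double Pi = 3.14159265358979323846;
    // Map [0,1)^2 to [-1,1]^2 and handle the degenerate center point.
    double ox = 2 * u0 - 1, oy = 2 * u1 - 1;
    if (ox == 0 && oy == 0) return {0, 0};
    double r, theta;
    if (std::abs(ox) > std::abs(oy)) {
        r = ox;
        theta = (Pi / 4) * (oy / ox);
    } else {
        r = oy;
        theta = (Pi / 2) - (Pi / 4) * (ox / oy);
    }
    return {r * std::cos(theta), r * std::sin(theta)};
}
```

The sample (0.5, 0.5) maps to the disk center, and every output lies inside the unit disk; Disk::Sample() then scales such a point by the disk's radius.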

Sampling Cylinders

Uniform sampling on cylinders is straightforward. The height and phi value are sampled uniformly. Intuitively, it can be understood that this approach works because a cylinder is just a rolled-up rectangle.

<<Cylinder Method Definitions>>+= 
Interaction Cylinder::Sample(const Point2f &u) const {
    Float z = Lerp(u[0], zMin, zMax);
    Float phi = u[1] * phiMax;
    Point3f pObj = Point3f(radius * std::cos(phi), radius * std::sin(phi), z);
    Interaction it;
    it.n = Normalize((*ObjectToWorld)(Normal3f(pObj.x, pObj.y, 0)));
    if (reverseOrientation) it.n *= -1;
    <<Reproject pObj to cylinder surface and compute pObjError>>
    it.p = (*ObjectToWorld)(pObj, pObjError, &it.pError);
    return it;
}

If the system’s std::sin() and std::cos() functions compute results that are as precise as possible (i.e., they always return the closest floating-point value to the fully precise result), then it can be shown that the x and y components of pObj are within a factor of $\gamma_3$ of the actual surface of the cylinder. While many implementations of those functions are that precise, not all are, especially on GPUs. To be safe, the implementation here reprojects the sampled point to lie on the cylinder. In this case, the error bounds are the same as were derived for reprojected ray–cylinder intersection points in Section 3.9.4.

<<Reproject pObj to cylinder surface and compute pObjError>>= 
Float hitRad = std::sqrt(pObj.x * pObj.x + pObj.y * pObj.y);
pObj.x *= radius / hitRad;
pObj.y *= radius / hitRad;
Vector3f pObjError = gamma(3) * Abs(Vector3f(pObj.x, pObj.y, 0));
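The uniform height-and-angle approach, together with the reprojection step, can be checked with a standalone sketch. The Pt3 struct and function name are illustrative, not pbrt's:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for a 3D point type; illustration only.
struct Pt3 { double x, y, z; };

// Sketch of uniform cylinder sampling: height and phi are sampled uniformly,
// mapping the unit square onto the "unrolled" cylinder.
Pt3 SampleCylinderSketch(double u0, double u1, double radius,
                         double zMin, double zMax, double phiMax) {
    double z = zMin + u0 * (zMax - zMin);   // uniform in height
    double phi = u1 * phiMax;               // uniform in angle
    Pt3 p{radius * std::cos(phi), radius * std::sin(phi), z};
    // Reproject to guard against imprecise sin/cos implementations, so that
    // the returned point lies exactly (to rounding) on the cylinder.
    double hitRad = std::sqrt(p.x * p.x + p.y * p.y);
    p.x *= radius / hitRad;
    p.y *= radius / hitRad;
    return p;
}
```

After reprojection, every sampled point satisfies $x^2 + y^2 = r^2$ up to floating-point rounding, and the z value interpolates linearly between zMin and zMax.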

Sampling Triangles

The UniformSampleTriangle() function, defined in the previous chapter, returns the barycentric coordinates for a uniformly sampled point on a triangle. The point on a particular triangle for those barycentrics is easily computed.

<<Triangle Method Definitions>>+= 
Interaction Triangle::Sample(const Point2f &u) const {
    Point2f b = UniformSampleTriangle(u);
    <<Get triangle vertices in p0, p1, and p2>>
    const Point3f &p0 = mesh->p[v[0]];
    const Point3f &p1 = mesh->p[v[1]];
    const Point3f &p2 = mesh->p[v[2]];
    Interaction it;
    it.p = b[0] * p0 + b[1] * p1 + (1 - b[0] - b[1]) * p2;
    <<Compute surface normal for sampled point on triangle>>
    <<Compute error bounds for sampled point on triangle>>
    return it;
}

<<Compute surface normal for sampled point on triangle>>= 
if (mesh->n)
    it.n = Normalize(b[0] * mesh->n[v[0]] + b[1] * mesh->n[v[1]] +
                     (1 - b[0] - b[1]) * mesh->n[v[2]]);
else
    it.n = Normalize(Normal3f(Cross(p1 - p0, p2 - p0)));
if (reverseOrientation) it.n *= -1;

Because the sampled point is computed with barycentric interpolation, it has the same error bounds as were computed in Section 3.9.4 for triangle intersection points.

<<Compute error bounds for sampled point on triangle>>= 
Point3f pAbsSum = Abs(b[0] * p0) + Abs(b[1] * p1) +
                  Abs((1 - b[0] - b[1]) * p2);
it.pError = gamma(6) * Vector3f(pAbsSum);
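The square-root warp behind UniformSampleTriangle() can be sketched standalone; the function name below is illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Sketch of uniform triangle sampling: the square-root warp maps a uniform
// 2D sample to barycentric coordinates (b0, b1) that are uniformly
// distributed over the triangle; the third coordinate is 1 - b0 - b1.
std::pair<double, double> UniformSampleTriangleSketch(double u0, double u1) {
    double su0 = std::sqrt(u0);
    return {1 - su0, u1 * su0};
}
```

The returned coordinates always satisfy b0, b1 >= 0 and b0 + b1 <= 1, so interpolating the triangle's vertices with them yields a point inside the triangle.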

Sampling Spheres

As with Disks, the sampling method here does not handle partial spheres; an exercise at the end of the chapter discusses this issue further. For the sampling method that is not given an external point that’s being lit, sampling a point on a sphere is extremely simple. Sphere::Sample() just uses the UniformSampleSphere() function to generate a point on the unit sphere and scales the point by the sphere’s radius.

<<Sphere Method Definitions>>+=  
Interaction Sphere::Sample(const Point2f &u) const {
    Point3f pObj = Point3f(0, 0, 0) + radius * UniformSampleSphere(u);
    Interaction it;
    it.n = Normalize((*ObjectToWorld)(Normal3f(pObj.x, pObj.y, pObj.z)));
    if (reverseOrientation) it.n *= -1;
    <<Reproject pObj to sphere surface and compute pObjError>>
    it.p = (*ObjectToWorld)(pObj, pObjError, &it.pError);
    return it;
}

Because UniformSampleSphere() uses std::sin() and std::cos(), the error bounds on the computed pObj value depend on the accuracy of those functions. Therefore, as with cylinders, the sampled point is reprojected to the sphere’s surface, so that the error bounds derived earlier in Equation (3.14) can be used without needing to worry about those functions’ accuracy.

<<Reproject pObj to sphere surface and compute pObjError>>= 
pObj *= radius / Distance(pObj, Point3f(0, 0, 0));
Vector3f pObjError = gamma(5) * Abs((Vector3f)pObj);

For the sphere sampling method that is given a point being illuminated, we can do much better than sampling over the sphere’s entire area. While uniform sampling over its surface is perfectly correct, a better approach is to not sample points on the sphere that are definitely not visible from the point (such as those on the back side of the sphere as seen from the point). The sampling routine here instead uniformly samples directions over the solid angle subtended by the sphere from the reference point and then computes the point on the sphere corresponding to the sampled direction.

<<Sphere Method Definitions>>+=  
Interaction Sphere::Sample(const Interaction &ref, const Point2f &u) const {
    <<Compute coordinate system for sphere sampling>>
    <<Sample uniformly on sphere if p is inside it>>
    <<Sample sphere uniformly inside subtended cone>>
}

This process is most easily done if we first compute a coordinate system to use for sampling the sphere, where the z axis is the vector between the sphere’s center and the point being illuminated:

<<Compute coordinate system for sphere sampling>>= 
Point3f pCenter = (*ObjectToWorld)(Point3f(0, 0, 0));
Vector3f wc = Normalize(pCenter - ref.p);
Vector3f wcX, wcY;
CoordinateSystem(wc, &wcX, &wcY);

For points that lie inside the surface of the sphere, the entire sphere should be sampled, since the whole sphere is visible from inside it. Note that the reference point used in this determination, pOrigin, is computed using the OffsetRayOrigin() function. Doing so ensures that if the reference point came from a ray intersecting the sphere, the point tested lies on the correct side of the sphere.

<<Sample uniformly on sphere if p is inside it>>= 
Point3f pOrigin = OffsetRayOrigin(ref.p, ref.pError, ref.n, pCenter - ref.p);
if (DistanceSquared(pOrigin, pCenter) <= radius * radius)
    return Sample(u);

Otherwise sampling within the cone proceeds.

<<Sample sphere uniformly inside subtended cone>>= 
<<Compute theta and phi values for sample in cone>>
<<Compute angle alpha from center of sphere to sampled point on surface>>
<<Compute surface normal and sampled point on sphere>>
<<Return Interaction for sampled point on sphere>>

If the reference point is outside the sphere, then as seen from the point being shaded $p$ the sphere subtends an angle

$$\theta_{\mathrm{max}} = \arcsin\left(\frac{r}{|p - p_c|}\right) = \arccos\sqrt{1 - \left(\frac{r}{|p - p_c|}\right)^2}, \tag{14.11}$$

where $r$ is the radius of the sphere and $p_c$ is its center (Figure 14.8). The sampling method here computes the cosine of the subtended angle $\theta_{\mathrm{max}}$ using Equation (14.11) and then uniformly samples directions inside this cone of directions using the approach that was derived earlier for the UniformSampleCone() function: it samples an offset $\theta$ from the center vector $\omega_c$ and then uniformly samples a rotation angle $\phi$ around that vector. That function isn't used here, however, as we will need some of the intermediate values in the following fragments.

Figure 14.8: To sample points on a spherical light source, we can uniformly sample within the cone of directions around a central vector $\omega_c$ with an angular spread of up to $\theta_{\mathrm{max}}$. Trigonometry can be used to derive the value of $\sin\theta_{\mathrm{max}}$, $r / |p_c - p|$.

<<Compute theta and phi values for sample in cone>>= 
Float sinThetaMax2 = radius * radius / DistanceSquared(ref.p, pCenter);
Float cosThetaMax = std::sqrt(std::max((Float)0, 1 - sinThetaMax2));
Float cosTheta = (1 - u[0]) + u[0] * cosThetaMax;
Float sinTheta = std::sqrt(std::max((Float)0, 1 - cosTheta * cosTheta));
Float phi = u[1] * 2 * Pi;
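The cone-sampling computation just shown can be sketched as a standalone function that returns a direction in the cone's own coordinate frame (with the cone axis along +z). The Dir3 struct and function name are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for a 3D direction type; illustration only.
struct Dir3 { double x, y, z; };

// Sketch of uniform direction sampling inside a cone around +z: cos(theta) is
// sampled uniformly in [cosThetaMax, 1] and phi uniformly in [0, 2*pi).
Dir3 UniformSampleConeSketch(double u0, double u1, double cosThetaMax) {
    const double Pi = 3.14159265358979323846;
    double cosTheta = (1 - u0) + u0 * cosThetaMax;  // lerp toward the cone edge
    double sinTheta = std::sqrt(std::fmax(0.0, 1 - cosTheta * cosTheta));
    double phi = u1 * 2 * Pi;
    return {sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta};
}
```

With u0 = 0 the returned direction is the cone axis itself; with u0 = 1 it lies on the cone boundary, where the z component equals cosThetaMax; and every result is a unit vector.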

Given a sample angle $(\theta, \phi)$ with respect to the sampling coordinate system computed earlier, we can directly compute the corresponding sample point on the sphere. The derivation of this approach follows three steps, illustrated in Figure 14.9.

Figure 14.9: Geometric Setting for Computing the Sampled Point on the Sphere Corresponding to a Sampled Angle $\theta$. (a) A right triangle with hypotenuse $d_c$ and angle $\theta$. (b) The right triangle with hypotenuse from the center of the sphere to the sampled point on the sphere. (c) Given the lengths of the three sides of this triangle, the law of cosines gives the angle $\alpha$ to the sampled point.

First, if we denote the distance from the reference point to the center of the sphere by $d_c$ and form a right triangle with angle $\theta$ at the reference point, then we can see that the lengths of the other two sides of the triangle, as shown in Figure 14.9(a), are $d_c \cos\theta$ and $d_c \sin\theta$.

Next, consider the right triangle shown in Figure 14.9(b), where the hypotenuse is the line segment with length equal to the sphere's radius $r$ that goes from the center of the sphere to the point where the line from the sampled angle intersects the sphere. From the Pythagorean theorem, we can see that the length of the third side of that triangle is

$$\sqrt{r^2 - d_c^2 \sin^2\theta}.$$

Subtracting this length from $d_c \cos\theta$ gives us the length of the segment from the reference point to the sampled point on the sphere:

$$d_s = d_c \cos\theta - \sqrt{r^2 - d_c^2 \sin^2\theta}.$$

We can now compute the angle $\alpha$ from the center of the sphere to the sampled point on the sphere using the law of cosines, which relates the squared length of the side opposite $\alpha$ to the other two side lengths and $\alpha$ itself:

$$d_s^2 = d_c^2 + r^2 - 2 d_c r \cos\alpha.$$

(See Figure 14.9(c).) Solving for $\cos\alpha$, we have

$$\cos\alpha = \frac{d_c^2 + r^2 - d_s^2}{2 d_c r}.$$

<<Compute angle alpha from center of sphere to sampled point on surface>>= 
Float dc = Distance(ref.p, pCenter);
Float ds = dc * cosTheta -
           std::sqrt(std::max((Float)0,
                              radius * radius - dc * dc * sinTheta * sinTheta));
Float cosAlpha = (dc * dc + radius * radius - ds * ds) / (2 * dc * radius);
Float sinAlpha = std::sqrt(std::max((Float)0, 1 - cosAlpha * cosAlpha));
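As a quick numeric sanity check of this derivation (not pbrt code), the two steps can be combined into one function and evaluated at angles with known answers:

```cpp
#include <cassert>
#include <cmath>

// Compute ds from dc, r, and theta as in the derivation above, then recover
// cos(alpha) via the law of cosines. Illustrative helper, not pbrt's code.
double CosAlphaFromTheta(double dc, double r, double cosTheta) {
    double sinTheta2 = 1 - cosTheta * cosTheta;
    double ds = dc * cosTheta -
                std::sqrt(std::fmax(0.0, r * r - dc * dc * sinTheta2));
    return (dc * dc + r * r - ds * ds) / (2 * dc * r);
}
```

At theta = 0 the ray through the sphere's center hits the nearest point, so alpha = 0 and cos(alpha) = 1; at theta = theta_max (sin(theta_max) = r/dc) the ray grazes the sphere at the tangent point, where for dc = 2, r = 1 one can check geometrically that cos(alpha) = 1/2.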

The angles $\alpha$ and $\phi$ give the spherical coordinates for the sampled point on the unit sphere, with respect to a coordinate system centered around the vector from the sphere's center to the reference point. Since we earlier computed a coordinate system around the vector from the reference point to the center, we can use that one here with each of its vectors flipped.

<<Compute surface normal and sampled point on sphere>>= 
Vector3f nObj = SphericalDirection(sinAlpha, cosAlpha, phi, -wcX, -wcY, -wc);
Point3f pObj = radius * Point3f(nObj.x, nObj.y, nObj.z);

As with the other Sphere::Sample() method, the sampled point is reprojected onto the surface of the sphere; in turn, we can use the same error bounds for the computed point as were derived earlier.

<<Return Interaction for sampled point on sphere>>= 
Interaction it;
<<Reproject pObj to sphere surface and compute pObjError>>
it.p = (*ObjectToWorld)(pObj, pObjError, &it.pError);
it.n = (*ObjectToWorld)(Normal3f(nObj));
if (reverseOrientation) it.n *= -1;
return it;

The value of the PDF for sampling a direction toward a sphere from a given point depends on which of the two sampling strategies would be used for the point.

<<Sphere Method Definitions>>+=  
Float Sphere::Pdf(const Interaction &ref, const Vector3f &wi) const {
    Point3f pCenter = (*ObjectToWorld)(Point3f(0, 0, 0));
    <<Return uniform PDF if point is inside sphere>>
    <<Compute general sphere PDF>>
}

If the reference point was inside the sphere, a uniform sampling strategy was used; in that case, the implementation hands off to the Pdf() method of the Shape class, which takes care of the solid angle conversion.

<<Return uniform PDF if point is inside sphere>>= 
Point3f pOrigin = OffsetRayOrigin(ref.p, ref.pError, ref.n, pCenter - ref.p);
if (DistanceSquared(pOrigin, pCenter) <= radius * radius)
    return Shape::Pdf(ref, wi);

In the general case, we recompute the angle subtended by the sphere and call UniformConePdf(). Note that no conversion of sampling measures is required here because UniformConePdf() already returns a value with respect to the solid angle measure.

<<Compute general sphere PDF>>= 
Float sinThetaMax2 = radius * radius / DistanceSquared(ref.p, pCenter);
Float cosThetaMax = std::sqrt(std::max((Float)0, 1 - sinThetaMax2));
return UniformConePdf(cosThetaMax);
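The value that UniformConePdf() computes follows directly from the solid angle of a cone: a cone with half-angle $\theta_{\mathrm{max}}$ subtends solid angle $2\pi(1 - \cos\theta_{\mathrm{max}})$, so the uniform density with respect to solid angle is its reciprocal. A standalone sketch:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the constant cone PDF: the reciprocal of the cone's solid angle,
// 2*pi*(1 - cosThetaMax). Illustrative helper, not pbrt's implementation.
double UniformConePdfSketch(double cosThetaMax) {
    const double Pi = 3.14159265358979323846;
    return 1 / (2 * Pi * (1 - cosThetaMax));
}
```

As a check, cosThetaMax = -1 covers the whole sphere and gives 1/(4*pi), and for any cone the PDF times the cone's solid angle is exactly 1, confirming normalization.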

14.2.3 Area Lights

Given shape sampling methods, the DiffuseAreaLight::Sample_Li() method is quite straightforward. Most of the hard work is done by the Shapes, and the DiffuseAreaLight just needs to copy appropriate values to output parameters and compute the emitted radiance value.

<<DiffuseAreaLight Method Definitions>>+=  
Spectrum DiffuseAreaLight::Sample_Li(const Interaction &ref, const Point2f &u,
        Vector3f *wi, Float *pdf, VisibilityTester *vis) const {
    Interaction pShape = shape->Sample(ref, u);
    pShape.mediumInterface = mediumInterface;
    *wi = Normalize(pShape.p - ref.p);
    *pdf = shape->Pdf(ref, *wi);
    *vis = VisibilityTester(ref, pShape);
    return L(pShape, -*wi);
}

Pdf_Li() calls the variant of Shape::Pdf() that returns a density with respect to solid angle, so its value can be returned directly.

<<DiffuseAreaLight Method Definitions>>+=  
Float DiffuseAreaLight::Pdf_Li(const Interaction &ref,
                               const Vector3f &wi) const {
    return shape->Pdf(ref, wi);
}

14.2.4 Infinite Area Lights

The InfiniteAreaLight, defined in Section 12.6, can be considered to be an infinitely large sphere that surrounds the entire scene, illuminating it from all directions. The environment maps used with InfiniteAreaLights often have substantial variation along different directions: consider, for example, an environment map of the sky during daytime, where the relatively small number of directions that the sun subtends will be thousands of times brighter than the rest of the directions. Given this substantial variation, implementing a sampling method for InfiniteAreaLights that matches the illumination distribution will generally substantially reduce variance in images.

Figure 14.10 shows two images of a car model illuminated by the morning skylight environment map from Figure 12.20. The first image was rendered using a simple cosine-weighted sampling distribution for selecting incident illumination directions, while the second was rendered using the sampling method implemented in this section. Both images used just 32 shadow samples per pixel. For the same number of samples taken and with negligible additional computational cost, this sampling method computes a much better result with much lower variance.

Figure 14.10: Car Model Illuminated by the Morning Skylight Environment Map. Both images were rendered with four image samples per pixel and eight light source samples per image sample. (a) The result of using a uniform sampling distribution and (b) the improvement from the importance sampling method implemented here. A total of just 32 light samples per pixel gives an excellent result with this approach.

There are three main steps to the sampling approach implemented here:

  1. We define a piecewise-constant 2D probability distribution function $p(u, v)$ in $(u, v)$ image coordinates that corresponds to the distribution of the radiance represented by the environment map.
  2. We apply the sampling method from Section 13.6.7 to transform 2D samples to samples drawn from the piecewise-constant $p(u, v)$ distribution.
  3. We define a probability density function over directions on the unit sphere based on the probability density over $(u, v)$.

The combination of these three steps makes it possible to generate samples on the sphere of directions according to a distribution that matches the radiance function very closely, leading to substantial variance reduction.
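The core of the first two steps can be illustrated in 1D with a minimal piecewise-constant distribution; the class below is a simplified stand-in for the Distribution1D/Distribution2D machinery that pbrt uses (names and structure are illustrative, and a nonzero function integral is assumed):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 1D piecewise-constant distribution: build a CDF over the function
// values, then invert it so that a uniform sample is transformed into a
// sample whose density is proportional to the function over [0,1).
struct PiecewiseConstant1D {
    std::vector<double> cdf;  // cdf[i] = probability that a sample lies below i/n
    double funcInt;           // integral of the function over [0,1]

    explicit PiecewiseConstant1D(const std::vector<double> &f)
        : cdf(f.size() + 1, 0) {
        int n = (int)f.size();
        for (int i = 1; i <= n; ++i) cdf[i] = cdf[i - 1] + f[i - 1] / n;
        funcInt = cdf[n];                           // assumed nonzero
        for (int i = 1; i <= n; ++i) cdf[i] /= funcInt;
    }

    // Invert the CDF: find the bucket containing u, then interpolate within it.
    double Sample(double u) const {
        int n = (int)cdf.size() - 1;
        int i = 0;
        while (i + 1 < n && cdf[i + 1] <= u) ++i;
        double du = (u - cdf[i]) / (cdf[i + 1] - cdf[i]);
        return (i + du) / n;
    }
};
```

For the function {1, 3}, the integral is 2 and the second half of the domain receives 3/4 of the probability mass, so the sample u = 0.5 lands at 2/3, well inside the brighter half.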

We will start by defining the fragment <<Initialize sampling PDFs for infinite area light>> at the end of the InfiniteAreaLight constructor.

<<Initialize sampling PDFs for infinite area light>>= 
<<Compute scalar-valued image img from environment map>> 
int width = resolution.x, height = resolution.y;
Float filter = (Float)1 / std::max(width, height);
std::unique_ptr<Float[]> img(new Float[width * height]);
for (int v = 0; v < height; ++v) {
    Float vp = (Float)v / (Float)height;
    Float sinTheta = std::sin(Pi * Float(v + .5f) / Float(height));
    for (int u = 0; u < width; ++u) {
        Float up = (Float)u / (Float)width;
        img[u + v * width] = Lmap->Lookup(Point2f(up, vp), filter).y();
        img[u + v * width] *= sinTheta;
    }
}
<<Compute sampling distributions for rows and columns of image>> 
distribution.reset(new Distribution2D(img.get(), width, height));

The first step is to convert the continuous spectral radiance function represented by the environment map’s texels to a piecewise-constant scalar function by computing its luminance at a set of sample points using the Spectrum::y() method. There are three things to note in the code below that does this computation.

First, it computes values of the radiance function at the same number of points as there are texels in the original image map. It could use more or fewer points, with a corresponding increase or decrease in memory use, and still generate a valid sampling distribution. Using the texel count works well, however: fewer points would give a sampling distribution that matched the function less closely, while more would mostly waste memory for little incremental benefit.

The second thing of note in this code is that the piecewise-constant function values stored in img are found by slightly blurring the radiance function with the MIPMap::Lookup() method (rather than just copying the corresponding texel values). The motivation for this is subtle; Figure 14.11 illustrates the idea in 1D. Because the continuous radiance function used for rendering is reconstructed by bilinearly interpolating between texels in the image, even if a texel is completely black, the radiance function may be nonzero a tiny distance away from it due to a neighboring texel’s contribution. Because a piecewise-constant function is used for sampling rather than a piecewise-linear one, the code must account for this issue in order to ensure a greater-than-zero probability of sampling any point where the radiance function is nonzero.

Figure 14.11: Finding a Piecewise-Constant Function (solid lines) That Approximates a Piecewise Linear Function (dashed lines) for Use as a Sampling Distribution. Even though some of the sample points that define the piecewise linear function (solid dots) may be zero-valued, the piecewise-constant sampling function must not be zero over any range where the actual function is nonzero. A reasonable approach to avoid this case, shown here and implemented in the InfiniteAreaLight sampling routines, is to find the average value of the function over some range and use that to define the piecewise-constant function.
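The idea from Figure 14.11 can be illustrated with a small 1D sketch. This is illustrative code, not pbrt’s (the helper name BlurForSampling is hypothetical): averaging each texel with its immediate neighbors before building the sampling function guarantees a nonzero value in any bin adjacent to a nonzero texel, where bilinear interpolation makes the reconstructed function nonzero.

```cpp
#include <algorithm>
#include <vector>

// Blur 1D "texel" values by averaging each with its immediate neighbors
// (clamped at the array edges), analogous in spirit to the slight blur
// that MIPMap::Lookup() applies before the sampling distribution is built.
std::vector<float> BlurForSampling(const std::vector<float> &texels) {
    int n = (int)texels.size();
    std::vector<float> blurred(n);
    for (int i = 0; i < n; ++i) {
        float sum = 0;
        int count = 0;
        for (int j = std::max(0, i - 1); j <= std::min(n - 1, i + 1); ++j) {
            sum += texels[j];
            ++count;
        }
        blurred[i] = sum / count;
    }
    return blurred;
}
```

With texels {0, 1, 0, 0}, the raw piecewise-constant function would assign zero probability to the first bin even though the bilinearly interpolated function is nonzero there; after blurring, that bin gets a nonzero value, while bins far from any nonzero texel stay zero.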

Finally, each image value in the img buffer is multiplied by the value of sin θ corresponding to the θ value each row has when the latitude–longitude image is mapped to the sphere. Note that this multiplication has no effect on the sampling method’s correctness: because sin θ is greater than 0 over the interior of the [0, π] range, we are just reshaping the sampling PDF. The motivation for adjusting the PDF is to eliminate the effect of the distortion from mapping the 2D image to the unit sphere in the sampling method here; the details will be fully explained later in this section.

<<Compute scalar-valued image img from environment map>>= 
int width = resolution.x, height = resolution.y;
Float filter = (Float)1 / std::max(width, height);
std::unique_ptr<Float[]> img(new Float[width * height]);
for (int v = 0; v < height; ++v) {
    Float vp = (Float)v / (Float)height;
    Float sinTheta = std::sin(Pi * Float(v + .5f) / Float(height));
    for (int u = 0; u < width; ++u) {
        Float up = (Float)u / (Float)width;
        img[u + v * width] = Lmap->Lookup(Point2f(up, vp), filter).y();
        img[u + v * width] *= sinTheta;
    }
}

Given this filtered and scaled image, the Distribution2D structure handles computing and storing the 2D PDF.

<<Compute sampling distributions for rows and columns of image>>= 
distribution.reset(new Distribution2D(img.get(), width, height));

<<InfiniteAreaLight Private Data>>+= 
std::unique_ptr<Distribution2D> distribution;

Given this precomputed data, the task of the sampling method is relatively straightforward. Given a sample u over [0, 1)², it draws a sample from the function’s distribution using the sampling algorithm described in Section 13.6.7, which gives a (u, v) value and the value of the PDF for taking this sample, p(u, v).

<<InfiniteAreaLight Method Definitions>>+=  
Spectrum InfiniteAreaLight::Sample_Li(const Interaction &ref, const Point2f &u,
        Vector3f *wi, Float *pdf, VisibilityTester *vis) const {
    <<Find (u,v) sample coordinates in infinite light texture>>
    <<Convert infinite light sample point to direction>>
    <<Compute PDF for sampled infinite light direction>>
    <<Return radiance value for infinite light direction>>
}

<<Find (u,v) sample coordinates in infinite light texture>>= 
Float mapPdf;
Point2f uv = distribution->SampleContinuous(u, &mapPdf);
if (mapPdf == 0) return Spectrum(0.f);

The (u, v) sample is mapped to spherical coordinates by

(θ, φ) = (π v, 2π u),

and then the spherical coordinates formula gives the direction ω = (x, y, z).

<<Convert infinite light sample point to direction>>= 
Float theta = uv[1] * Pi, phi = uv[0] * 2 * Pi;
Float cosTheta = std::cos(theta), sinTheta = std::sin(theta);
Float sinPhi = std::sin(phi), cosPhi = std::cos(phi);
*wi = LightToWorld(Vector3f(sinTheta * cosPhi, sinTheta * sinPhi, cosTheta));
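As a quick sanity check of this mapping, here is a standalone sketch (the LightToWorld transformation is omitted, and the helper name UvToDirection is illustrative): the spherical-coordinates formula always yields a unit-length vector, and (u, v) = (0.25, 0.5) maps to θ = φ = π/2, i.e. the +y axis in light space.

```cpp
#include <cmath>

// Map a (u, v) sample in [0,1)^2 to a direction on the unit sphere in
// light space, using theta = pi*v and phi = 2*pi*u.
void UvToDirection(float u, float v, float dir[3]) {
    const float Pi = 3.14159265358979323846f;
    float theta = v * Pi, phi = u * 2 * Pi;
    float sinTheta = std::sin(theta), cosTheta = std::cos(theta);
    float sinPhi = std::sin(phi), cosPhi = std::cos(phi);
    dir[0] = sinTheta * cosPhi;
    dir[1] = sinTheta * sinPhi;
    dir[2] = cosTheta;
}
```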

Recall that the probability density values returned by the light source sampling routines must be defined in terms of the solid angle measure on the unit sphere. Therefore, this routine must now compute the transformation between the sampling density used, which was the image function over [0, 1)², and the corresponding density after the image has been mapped to the unit sphere with the latitude–longitude mapping. (Recall that the latitude–longitude parameterization of an image (θ, φ) is x = r sin θ cos φ, y = r sin θ sin φ, and z = r cos θ.)

First, consider the function g that maps from (u, v) to (θ, φ),

g(u, v) = (π v, 2π u).

The absolute value of the determinant of the Jacobian, |J_g|, is 2π². Applying the multidimensional change of variables equation from Section 13.5.1, we can find the density in terms of spherical coordinates (θ, φ):

p(θ, φ) = p(u, v) / (2π²).

From the definition of spherical coordinates, it is easy to determine that the absolute value of the determinant of the Jacobian for the mapping from (r, θ, φ) to (x, y, z) is r² sin θ. Since we are interested in the unit sphere, r = 1, and again applying the multidimensional change of variables equation, we can find the final relationship between probability densities:

p(ω) = p(θ, φ) / sin θ = p(u, v) / (2π² sin θ).

This is the key relationship for applying this technique: it lets us sample from the piecewise-constant distribution defined by the image map and transform the sample and its probability density to be in terms of directions on the unit sphere.

We can now see why the initialization routines multiplied the values of the piecewise-constant sampling function by a sin θ term. Consider, for example, a constant-valued environment map: with the p(u, v) sampling technique alone, all (θ, φ) values would be equally likely to be chosen. Due to the mapping to directions on the sphere, however, this would lead to more directions being sampled near the poles of the sphere rather than a uniform sampling of directions, which would be a more desirable result. The 1/sin θ term in the p(ω) PDF corrects for this non-uniform sampling of directions so that correct results are computed in Monte Carlo estimates. Even so, it is better to modify the p(u, v) sampling distribution so that it is less likely to select directions near the poles in the first place.
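To make this concrete, here is a small numerical check (illustrative code, not pbrt’s; the helper name SolidAnglePdfForRow is hypothetical): for a constant-radiance map, building the sampling function with the sin θ weighting and then converting with p(ω) = p(u, v)/(2π² sin θ) yields a density close to the uniform value 1/(4π) for every row of the image.

```cpp
#include <cmath>
#include <vector>

// For a constant environment map, the weighted image is img[v] = sin(theta_v).
// Return p(omega) for row v: normalize p(u, v), then divide by
// 2*pi^2*sin(theta) per the change-of-variables relationship.
float SolidAnglePdfForRow(int v, int height) {
    const double Pi = 3.14159265358979323846;
    std::vector<double> img(height);
    double sum = 0;
    for (int i = 0; i < height; ++i) {
        img[i] = std::sin(Pi * (i + 0.5) / height);  // sin(theta) weighting
        sum += img[i];
    }
    double avg = sum / height;          // integral of the piecewise-constant fn
    double pUV = img[v] / avg;          // normalized p(u, v) for this row
    double sinTheta = std::sin(Pi * (v + 0.5) / height);
    return (float)(pUV / (2 * Pi * Pi * sinTheta));
}
```

The sin θ factors cancel, so every row gives essentially the same density, 1/(2π² · avg) ≈ 1/(4π), since avg approaches ∫₀¹ sin(πv) dv = 2/π as the resolution grows.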

<<Compute PDF for sampled infinite light direction>>= 
*pdf = mapPdf / (2 * Pi * Pi * sinTheta);
if (sinTheta == 0) *pdf = 0;

The method can finally initialize the VisibilityTester with a light sample point outside the scene’s bounds and return the radiance value for the chosen direction.

<<Return radiance value for infinite light direction>>= 
*vis = VisibilityTester(ref, Interaction(ref.p + *wi * (2 * worldRadius),
                                         ref.time, mediumInterface));
return Spectrum(Lmap->Lookup(uv), SpectrumType::Illuminant);

The InfiniteAreaLight::Pdf_Li() method needs to convert the direction ω to the corresponding (u, v) coordinates in the sampling distribution. Given these, the PDF p(u, v) is computed as the product of the two 1D PDFs by the Distribution2D::Pdf() method, which is adjusted here for the mapping to the sphere as was done in the Sample_Li() method.

<<InfiniteAreaLight Method Definitions>>+=  
Float InfiniteAreaLight::Pdf_Li(const Interaction &, const Vector3f &w) const {
    Vector3f wi = WorldToLight(w);
    Float theta = SphericalTheta(wi), phi = SphericalPhi(wi);
    Float sinTheta = std::sin(theta);
    if (sinTheta == 0) return 0;
    return distribution->Pdf(Point2f(phi * Inv2Pi, theta * InvPi)) /
           (2 * Pi * Pi * sinTheta);
}
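Pdf_Li() inverts the mapping used by Sample_Li(). A standalone sketch of this inverse (with the effect of SphericalTheta()/SphericalPhi() written out explicitly; the helper name DirectionToUv is illustrative) confirms that a direction maps back to the (u, v) coordinates it was sampled from:

```cpp
#include <cmath>

// Invert the latitude-longitude mapping: a direction in light space back
// to (u, v) texture coordinates, mirroring SphericalTheta()/SphericalPhi().
void DirectionToUv(const float d[3], float *u, float *v) {
    const float Pi = 3.14159265358979323846f;
    // theta = acos(z), clamped against floating-point drift outside [-1, 1]
    float theta = std::acos(std::fmax(-1.f, std::fmin(1.f, d[2])));
    float phi = std::atan2(d[1], d[0]);
    if (phi < 0) phi += 2 * Pi;  // remap atan2's (-pi, pi] output to [0, 2*pi)
    *u = phi / (2 * Pi);         // u = phi / (2*pi), i.e. phi * Inv2Pi
    *v = theta / Pi;             // v = theta / pi, i.e. theta * InvPi
}
```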