3.8 Spherical Geometry
Geometry on the unit sphere is also frequently useful in rendering. 3D unit direction vectors can equivalently be represented as points on the unit sphere, and sets of directions can be represented as areas on the unit sphere. Useful operations such as bounding a set of directions can often be cleanly expressed as bounds on the unit sphere. We will therefore introduce some useful principles of spherical geometry and related classes and functions in this section.
3.8.1 Solid Angles
In 2D, the planar angle is the total angle subtended by some object with respect to some position (Figure 3.12). Consider the unit circle around the point p; if we project the shaded object onto that circle, some length of the circle will be covered by its projection. The arc length s of the projection (which is the same as the angle θ) is the angle subtended by the object. Planar angles are measured in radians, and the entire unit circle covers 2π radians.
The solid angle extends the 2D unit circle to a 3D unit sphere (Figure 3.13). The total area s of the object's projection onto the sphere is the solid angle subtended by the object. Solid angles are measured in steradians (sr). The entire sphere subtends a solid angle of 4π sr, and a hemisphere subtends 2π sr.
By providing a way to measure area on the unit sphere (and thus over the unit directions), the solid angle also provides the foundation for a measure for integrating spherical functions; the differential solid angle dω corresponds to the differential area measure on the unit sphere.
3.8.2 Spherical Polygons
We will sometimes find it useful to consider the set of directions from a point to the surface of a polygon. (Doing so can be useful, for example, when computing the illumination arriving at a point from an emissive polygon.) If a regular planar polygon is projected onto the unit sphere, it forms a spherical polygon.
A vertex of a spherical polygon can be found by normalizing the vector from the center of the sphere to the corresponding vertex of the original polygon. Each edge of a spherical polygon is given by the intersection of the unit sphere with the plane that goes through the sphere’s center and the corresponding two vertices of the polygon. The result is a great circle on the sphere that is the shortest distance between the two vertices on the surface of the sphere (Figure 3.14).
The angle at each vertex is given by the angle between the planes corresponding to the two edges that meet at the vertex (Figure 3.15). (The angle between two planes is termed their dihedral angle.) We will label the angle at each vertex with the Greek letter that corresponds to its label (α for the vertex a and so forth). Unlike planar triangles, the three angles of a spherical triangle do not sum to π radians; rather, their sum is π + A, where A is the spherical triangle’s area. Given the angles α, β, and γ, it follows that the area of a spherical triangle can be computed using Girard’s theorem, which says that a triangle’s surface area A on the unit sphere is given by the “excess angle”

A = α + β + γ − π.  (3.5)
Direct implementation of Equation (3.5) requires multiple calls to expensive inverse trigonometric functions, and its computation can be prone to error due to floating-point cancellation. A more efficient and accurate approach is to apply the relationship

tan(A/2) = (a · (b × c)) / (1 + (a · b) + (a · c) + (b · c)),

which can be derived from Equation (3.5) using spherical trigonometric identities. That approach is used in SphericalTriangleArea(), which takes three vectors on the unit sphere corresponding to the spherical triangle’s vertices a, b, and c.
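A minimal sketch of this computation follows, using a bare std::array-based vector type and double precision rather than pbrt's Vector3f and Float; the helper names Dot and Cross mirror pbrt's, but this is an illustration of the identity, not the book's implementation:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

double Dot(const Vec3 &a, const Vec3 &b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
Vec3 Cross(const Vec3 &a, const Vec3 &b) {
    return {a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}

// Area of the spherical triangle with unit-length vertices a, b, and c,
// via tan(A/2) = a.(b x c) / (1 + a.b + a.c + b.c). Using atan2 keeps the
// computation well behaved when the denominator approaches zero.
double SphericalTriangleArea(const Vec3 &a, const Vec3 &b, const Vec3 &c) {
    return std::abs(2 * std::atan2(Dot(a, Cross(b, c)),
                                   1 + Dot(a, b) + Dot(a, c) + Dot(b, c)));
}
```

For example, the octant triangle with vertices on the x, y, and z axes covers one eighth of the sphere, an area of π/2.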
The area of a quadrilateral projected onto the unit sphere is given by α + β + γ + δ − 2π, where α, β, γ, and δ are its interior angles. This value is computed by SphericalQuadArea(), which takes the vertex positions on the unit sphere. Its implementation is very similar to SphericalTriangleArea(), so it is not included here.
3.8.3 Spherical Parameterizations
The 3D Cartesian coordinates of a point on the unit sphere are not always the most convenient representation of a direction. For example, if we are tabulating a function over the unit sphere, a 2D parameterization that takes advantage of the fact that the sphere’s surface is two-dimensional is preferable.
There are a variety of mappings between 2D and the sphere. Developing such mappings that fulfill various goals has been an important part of map making since its beginnings. It can be shown that any mapping from the plane to the sphere introduces some form of distortion; the task then is to choose a mapping that best fulfills the requirements for a particular application. pbrt thus uses three different spherical parameterizations, each with different advantages and disadvantages.
Spherical coordinates (θ, φ) are a well-known parameterization of the sphere. For a general sphere of radius r, they are related to Cartesian coordinates by

x = r sin θ cos φ
y = r sin θ sin φ
z = r cos θ.
(See Figure 3.16.)
For convenience, we will define a SphericalDirection() function that converts a θ and φ pair into a unit (x, y, z) vector, applying these equations directly. Notice that the function is given the sine and cosine of θ, rather than θ itself. This is because the sine and cosine of θ are often already available to the caller. This is not normally the case for φ, however, so φ is passed in as is.
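The conversion can be sketched as follows, again with a simple std::array vector type standing in for pbrt's Vector3f:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Convert spherical coordinates to a unit direction vector. The caller
// passes sin(theta) and cos(theta) directly, since they are often already
// available; phi is passed as the angle itself.
Vec3 SphericalDirection(double sinTheta, double cosTheta, double phi) {
    return {sinTheta * std::cos(phi),
            sinTheta * std::sin(phi),
            cosTheta};
}
```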
The conversion of a direction (x, y, z) to spherical coordinates can be found by

θ = arccos z
φ = arctan(y/x).
The corresponding functions follow. Note that SphericalTheta() assumes that the vector v has been normalized before being passed in; using SafeACos() in place of std::acos() avoids errors if |v.z| is slightly greater than 1 due to floating-point round-off error.
SphericalPhi() returns an angle in [0, 2π], which sometimes requires adding 2π to adjust the value returned by std::atan2(), since that function returns values in (−π, π].
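These two conversions can be sketched as below; the clamping in SafeACos() and the std::array vector type are simplifications of pbrt's actual helpers:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// acos that clamps its argument to [-1, 1] to avoid NaNs when round-off
// pushes a dot product or z component slightly out of range.
double SafeACos(double x) {
    return std::acos(std::clamp(x, -1.0, 1.0));
}

// v must be normalized before being passed in.
double SphericalTheta(const Vec3 &v) { return SafeACos(v[2]); }

// std::atan2() returns values in (-pi, pi]; remap negatives to [0, 2pi).
double SphericalPhi(const Vec3 &v) {
    double p = std::atan2(v[1], v[0]);
    return (p < 0) ? p + 2 * std::acos(-1.0) : p;
}
```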
Given a direction vector ω, it is easy to compute quantities like the cosine of the angle θ:

cos θ = ω · (0, 0, 1) = ω_z.
This is a much more efficient computation than it would have been to use first an expensive inverse trigonometric function to compute θ and then another expensive function to compute its cosine. The following functions compute this cosine and a few useful variations.
The value of sin²θ can be efficiently computed using the trigonometric identity sin²θ + cos²θ = 1, though we need to be careful to avoid returning a negative value in the rare case that 1 - Cos2Theta(w) is less than zero due to floating-point round-off error.
The tangent of the angle θ can be computed via the identity tan θ = sin θ / cos θ.
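The θ-related functions above can be sketched together as follows; the function names mirror pbrt's, but the std::array vector type and double math are simplifications:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// For a normalized w in the standard spherical-coordinate frame,
// cos(theta) is simply the z component.
double CosTheta(const Vec3 &w) { return w[2]; }
double Cos2Theta(const Vec3 &w) { return w[2] * w[2]; }
double AbsCosTheta(const Vec3 &w) { return std::abs(w[2]); }

// sin^2 = 1 - cos^2; clamp at zero in case round-off makes it negative.
double Sin2Theta(const Vec3 &w) { return std::max(0.0, 1 - Cos2Theta(w)); }
double SinTheta(const Vec3 &w) { return std::sqrt(Sin2Theta(w)); }

// tan(theta) = sin(theta) / cos(theta).
double TanTheta(const Vec3 &w) { return SinTheta(w) / CosTheta(w); }
double Tan2Theta(const Vec3 &w) { return Sin2Theta(w) / Cos2Theta(w); }
```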
The sine and cosine of the angle φ can also be easily found from (x, y, z) coordinates without using inverse trigonometric functions (Figure 3.17). In the z = 0 plane, the vector ω has coordinates (x, y), which are given by r cos φ and r sin φ, respectively. The radius r is sin θ, so

cos φ = x / r = x / sin θ
sin φ = y / r = y / sin θ.
Finally, the cosine of the angle Δφ between two vectors’ φ values can be found by zeroing their z coordinates to get 2D vectors in the z = 0 plane and then normalizing them. The dot product of these two vectors gives the cosine of the angle between them. The implementation below rearranges the terms a bit for efficiency so that only a single square root operation needs to be performed.
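The φ-related functions can be sketched as below, under the same std::array assumptions as the earlier fragments (SinThetaFor is an illustrative local helper, not a pbrt name):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

double SinThetaFor(const Vec3 &w) {
    return std::sqrt(std::max(0.0, 1 - w[2] * w[2]));
}

// cos(phi) = x / sin(theta), sin(phi) = y / sin(theta). At the poles,
// sin(theta) = 0 and phi is arbitrary, so (1, 0) is returned.
double CosPhi(const Vec3 &w) {
    double s = SinThetaFor(w);
    return (s == 0) ? 1 : std::clamp(w[0] / s, -1.0, 1.0);
}
double SinPhi(const Vec3 &w) {
    double s = SinThetaFor(w);
    return (s == 0) ? 0 : std::clamp(w[1] / s, -1.0, 1.0);
}

// Cosine of the angle between wa's and wb's phi values: zero the z
// components and normalize; rearranged so only one sqrt is needed.
double CosDPhi(const Vec3 &wa, const Vec3 &wb) {
    double waxy = wa[0] * wa[0] + wa[1] * wa[1];
    double wbxy = wb[0] * wb[0] + wb[1] * wb[1];
    if (waxy == 0 || wbxy == 0) return 1;
    return std::clamp((wa[0] * wb[0] + wa[1] * wb[1]) /
                          std::sqrt(waxy * wbxy),
                      -1.0, 1.0);
}
```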
Parameterizing the sphere with spherical coordinates corresponds to the equirectangular mapping of the sphere. It is not a particularly good parameterization for representing regularly sampled data on the sphere due to substantial distortion at the sphere’s poles.
While Vector3f is a convenient representation for computation using unit vectors, it does not use storage efficiently: not only does it use 12 bytes of memory (assuming 4-byte Floats), but it is capable of representing 3D direction vectors of arbitrary length. Normalized vectors are a small subset of all the possible Vector3fs, however, which means that the storage represented by those 12 bytes is not well allocated for them. When many normalized vectors need to be stored in memory, a more space-efficient representation can be worthwhile.
Spherical coordinates could be used for this task. Doing so would reduce the storage required to two Floats, though with the disadvantage that relatively expensive trigonometric and inverse trigonometric functions would be required to convert to and from Vector3s. Further, spherical coordinates provide more precision near the poles and less near the equator; a more equal distribution of precision across all unit vectors is preferable. (Due to the way that floating-point numbers are represented, Vector3f suffers from providing different precision in different parts of the unit sphere as well.)
OctahedralVector provides a compact representation for unit vectors with an even distribution of precision and efficient encoding and decoding routines. Our implementation uses just 4 bytes of memory for each unit vector; all the possible values of those 4 bytes correspond to a valid unit vector. Its representation is not suitable for computation, but it is easy to convert between it and Vector3f, which makes it an appealing option for in-memory storage of normalized vectors.
As indicated by its name, this unit vector is based on an octahedral mapping of the unit sphere that is illustrated in Figure 3.18.
The algorithm to convert a unit vector to this representation is surprisingly simple. The first step is to project the vector onto the faces of the 3D octahedron; this can be done by dividing the vector components by the vector’s L1 norm, |x| + |y| + |z|. For points in the upper hemisphere (i.e., with z ≥ 0), projection down to the z = 0 plane then just requires taking the x and y components directly.
For directions in the lower hemisphere, the reprojection to the appropriate point in [−1, 1]² is slightly more complex, though it can be expressed without any conditional control flow with a bit of care. (Here is another concise fragment of code that is worth understanding; consider in comparison code based on if statements that handled unwrapping the four triangles independently.)
The helper function OctahedralVector::Sign() uses the standard math library function std::copysign() to return ±1 according to the sign of v (positive/negative zero are treated like ordinary numbers).
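The forward projection can be sketched as follows; OctEncode is an illustrative name for the steps that pbrt performs in the OctahedralVector constructor, and the if statement here stands in for the branch-free formulation described above:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Pt2 = std::array<double, 2>;

// copysign treats +/-0 like ordinary numbers, so Sign(0) == 1.
double Sign(double v) { return std::copysign(1.0, v); }

// Project a unit vector onto the octahedron (divide by the L1 norm) and
// flatten it into [-1,1]^2; the lower hemisphere folds outward into the
// four corner triangles without any per-triangle case analysis.
Pt2 OctEncode(const Vec3 &v) {
    double a = std::abs(v[0]) + std::abs(v[1]) + std::abs(v[2]);
    double x = v[0] / a, y = v[1] / a;
    if (v[2] < 0) {
        double xo = (1 - std::abs(y)) * Sign(x);
        double yo = (1 - std::abs(x)) * Sign(y);
        x = xo;
        y = yo;
    }
    return {x, y};
}
```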
The 2D parameterization in Figure 3.18(d) is then represented using a 16-bit value for each coordinate that quantizes the range [−1, 1] with 2^16 steps.
Encode() performs the encoding from a value in [−1, 1] to the integer encoding.
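A sketch of this quantization, assuming round-to-nearest over the 2^16 codes:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantize a value in [-1, 1] to a 16-bit integer; every one of the
// 2^16 codes corresponds to a valid coordinate value.
uint16_t Encode(double f) {
    return (uint16_t)std::lround(std::clamp((f + 1) / 2, 0.0, 1.0) * 65535.0);
}
```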
The mapping back to a Vector3f follows the same steps in reverse. For directions in the upper hemisphere, the z value on the octahedron face is easily found. Normalizing that vector then gives the corresponding unit vector.
For directions in the lower hemisphere, the inverse of the mapping implemented in the <<Encode octahedral vector with z < 0>> fragment must be performed before the direction is normalized.
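Both decoding steps can be sketched together; OctDecode is an illustrative name for what pbrt's Vector3f conversion operator performs, taking coordinates already mapped back to [−1, 1]:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

double Sign(double v) { return std::copysign(1.0, v); }

// Invert the octahedral mapping: recover z from the face equation
// |x| + |y| + |z| = 1, unfold the lower-hemisphere triangles, and
// normalize the result to get a unit vector.
Vec3 OctDecode(double x, double y) {
    double z = 1 - std::abs(x) - std::abs(y);
    if (z < 0) {
        double xo = (1 - std::abs(y)) * Sign(x);
        double yo = (1 - std::abs(x)) * Sign(y);
        x = xo;
        y = yo;
    }
    double len = std::sqrt(x * x + y * y + z * z);
    return {x / len, y / len, z / len};
}
```

As a check, the point (2/3, 5/6) is the encoding of the unit vector (1, 2, −3)/√14, and decoding recovers it.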
The third spherical parameterization used in pbrt is carefully designed to preserve area: any area on the surface of the sphere maps to a proportional area in the parametric domain. This representation is a good choice for tabulating functions on the sphere, as it is continuous, has reasonably low distortion, and all values stored represent the same solid angle. It combines the octahedral mapping used in the OctahedralVector class with a variant of the square-to-disk mapping from Section A.5.1, which maps the unit square to the hemisphere in a way that preserves area. The mapping splits the unit square into four sectors, each of which is mapped to a sector of the hemisphere (see Figure 3.19).
Given (u, v) ∈ [−1, 1]², consider the first sector, where u > 0 and |v| ≤ u; defining the polar coordinates of a point on the unit disk by

r = u
φ = (π/4)(v/u)

gives an area-preserving mapping with φ ∈ [−π/4, π/4]. Similar mappings can be found for the other three sectors.
Given polar coordinates (r, φ), the corresponding point on the positive hemisphere is then given by

x = cos φ · r √(2 − r²)
y = sin φ · r √(2 − r²)
z = 1 − r².  (3.9)
This mapping is also area-preserving.
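The first-sector case can be sketched as follows (SectorToHemisphere is an illustrative name; pbrt folds this logic into its full square-to-sphere function). Note that the result always has unit length, since x² + y² + z² = r²(2 − r²) + (1 − r²)² = 1:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Map a point in the first sector of [-1,1]^2 (u > 0, |v| <= u) to the
// positive hemisphere: concentric mapping r = u, phi = (pi/4)(v/u),
// followed by the area-preserving lift of Equation (3.9).
Vec3 SectorToHemisphere(double u, double v) {
    double r = u;
    double phi = (std::acos(-1.0) / 4) * (v / u);
    double z = 1 - r * r;
    double s = r * std::sqrt(2 - r * r);
    return {std::cos(phi) * s, std::sin(phi) * s, z};
}
```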
This mapping can be extended to the entire sphere using the same octahedral mapping that was used for the OctahedralVector. There are then three steps:
- First, the octahedral mapping is applied to the direction, giving a point (u, v) ∈ [−1, 1]².
- For directions in the upper hemisphere, the concentric hemisphere mapping, Equation (3.9), is applied to the inner square of the octahedral mapping. Doing so requires accounting for the fact that it is rotated by 45° from the square expected by the hemispherical mapping.
- Directions in the lower hemisphere are mirrored across their quadrant’s diagonal before the hemispherical mapping is applied. The resulting direction vector’s z component is then negated.
The following implementation of this approach takes some care to be branch free: no matter what the input value, there is a single path of control flow through the function. This characteristic is often helpful for performance, especially on the GPU, though we note that this function usually represents a small fraction of pbrt’s execution time, so it does not affect the system’s overall performance.
After transforming the original point p in [0, 1]² to (u, v) ∈ [−1, 1]², the implementation also computes the absolute values of these coordinates, u′ = |u| and v′ = |v|. Doing so remaps the three quadrants with one or two negative coordinate values to the positive quadrant, flipping each quadrant so that its upper hemisphere is mapped to u′ + v′ ≤ 1, which corresponds to the upper hemisphere in the original positive quadrant. (Each lower hemisphere is also mapped to the u′ + v′ > 1 region, corresponding to the lower hemisphere in the original positive quadrant.)
Most of this function’s implementation operates using (u′, v′) in the positive quadrant. Its next step is to compute the radius r for the mapping to the disk by computing the signed distance to the u′ + v′ = 1 diagonal that splits the upper and lower hemispheres, where the lower hemisphere’s signed distance is negative (Figure 3.20).
The computation of φ accounts for the 45° rotation with an added π/4 term.
The sign of the signed distance computed earlier indicates whether the point is in the lower hemisphere; the returned z coordinate takes its sign.
After computing x and y in the positive quadrant, it is necessary to remap those values to the correct ones for the actual quadrant of the original point (u, v). Associating the sign of u with the computed x value and the sign of v with y suffices to do so, and this operation can be done with another use of copysign().
The inverse mapping is performed by the EqualAreaSphereToSquare() function, which effectively performs the same operations in reverse and is therefore not included here. Also useful and also not included, WrapEqualAreaSquare() handles the boundary cases of points p that are just outside of [0, 1]² (as may happen during bilinear interpolation with image texture lookups) and wraps them around to the appropriate valid coordinates that can be passed to EqualAreaSquareToSphere().
3.8.4 Bounding Directions
In addition to bounding regions of space, it is also sometimes useful to bound a set of directions. For example, if a light source emits illumination in some directions but not others, that information can be used to cull that light source from being included in lighting calculations for points it certainly does not illuminate. pbrt provides the DirectionCone class for such uses; it represents a cone that is parameterized by a central direction and an angular spread (see Figure 3.21).
The DirectionCone provides a variety of constructors, including one that takes the central axis of the cone and the cosine of its spread angle and one that bounds a single direction. For both the constructor parameters and the cone representation stored in the class, the cosine of the spread angle is used rather than the angle itself. Doing so makes it possible to perform some of the following operations with DirectionCones using efficient dot products in place of more expensive trigonometric functions.
The default DirectionCone is empty; an invalid value of infinity for cosTheta encodes that case.
A convenience method reports whether the cone is empty.
Another convenience method provides the bound for all directions.
Given a DirectionCone, it is easy to check if a given direction vector is inside its bounds: the cosine of the angle between the direction and the cone’s central direction must be greater than the cosine of the cone’s spread angle. (Note that for the angle to be smaller, the cosine must be larger.)
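A sketch of the representation and the containment test follows; the std::array vector type, double precision, and the free-standing Inside() signature are simplifications of pbrt's class:

```cpp
#include <array>
#include <cmath>
#include <limits>

using Vec3 = std::array<double, 3>;

// A cone of directions: a central axis w plus the cosine of the spread
// angle. Storing the cosine lets containment tests use a dot product
// instead of inverse trigonometric functions.
struct DirectionCone {
    Vec3 w = {0, 0, 1};
    double cosTheta = std::numeric_limits<double>::infinity();  // empty cone
    bool IsEmpty() const { return std::isinf(cosTheta); }
    static DirectionCone EntireSphere() { return DirectionCone{{0, 0, 1}, -1}; }
};

double Dot(const Vec3 &a, const Vec3 &b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// w is inside the cone if its angle to the axis is no larger than the
// spread angle; a smaller angle means a larger cosine.
bool Inside(const DirectionCone &d, const Vec3 &w) {
    return !d.IsEmpty() && Dot(d.w, w) >= d.cosTheta;
}
```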
BoundSubtendedDirections() returns a DirectionCone that bounds the directions subtended by a given bounding box with respect to a point p.
First, a bounding sphere is found for the bounds b. If the given point p is inside the sphere, then a direction bound of all directions is returned. Note that the point p may be inside the sphere but outside b, in which case the returned bounds will be overly conservative. This issue is discussed further in an exercise at the end of the chapter.
Otherwise the central axis of the bounds is given by the vector from p to the center of the sphere and the cosine of the spread angle is easily found using basic trigonometry (see Figure 3.22).
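The sphere-bounding step can be sketched as follows; BoundSphereDirections is a hypothetical helper that takes the bounding sphere directly, whereas pbrt's BoundSubtendedDirections first computes that sphere from the Bounds3f:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

struct DirectionCone {
    Vec3 w;
    double cosTheta;
};

// Bound the directions from p toward a sphere with the given center and
// radius: sin(thetaMax) = radius / distance, so cos(thetaMax) follows
// from sqrt(1 - sin^2). Points inside the sphere get all directions.
DirectionCone BoundSphereDirections(const Vec3 &center, double radius,
                                    const Vec3 &p) {
    Vec3 d = {center[0] - p[0], center[1] - p[1], center[2] - p[2]};
    double dist2 = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
    if (dist2 < radius * radius)
        return DirectionCone{{0, 0, 1}, -1};  // entire sphere of directions
    double invDist = 1 / std::sqrt(dist2);
    Vec3 w = {d[0] * invDist, d[1] * invDist, d[2] * invDist};
    double sin2ThetaMax = radius * radius / dist2;
    double cosThetaMax = std::sqrt(std::max(0.0, 1 - sin2ThetaMax));
    return DirectionCone{w, cosThetaMax};
}
```

For example, a sphere of radius 3 centered 5 units away subtends a cone with sin θ_max = 3/5 and thus cos θ_max = 4/5.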
Finally, the Union() function computes a DirectionCone that bounds the directions of two given cones. If one of the cones is empty, we can immediately return the other one.
Otherwise the implementation computes a few angles that will be helpful, including the actual spread angle of each cone as well as the angle between their two central direction vectors. These values give enough information to determine if one cone is entirely bounded by the other (see Figure 3.23).
Otherwise it is necessary to compute a new cone that bounds both of them. As illustrated in Figure 3.24, the sum of the two cones’ spread angles θ_a and θ_b and the angle θ_d between their central directions gives the full angle that the new cone must cover; half of that, θ_o, is its spread angle.
The direction vector for the new cone should not be set with the average of the two cones’ direction vectors; that vector and a spread angle of θ_o does not necessarily bound the two given cones. Using that vector would require a spread angle of θ_d/2 + max(θ_a, θ_b), which is never less than θ_o. (It is worthwhile to sketch out a few cases on paper to convince yourself of this.)
Instead, we find the vector perpendicular to the cones’ direction vectors using the cross product and rotate a.w by the angle θ_o − θ_a around that axis, which causes it to bound both cones’ angles. (The Rotate() function used for this will be introduced shortly, in Section 3.9.7.) In the case that LengthSquared(wr) == 0, the vectors face in opposite directions and a bound of the entire sphere is returned.
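The whole union computation can be sketched under the same simplified types; here pbrt's Rotate() transformation is replaced by a small Rodrigues-rotation helper, and the simple clamped-acos AngleBetween() stands in for pbrt's more numerically careful version:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>

using Vec3 = std::array<double, 3>;

struct DirectionCone {
    Vec3 w = {0, 0, 1};
    double cosTheta = std::numeric_limits<double>::infinity();
    bool IsEmpty() const { return std::isinf(cosTheta); }
};

const double Pi = 3.14159265358979323846;

double Dot(const Vec3 &a, const Vec3 &b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
Vec3 Cross(const Vec3 &a, const Vec3 &b) {
    return {a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}
double AngleBetween(const Vec3 &a, const Vec3 &b) {
    return std::acos(std::clamp(Dot(a, b), -1.0, 1.0));
}

// Rodrigues' formula: rotate v by theta around the (unnormalized) axis.
Vec3 Rotate(double theta, const Vec3 &axis, const Vec3 &v) {
    double len = std::sqrt(Dot(axis, axis));
    Vec3 k = {axis[0] / len, axis[1] / len, axis[2] / len};
    double c = std::cos(theta), s = std::sin(theta);
    Vec3 kxv = Cross(k, v);
    double kdv = Dot(k, v);
    Vec3 r;
    for (int i = 0; i < 3; ++i)
        r[i] = v[i] * c + kxv[i] * s + k[i] * kdv * (1 - c);
    return r;
}

DirectionCone Union(const DirectionCone &a, const DirectionCone &b) {
    if (a.IsEmpty()) return b;
    if (b.IsEmpty()) return a;
    double thetaA = std::acos(std::clamp(a.cosTheta, -1.0, 1.0));
    double thetaB = std::acos(std::clamp(b.cosTheta, -1.0, 1.0));
    double thetaD = AngleBetween(a.w, b.w);
    // Is one cone entirely inside the other?
    if (std::min(thetaD + thetaB, Pi) <= thetaA) return a;
    if (std::min(thetaD + thetaA, Pi) <= thetaB) return b;
    // Half of thetaA + thetaD + thetaB is the new cone's spread angle.
    double thetaO = (thetaA + thetaD + thetaB) / 2;
    if (thetaO >= Pi) return DirectionCone{{0, 0, 1}, -1};
    Vec3 wr = Cross(a.w, b.w);
    if (Dot(wr, wr) == 0) return DirectionCone{{0, 0, 1}, -1};
    // Rotate a's axis toward b so the new cone covers both input cones.
    Vec3 w = Rotate(thetaO - thetaA, wr, a.w);
    return DirectionCone{w, std::cos(thetaO)};
}
```

For example, merging two π/4-spread cones around the +z and +x axes yields a cone around their bisector with a spread of π/2.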