7.3 Stratified Sampling

The first Sampler implementation that we will introduce subdivides pixel areas into rectangular regions and generates a single sample inside each region. These regions are commonly called strata, and this sampler is called the StratifiedSampler. The key idea behind stratification is that by subdividing the sampling domain into nonoverlapping regions and taking a single sample from each one, we are less likely to miss important features of the image entirely, since the samples are guaranteed not to all be close together. Put another way, it does us no good if many samples are taken from nearby points in the sample space, since each new sample doesn’t add much new information about the behavior of the image function. From a signal processing viewpoint, we are implicitly defining an overall sampling rate such that the smaller the strata are, the more of them we have, and thus the higher the sampling rate.

The stratified sampler places each sample at a random point inside each stratum by jittering the center point of the stratum by a random amount up to half the stratum’s width and height. The nonuniformity that results from this jittering helps turn aliasing into noise, as discussed in Section 7.1. The sampler also offers an unjittered mode, which gives uniform sampling in the strata; this mode is mostly useful for comparisons between different sampling techniques rather than for rendering high quality images.
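
Concretely, with n strata in one dimension, the ith sample is placed at x_i = (i + ξ_i)/n for i = 0, …, n − 1, where ξ_i is a uniform random value in [0, 1) when jittering is enabled and ξ_i = 1/2 otherwise; this is exactly the computation performed by the StratifiedSample1D() utility defined later in this section.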

Direct application of stratification to high-dimensional sampling quickly leads to an intractable number of samples. For example, if we divided the 5D image, lens, and time sample space into four strata in each dimension, the total number of samples per pixel would be 4^5 = 1024. We could reduce this impact by taking fewer samples in some dimensions (or not stratifying some dimensions, effectively using a single stratum), but we would then lose the benefit of having well-stratified samples in those dimensions. This problem with stratification is known as the curse of dimensionality.

We can reap most of the benefits of stratification without paying the price in excessive total sampling by computing lower dimensional stratified patterns for subsets of the domain’s dimensions and then randomly associating samples from each set of dimensions. (This process is sometimes called padding.) Figure 7.16 shows the basic idea: we might want to take just four samples per pixel but still have the samples be stratified over all dimensions. We independently generate four 2D stratified image samples, four 1D stratified time samples, and four 2D stratified lens samples. Then we randomly associate a time and lens sample value with each image sample. The result is that each pixel has samples that together have good coverage of the sample space.

Figure 7.16: We can generate a good sample pattern that reaps the benefits of stratification without requiring that all of the sampling dimensions be stratified simultaneously. Here, we have split (x, y) image position, time t, and (u, v) lens position into independent strata with four regions each. Each is sampled independently, then a time sample and a lens sample are randomly associated with each image sample. We retain the benefits of stratification in each of the individual dimensions without having to exponentially increase the total number of samples.
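
To make the padding idea concrete, the following is a minimal sketch (not code that appears in pbrt) built on the StratifiedSample1D(), StratifiedSample2D(), and Shuffle() utilities defined later in this section; it assumes an RNG rng is in scope and uses illustrative variable names.

// Illustrative sketch: stratify image, time, and lens dimensions independently
// for four samples per pixel, then pad by randomly associating the time and
// lens values with the image samples.
const int n = 4;
Point2f imageSamples[n], lensSamples[n];
Float timeSamples[n];
StratifiedSample2D(imageSamples, 2, 2, rng, true);  // 2 x 2 strata over the pixel area
StratifiedSample1D(timeSamples, n, rng, true);      // 4 strata in time
StratifiedSample2D(lensSamples, 2, 2, rng, true);   // 2 x 2 strata on the lens
// Shuffling the time and lens samples independently removes correlation
// between the dimensions; (imageSamples[i], timeSamples[i], lensSamples[i])
// then forms the ith sample vector.
Shuffle(timeSamples, n, 1, rng);
Shuffle(lensSamples, n, 1, rng);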

Figure 7.17 shows the improvement in image quality from using stratified lens samples versus using unstratified random samples when rendering depth of field.

Figure 7.17: Effect of Sampling Patterns in Rendering a Purple Sphere with Depth of Field. (1) A high-quality reference image of a blurry sphere. (2) An image generated with random sampling in each pixel without stratification. (3) An image generated with the same number of samples, but with the StratifiedSampler, which stratified both the image and, more importantly for this image, the lens samples. Stratification makes a substantial improvement for this situation.

Figure 7.18 shows a comparison of a few sampling patterns. The first is a completely random pattern: we generated a number of samples without using the strata at all. The result is terrible; some regions have few samples and other areas have clumps of many samples. The second is a uniform stratified pattern. In the last, the uniform pattern has been jittered, with a random offset added to each sample’s location, keeping it inside its cell. This gives a better overall distribution than the purely random pattern while preserving the benefits of stratification, though there are still some clumps of samples and some regions that are undersampled.

Figure 7.18: Three 2D Sampling Patterns. (a) The random pattern is an ineffective pattern, with many clumps of samples that leave large sections of the image poorly sampled. (b) A uniform stratified pattern is better distributed but can exacerbate aliasing artifacts. (c) A stratified jittered pattern turns aliasing from the uniform pattern into high-frequency noise while still maintaining the benefits of stratification.

Figure 7.19 shows images rendered using the StratifiedSampler and shows how jittered sample positions turn aliasing artifacts into less objectionable noise.

Figure 7.19: Comparison of Image Sampling Methods with a Checkerboard Texture. This is a difficult image to render well, since the checkerboard’s frequency with respect to the pixel spacing tends toward infinity as we approach the horizon. (1) A reference image, rendered with 256 samples per pixel, showing something close to an ideal result. (2) An image rendered with one sample per pixel, with no jittering. Note the jaggy artifacts at the edges of checks in the foreground. Notice also the artifacts in the distance where the checker function goes through many cycles between samples; as expected from the signal processing theory presented earlier, that detail reappears incorrectly as lower frequency aliasing. (3) The result of jittering the image samples, still with just one sample per pixel. The regular aliasing of the second image has been replaced by less objectionable noise artifacts. (4) The result of four jittered samples per pixel is still inferior to the reference image but is substantially better than the previous result.

<<StratifiedSampler Declarations>>= 
class StratifiedSampler : public PixelSampler {
  public:
    <<StratifiedSampler Public Methods>>
  private:
    <<StratifiedSampler Private Data>>
};

<<StratifiedSampler Public Methods>>= 
StratifiedSampler(int xPixelSamples, int yPixelSamples, bool jitterSamples,
                  int nSampledDimensions)
    : PixelSampler(xPixelSamples * yPixelSamples, nSampledDimensions),
      xPixelSamples(xPixelSamples), yPixelSamples(yPixelSamples),
      jitterSamples(jitterSamples) { }
void StartPixel(const Point2i &);
std::unique_ptr<Sampler> Clone(int seed);

<<StratifiedSampler Private Data>>= 
const int xPixelSamples, yPixelSamples;
const bool jitterSamples;

As a PixelSampler subclass, the implementation of StartPixel() must generate 1D and 2D samples for the nSampledDimensions dimensions passed to the PixelSampler constructor as well as samples for the requested sample arrays.

<<StratifiedSampler Method Definitions>>= 
void StratifiedSampler::StartPixel(const Point2i &p) {
    <<Generate single stratified samples for the pixel>>
    <<Generate arrays of stratified samples for the pixel>>
    PixelSampler::StartPixel(p);
}

After the initial stratified samples are generated, they are randomly shuffled; this is the padding approach described at the start of the section. If this shuffling wasn’t done, then the sample dimensions’ values would be correlated in a way that would lead to errors in images: for example, the first 2D sample used to choose the film location and the first 2D lens sample would always both be in the lower-left stratum adjacent to the origin.

<<Generate single stratified samples for the pixel>>= 
for (size_t i = 0; i < samples1D.size(); ++i) {
    StratifiedSample1D(&samples1D[i][0], xPixelSamples * yPixelSamples, rng,
                       jitterSamples);
    Shuffle(&samples1D[i][0], xPixelSamples * yPixelSamples, 1, rng);
}
for (size_t i = 0; i < samples2D.size(); ++i) {
    StratifiedSample2D(&samples2D[i][0], xPixelSamples, yPixelSamples, rng,
                       jitterSamples);
    Shuffle(&samples2D[i][0], xPixelSamples * yPixelSamples, 1, rng);
}

The 1D and 2D stratified sampling routines are implemented as utility functions. Both loop over the given number of strata in the domain and place a sample point in each one.

<<Sampling Function Definitions>>= 
void StratifiedSample1D(Float *samp, int nSamples, RNG &rng, bool jitter) {
    Float invNSamples = (Float)1 / nSamples;
    for (int i = 0; i < nSamples; ++i) {
        Float delta = jitter ? rng.UniformFloat() : 0.5f;
        samp[i] = std::min((i + delta) * invNSamples, OneMinusEpsilon);
    }
}

StratifiedSample2D() similarly generates samples in the range [0, 1)^2.

<<Sampling Function Definitions>>+=  
void StratifiedSample2D(Point2f *samp, int nx, int ny, RNG &rng, bool jitter) {
    Float dx = (Float)1 / nx, dy = (Float)1 / ny;
    for (int y = 0; y < ny; ++y)
        for (int x = 0; x < nx; ++x) {
            Float jx = jitter ? rng.UniformFloat() : 0.5f;
            Float jy = jitter ? rng.UniformFloat() : 0.5f;
            samp->x = std::min((x + jx) * dx, OneMinusEpsilon);
            samp->y = std::min((y + jy) * dy, OneMinusEpsilon);
            ++samp;
        }
}

The Shuffle() function randomly permutes an array of count sample values, each of which has nDimensions dimensions. (In other words, blocks of values of size nDimensions are permuted.)

<<Sampling Inline Functions>>= 
template <typename T>
void Shuffle(T *samp, int count, int nDimensions, RNG &rng) {
    for (int i = 0; i < count; ++i) {
        int other = i + rng.UniformUInt32(count - i);
        for (int j = 0; j < nDimensions; ++j)
            std::swap(samp[nDimensions * i + j],
                      samp[nDimensions * other + j]);
    }
}

Arrays of samples present us with a quandary: for example, if an integrator asks for an array of 64 2D sample values in the sample vector for each sample in a pixel, the sampler has two different goals to try to fulfill:

  1. It’s desirable that the samples in the array themselves be well distributed in 2D (e.g., by using an 8 × 8 stratified grid). Stratification here will improve the quality of the computed results for each individual sample vector.
  2. It’s desirable to ensure that each of the samples in the array for one image sample isn’t too similar to any of the sample values for samples nearby in the image. Rather, we’d like the points to be well distributed with respect to their neighbors, so that over the region around a single pixel, there is good coverage of the entire sample space.

Rather than trying to solve both of these problems simultaneously here, the StratifiedSampler only addresses the first one. The other samplers later in this chapter will revisit this issue with more sophisticated techniques and solve both of them simultaneously to various degrees.

A second complication comes from the fact that the caller may have asked for an arbitrary number of samples per image sample, so stratification may not be easily applied. (For example, how do we generate a stratified 2D pattern of seven samples?) We could just generate an n × 1 or 1 × n stratified pattern, but this only gives us the benefit of stratification in one dimension and no guarantee of a good pattern in the other dimension. A StratifiedSampler::RoundSize() method could round requests up to the next number that is a perfect square, but instead we will use an approach called Latin hypercube sampling (LHS), which can generate any number of samples in any number of dimensions with a reasonably good distribution.
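
For illustration, such a rounding method might look like the following sketch; RoundUpToSquare() is hypothetical and is not part of pbrt, which takes the LHS route instead.

// Hypothetical helper: round a sample count up to the next perfect square so
// that an n x n stratified pattern could be used. Shown only to illustrate the
// alternative that LHS lets us avoid.
int RoundUpToSquare(int count) {
    int root = (int)std::ceil(std::sqrt((Float)count));
    while (root * root < count) ++root;  // guard against floating-point rounding
    return root * root;
}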

LHS uniformly divides each dimension’s axis into n regions and generates a jittered sample in each of the n regions along the diagonal, as shown on the left in Figure 7.20. These samples are then randomly shuffled in each dimension, creating a pattern with good distribution.

Figure 7.20: Latin hypercube sampling (sometimes called n-rooks sampling) chooses samples such that only a single sample is present in each row and each column of a grid. This can be done by generating random samples in the cells along the diagonal and then randomly permuting their coordinates. One advantage of LHS is that it can generate any number of samples with a good distribution, not just m × n samples, as with stratified patterns.

An advantage of LHS is that it minimizes clumping of the samples when they are projected onto any of the axes of the sampling dimensions. This property is in contrast to stratified sampling, where 2n of the n × n samples in a 2D pattern may project to essentially the same point on each of the axes. Figure 7.21 shows this worst-case situation for a stratified sampling pattern.

Figure 7.21: A Worst-Case Situation for Stratified Sampling. In an n × n 2D pattern, up to 2n of the points may project to essentially the same point on one of the axes. When “unlucky” patterns like this are generated, the quality of the results computed with them usually suffers. (Here, 8 of the samples have nearly the same x value.)

In spite of addressing the clumping problem, LHS isn’t necessarily an improvement over stratified sampling; it’s easy to construct cases where the sample positions are essentially collinear and large areas of the sampling domain have no samples near them (e.g., when the permutation of the original samples is the identity, leaving them all where they started). In particular, as n increases, Latin hypercube patterns are less and less effective compared to stratified patterns.

The general-purpose LatinHypercube() function generates an arbitrary number of LHS samples in an arbitrary number of dimensions. The number of elements in the samples array should thus be nSamples*nDim.

<<Sampling Function Definitions>>+=  
void LatinHypercube(Float *samples, int nSamples, int nDim, RNG &rng) {
    <<Generate LHS samples along diagonal>>
    <<Permute LHS samples in each dimension>>
}

<<Generate LHS samples along diagonal>>= 
Float invNSamples = (Float)1 / nSamples;
for (int i = 0; i < nSamples; ++i)
    for (int j = 0; j < nDim; ++j) {
        Float sj = (i + (rng.UniformFloat())) * invNSamples;
        samples[nDim * i + j] = std::min(sj, OneMinusEpsilon);
    }

To do the permutation, this function loops over the samples, randomly permuting the sample points in one dimension at a time. Note that this permutation is different from the one performed by the earlier Shuffle() routine: that routine does a single permutation, keeping all nDim dimensions of each sample together, while here nDim separate permutations are done, each shuffling a single dimension at a time (Figure 7.22).

Figure 7.22: (a) The permutation done by the Shuffle() routine moves entire blocks of elements around. (b) The permutation for Latin hypercube sampling permutes each dimension’s samples independently. Here, the shuffling of the second dimension’s samples from a four-element pattern of three dimensions is shown.

<<Permute LHS samples in each dimension>>= 
for (int i = 0; i < nDim; ++i) {
    for (int j = 0; j < nSamples; ++j) {
        int other = j + rng.UniformUInt32(nSamples - j);
        std::swap(samples[nDim * j + i],
                  samples[nDim * other + i]);
    }
}
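
As a usage sketch (not code that appears in pbrt), a caller wanting eight well-distributed 2D samples would allocate a flat array of 8 * 2 Floats and read dimension j of the ith sample at index nDim * i + j; an RNG rng is assumed to be in scope.

// Illustrative only: generate 8 LHS samples in 2 dimensions and unpack them.
const int nSamples = 8, nDim = 2;
std::vector<Float> samples(nSamples * nDim);
LatinHypercube(&samples[0], nSamples, nDim, rng);
for (int i = 0; i < nSamples; ++i) {
    Float x = samples[nDim * i];      // dimension 0 of the ith sample
    Float y = samples[nDim * i + 1];  // dimension 1 of the ith sample
    // ... use (x, y) ...
}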

Given the LatinHypercube() function, we can now write the code to compute sample arrays for the current pixel. 1D samples are stratified and then randomly shuffled, while 2D samples are generated using Latin hypercube sampling.

<<Generate arrays of stratified samples for the pixel>>= 
for (size_t i = 0; i < samples1DArraySizes.size(); ++i)
    for (int64_t j = 0; j < samplesPerPixel; ++j) {
        int count = samples1DArraySizes[i];
        StratifiedSample1D(&sampleArray1D[i][j * count], count, rng,
                           jitterSamples);
        Shuffle(&sampleArray1D[i][j * count], count, 1, rng);
    }
for (size_t i = 0; i < samples2DArraySizes.size(); ++i)
    for (int64_t j = 0; j < samplesPerPixel; ++j) {
        int count = samples2DArraySizes[i];
        LatinHypercube(&sampleArray2D[i][j * count].x, count, 2, rng);
    }

We’ll use the scene in Figure 7.23 to demonstrate properties of some of the Sampler implementations.

Figure 7.23: Area Light Sampling Example Scene.

Figure 7.24 shows the improvement from good samples for the DirectLightingIntegrator. The first image was computed with 1 image sample per pixel, each with 16 shadow samples. The second was computed with 16 image samples per pixel, each with 1 shadow sample. Because the StratifiedSampler could generate a good LHS pattern for the first case, the quality of the shadow is much better, even with the same total number of shadow samples taken.

Figure 7.24: Sampling an Area Light with Samples from the Stratified Sampler. (1) shows the result of using 1 image sample per pixel and 16 shadow samples, and (2) shows the result of 16 image samples, each with just 1 shadow sample. The total number of shadow samples is the same in both cases, but because the version with 16 shadow samples per image sample is able to use an LHS pattern, all of the shadow samples in a pixel’s area are well distributed, while in the second image the implementation here has no way to prevent them from being poorly distributed. The difference is striking.