7.2 Sampling Interface

As first introduced in Section 7.1.5, the rendering approach implemented in pbrt involves choosing sample points in additional dimensions beyond 2D points on the image plane. Various algorithms will be used to generate these points, but all of their implementations inherit from an abstract Sampler class that defines their interface. The core sampling declarations and functions are in the files core/sampler.h and core/sampler.cpp. Each of the sample generation implementations is in its own source files in the samplers/ directory.

The task of a Sampler is to generate a sequence of $n$-dimensional samples in $[0, 1)^n$, where one such sample vector is generated for each image sample and where the number of dimensions $n$ in each sample may vary, depending on the calculations performed by the light transport algorithms. (See Figure 7.13.)

Figure 7.13: Samplers generate an $n$-dimensional sample vector for each of the image samples taken to generate the final image. Here, the pixel $(3, 8)$ is being sampled, and there are two image samples in the pixel area. The first two dimensions of the sample vector give the $(x, y)$ offset of the sample within the pixel, and the next three dimensions determine the time and lens position of the corresponding camera ray. Subsequent dimensions are used by the Monte Carlo light transport algorithms in Chapters 14, 15, and 16. Here, the light transport algorithm has requested a 2D array of four samples in the sample vector; these values might be used to select four points on an area light source to compute the radiance for the image sample, for example.

Because sample values must be strictly less than 1, it’s useful to define a constant, OneMinusEpsilon, that represents the largest representable floating-point constant that is less than 1. Later, we will clamp sample vector values to be no larger than this value.

<<Random Number Declarations>>= 
#ifdef PBRT_FLOAT_IS_DOUBLE
static const Float OneMinusEpsilon = 0x1.fffffffffffffp-1;
#else
static const Float OneMinusEpsilon = 0x1.fffffep-1;
#endif
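For example, code that generates a sample component might clamp it along these lines (a sketch only; the variable names are illustrative):

Float u = std::min(generatedValue, OneMinusEpsilon);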

The simplest possible implementation of a Sampler would just return uniform random values in $[0, 1)$ each time an additional component of the sample vector was needed. Such a sampler would produce correct images but would require many more samples (and thus, many more rays traced and much more time) to create images of the same quality achievable with more sophisticated samplers. The run-time expense for using better sampling patterns is approximately the same as that for lower-quality patterns like uniform random numbers; because evaluating the radiance for each image sample is much more expensive than computing the sample’s component values, doing this work pays dividends (Figure 7.14).

Figure 7.14: Scene rendered with (1) a relatively ineffective sampler and (2) a carefully designed sampler, using the same number of samples for each. The improvement in image quality, ranging from the edges of the highlights to the quality of the glossy reflections, is noticeable.

A few characteristics of these sample vectors are assumed in the following:

  • The first five dimensions generated by Samplers are generally used by the Camera. In this case, the first two are specifically used to choose a point on the image inside the current pixel area; the third is used to compute the time at which the sample should be taken; and the fourth and fifth dimensions give a $(u, v)$ lens position for depth of field.
  • Some sampling algorithms generate better samples in some dimensions of the sample vector than in others. Elsewhere in the system, we assume that in general, the earlier dimensions have the most well-placed sample values.

Note also that the n -dimensional samples generated by the Sampler are generally not represented explicitly or stored in their entirety but are often generated incrementally as needed by the light transport algorithm. (However, storing the entire sample vector and making incremental changes to its components is the basis of the MLTSampler in Section 16.4.4, which is used by the MLTIntegrator in Section 16.4.5.)

7.2.1 Evaluating Sample Patterns: Discrepancy

Fourier analysis gave us one way of evaluating the quality of a 2D sampling pattern, but it took us only as far as being able to quantify the improvement from adding more evenly spaced samples in terms of the band-limited frequencies that could be represented. Given the presence of infinite frequency content from edges in images and given the need for $(n > 2)$-dimensional sample vectors for Monte Carlo light transport algorithms, Fourier analysis alone isn’t enough for our needs.

Given a renderer and a candidate algorithm for placing samples, one way to evaluate the algorithm’s effectiveness is to use that sampling pattern to render an image and to compute the error in the image compared to a reference image rendered with a large number of samples. We will use this approach to compare sampling algorithms later in this chapter, though it only tells us how well the algorithm did for one specific scene, and it doesn’t give us a sense of the quality of the sample points without going through the rendering process.

Outside of Fourier analysis, mathematicians have developed a concept called discrepancy that can be used to evaluate the quality of a pattern of n -dimensional sample positions. Patterns that are well distributed (in a manner to be formalized shortly) have low discrepancy values, and thus the sample pattern generation problem can be considered to be one of finding a suitable low-discrepancy pattern of points. A number of deterministic techniques have been developed that generate low-discrepancy point sets, even in high-dimensional spaces. (Most of the sampling algorithms used later in this chapter use these techniques.)

The basic idea of discrepancy is that the quality of a set of points in an $n$-dimensional space $[0, 1)^n$ can be evaluated by looking at regions of the domain $[0, 1)^n$, counting the number of points inside each region, and comparing the volume of each region to the number of sample points inside. In general, a given fraction of the volume should have roughly the same fraction of the total number of sample points inside of it. While it’s not possible for this always to be the case, we can still try to use patterns that minimize the maximum difference between the actual volume and the volume estimated by the points (the discrepancy). Figure 7.15 shows an example of the idea in two dimensions.

Figure 7.15: The discrepancy of a box (shaded) given a set of 2D sample points in $[0, 1)^2$. One of the four sample points is inside the box, so this set of points would estimate the box’s area to be $1/4$. The true area of the box is $0.3 \times 0.3 = 0.09$, so the discrepancy for this particular box is $0.25 - 0.09 = 0.16$. In general, we’re interested in finding the maximum discrepancy of all possible boxes (or some other shape).

To compute the discrepancy of a set of points, we first pick a family of shapes $B$ that are subsets of $[0, 1)^n$. For example, boxes with one corner at the origin are often used. This corresponds to

$$B = \{ [0, v_1] \times [0, v_2] \times \cdots \times [0, v_n] \},$$

where $0 \le v_i < 1$. Given a sequence of sample points $P = x_1, \ldots, x_N$, the discrepancy of $P$ with respect to $B$ is

$$D_N(B, P) = \sup_{b \in B} \left| \frac{\sharp\{x_i \in b\}}{N} - V(b) \right|,$$
(7.4)

where $\sharp\{x_i \in b\}$ is the number of points in $b$ and $V(b)$ is the volume of $b$.

The intuition for why Equation (7.4) is a reasonable measure of quality is that the value $\sharp\{x_i \in b\}/N$ is an approximation of the volume of the box $b$ given by the particular points $P$. Therefore, the discrepancy is the worst error over all possible boxes from this way of approximating the volume. When the set of shapes $B$ is the set of boxes with a corner at the origin, this value is called the star discrepancy, $D_N^*(P)$. Another popular option for $B$ is the set of all axis-aligned boxes, where the restriction that one corner be at the origin has been removed.

For a few particular point sets, the discrepancy can be computed analytically. For example, consider the set of points in one dimension

$$x_i = \frac{i}{N}.$$

We can see that the star discrepancy of $x_i$ is

$$D_N^*(x_1, \ldots, x_N) = \frac{1}{N}.$$

For example, take the interval $b = [0, 1/N)$. Then $V(b) = 1/N$, but $\sharp\{x_i \in b\} = 0$. This interval (and the intervals $[0, 2/N)$, etc.) is the interval where the largest differences between volume and fraction of points inside the volume are seen.

The star discrepancy of this sequence can be improved by modifying it slightly:

$$x_i = \frac{i - \frac{1}{2}}{N}.$$
(7.5)

Then

$$D_N^*(x_i) = \frac{1}{2N}.$$

The bounds for the star discrepancy of a sequence of points in one dimension have been shown to be

$$D_N^*(x_i) = \frac{1}{2N} + \max_{1 \le i \le N} \left| x_i - \frac{2i - 1}{2N} \right|.$$

Thus, the earlier sequence from Equation (7.5) has the lowest possible discrepancy for a sequence in 1D. In general, it is much easier to analyze and compute bounds for the discrepancy of sequences in 1D than for those in higher dimensions. For less simply constructed point sequences, for sequences in higher dimensions, and for more irregular shapes than boxes, the discrepancy often must be estimated numerically by constructing a large number of shapes $b$, computing their discrepancy, and reporting the maximum value found.
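As an illustration of that numerical approach, the following standalone sketch (not part of pbrt; the Pt type and function name are made up here) estimates the star discrepancy of a 2D point set by testing a grid of boxes anchored at the origin:

#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Test res*res boxes [0,v1) x [0,v2) and return the worst mismatch between
// each box's area and the fraction of points that fall inside it.
float EstimateStarDiscrepancy2D(const std::vector<Pt> &pts, int res = 64) {
    float worst = 0;
    for (int i = 1; i <= res; ++i)
        for (int j = 1; j <= res; ++j) {
            float v1 = i / float(res), v2 = j / float(res);
            int count = 0;
            for (const Pt &p : pts)
                if (p.x < v1 && p.y < v2) ++count;
            worst = std::max(worst,
                             std::abs(count / float(pts.size()) - v1 * v2));
        }
    return worst;
}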

The astute reader will notice that according to the low-discrepancy measure, this uniform sequence in 1D is optimal, but earlier in this chapter we claimed that irregular jittered patterns were perceptually superior to uniform patterns for image sampling in 2D since they replaced aliasing error with noise. In that framework, uniform samples are clearly not optimal. Fortunately, low-discrepancy patterns in higher dimensions are much less uniform than they are in one dimension and thus usually work reasonably well as sample patterns in practice. Nevertheless, their underlying uniformity means that low-discrepancy patterns can be more prone to visually objectionable aliasing than patterns with pseudo-random variation.

Discrepancy alone isn’t necessarily a good metric: some low-discrepancy point sets exhibit some clumping of samples, where two or more samples may be quite close together. The Sobol′ sampler in Section 7.7 particularly suffers from this issue—see Figure 7.36, which shows a plot of its first two dimensions. Intuitively, samples that are too close together aren’t a good use of sampling resources: the closer one sample is to another, the less likely it is to give useful new information about the function being sampled. Therefore, computing the minimum distance between any two samples in a set of points has also proved to be a useful metric of sample pattern quality; the higher the minimum distance, the better.

There are a variety of algorithms for generating Poisson disk sampling patterns that score well by this metric. By construction, no two points in a Poisson disk pattern are closer than some distance d . Studies have shown that the rods and cones in the eye are distributed in a similar way, which further validates the idea that this distribution is a good one for imaging. In practice, we have found that Poisson disk patterns work very well for sampling 2D images but are less effective than the better low discrepancy patterns for the higher-dimensional sampling done in more complex rendering situations; see the “Further Reading” section for more information.

7.2.2 Basic Sampler Interface

The Sampler base class not only defines the interface to samplers but also provides some common functionality for use by Sampler implementations.

<<Sampler Declarations>>= 
class Sampler {
  public:
    <<Sampler Interface>>
    <<Sampler Public Data>>
  protected:
    <<Sampler Protected Data>>
  private:
    <<Sampler Private Data>>
};

All Sampler implementations must supply the constructor with the number of samples that will be generated for each pixel in the final image. In rare cases, it may be useful for the system to model the film as having only a single “pixel” that covers the entire viewing region. (This overloading of the definition of pixel is something of a stretch, but we allow it to simplify certain implementation aspects.) Since this “pixel” could potentially have billions of samples, we store the sample count using a variable with 64 bits of precision.

<<Sampler Method Definitions>>= 
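// Sketch of the constructor body, consistent with the description above;
// it simply records the per-pixel sample count.
Sampler::Sampler(int64_t samplesPerPixel) : samplesPerPixel(samplesPerPixel) { }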

<<Sampler Public Data>>= 
const int64_t samplesPerPixel;

When the rendering algorithm is ready to start work on a given pixel, it starts by calling StartPixel(), providing the coordinates of the pixel in the image. Some Sampler implementations use the knowledge of which pixel is being sampled to improve the overall distribution of the samples that they generate for the pixel, while others ignore this information.

<<Sampler Interface>>= 
virtual void StartPixel(const Point2i &p);

The Get1D() method returns the sample value for the next dimension of the current sample vector, and Get2D() returns the sample values for the next two dimensions. While a 2D sample value could be constructed by using values returned by a pair of calls to Get1D(), some samplers can generate better point distributions if they know that two dimensions will be used together.

<<Sampler Interface>>+=  
virtual Float Get1D() = 0;
virtual Point2f Get2D() = 0;

In pbrt, we don’t support requests for 3D or higher dimensional sample values from samplers because these are generally not needed for the types of rendering algorithms implemented here. If necessary, multiple values from lower dimensional components can be used to construct higher dimensional sample points.

A sharp edge of these interfaces is that code that uses sample values must be carefully written so that it always requests sample dimensions in the same order. Consider the following code:

sampler->StartPixel(p);
do {
    Float v = a(sampler->Get1D());
    if (v > 0)
        v += b(sampler->Get1D());
    v += c(sampler->Get1D());
} while (sampler->StartNextSample());

In this case, the first dimension of the sample vector will always be passed to the function a(); when the code path that calls b() is executed, b() will receive the second dimension. However, if the if test isn’t always true or false, then c() will sometimes receive a sample from the second dimension of the sample vector and otherwise receive a sample from the third dimension. Thus, efforts by the sampler to provide well-distributed sample points in each dimension being evaluated have been thwarted. Code that uses Samplers should therefore be carefully written so that it consistently consumes sample vector dimensions to avoid this issue.
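One way to keep the assignment of dimensions consistent (a sketch, not code from pbrt) is to consume the second dimension unconditionally, even when its value may go unused:

sampler->StartPixel(p);
do {
    Float v = a(sampler->Get1D());
    Float u = sampler->Get1D();
    if (v > 0)
        v += b(u);
    v += c(sampler->Get1D());
} while (sampler->StartNextSample());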

For convenience, the Sampler base class provides a method that initializes a CameraSample for a given pixel.

<<Sampler Method Definitions>>+=  
CameraSample Sampler::GetCameraSample(const Point2i &pRaster) {
    CameraSample cs;
    cs.pFilm = (Point2f)pRaster + Get2D();
    cs.time = Get1D();
    cs.pLens = Get2D();
    return cs;
}

Some rendering algorithms make use of arrays of sample values for some of the dimensions they sample; most sample-generation algorithms can generate higher quality arrays of samples than series of individual samples by accounting for the distribution of sample values across all elements of the array and across the samples in a pixel.

If arrays of samples are needed, they must be requested before rendering begins: the Request[12]DArray() methods should be called once for each such dimension’s array, such as in methods that override the SamplerIntegrator::Preprocess() method. For example, in a scene with two area light sources, where the integrator traces four shadow rays to the first source and eight to the second, the integrator would ask for two 2D sample arrays for each image sample, with four and eight samples each. (A 2D array is required because two dimensions are needed to parameterize the surface of a light.) In Section 13.7, we will see how using arrays of samples corresponds to more densely sampling some of the dimensions of the light transport integral using the Monte Carlo technique of “splitting.”

<<Sampler Interface>>+=  
void Request1DArray(int n);
void Request2DArray(int n);

Most Samplers can do a better job of generating some particular sizes of these arrays than others. For example, samples from the ZeroTwoSequenceSampler are much better distributed in quantities that are powers of 2. The Sampler::RoundCount() method helps communicate this information. Code that needs arrays of samples should call this method with the desired number of samples to be taken, giving the Sampler an opportunity to adjust the number of samples to a better number. The returned value should then be used as the number of samples to actually request from the Sampler. The default implementation returns the given count unchanged.

<<Sampler Interface>>+=  
virtual int RoundCount(int n) const { return n; }
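For instance, an integrator’s Preprocess() method might request one 2D array per light after rounding each count to a size the sampler prefers. This is only a sketch of the pattern (the nLightSamples vector is illustrative and not part of the interface):

std::vector<int> nLightSamples;
for (const auto &light : scene.lights) {
    int nSamples = sampler.RoundCount(light->nSamples);
    nLightSamples.push_back(nSamples);
    sampler.Request2DArray(nSamples);
}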

During rendering, the Get[12]DArray() methods can be called to get a pointer to the start of a previously requested array of samples for the current dimension. Along the lines of Get1D() and Get2D(), these return a pointer to an array of samples whose size is given by the parameter n to the corresponding call to Request[12]DArray() during initialization. The caller must also provide the array size to the “get” method, which is used to verify that the returned buffer has the expected size.

<<Sampler Interface>>+=  
const Float *Get1DArray(int n);
const Point2f *Get2DArray(int n);
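During rendering, the arrays are then retrieved in the order in which they were requested; a null return indicates that no array is available, in which case code typically falls back to the regular sample stream. A sketch of this pattern (estimateDirect is a hypothetical helper, not pbrt’s EstimateDirect()):

Spectrum Ld(0.f);
const Point2f *uLight = sampler.Get2DArray(nSamples);
if (uLight) {
    for (int i = 0; i < nSamples; ++i)
        Ld += estimateDirect(uLight[i]);
    Ld /= nSamples;
} else
    Ld = estimateDirect(sampler.Get2D());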

When the work for one sample is complete, the integrator calls StartNextSample(). This call notifies the Sampler that subsequent requests for sample components should return values starting at the first dimension of the next sample for the current pixel. This method returns true until the originally requested number of samples per pixel has been generated (at which point the caller should either start work on another pixel or stop trying to use more samples).

<<Sampler Interface>>+=  
virtual bool StartNextSample();

Sampler implementations store a variety of state about the current sample: which pixel is being sampled, how many dimensions of the sample have been used, and so forth. This state makes it unsafe for a single Sampler to be used concurrently by multiple threads. The Clone() method generates a new instance of an initial Sampler for use by a rendering thread; it takes a seed value for the sampler’s random number generator (if any), so that different threads see different sequences of random numbers. Reusing the same pseudo-random number sequence across multiple image tiles can lead to subtle image artifacts, such as repeating noise patterns.

Implementations of the various Clone() methods aren’t generally interesting, so they won’t be included in the text here.

<<Sampler Interface>>+=  
virtual std::unique_ptr<Sampler> Clone(int seed) = 0;

Some light transport algorithms (notably stochastic progressive photon mapping in Section 16.2) don’t use all of the samples in a pixel before going to the next pixel, but instead jump around pixels, taking one sample at a time in each one. The SetSampleNumber() method allows integrators to set the index of the sample in the current pixel to generate next. This method returns false once sampleNum is greater than or equal to the number of originally requested samples per pixel.

<<Sampler Interface>>+= 
virtual bool SetSampleNumber(int64_t sampleNum);

7.2.3 Sampler Implementation

The Sampler base class provides implementations of some of the methods in its interface. First, the StartPixel() method implementation records the coordinates of the current pixel being sampled and resets currentPixelSampleIndex, the sample number in the pixel currently being generated, to zero. Note that this is a virtual method with an implementation; subclasses that override this method are required to explicitly call Sampler::StartPixel().

<<Sampler Method Definitions>>+=  
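// Sketch of the implementation described above (details may differ from
// pbrt's actual code): record the pixel, reset the sample index, and reset
// the sample array offsets.
void Sampler::StartPixel(const Point2i &p) {
    currentPixel = p;
    currentPixelSampleIndex = 0;
    <<Reset array offsets for next pixel sample>>
}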

The current pixel coordinates and sample number within the pixel are made available to Sampler subclasses, though they should treat these as read-only values.

<<Sampler Protected Data>>= 
Point2i currentPixel; int64_t currentPixelSampleIndex;

When the pixel sample is advanced or explicitly set, currentPixelSampleIndex is updated accordingly. As with StartPixel(), StartNextSample() and SetSampleNumber() are both virtual methods with default implementations; these implementations must also be explicitly called by overriding implementations in Sampler subclasses.

<<Sampler Method Definitions>>+=  
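// Sketch consistent with the description above: advance the sample index
// and report whether more samples remain for this pixel.
bool Sampler::StartNextSample() {
    <<Reset array offsets for next pixel sample>>
    return ++currentPixelSampleIndex < samplesPerPixel;
}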

<<Sampler Method Definitions>>+=  
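// Likewise a sketch: jump directly to the given sample index.
bool Sampler::SetSampleNumber(int64_t sampleNum) {
    <<Reset array offsets for next pixel sample>>
    currentPixelSampleIndex = sampleNum;
    return currentPixelSampleIndex < samplesPerPixel;
}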

The base Sampler implementation also takes care of recording requests for arrays of sample components and allocating storage for their values. The sizes of the requested sample arrays are stored in samples1DArraySizes and samples2DArraySizes, and memory for an entire pixel’s worth of array samples is allocated in sampleArray1D and sampleArray2D. The first n values in each allocation are used for the corresponding array for the first sample in the pixel, and so forth.

<<Sampler Method Definitions>>+=  
void Sampler::Request1DArray(int n) {
    samples1DArraySizes.push_back(n);
    sampleArray1D.push_back(std::vector<Float>(n * samplesPerPixel));
}

<<Sampler Method Definitions>>+=  
void Sampler::Request2DArray(int n) {
    samples2DArraySizes.push_back(n);
    sampleArray2D.push_back(std::vector<Point2f>(n * samplesPerPixel));
}

<<Sampler Protected Data>>+= 
std::vector<int> samples1DArraySizes, samples2DArraySizes;
std::vector<std::vector<Float>> sampleArray1D;
std::vector<std::vector<Point2f>> sampleArray2D;

As arrays in the current sample are accessed by the Get[12]DArray() methods, array1DOffset and array2DOffset are updated to hold the index of the next array to return for the sample vector.

<<Sampler Private Data>>= 
size_t array1DOffset, array2DOffset;

When a new pixel is started or when the sample number in the current pixel changes, these array offsets must be reset to 0.

<<Reset array offsets for next pixel sample>>= 
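// Presumably just resets both offsets to the first requested array.
array1DOffset = array2DOffset = 0;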

Returning the appropriate array pointer is a matter of first choosing the appropriate array based on how many have been consumed in the current sample vector and then returning the appropriate instance of it based on the current pixel sample index.

<<Sampler Method Definitions>>+=  
const Float *Sampler::Get1DArray(int n) {
    if (array1DOffset == sampleArray1D.size())
        return nullptr;
    return &sampleArray1D[array1DOffset++][currentPixelSampleIndex * n];
}

<<Sampler Method Definitions>>+=  
const Point2f *Sampler::Get2DArray(int n) {
    if (array2DOffset == sampleArray2D.size())
        return nullptr;
    return &sampleArray2D[array2DOffset++][currentPixelSampleIndex * n];
}

7.2.4 Pixel Sampler

While some sampling algorithms can easily incrementally generate elements of each sample vector, others more naturally generate all of the dimensions’ sample values for all of the sample vectors for a pixel at the same time. The PixelSampler class implements some functionality that is useful for the implementation of these types of samplers.

<<Sampler Declarations>>+=  
class PixelSampler : public Sampler {
  public:
    <<PixelSampler Public Methods>>
  protected:
    <<PixelSampler Protected Data>>
};

The number of dimensions of the sample vectors that will be used by the rendering algorithm isn’t known ahead of time. (Indeed, it’s only determined implicitly by the number of Get1D() and Get2D() calls and the requested arrays.) Therefore, the PixelSampler constructor takes a maximum number of dimensions for which non-array sample values will be computed by the Sampler. If all of these precomputed dimensions are consumed, then the PixelSampler just returns uniform random values for additional dimensions.

For each precomputed dimension, the constructor allocates a vector to store sample values, with one value for each sample in the pixel. These vectors are indexed as samples1D[dim][pixelSample]; while interchanging the order of these indices might seem more sensible, this memory layout—where all of the sample component values for all of the samples for a given dimension are contiguous in memory—turns out to be more convenient for code that generates these values.

<<Sampler Method Definitions>>+=  
PixelSampler::PixelSampler(int64_t samplesPerPixel, int nSampledDimensions)
    : Sampler(samplesPerPixel) {
    for (int i = 0; i < nSampledDimensions; ++i) {
        samples1D.push_back(std::vector<Float>(samplesPerPixel));
        samples2D.push_back(std::vector<Point2f>(samplesPerPixel));
    }
}

The key responsibility of Sampler implementations that inherit from PixelSampler then is to fill in the samples1D and samples2D arrays (in addition to sampleArray1D and sampleArray2D) in their StartPixel() methods.

current1DDimension and current2DDimension store the offsets into the respective arrays for the current pixel sample. They must be reset to 0 at the start of each new sample.

<<PixelSampler Protected Data>>= 
std::vector<std::vector<Float>> samples1D;
std::vector<std::vector<Point2f>> samples2D;
int current1DDimension = 0, current2DDimension = 0;

<<Sampler Method Definitions>>+=  
bool PixelSampler::StartNextSample() {
    current1DDimension = current2DDimension = 0;
    return Sampler::StartNextSample();
}

<<Sampler Method Definitions>>+=  
bool PixelSampler::SetSampleNumber(int64_t sampleNum) {
    current1DDimension = current2DDimension = 0;
    return Sampler::SetSampleNumber(sampleNum);
}

Given sample values in the arrays computed by the PixelSampler subclass, the implementation of Get1D() is just a matter of returning values for successive dimensions until all of the computed dimensions have been consumed, at which point uniform random values are returned.

<<Sampler Method Definitions>>+=  
Float PixelSampler::Get1D() {
    if (current1DDimension < samples1D.size())
        return samples1D[current1DDimension++][currentPixelSampleIndex];
    else
        return rng.UniformFloat();
}

The PixelSampler::Get2D() follows similarly, so it won’t be included here.

The random number generator used by the PixelSampler is protected rather than private; this is a convenience for some of its subclasses that also need random numbers when they initialize samples1D and samples2D.

<<PixelSampler Protected Data>>+= 
RNG rng;

7.2.5 Global Sampler

Other algorithms for generating samples are very much not pixel-based but naturally generate consecutive samples that are spread across the entire image, visiting completely different pixels in succession. (Many such samplers are effectively placing each additional sample such that it fills the biggest hole in the $n$-dimensional sample space, which naturally leads to subsequent samples being inside different pixels.) These sampling algorithms are somewhat problematic with the Sampler interface as described so far: consider, for example, a sampler that generates the series of sample values shown in the middle column of Table 7.2 for the first two dimensions. These sample values are multiplied by the image resolution in each dimension to get sample positions in the image plane (here we’re considering a $2 \times 3$ image for simplicity). Note that for the sampler here (actually the HaltonSampler), each pixel is visited by every sixth sample. If we are rendering an image with three samples per pixel, then to generate all of the samples for the pixel $(0, 0)$, we need to generate the samples with indices 0, 6, and 12, and so forth.

Table 7.2: The HaltonSampler generates the coordinates in the middle column for the first two dimensions. Because it is a GlobalSampler, it must define an inverse mapping from the pixel coordinates to sample indices; here, it places samples across a $2 \times 3$ pixel image by scaling the first coordinate by 2 and the second coordinate by 3, giving the pixel sample coordinates in the right column.

Sample index   [0, 1)^2 sample coordinates   Pixel sample coordinates
0              (0.000000, 0.000000)          (0.000000, 0.000000)
1              (0.500000, 0.333333)          (1.000000, 1.000000)
2              (0.250000, 0.666667)          (0.500000, 2.000000)
3              (0.750000, 0.111111)          (1.500000, 0.333333)
4              (0.125000, 0.444444)          (0.250000, 1.333333)
5              (0.625000, 0.777778)          (1.250000, 2.333333)
6              (0.375000, 0.222222)          (0.750000, 0.666667)
7              (0.875000, 0.555556)          (1.750000, 1.666667)
8              (0.062500, 0.888889)          (0.125000, 2.666667)
9              (0.562500, 0.037037)          (1.125000, 0.111111)
10             (0.312500, 0.370370)          (0.625000, 1.111111)
11             (0.812500, 0.703704)          (1.625000, 2.111111)
12             (0.187500, 0.148148)          (0.375000, 0.444444)
⋮

Given the existence of such samplers, we could have defined the Sampler interface so that it specifies the pixel being rendered for each sample rather than the other way around (i.e., the Sampler being told which pixel to render).

However, there were good reasons to adopt the current design: this approach makes it easy to decompose the film into small image tiles for multi-threaded rendering, where each thread computes pixels in a local region that can be efficiently merged into the final image. Thus, we must require that such samplers generate samples out of order, so that all samples for each pixel are generated in succession.

The GlobalSampler helps bridge between the expectations of the Sampler interface and the natural operation of these types of samplers. It provides implementations of all of the pure virtual Sampler methods, implementing them in terms of two new pure virtual methods that its subclasses must implement instead.

<<Sampler Declarations>>+= 
class GlobalSampler : public Sampler {
  public:
    <<GlobalSampler Public Methods>>
  private:
    <<GlobalSampler Private Data>>
};

<<GlobalSampler Public Methods>>= 
GlobalSampler(int64_t samplesPerPixel) : Sampler(samplesPerPixel) { }

There are two methods that implementations must provide. The first one, GetIndexForSample(), performs the inverse mapping from the current pixel and given sample index to a global index into the overall set of sample vectors. For example, for the Sampler that generated the values in Table 7.2, if currentPixel was $(0, 2)$, then GetIndexForSample(0) would return 2: the sample with index 2 has $[0, 1)^2$ sample coordinates $(0.25, 0.666667)$ and thus pixel sample coordinates $(0.5, 2.0)$, making it the first sample that lands in that pixel’s area.

<<GlobalSampler Public Methods>>+=  
virtual int64_t GetIndexForSample(int64_t sampleNum) const = 0;

Closely related, SampleDimension() returns the sample value for the given dimension of the indexth sample vector in the sequence. Because the first two dimensions are used to offset into the current pixel, they are handled specially: the value returned by implementations of this method should be the sample offset within the current pixel, rather than the original $[0, 1)^2$ sample value. For the example in Table 7.2, SampleDimension(4,1) would return $0.333333$, since the second dimension of the sample with index 4 is that offset into the pixel $(0, 1)$.

<<GlobalSampler Public Methods>>+= 
virtual Float SampleDimension(int64_t index, int dimension) const = 0;

When it’s time to start to generate samples for a pixel, it’s necessary to reset the dimension of the sample and find the index of the first sample in the pixel. As with all samplers, values for sample arrays are all generated next.

<<Sampler Method Definitions>>+=  
void GlobalSampler::StartPixel(const Point2i &p) {
    Sampler::StartPixel(p);
    dimension = 0;
    intervalSampleIndex = GetIndexForSample(0);
    <<Compute arrayEndDim for dimensions used for array samples>>
    <<Compute 1D array samples for GlobalSampler>>
    <<Compute 2D array samples for GlobalSampler>>
}

The dimension member variable tracks the next dimension that the sampler implementation will be asked to generate a sample value for; it’s incremented as Get1D() and Get2D() are called. intervalSampleIndex records the index of the sample that corresponds to the current sample $s_i$ in the current pixel.

<<GlobalSampler Private Data>>= 
int dimension; int64_t intervalSampleIndex;

It’s necessary to decide which dimensions of the sample vector to use for array samples. Under the assumption that the earlier dimensions will be better quality than later dimensions, it’s important to set aside the first few dimensions for the CameraSample, since the quality of those sample values often has a large impact on final image quality.

Therefore, the first dimensions up to arrayStartDim are devoted to regular 1D and 2D samples, and then the subsequent dimensions are devoted to first 1D and then 2D array samples. Finally, higher dimensions starting at arrayEndDim are used for further non-array 1D and 2D samples. It isn’t possible to compute arrayEndDim when the GlobalSampler constructor runs, since array samples haven’t been requested yet by the integrators. Therefore, this value is computed (repeatedly and redundantly) in the StartPixel() method.

<<GlobalSampler Private Data>>+= 
static const int arrayStartDim = 5; int arrayEndDim;

The total number of array samples for all pixel samples is given by the product of the number of pixel samples and the requested sample array size.

<<Compute arrayEndDim for dimensions used for array samples>>= 
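// Sketch consistent with the description above: each 1D array occupies one
// dimension and each 2D array occupies two.
arrayEndDim = arrayStartDim + sampleArray1D.size() + 2 * sampleArray2D.size();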

Actually generating the array samples is just a matter of computing the number of needed values in the current sample dimension.

<<Compute 1D array samples for GlobalSampler>>= 
for (size_t i = 0; i < samples1DArraySizes.size(); ++i) {
    int nSamples = samples1DArraySizes[i] * samplesPerPixel;
    for (int j = 0; j < nSamples; ++j) {
        int64_t index = GetIndexForSample(j);
        sampleArray1D[i][j] = SampleDimension(index, arrayStartDim + i);
    }
}

The 2D sample arrays are generated analogously; the <<Compute 2D array samples for GlobalSampler>> fragment isn’t included here.

When the pixel sample changes, it’s necessary to reset the current sample dimension counter and to compute the sample index for the next sample inside the pixel.

<<Sampler Method Definitions>>+=  
bool GlobalSampler::StartNextSample() {
    dimension = 0;
    intervalSampleIndex = GetIndexForSample(currentPixelSampleIndex + 1);
    return Sampler::StartNextSample();
}

<<Sampler Method Definitions>>+=  
bool GlobalSampler::SetSampleNumber(int64_t sampleNum) {
    dimension = 0;
    intervalSampleIndex = GetIndexForSample(sampleNum);
    return Sampler::SetSampleNumber(sampleNum);
}

Given this machinery, getting regular 1D sample values is just a matter of skipping over the dimensions allocated to array samples and passing the current sample index and dimension to the implementation’s SampleDimension() method.

<<Sampler Method Definitions>>+=  
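// Sketch consistent with the description above (details may differ from
// pbrt's actual code): skip past the dimensions reserved for array samples,
// then ask the implementation for the value of the next dimension.
Float GlobalSampler::Get1D() {
    if (dimension >= arrayStartDim && dimension < arrayEndDim)
        dimension = arrayEndDim;
    return SampleDimension(intervalSampleIndex, dimension++);
}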

2D samples follow analogously.

<<Sampler Method Definitions>>+=
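// Likewise a sketch for the 2D case, consuming two consecutive dimensions.
Point2f GlobalSampler::Get2D() {
    if (dimension + 1 >= arrayStartDim && dimension < arrayEndDim)
        dimension = arrayEndDim;
    Point2f p(SampleDimension(intervalSampleIndex, dimension),
              SampleDimension(intervalSampleIndex, dimension + 1));
    dimension += 2;
    return p;
}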