7.9 Film and the Imaging Pipeline

The type of film or sensor in a camera has a dramatic effect on the way that incident light is transformed into colors in an image. In pbrt, the Film class models the sensing device in the simulated camera. After the radiance is found for each camera ray, the Film implementation determines the sample’s contribution to the pixels around the point on the film plane where the camera ray began and updates its representation of the image. When the main rendering loop exits, the Film writes the final image to a file.

For realistic camera models, Section 6.4.7 introduced the measurement equation, which describes how a sensor in a camera measures the amount of energy arriving over the sensor area over a period of time. For simpler camera models, we can consider the sensor to be measuring the average radiance over a small area over some period of time. The effect of the choice of which measurement to take is encapsulated in the weight for the ray returned by Camera::GenerateRayDifferential(). Therefore, the Film implementation can proceed without having to account for these variations, as long as it scales the provided radiance values by these weights.

This section introduces a single Film implementation that applies the pixel reconstruction equation to compute final pixel values. For a physically based renderer, it’s generally best for the resulting images to be stored in a floating-point image format. Doing so provides more flexibility in how the output can be used than if a traditional image format with 8-bit unsigned integer values is used; floating-point formats avoid the substantial loss of information that comes from quantizing images to 8 bits.

In order to display such images on modern display devices, it is necessary to map these floating-point pixel values to discrete values for display. For example, computer monitors generally expect the color of each pixel to be described by an RGB color triple, not an arbitrary spectral power distribution. Spectra described by general basis function coefficients must therefore be converted to an RGB representation before they can be displayed. A related problem is that displays have a substantially smaller range of displayable radiance values than the range present in many real-world scenes. Therefore, pixel values must be mapped to the displayable range in a way that causes the final displayed image to appear as close as possible to the way it would appear on an ideal display device without this limitation. These issues are addressed by research into tone mapping; the “Further Reading” section has more information about this topic.

7.9.1 The Film Class

Film is defined in the files core/film.h and core/film.cpp.

<<Film Declarations>>= 
class Film {
  public:
    <<Film Public Methods>> 
    Film(const Point2i &resolution, const Bounds2f &cropWindow,
         std::unique_ptr<Filter> filter, Float diagonal,
         const std::string &filename, Float scale);
    Bounds2i GetSampleBounds() const;
    Bounds2f GetPhysicalExtent() const;
    std::unique_ptr<FilmTile> GetFilmTile(const Bounds2i &sampleBounds);
    void MergeFilmTile(std::unique_ptr<FilmTile> tile);
    void SetImage(const Spectrum *img) const;
    void AddSplat(const Point2f &p, const Spectrum &v);
    void WriteImage(Float splatScale = 1);
    void Clear();
    <<Film Public Data>> 
    const Point2i fullResolution;
    const Float diagonal;
    std::unique_ptr<Filter> filter;
    const std::string filename;
    Bounds2i croppedPixelBounds;

  private:
    <<Film Private Data>> 
    struct Pixel {
        Float xyz[3] = { 0, 0, 0 };
        Float filterWeightSum = 0;
        AtomicFloat splatXYZ[3];
        Float pad;
    };
    std::unique_ptr<Pixel[]> pixels;
    static constexpr int filterTableWidth = 16;
    Float filterTable[filterTableWidth * filterTableWidth];
    std::mutex mutex;
    const Float scale;
    <<Film Private Methods>> 
    Pixel &GetPixel(const Point2i &p) {
        int width = croppedPixelBounds.pMax.x - croppedPixelBounds.pMin.x;
        int offset = (p.x - croppedPixelBounds.pMin.x) +
                     (p.y - croppedPixelBounds.pMin.y) * width;
        return pixels[offset];
    }
};

A number of values are passed to the constructor: the overall resolution of the image in pixels; a crop window that may specify a subset of the image to render; the length of the diagonal of the film’s physical area, which is specified to the constructor in millimeters but is converted to meters here; a filter function; the filename for the output image; and parameters that control how the image pixel values are stored in files.

<<Film Method Definitions>>= 
Film::Film(const Point2i &resolution, const Bounds2f &cropWindow,
           std::unique_ptr<Filter> filt, Float diagonal,
           const std::string &filename, Float scale)
    : fullResolution(resolution), diagonal(diagonal * .001),
      filter(std::move(filt)), filename(filename), scale(scale) {
    <<Compute film image bounds>> 
    <<Allocate film image storage>> 
    <<Precompute filter weight table>> 
}

<<Film Public Data>>= 
const Point2i fullResolution;
const Float diagonal;
std::unique_ptr<Filter> filter;
const std::string filename;

In conjunction with the overall image resolution, the crop window gives the bounds of pixels that actually need to be stored and written out. Crop windows are useful for debugging or for breaking a large image into small pieces that can be rendered on different computers and reassembled later. The crop window is specified in NDC space, with each coordinate ranging from 0 to 1 (Figure 7.47).

Figure 7.47: The image crop window specifies a subset of the image to be rendered. It is given in NDC space, with coordinates ranging from (0, 0) to (1, 1). The Film class only allocates space for and stores pixel values in the region inside the crop window.

Film::croppedPixelBounds stores the pixel bounds from the upper-left to the lower-right corners of the crop window. Fractional pixel coordinates are rounded up; this ensures that if an image is rendered in pieces with abutting crop windows, each final pixel will be present in only one of the subimages.
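As a quick check of this rounding rule, here is a minimal standalone sketch (the CropPixelRange() helper is ours, not pbrt’s) showing that two abutting crop windows partition the pixels with no overlap:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Compute the half-open pixel range [min, max) covered by a crop window
// over a given resolution, rounding fractional coordinates up as the
// Film constructor does. (Hypothetical helper for illustration.)
std::pair<int, int> CropPixelRange(int fullRes, float cropMin, float cropMax) {
    return { (int)std::ceil(fullRes * cropMin),
             (int)std::ceil(fullRes * cropMax) };
}
```

With a 7-pixel-wide image and crop windows [0, 0.5] and [0.5, 1], the ranges come out as [0, 4) and [4, 7): pixel 3 belongs only to the first subimage and pixel 4 only to the second, so reassembly touches each pixel exactly once.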

<<Compute film image bounds>>= 
croppedPixelBounds =
    Bounds2i(Point2i(std::ceil(fullResolution.x * cropWindow.pMin.x),
                     std::ceil(fullResolution.y * cropWindow.pMin.y)),
             Point2i(std::ceil(fullResolution.x * cropWindow.pMax.x),
                     std::ceil(fullResolution.y * cropWindow.pMax.y)));

<<Film Public Data>>+= 
Bounds2i croppedPixelBounds;

Given the pixel resolution of the (possibly cropped) image, the constructor allocates an array of Pixel structures, one for each pixel. The running weighted sums of spectral pixel contributions are represented using XYZ colors (Section 5.2.1) and are stored in the xyz member variable. filterWeightSum holds the sum of filter weight values for the sample contributions to the pixel. splatXYZ holds an (unweighted) sum of sample splats. The pad member is unused; its sole purpose is to ensure that the Pixel structure is 32 bytes in size, rather than 28 as it would be otherwise (assuming 4-byte Floats; otherwise, it ensures a 64-byte structure). This padding ensures that a Pixel won’t straddle a cache line, so that no more than one cache miss will be incurred when a Pixel is accessed (as long as the first Pixel in the array is allocated at the start of a cache line).
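The size claim can be verified with a small standalone mock of this layout, assuming 4-byte Floats and an AtomicFloat that wraps a single 32-bit value:

```cpp
#include <cassert>

// A mock of the Pixel layout described above, with plain floats standing
// in for AtomicFloat (which stores one 32-bit value). The pad member
// brings the total from 28 to 32 bytes, so an aligned array of Pixels
// places each one entirely within a 64-byte cache line.
using Float = float;
struct MockPixel {
    Float xyz[3] = { 0, 0, 0 };   // 12 bytes
    Float filterWeightSum = 0;    //  4 bytes
    Float splatXYZ[3];            // 12 bytes (stand-in for AtomicFloat[3])
    Float pad;                    //  4 bytes of padding
};
static_assert(sizeof(MockPixel) == 32, "Pixel should not straddle cache lines");
```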

<<Film Private Data>>= 
struct Pixel {
    Float xyz[3] = { 0, 0, 0 };
    Float filterWeightSum = 0;
    AtomicFloat splatXYZ[3];
    Float pad;
};
std::unique_ptr<Pixel[]> pixels;

<<Allocate film image storage>>= 
pixels = std::unique_ptr<Pixel[]>(new Pixel[croppedPixelBounds.Area()]);

Two natural alternatives to using XYZ colors to store pixel values would be to use Spectrum values or to store RGB color. Here, it isn’t worthwhile to store complete Spectrum values, even when doing full spectral rendering. Because the final colors written to the output file don’t include the full set of Spectrum samples, converting to a tristimulus value here doesn’t represent a loss of information versus storing Spectrums and converting to a tristimulus value on image output. Not storing complete Spectrum values in this case can save a substantial amount of memory if the Spectrum has a large number of samples. (If pbrt supported saving SampledSpectrum values to files, then this design choice would need to be revisited.)

We have chosen to use XYZ color rather than RGB to emphasize that XYZ is a display-independent representation of color, while RGB requires assuming a particular set of display response curves (Section 5.2.2). (In the end, we will, however, have to convert to RGB, since few image file formats store XYZ color.)

With typical filter settings, every image sample may contribute to 16 or more pixels in the final image. Particularly for simple scenes, where relatively little time is spent on ray intersection testing and shading computations, the time spent updating the image for each sample can be significant. Therefore, the Film precomputes a table of filter values so that we can avoid the expense of virtual function calls to the Filter::Evaluate() method as well as the expense of evaluating the filter and can instead use values from the table for filtering. The error introduced by not evaluating the filter at each sample’s precise location isn’t noticeable in practice.

The implementation here makes the reasonable assumption that the filter is defined such that f(x, y) = f(|x|, |y|), so the table needs to hold values for only the positive quadrant of filter offsets. This assumption is true for all of the Filters currently available in pbrt and is true for most filters used in practice. This makes the table one-fourth the size and improves the coherence of memory accesses, leading to better cache performance.
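A minimal 1D sketch of this idea (illustrative code, not pbrt’s): tabulate a triangle filter over the positive half of its support only, and serve lookups for either sign of the offset by indexing with its absolute value:

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// Tabulate f(x) = radius - |x| at cell centers over [0, radius); queries
// for negative x reuse the same entries via |x|, just as the 2D table in
// the text exploits f(x, y) = f(|x|, |y|) to store only one quadrant.
constexpr int kTableWidth = 16;
const float radius = 2.f;
float table[kTableWidth];

void BuildTable() {
    for (int i = 0; i < kTableWidth; ++i) {
        float x = (i + 0.5f) * radius / kTableWidth;  // cell center
        table[i] = radius - x;                        // triangle filter
    }
}

float Lookup(float x) {
    int i = std::min((int)std::floor(std::abs(x) / radius * kTableWidth),
                     kTableWidth - 1);
    return table[i];
}
```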

<<Precompute filter weight table>>= 
int offset = 0;
for (int y = 0; y < filterTableWidth; ++y) {
    for (int x = 0; x < filterTableWidth; ++x, ++offset) {
        Point2f p;
        p.x = (x + 0.5f) * filter->radius.x / filterTableWidth;
        p.y = (y + 0.5f) * filter->radius.y / filterTableWidth;
        filterTable[offset] = filter->Evaluate(p);
    }
}

<<Film Private Data>>+=  
static constexpr int filterTableWidth = 16;
Float filterTable[filterTableWidth * filterTableWidth];

The Film implementation is responsible for determining the range of integer pixel values for which the Sampler must generate samples. The area to be sampled is returned by the GetSampleBounds() method. Because the pixel reconstruction filter generally spans a number of pixels, the Sampler must generate image samples a bit outside of the range of pixels that will actually be output. This way, even pixels at the boundary of the image will have an equal density of samples around them in all directions and won’t be biased toward values from the interior of the image. This detail is also important when rendering images in pieces with crop windows, since it eliminates artifacts at the edges of the subimages.

Computing the sample bounds involves accounting for the half-pixel offsets when converting from discrete to continuous pixel coordinates, expanding by the filter radius, and then rounding outward.

<<Film Method Definitions>>+=  
Bounds2i Film::GetSampleBounds() const {
    Bounds2f floatBounds(
        Floor(Point2f(croppedPixelBounds.pMin) + Vector2f(0.5f, 0.5f) -
              filter->radius),
        Ceil(Point2f(croppedPixelBounds.pMax) - Vector2f(0.5f, 0.5f) +
             filter->radius));
    return (Bounds2i)floatBounds;
}
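For example, with cropped pixel bounds spanning [0, 4) in x and a filter radius of 2, the continuous positions of the first and last pixels are 0.5 and 3.5, so the sample bounds in x become [-2, 6]. A 1D sketch of the arithmetic (helper names are ours):

```cpp
#include <cassert>
#include <cmath>

// 1D version of GetSampleBounds(): discrete pixel p sits at continuous
// coordinate p + 0.5, so expand the first/last pixel positions by the
// filter radius and round outward.
int SampleBoundMin(int pixelMin, float radius) {
    return (int)std::floor(pixelMin + 0.5f - radius);
}
int SampleBoundMax(int pixelMax, float radius) {  // pixelMax is exclusive
    return (int)std::ceil(pixelMax - 0.5f + radius);
}
```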

GetPhysicalExtent() returns the actual extent of the film in the scene. This information is specifically needed by the RealisticCamera. Given the length of the film diagonal and the aspect ratio of the image, we can compute the size of the sensor in the x and y directions. If we denote the diagonal length by d and the width and height of the film sensor by x and y, then we know that x² + y² = d². We can define the aspect ratio a of the image by a = y/x, so y = ax, which gives x² + (a²x²) = d². Solving for x gives

x = √(d² / (1 + a²)).

The implementation of GetPhysicalExtent() follows directly. The returned extent is centered around (0, 0).

<<Film Method Definitions>>+=  
Bounds2f Film::GetPhysicalExtent() const {
    Float aspect = (Float)fullResolution.y / (Float)fullResolution.x;
    Float x = std::sqrt(diagonal * diagonal / (1 + aspect * aspect));
    Float y = aspect * x;
    return Bounds2f(Point2f(-x / 2, -y / 2), Point2f(x / 2, y / 2));
}
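A quick numerical check of this derivation (standalone sketch, not pbrt’s code), using a 35 mm (0.035 m) diagonal and a 1200×800 image, so a = 2/3:

```cpp
#include <cassert>
#include <cmath>

// Recover sensor width x and height y from the diagonal and resolution,
// following x = sqrt(d^2 / (1 + a^2)) and y = a x from the text.
struct Extent { float x, y; };
Extent PhysicalExtent(float diagonal, int resX, int resY) {
    float a = (float)resY / (float)resX;
    float x = std::sqrt(diagonal * diagonal / (1 + a * a));
    return { x, a * x };
}
```

Plugging the result back in confirms x² + y² = d², i.e., the computed extent has exactly the requested diagonal.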

7.9.2 Supplying Pixel Values to the Film

There are three ways that the sample contributions can be provided to the film. The first is driven by samples generated by the Sampler over tiles of the image. While the most straightforward interface would be to allow renderers to provide a film pixel location and a Spectrum with the contribution of the corresponding ray directly to the Film, it’s not easy to provide a high-performance implementation of such a method in the presence of multi-threading, where multiple threads may end up trying to update the same portion of the image concurrently.

Therefore, Film defines an interface where threads can specify that they’re generating samples in some extent of pixels with respect to the overall image. Given the sample bounds, GetFilmTile() in turn returns a pointer to a FilmTile object that stores contributions for the pixels in the corresponding region of the image. Ownership of the FilmTile and the data it stores is exclusive to the caller, so that thread can provide sample values to the FilmTile without worrying about contention with other threads. When it has finished work on the tile, the thread passes the completed tile back to the Film, which safely merges it into the final image.

<<Film Method Definitions>>+=  
std::unique_ptr<FilmTile> Film::GetFilmTile(const Bounds2i &sampleBounds) {
    <<Bound image pixels that samples in sampleBounds contribute to>> 
    return std::unique_ptr<FilmTile>(new FilmTile(
        tilePixelBounds, filter->radius, filterTable, filterTableWidth));
}

Given a bounding box of the pixel area that samples will be generated in, there are two steps to compute the bounding box of image pixels that the sample values will contribute to. First, the effects of the discrete-to-continuous pixel coordinate transformation and the radius of the filter must be accounted for. Second, the resulting bound must be clipped to the overall image pixel bounds; pixels outside the image by definition don’t need to be accounted for.

<<Bound image pixels that samples in sampleBounds contribute to>>= 
Vector2f halfPixel = Vector2f(0.5f, 0.5f);
Bounds2f floatBounds = (Bounds2f)sampleBounds;
Point2i p0 = (Point2i)Ceil(floatBounds.pMin - halfPixel - filter->radius);
Point2i p1 = (Point2i)Floor(floatBounds.pMax - halfPixel + filter->radius) +
             Point2i(1, 1);
Bounds2i tilePixelBounds = Intersect(Bounds2i(p0, p1), croppedPixelBounds);

<<Film Declarations>>+= 
class FilmTile {
  public:
    <<FilmTile Public Methods>> 
    FilmTile(const Bounds2i &pixelBounds, const Vector2f &filterRadius,
             const Float *filterTable, int filterTableSize)
        : pixelBounds(pixelBounds), filterRadius(filterRadius),
          invFilterRadius(1 / filterRadius.x, 1 / filterRadius.y),
          filterTable(filterTable), filterTableSize(filterTableSize) {
        pixels = std::vector<FilmTilePixel>(std::max(0, pixelBounds.Area()));
    }
    void AddSample(const Point2f &pFilm, const Spectrum &L,
                   Float sampleWeight = 1.) {
        <<Compute sample’s raster bounds>> 
        <<Loop over filter support and add sample to pixel arrays>> 
    }
    FilmTilePixel &GetPixel(const Point2i &p) {
        int width = pixelBounds.pMax.x - pixelBounds.pMin.x;
        int offset = (p.x - pixelBounds.pMin.x) +
                     (p.y - pixelBounds.pMin.y) * width;
        return pixels[offset];
    }
    const FilmTilePixel &GetPixel(const Point2i &p) const {
        int width = pixelBounds.pMax.x - pixelBounds.pMin.x;
        int offset = (p.x - pixelBounds.pMin.x) +
                     (p.y - pixelBounds.pMin.y) * width;
        return pixels[offset];
    }
    Bounds2i GetPixelBounds() const { return pixelBounds; }

  private:
    <<FilmTile Private Data>> 
    const Bounds2i pixelBounds;
    const Vector2f filterRadius, invFilterRadius;
    const Float *filterTable;
    const int filterTableSize;
    std::vector<FilmTilePixel> pixels;
    friend class Film;
};

The FilmTile constructor takes a 2D bounding box that gives the bounds of the pixels in the final image that it must provide storage for as well as additional information about the reconstruction filter being used, including a pointer to the filter function values tabulated in <<Precompute filter weight table>>.

<<FilmTile Public Methods>>= 
FilmTile(const Bounds2i &pixelBounds, const Vector2f &filterRadius,
         const Float *filterTable, int filterTableSize)
    : pixelBounds(pixelBounds), filterRadius(filterRadius),
      invFilterRadius(1 / filterRadius.x, 1 / filterRadius.y),
      filterTable(filterTable), filterTableSize(filterTableSize) {
    pixels = std::vector<FilmTilePixel>(std::max(0, pixelBounds.Area()));
}

<<FilmTile Private Data>>= 
const Bounds2i pixelBounds;
const Vector2f filterRadius, invFilterRadius;
const Float *filterTable;
const int filterTableSize;
std::vector<FilmTilePixel> pixels;

For each pixel, both a sum of the weighted contributions from the pixel samples (according to the reconstruction filter weights) and a sum of the filter weights are maintained.

<<FilmTilePixel Declarations>>= 
struct FilmTilePixel {
    Spectrum contribSum = 0.f;
    Float filterWeightSum = 0.f;
};

Once the radiance carried by a ray for a sample has been computed, the Integrator calls FilmTile::AddSample(). It takes a sample and corresponding radiance value as well as the weight for the sample’s contribution originally returned by Camera::GenerateRayDifferential(). It updates the stored image using the reconstruction filter with the pixel filtering equation.

<<FilmTile Public Methods>>+=  
void AddSample(const Point2f &pFilm, const Spectrum &L,
               Float sampleWeight = 1.) {
    <<Compute sample’s raster bounds>> 
    <<Loop over filter support and add sample to pixel arrays>> 
}

To understand the operation of FilmTile::AddSample(), first recall the pixel filtering equation:

I(x, y) = (∑_i f(x − x_i, y − y_i) w(x_i, y_i) L(x_i, y_i)) / (∑_i f(x − x_i, y − y_i)).

It computes each pixel’s value I(x, y) as the weighted sum of nearby samples’ radiance values, using both a filter function f and the sample weight returned by the Camera, w(x_i, y_i), to compute the contribution of the radiance value to the final pixel value. Because all of the Filters in pbrt have finite extent, this method starts by computing which pixels will be affected by the current sample. Then, turning the pixel filtering equation inside out, it updates two running sums for each pixel (x, y) that is affected by the sample. One sum accumulates the numerator of the pixel filtering equation, and the other accumulates the denominator. At the end of rendering, the final pixel values are computed by performing the division.
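The two running sums can be sketched in isolation (illustrative types, not pbrt’s):

```cpp
#include <cassert>
#include <cmath>

// Accumulate the numerator and denominator of the pixel filtering
// equation per sample; the division happens once, at the end.
struct RunningPixel {
    float contribSum = 0, filterWeightSum = 0;
    void Add(float filterWeight, float cameraWeight, float L) {
        contribSum += filterWeight * cameraWeight * L;   // numerator term
        filterWeightSum += filterWeight;                 // denominator term
    }
    float Value() const {
        return filterWeightSum != 0 ? contribSum / filterWeightSum : 0;
    }
};
```

Two samples with filter weights 1 and 0.5 and radiance 2 and 4 yield (1·2 + 0.5·4) / 1.5, the filter-weighted average of the two.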

To find which pixels a sample potentially contributes to, FilmTile::AddSample() converts the continuous sample coordinates to discrete coordinates by subtracting 0.5 from x and y. It then offsets this value by the filter radius in each direction (Figure 7.48), transforms it to the tile coordinate space, and takes the ceiling of the minimum coordinates and the floor of the maximum, since pixels outside the bounds of the extent are unaffected by the sample. Finally, the pixel bounds are clipped to the bounds of the pixels in the tile. While the sample may theoretically contribute to pixels outside the tile, any such pixels must be outside the image extent.

Figure 7.48: Given an image sample at some position on the image plane (solid dot), it is necessary to determine which pixel values (empty dots) are affected by the sample’s contribution. This is done by taking the offsets in the x and y directions according to the pixel reconstruction filter’s radius (solid lines) and finding the pixels inside this region.

<<Compute sample’s raster bounds>>= 
Point2f pFilmDiscrete = pFilm - Vector2f(0.5f, 0.5f);
Point2i p0 = (Point2i)Ceil(pFilmDiscrete - filterRadius);
Point2i p1 = (Point2i)Floor(pFilmDiscrete + filterRadius) + Point2i(1, 1);
p0 = Max(p0, pixelBounds.pMin);
p1 = Min(p1, pixelBounds.pMax);
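As a concrete instance (standalone sketch with our own helper): a sample at x = 2.3 with a filter radius of 2 has discrete coordinate 1.8 and thus contributes to pixels x ∈ [0, 4); at y = 4.8 it contributes to y ∈ [3, 7):

```cpp
#include <cassert>
#include <cmath>

// 1D version of the raster-bounds computation above: convert the sample
// to discrete coordinates, expand by the filter radius, and round inward
// (ceil at the low end, floor + 1 to make a half-open high end).
struct Range { int lo, hi; };  // half-open [lo, hi)
Range FilterSupport(float pFilm, float radius) {
    float discrete = pFilm - 0.5f;
    return { (int)std::ceil(discrete - radius),
             (int)std::floor(discrete + radius) + 1 };
}
```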

Given the bounds of pixels that are affected by this sample, it’s now possible to loop over all of those pixels and accumulate the filtered sample weights at each of them.

<<Loop over filter support and add sample to pixel arrays>>= 
<<Precompute x and y filter table offsets>> 
for (int y = p0.y; y < p1.y; ++y) {
    for (int x = p0.x; x < p1.x; ++x) {
        <<Evaluate filter value at (x, y) pixel>> 
        <<Update pixel values with filtered sample contribution>> 
    }
}

Each discrete integer pixel (x, y) has an instance of the filter function centered around it. To compute the filter weight for a particular sample, it’s necessary to find the offset from the pixel to the sample’s position in discrete coordinates and evaluate the filter function. If we were evaluating the filter explicitly, the appropriate computation would be

filterWeight = filter->Evaluate(Point2f(x - pFilmDiscrete.x,
                                        y - pFilmDiscrete.y));

Instead, the implementation retrieves the appropriate filter weight from the table.

To find the filter weight for a pixel (x′, y′) given the sample position (x, y), this routine computes the offset (x′ − x, y′ − y) and converts it into coordinates for the filter weights lookup table. This can be done directly by dividing each component of the sample offset by the filter radius in that direction, giving a value between 0 and 1, and then multiplying by the table size. This process can be further optimized by noting that along each row of pixels in the x direction, the difference in y, and thus the y offset into the filter table, is constant. Analogously, for each column of pixels, the x offset is constant. Therefore, before looping over the pixels here it’s possible to precompute these indices and store them in two 1D arrays, saving repeated work in the loop.

<<Precompute x and y filter table offsets>>= 
int *ifx = ALLOCA(int, p1.x - p0.x);
for (int x = p0.x; x < p1.x; ++x) {
    Float fx = std::abs((x - pFilmDiscrete.x) * invFilterRadius.x *
                        filterTableSize);
    ifx[x - p0.x] = std::min((int)std::floor(fx), filterTableSize - 1);
}
int *ify = ALLOCA(int, p1.y - p0.y);
for (int y = p0.y; y < p1.y; ++y) {
    Float fy = std::abs((y - pFilmDiscrete.y) * invFilterRadius.y *
                        filterTableSize);
    ify[y - p0.y] = std::min((int)std::floor(fy), filterTableSize - 1);
}

Now at each pixel, the x and y offsets into the filter table can be found for the pixel, leading to the offset into the array and thus the filter value.

<<Evaluate filter value at (x, y) pixel>>= 
int offset = ify[y - p0.y] * filterTableSize + ifx[x - p0.x];
Float filterWeight = filterTable[offset];

For each affected pixel, we can now add its weighted spectral contribution and the filter weight to the appropriate value in the pixels array.

<<Update pixel values with filtered sample contribution>>= 
FilmTilePixel &pixel = GetPixel(Point2i(x, y));
pixel.contribSum += L * sampleWeight * filterWeight;
pixel.filterWeightSum += filterWeight;

The GetPixel() method takes pixel coordinates with respect to the overall image and converts them to coordinates in the film tile before indexing into the pixels array. In addition to the version here, there is also a const variant of the method that returns a const FilmTilePixel &.

<<FilmTile Public Methods>>+=  
FilmTilePixel &GetPixel(const Point2i &p) {
    int width = pixelBounds.pMax.x - pixelBounds.pMin.x;
    int offset = (p.x - pixelBounds.pMin.x) +
                 (p.y - pixelBounds.pMin.y) * width;
    return pixels[offset];
}

Rendering threads present FilmTiles to be merged into the image stored by Film using the MergeFilmTile() method. Its implementation starts by acquiring a lock to a mutex in order to ensure that multiple threads aren’t simultaneously modifying image pixel values. Note that because MergeFilmTile() takes a std::unique_ptr to the tile, ownership of the tile’s memory is transferred when this method is called. Calling code should therefore no longer attempt to add contributions to a tile after calling this method. Storage for the FilmTile is freed automatically at the end of the execution of MergeFilmTile() when the tile parameter goes out of scope.
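The overall pattern can be modeled in a few lines of standalone code (the Tile and Image types here are simplified stand-ins, not pbrt’s classes): each thread accumulates into a privately owned tile, and only the final merge takes the lock, so per-sample updates are contention-free.

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <vector>

// Per-thread accumulation buffer, owned exclusively by one thread.
struct Tile {
    std::vector<float> sums;
    explicit Tile(int n) : sums(n, 0.f) {}
};

struct Image {
    std::vector<float> pixels;
    std::mutex mutex;
    explicit Image(int n) : pixels(n, 0.f) {}
    // Taking the tile by std::unique_ptr transfers ownership; the tile's
    // storage is freed when the parameter goes out of scope, as with
    // pbrt's MergeFilmTile().
    void MergeTile(std::unique_ptr<Tile> tile) {
        std::lock_guard<std::mutex> lock(mutex);
        for (size_t i = 0; i < tile->sums.size(); ++i)
            pixels[i] += tile->sums[i];
    }
};
```

A caller hands the tile back with `img.MergeTile(std::move(tile))` and must not touch it afterward.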

<<Film Method Definitions>>+=  
void Film::MergeFilmTile(std::unique_ptr<FilmTile> tile) {
    std::lock_guard<std::mutex> lock(mutex);
    for (Point2i pixel : tile->GetPixelBounds()) {
        <<Merge pixel into Film::pixels>> 
    }
}

<<Film Private Data>>+=  
std::mutex mutex;

When merging a tile’s contributions in the final image, it’s necessary for calling code to be able to find the bound of pixels that the tile has contributions for.

<<FilmTile Public Methods>>+= 
Bounds2i GetPixelBounds() const { return pixelBounds; }

For each pixel in the tile, it’s just necessary to merge its contribution into the values stored in Film::pixels.

<<Merge pixel into Film::pixels>>= 
const FilmTilePixel &tilePixel = tile->GetPixel(pixel);
Pixel &mergePixel = GetPixel(pixel);
Float xyz[3];
tilePixel.contribSum.ToXYZ(xyz);
for (int i = 0; i < 3; ++i)
    mergePixel.xyz[i] += xyz[i];
mergePixel.filterWeightSum += tilePixel.filterWeightSum;

<<Film Private Methods>>= 
Pixel &GetPixel(const Point2i &p) {
    int width = croppedPixelBounds.pMax.x - croppedPixelBounds.pMin.x;
    int offset = (p.x - croppedPixelBounds.pMin.x) +
                 (p.y - croppedPixelBounds.pMin.y) * width;
    return pixels[offset];
}

It’s also useful for some Integrator implementations to be able to provide values for all of the pixels in the image at once. The SetImage() method allows this mode of operation. Note that the number of elements in the array pointed to by the image parameter should be equal to croppedPixelBounds.Area(). The implementation of SetImage() is a straightforward matter of copying the given values after converting them to XYZ color.

<<Film Method Definitions>>+=  
void Film::SetImage(const Spectrum *img) const {
    int nPixels = croppedPixelBounds.Area();
    for (int i = 0; i < nPixels; ++i) {
        Pixel &p = pixels[i];
        img[i].ToXYZ(p.xyz);
        p.filterWeightSum = 1;
        p.splatXYZ[0] = p.splatXYZ[1] = p.splatXYZ[2] = 0;
    }
}

Some light transport algorithms (notably bidirectional path tracing, which is introduced in Section 16.3) require the ability to “splat” contributions to arbitrary pixels. Rather than computing the final pixel value as a weighted average of contributing splats, splats are simply summed. Generally, the more splats that are around a given pixel, the brighter the pixel will be. The Pixel::splatXYZ member variable is declared to be of AtomicFloat type, which allows multiple threads to concurrently update pixel values via the AddSplat() method without additional synchronization.
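For illustration, an atomic floating-point add can be built from a compare-and-swap loop along these general lines (a sketch in the spirit of AtomicFloat, not its exact implementation, which operates on the float’s bit representation):

```cpp
#include <atomic>
#include <cassert>

// Lock-free add to a shared float: retry the compare-exchange until no
// other thread has modified the value between our load and our store.
void AtomicAdd(std::atomic<float> &a, float v) {
    float oldVal = a.load();
    while (!a.compare_exchange_weak(oldVal, oldVal + v))
        ;  // compare_exchange_weak reloads oldVal on failure
}
```

This is why concurrent AddSplat() calls need no mutex: each splat is a single atomic read-modify-write per channel.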

<<Film Method Definitions>>+=  
void Film::AddSplat(const Point2f &p, const Spectrum &v) {
    if (!InsideExclusive((Point2i)p, croppedPixelBounds))
        return;
    Float xyz[3];
    v.ToXYZ(xyz);
    Pixel &pixel = GetPixel((Point2i)p);
    for (int i = 0; i < 3; ++i)
        pixel.splatXYZ[i].Add(xyz[i]);
}
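The lock-free behavior that AddSplat() relies on can be sketched as follows. This is an illustrative approximation, not pbrt's actual AtomicFloat class, but it follows the same idea: store the float's bits in a std::atomic integer and update them with a compare-and-swap loop, so concurrent Add() calls from multiple threads never lose a contribution.

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>

// Sketch of a lock-free floating-point accumulator in the spirit of pbrt's
// AtomicFloat (illustrative; names and details differ from pbrt's class).
class AtomicFloatSketch {
public:
    explicit AtomicFloatSketch(float v = 0) : bits(FloatToBits(v)) {}
    float Load() const { return BitsToFloat(bits.load()); }
    void Add(float v) {
        uint32_t oldBits = bits.load(), newBits;
        do {
            // Recompute the sum from the freshest value; the CAS below
            // retries if another thread updated bits in the meantime.
            newBits = FloatToBits(BitsToFloat(oldBits) + v);
        } while (!bits.compare_exchange_weak(oldBits, newBits));
    }
private:
    static uint32_t FloatToBits(float f) {
        uint32_t u; std::memcpy(&u, &f, sizeof(f)); return u;
    }
    static float BitsToFloat(uint32_t u) {
        float f; std::memcpy(&f, &u, sizeof(u)); return f;
    }
    std::atomic<uint32_t> bits;
};
```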

7.9.3 Image Output

After the main rendering loop exits, the Integrator’s Render() method generally calls the Film::WriteImage() method, which directs the film to do the processing necessary to generate the final image and store it in a file. This method takes a scale factor that is applied to the samples provided to the AddSplat() method. (See the end of Section 16.4.5 for further discussion of this scale factor’s use with the MLTIntegrator.)

<<Film Method Definitions>>+= 
void Film::WriteImage(Float splatScale) {
    <<Convert image to RGB and compute final pixel values>> 
    <<Write RGB image>> 
}

This method starts by allocating an array to store the final RGB pixel values. It then loops over all of the pixels in the image to fill in this array.

<<Convert image to RGB and compute final pixel values>>= 
std::unique_ptr<Float[]> rgb(new Float[3 * croppedPixelBounds.Area()]);
int offset = 0;
for (Point2i p : croppedPixelBounds) {
    <<Convert pixel XYZ color to RGB>> 
    <<Normalize pixel with weight sum>> 
    <<Add splat value at pixel>> 
    <<Scale pixel value by scale>> 
    ++offset;
}

Given information about the response characteristics of the display device being used, the pixel values can be converted from the device-independent XYZ tristimulus values to device-dependent RGB values. This conversion is another change of spectral basis, where the new basis is determined by the spectral response curves of the red, green, and blue elements of the display device. Here, the weights used to convert from XYZ to device RGB are based on the sRGB primaries; sRGB is a standardized color space that is supported by virtually all 2015-era displays and printers.

<<Convert pixel XYZ color to RGB>>= 
Pixel &pixel = GetPixel(p);
XYZToRGB(pixel.xyz, &rgb[3 * offset]);

As the RGB output values are being initialized, their final values from the pixel filtering equation are computed by dividing each pixel sample value by Pixel::filterWeightSum. This conversion can lead to RGB values where some components are negative; these are out-of-gamut colors that can’t be represented with the chosen display primaries. Various approaches have been suggested to deal with this issue, ranging from clamping negative components to 0 or offsetting all components to lie within the gamut to performing a global optimization based on all of the pixels in the image. Reconstructed pixels may also end up with negative values due to negative lobes in the reconstruction filter function. Color components are clamped to 0 here to handle both of these cases.

<<Normalize pixel with weight sum>>= 
Float filterWeightSum = pixel.filterWeightSum;
if (filterWeightSum != 0) {
    Float invWt = (Float)1 / filterWeightSum;
    rgb[3 * offset    ] = std::max((Float)0, rgb[3 * offset    ] * invWt);
    rgb[3 * offset + 1] = std::max((Float)0, rgb[3 * offset + 1] * invWt);
    rgb[3 * offset + 2] = std::max((Float)0, rgb[3 * offset + 2] * invWt);
}
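In isolation, the normalize-and-clamp step looks like this; NormalizePixel() is a hypothetical helper written for illustration, not a pbrt function:

```cpp
#include <algorithm>

// Divide the filtered RGB sum by the accumulated filter weight and clamp
// negative components (from out-of-gamut colors or negative filter lobes)
// to zero. Hypothetical helper for illustration.
void NormalizePixel(float rgb[3], float filterWeightSum) {
    if (filterWeightSum != 0) {
        float invWt = 1.f / filterWeightSum;
        for (int i = 0; i < 3; ++i)
            rgb[i] = std::max(0.f, rgb[i] * invWt);
    }
}
```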

It’s also necessary to add in the contributions of splatted values for this pixel to the final value.

<<Add splat value at pixel>>= 
Float splatRGB[3];
Float splatXYZ[3] = { pixel.splatXYZ[0], pixel.splatXYZ[1],
                      pixel.splatXYZ[2] };
XYZToRGB(splatXYZ, splatRGB);
rgb[3 * offset    ] += splatScale * splatRGB[0];
rgb[3 * offset + 1] += splatScale * splatRGB[1];
rgb[3 * offset + 2] += splatScale * splatRGB[2];

The final pixel value is scaled by a user-supplied factor (or by 1, if none was specified); this can be useful when writing images to 8-bit integer image formats to make the most of the limited dynamic range.

<<Scale pixel value by scale>>= 
rgb[3 * offset    ] *= scale;
rgb[3 * offset + 1] *= scale;
rgb[3 * offset + 2] *= scale;

<<Film Private Data>>+= 
const Float scale;

The WriteImage() function, defined in Section A.2, handles the details of writing the image to a file. If writing to an 8-bit integer format, it applies gamma correction to the floating-point pixel values according to the sRGB standard before converting them to integers. (See the “Further Reading” section at the end of Chapter 10 for more information about gamma correction.)

<<Write RGB image>>= 
::WriteImage(filename, &rgb[0], croppedPixelBounds, fullResolution);
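The gamma correction mentioned above follows the piecewise encoding specified by the sRGB standard: linear near black and a 1/2.4 power curve elsewhere. A sketch of an encoding function of this form (pbrt's own implementation is equivalent in shape):

```cpp
#include <cmath>

// sRGB "gamma" encoding applied to a linear value in [0, 1] before
// quantization to 8 bits, per the sRGB standard's piecewise definition.
float GammaCorrect(float value) {
    if (value <= 0.0031308f)
        return 12.92f * value;
    return 1.055f * std::pow(value, 1.f / 2.4f) - 0.055f;
}
```

Note that the encoding maps linear 0.5 to roughly 0.735, reflecting how the transfer function allocates more of the limited 8-bit range to darker values, where the eye is more sensitive.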