② It’s possible to implement a specialized version of
ScrambledRadicalInverse() for base 2, along the lines of the
implementation in RadicalInverse(). Determine how to map the random
digit permutation to a single bitwise operation and implement this
approach. Compare the values computed to those generated by the current
implementation to ensure your method is correct and measure how much faster
yours is by writing a small benchmark program.
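For reference: in base 2, a random permutation of the digits {0, 1} either preserves or flips a bit, so applying an independent permutation to every digit collapses into a single XOR with a 32-bit scramble word. A minimal sketch follows (the function name is illustrative; whether it matches ScrambledRadicalInverse() bit for bit depends on how that implementation derives its digit permutations, which is exactly what the comparison in this exercise should verify):

    #include <algorithm>
    #include <cstdint>

    // Base-2 scrambled radical inverse: reverse the bits of a, apply all
    // of the digit permutations at once with one XOR, and map to [0, 1).
    inline float ScrambledRadicalInverse2(uint32_t a, uint32_t scramble) {
        // Reverse bits, mirroring the base-2 digits about the radix point.
        a = (a << 16) | (a >> 16);
        a = ((a & 0x00ff00ffu) << 8) | ((a & 0xff00ff00u) >> 8);
        a = ((a & 0x0f0f0f0fu) << 4) | ((a & 0xf0f0f0f0u) >> 4);
        a = ((a & 0x33333333u) << 2) | ((a & 0xccccccccu) >> 2);
        a = ((a & 0x55555555u) << 1) | ((a & 0xaaaaaaaau) >> 1);
        a ^= scramble;  // the entire digit scramble, as one bitwise op
        // Scale by 2^-32, clamping to the largest float below 1.
        return std::min(a * 0x1p-32f, 0x1.fffffep-1f);
    }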
② Currently, the third through fifth dimensions of each
sample vector are consumed for time and lens samples, even though not all
scenes need these sample values. Because lower dimensions in the sample
vector are often better distributed than higher ones, this can cause an
unnecessary reduction in image quality.
Modify pbrt so that the camera can report its sample requirements and
then use this information when samples are requested to initialize
CameraSamples. Don’t forget to update the value of
GlobalSampler::arrayStartDim. Render images with the
DirectLightingIntegrator and compare results to the current
implementation. Do you see an improvement? How do results differ with
different samplers? How do you explain any differences you see across
samplers?
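One possible shape for the interface, with hypothetical names (nothing below exists in pbrt as shipped, and the exercise leaves the actual design to you):

    // Hypothetical report of which optional sample dimensions a camera
    // actually consumes; e.g., a camera with a zero-size aperture and a
    // zero-length shutter needs neither.
    struct CameraSampleNeeds {
        bool needsTime = false;
        bool needsLens = false;
    };

    // To be added inside the existing Camera base class: a conservative
    // default that assumes both dimensions are consumed. Subclasses
    // override it so that the Sampler can skip unused dimensions when
    // initializing CameraSamples, and GlobalSampler::arrayStartDim can
    // shrink to match.
    virtual CameraSampleNeeds SampleNeeds() const { return {true, true}; }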
② Implement the improved multi-jittered sampling method
introduced by Kensler (2013) as a new Sampler in pbrt. Compare
image quality and rendering time to rendering with the
StratifiedSampler, the HaltonSampler, and the
SobolSampler.
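For orientation, here is a sketch of the core construction from Kensler’s paper, using explicit swaps rather than his hash-based permutations (names are illustrative; a real Sampler would also need to supply 1D samples and sample arrays):

    #include <random>
    #include <utility>
    #include <vector>

    struct Point2f { float x, y; };

    // Correlated multi-jittering for an m x n pattern, after Kensler (2013).
    std::vector<Point2f> CorrelatedMultiJitter(int m, int n,
                                               std::mt19937 &rng) {
        std::uniform_real_distribution<float> u(0.f, 1.f);
        std::vector<Point2f> pts(m * n);
        // Canonical arrangement: sample (i, j) is jittered within its 2D
        // stratum and within its sub-strata in each 1D projection.
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < m; ++i) {
                pts[j * m + i].x = (i + (j + u(rng)) / n) / m;
                pts[j * m + i].y = (j + (i + u(rng)) / m) / n;
            }
        // Correlated shuffles: the same swap is applied to every sample in
        // the two affected rows (for x) or columns (for y), preserving
        // stratification in 2D and in both 1D projections.
        for (int j = 0; j < n - 1; ++j) {
            int k = j + int(u(rng) * (n - j));
            for (int i = 0; i < m; ++i)
                std::swap(pts[j * m + i].x, pts[k * m + i].x);
        }
        for (int i = 0; i < m - 1; ++i) {
            int k = i + int(u(rng) * (m - i));
            for (int j = 0; j < n; ++j)
                std::swap(pts[j * m + i].y, pts[j * m + k].y);
        }
        return pts;
    }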
② Keller (2004) and Dammertz and Keller (2008b) described the
application of rank-1 lattices to image synthesis. Rank-1 lattices
are another way of efficiently generating high-quality low-discrepancy
sequences of sample points. Read their papers and implement a
Sampler based on this approach. Compare results to the other
samplers in pbrt.
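For orientation, the basic construction itself is simple; much of the papers’ content concerns choosing good generator vectors and randomizing the resulting points. A minimal 2D sketch:

    #include <cmath>

    struct Point2f { float x, y; };

    // The i-th of n rank-1 lattice points with generator vector (1, a):
    // x_i = frac(i * (1, a) / n). For example, taking n to be a Fibonacci
    // number F_k and a = F_{k-1} gives the classic Fibonacci lattice.
    Point2f Rank1Lattice(int i, int n, int a) {
        float f0 = float(i) / float(n);
        float f1 = float(i) * float(a) / float(n);
        return { f0 - std::floor(f0), f1 - std::floor(f1) };
    }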
② With pbrt’s current FilmTile implementation, the
pixel values in an image may change by small amounts if an image is
rerendered, due to threads finishing tiles in different orders over
subsequent runs. For example, a pixel whose final value came from
samples a, b, and c from three different image sampling tiles may
sometimes have its value computed as (a + b) + c and sometimes as
a + (b + c). Due to floating-point round-off, these two
values may be different. While these differences aren’t normally a
problem, they wreak havoc with automated testing scripts that might want to
verify that a believed-to-be-innocuous change to the system didn’t actually
cause any differences in rendered images.
Modify Film::MergeFilmTile() so that it merges tiles in a consistent
order, ensuring that final pixel values don’t suffer from this inconsistency.
(For example, your implementation might buffer up FilmTiles and only
merge a tile when all neighboring tiles above and to its left have already
been merged.) Ensure that your implementation doesn’t introduce any
meaningful performance regression. Measure the additional memory usage due
to longer-lived FilmTiles; how does it relate to total memory usage?
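One way to structure the buffering is sketched below with hypothetical names (pbrt’s Film::MergeFilmTile() takes a std::unique_ptr<FilmTile>). It merges in strict row-major tile order, a stronger condition than the neighbor-based one suggested above, so it may buffer more tiles than strictly necessary, but it is simple to reason about:

    #include <map>
    #include <memory>
    #include <mutex>

    // Hypothetical wrapper that serializes merging into a fixed
    // row-major order, independent of which thread finishes first.
    class OrderedTileMerger {
      public:
        OrderedTileMerger(Film *film) : film(film) {}
        // tileIndex is the tile's row-major index over the image.
        void MergeFilmTile(int tileIndex, std::unique_ptr<FilmTile> tile) {
            std::lock_guard<std::mutex> lock(mutex);
            pending[tileIndex] = std::move(tile);
            // Merge any run of consecutively numbered tiles now ready.
            while (!pending.empty() &&
                   pending.begin()->first == nextToMerge) {
                film->MergeFilmTile(std::move(pending.begin()->second));
                pending.erase(pending.begin());
                ++nextToMerge;
            }
        }
      private:
        Film *film;
        std::mutex mutex;
        int nextToMerge = 0;
        std::map<int, std::unique_ptr<FilmTile>> pending;
    };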
② As mentioned in Section 7.9, the
Film::AddSplat() method doesn’t use a filter function but instead just
splats the sample to the single pixel it’s closest to, effectively using a
box filter. In order to apply an arbitrary filter, the filter must be
normalized so that it integrates to one over its domain; this constraint
isn’t currently required of Filters by pbrt. Modify the computation
of filterTable in the Film constructor so that the tabulated
function is normalized. (When computing the normalization factor, don’t
forget that the table stores only one-quarter of the function’s extent.)
Then modify the implementation of the AddSplat() method to use this
filter. Investigate the execution time and image quality differences that
result.
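A minimal sketch of the normalization, assuming the table is built as in the current Film constructor (filterTableWidth x filterTableWidth point samples covering one quadrant of the filter):

    // Estimate the filter's integral from the tabulated values. Each
    // entry represents a cell of size dx * dy, and the table covers only
    // one quadrant of the (symmetric) filter, hence the factor of 4.
    Float dx = filter->radius.x / filterTableWidth;
    Float dy = filter->radius.y / filterTableWidth;
    Float integral = 0;
    for (int i = 0; i < filterTableWidth * filterTableWidth; ++i)
        integral += filterTable[i];
    integral *= 4 * dx * dy;
    // Rescale so the tabulated filter integrates to one over its domain.
    for (int i = 0; i < filterTableWidth * filterTableWidth; ++i)
        filterTable[i] /= integral;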
① Modify pbrt to create images where
the value stored in the Film for each camera ray is proportional to
the time taken to compute the ray’s radiance. (A 1-pixel-wide box filter
is probably the most useful filter for this exercise.) Render images of a
variety of scenes with this technique. What insight about the system’s
performance do the resulting images bring? You may need to scale pixel
values or take their logarithm to see meaningful variation when you view them.
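A sketch of the measurement, placed where the integrator evaluates each camera ray (the surrounding variable names follow SamplerIntegrator::Render(), but treat the fragment as illustrative):

    #include <chrono>

    // Time the radiance computation for this camera ray...
    auto start = std::chrono::high_resolution_clock::now();
    Spectrum L = Li(ray, scene, *tileSampler, arena);
    Float usec = std::chrono::duration<Float, std::micro>(
        std::chrono::high_resolution_clock::now() - start).count();
    // ...and store the elapsed time (here in microseconds) in place of
    // the radiance; with a 1-pixel-wide box filter, each pixel then
    // simply sums the times of its rays.
    filmTile->AddSample(cameraSample.pFilm, Spectrum(usec), rayWeight);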
② One of the advantages of the linearity assumption in
radiometry is that the final image of a scene is the same as the sum of
individual images that account for each light source’s contribution
(assuming a floating-point image file format is used that doesn’t clip
pixel radiance values). An implication of this property is that if a
renderer creates a separate image for each light source, it is possible to
write interactive lighting design tools that quickly show the effect of
scaling the contributions of individual lights in the scene without
rerendering it from scratch. Instead, a light’s
individual image can be scaled and the final image regenerated by summing
all of the light images again. (This technique was first applied for opera
lighting design by Dorsey, Sillion, and Greenberg (1991).) Modify
pbrt to output a separate image for each of the lights in the scene, and
write an interactive lighting design tool that uses them in this manner.
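In equation form: if I_j is the image computed with only the jth light source and s_j is that light’s user-chosen scale, the relit image is the per-pixel weighted sum

    I = s_1 I_1 + s_2 I_2 + ... + s_n I_n,

so each lighting change costs one pass of scaled additions over stored images rather than a rerender.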
③ Mitchell and Netravali (1988) noted that there is a family of
reconstruction filters that use both the value of a function and its
derivative at the point to do substantially better reconstruction than if
just the value of the function is known. Furthermore, they report that
they have derived closed-form expressions for the screen space derivatives
of Lambertian and Phong reflection models, although they do not include
these expressions in their paper. Investigate derivative-based
reconstruction, and extend pbrt to support this technique. Because it
will likely be difficult to derive expressions for the screen space
derivatives for general shapes and BSDF models, investigate approximations
based on finite differencing. Techniques built on the ideas behind the ray
differentials of Section 10.1 may be fruitful
for this effort.
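For the finite-differencing fallback, the screen-space derivative of radiance at a pixel can be estimated directly from neighboring samples; for instance, with a one-pixel offset in x,

    dL/dx ≈ L(x + 1, y) - L(x, y),

and similarly in y. Note that with Monte Carlo estimates of L, such differences are themselves noisy, which is part of what makes this exercise challenging.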
③ Image-based rendering is the general name for a set of techniques
that use one or more images of a scene to synthesize new images from
viewpoints different from the original ones. One such approach is
light field rendering, where images captured from a densely spaced set of
camera positions are used (Levoy and Hanrahan 1996; Gortler et al. 1996).
Read these two papers on
light fields, and modify pbrt to directly generate light fields of scenes,
without requiring that the renderer be run multiple times, once for each
camera position. It will probably be necessary to write a specialized
Camera, Sampler, and Film to do this. Also, write an
interactive light field viewer that loads light fields generated by your
implementation and generates new views of the scene.
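A sketch of ray generation under the two-plane parameterization used in both papers, where (u, v) selects a point on the camera plane and (s, t) a point on a parallel focal plane (all member names and plane bounds here are illustrative):

    // Generate the ray for light field coordinates (u, v, s, t) in [0, 1)^4.
    // uvBounds and stBounds give the extents of the two planes; the
    // (u, v) plane sits at z = 0 and the (s, t) plane at z = planeZ.
    Ray GenerateLightFieldRay(const Point2f &uv, const Point2f &st) const {
        Point3f pLens(Lerp(uv.x, uvBounds.pMin.x, uvBounds.pMax.x),
                      Lerp(uv.y, uvBounds.pMin.y, uvBounds.pMax.y), 0);
        Point3f pFocal(Lerp(st.x, stBounds.pMin.x, stBounds.pMax.x),
                       Lerp(st.y, stBounds.pMin.y, stBounds.pMax.y), planeZ);
        return Ray(pLens, Normalize(pFocal - pLens));
    }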
③ Rather than just storing spectral values
in an image, it’s often useful to store additional
information about the objects in the scene that were visible at each pixel.
See, for example, the SIGGRAPH papers by Perlin (1985a) and Saito and Takahashi (1990). If the 3D position, surface normal,
and BRDF of the object at each pixel are stored, then the scene
can be efficiently rerendered after moving the light
sources (Gershbein and Hanrahan 2000). Alternatively, if each sample stores
information about all of the objects visible along its camera ray, rather
than just the first one, new images from shifted viewpoints can be
rerendered (Shade et al. 1998). Investigate representations for deep
frame buffers and algorithms that use them; extend pbrt to support
the creation of images like these, and develop tools that operate on them.
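As a starting point, one possible per-sample record for such a deep buffer might look like the following (fields are illustrative; a practical representation would store multiple samples per pixel and a compact BSDF encoding):

    // Hypothetical deep frame buffer record for one visible point.
    struct DeepSample {
        Point3f p;        // 3D position seen through the pixel
        Normal3f n;       // shading normal at that point
        Spectrum albedo;  // stand-in for the BRDF (e.g., diffuse reflectance)
        Float depth;      // distance along the camera ray
        Spectrum L;       // radiance, as in a conventional image
    };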
② Implement a median filter for image reconstruction: for
each pixel, store the median of all of the samples within a filter extent
around it. This task is complicated by the fact that filters in the
current Film implementation must be linear—the value of the
filter function is determined solely by the position of the sample with
respect to the pixel position, and the value of the sample has no impact on
the value of the filter function. Because the implementation assumes that
filters are linear, and because it doesn’t store sample values after adding
their contribution to the image, implementing the median filter will
require generalizing the Film or developing a new Film
implementation.
Render images using integrators like the PathIntegrator that produce
objectionable image noise with regular image filters. How successful is
the median filter at reducing noise? Are there visual shortcomings to
using the median filter? Can you implement this approach without needing
to store all of the image sample values before computing final pixel
values?
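Once a Film variant retains the samples in each pixel’s filter extent, the median itself is cheap to compute; a per-channel sketch (taking the median independently in each channel is one of several reasonable definitions of a “median” color):

    #include <algorithm>
    #include <vector>

    // Median of one channel's sample values via partial sort; linear
    // expected time, but note that it reorders the input.
    Float MedianOfChannel(std::vector<Float> &values) {
        size_t mid = values.size() / 2;
        std::nth_element(values.begin(), values.begin() + mid, values.end());
        return values[mid];
    }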
② An alternative to the median filter is to discard the
sample with the lowest contribution and the sample with the largest
contribution in a pixel’s filter region. This approach uses more of the
information gathered during sampling.
Implement this approach and compare
the results to the median filter.
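A per-pixel sketch of this trimmed estimate, again assuming the sample contributions have been retained and here reduced to scalar values for simplicity (at least three samples are assumed):

    #include <algorithm>
    #include <numeric>
    #include <vector>

    // Discard the smallest and largest contributions; average the rest.
    Float TrimmedMean(std::vector<Float> values) {
        std::sort(values.begin(), values.end());
        values.erase(values.begin());  // drop the lowest contribution
        values.pop_back();             // drop the highest contribution
        return std::accumulate(values.begin(), values.end(), Float(0)) /
               values.size();
    }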
③ Implement the discontinuity buffer, as described by Keller
and collaborators (Keller 1998; Wald et al. 2002). You will probably need
to modify the interface to the Integrators so that they can
separately return direct and indirect illumination contributions and then
pass these separately to the Film. Render scenes with significant
indirect illumination to demonstrate its effectiveness.
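The key ingredient is a per-pixel geometric similarity test that decides whether neighboring pixels may share smoothed indirect illumination; a sketch, with thresholds that are purely illustrative:

    #include <cmath>

    // Hypothetical per-pixel geometry record and discontinuity test:
    // indirect illumination is averaged only across pixels whose
    // surfaces plausibly belong to the same smooth region.
    struct PixelGeometry {
        Normal3f n;   // surface normal at the primary hit
        Float depth;  // hit distance along the camera ray
    };

    bool SimilarGeometry(const PixelGeometry &a, const PixelGeometry &b) {
        return Dot(a.n, b.n) > 0.95f &&
               std::abs(a.depth - b.depth) < 0.05f * a.depth;
    }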
③ Implement one of the recent adaptive sampling and
reconstruction techniques such as the ones described by Hachisuka et al. (2008a), Egan et al. (2009), Overbeck et al. (2009), or Moon et al. (2014). How much more efficiently do they generate images of equal
quality than just uniformly sampling at a high rate? How do they affect
running time for simple scenes where adaptive sampling isn’t needed?
③ Investigate current research in tone reproduction algorithms
(see, e.g., Reinhard et al. 2010; 2012), and implement one or more
of these algorithms. Use your implementation with a number of scenes
rendered by pbrt, and discuss the improvements you see compared to viewing
the images without tone reproduction.
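For a concrete starting point, here is a sketch of the simple global operator from Reinhard et al.’s photographic tone reproduction work (the literature develops considerably more sophisticated local operators):

    #include <cmath>
    #include <vector>

    // Photographic tone mapping, global variant: scale luminances so
    // their log average maps to the key value, then compress with
    // L / (1 + L) so that all results land in [0, 1).
    void ToneMapLuminances(std::vector<float> &ys, float key = 0.18f) {
        double logSum = 0;
        for (float y : ys) logSum += std::log(1e-4 + y);  // avoid log(0)
        float logAvg = float(std::exp(logSum / ys.size()));
        for (float &y : ys) {
            float L = key / logAvg * y;
            y = L / (1 + L);
        }
    }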