② Some types of cameras expose the film by sliding a
rectangular slit across it. This leads to interesting effects when
objects are moving in a direction different from that of the slit
(Glassner 1999; Stephenson 2007).
Furthermore, most digital cameras read
out pixel values from scanlines in succession over a period of a few
milliseconds; this leads to rolling shutter artifacts, which have
similar visual characteristics. Modify the way that time samples are
generated in one or more of the camera implementations in this chapter to
model such effects. Render images with moving objects that clearly show
the effect of accounting for this issue.
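One possible starting point is sketched below, assuming pbrt-style Float and
Lerp() and a CameraSample whose time member is a uniform sample in [0, 1):
instead of mapping the time sample uniformly over the whole shutter interval,
derive each ray's time from its raster-space y coordinate so that scanlines
are exposed in succession. The rowExposure parameter is a made-up knob, not
part of pbrt.

    // Hypothetical rolling-shutter time mapping; this could replace the
    // uniform Lerp(sample.time, shutterOpen, shutterClose) in a camera's
    // GenerateRay(). rowExposure is an assumed parameter giving the
    // fraction of the shutter interval over which a single row integrates.
    Float RollingShutterTime(Float yRaster, Float filmHeight, Float timeSample,
                             Float shutterOpen, Float shutterClose,
                             Float rowExposure = 0.1f) {
        // Rows begin their exposure in succession from top to bottom.
        Float rowStart = (yRaster / filmHeight) * (1 - rowExposure);
        // Jitter the sample's time within this row's short exposure window.
        Float t = rowStart + timeSample * rowExposure;
        return Lerp(t, shutterOpen, shutterClose);
    }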
② Write an application that loads images rendered by the
SphericalCamera and uses texture mapping to apply them to a sphere
centered at the eyepoint such that they can be viewed interactively. The
user should be able to freely change the viewing direction. If the
correct texture-mapping function is used for generating texture
coordinates on the sphere, the image generated by the application will
appear as if the viewer were at the camera’s location in the scene when it
was rendered, thus giving the user the ability to interactively look around
the scene.
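A sketch of one such texture-mapping function follows, assuming the image was
rendered with the camera's equirectangular (latitude-longitude) mapping; the
axis conventions are illustrative and would need to match the renderer's.
(For pbrt-v4's equal-area mapping, the corresponding equal-area
sphere-to-square mapping would be used instead.)

    #include <cmath>

    // Map a unit view direction (x, y, z) to equirectangular texture
    // coordinates (u, v) in [0, 1)^2.
    void DirectionToLatLongUV(float x, float y, float z, float *u, float *v) {
        const float Pi = 3.14159265358979f;
        float phi = std::atan2(y, x);              // azimuth in (-Pi, Pi]
        if (phi < 0) phi += 2 * Pi;                // remap to [0, 2*Pi)
        float cosTheta = std::fmax(-1.f, std::fmin(1.f, z));
        float theta = std::acos(cosTheta);         // angle from the +z pole
        *u = phi / (2 * Pi);                       // longitude -> u
        *v = theta / Pi;                           // colatitude -> v
    }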
② Focal stack rendering: A focal stack is a series
of images of a fixed scene where the camera is focused at a different
distance for each image. Hasinoff and Kutulakos (2011)
and Jacobs et al. (2012) introduced a number of
applications of focal stacks, including freeform depth of field, where the
user can specify arbitrary depths that are in focus, achieving effects not
possible with traditional optics. Render focal stacks with pbrt and
write an interactive tool to control focus effects with them.
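The heart of such a tool might look like the following sketch, in which each
slice of the stack is a pbrt rendering made with a different "focaldistance"
camera parameter and desiredDepth holds a user-painted target depth per
pixel. FocalSlice and the nearest-slice selection are illustrative
simplifications; a real tool would blend between the bracketing slices.

    #include <cmath>
    #include <vector>

    struct FocalSlice {
        std::vector<float> rgb;   // 3 floats per pixel, one rendered image
        float focusDistance;      // "focaldistance" used for this slice
    };

    std::vector<float> CompositeFreeformFocus(
            const std::vector<FocalSlice> &stack,
            const std::vector<float> &desiredDepth, int nPixels) {
        std::vector<float> out(3 * nPixels);
        for (int p = 0; p < nPixels; ++p) {
            // Choose the slice focused closest to the requested depth.
            int best = 0;
            for (int i = 1; i < (int)stack.size(); ++i)
                if (std::abs(stack[i].focusDistance - desiredDepth[p]) <
                    std::abs(stack[best].focusDistance - desiredDepth[p]))
                    best = i;
            for (int c = 0; c < 3; ++c)
                out[3 * p + c] = stack[best].rgb[3 * p + c];
        }
        return out;
    }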
③ Light field camera: Ng et al. (2005)
discussed the physical design and applications of a camera that captures
small images of the exit pupil across the film, rather than averaging the
radiance over the entire exit pupil at each pixel, as conventional cameras
do. Such a camera captures a representation of the light
field—the spatially and directionally varying distribution of radiance
arriving at the camera sensor. By capturing the light field, a number of
interesting operations are possible, including refocusing photographs after
they have been taken. Read Ng et al.’s paper and implement a Camera
in pbrt that captures the light field of a scene. Write a tool to allow
users to interactively refocus these light fields.
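As a simplified illustration of the refocusing step, the shift-and-add sketch
below assumes the captured light field has been arranged as a set of
sub-aperture images, each tagged with its (u, v) offset within the lens
aperture; Ng et al.'s method additionally uses sub-pixel interpolation rather
than the nearest-pixel fetch used here.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct SubApertureImage {
        std::vector<float> rgb;  // w*h RGB pixels seen from one lens position
        float u, v;              // offset of this view within the aperture
    };

    float Fetch(const std::vector<float> &img, int w, int h,
                int x, int y, int c) {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return img[3 * (y * w + x) + c];
    }

    // shift selects the synthetic focus depth; shift = 0 reproduces the
    // focus of the original capture.
    std::vector<float> Refocus(const std::vector<SubApertureImage> &views,
                               int w, int h, float shift) {
        std::vector<float> out(3 * w * h, 0.f);
        for (const SubApertureImage &s : views)
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                    for (int c = 0; c < 3; ++c)
                        // Translate each view by its pupil offset, scaled
                        // by the refocus parameter, then average.
                        out[3 * (y * w + x) + c] +=
                            Fetch(s.rgb, w, h,
                                  (int)std::lround(x + shift * s.u),
                                  (int)std::lround(y + shift * s.v), c);
        for (float &f : out) f /= views.size();
        return out;
    }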
③ The Cameras in this chapter place the film so that it is
centered on and perpendicular to the optical axis. While this is the
most common configuration of actual cameras, interesting effects can be
achieved by adjusting the film’s placement with respect to the lens system.
For example, the plane of focus in the current implementation is always
perpendicular to the optical axis; if the film plane (or the lens system)
is tilted so that the film is no longer perpendicular to the optical axis,
then the plane of focus tilts as well. (This
can be useful for landscape photography, for example, where aligning the
plane of focus with the ground plane allows greater depth of field even
with larger apertures.) Alternatively, the film plane can be shifted so
that it is not centered on the optical axis; this shift can be used to keep
the plane of focus aligned with a very tall object, for example.
Modify the PerspectiveCamera to
allow one or both of these adjustments and render images showing the
result. (You may find Kensler’s
(2021) chapter useful.)
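One possible shape for the modification, sketched with pbrt-v4-style types:
adjust the camera-space point computed from the film sample in
PerspectiveCamera::GenerateRay() before the ray direction is derived from it.
filmShift and filmTiltAngle are hypothetical parameters, not existing pbrt
ones.

    // Shift the film off the optical axis and/or tilt it out of the
    // perpendicular plane. filmTiltAngle is in degrees, following pbrt's
    // rotation convention.
    Point3f AdjustFilmPoint(Point3f pCamera, Vector2f filmShift,
                            Float filmTiltAngle) {
        // Shift: slide the film point parallel to the film plane.
        pCamera.x += filmShift.x;
        pCamera.y += filmShift.y;
        // Tilt: rotate about the x axis; with a thin lens, the plane of
        // focus tilts correspondingly (the Scheimpflug principle).
        return RotateX(filmTiltAngle)(pCamera);
    }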
② The clamping approach used to suppress outlier sample
values in the RGBFilm and GBufferFilm is a
heavy-handed solution that can cause a significant amount of energy loss in
the image. (Consider, for example, pixels where the sun is directly
visible: the radiance along rays in those pixels may be extremely high,
yet it represents legitimate signal rather than spiky noise and should
not be clamped.)
Implement a more principled solution to this problem such as the technique
of Zirr et al. (2018). Render images with your
implementation and pbrt’s current approach and compare the results.
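For orientation, the sketch below shows the buffer-assignment half of one
such technique, loosely following the cascaded buffers of Zirr et al.
(2018): rather than clamping, each sample is split between the two buffers
whose brightness levels bracket its luminance. The reliability-based
reweighting of the buffers at the end of rendering, which is where the
actual outlier suppression happens, is omitted here, and the constants are
assumptions.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Buffer i collects contributions with luminance near cascadeBase * 2^i.
    constexpr int nCascades = 8;
    constexpr float cascadeBase = 1.f;

    void AddToCascade(std::vector<float> cascades[nCascades], int pixel,
                      float luminance) {
        // Fractional cascade level of this sample in log2 space.
        float level = std::log2(std::max(luminance / cascadeBase, 1.f));
        int lo = std::min((int)level, nCascades - 2);
        float t = std::min(level - lo, 1.f);
        // The two parts sum to the original luminance, so no energy is
        // lost during accumulation.
        cascades[lo][pixel] += luminance * (1 - t);
        cascades[lo + 1][pixel] += luminance * t;
    }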
② Investigate the sources of noise in camera sensors and
mathematical models to simulate them. Then, modify the PixelSensor
class to model the effect of noise. In addition to shot noise, which
depends on the number of photons reaching each pixel, you may also want to
model factors like read noise and dark noise, which are independent of the
number of photons. Render images that exhibit noise and show how the
effects of the different noise types change as the exposure time varies.
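A minimal noise model along these lines is sketched below; the conversion
gain and noise parameters are made-up stand-ins for measured sensor
characteristics.

    #include <algorithm>
    #include <random>

    // value: ideal (noise-free) pixel value; exposureTime in seconds.
    float AddSensorNoise(float value, float exposureTime, std::mt19937 &rng) {
        const float gain = 1000.f;         // electrons per unit pixel value
        const float readNoiseSigma = 5.f;  // electrons; exposure-independent
        const float darkCurrent = 10.f;    // electrons per second

        // Photon arrivals and dark-current electrons are Poisson
        // distributed, giving shot noise and dark noise.
        float meanElectrons =
            std::max(value, 0.f) * gain + darkCurrent * exposureTime;
        std::poisson_distribution<long> shot(std::max(meanElectrons, 1e-4f));
        // Read noise is well approximated as additive Gaussian.
        std::normal_distribution<float> read(0.f, readNoiseSigma);
        float electrons = float(shot(rng)) + read(rng);
        return std::max(electrons, 0.f) / gain;
    }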
② Because they are based on floating-point addition, which
is not associative, the AddSplat() methods implemented in this
chapter do not live up to pbrt’s goal of producing deterministic output:
if different threads add splats to the same pixel in a different order over
multiple runs of pbrt, the final image may differ. An alternative
implementation might allocate a separate buffer for each thread’s splats
and then sum the buffers at the end of rendering, which would be
deterministic but would incur a memory cost proportional to the number of
threads. Either implement that approach or come up with another one to
address this issue and implement it in pbrt. Measure the memory and performance
overhead of your approach as well as how often the current implementation
is non-deterministic. Is the current implementation defensible?
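A sketch of the per-thread buffer approach follows; the names are
illustrative rather than pbrt's actual interfaces. Because each thread
writes only its own buffer and the final summation visits the buffers in a
fixed order, the floating-point additions happen in the same order on every
run.

    #include <vector>

    struct SplatBuffers {
        int nPixels;
        std::vector<std::vector<double>> perThread;  // [thread][3 * pixel]

        SplatBuffers(int nThreads, int nPixels)
            : nPixels(nPixels),
              perThread(nThreads, std::vector<double>(3 * nPixels, 0.)) {}

        // Called from worker threads; no synchronization is needed since
        // each thread touches only its own buffer.
        void AddSplat(int thread, int pixel, const float rgb[3]) {
            for (int c = 0; c < 3; ++c)
                perThread[thread][3 * pixel + c] += rgb[c];
        }

        // Called once at the end of rendering; the loop order is fixed,
        // so the result is deterministic regardless of thread timing.
        std::vector<double> Sum() const {
            std::vector<double> total(3 * nPixels, 0.);
            for (const auto &buf : perThread)
                for (int i = 0; i < 3 * nPixels; ++i)
                    total[i] += buf[i];
            return total;
        }
    };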
③ Image-based rendering is the general name for a set of
techniques that use one or more images of a scene to synthesize new images
from viewpoints different from the original ones. One such approach is
light field rendering, where images captured from a densely spaced set of
camera positions are used, as described by Levoy and Hanrahan (1996) and
Gortler et al. (1996).
Read these two papers on light fields, and modify pbrt to directly
generate light fields of scenes, without requiring that the renderer be run
multiple times, once for each camera position. It will probably be
necessary to write a specialized Camera, Sampler, and
Film to do this. Also, write an interactive light field viewer that
loads light fields generated by your implementation and that generates new views
of the scene.
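For the ray-generation side, one possibility is the two-plane
parameterization from those papers: the specialized Camera would map each
sample to a position (s, t) on a camera plane and a point (u, v) on a second
plane and trace the ray between them. The plane placement and extents in the
sketch below are assumptions.

    #include <cmath>

    struct LFRay { float ox, oy, oz, dx, dy, dz; };

    // (s, t) in [0,1]^2 indexes the camera plane at z = 0; (u, v) in
    // [0,1]^2 indexes a parallel plane at z = planeDist.
    LFRay LightFieldRay(float s, float t, float u, float v,
                        float stExtent, float uvExtent, float planeDist) {
        float ox = (s - 0.5f) * stExtent, oy = (t - 0.5f) * stExtent;
        float px = (u - 0.5f) * uvExtent, py = (v - 0.5f) * uvExtent;
        float dx = px - ox, dy = py - oy, dz = planeDist;
        float len = std::sqrt(dx * dx + dy * dy + dz * dz);
        return {ox, oy, 0.f, dx / len, dy / len, dz / len};
    }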