where $V[F]$ is its variance and $T[F]$ is the running time to compute its value. According to this metric, an estimator $F_1$ is more efficient than $F_2$ if it takes less time to produce the same variance, or if it produces less variance in the same amount of time. The next few sections discuss a number of techniques for improving the efficiency of Monte Carlo.

Russian roulette and splitting are two related techniques that can improve the efficiency of Monte Carlo estimates by increasing the likelihood that each sample will have a significant contribution to the result. Russian roulette addresses the problem of samples that are expensive to evaluate but make a small contribution to the final result, and splitting is a technique that makes it possible to place more samples in important dimensions of the integrand.

As an example to motivate Russian roulette, consider the problem of estimating the direct lighting integral, which gives reflected radiance at a point due to radiance from direct illumination from the light sources in the scene, $L_d$:

$$ L_o(p, \omega_o) = \int_{S^2} f(p, \omega_o, \omega_i)\, L_d(p, \omega_i)\, |\cos \theta_i|\, d\omega_i. $$

Assume that we have decided to take $N = 2$ samples from some distribution $p(\omega)$ to compute the estimator

$$ \frac{1}{2} \sum_{i=1}^{2} \frac{f(p, \omega_o, \omega_i)\, L_d(p, \omega_i)\, |\cos \theta_i|}{p(\omega_i)}. $$

Most of the computational expense of evaluating each term of the sum comes from tracing a shadow ray from the point $p$ to see whether the light source is occluded as seen from $p$.

For all of the directions where the integrand’s value is 0 because $f(p, \omega_o, \omega_i)$ is 0 for that direction, we should obviously skip the work of tracing the shadow ray, since tracing it won’t change the final value computed. Russian roulette makes it possible to also skip tracing rays when the integrand’s value is very low but not necessarily 0, while still computing the correct value on average. For example, we might want to avoid tracing rays when $f(p, \omega_o, \omega_i)$ is small or when $\omega_i$ is close to the horizon and thus $|\cos \theta_i|$ is small. These samples just can’t be ignored completely, however, since the estimator would then consistently underestimate the correct result.

To apply Russian roulette, we select some termination probability $q$. This value can be chosen in almost any manner; for example, it could be based on an estimate of the value of the integrand for the particular sample chosen, increasing as the integrand’s value becomes smaller. With probability $q$, the integrand is not evaluated for the particular sample, and some constant value $c$ is used in its place ($c = 0$ is often used). With probability $1 - q$, the integrand is still evaluated but is weighted by a term, $1/(1-q)$, that effectively accounts for all of the samples that were skipped. We have the new estimator

$$ F' = \begin{cases} \dfrac{F - qc}{1 - q} & \text{with probability } 1 - q \\[1ex] c & \text{otherwise.} \end{cases} $$

The expected value of the resulting estimator is the same as the expected value of the original estimator:

$$ E[F'] = (1 - q)\left(\frac{E[F] - qc}{1 - q}\right) + q c = E[F]. $$

Russian roulette never reduces variance. In fact, unless somehow $c = F$, it will always increase variance. However, it does improve efficiency if probabilities are chosen so that samples that are likely to make a small contribution to the final result are skipped.

One pitfall is that poorly
chosen Russian roulette weights can substantially increase variance.
Imagine applying Russian roulette to all of the camera rays with a
termination probability of $q = 0.99$: we’d only trace 1% of the camera rays,
weighting each of them by $1/0.01 = 100$. The resulting image would still
be “correct” in a strictly mathematical sense, although visually the result
would be terrible: mostly black pixels with a few very bright ones. One of
the exercises at the end of this chapter discusses this problem further
and describes a technique called *efficiency-optimized Russian
roulette* that tries to set Russian roulette weights in a way that
minimizes the increase in variance.

### 13.7.1 Splitting

While Russian roulette reduces the effort spent evaluating unimportant samples, splitting increases the number of samples taken in order to improve efficiency. Consider again the problem of computing reflection due only to direct illumination. Ignoring pixel filtering, this problem can be written as a double integral over the area of the pixel, $A$, and over the sphere of directions, $S^2$, at the visible points on surfaces at each pixel position $(x, y)$:

$$ \int_A \int_{S^2} L_d(x, y, \omega)\, d\omega\, dA, $$

where $L_d(x, y, \omega)$ denotes the exitant radiance at the object visible at the position $(x, y)$ on the image due to incident radiance from the direction $\omega$.

The natural way to estimate the integral is to generate $N$ samples and apply the Monte Carlo estimator, where each sample consists of an image position $(x, y)$ and a direction $\omega$ toward a light source. If there are many light sources in the scene, or if there is an area light casting soft shadows, tens or hundreds of samples may be needed to compute an image with an acceptable variance level. Unfortunately, each sample requires that two rays be traced through the scene: one to compute the first visible surface from position $(x, y)$ on the image plane and a shadow ray along $\omega$ to a light source.

The problem with this approach is that if $N = 100$ samples are taken to estimate this integral, then 200 rays will be traced: 100 camera rays and 100 shadow rays. Yet, 100 camera rays may be many more than are needed for good pixel antialiasing and thus may make relatively little contribution to variance reduction in the final result. Splitting addresses this problem by formalizing the approach of taking multiple samples in some of the dimensions of integration for each sample taken in other dimensions.

With splitting, the estimator for this integral can be written taking $N$ image samples and $M$ light samples per image sample:

$$ \frac{1}{N} \sum_{i=1}^{N} \frac{1}{M} \sum_{j=1}^{M} \frac{L_d(x_i, y_i, \omega_{i,j})}{p(x_i, y_i)\, p(\omega_{i,j})}. $$

Thus, we could take just 5 image samples but take 20 light samples per image sample, for a total of 105 rays traced, rather than 200, while still taking 100 area light samples in order to compute a high-quality soft shadow.

The purpose of the `Sampler::Request1DArray()` and
`Sampler::Request2DArray()` methods defined in
Section 7.2.2 can now be seen in this light:
they make it possible for users of `Sampler`s to apply splitting to
some dimensions of the integrand.