11.3 Media

Implementations of the Medium base class provide various representations of volumetric scattering properties in a region of space. In a complex scene, there may be multiple Medium instances, each representing a different scattering effect. For example, an outdoor lake scene might have one Medium to model atmospheric scattering, another to model mist rising from the lake, and a third to model particles suspended in the water of the lake.

<<Medium Declarations>>= 
class Medium {
  public:
    <<Medium Interface>> 
       virtual ~Medium() { }
       virtual Spectrum Tr(const Ray &ray, Sampler &sampler) const = 0;
       virtual Spectrum Sample(const Ray &ray, Sampler &sampler, MemoryArena &arena,
                               MediumInteraction *mi) const = 0;
};

A key operation that Medium implementations must perform is to compute the beam transmittance, Equation (11.1), along a given ray passed to its Tr() method. Specifically, the method should return an estimate of the transmittance on the interval between the ray origin and the point at parametric distance Ray::tMax along the ray.

Medium-aware Integrators using this interface are responsible for accounting for interactions with surfaces in the scene as well as the spatial extent of the Medium; hence we will assume that the ray passed to the Tr() method is both unoccluded and fully contained within the current Medium. Some implementations of this method use Monte Carlo integration to compute the transmittance; a Sampler is provided for this case. (See Section 15.2.)

<<Medium Interface>>= 
virtual Spectrum Tr(const Ray &ray, Sampler &sampler) const = 0;
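
As an illustration of the calling convention (a sketch only, not code from pbrt's integrators), suppose an integrator has already established that a shadow ray segment is unoccluded and lies entirely within a single medium, as the interface requires; it could then attenuate the light's radiance as follows. EstimateAttenuatedRadiance() and its parameters are hypothetical names.

    // Sketch only: assumes shadowRay runs from the shading point toward the light,
    // with tMax marking the light's position, and that the segment is unoccluded
    // and entirely inside *medium.
    Spectrum EstimateAttenuatedRadiance(const Medium *medium, const Ray &shadowRay,
                                        Sampler &sampler, const Spectrum &lightRadiance) {
        Spectrum Tr(1.f);
        if (medium)   // a nullptr medium denotes a vacuum with no attenuation
            Tr = medium->Tr(shadowRay, sampler);
        return Tr * lightRadiance;
    }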

The spatial distribution and extent of media in a scene is defined by associating Medium instances with the camera, lights, and primitives in the scene. For example, Cameras store a Medium pointer that gives the medium for rays leaving the camera, and Lights similarly record the medium for rays leaving the light.

In pbrt, the boundary between two different types of scattering media is always represented by the surface of a GeometricPrimitive. Rather than storing a single Medium pointer like lights and cameras, GeometricPrimitives hold a MediumInterface, which in turn holds pointers to one Medium for the interior of the primitive and one for the exterior. For all of these cases, a nullptr can be used to indicate a vacuum (where no volumetric scattering occurs).
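
As a minimal sketch (not pbrt's actual Camera implementation) of what storing such a pointer looks like, a camera could simply copy its Medium pointer into each ray it generates so that downstream code knows which medium the ray starts in; SketchCamera and TagRay() are hypothetical names.

    // Hypothetical camera stand-in: records the medium the camera sits in and
    // copies it into each generated ray.
    struct SketchCamera {
        const Medium *medium;   // nullptr means the camera is in a vacuum
        void TagRay(Ray *ray) const { ray->medium = medium; }
    };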

<<MediumInterface Declarations>>= 
struct MediumInterface {
    <<MediumInterface Public Methods>> 
       MediumInterface(const Medium *medium) : inside(medium), outside(medium) { }
       MediumInterface(const Medium *inside, const Medium *outside)
           : inside(inside), outside(outside) { }
       bool IsMediumTransition() const { return inside != outside; }
    const Medium *inside, *outside;
};

This approach to specifying the extent of participating media does allow the user to specify impossible or inconsistent configurations. For example, a primitive could be specified as having one medium outside of it, and the camera could be specified as being in a different medium without a MediumInterface between the camera and the surface of the primitive. In this case, a ray leaving the primitive toward the camera would be treated as being in a different medium from a ray leaving the camera toward the primitive. In turn, light transport algorithms would be unable to compute consistent results. For pbrt’s purposes, we think it’s reasonable to expect that the user will be able to specify a consistent configuration of media in the scene and that the added complexity of code to check this isn’t worthwhile.

A MediumInterface can be initialized with either one or two Medium pointers. If only one is provided, then it represents an interface with the same medium on both sides.

<<MediumInterface Public Methods>>= 
MediumInterface(const Medium *medium) : inside(medium), outside(medium) { }
MediumInterface(const Medium *inside, const Medium *outside)
    : inside(inside), outside(outside) { }

The function MediumInterface::IsMediumTransition() checks whether a particular MediumInterface instance marks a transition between two distinct media.

<<MediumInterface Public Methods>>+= 
bool IsMediumTransition() const { return inside != outside; }
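
A few hypothetical configurations show how these pieces fit together; water, air, and fog stand in for Medium instances created elsewhere, and the function itself is purely illustrative.

    // Sketch: possible MediumInterface configurations for the lake scene described
    // earlier. Only the first marks a true change of medium across the surface.
    void MediumInterfaceExamples(const Medium *water, const Medium *air,
                                 const Medium *fog) {
        MediumInterface lakeSurface(water, air);  // water inside the lake geometry, air outside
        MediumInterface mistBoundary(fog);        // same medium on both sides
        MediumInterface vacuumBoundary(nullptr);  // no participating medium at all
        bool t1 = lakeSurface.IsMediumTransition();   // true (assuming water != air)
        bool t2 = mistBoundary.IsMediumTransition();  // false
    }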

We can now provide a missing piece in the implementation of the GeometricPrimitive::Intersect() method. The code in this fragment is executed whenever an intersection with a geometric primitive has been found; its job is to set the medium interface at the intersection point.

Instead of simply copying the value of the GeometricPrimitive::mediumInterface field, we will follow a slightly different approach and only use this information when this MediumInterface specifies a proper transition between participating media. Otherwise, the Ray::medium field takes precedence.

Setting the SurfaceInteraction’s mediumInterface field in this way greatly simplifies the specification of scenes containing media: in particular, it is not necessary to tag every scene surface that is in contact with a medium. Instead, only non-opaque surfaces that have different media on each side require an explicit medium reference in their GeometricPrimitive::mediumInterface field. In the simplest case where a scene containing opaque objects is filled with a participating medium (e.g., haze), it is enough to tag the camera and light sources.

<<Initialize SurfaceInteraction::mediumInterface after Shape intersection>>= 
if (mediumInterface.IsMediumTransition())
    isect->mediumInterface = mediumInterface;
else
    isect->mediumInterface = MediumInterface(r.medium);

Primitives associated with shapes that represent medium boundaries generally have a Material associated with them. For example, the surface of a lake might use an instance of GlassMaterial to describe scattering at the lake surface, which also acts as the boundary between the rising mist's Medium and the lake water's Medium. However, sometimes we only need a shape for the boundary surface it provides to delimit a participating medium's extent, and we don't want to see the surface itself. For example, the medium representing a cloud might be bounded by a box made of triangles, where the triangles are only there to delimit the cloud's extent and shouldn't otherwise affect light passing through them.

Such a surface, which disappears and has no effect on ray paths, could in principle be described perfectly accurately by a BTDF representing perfect specular transmission with the same index of refraction on both sides. However, dealing with such surfaces places an extra burden on the Integrators (not all of which handle this type of specular light transport well). Therefore, pbrt allows such surfaces to have a Material * that is nullptr, indicating that they do not affect light passing through them; in turn, SurfaceInteraction::bsdf will also be nullptr, and the light transport routines don't compute scattering from such surfaces but only account for changes in the current medium at them. Figure 11.14 shows two instances of the dragon model filled with scattering media; one has a scattering surface at its boundary and the other does not.

Figure 11.14: Scattering Media inside the Dragon. Both dragon models have the same homogeneous scattering media inside of them. On the left, the dragon’s surface has a glass material. On the right, the dragon’s Material * is nullptr, which indicates that the surface should be ignored by rays and is only used to delineate a participating medium’s extent.
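
The material-less dragon on the right of Figure 11.14 corresponds to a setup roughly like the following sketch, which assumes the GeometricPrimitive constructor taking a shape, a material, an area light, and a MediumInterface; cloudBoundaryShape and cloud are hypothetical names.

    // Sketch: wrap a shape that only bounds a participating medium. The nullptr
    // Material makes the surface invisible to rays; the MediumInterface records
    // that the medium lies inside the shape and a vacuum lies outside.
    std::shared_ptr<Primitive> MakeMediumBoundary(
            const std::shared_ptr<Shape> &cloudBoundaryShape, const Medium *cloud) {
        return std::make_shared<GeometricPrimitive>(
            cloudBoundaryShape, nullptr, nullptr, MediumInterface(cloud, nullptr));
    }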

Given these conventions for how Medium implementations are associated with rays passing through regions of space, we will implement a Scene::IntersectTr() method, which is a generalization of Scene::Intersect() that returns the first intersection with a light-scattering surface along the given ray as well as the beam transmittance up to that point. (If no intersection is found, this method returns false and doesn’t initialize the provided SurfaceInteraction.)

<<Scene Method Definitions>>+= 
bool Scene::IntersectTr(Ray ray, Sampler &sampler, SurfaceInteraction *isect,
                        Spectrum *Tr) const {
    *Tr = Spectrum(1.f);
    while (true) {
        bool hitSurface = Intersect(ray, isect);
        <<Accumulate beam transmittance for ray segment>> 
           if (ray.medium) *Tr *= ray.medium->Tr(ray, sampler);
        <<Initialize next ray segment or terminate transmittance computation>> 
           if (!hitSurface) return false;
           if (isect->primitive->GetMaterial() != nullptr) return true;
           ray = isect->SpawnRay(ray.d);
    }
}

Each time through the loop, the transmittance along the current ray segment is accumulated into the overall beam transmittance *Tr. Recall that Scene::Intersect() will have updated the ray's tMax member variable to the parametric distance of the intersection if it did intersect a surface. The Tr() implementation will use this value to find the segment over which to compute transmittance.

<<Accumulate beam transmittance for ray segment>>= 
if (ray.medium) *Tr *= ray.medium->Tr(ray, sampler);

The loop ends when no intersection is found or when a scattering surface is intersected. If an optically inactive surface (one whose Material *, and hence bsdf, is nullptr) is intersected, a new ray is spawned from the intersection point in the same direction, though potentially in a different medium, as determined by the intersection's mediumInterface field.

<<Initialize next ray segment or terminate transmittance computation>>= 
if (!hitSurface) return false;
if (isect->primitive->GetMaterial() != nullptr) return true;
ray = isect->SpawnRay(ray.d);
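
As a hedged usage example, the transmittance that IntersectTr() accumulates can directly attenuate emission that is visible along a ray; the helper below and its structure are simplified relative to pbrt's actual integrators.

    // Sketch: radiance emitted by the first scattering surface along r, dimmed by
    // any participating media between the ray origin and that surface.
    Spectrum AttenuatedDirectEmission(const Scene &scene, const Ray &r,
                                      Sampler &sampler) {
        SurfaceInteraction isect;
        Spectrum Tr(1.f);
        if (!scene.IntersectTr(r, sampler, &isect, &Tr))
            return Spectrum(0.f);    // nothing hit (environment emission ignored here)
        return Tr * isect.Le(-r.d);  // emitted radiance times beam transmittance
    }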

11.3.1 Medium Interactions

Section 2.10 introduced the general Interaction class as well as the SurfaceInteraction specialization to represent interactions at surfaces. Now that we have some machinery for describing scattering in volumes, it’s worth generalizing these representations. First, we’ll add two more Interaction constructors for interactions at points in scattering media.

<<Interaction Public Methods>>+=  
Interaction(const Point3f &p, const Vector3f &wo, Float time,
            const MediumInterface &mediumInterface)
    : p(p), time(time), wo(wo), mediumInterface(mediumInterface) { }

<<Interaction Public Methods>>+=  
Interaction(const Point3f &p, Float time, const MediumInterface &mediumInterface)
    : p(p), time(time), mediumInterface(mediumInterface) { }

The Interaction::IsMediumInteraction() method, the counterpart of IsSurfaceInteraction(), indicates whether an interaction represents scattering at a point in a participating medium.

<<Interaction Public Methods>>+=  
bool IsMediumInteraction() const { return !IsSurfaceInteraction(); }

For surface interactions where Interaction::n has been set, the Medium * for a ray leaving the surface in the direction w is returned by the GetMedium() method.

<<Interaction Public Methods>>+=  
const Medium *GetMedium(const Vector3f &w) const {
    return Dot(w, n) > 0 ? mediumInterface.outside : mediumInterface.inside;
}
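
For example, when a ray leaves a surface in direction d, the new ray should carry the medium on the side of the boundary that d points toward. pbrt's Interaction::SpawnRay() behaves along these lines (and additionally offsets the ray origin to avoid self-intersection); the helper below is illustrative only.

    // Sketch: construct a ray leaving the surface at `it` in direction d, tagged
    // with the medium on the side that d points toward.
    Ray LeaveSurface(const Interaction &it, const Vector3f &d) {
        return Ray(it.p, d, Infinity, it.time, it.GetMedium(d));
    }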

For interactions that are known to be inside participating media, another variant of GetMedium() that doesn’t take the unnecessary outgoing direction vector returns the Medium *.

<<Interaction Public Methods>>+= 
const Medium *GetMedium() const {
    Assert(mediumInterface.inside == mediumInterface.outside);
    return mediumInterface.inside;
}

Just as the SurfaceInteraction class represents an interaction obtained by intersecting a ray against the scene geometry, MediumInteraction represents an interaction at a point in a scattering medium that is obtained using a similar kind of operation.

<<Interaction Declarations>>+= 
class MediumInteraction : public Interaction {
  public:
    <<MediumInteraction Public Methods>> 
       MediumInteraction(const Point3f &p, const Vector3f &wo, Float time,
                         const Medium *medium, const PhaseFunction *phase)
           : Interaction(p, wo, time, medium), phase(phase) { }
       bool IsValid() const { return phase != nullptr; }
    <<MediumInteraction Public Data>> 
       const PhaseFunction *phase;
};

<<MediumInteraction Public Methods>>= 
MediumInteraction(const Point3f &p, const Vector3f &wo, Float time,
                  const Medium *medium, const PhaseFunction *phase)
    : Interaction(p, wo, time, medium), phase(phase) { }

MediumInteraction adds a new PhaseFunction member variable to store the phase function associated with its position.

<<MediumInteraction Public Data>>= 
const PhaseFunction *phase;
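
Medium::Sample() implementations are the topic of Section 15.2, but as a hedged preview of how this constructor and the phase member are used, a medium that decides to record a scattering event at parametric distance t along a ray might do something like the following; RecordMediumEvent() is a hypothetical helper, while ARENA_ALLOC and HenyeyGreenstein are pbrt's.

    // Sketch: fill in a MediumInteraction for a scattering event at ray(t), with a
    // Henyey-Greenstein phase function allocated from the MemoryArena.
    void RecordMediumEvent(const Ray &ray, Float t, Float g, const Medium *medium,
                           MemoryArena &arena, MediumInteraction *mi) {
        *mi = MediumInteraction(ray(t), -ray.d, ray.time, medium,
                                ARENA_ALLOC(arena, HenyeyGreenstein)(g));
    }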

11.3.2 Homogeneous Medium

The HomogeneousMedium is the simplest possible medium; it represents a region of space with constant $\sigma_a$ and $\sigma_s$ values throughout its extent. It uses the Henyey–Greenstein phase function to represent scattering in the medium, also with a constant g value. This medium was used for the images in Figures 11.13 and 11.14.

<<HomogeneousMedium Declarations>>= 
class HomogeneousMedium : public Medium {
  public:
    <<HomogeneousMedium Public Methods>> 
       HomogeneousMedium(const Spectrum &sigma_a, const Spectrum &sigma_s, Float g)
           : sigma_a(sigma_a), sigma_s(sigma_s), sigma_t(sigma_s + sigma_a), g(g) { }
       Spectrum Tr(const Ray &ray, Sampler &sampler) const;
       Spectrum Sample(const Ray &ray, Sampler &sampler, MemoryArena &arena,
                       MediumInteraction *mi) const;
  private:
    <<HomogeneousMedium Private Data>> 
       const Spectrum sigma_a, sigma_s, sigma_t;
       const Float g;
};

<<HomogeneousMedium Public Methods>>= 
HomogeneousMedium(const Spectrum &sigma_a, const Spectrum &sigma_s, Float g)
    : sigma_a(sigma_a), sigma_s(sigma_s), sigma_t(sigma_s + sigma_a), g(g) { }

<<HomogeneousMedium Private Data>>= 
const Spectrum sigma_a, sigma_s, sigma_t;
const Float g;
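
As a simple usage sketch, a thin haze filling a scene might be created along these lines; the coefficient values are illustrative only.

    // Hypothetical haze: mostly scattering, slightly absorbing, mildly forward-scattering.
    std::shared_ptr<Medium> MakeThinHaze() {
        Spectrum sigma_a(0.01f), sigma_s(0.1f);  // constant across all spectral channels
        return std::make_shared<HomogeneousMedium>(sigma_a, sigma_s, /*g*/ 0.3f);
    }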

Because $\sigma_t$ is constant throughout the medium, Beer's law, Equation (11.3), can be used to compute transmittance along the ray. However, the implementation of the Tr() method is complicated by some subtleties of floating-point arithmetic. As discussed in Section 3.9.1, IEEE floating point provides a representation for infinity; in pbrt, this value, Infinity, is used to initialize Ray::tMax for rays leaving cameras and lights. This is useful for ray–intersection tests because it ensures that any actual intersection, even one far along the ray, is detected for a ray that hasn't yet hit anything.

However, the use of Infinity for Ray::tMax creates a small problem when applying Beer's law. In principle, we just need to compute the parametric t range that the ray spans, multiply by the ray direction's length, and then multiply by $\sigma_t$:

Float d = ray.tMax * ray.d.Length();
Spectrum tau = sigma_t * d;
return Exp(-tau);

The problem is that multiplying Infinity by zero results in the floating-point “not a number” (NaN) value, which propagates throughout all computations that use it. For a ray that passes infinitely far through a medium with $\sigma_t$ equal to zero in a given spectral channel, the above code would attempt to perform the multiplication 0 * Infinity and would produce a NaN value rather than the expected transmittance of 1. The implementation here resolves this issue by clamping the ray segment length to the largest representable non-infinite floating-point value.
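
The following standalone snippet (independent of pbrt; it uses only the C++ standard library) demonstrates the issue and the effect of the clamp.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <limits>

    int main() {
        float sigma_t = 0.f;                                  // no attenuation in this channel
        float tMax = std::numeric_limits<float>::infinity();  // ray that escapes the scene
        float naive = std::exp(-(sigma_t * tMax));            // 0 * inf -> NaN, exp(NaN) -> NaN
        float d = std::min(tMax, std::numeric_limits<float>::max());
        float clamped = std::exp(-(sigma_t * d));             // exp(-0) == 1, as expected
        std::printf("naive = %f, clamped = %f\n", naive, clamped);
        return 0;
    }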

<<HomogeneousMedium Method Definitions>>= 
Spectrum HomogeneousMedium::Tr(const Ray &ray, Sampler &sampler) const {
    return Exp(-sigma_t * std::min(ray.tMax * ray.d.Length(), MaxFloat));
}

11.3.3 3D Grids

The GridDensityMedium class stores medium densities at a regular 3D grid of positions, similar to the way that the ImageTexture represents images with a 2D grid of samples. These samples are interpolated to compute the density at positions between the sample points. The implementation of the GridDensityMedium is in media/grid.h and media/grid.cpp.

<<GridDensityMedium Declarations>>= 
class GridDensityMedium : public Medium {
  public:
    <<GridDensityMedium Public Methods>> 
       GridDensityMedium(const Spectrum &sigma_a, const Spectrum &sigma_s, Float g,
                         int nx, int ny, int nz, const Transform &mediumToWorld,
                         const Float *d)
           : sigma_a(sigma_a), sigma_s(sigma_s), g(g), nx(nx), ny(ny), nz(nz),
             WorldToMedium(Inverse(mediumToWorld)),
             density(new Float[nx * ny * nz]) {
           memcpy((Float *)density.get(), d, sizeof(Float) * nx * ny * nz);
           <<Precompute values for Monte Carlo sampling of GridDensityMedium>> 
              sigma_t = (sigma_a + sigma_s)[0];
              Float maxDensity = 0;
              for (int i = 0; i < nx * ny * nz; ++i)
                  maxDensity = std::max(maxDensity, density[i]);
              invMaxDensity = 1 / maxDensity;
       }
       Float Density(const Point3f &p) const;
       Float D(const Point3i &p) const {
           Bounds3i sampleBounds(Point3i(0, 0, 0), Point3i(nx, ny, nz));
           if (!InsideExclusive(p, sampleBounds)) return 0;
           return density[(p.z * ny + p.y) * nx + p.x];
       }
       Spectrum Sample(const Ray &ray, Sampler &sampler, MemoryArena &arena,
                       MediumInteraction *mi) const;
       Spectrum Tr(const Ray &ray, Sampler &sampler) const;
  private:
    <<GridDensityMedium Private Data>> 
       const Spectrum sigma_a, sigma_s;
       const Float g;
       const int nx, ny, nz;
       const Transform WorldToMedium;
       std::unique_ptr<Float[]> density;
       Float sigma_t;
       Float invMaxDensity;
};

The constructor takes a 3D array of user-supplied density values, thus allowing a variety of sources of data (physical simulation, CT scan, etc.). The smoke data set rendered in Figures 11.2, 11.5, and 11.10 is represented with a GridDensityMedium. The caller also supplies baseline values of $\sigma_a$, $\sigma_s$, and g to the constructor, which does the usual initialization of the basic scattering properties and makes a local copy of the density values.

<<GridDensityMedium Public Methods>>= 
GridDensityMedium(const Spectrum &sigma_a, const Spectrum &sigma_s, Float g,
                  int nx, int ny, int nz, const Transform &mediumToWorld,
                  const Float *d)
    : sigma_a(sigma_a), sigma_s(sigma_s), g(g), nx(nx), ny(ny), nz(nz),
      WorldToMedium(Inverse(mediumToWorld)), density(new Float[nx * ny * nz]) {
    memcpy((Float *)density.get(), d, sizeof(Float) * nx * ny * nz);
    <<Precompute values for Monte Carlo sampling of GridDensityMedium>> 
       sigma_t = (sigma_a + sigma_s)[0];
       Float maxDensity = 0;
       for (int i = 0; i < nx * ny * nz; ++i)
           maxDensity = std::max(maxDensity, density[i]);
       invMaxDensity = 1 / maxDensity;
}

<<GridDensityMedium Private Data>>= 
const Spectrum sigma_a, sigma_s;
const Float g;
const int nx, ny, nz;
const Transform WorldToMedium;
std::unique_ptr<Float[]> density;
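
As a hedged construction example, a tiny 2x2x2 grid over the canonical unit cube could be set up as follows; the coefficient and density values are illustrative only, and mediumToWorld places the grid wherever the cloud should sit in the scene.

    // Sketch: build a small density grid that is denser in its upper (z = 1) half.
    std::shared_ptr<Medium> MakeTinyCloud(const Transform &mediumToWorld) {
        const int nx = 2, ny = 2, nz = 2;
        Float d[nx * ny * nz] = { 0, 0, 0, 0, 1, 1, 1, 1 };  // x varies fastest, then y, then z
        return std::make_shared<GridDensityMedium>(Spectrum(0.5f), Spectrum(5.f), /*g*/ 0.f,
                                                   nx, ny, nz, mediumToWorld, d);
    }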

The Density() method of GridDensityMedium is called by GridDensityMedium::Tr(); it uses the provided samples to reconstruct the volume density function at the given point, which will already have been transformed into local coordinates using WorldToMedium. In turn, $\sigma_a$ and $\sigma_s$ will be scaled by the interpolated density at the point.

<<GridDensityMedium Method Definitions>>= 
Float GridDensityMedium::Density(const Point3f &p) const {
    <<Compute voxel coordinates and offsets for p>> 
       Point3f pSamples(p.x * nx - .5f, p.y * ny - .5f, p.z * nz - .5f);
       Point3i pi = (Point3i)Floor(pSamples);
       Vector3f d = pSamples - (Point3f)pi;
    <<Trilinearly interpolate density values to compute local density>> 
       Float d00 = Lerp(d.x, D(pi),                     D(pi + Vector3i(1, 0, 0)));
       Float d10 = Lerp(d.x, D(pi + Vector3i(0, 1, 0)), D(pi + Vector3i(1, 1, 0)));
       Float d01 = Lerp(d.x, D(pi + Vector3i(0, 0, 1)), D(pi + Vector3i(1, 0, 1)));
       Float d11 = Lerp(d.x, D(pi + Vector3i(0, 1, 1)), D(pi + Vector3i(1, 1, 1)));
       Float d0 = Lerp(d.y, d00, d10);
       Float d1 = Lerp(d.y, d01, d11);
       return Lerp(d.z, d0, d1);
}

The grid samples are assumed to be over a canonical $[0,1]^3$ domain. (The WorldToMedium transformation should be used to place the GridDensityMedium in the scene.) To interpolate the samples around a point, the Density() method first computes the coordinates of the point with respect to the sample coordinates and the distances from the point to the samples below it (along the lines of what was done in the Film and MIPMap classes; see also Section 7.1.7).

<<Compute voxel coordinates and offsets for p>>= 
Point3f pSamples(p.x * nx - .5f, p.y * ny - .5f, p.z * nz - .5f);
Point3i pi = (Point3i)Floor(pSamples);
Vector3f d = pSamples - (Point3f)pi;
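
As a quick worked example (assuming a hypothetical 4x4x4 grid), the point p = (0.5, 0.5, 0.5) maps to pSamples = (0.5 * 4 - 0.5, ...) = (1.5, 1.5, 1.5), so pi = (1, 1, 1) and d = (0.5, 0.5, 0.5): the point lies exactly halfway between adjacent samples in each dimension, and the eight surrounding samples receive equal weight.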

The distances d can be used directly in a series of invocations of Lerp() to trilinearly interpolate the density at the sample point:

<<Trilinearly interpolate density values to compute local density>>= 
Float d00 = Lerp(d.x, D(pi),                     D(pi + Vector3i(1, 0, 0)));
Float d10 = Lerp(d.x, D(pi + Vector3i(0, 1, 0)), D(pi + Vector3i(1, 1, 0)));
Float d01 = Lerp(d.x, D(pi + Vector3i(0, 0, 1)), D(pi + Vector3i(1, 0, 1)));
Float d11 = Lerp(d.x, D(pi + Vector3i(0, 1, 1)), D(pi + Vector3i(1, 1, 1)));
Float d0 = Lerp(d.y, d00, d10);
Float d1 = Lerp(d.y, d01, d11);
return Lerp(d.z, d0, d1);

The D() utility method returns the density at the given integer sample position. Its only tasks are to handle out-of-bounds sample positions and to compute the appropriate array offset for the given sample. Unlike MIPMaps, where a variety of behavior is useful in the case of out-of-bounds coordinates, here it’s reasonable to always return a zero density for them: the density is defined over a particular domain, and it’s reasonable that points outside of it should have zero density.

<<GridDensityMedium Public Methods>>+= 
Float D(const Point3i &p) const {
    Bounds3i sampleBounds(Point3i(0, 0, 0), Point3i(nx, ny, nz));
    if (!InsideExclusive(p, sampleBounds)) return 0;
    return density[(p.z * ny + p.y) * nx + p.x];
}