## 3.11 Interactions

The last abstractions in this chapter, `SurfaceInteraction`
and `MediumInteraction`,
respectively represent local information at points on surfaces and in participating media. For example, the
ray–shape intersection routines in Chapter 6 return
information about the local differential geometry at intersection points in
a `SurfaceInteraction`. Later, the texturing code in
Chapter 10 computes material properties using values from
the `SurfaceInteraction`. The closely related `MediumInteraction`
class is used to represent points where light interacts with participating
media like smoke or clouds. The implementations
of all of these classes are in the files
`interaction.h` and
`interaction.cpp`.

Both `SurfaceInteraction` and `MediumInteraction` inherit from a
generic `Interaction` class that provides common member variables and
methods, which allows parts of the system for which the differences between
surface and medium interactions do not matter to be implemented purely in
terms of `Interaction`s.

A variety of `Interaction` constructors are available;
depending on what sort of interaction is being constructed and what sort of
information about it is relevant, corresponding sets of parameters are
accepted. The most general of them takes the interaction point, the surface
normal, the parametric $(u, v)$ coordinates, the outgoing direction, and the time.
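The shape of such a constructor can be sketched as follows. This is a minimal, self-contained approximation: the stand-in vector and point types, and the choice to normalize `wo` on construction, are simplifications of pbrt's actual classes (in particular, pbrt stores the point as a `Point3fi` with error intervals).

```cpp
#include <cmath>

using Float = float;

// Minimal stand-in types for this sketch; pbrt's real classes are richer.
struct Point3f  { Float x = 0, y = 0, z = 0; };
struct Normal3f { Float x = 0, y = 0, z = 0; };
struct Point2f  { Float x = 0, y = 0; };
struct Vector3f { Float x = 0, y = 0, z = 0; };

Vector3f Normalize(Vector3f v) {
    Float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Sketch of the most general constructor: it records the interaction
// point, surface normal, parametric (u, v) coordinates, outgoing
// direction, and time.
struct Interaction {
    Interaction() = default;
    Interaction(Point3f p, Normal3f n, Point2f uv, Vector3f wo, Float time)
        : p(p), n(n), uv(uv), wo(Normalize(wo)), time(time) {}

    Point3f p;
    Normal3f n;
    Point2f uv;
    Vector3f wo;
    Float time = 0;
};
```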

All interactions have a point associated with them. This point is
stored using the `Point3fi` class, which uses an `Interval` to
represent each coordinate value. Storing a small interval of
floating-point values rather than a single `Float` makes it possible to
represent bounds on the numeric error in the intersection point, as occurs
when the point was computed by a ray intersection calculation.
This information will be useful for avoiding incorrect self-intersections
for rays leaving surfaces, as will be discussed in
Section 6.8.6.

`Interaction` provides a convenience method that returns a regular
`Point3f` for the interaction point for the parts of the system that do
not need to account for any error in it (e.g., the texture evaluation
routines).
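The idea can be sketched with a toy interval type; pbrt's actual `Interval` and `Point3fi` classes track floating-point rounding error much more carefully. Each coordinate is a conservative `[low, high]` bound, and the convenience accessor collapses the intervals to a single point.

```cpp
using Float = float;

// Minimal interval type: a conservative [low, high] bound on a value.
struct Interval {
    Float low = 0, high = 0;
    Interval() = default;
    Interval(Float v) : low(v), high(v) {}
    Interval(Float low, Float high) : low(low), high(high) {}
    Float Midpoint() const { return (low + high) / 2; }
};

struct Point3f { Float x = 0, y = 0, z = 0; };

// A point whose coordinates carry error bounds, as pbrt's Point3fi does.
struct Point3fi { Interval x, y, z; };

struct Interaction {
    Point3fi pi;
    // Convenience accessor: collapse the intervals to a single point for
    // code (e.g., texture evaluation) that need not account for the error.
    Point3f p() const {
        return {pi.x.Midpoint(), pi.y.Midpoint(), pi.z.Midpoint()};
    }
};
```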

All interactions also have a time associated with them. Among other uses, this value is necessary for setting the time of a spawned ray leaving the interaction.

For interactions that lie along a ray (either from a ray–shape
intersection or from a ray passing through participating media), the
negative ray direction is stored in the `wo` member variable, which
corresponds to $\omega_{\mathrm{o}}$, the notation we use for the outgoing direction when
computing lighting at points. For other types of interaction points where
the notion of an outgoing direction does not apply (e.g., those found by
randomly sampling points on the surface of shapes), `wo` has the value
$(0, 0, 0)$.

For interactions on surfaces, `n` stores the surface normal at the
point and `uv` stores its parametric coordinates. It is fair
to ask, why are these values stored in the base `Interaction`
class rather than in `SurfaceInteraction`? The reason is that there
are some parts of the system that *mostly* do not care about the
distinction between surface and medium interactions—for example, some of
the routines that sample points on light sources given a point to be
illuminated. Those make use of these values if they are available and
ignore them if they are set to zero. By accepting the small dissonance of
having them in the wrong place here, the implementations of those methods
and the code that calls them are made that much simpler.

It is possible to check whether a pointer or reference to an `Interaction`
refers to one of the two subclasses; a nonzero surface normal is used as a
distinguisher for a surface interaction.

Methods are provided to cast to the subclass types as well. This is a good
place for a runtime check to ensure that the requested conversion is valid.
The non-`const` variant of this method as well as corresponding
`AsMedium()` methods follow
similarly and are not included in the text.
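This check-and-cast pattern can be sketched with simplified stand-in types; in pbrt itself, the check and the `AsSurface()` conversion are members of `Interaction`, and a free function is used here only to keep the sketch small.

```cpp
#include <cassert>

struct Normal3f {
    float x = 0, y = 0, z = 0;
    bool operator!=(const Normal3f &o) const {
        return x != o.x || y != o.y || z != o.z;
    }
};

struct Interaction {
    Normal3f n;
    // A surface interaction always has a nonzero surface normal, so the
    // normal doubles as a type discriminator.
    bool IsSurfaceInteraction() const { return n != Normal3f(); }
};

struct SurfaceInteraction : Interaction {};

// Checked downcast: verify that the conversion is valid before casting.
const SurfaceInteraction &AsSurface(const Interaction &intr) {
    assert(intr.IsSurfaceInteraction());
    return static_cast<const SurfaceInteraction &>(intr);
}
```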

Interactions can also represent either an interface between two types of
participating media using an instance of the `MediumInterface` class,
which is defined in Section 11.4, or the properties of the
scattering medium at their point using a `Medium`. Here as
well, the `Interaction` abstraction leaks: surfaces can represent
interfaces between media, and at a point inside a medium, there is no
interface but there is the current medium. Both of these values are stored
in `Interaction` for the same reasons of expediency that `n` and
`uv` were.
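One use of these members, sketched loosely after pbrt's `Interaction::GetMedium()` with placeholder `Medium` and `MediumInterface` types, is selecting the medium on the appropriate side of a surface:

```cpp
#include <cassert>

// Placeholder types for this sketch; pbrt's Medium and MediumInterface
// (Section 11.4) are far richer.
struct Medium { int id = 0; };
struct MediumInterface { Medium inside, outside; };

struct Vector3f { float x = 0, y = 0, z = 0; };
struct Normal3f { float x = 0, y = 0, z = 0; };

float Dot(Vector3f w, Normal3f n) {
    return w.x * n.x + w.y * n.y + w.z * n.z;
}

struct Interaction {
    Normal3f n;
    const MediumInterface *mediumInterface = nullptr;  // surfaces: media on each side
    Medium medium;                                     // interior points: the one medium

    // Pick the medium on the side of the surface that the direction w
    // points toward; at a point inside a medium there is no interface,
    // so the stored medium is returned directly.
    Medium GetMedium(Vector3f w) const {
        if (mediumInterface)
            return Dot(w, n) > 0 ? mediumInterface->outside
                                 : mediumInterface->inside;
        return medium;
    }
};
```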

### 3.11.1 Surface Interaction

As described earlier, the geometry of a particular point on a surface
(often a position found by intersecting a ray against the surface) is
represented by a `SurfaceInteraction`. Having this abstraction lets
most of the system work with points on surfaces without needing to consider
the particular type of geometric shape the points lie on.

Among `SurfaceInteraction`'s methods is `SetIntersectionProperties()`,
which records the material, any area light source, and the participating
media associated with an intersection:

```cpp
void SetIntersectionProperties(Material mtl, Light area,
        const MediumInterface *primMediumInterface, Medium rayMedium) {
    material = mtl;
    areaLight = area;
    <<Set medium properties at surface intersection>>
}
```

A `ToString()` method is also provided.

In addition to the point `p`, the surface normal `n`, and the
$(u, v)$ coordinates from the parameterization of the surface from the
`Interaction` base class, the `SurfaceInteraction` also stores
the parametric partial derivatives of the point, $\partial \mathrm{p}/\partial u$ and
$\partial \mathrm{p}/\partial v$, and the partial derivatives of the surface normal,
$\partial \mathbf{n}/\partial u$ and $\partial \mathbf{n}/\partial v$. See
Figure 3.30 for a depiction of these values.

This representation implicitly assumes that shapes have a parametric
description—that for some range of $(u, v)$ values, points on the
surface are given by some function $f$ such that $\mathrm{p} = f(u, v)$. Although
this is not true for all shapes, all of the shapes that `pbrt` supports do
have at least a local parametric description, so we will stick with the
parametric representation since this assumption is helpful elsewhere (e.g.,
for antialiasing of textures in Chapter 10).

The `SurfaceInteraction` constructor takes parameters that set all of
these values. It computes the normal as the cross product of the partial
derivatives.
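The computation can be sketched as follows; the partial derivatives each lie in the surface's tangent plane, so their cross product is perpendicular to the surface (the types here are minimal stand-ins, and the helper function name is illustrative):

```cpp
#include <cmath>

struct Vector3f { float x, y, z; };

Vector3f Cross(Vector3f a, Vector3f b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

Vector3f Normalize(Vector3f v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// The surface normal follows from the parametric partial derivatives:
// n is perpendicular to both dp/du and dp/dv, hence their cross product.
Vector3f NormalFromPartials(Vector3f dpdu, Vector3f dpdv) {
    return Normalize(Cross(dpdu, dpdv));
}
```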

`SurfaceInteraction` stores a second instance of a surface normal and
the various partial derivatives to represent possibly perturbed values of
these quantities—as can be generated by bump mapping or interpolated
per-vertex normals with meshes. Some parts of the system use this shading
geometry, while others need to work with the original quantities.

The shading geometry values are initialized in the constructor to match the
original surface geometry. If shading geometry is present, it generally
is not computed until some time after the `SurfaceInteraction`
constructor runs. The `SetShadingGeometry()` method, to be defined
shortly, updates the shading geometry.

The surface normal has special meaning to `pbrt`, which assumes that, for
closed shapes, the normal is oriented such that it points to the outside of
the shape. For geometry used as an area light source, light is by default
emitted from only the side of the surface that the normal points toward;
the other side is black. Because normals have this special meaning, `pbrt` provides a mechanism for the user to reverse their orientation: a
`ReverseOrientation` directive in a `pbrt` input file flips the normal
to point in the opposite, non-default direction. Therefore, it is
necessary to check if the given `Shape` has the corresponding flag set
and, if so, switch the normal’s direction here.

However, one other factor plays into the orientation of the normal and must
be accounted for here as well. If a shape’s transformation matrix
has switched the handedness of the object coordinate system from `pbrt`’s
default left-handed coordinate system to a right-handed one, we need to
switch the orientation of the normal as well. To see why this is so,
consider a scale matrix $\mathbf{S}(1, 1, -1)$. We would naturally expect
this scale to switch the direction of the normal, although because we have
computed the normal by $\mathbf{n} = \partial \mathrm{p}/\partial u \times \partial \mathrm{p}/\partial v$,

$$\mathbf{S}(1,1,-1)\,\frac{\partial \mathrm{p}}{\partial u} \times \mathbf{S}(1,1,-1)\,\frac{\partial \mathrm{p}}{\partial v} = \mathbf{S}(-1,-1,1)\left(\frac{\partial \mathrm{p}}{\partial u} \times \frac{\partial \mathrm{p}}{\partial v}\right) = \mathbf{S}(-1,-1,1)\,\mathbf{n} \neq \mathbf{S}(1,1,-1)\,\mathbf{n}.$$

Therefore, it is also necessary to flip the normal’s direction if the transformation switches the handedness of the coordinate system, since the flip will not be accounted for by the computation of the normal’s direction using the cross product. A flag passed by the caller indicates whether this flip is necessary.
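The two flip conditions combine with an exclusive OR, which a small sketch (with a hypothetical `AdjustNormal()` helper) makes explicit:

```cpp
struct Normal3f { float x, y, z; };

// The normal is flipped if exactly one of the two conditions holds:
// ReverseOrientation was specified, or the object-to-world transformation
// swaps handedness. If both hold, the two flips cancel out.
Normal3f AdjustNormal(Normal3f n, bool reverseOrientation,
                      bool transformSwapsHandedness) {
    if (reverseOrientation ^ transformSwapsHandedness)
        return {-n.x, -n.y, -n.z};
    return n;
}
```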

`pbrt` also provides the capability to associate an integer index with each
face of a polygon mesh. This information is used for certain texture
mapping operations. A separate `SurfaceInteraction` constructor
allows its specification.

When a shading coordinate frame is computed, the `SurfaceInteraction`
is updated via its `SetShadingGeometry()` method.


After performing the same cross product (and possibly flipping the orientation of the normal) as before to compute an initial shading normal, the implementation then flips either the shading normal or the true geometric normal if needed so that the two normals lie in the same hemisphere. Since the shading normal generally represents a relatively small perturbation of the geometric normal, the two of them should always be in the same hemisphere. Depending on the context, either the geometric normal or the shading normal may more authoritatively point toward the correct “outside” of the surface, so the caller passes a Boolean value that determines which should be flipped if needed.
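The hemisphere test can be sketched with a `FaceForward()`-style helper; pbrt provides such a function, while the `AlignNormals()` wrapper here is hypothetical and only illustrates the choice the caller's Boolean controls.

```cpp
struct Normal3f { float x, y, z; };

float Dot(Normal3f a, Normal3f b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Return n, flipped if necessary so that it lies in the same hemisphere
// as the reference normal n2.
Normal3f FaceForward(Normal3f n, Normal3f n2) {
    return Dot(n, n2) < 0 ? Normal3f{-n.x, -n.y, -n.z} : n;
}

// The caller chooses which normal is authoritative: either the geometric
// normal is flipped to agree with the shading normal, or vice versa.
void AlignNormals(Normal3f &n, Normal3f &shadingN,
                  bool shadingOrientationIsAuthoritative) {
    if (shadingOrientationIsAuthoritative)
        n = FaceForward(n, shadingN);
    else
        shadingN = FaceForward(shadingN, n);
}
```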

With the normal set, the various partial derivatives can be copied.


### 3.11.2 Medium Interaction

As described earlier, the `MediumInteraction` class is used to
represent an interaction at a point in a scattering medium like smoke or
clouds.

In contrast to `SurfaceInteraction`, it adds little to the base
`Interaction` class. The only addition is a `PhaseFunction`, which
describes how the particles in the medium scatter light. Phase functions
and the `PhaseFunction` class are introduced in
Section 11.3.