## 2.10 Interactions

The last abstraction in this chapter, `SurfaceInteraction`,
represents local information
at a point on a 2D surface. For example, the ray–shape intersection
routines in Chapter 3 return information about the local differential
geometry at intersection points in a `SurfaceInteraction`. Later, the
texturing code in Chapter 10 computes material properties
given a point on a surface represented by a `SurfaceInteraction`. The
closely related
`MediumInteraction` class is used to represent points where light
scatters in participating media like smoke or clouds; it will be defined in
Section 11.3 after additional preliminaries have been
introduced. The implementations of these classes are in the files
`core/interaction.h` and
`core/interaction.cpp`.

Both `SurfaceInteraction` and `MediumInteraction` inherit from a
generic `Interaction` class, which provides some common member
variables and methods. Some parts of the system (notably the light source
implementations) operate with respect to `Interaction`s, as the
differences between surface and medium interactions don’t matter to them.

A number of `Interaction` constructors are available; depending on
what sort of interaction is being constructed and what sort of information
about it is relevant, corresponding sets of parameters are accepted. The
most general of them initializes all of the member variables described below.

All interactions must have a point and time associated with them.

For interactions where the point was computed by ray intersection,
some floating-point error is generally present in the `p`
value. `pError` gives a conservative bound on this error; it's
(0, 0, 0) for points in participating media. See
Section 3.9 for more on `pbrt`’s approach to managing
floating-point error and in particular
Section 3.9.4 for how this bound is
computed for various shapes.

For interactions that lie along a ray (either from a ray–shape
intersection or from a ray passing through participating media), the
negative ray direction is stored in `wo`, which corresponds to ω_o,
the notation we use for the outgoing direction when computing lighting at
points. For other types of interaction points where the notion of an
outgoing direction doesn't apply (e.g., those found by randomly sampling
points on the surface of shapes), `wo` has the value (0, 0, 0).

For interactions on surfaces, `n` stores the surface normal at the
point.

Interactions also need to record the scattering media at their point (if
any); this is handled by an instance of the `MediumInterface` class,
which is defined in Section 11.3.1.
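Putting the preceding members together, the most general constructor might look like the following sketch. The type names follow `pbrt`'s conventions, but the simplified stand-in definitions here are illustrative, not `pbrt`'s actual code:

```cpp
#include <cassert>

// Minimal illustrative stand-ins for pbrt's types; the real definitions
// live elsewhere in the core/ directory.
struct Point3f  { float x = 0, y = 0, z = 0; };
struct Vector3f { float x = 0, y = 0, z = 0; };
struct Normal3f { float x = 0, y = 0, z = 0; };
struct MediumInterface { };
using Float = float;

struct Interaction {
    Interaction() = default;
    // The most general constructor: a point with its error bound, the
    // surface normal, outgoing direction, time, and the scattering media
    // on either side of the surface.
    Interaction(const Point3f &p, const Normal3f &n, const Vector3f &pError,
                const Vector3f &wo, Float time,
                const MediumInterface &mediumInterface)
        : p(p), time(time), pError(pError), wo(wo), n(n),
          mediumInterface(mediumInterface) {}

    Point3f p;            // interaction point
    Float time = 0;       // time of the interaction
    Vector3f pError;      // conservative floating-point error bound on p
    Vector3f wo;          // negated ray direction; (0,0,0) if not applicable
    Normal3f n;           // surface normal at p (zero for media interactions)
    MediumInterface mediumInterface;
};
```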

### 2.10.1 Surface Interaction

The geometry of a particular point on a surface (often a position found by intersecting
a ray against the surface) is represented by a `SurfaceInteraction`. Having this
abstraction lets most of the system work with points on surfaces without
needing to consider the particular type of geometric shape the points lie
on; the `SurfaceInteraction` abstraction supplies enough information about
the surface point to allow the shading and geometric operations in the rest
of `pbrt` to be implemented generically.

In addition to the point `p` and surface normal `n` from the
`Interaction` base class, the `SurfaceInteraction` also stores
(u, v) coordinates from the parameterization of the surface and the
parametric partial derivatives of the point, ∂p/∂u and ∂p/∂v. See
Figure 2.19 for a depiction of these values.
It’s also useful to have a pointer to the `Shape` that the point lies
on (the `Shape` class will be introduced in the next chapter) as well
as the partial derivatives of the surface normal.

This representation implicitly assumes that shapes have a parametric
description—that for some range of (u, v) values, points on the
surface are given by some function f such that p = f(u, v). Although
this isn’t true for all shapes, all of the shapes that `pbrt` supports do
have at least a local parametric description, so we will stick with the
parametric representation since this assumption is helpful elsewhere (e.g.,
for antialiasing of textures in Chapter 10).

The `SurfaceInteraction` constructor takes parameters that set all of
these values. It computes the normal as the cross product of the partial
derivatives.
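The cross-product computation can be sketched as follows; the helper functions here are simplified illustrations rather than `pbrt`'s actual implementations:

```cpp
#include <cassert>
#include <cmath>

struct Vector3f { float x, y, z; };

// Cross product of the two parametric tangent vectors is perpendicular
// to both, and hence to the surface at the point.
Vector3f Cross(const Vector3f &a, const Vector3f &b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vector3f Normalize(const Vector3f &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// As in the SurfaceInteraction constructor: the geometric normal is the
// normalized cross product of the parametric partial derivatives.
Vector3f GeometricNormal(const Vector3f &dpdu, const Vector3f &dpdv) {
    return Normalize(Cross(dpdu, dpdv));
}
```

For a plane with dpdu = (1, 0, 0) and dpdv = (0, 1, 0), this yields the expected normal (0, 0, 1).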

`SurfaceInteraction` stores a second instance of a surface normal and
the various partial derivatives to represent possibly perturbed values of
these quantities as can be generated by bump mapping or interpolated
per-vertex normals with triangles. Some parts of the system use this
shading geometry, while others need to work with the original quantities.

The shading geometry values are initialized in the constructor to match the
original surface geometry. If shading geometry is present, it generally
isn’t computed until some time after the `SurfaceInteraction`
constructor runs. The `SetShadingGeometry()` method, to be defined
shortly, updates the shading geometry.

The surface normal has special meaning to `pbrt`, which assumes that, for
closed shapes, the normal is oriented such that it points to the outside of
the shape. For geometry used as an area light source, light is emitted
from only the side of the surface that the normal points toward; the other
side is black. Because normals have this special meaning, `pbrt` provides
a mechanism for the user to reverse the orientation of the normal: the
`ReverseOrientation` directive in `pbrt`'s input file flips the normal to
point in the opposite, non-default direction. Therefore, it is necessary to
check whether the given `Shape` has the corresponding flag set and, if so,
switch the normal's direction here.

However, one other factor plays into the orientation of the normal and must
be accounted for here as well. If the `Shape`’s transformation matrix
has switched the handedness of the object coordinate system from `pbrt`’s
default left-handed coordinate system to a right-handed one, we need to
switch the orientation of the normal as well. To see why this is so,
consider a scale matrix S(1, 1, −1). We would naturally expect
this scale to switch the direction of the normal, although because we have
computed the normal by ∂p/∂u × ∂p/∂v,

S(1, 1, −1)∂p/∂u × S(1, 1, −1)∂p/∂v = S(−1, −1, 1)(∂p/∂u × ∂p/∂v) ≠ S(1, 1, −1)(∂p/∂u × ∂p/∂v).
Therefore, it is also necessary to flip the normal’s direction if the transformation switches the handedness of the coordinate system, since the flip won’t be accounted for by the computation of the normal’s direction using the cross product.

The normal’s direction is swapped if one but not both of these two conditions is met; if both were met, their effect would cancel out. The exclusive-OR operation tests this condition.
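In code, the test reduces to an exclusive OR of the two flags. A minimal sketch, with flag names modeled on `pbrt`'s `reverseOrientation` and `transformSwapsHandedness`:

```cpp
#include <cassert>

struct Normal3f { float x, y, z; };

// Flip the normal if exactly one of the two conditions holds; if both
// hold, the two flips cancel and the normal is left unchanged.
Normal3f OrientNormal(Normal3f n, bool reverseOrientation,
                      bool transformSwapsHandedness) {
    if (reverseOrientation ^ transformSwapsHandedness) {
        n.x = -n.x; n.y = -n.y; n.z = -n.z;
    }
    return n;
}
```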

When a shading coordinate frame is computed, the `SurfaceInteraction`
is updated via its `SetShadingGeometry()` method.

<<Compute `shading.n` for `SurfaceInteraction`>>
<<Initialize `shading` partial derivative values>>

After performing the same cross product (and possibly flipping the orientation of the normal) as before to compute an initial shading normal, the implementation then flips either the shading normal or the true geometric normal if needed so that the two normals lie in the same hemisphere. Since the shading normal generally represents a relatively small perturbation of the geometric normal, the two of them should always be in the same hemisphere. Depending on the context, either the geometric normal or the shading normal may more authoritatively point toward the correct “outside” of the surface, so the caller passes a Boolean value that determines which should be flipped if needed.
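The hemisphere-reconciliation step can be sketched with a `Faceforward()` helper that flips a normal to lie in the same hemisphere as a reference normal. The names follow `pbrt`'s conventions, but the code here is a simplified illustration, with `shadingIsAuthoritative` standing in for the caller-supplied Boolean:

```cpp
#include <cassert>

struct Normal3f { float x, y, z; };

float Dot(const Normal3f &a, const Normal3f &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Flip n so that it lies in the same hemisphere as the reference normal.
Normal3f Faceforward(Normal3f n, const Normal3f &ref) {
    if (Dot(n, ref) < 0) { n.x = -n.x; n.y = -n.y; n.z = -n.z; }
    return n;
}

// Reconcile the geometric normal n and the shading normal ns: whichever
// one is not authoritative is flipped into the other's hemisphere.
void ReconcileNormals(Normal3f &n, Normal3f &ns,
                      bool shadingIsAuthoritative) {
    if (shadingIsAuthoritative)
        n = Faceforward(n, ns);
    else
        ns = Faceforward(ns, n);
}
```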


We’ll add a method to `Transform` to transform
`SurfaceInteraction`s. Most members are either transformed directly
or copied, as appropriate, but given the approach that `pbrt` uses for
bounding floating-point error in computed intersection points, transforming
the `p` and `pError` member variables requires special care. The
fragment that handles this, <<Transform `p` and `pError`
in `SurfaceInteraction`>>, is defined in Section 3.9,
where floating-point rounding error is discussed.

<<Transform `p` and `pError` in `SurfaceInteraction`>>
<<Transform remaining members of `SurfaceInteraction`>>