
Final review

HW #4: due 5 am tomorrow if you have 3 late days!!!

Instructor: Christopher Rasmussen (cer@cis.udel.edu)

Course web page:

http://goo.gl/XXHixg

May 17, 2016 ❖ Lecture 25


Final Notes

  • When/where
    • Thursday, May 19
    • 1-3 pm, ISE 307
  • Worth 20% of your grade, just like the midterm (see 2008 final)
  • Covers second half of term (from spring break to May 10), but of course first-half knowledge may be assumed
    • A few topics not in book (like ambient occlusion, bidirectional ray tracing, photon mapping, k-d trees)
    • A bit less emphasis on topics addressed in HWs
  • Closed book, no calculators, no notes
  • Question format similar to midterm, with perhaps one or two GLSL-related questions


Final topics

  • Textures
  • Ray tracing
  • Global illumination
  • Noise
  • Shape modeling


TEXTURES


What is Texture Mapping?

  • Spatially-varying modification of surface appearance at the pixel level
  • Characteristics
    • Color
    • Shininess
    • Transparency
    • Bumpiness
    • Etc.

from Hill


Texture mapping: Steps

  • Creation: Where does the texture image come from?
  • Geometry: Transformation from 3-D shape locations to 2-D texture image coordinates
  • Rasterization: What to draw at each pixel
    • E.g., bilinear interpolation vs. nearest-neighbor


Texturing Pipeline (Geometry + Rasterization)

  1. Compute object space location (x, y, z) from screen space location (i, j)
  2. Use projector function to obtain object surface coordinates (u, v) (3-D → 2-D projection)
  3. Use corresponder function to find texel coordinates (s, t) (2-D → 2-D transformation)
    • Scale, shift, wrap like viewport transform in geometry pipeline
  4. Filter texel at (s, t)
  5. Modify pixel (i, j)

list adapted from Akenine-Möller & Haines

courtesy of R. Wolfe
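The middle steps translate directly to code. Below is a minimal sketch of steps 2-4 for a spherical projector with a repeat-mode corresponder and nearest-neighbor filtering (the function names and the projector choice are illustrative, not fixed by the pipeline):

import math

def spherical_projector(x, y, z):
    # Step 2: project the object-space point through the object center
    # onto a sphere, giving surface coordinates (u, v) in [0, 1]^2
    theta = math.atan2(y, x)                         # longitude
    phi = math.acos(z / math.sqrt(x*x + y*y + z*z))  # colatitude
    return (theta + math.pi) / (2 * math.pi), phi / math.pi

def corresponder(u, v, tex_w, tex_h):
    # Step 3: scale and wrap (u, v) to texel coordinates (s, t),
    # like the viewport transform in the geometry pipeline
    return (u % 1.0) * (tex_w - 1), (v % 1.0) * (tex_h - 1)

def filter_nearest(texture, s, t):
    # Step 4: nearest-neighbor filtering (texture[row][col] holds colors)
    return texture[round(t)][round(s)]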


Projector Functions

  • Want way to get from 3-D point to 2-D surface coordinates as an intermediate step
  • Idea: Project complex object onto simple object’s surface with parallel or perspective projection (focal point inside object)
    • Plane
    • Cylinder
    • Sphere
    • Cube
    • Mesh: piecewise planar

Planar projector

courtesy of R. Wolfe


Projecting in non-standard directions

  • Don’t have to project ray from object center through position (x, y, z)—can use any attribute of that position. For example:
    • Ray comes from another location
    • Ray is surface normal n at (x, y, z)
    • Ray is reflection-from-eye vector r at (x, y, z)
    • Etc.

courtesy of R. Wolfe


Projecting in non-standard directions

  • This can lead to interesting or informative effects

courtesy of R. Wolfe

Different ray directions for a spherical projector


Environment/Reflection Mapping

  • Problem: To render pixel on mirrored surface correctly, we need to follow reflection of eye vector back to first intersection with another surface and get its color
  • This is an expensive procedure with ray tracing
  • Idea: Approximate with texture mapping

from Angel


Environment mapping: Details

  • Key idea: Render 360 degree view of environment from center of object with sphere or box as intermediate surface
  • Intersection of eye reflection vector with intermediate surface provides texture coordinates for reflection/environment mapping

courtesy of R. Wolfe


Texture Rasterization

  • Okay…we’ve got texture coordinates for the polygon vertices. What are (s, t) for the pixels inside the polygon?
  • Use Gouraud-style linear interpolation of texture coordinates, right?
    • First along polygon edges between vertices
    • Then along scanlines between left and right sides

from Hill


Why not?

  • Equally-spaced pixels do not project to equally-spaced texels under perspective projection
    • No problem with 2-D affine transforms (rotation, scaling, shear, etc.)
    • But different depths change things

from Hill

courtesy of H. Pfister


Magnification and minification

  • Magnification: Single screen pixel maps to area less than or equal to one texel
  • Minification: Single screen pixel area maps to area greater than one texel
    • If texel area covered is much greater than 4, even bilinear filtering isn’t so great

Magnification

Minification

from Angel

courtesy of H. Pfister


Bilinear Interpolation (BLI)

  • Idea: Blend four texel values surrounding source, weighted by nearness

Vertical blend

Horizontal blend
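As a sketch (assuming float texel coordinates (s, t) and a texture indexed as texture[row][col] with scalar values; colors would blend per channel):

import math

def sample_bilinear(texture, s, t):
    # Blend the four surrounding texels, weighted by nearness
    s0, t0 = math.floor(s), math.floor(t)
    ds, dt = s - s0, t - t0
    # Horizontal blends along the lower and upper texel rows...
    lower = (1 - ds) * texture[t0][s0]     + ds * texture[t0][s0 + 1]
    upper = (1 - ds) * texture[t0 + 1][s0] + ds * texture[t0 + 1][s0 + 1]
    # ...then a vertical blend between the two results
    return (1 - dt) * lower + dt * upper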


Mipmaps

  • Filtering for minification is expensive, and different areas must be averaged depending on the amount of minification
  • Idea:
    • Prefilter entire texture image at different resolutions
    • For each screen pixel, pick texture in mipmap at level of detail (LOD) that minimizes minification (i.e., pre-image area closest to 1)
    • Do nearest or linear filtering in appropriate LOD texture image

from Woo, et al.
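A sketch of the LOD pick, assuming the texel area covered by the pixel's pre-image has already been estimated (that footprint estimate is a separate derivative computation):

import math

def mipmap_level(texel_area, num_levels):
    # Each level up halves resolution, so the covered area shrinks by 4x;
    # an area of 4^L shrinks to ~1 texel at level L = 0.5 * log2(area)
    lod = 0.5 * math.log2(max(texel_area, 1.0))
    return min(int(round(lod)), num_levels - 1)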


Bump Mapping

  • So far we’ve been thinking of textures modulating color and transparency only
    • Billboards, decals, lightmaps, etc.
  • But any other per-pixel properties are fair game...
  • Pixel normals usually smoothly varying
    • Computed at vertices for Gouraud shading; color interpolated
    • Interpolated from vertices for Phong shading
  • Textures allow setting per-pixel normal with a bump map


Bump mapping: Why?

  • Can get a lot more surface detail without the expense of more object vertices to transform and light

courtesy of Nvidia


Bump Mapping: How?

  • Idea: Perturb pixel normals n(u, v) derived from object geometry to get additional detail for shading
  • Compute lighting per pixel (like Phong)

from Hill


Bump mapping: Issues

  • Bumps don’t cast shadows
  • Geometry doesn’t change, so silhouette of object is unaffected
  • Textures can be used to modify underlying geometry with displacement maps

courtesy of Nvidia


Displacement Mapping

courtesy of spot3d.com

Bump mapping

Displacement mapping


Shadow Maps

  • Idea: If we render scene from point of view of light source, all visible surfaces are lit and hidden surfaces are in shadow
    • “Camera” parameters here = spotlight characteristics
  • When rasterizing scene from eye view, transform each pixel to get 3-D position with respect to the light
    • Project pixel to (i, j, depth) with respect to light
    • Compare depth to value in shadow buffer (aka light’s z-buffer) at (i, j): if the stored depth is not smaller, the point is visible to the light, i.e., not shadowed

View from light

View from camera
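A sketch of the per-pixel comparison, assuming a hypothetical project() helper that maps a world point into the light's (i, j, depth) space; the bias constant is an illustrative guard against self-shadowing from depth quantization:

def in_shadow(world_point, light_view_proj, shadow_buffer, bias=1e-3):
    # Transform the eye-view pixel's 3-D position into the light's frame
    i, j, depth = project(world_point, light_view_proj)  # hypothetical helper
    # If something nearer to the light was rendered at (i, j),
    # this point is blocked, i.e., shadowed
    return depth > shadow_buffer[j][i] + bias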


RAY TRACING


Illumination models

  • Interaction between light sources and objects in scene that results in perception of intensity and color at eye
  • Local vs. global models
    • Local illumination: Perception of a particular primitive only depends on light sources directly affecting that one primitive
      • Geometry
      • Material properties
    • Global illumination: Also take into account indirect effects on light of other objects in the scene
      • Shadows cast
      • Light reflected/refracted


Backward Ray “Following”: Types

  • Ray casting: Compute illumination at first intersected surface point only
    • Takes care of hidden surface elimination
  • Ray tracing: Recursively spawn rays at hit points to simulate reflection, refraction, etc.

Angel


Does Ray Intersect any Scene Primitives?

  • Test each primitive in scene for intersection individually
  • Different methods for different kinds of primitives
    • Polygon
    • Sphere
    • Cylinder, torus
    • Etc.
  • Make sure intersection point is in front of eye and nearest one

from Hill


Ray-Sphere Intersection I

  • Combine implicit definition of sphere

    (p − c) · (p − c) − r² = 0

with ray equation

    p(t) = e + t d

(where d is a unit vector) to get:

    (e + t d − c) · (e + t d − c) − r² = 0


Ray-Sphere Intersection II

  • Substitute and use the identity d · d = 1 (d is a unit vector) to solve for t, resulting in a quadratic equation with roots given by:

    t = −d · (e − c) ± sqrt((d · (e − c))² − (e − c) · (e − c) + r²)

  • Notes
    • Real solutions mean there actually are 1 or 2 intersections
    • Negative solutions are behind eye
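The roots above translate directly to code; a sketch with the eye e, unit direction d, center c, and radius r as plain 3-tuples:

import math

def ray_sphere(e, d, c, r):
    # Ray p(t) = e + t d vs. sphere (p - c).(p - c) = r^2
    m = tuple(ei - ci for ei, ci in zip(e, c))        # e - c
    b = sum(di * mi for di, mi in zip(d, m))          # d . (e - c)
    disc = b * b - sum(mi * mi for mi in m) + r * r   # term under the root
    if disc < 0:
        return None                    # no real roots: no intersection
    for t in (-b - math.sqrt(disc), -b + math.sqrt(disc)):
        if t > 0:                      # negative roots are behind the eye
            return t                   # nearest hit in front of the eye
    return None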


Shadow Rays

  • For point p being locally shaded, only add diffuse & specular components for light l if light is not occluded (i.e., blocked)
  • Test for occlusion of l for p:
    • Spawn shadow ray for l with origin p, direction l (the unit vector toward the light)
    • Check whether shadow ray intersects any scene object
    • Intersection only “counts” if it lies strictly between p and the light: ε < t < distance to light (ε > 0 keeps p from shadowing itself)

  • More details in Shirley, Chap. 10.5

from Hill
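A sketch of the test, assuming each scene object exposes an intersect(origin, direction) method returning the nearest hit parameter t or None (that interface is illustrative):

import math

EPS = 1e-4   # offset so p's own surface doesn't count as a blocker

def occluded(p, light_pos, scene):
    # Shadow ray from p toward the light: a hit "counts" only if it
    # falls strictly between p and the light
    to_light = tuple(li - pi for li, pi in zip(light_pos, p))
    dist = math.sqrt(sum(x * x for x in to_light))
    l = tuple(x / dist for x in to_light)             # unit direction
    return any(t is not None and EPS < t < dist
               for t in (obj.intersect(p, l) for obj in scene))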


Ray Tracing

  • Model: Perceived color at point p is an additive combination of local illumination (e.g., Phong), reflection, and refraction effects
  • Compute reflection, refraction contributions by tracing respective rays back from p to surfaces they came from and evaluating local illumination at those locations
  • Apply operation recursively to some maximum depth to get:
    • Reflections of reflections of ...
    • Refractions of refractions of ...
    • And of course mixtures of the two

from Hill


Ray Tracing Reflection Formula

  • The formula used for Phong illumination is not what we want here, because our incident ray v points in toward the surface, whereas the light direction l pointed away from the surface
  • So just negate the formula to get:

    r = v − 2(n · v) n
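In code, with v the unit incident direction (pointing toward the surface) and n the unit normal:

def reflect(v, n):
    # r = v - 2 (n . v) n, mirroring v about the surface normal
    ndotv = sum(ni * vi for ni, vi in zip(n, v))
    return tuple(vi - 2 * ndotv * ni for vi, ni in zip(v, n))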


Refraction

  • Definition: Bending of light ray as it crosses interface between media (e.g., air → glass or vice versa)
  • Index of refraction (IOR) n for a medium: Ratio of speed of light in vacuum to that in medium (wavelength-dependent ⇒ prisms)
    • By definition, n ≥ 1
    • Examples: nair (1.00) < nwater (1.33) < nglass (1.52)

Snell’s law relates the angle of incidence θ1 and the angle of refraction θ2: n1 sin θ1 = n2 sin θ2

courtesy of Wolfram
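A sketch of the refracted direction from Snell's law, with d the unit incident direction, n the unit normal on the incoming side, and eta = n1/n2; a None return signals total internal reflection:

import math

def refract(d, n, eta):
    cos1 = -sum(di * ni for di, ni in zip(d, n))   # cos(theta_1)
    sin2_sq = eta * eta * (1.0 - cos1 * cos1)      # Snell's law, squared
    if sin2_sq > 1.0:
        return None                                # total internal reflection
    cos2 = math.sqrt(1.0 - sin2_sq)
    # Tangential part scales by eta; normal part is rebuilt from cos(theta_2)
    return tuple(eta * di + (eta * cos1 - cos2) * ni for di, ni in zip(d, n))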


Basic Ray Tracing: Notes

  • Global illumination effects simulated by basic algorithm are shadows, purely specular reflection/transmission
  • Some outstanding issues
    • Aliasing, aka jaggies
    • Shadows have sharp edges, which is unrealistic
    • No diffuse reflection from other objects
  • Intersection calculations are expensive, and even more so for more complex objects
    • Not currently suitable for real-time use (e.g., games)


Distributed (aka “distribution”) Ray Tracing (DRT)

  • Basic idea: Use multiple eye rays for each pixel rendered or multiple recursive rays at intersections
  • Application #1: Improving image quality via anti-aliasing
    • Supersampling: Shoot multiple nearby eye rays per pixel and combine colors
    • Uniform vs. adaptive: Constant number of rays, or more rays in areas where the image is changing more quickly


Supersampling

  • Rasterize at higher resolution
    • Regular grid pattern around each “normal” image pixel
    • Irregular jittered sampling pattern reduces artifacts
  • Combine multiple samples into one pixel via weighted average
    • “Box” filter: All samples associated with a pixel have equal weight (i.e., directly take their average)
    • Gaussian/cone filter: Sample weights inversely proportional to distance from associated pixel

from Hill

Regular supersampling with 2x frequency

Jittered supersampling
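A sketch of jittered supersampling with a box filter, assuming a trace_ray(x, y) function that shoots an eye ray through floating-point image coordinates and returns a scalar intensity (colors would be averaged per channel):

import random

def render_pixel(i, j, n, trace_ray):
    # One random sample inside each cell of an n x n sub-pixel grid,
    # combined with equal weights (a box filter)
    total = 0.0
    for a in range(n):
        for b in range(n):
            x = i + (a + random.random()) / n
            y = j + (b + random.random()) / n
            total += trace_ray(x, y)
    return total / (n * n)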


Adaptive Supersampling (Whitted’s method)

  • Shoot rays through 4 pixel corners and collect colors
  • Provisional color for entire pixel is average of corner contributions
    • If you stop here, the only overhead vs. center-of-pixel ray tracing is one extra row and column of rays
  • If any corner’s color is too different, subdivide pixel into quadrants and recurse on quadrants
  • Details
    • Subdivide if any corner is more than 25% different from average (try experimenting with different thresholds here)
    • Maximum depth of 2 subdivisions sufficient

from Hill
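A sketch of the recursion over one pixel's corner square, with colors reduced to scalars for brevity; in practice corner rays are cached and shared between neighboring pixels and quadrants rather than re-traced:

def adaptive_sample(x0, y0, x1, y1, trace_ray, depth=2):
    # Provisional color: average of the four corner rays
    corners = [trace_ray(x, y) for x in (x0, x1) for y in (y0, y1)]
    avg = sum(corners) / 4.0
    # Accept if all corners are within 25% of the average, or depth is spent
    if depth == 0 or all(abs(c - avg) <= 0.25 * avg for c in corners):
        return avg
    # Otherwise subdivide into quadrants and recurse
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return (adaptive_sample(x0, y0, xm, ym, trace_ray, depth - 1) +
            adaptive_sample(xm, y0, x1, ym, trace_ray, depth - 1) +
            adaptive_sample(x0, ym, xm, y1, trace_ray, depth - 1) +
            adaptive_sample(xm, ym, x1, y1, trace_ray, depth - 1)) / 4.0

For pixel (i, j), this would be called as adaptive_sample(i, j, i + 1, j + 1, trace_ray).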


DRT: Soft Shadows

  • For point light sources, sending a single shadow ray toward each is reasonable
    • But this gives hard-edged shadows
  • Simulating soft shadows
    • Model each light source as sphere
    • Send multiple jittered shadow rays toward a light sphere; use fraction that reach it to attenuate color
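A sketch, reusing the occluded() test from the shadow-ray slide; a random point inside the unit ball (found by rejection sampling) jitters each ray's target on the light sphere:

import random

def light_visibility(p, light_center, light_radius, scene, n=16):
    # Fraction of jittered shadow rays that reach the light sphere;
    # use it to attenuate that light's diffuse/specular contribution
    unblocked = 0
    for _ in range(n):
        while True:   # rejection-sample an offset inside the unit ball
            o = tuple(random.uniform(-1, 1) for _ in range(3))
            if sum(x * x for x in o) <= 1.0:
                break
        target = tuple(c + light_radius * x for c, x in zip(light_center, o))
        if not occluded(p, target, scene):
            unblocked += 1
    return unblocked / n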


DRT: Ambient Occlusion

  • Extension of shadow ray idea—not every point should get full ambient illumination
  • Cast random rays from each surface point to estimate percent of sky hemisphere that is visible (i.e., is there any intersection within a certain distance)
    • May use cosine weighting/distribution for foreshortening
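A sketch with uniform hemisphere sampling and the same illustrative obj.intersect interface as before; swapping in a cosine-weighted distribution would account for foreshortening:

import math, random

def hemisphere_dir(n):
    # Uniform random unit vector in the hemisphere around unit normal n
    while True:
        d = tuple(random.gauss(0.0, 1.0) for _ in range(3))
        norm = math.sqrt(sum(x * x for x in d))
        d = tuple(x / norm for x in d)
        if sum(di * ni for di, ni in zip(d, n)) > 0.0:
            return d

def ambient_occlusion(p, n, scene, max_dist=1.0, samples=32):
    # Estimate the visible fraction of the sky hemisphere at p
    visible = 0
    for _ in range(samples):
        d = hemisphere_dir(n)
        if not any(t is not None and 1e-4 < t < max_dist
                   for t in (obj.intersect(p, d) for obj in scene)):
            visible += 1
    return visible / samples   # scales the ambient term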


DRT: Glossy Reflections

  • The analog of hard shadows is “sharp reflections”: every reflective surface acts like a perfect mirror
  • To get glossy or blurry reflections, send out multiple jittered reflection rays and average their colors

Why is the reflection sharper at the top?


Bounding Volumes

  • Idea: Enclose complex objects (e.g., .obj models) in simpler ones (e.g., spheres) and test the simple intersection before the complex one
  • Want bounds as tight as possible


Bounding Boxes as Volumes: Multiple Objects

  • With multiple objects in the scene, how to arrange bounding boxes?

Nested boxes impose a hierarchy that allows a more efficient recursive tree search
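A sketch of the early-out with bounding spheres, reusing ray_sphere() from the intersection slide (the object attributes are illustrative):

def intersect_bounded(e, d, obj):
    # Cheap test first: if the ray misses the bounding sphere,
    # it cannot hit anything inside it
    if ray_sphere(e, d, obj.bound_center, obj.bound_radius) is None:
        return None
    return obj.intersect(e, d)   # expensive exact test only when needed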


Uniform Spatial Subdivision

  • Another approach is to divide space equally, such as into boxes
  • Each object "belongs" to every box it intersects
  • Trace ray through boxes sequentially, check every object belonging to current box


GLOBAL ILLUMINATION


Light Paths

  • Consider the path that a light ray might take through a scene between the light source L and the eye E
  • It may interact with multiple diffuse (D) and specular (S) objects along the way
  • We can describe this series of interactions with the regular expression L (D | S)* E
    • (If a surface is a mix of D and S, the combination is additive so it is still OK to treat in this manner)

from Sillion & Puech


Light Paths: Examples

  • Direct visualization of the light: LE
  • Local illumination: LDE, LSE
  • Ray tracing: LS*E, LDS*E

from Hill

Ray tracing light paths

General light paths

from Sillion & Puech


Caustics

  • Definition: (Concentrated) specular reflection/refraction onto a diffuse surface
    • In simplest form, follow an LSDE path
  • Standard ray tracing cannot handle caustics—only paths described by LDS*E

courtesy of H. Wann Jensen

from Sillion & Puech


Bidirectional Ray Tracing (P. Heckbert, 1990)

  • Idea: Trace forward light rays into scene as well as backward eye rays
  • At diffuse surfaces, light rays additively “deposit” photons in radiosity textures, or “rexes”, where they are picked up by eye rays
    • Summation approximates integral term in radiance computation
    • Light rays carry information on specular surface locations—they have no uncertainty

from P. Heckbert


Photon Mapping (H. Jensen, 1996)

  • Two-pass algorithm somewhat like bidirectional ray tracing, but photons stored differently
  • 1st pass: Build photon map
    • Shoot random rays from light(s) into scene
    • Each photon carries fraction of light’s power
    • Follow specular bounces, but store photons in map at each diffuse surface hit (or scattering event)
  • 2nd pass: Render scene
    • Modified ray tracing: follow eye rays into scene
    • Use photons near each intersection to compute light


Lighting Components, Reconsidered

  • Break rendering equation into parts: L = Ldirect + Lspecular + Lindirect + Lcaustic
  • Can get Ldirect and Lspecular using ray-casting, ray-tracing respectively
  • Lindirect is main reason we’re looking at photon mapping—it’s our LD*E paths
  • Lcaustic from special “caustic” photon map


LD*E paths

from http://gurneyjourney.blogspot.com

Also known as diffuse reflectance or color bleeding


kd-trees

  • Each point parametrizes axis-aligned splitting plane; rotate which axis is split
  • But balance is important to get O(log N) efficiency for nearest-neighbor queries
  • Example: a kd tree for k = 2 and N = 6, built as in the sketch below
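A sketch of a balanced build by median split, cycling the splitting axis with depth (the dict node layout is illustrative):

def build_kdtree(points, depth=0, k=2):
    # Median split on axis (depth mod k) keeps the tree balanced,
    # which is what gives O(log N) nearest-neighbor queries
    if not points:
        return None
    axis = depth % k
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"point": pts[mid], "axis": axis,
            "left": build_kdtree(pts[:mid], depth + 1, k),
            "right": build_kdtree(pts[mid + 1:], depth + 1, k)}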


NOISE


Noise as a Texture Generator

  • Easiest texture to make: Random values for texels
    • noise(x, y) = random()
  • If random() has limited range (e.g., [0, 1]), can control maximum value via amplitude
    • a * noise(x, y)
  • But the results usually aren’t very exciting visually


3-D Noise

  • 3-D or solid texture has value at every point (x, y, z)
  • This makes texture mapping very easy :)
  • Simple solid texture generator is noise function on lattice:
    • noise(x, y, z) = random()
  • For points in between, we need to interpolate
  • Technically, this is "value" noise -- "Perlin" noise is based on random gradients

courtesy of L. McMillan

Perlin noise is implemented in the GLM library
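A sketch of 3-D value noise: random values on the integer lattice, trilinearly interpolated in between (the hash-seeded lattice function stands in for a precomputed random table):

import math, random

def lattice(ix, iy, iz):
    # Deterministic pseudo-random value in [0, 1) at a lattice point
    return random.Random(hash((ix, iy, iz))).random()

def value_noise(x, y, z):
    # Trilinear interpolation of the eight surrounding lattice values
    x0, y0, z0 = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    total = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                total += w * lattice(x0 + dx, y0 + dy, z0 + dz)
    return total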


Fractal Noise (aka "turbulence", aka "fractional Brownian motion" (FBM))

  • Many frequencies present, looks more natural
  • Can get this by summing noise at different magnifications
  • turb(x, y, z) = Σi ai * noisei(x, y, z)
  • Typical (but totally adjustable) parameters:
    • Magnification doubles at each level (octave)
    • Amplitude drops by half

1 x + 0.5 x + 0.25 x + 0.125 x = (sum of four noise octaves at decreasing amplitudes)

courtesy of H. Elias
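A sketch of the octave sum using the value_noise() sketch above, with the typical parameters (frequency doubles, amplitude halves per octave):

def turbulence(x, y, z, octaves=4):
    total, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq, z * freq)
        freq *= 2.0   # next octave: double the magnification...
        amp *= 0.5    # ...at half the amplitude
    return total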


2-D Noise: Applications

  • Traditional “wrappable” textures
    • Clouds, water, fire, etc.
    • Bump, specularity, blending maps

  • Height maps—e.g., fractal terrain


3-D and 4-D Noise: Applications

  • 3-D
    • Solid textures such as marble, wood, 3-D clouds
    • Animated 2-D textures (flesh, slime, copper, mercury, stucco, amber, sparks)
  • 4-D
    • Animated solid textures


SHAPE MODELING


Parametric Lines

  • Parametric definition of a line segment:

p(t) = p0 + t(p1 - p0), where t ∈ [0, 1]

= p0 - t p0 + t p1

= (1 - t)p0 + t p1

from Akenine-Möller & Haines

like a “blend” of the two endpoints


Linear Interpolation as Blending

  • Consider each point on the line segment as a sum of control points pi weighted by blending functions Bi:

    p(t) = Σi Bi(t) pi

Blending functions for linear interpolation (2 control points)

from Akenine-Möller & Haines

• Here we have n=1, B0 = 1 - t, and B1 = t


Interpolating interpolants: Quadratic Bezier curves

p(t) = (1 - t)d + te

= (1 - t)[(1 - t)a + tb] + t[(1 - t)b + tc]

= (1 - t)²a + 2t(1 - t)b + t²c

from Akenine-Möller & Haines


Bézier Curves

  • Curve approximation through recursive application of linear interpolations
    • Linear: 2 control points, 2 linear Bernstein polynomials
    • Quadratic: 3 control points, 3 quadratic Bernstein polynomials
    • N control points ⇒ curve of degree N - 1
  • Notes
    • Only endpoints are interpolated (i.e., on the curve)
    • Curve is tangent to linear segments at endpoints
    • Every control point affects every point on curve
      • Makes modeling harder

Cubic Bernstein polynomials for 4 control points

from Akenine-Möller & Haines
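A sketch of the recursive linear interpolation (de Casteljau's algorithm) for control points of any dimension, given as tuples:

def bezier_point(controls, t):
    # Repeatedly lerp adjacent control points until one point remains;
    # N control points give a degree N-1 curve, and only the endpoints
    # (t = 0, t = 1) lie exactly on it
    pts = list(controls)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]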


Interpolating Splines

  • Idea: Use key frames to indicate a series of positions that must be “hit”
  • For example:
    • Camera location
    • Path for character to follow
    • Animation of walking, gesturing, or facial expressions
      • Morphing
  • Use splines for smooth interpolation
    • Must not be approximating!


Catmull-Rom spline

  • Different from Bezier curves in that we can have arbitrary number of control points, but only 4 of them at a time influence each section of curve
    • And it’s interpolating (goes through points) instead of approximating (goes “near” points)
  • Four points define curve between 2nd and 3rd

from Hearn & Baker


Catmull-Rom spline

  • Want cubic polynomial curve defined parametrically over interval t ∈ [0, 1] with following constraints:
    • Starts at P(0) = Pi , ends at P(1) = Pi+1
    • Starting slope P′(0) = (Pi+1 − Pi−1)/2 , ending slope P′(1) = (Pi+2 − Pi)/2

  • Also require that it be a cubic polynomial:

    P(t) = a t³ + b t² + c t + d
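Solving those constraints for the coefficients gives the usual closed form; a sketch for the segment between p1 and p2 given four consecutive control points, applied per coordinate:

def catmull_rom(p0, p1, p2, p3, t):
    # Satisfies P(0) = p1, P(1) = p2,
    # P'(0) = (p2 - p0)/2, P'(1) = (p3 - p1)/2
    return 0.5 * (2 * p1 +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)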


Curve Subdivision

  • Goal: Algorithmically obtain smooth curves starting from small number of line segments
  • One approach: Corner-cutting subdivision
    • Repeatedly chop off corners of polygon
    • Each line segment is replaced by two shorter segments
    • Limit curve is shape that would be reached after an infinite series of such subdivisions

from Shirley


Bézier curves and B-splines via subdivision

  • The midpoint corner-cutting algorithm is a subdivision definition of quadratic Bézier curves
  • Chaikin’s subdivision scheme defines quadratic B-splines
    • For each edge from pi to pi+1, make a new edge joining two new points, one ¼ and one ¾ of the way along it (i.e., ¾pi + ¼pi+1 and ¼pi + ¾pi+1)

Chaikin’s scheme

from Akenine-Möller & Haines
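A sketch of one round of Chaikin's scheme on an open polyline of tuple points; repeating it converges to the quadratic B-spline:

def chaikin_step(points):
    # Replace each edge (p, q) with two points 1/4 and 3/4 of the way along
    out = []
    for p, q in zip(points, points[1:]):
        out.append(tuple(0.75 * a + 0.25 * b for a, b in zip(p, q)))
        out.append(tuple(0.25 * a + 0.75 * b for a, b in zip(p, q)))
    return out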


Surface Subdivision

  • Analogous to curve subdivision:
    1. Refine mesh: Choose new vertices to make smaller polygons, update connectivity
    2. Smooth mesh: Move vertices to fit underlying object

from Akenine-Möller & Haines


Loop subdivision

  • Smooths triangle mesh
  • Subdivision replaces 1 triangle with 4

  • Approximating scheme
    • Original vertices not guaranteed to be in subdivided mesh

from Akenine-Möller & Haines


Subdivision: Example (Catmull-Clark)

from http://graphics.stanford.edu/courses/cs468-10-fall/LectureSlides/10_Subdivision.pdf


Other surface subdivision schemes