1 of 33

Texturing

Instructor: Christopher Rasmussen (cer@cis.udel.edu)

Course web page:

http://goo.gl/tJ4Kn4

April 6, 2017 ❖ Lecture 15

2 of 33

Outline

  • HW #3 will go out next Tuesday, April 11, and be due April 27
  • Environment maps
  • Shadow maps, light maps
  • Perspective-correct texture coordinate interpolation
  • Magnification/minification

3 of 33

Projecting in non-standard directions

  • The texture projector function doesn’t have to project a ray from the object center through position (x, y, z); it can use any attribute of that position. For example:
    • Ray comes from another location
    • Ray is surface normal n at (x, y, z)
    • Ray is reflection-from-eye vector r at (x, y, z)
    • Etc.

courtesy of R. Wolfe
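As a concrete sketch (in C++, with hypothetical names like Vec3 and sphericalST), any of these per-point directions can feed the same spherical projector:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

const float kPi = 3.14159265f;

// Map a unit direction d onto spherical texture coordinates in [0, 1]^2.
void sphericalST(const Vec3& d, float& s, float& t) {
    s = 0.5f + std::atan2(d.z, d.x) / (2.0f * kPi);  // longitude
    t = 0.5f - std::asin(d.y) / kPi;                 // latitude
}

// Candidate projector rays for a surface point p with unit normal n,
// viewed along the unit eye-to-point vector e:
//   fromCenter = normalize(p - center)   // classic "ray from object center"
//   n                                    // ray is the surface normal
//   r = e - 2 * dot(e, n) * n            // reflection-from-eye vector
```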

4 of 33

Projecting in non-standard directions

  • This can lead to interesting or informative effects

courtesy of R. Wolfe

Different ray directions for a spherical projector

5 of 33

Environment/Reflection Mapping

  • Problem: To render a pixel on a mirrored surface correctly, we need to follow the reflection of the eye vector back to its first intersection with another surface and get that surface’s color
  • Doing this exactly requires ray tracing, which is expensive
  • Idea: Approximate it with texture mapping

from Angel

6 of 33

Environment mapping: Details

  • Key idea: Render a 360° view of the environment from the center of the object, using a sphere or box as an intermediate surface
  • The intersection of the eye reflection vector with the intermediate surface provides the texture coordinates for reflection/environment mapping

courtesy of R. Wolfe
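For concreteness, here is a minimal C++ sketch of the lookup for a cube intermediate surface; the face numbering and per-face sign conventions are simplified assumptions (real APIs such as OpenGL define their own):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Reflect the view vector v about the unit normal n: r = 2(n·v)n − v
// (v points from the surface toward the eye).
Vec3 reflect(const Vec3& v, const Vec3& n) {
    float d = 2.0f * (n.x * v.x + n.y * v.y + n.z * v.z);
    return { d * n.x - v.x, d * n.y - v.y, d * n.z - v.z };
}

// Pick the cube face along the dominant axis of r, then project the other
// two components onto that face to get (s, t) in [0, 1].
void cubeMapST(const Vec3& r, int& face, float& s, float& t) {
    float ax = std::fabs(r.x), ay = std::fabs(r.y), az = std::fabs(r.z);
    float ma, u, v;   // dominant magnitude; raw face coordinates
    if (ax >= ay && ax >= az) { ma = ax; face = r.x > 0 ? 0 : 1; u = r.z; v = r.y; }
    else if (ay >= az)        { ma = ay; face = r.y > 0 ? 2 : 3; u = r.x; v = r.z; }
    else                      { ma = az; face = r.z > 0 ? 4 : 5; u = r.x; v = r.y; }
    s = 0.5f * (u / ma + 1.0f);   // map [-1, 1] to [0, 1]
    t = 0.5f * (v / ma + 1.0f);
}
```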

7 of 33

Making environment textures: Cube

  • Cube map is straightforward to make: Render/photograph six rotated views of the environment
    • 4 side views at compass points
    • 1 straight-up view, 1 straight-down view

8 of 33

Making environment textures: Sphere

  • Or construct it from several photographs of a mirrored sphere

courtesy of P. Debevec

9 of 33

Environment mapping: Example

courtesy of G. Miller

~1982

10 of 33

Environment mapping: Example

From “Terminator II” (1991)

11 of 33

Environment mapping example: Same scene, different lighting

courtesy of P. Debevec

12 of 33

Environment mapping: Issues

  • Only physically correct under two assumptions: the object is convex, and the environment is infinitely far away
    • Object concavities mean self-reflections, which won’t show up
    • Other objects won’t be reflected
    • Parallel reflection vectors access the same environment texel, which is only a good approximation when environment objects are very far from the object


from Angel

13 of 33

Environment Bump Mapping

  • Idea: The bump map perturbs the eye reflection vector before the environment lookup

from Akenine-Möller & Haines
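A minimal sketch of this perturbation, assuming tangent-space height derivatives du, dv sampled from the bump map (all names are illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 a) {
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return { a.x / len, a.y / len, a.z / len };
}

// r: unperturbed reflection vector; tu, tv: surface tangent and bitangent;
// du, dv: bump-map height derivatives at this point; strength: bump scale.
Vec3 perturbReflection(Vec3 r, Vec3 tu, Vec3 tv, float du, float dv, float strength) {
    return normalize({ r.x + strength * (du * tu.x + dv * tv.x),
                       r.y + strength * (du * tu.y + dv * tv.y),
                       r.z + strength * (du * tu.z + dv * tv.z) });
}
```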

14 of 33

Shadow Maps

  • Idea: If we render the scene from the point of view of the light source, all visible surfaces are lit and hidden surfaces are in shadow
    • “Camera” parameters here = spotlight characteristics

View from light

View from camera

15 of 33

Shadow Maps

  • When rasterizing the scene from the eye’s view, transform each pixel to get its 3-D position with respect to the light
    • Project the pixel to (i, j, depth) with respect to the light
    • Compare depth to the value in the shadow buffer (aka the light’s z-buffer) at (i, j) to see whether the pixel is visible to the light, i.e., not shadowed
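A minimal C++ sketch of this per-pixel test follows; the matrix layout, shadowMap array, resolution, and bias value are all assumptions for illustration:

```cpp
const int SHADOW_RES = 1024;
float shadowMap[SHADOW_RES][SHADOW_RES];  // light's z-buffer, filled in pass 1 (not shown)

// Multiply a row-major 4x4 matrix by the point (x, y, z, 1).
void xform(const float m[16], const float p[3], float out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = m[4*r] * p[0] + m[4*r+1] * p[1] + m[4*r+2] * p[2] + m[4*r+3];
}

// worldPos is the eye-view pixel's reconstructed 3-D position; lightVP maps
// world space to the light's clip space. Returns true if the point is lit.
bool litByLight(const float lightVP[16], const float worldPos[3]) {
    float clip[4];
    xform(lightVP, worldPos, clip);
    // Perspective divide, then remap from [-1, 1] to the [0, 1] map range.
    float x = 0.5f * (clip[0] / clip[3] + 1.0f);
    float y = 0.5f * (clip[1] / clip[3] + 1.0f);
    float depth = 0.5f * (clip[2] / clip[3] + 1.0f);
    if (x < 0 || x >= 1 || y < 0 || y >= 1) return true;  // outside the spotlight frustum
    int i = (int)(x * SHADOW_RES), j = (int)(y * SHADOW_RES);
    const float bias = 0.002f;  // small offset that suppresses self-shadow "acne"
    return depth <= shadowMap[j][i] + bias;
}
```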


16 of 33

Shadow Maps

  • Shadow edges show aliasing that depends on shadow-map resolution and scene geometry
  • Shadow edges are “hard” by default

from GPU Gems

17 of 33

Shadow Maps

  • Solutions to both problems typically involve multiple offset shadow buffer lookups

18 of 33

Texture mapping applications: Lightmaps

courtesy of K. Miller

(color texture) + (lightmap) = (combined result)

Idea: Precompute expensive static lighting effects (such as ambient occlusion or diffuse reflectance) and “bake” them into a color texture. The scene then looks more realistic as the camera moves, without the expense of recomputing the effects
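At shading time the combination is just a per-channel multiply; a tiny sketch (names hypothetical):

```cpp
struct RGB { float r, g, b; };

// base: sample from the color texture; light: sample from the lightmap.
// Modulating the two applies the precomputed lighting to the surface color.
RGB applyLightmap(RGB base, RGB light) {
    return { base.r * light.r, base.g * light.g, base.b * light.b };
}
```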

19 of 33

Texture mapping application: Lightmaps

Tenebrae Quake screenshot

20 of 33

Lightmap example: Diffuse lighting only

21 of 33

Lightmap example: Light configuration

22 of 33

Lightmap example: Diffuse + lightmap

23 of 33

Texture Rasterization

  • Okay…we’ve got texture coordinates for the polygon vertices. What are (s, t) for the pixels inside the polygon?
  • Use Gouraud-style linear interpolation of texture coordinates, right?
    • First along polygon edges between vertices
    • Then along scanlines between left and right sides

from Hill

24 of 33

Linear texture coordinate interpolation

  • But this doesn’t work!

courtesy of H. Pfister

25 of 33

Why not?

  • Equally-spaced pixels do not project to equally-spaced texels under perspective projection
    • No problem with 2-D affine transforms (rotation, scaling, shear, etc.)
    • But different depths change things due to foreshortening

from Hill

courtesy of H. Pfister

26 of 33

Perspective-Correct Texture Coordinate Interpolation

  • Compute at each vertex after the perspective transformation:
    • “Numerators” s/w, t/w
    • “Denominator” 1/w
  • Linearly interpolate s/w, t/w, and 1/w across the triangle
  • At each pixel, divide the interpolated texture coordinates (s/w, t/w) by the interpolated 1/w (i.e., numerator over denominator) to recover (s, t)
  • GPU takes care of this for us :)
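To make the recipe concrete, here is a minimal C++ sketch of perspective-correct interpolation along one scanline; the ProjVertex fields and function names are illustrative, not a particular API:

```cpp
// Per-vertex quantities computed once after the perspective transform.
struct ProjVertex {
    float sOverW, tOverW, invW;  // s/w, t/w, 1/w
};

// f in [0, 1] is the pixel's linear position between a and b in screen space.
void interpST(const ProjVertex& a, const ProjVertex& b, float f,
              float& s, float& t) {
    // Linearly interpolate the quantities that ARE linear in screen space...
    float sw = a.sOverW + f * (b.sOverW - a.sOverW);
    float tw = a.tOverW + f * (b.tOverW - a.tOverW);
    float iw = a.invW   + f * (b.invW   - a.invW);
    // ...then divide by the interpolated 1/w to recover (s, t).
    s = sw / iw;
    t = tw / iw;
}
```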

27 of 33

Perspective-Correct Texture Coordinate Interpolation

28 of 33

Perspective-Correct Interpolation: Notes

  • But we didn’t do this for the colors in Gouraud shading…
    • Actually, we should have, but the error is not as obvious
  • Alternative: Use regular linear interpolation with small enough polygons that the effect is not noticeable
  • Linear interpolation for Z-buffering is correct

29 of 33

Magnification and minification

  • Magnification: A single screen pixel maps to an area of one texel or less
  • Minification: A single screen pixel maps to an area greater than one texel
    • If the area covered is much greater than 4 texels, even bilinear filtering isn’t so great
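For reference, a minimal C++ sketch of the bilinear filtering mentioned above, for a single-channel texture (the fetch and border clamping are illustrative choices):

```cpp
#include <cmath>
#include <algorithm>

// Fetch one texel from a single-channel texture, clamping at the borders.
float texel(const float* tex, int w, int h, int x, int y) {
    x = std::min(std::max(x, 0), w - 1);
    y = std::min(std::max(y, 0), h - 1);
    return tex[y * w + x];
}

// Bilinear filtering: weighted average of the 2x2 texel neighborhood
// around continuous coordinates (s, t) in [0, 1]^2.
float bilinear(const float* tex, int w, int h, float s, float t) {
    float x = s * w - 0.5f, y = t * h - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;   // fractional position inside the cell
    return (1 - fx) * (1 - fy) * texel(tex, w, h, x0,     y0)
         +      fx  * (1 - fy) * texel(tex, w, h, x0 + 1, y0)
         + (1 - fx) *      fy  * texel(tex, w, h, x0,     y0 + 1)
         +      fx  *      fy  * texel(tex, w, h, x0 + 1, y0 + 1);
}
```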

Magnification

Minification

from Angel

courtesy of H. Pfister

30 of 33

Filtering for minification

  • The aliasing problem is much like line rasterization
    • A pixel maps to a quadrilateral (its pre-image) in texel space

image courtesy of D. Cohen-Or

31 of 33

32 of 33

Supersampling: Using more than bilinear interpolation’s (BLI’s) 4 texels

  • Rasterize at higher resolution
    • Regular grid pattern around each “normal” image pixel
    • Irregular jittered sampling pattern reduces artifacts
  • Combine multiple samples into one pixel via weighted average
    • “Box” filter: All samples associated with a pixel have equal weight (i.e., directly take their average)
    • Gaussian/cone filter: Sample weights fall off with distance from the associated pixel (linearly for a cone, exponentially for a Gaussian)
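A small C++ sketch of jittered supersampling with a box filter, assuming a caller-supplied shade function (names illustrative):

```cpp
#include <cstdlib>

// Average n x n jittered samples inside pixel (px, py). shade returns the
// color of the scene at a continuous screen position.
float jitteredBoxFilter(int px, int py, int n, float (*shade)(float, float)) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            // One random sample inside each cell of a regular n x n grid.
            float jx = (i + std::rand() / (float)RAND_MAX) / n;
            float jy = (j + std::rand() / (float)RAND_MAX) / n;
            sum += shade(px + jx, py + jy);
        }
    return sum / (n * n);  // box filter: all samples weighted equally
}
```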

from Hill

Regular supersampling with 2x frequency

Jittered supersampling

33 of 33

Mipmaps

  • Filtering for minification is expensive, and different areas must be averaged depending on the amount of minification
  • Idea:
    • Prefilter the entire texture image at multiple resolutions
    • For each screen pixel, pick the texture in the mipmap at the level of detail (LOD) that minimizes minification (i.e., pre-image area closest to one texel)
    • Do nearest or linear filtering in the appropriate LOD texture image
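A minimal sketch of LOD selection from the pixel's texel-space footprint; the derivative-based footprint estimate is a common simplification, not the only choice:

```cpp
#include <cmath>
#include <algorithm>

// dsdx, dtdx, dsdy, dtdy: screen-space derivatives of the texel-space
// coordinates (s * width, t * height) at this pixel.
float mipLevel(float dsdx, float dtdx, float dsdy, float dtdy, int numLevels) {
    // Longest edge of the pixel's pre-image in texel space.
    float rho = std::sqrt(std::max(dsdx * dsdx + dtdx * dtdx,
                                   dsdy * dsdy + dtdy * dtdy));
    // log2 of the footprint gives the LOD; clamp to the available levels.
    float lod = std::log2(std::max(rho, 1.0f));
    return std::min(lod, (float)(numLevels - 1));
}
```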

from Woo, et al.