
Unit-5

Three Dimensional Graphics

PREPARED BY: SUSHANT BHATTARAI

WWW.NOTEDINSIGHTS.COM


Representation schemes for solid objects are divided into the following categories:

Boundary Representation (B-rep)

  • Boundary Surface Representation (BSR) is a computer graphics technique used for modeling and rendering three-dimensional objects in computer-aided design (CAD), computer graphics, and various other fields.
  • BSR focuses on describing the surfaces of 3D objects using mathematical functions or data structures, allowing for the accurate representation and visualization of complex shapes.
  • Represents a 3D object as a set of surfaces that separate the object's interior from the environment.


Space Partitioning Representation

  • Spatial partition representation, also known as space partitioning, is a fundamental concept in computer science and computer graphics.

  • It refers to the technique of dividing a two- or three-dimensional space into smaller, non-overlapping regions or cells to efficiently organize and manage data or objects within that space.
  • This approach is widely used in various applications, including computer graphics, physics simulations, collision detection, and more.
  • Describes the interior properties by partitioning the spatial region containing an object into a set of small, non-overlapping, contiguous solids (usually cubes).


BSR vs SPR

Basic Concept
  • BSR: Encodes a geometric object's boundary elements (vertices, edges, faces) explicitly.
  • SPR: Divides the space into smaller regions or cells to organize objects based on spatial proximity.

Information Storage
  • BSR: Stores detailed geometric information (vertex coordinates, edges, faces).
  • SPR: Stores information about how space is divided, using data structures like grids, quadtrees, or octrees.

Use Cases
  • BSR: Suitable for applications requiring precise rendering and manipulation (e.g., 3D modeling, CAD).
  • SPR: Ideal for spatial indexing and efficient spatial queries (e.g., collision detection, ray tracing).

Storage Efficiency
  • BSR: Typically requires more storage space due to explicit geometric data storage.
  • SPR: More storage-efficient, as it focuses on organizing objects by spatial location.

Performance Trade-offs
  • BSR: Provides accurate geometric information but may be less efficient for spatial queries.
  • SPR: Optimizes spatial queries but provides less detail about individual object geometry.


Polygon Surface

It is the most common representation for 3D graphics objects.

In this representation, a 3D object is represented by a set of surfaces that enclose the object interior.

A set of polygons is stored for the object description.

Polygon surface representation refers to the method of describing the surface of 3D objects using polygons.

These polygons are typically planar and consist of straight edges connected at vertices.

Each polygon represents a flat facet of the object's surface and is used to approximate its shape.

The most common type of polygon used for this purpose is the triangle, but other polygons such as quadrilaterals may also be used.

A polygon surface is specified with a set of vertex coordinates and associated attribute parameters.


Polygon Table

The polygon table is a data structure used to organize and store information about individual polygons in a 3D object.

Each entry in the table corresponds to a single polygon, such as a triangle or quadrilateral.

Polygon data tables can be organized into two groups:

  1. Geometric Tables
  2. Attribute Tables
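As a concrete illustration, the sketch below builds hypothetical geometric and attribute tables for a single triangle in Python. The table layout and field names are illustrative assumptions, not a prescribed format:

```python
# Hypothetical geometric and attribute tables for a single triangle
# with vertices V1, V2, V3; the tables reference one another by key.
vertex_table = {
    "V1": (0.0, 0.0, 0.0),
    "V2": (1.0, 0.0, 0.0),
    "V3": (0.0, 1.0, 0.0),
}
edge_table = {
    "E1": ("V1", "V2"),
    "E2": ("V2", "V3"),
    "E3": ("V3", "V1"),
}
surface_table = {"S1": ["E1", "E2", "E3"]}          # geometric table
attribute_table = {"S1": {"color": (255, 0, 0)}}    # attribute table

# Walk the tables: list the coordinates of surface S1's vertices.
for edge in surface_table["S1"]:
    start, _ = edge_table[edge]
    print(start, vertex_table[start])
```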


Projection Concept

Projection is 'formed' on the view plane (planar geometric projection).

Rays (projectors) projected from the center of projection pass through each point of the model and intersect the projection plane.

Since everything is synthetic, the projection plane can be in front of the models, inside the models, or behind the models


Types of Projection

Two main types of projection

  1. Parallel projection
  2. Perspective projection


Taxonomy of Projection

(Figure: planar geometric projections branch into parallel projections (orthographic and oblique) and perspective projections (one-, two-, and three-point).)

Parallel Projection

Center of projection infinitely far from view plane

Projectors will be parallel to each other

Need to define the direction of projection (vector)

Better for drafting / CAD applications

Two sub-types

  1. Orthographic: direction of projection is normal to the view plane
  2. Oblique: direction of projection is not normal to the view plane


Parallel Projection

In a parallel projection, the transformation from 3D to 2D can be represented using a projection matrix.

The projection matrix for a parallel projection is usually a diagonal matrix, where the diagonal elements control the scaling factors along each axis.

| 1 0 0 0 |
| 0 1 0 0 |
| 0 0 0 0 |
| 0 0 0 1 |

The first two rows pass the x and y coordinates of a 3D point through unchanged, while the third and fourth rows handle the z (depth) coordinate and the homogeneous coordinate.

The third row being all zeros maps every z-coordinate to 0, flattening the scene onto the view plane; since the projected x and y are independent of depth, the projectors remain parallel.
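To make the matrix concrete, here is a minimal Python/NumPy sketch (assuming NumPy is available) that applies this orthographic projection matrix to a homogeneous 3D point:

```python
import numpy as np

# Orthographic (parallel) projection onto the z = 0 view plane:
# x and y pass through unchanged, z is flattened to 0.
M_ortho = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],   # the row of zeros discards depth
    [0, 0, 0, 1],
], dtype=float)

p = np.array([3.0, 2.0, 5.0, 1.0])  # homogeneous point (x, y, z, 1)
print(M_ortho @ p)                  # [3. 2. 0. 1.] -> (3, 2) on the view plane
```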


Orthographic Projection

If the direction of projection is perpendicular to the projection plane, then it is an orthographic projection.

It is often used to produce the front, side, and top views of an object.

Engineering and architectural drawings commonly employ orthographic projection.


Perspective Projection

Center of projection finitely far from view plane

Projectors will not be parallel to each other

Need to define the location of the center of projection (point)

Classified into 1, 2, or 3-point perspective

More visually realistic

Has perspective foreshortening (objects farther away appear smaller)


Perspective Projection

Perspective projection mimics how the human eye perceives depth in the real world.

It creates the illusion of depth by converging lines that are parallel in 3D space towards a single point called the vanishing point.

This projection is widely used in computer graphics and visual arts to create realistic images.


Perspective Projection

The matrix representation of a perspective projection involves transforming points based on their distance from the viewer. The perspective projection matrix includes elements that account for foreshortening along with depth.

| 1 0 0  0 |
| 0 1 0  0 |
| 0 0 𝑃 −1 |
| 0 0 𝐷  1 |

P controls the perspective foreshortening effect, while D is a scaling factor applied to the depth coordinate.

This combination results in the projection of points towards the vanishing point, creating a sense of depth in the projected image.
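The sketch below is a hedged illustration rather than the slide's exact 𝑃/𝐷 formulation: it uses one common convention, with the center of projection at (0, 0, d) and points projected onto the z = 0 plane, and performs the perspective divide explicitly:

```python
import numpy as np

d = 4.0  # assumed distance of the center of projection from the view plane

# Perspective projection onto z = 0 with the center of projection at (0, 0, d).
M_persp = np.array([
    [1, 0, 0,      0],
    [0, 1, 0,      0],
    [0, 0, 0,      0],
    [0, 0, -1 / d, 1],   # w' = 1 - z/d produces the foreshortening divide
], dtype=float)

p = np.array([2.0, 1.0, 2.0, 1.0])  # homogeneous point (x, y, z, 1)
q = M_persp @ p                     # [2. 1. 0. 0.5]
q /= q[3]                           # perspective divide by w
print(q[:2])                        # [4. 2.] -> x and y scaled by d/(d - z)
```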


Image Space Techniques

Image space techniques, also known as screen space techniques, are a category of computer graphics and image processing methods that operate directly on the pixel data of an image or a rendered frame.

These techniques are typically applied after the rendering of a 3D scene is completed and the 2D image is available for post-processing.

Image space techniques are commonly used to enhance or modify

the final rendered image.

Image space techniques operate directly on the pixel values of an image or a rendered frame.


Image Space Techniques

These techniques are typically applied after the scene has been rendered, and the image is available for post-processing.

Common operations in image space include image filtering, color correction, image compositing, and post-processing effects (e.g., blurring, sharpening, and tone mapping).

Image space techniques are often computationally efficient because they work on the final image rather than individual 3D objects or geometry.

They are well-suited for global effects that need information from the entire frame, such as depth-of-field, motion blur, and image-based lighting.


Object Space Techniques

Object space techniques, also known as 3D space techniques, refer to a category of computer graphics and computer vision methods that operate on the 3D representation of objects and scenes.

These techniques involve manipulating and processing objects within their three-dimensional space before or during the rendering process.

Object space techniques operate on the 3D objects or scene geometry before or during the rendering process.

These techniques are typically used for tasks that involve changing the geometry or appearance of objects in the scene, such as transformations, deformation, or material properties.


Object Space Techniques

Object space techniques can be computationally expensive because they require modifying the 3D data for each object in the scene.

Common operations in object space include transformations (translation, rotation, scaling), skeletal animation, and procedural generation of geometry.

They are well-suited for tasks that involve modifying or manipulating objects individually or collectively within the 3D scene.


Back face Detection

A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" test.

A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0. When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position).

In this approach, no faces on the back of the object are displayed.

It involves two steps:


Back face Detection

Basic Procedure:

1. Determine whether the point is inside, outside, or on the surface: a point (x, y, z) is "inside" the polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0.

2. Determine the back face: when an inside point is along the line of sight to the surface, the polygon must be a back face.

For the back-face test, let N be a normal vector to a polygon surface, with Cartesian components (A, B, C), and let Vview be a vector in the viewing direction from the eye (or "camera") position.

Then this polygon surface is a back face if Vview · N > 0.
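A minimal sketch of this test in Python (vectors are assumed to be given as (x, y, z) tuples):

```python
def is_back_face(normal, view_dir):
    """The polygon is a back face when V_view . N > 0, i.e. its normal
    points away from the viewer. Vectors are (x, y, z) tuples."""
    return sum(n * v for n, v in zip(normal, view_dir)) > 0

# Viewing along the negative z-axis: V_view = (0, 0, -1).
print(is_back_face((0, 0, -1), (0, 0, -1)))  # True  -> cull this face
print(is_back_face((0, 0, 1), (0, 0, -1)))   # False -> potentially visible
```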


Back face Detection

Pros and Cons

It is simple and easy to implement.

No pre-sorting of polygon surfaces is needed.

Cannot address partial visibility.

Not suitable for complex scenes.


Depth Buffer Method (Z-Buffer Method)

A commonly used image-space approach for detecting visible surfaces.

Also called the Z-buffer method, since depth is usually measured along the z-axis.

This approach compares surface depths at each pixel position on the projection plane. The depth values for a pixel are compared and the closest surface determines the color to be displayed in the frame buffer.

Each surface of a scene is processed separately, one point at a time across the surface, and each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane.


Depth Buffer Method (Z-Buffer Method)

It is applied very efficiently to polygon surfaces, and surfaces can be processed in any order. So that closer polygons override farther ones, two buffers, named the frame buffer and the depth buffer, are used.

The depth buffer is used to store a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).

The frame buffer is used to store the intensity or color value at each position (x, y).

The z-coordinates are usually normalized to the range [0, 1]. A z-value of 0 indicates the back clipping plane, and a z-value of 1 indicates the front clipping plane.


Depth Buffer Method (Z-Buffer Method)

This method requires two buffers:

A z-buffer or depth buffer: stores the depth value, i.e., the z-value, for each pixel position (x, y). Remember, the smallest depth corresponds to the maximum z-coordinate, given the viewing convention used here (the front clipping plane is at z = 1).

A frame buffer (refresh buffer): stores the surface-intensity or color values for each pixel position (x, y).

As surfaces are processed, the image buffer is used to store the color values of each pixel position and the z-buffer is used to store the depth values for each (x, y) position.


Depth Buffer Method (Z-Buffer Method)

Algorithm:

Step 1 − For all buffer positions (x, y), initialize the buffer values:

depth_buffer(x, y) = −∞ (or the minimum z-coordinate)

frame_buffer(x, y) = background color

Step 2 − Process each polygon surface P, one at a time:

For each projected (x, y) pixel position of polygon P, calculate depth(x, y) = z.

If z > depth_buffer(x, y), then compute the surface color and set

depth_buffer(x, y) = z

frame_buffer(x, y) = surface_color(x, y), where surface_color(x, y) is the intensity value for the surface at pixel position (x, y).
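A minimal Python sketch of this algorithm, assuming polygons have already been rasterized into per-pixel (x, y, z, color) fragments and that a larger z means a closer surface, as in the slides:

```python
# Z-buffer sketch for a W x H frame; larger z = closer, matching the
# slide's initialization of the depth buffer to minus infinity.
W, H = 4, 3
depth_buffer = [[float("-inf")] * W for _ in range(H)]
frame_buffer = [["background"] * W for _ in range(H)]

def process_fragment(x, y, z, color):
    """Keep the fragment only if it is closer than what is stored."""
    if z > depth_buffer[y][x]:
        depth_buffer[y][x] = z
        frame_buffer[y][x] = color

process_fragment(1, 1, 0.3, "red")
process_fragment(1, 1, 0.7, "blue")   # closer: overwrites red
process_fragment(1, 1, 0.5, "green")  # farther than blue: discarded
print(frame_buffer[1][1])             # blue
```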


Depth Buffer Method (Z-Buffer Method)

After all polygon surfaces have been processed, the depth buffer contains depth values for the visible surfaces and each pixel of the image buffer represents the color of a visible surface at that pixel.

Depth Calculation: For a surface with plane equation Ax + By + Cz + D = 0, the depth at pixel position (x, y) is z = (−Ax − By − D)/C. Moving one pixel along the scan line, the depth follows incrementally as z′ = z − A/C.


Depth Buffer Method (Z-Buffer Method)

Pros and Cons:

It is simple and easy to implement; no special hardware is needed.

No pre-sorting of polygons is needed.

No object-object comparison is required. It can be applied to non-polygonal objects.

Good for animation.

Additional memory buffer (z-buffer) is required.

It only deals with opaque surfaces.


A-Buffer Method (Accumulation Buffer Method)

The A-buffer method is an extension of the depth-buffer method.

The A-buffer method represents an area-averaged, accumulation-buffer method.

A drawback of the depth-buffer method is that it can only find one visible surface at each pixel position.

In other words, it deals only with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent or translucent surfaces are to be displayed.


A-Buffer Method (Accumulation Buffer Method)

The A-buffer method expands the depth buffer so that each position in the buffer can reference a linked list of surfaces.

It maintains a data structure of the background surfaces that lie behind the foreground transparent surface. This special data structure is called the accumulation buffer.

Each position in the A-buffer has two fields: a depth field and an intensity field (or surface data field).

Depth field: Stores a positive or negative depth value.

Intensity Field: stores surface-intensity information or a pointer value. It includes:

RGB intensity components

Opacity Parameter

Depth

Percent of area coverage

Surface identifier


A-Buffer Method (Accumulation Buffer Method)

If the depth value is non-negative (depth ≥ 0), there is a single surface overlapping the corresponding pixel area; the surface is opaque, and the intensity field stores the surface intensity at that position.

If the depth value is negative, multiple surfaces contribute to the pixel intensity, and the intensity field stores a pointer to a linked list of surface data.
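A sketch of what two A-buffer cells might look like in Python; the field names follow the list above, but the exact layout is an illustrative assumption:

```python
# Two illustrative A-buffer cells. A non-negative depth marks a single
# opaque surface; a negative depth marks a linked list of contributions.
opaque_cell = {
    "depth": 0.42,
    "data": {"surface_id": "S3", "rgb": (200, 30, 30), "opacity": 1.0},
}
transparent_cell = {
    "depth": -1.0,        # negative: multiple surfaces contribute
    "data": [             # front-to-back list of surface records
        {"surface_id": "S7", "rgb": (30, 30, 200), "opacity": 0.5,
         "depth": 0.6, "coverage": 1.0},
        {"surface_id": "S3", "rgb": (200, 30, 30), "opacity": 1.0,
         "depth": 0.3, "coverage": 1.0},
    ],
}
```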


Depth Sorting Algorithm (Painter's Algorithm)

Another widely used object space method.

This method for solving the hidden-surface problem is often referred to as the Painter's algorithm or list priority algorithm.

This algorithm is also called "Painter's Algorithm" as it simulates how a painter typically produces his/her painting by starting with the background and then progressively adding new (nearer) objects to the canvas.

This method requires sorting operations on surfaces in both image and object space.


Depth Sorting Algorithm (Painter's Algorithm)

Basic Procedure:

Sort all polygon surfaces in order of decreasing depth.

The intensity values for the farthest surface are then entered into the refresh buffer first. That is, the farthest polygon is displayed first, then the second farthest, and so on, until finally the closest polygon surface is drawn.

After all surfaces have been processed, the refresh buffer stores the intensity values for all visible surfaces.

When there are only a few objects in the scene, this method can be very fast. However, as the number of objects increases, the sorting process can become very complex and time-consuming.
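A minimal Python sketch of the sorting step, using one representative depth per polygon (a simplification; real implementations must also resolve overlap ambiguities):

```python
# Painter's algorithm sketch: rasterize surfaces back to front so that
# nearer polygons overwrite farther ones in the refresh buffer.
polygons = [
    {"name": "near quad", "depth": 2.0},
    {"name": "far quad", "depth": 9.0},
    {"name": "mid quad", "depth": 5.0},
]

# Decreasing depth = farthest surface first.
for poly in sorted(polygons, key=lambda p: p["depth"], reverse=True):
    print("rasterize", poly["name"])  # stand-in for scan-converting it
# Output order: far quad, mid quad, near quad
```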


Scan Line Method

An image-space method for identifying visible surfaces that deals with multiple polygon surfaces.

The scan-line method solves the hidden-surface problem one scan line at a time, processing scan lines from the top to the bottom of the display.

This method calculates depth values only for the overlapping surfaces crossed by the scan line.

As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible.

Across each scan line, depth calculations are made for each overlapping surface to determine which surface is nearest to the view plane.


Scan Line Method

When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.

The method requires an edge table, a polygon table, an active edge list, and a flag for each surface.

Edge table: contains the coordinate endpoints for each line and pointers into the polygon table to identify the surfaces bounded by each line.

Polygon table: contains the coefficients of the plane equation for each surface, pointers into the edge table, and intensity information for the surfaces.

Active edge list: contains the edges that cross the current scan line, sorted in order of increasing x.

Flag: defined for each surface and set 'ON' or 'OFF' to indicate whether the scan line is inside or outside the surface. At the leftmost boundary of a surface the flag is turned 'ON', and at the rightmost boundary it is turned 'OFF'.


Scan Line Method

Algorithm:

Step 1: Establish and initialize the data structures:

i) Edge table with line endpoints.

ii) Polygon table with surface information and pointers into the edge table.

iii) Initially empty active edge list, i.e., AEL = { }.

iv) A flag for each surface, initially set to 'off'.


Scan Line Method

Algorithm

Step 2: Repeat for each scan line:

a) Update the active edge list AEL.

b) For each pixel (x, y) on the scan line:

(1) Update the flag for each surface.

(2) When the flag is ON for only one surface, enter the intensity of that surface into the refresh buffer.

(3) When two or more flags are ON, calculate the depths and store the intensity of the surface nearest to the view plane.
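The following Python sketch illustrates steps (1)-(3) for a single scan line, with a hypothetical pre-sorted active edge list; flags are modeled as membership in a set:

```python
# One scan line: walk the active edge list left to right, toggling each
# surface's flag at its edge crossings; where several flags are ON, the
# surface nearest the view plane wins. Data here are hypothetical.
ael = [(2, "S1"), (5, "S2"), (7, "S1"), (9, "S2")]  # (x, surface), sorted by x
depth = {"S1": 0.8, "S2": 0.4}                      # larger = closer

active, spans, prev_x = set(), [], None
for x, surf in ael:
    if active:
        nearest = max(active, key=lambda s: depth[s])  # depth test only on overlap
        spans.append((prev_x, x, nearest))
    active ^= {surf}   # toggle this surface's flag ON/OFF
    prev_x = x
print(spans)  # [(2, 5, 'S1'), (5, 7, 'S1'), (7, 9, 'S2')]
```

Between x = 5 and x = 7 both surfaces overlap, so a depth comparison is made and the nearer surface S1 is kept; elsewhere only one flag is ON and no depth calculation is needed.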


Scan Line Method

Pros and Cons:

Any number of polygon surfaces can be processed with this method, and depth calculations are performed only where polygons overlap.

It deals with transparent, translucent, and opaque surfaces.

It can be applied to non-polygonal objects.

It is complex to implement.


Introduction

Realistic displays of a scene are obtained by applying perspective projections and natural lighting effects to the visible surfaces of objects.

For realistic display of a 3D scene, it is necessary to calculate the appropriate color or intensity for each point of that scene.

The realism of a raster-scan image of a 3D scene depends upon the successful simulation of shading effects.

Once visible surfaces have been identified by a hidden-surface algorithm, a shading model is used to compute the intensities and colors to display for those surfaces.


Introduction

Illumination model or a lighting model or shading model

It is a model for the interaction of light with a surface.

It is the model for calculating light intensity at a single surface point.

Sometimes also referred to as a shading model.

An illumination model is used to calculate the intensity of the light that is reflected at a given point on a surface.


Surface Rendering

Rendering/shading is the process of creating high-quality, realistic images or pictures.

In 3D graphic design, rendering is the process of adding shading (how the color and brightness of a surface vary with lighting) and color in order to create life-like images on a screen.

Surface rendering is the process of calculating intensity values for all pixel positions for the various surfaces in a scene.

A rendering method uses intensity calculations from the illumination model to determine the light intensity at all pixels in the image.

It is the process of applying an illumination model to obtain the pixel intensities for all the projected surface positions in a scene.

Surface rendering can be performed by applying the illumination model to every visible surface point, or the rendering can be accomplished by interpolating intensities across the surface.


Light Source

Objects that radiate energy are called light sources, for example the sun, lamps, bulbs, fluorescent tubes, etc.

Point Light Source

The rays emitted from a point light source diverge radially from the source.

It is a good approximation for sources that are small compared to the size of the objects in the scene, and it radiates equal intensity in all directions. An example is the sun.


Light Source

Distributed Light Source

A nearby large source, such as a long fluorescent light, is modeled as a distributed light source. By contrast, all of the rays from a directional light source have the same direction and no point of origin; all of its light rays are parallel.


Light Source

Interaction of Light Source with Surfaces

When light is incident on an opaque surface, part of it is reflected and part of it is absorbed.

For transparent surfaces, some of the incident light will be reflected and some will be transmitted through the material.


Illumination Model

Illumination models are used to calculate the light intensity that we should see at a given point on the surface of an object.

Intensity calculations are based on the optical properties of surfaces, such as reflectivity, whether the surface is opaque/transparent/translucent or shiny/dull, the background lighting conditions, and the light-source specifications.

Some of the illumination models are listed below:

Ambient Light

Diffuse Reflection

Specular Reflection or Phong Model


Illumination Model

Ambient Light

A surface that is not exposed directly to a light source will still be visible if nearby objects are illuminated. This light is called ambient light.

This is a simple way to model the combination of light reflections from various surfaces that produce a uniform illumination, called the ambient light or background light.

The amount of ambient light incident on an object is a constant for all surfaces and over all directions.

If a surface is exposed only to ambient light, then the intensity of the diffuse reflection at any point on the surface is

𝐼 = 𝐾𝑎 𝐼𝑎, where 𝐼𝑎 is the intensity of the ambient light and 𝐾𝑎 is the ambient reflection coefficient.


Illumination Model

Diffuse Reflection

It is the reflection due to even scattering of light by uniform, rough surfaces.

A rough surface tends to scatter the reflected light in all directions. This scattered light is called diffuse reflection, so the surface appears equally bright from all viewing directions.

Diffuse reflections are constant over each surface in a scene, independent of the viewing direction: surfaces appear equally bright from all viewing angles since they reflect light with equal intensity in all directions.

The color of an object is determined by the color of the diffuse reflection of the incident light. If an object surface is red, there is a diffuse reflection of the red component of the light, and all other components are absorbed by the surface.


Illumination Model

The intensity of diffuse reflection due to ambient light is

𝐼𝑎𝑑𝑖𝑓𝑓 = 𝐾𝑎 𝐼𝑎 … (1)

If the surface is exposed to a point source, the intensity of the diffuse reflection can be calculated using Lambert's cosine law.

Lambert's Cosine Law: the radiant energy from any small surface dA in any direction relative to the surface normal is proportional to cos 𝜃. That is, brightness depends only on the angle θ between the light direction L and the surface normal N:

Light intensity ∝ cos 𝜃


Illumination Model

If 𝐼1 is the intensity of the point light source and 𝐾𝑑 is the diffuse reflection coefficient, then the diffuse reflection for a single point source can be written as

𝐼𝑝𝑑𝑖𝑓𝑓 = 𝐾𝑑 𝐼1 cos θ = 𝐾𝑑 𝐼1 (Ν ∙ L) … (2)

Total diffuse reflection 𝐼𝑑𝑖𝑓𝑓 = diffuse reflection due to ambient light + diffuse reflection due to the point source:

𝐼𝑑𝑖𝑓𝑓 = 𝐼𝑎𝑑𝑖𝑓𝑓 + 𝐼𝑝𝑑𝑖𝑓𝑓 = 𝐾𝑎 𝐼𝑎 + 𝐾𝑑 𝐼1 (Ν ∙ L)
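A minimal Python sketch of this combined diffuse model; N and L are assumed to be unit vectors, and N·L is clamped at zero for surfaces facing away from the light:

```python
import math

def diffuse_intensity(Ka, Ia, Kd, Il, N, L):
    """Ambient term plus Lambertian point-source term. N and L are
    assumed to be unit vectors; N.L is clamped at 0 so surfaces facing
    away from the light receive only ambient light."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(N, L)))
    return Ka * Ia + Kd * Il * n_dot_l

# Light direction 60 degrees off the surface normal: cos 60 = 0.5,
# so I = 0.2 * 1.0 + 0.6 * 1.0 * 0.5 = 0.5 (up to rounding).
L = (math.sin(math.radians(60)), 0.0, math.cos(math.radians(60)))
print(diffuse_intensity(Ka=0.2, Ia=1.0, Kd=0.6, Il=1.0, N=(0, 0, 1), L=L))
```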


Illumination Model

Specular reflection and Phong model

On shiny surfaces, we see a highlight or bright spot from certain viewing directions, called specular reflection. It is also referred to as regular reflection.

A light source creates highlights, or bright spots, called specular reflection. This effect is more pronounced on shiny surfaces than on dull ones. Example: a person's forehead.


Illumination Model

Light is strongly reflected in one particular direction. This is due to total, or nearly total, reflection of the light.

For an ideal reflector, such as a mirror, the angle of incidence equals the angle of reflection, and the specular reflection is seen only when the viewing direction coincides with the reflection direction (φ = 0).

The empirical formula for calculating the specular reflection is given by the Phong model.

Phong Model: an empirical model based on physical observation rather than physics. It sets the intensity of the specular reflection proportional to cos^𝑛𝑠 φ, where the specular-reflection parameter 𝑛𝑠 is determined by the type of surface: for a very shiny surface 𝑛𝑠 is set to 100, and for a dull surface 𝑛𝑠 is set to 1. The intensity of specular reflection is modeled using a specular-reflection coefficient W(θ):

𝐼𝑠𝑝𝑒𝑐 = W(θ) 𝐼1 cos^𝑛𝑠 φ

where 𝐼1 is the intensity of the light source and θ is the angle of incidence. W(θ) generally varies over the range of θ from 0° to 90°; at θ = 90°, W(θ) = 1.
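A small Python sketch of the Phong specular term, treating W(θ) as a constant for simplicity (an assumption; in general it varies with the angle of incidence):

```python
import math

def phong_specular(W, Il, phi_deg, ns):
    """Phong specular term I_spec = W(theta) * Il * cos(phi) ** ns, with
    W(theta) treated as a constant here for simplicity."""
    return W * Il * max(0.0, math.cos(math.radians(phi_deg))) ** ns

# Viewer 10 degrees away from the specular-reflection direction:
for ns in (1, 100):  # dull vs. very shiny surface
    print(ns, round(phong_specular(0.5, 1.0, 10, ns), 4))
# ns = 100 decays much faster with phi, giving a small, sharp highlight.
```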


Intensity Attenuation

As radiant energy from a point light source travels through space, its amplitude is attenuated by the factor 1/d², where d is the distance the light has traveled.

This means that a surface close to the light source (small d) receives a higher incident intensity from the source than a distant surface (large d).

Therefore, to produce realistic lighting effects, an illumination model should take intensity attenuation into account; otherwise, all surfaces are likely to be illuminated with the same intensity.

For a point light source, the theoretical attenuation factor is 1/d².

In practice, a general inverse quadratic attenuation function gives more flexible control:

f(d) = 1 / (a₀ + a₁d + a₂d²)
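A minimal Python sketch; the coefficient values a₀, a₁, a₂ are illustrative assumptions:

```python
def attenuation(d, a0=1.0, a1=0.1, a2=0.01):
    """Inverse quadratic attenuation f(d) = 1 / (a0 + a1*d + a2*d^2)."""
    return 1.0 / (a0 + a1 * d + a2 * d * d)

for d in (1.0, 10.0, 100.0):
    print(d, round(attenuation(d), 4))  # 0.9009, 0.3333, 0.009
```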


Transparency

A transparent surface, in general, produces both reflected and transmitted light.

The relative contribution of the transmitted light depends on the degree of transparency of the surface and whether any light sources or illuminated surfaces are behind the transparent surface.

When light is incident upon a transparent surface, part of it is reflected and part is refracted

According to Snell's law, the angle of refraction 𝜃𝑟 is related to the angle of incidence 𝜃𝑖 by the indices of refraction of the two materials: 𝜂𝑖 sin 𝜃𝑖 = 𝜂𝑟 sin 𝜃𝑟.


Shadows

Shadows help to create realism. Without them, a cup on a table, for example, may look as if it is floating in the air above the table.

By applying hidden-surface methods with the position of a light source taken as the viewing position, we can find which surface sections cannot be "seen" from the light source; these are the shadow areas.

We usually display shadow areas with ambient-light intensity only.


Surface Rendering

Surface-rendering procedures are also called surface-shading methods.

It is the process of applying illumination model to obtain the pixel intensities for all the projected surface positions in a scene.

Each surface can be rendered using:

Rendering the entire surface with a single intensity: very fast, but does not produce realistic surfaces.

Interpolation Scheme: intensity values are interpolated to render the surfaces. Widely used approach, produces more realistic object surfaces than first method. Still suffers from Mach Band Effect.

By applying the illumination model to every visible surface point: best option, widely used approach, produces best quality surfaces, but requires large computations, so comparatively slow.


Surface Rendering

Three widely used approaches:

Constant Intensity shading Method (Flat Shading)

Gouraud Shading method (Intensity Interpolation)

Phong Shading Method (Normal Vector Interpolation).


Constant Intensity shading Method (Flat Shading)

The fastest and simplest model for shading/rendering a polygon is constant-intensity shading, also called faceted shading or flat shading.

In this approach, the illumination model is applied only once for each polygon to determine a single intensity value.

The entire polygon is then displayed with that single intensity value.

It does not produce realistic displays.

It provides an accurate rendering for an object if all of the following assumptions are valid:

The polygon surface is one face of a polyhedron and is not a section of a curved surface.

The light source is sufficiently far away that 𝑁 ∙ 𝐿 is constant across the polygon face.

The viewing position is sufficiently far from the surface that 𝑉 ∙ 𝑅 is constant over the surface.


Constant Intensity shading Method (Flat Shading)

Algorithm:

Divide the object surface into polygon meshes.

Determine the unit surface normal vector for each polygon.

Calculate the intensity value at one point of each surface (usually the center).

Apply this intensity value to all the points of that surface.
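A minimal Python sketch of flat shading using the ambient-plus-diffuse model from the illumination slides; the single intensity computed for a face is reused for every pixel of that face:

```python
def flat_shade(face_normal, light_dir, Ka=0.2, Ia=1.0, Kd=0.6, Il=1.0):
    """Evaluate the illumination model once per face (e.g., at its
    center); the result is reused for every pixel of the face."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(face_normal, light_dir)))
    return Ka * Ia + Kd * Il * n_dot_l

face_intensity = flat_shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
print(face_intensity)  # 0.8 -> every pixel of this face gets this value
```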


Gouraud Shading method (Intensity Interpolation)

It is an intensity-interpolating (color-interpolating) shading method introduced by Henri Gouraud.

The polygon surface is displayed by linearly interpolating intensity values across the surface.

The idea is to calculate intensity values at the polygon vertices and then linearly interpolate these intensities across the polygon surfaces of the object.


Gouraud Shading method (Intensity Interpolation)

Algorithm

Determine the average unit normal vector at each polygon vertex.

At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex. Therefore, the average unit normal vector at vertex V is given by

𝑁𝑉 = (∑𝑘 𝑁𝑘) / |∑𝑘 𝑁𝑘|, where the sum runs over all n polygons sharing vertex V.


Gouraud Shading method (Intensity Interpolation)

Apply an illumination model to each vertex to calculate the vertex intensity.

Linearly interpolate the vertex intensities over the surface of the polygon.

The interpolation of intensities can be calculated as follows:


Gouraud Shading method (Intensity Interpolation)

For a triangle with vertices 1, 2, and 3, the intensities I1, I2, and I3 are obtained by averaging the normals of the surfaces sharing each vertex and applying an illumination model.

For each scan line, the intensity at the intersection of the line with a polygon edge is linearly interpolated from the intensities at the edge endpoints.

So the intensity Ia at intersection point A is obtained by linearly interpolating the intensities I1 and I2:

Ia = I1 (ya − y2)/(y1 − y2) + I2 (y1 − ya)/(y1 − y2)
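A minimal Python sketch of this edge interpolation:

```python
def edge_intensity(I1, I2, y1, y2, ya):
    """Gouraud edge interpolation at scan-line height ya on the edge
    joining vertices 1 and 2 (y1 != y2 assumed)."""
    t = (ya - y2) / (y1 - y2)
    return I1 * t + I2 * (1 - t)

# Vertex intensities 0.9 (at y = 10) and 0.3 (at y = 2); the scan line
# at y = 6 lies halfway along the edge, so Ia = 0.6.
print(edge_intensity(0.9, 0.3, y1=10, y2=2, ya=6))
```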


Gouraud Shading method (Intensity Interpolation)

Advantages:

It provides more realistic graphics than constant intensity shading.

It eliminates intensity discontinuities that occur in flat shading.

Disadvantages:

It can cause bright or dark intensity streaks, known as Mach bands, to appear on the surface.

Involves additional computation.


Phong Shading

The best-known shading algorithm, developed by Phong Bui Tuong, is called Phong shading or normal-vector interpolation shading.

A more accurate method for rendering a polygon surface.

The idea here is to interpolate normal vectors instead of intensities and then apply the illumination model at each surface point.

Basic Idea:

Phong shading calculates the average unit normal vector at each of the polygon vertices and then interpolates the vertex normal over the surface of the polygon.
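A minimal Python sketch of the core idea, interpolating two vertex normals along an edge and renormalizing; the illumination model is then evaluated with the interpolated normal at each point:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def interpolated_normal(N1, N2, t):
    """Linearly interpolate two vertex normals and renormalize; the
    illumination model is then evaluated with this per-point normal."""
    return normalize(tuple(a * (1 - t) + b * t for a, b in zip(N1, N2)))

print(interpolated_normal((0, 0, 1), (1, 0, 0), 0.5))
# (0.707..., 0.0, 0.707...) -> unit normal halfway along the edge
```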


Phong Shading

Advantages:

It provides more realistic highlights on a surface.

It reduces the Mach-Band effect.

It gives more accurate results.

Disadvantages:

It requires more calculations.


Chapter-5 Introduction to Virtual Reality

Prepared By: Sushant Bhattarai


Introduction

Artificial environment created with software.

Presented to the user in such a way that the user suspends disbelief and accepts it as a real environment.

Experienced via two of the five senses on a computer (i.e., sight and sound).

VR is the use of computers to create a simulated environment.


Introduction

VR can be generally divided into two parts:

  • Simulation of real environment for training and education
  • Development of imaginary environments for entertainment


Components of VR

  • Dimensionality
  • Motion or animation
  • Interaction
  • Viewpoint
  • Immersion or embodiment through enhanced multi-sensory experiences


Advantages of VR

  • Imaginable
  • Great social leveler: finds common ground across differences in age, culture, etc.
  • Easier communication
  • Effective training
  • Creating interest
  • Improves educational value


Disadvantages of VR

  • Lacks flexibility
  • Ineffective human connections
  • Getting addicted


Applications of VR

Military, Education, Healthcare, Entertainment, Fashion, Engineering, Sports, Films, etc.


Types of VR system

  • Non-immersive
  • Semi-immersive
  • Fully-immersive


Non-immersive

Often forgotten as an actual type of VR, it is very common in our everyday lives.

The average video game is technically considered a non-immersive virtual reality experience.

Think about it, you’re sitting in a physical space, interacting with a virtual one.


Semi-Immersive

Provides users with a partially virtual environment to interact with.

This type of VR is mainly used for educational and training purposes.

The experience is made possible by graphical computing and large projector systems.

Example: in a flight simulator, the instruments in front of the pilot are real while the windows are screens displaying virtual content.


Fully-Immersive

Chances are when you think of VR, you’re picturing a fully-immersive experience

Complete with head-mounted displays, headphones, gloves, and maybe a treadmill or some kind of suspension apparatus

This type of VR is commonly used for gaming and other entertainment purposes in VR arcades or even in your home (empty, non-fragile room advised.)


Fully-immersive

Give users the most realistic experience possible, complete with sight and sound


Components of VR system

PC (Personal Computer)

Head-mounted display

Input devices:

  • Joysticks
  • Tracking Balls
  • Treadmills, etc.


3D user interaction

3D interaction is a form of human-machine interaction in which users are able to move and perform interactions in 3D space.

3D space used for interaction can be the real physical space, a virtual space representation simulated in the computer, or a combination of both


3D Position Trackers

3D position trackers are based primarily on motion-tracking technologies; they obtain the necessary information from the user through the analysis of their movements or gestures.

Trackers detect or monitor head, hand or body movements and send that information to the computer.

3D trackers may be mechanical, magnetic, ultrasonic, optical, or hybrid inertial.

Examples of trackers include motion trackers, eye trackers, and data gloves


3D Position Trackers

The ideal system for this type of interaction is one based on position tracking with six degrees of freedom (6-DOF).

Examples: Microsoft Kinect, Leap Motion, etc.


3D Navigation

Navigation is the interaction most used by the user in large 3D environments.

Navigation tasks can be divided into two components: travel and wayfinding.

Travel involves moving from the current location to the desired point

Wayfinding refers to finding and setting routes to get to a travel goal within the virtual environment.


3D Manipulation

Manipulation techniques for 3D environments must accomplish at least one of three basic tasks:

  • object selection
  • object positioning
  • object rotation.


3D Manipulation

Manipulation tasks involve selecting and moving an object.

Sometimes, the rotation of the object is involved as well.

Direct-hand manipulation is the most natural technique because manipulating physical objects with the hand is intuitive for humans.

A virtual hand that can select and re-locate virtual objects will work as well.


3D Manipulation

3D widgets can be used to put controls on objects

These are usually called 3D Gizmos or Manipulators