Computer Graphics
LAB 4: Introduction to the GPU
LAB 5: 3D Shader Lighting
Objective of this lab
Index
Introduction to OpenGL
Recap: Graphics Processing Unit (GPU)
The GPU is responsible for drawing to the framebuffer and submitting it to the screen.
Recap: What is OpenGL?
OpenGL (Open Graphics Library) is a standard specification that defines a multi-language, multi-platform API for writing applications that render 2D and 3D graphics.
From this specification (or API), manufacturers implement every function of the API, translating the calls into their internal GPU commands, and publish this implementation with their drivers.
There are several versions of OpenGL (OpenGL 1.1, OpenGL 3, OpenGL 4.4) as well as variants such as OpenGL ES.
Graphics APIs
We are using OpenGL in this lab for simplicity, but it’s important to know that there are more graphics APIs besides it, for example: Direct3D, Vulkan and Metal.
Benefits of using OpenGL
What is NOT OpenGL?
OpenGL is just used as a way to rasterize triangles very fast.
Common OpenGL actions
Architecture of a CPU+GPU app
When you code an application, it must communicate with the GPU via OpenGL functions.
These functions are translated by the driver into commands that are sent to the GPU, and the GPU executes them, changing the pixels in the framebuffer.
[Diagram: on the CPU side, the Application calls the OpenGL Library; the OpenGL Driver translates those calls for the GPU Hardware, which reads and writes the VRAM Memory.]
OpenGL primitives
When you want to rasterize, you must specify the primitive type; according to the vertex array, it will rasterize one shape or another.
They can be divided into three groups: points, lines and triangles.
Depending on the number of vertices provided, it will draw more or fewer primitives.
GL_TRIANGLES is used by default in your framework
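For instance, a minimal sketch of issuing a draw call (assuming a vertex buffer vbo was created, filled with 6 vertices and configured beforehand):

// With GL_TRIANGLES, every 3 consecutive vertices form one triangle,
// so these 6 vertices produce 2 triangles; the same data drawn with
// GL_LINES would produce 3 segments, and with GL_POINTS, 6 points.
glBindBuffer( GL_ARRAY_BUFFER, vbo );
glDrawArrays( GL_TRIANGLES, 0, 6 );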
Framebuffer
The GPU has its own framebuffer, stored in VRAM (the GPU’s RAM), so we do not have direct access to it from the CPU.
The amount of memory used by the framebuffer comes from this formula (the same as our previous framebuffer):
Screen Width × Screen Height × number of channels × bytes per channel = image size in bytes
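For example, a 1920×1080 framebuffer with 4 channels (RGBA) and 1 byte per channel takes 1920 × 1080 × 4 × 1 = 8,294,400 bytes (roughly 8 MB).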
Double Buffer
One common problem when rendering the scene is that you do not want the user to see the image until all objects have been rendered; if we rendered directly to the displayed framebuffer, the user would constantly see how the image gets built.
To solve this problem we use double buffering: we keep two buffers, the one being displayed (front) and the one where we render the image (back).
Once we finish rendering the image, we swap both buffers, so the user always sees the final image and never the process.
How to render in OpenGL (step by step)
The GPU will read all the vertices and, according to the primitive type, render the appropriate shapes on the screen.
Basic render function
void render()
{
    // Clear the framebuffer and the depth buffer
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    // Draw scene
    // ...

    // Swap between front and back buffer
    // This method name changes depending on the platform
    glSwapBuffers();
}
OpenGL States
Sometimes you want to disable some parts of the rendering pipeline.
OpenGL stores lots of flags to control which steps are performed during the rasterization.
To change those flags you can call glEnable and glDisable, among other OpenGL functions.
Once we change a state, it will be applied to all the draw calls sent after that.
void drawTriangles() {
    // Disable Z-buffer testing
    glDisable( GL_DEPTH_TEST );
    // This triangle won't take occlusions into account
    renderTriangle();

    // Enable Z-buffer testing
    glEnable( GL_DEPTH_TEST );
    // This triangle will take occlusions into account
    renderTriangle();
}
Z-Buffer and Occlusions
The GPU is in charge of processing occlusions using its own depth buffer (Z-buffer). The only thing we have to do is clear the depth buffer at the beginning of each frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
and enable it before rendering an object:
glEnable( GL_DEPTH_TEST );
GPU Rendering Pipeline
CPU vs GPU rasterization
CPU: optimized for serial tasks.
GPU: optimized for many parallel tasks.
Coding inside the GPU: Shaders
A shader is a piece of code meant to be executed inside the GPU, designed to override parts of the rendering pipeline used when rasterizing primitives.
Shaders have limitations (mostly due to memory constraints).
Shaders: Coding language
There are several languages for coding shaders, depending on the API used: GLSL (OpenGL), HLSL (Direct3D) and MSL (Metal).
There is not much difference between them in terms of syntax or performance, as they are all compiled to the assembly code of the GPU.
Shaders: Coding language
From now on, we will use GLSL to explain shaders as it is the one used by OpenGL:
Shader types
The mandatory shaders that every rendering pipeline must have are the Vertex Shader (executed per vertex) and the Fragment Shader (executed per pixel).
GLSL variable types
Some of the basic types and classes in GLSL: float, int, bool, vec2, vec3, vec4, mat3, mat4 and sampler2D.
GLSL variable types
Access to the components of a vector (swizzling). E.g. for a Vector4 (vec4) v: v.x, v.y, v.z, v.w (or equivalently v.r, v.g, v.b, v.a), and combinations such as v.xy or v.rgb.
GLSL variable types
vec4 a = v1 + v2; // component-wise addition
vec4 b = v1 * v2; // component-wise multiplication
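As a small sketch putting both ideas together (v1 and v2 are arbitrary example values):

vec4 v1 = vec4(1.0, 2.0, 3.0, 4.0);
vec4 v2 = vec4(0.5);            // all four components set to 0.5
vec3 color = v1.rgb;            // swizzle: take the first three components
vec2 flipped = v1.yx;           // swizzles can also reorder components
vec4 sum = v1 + v2;             // operators work component-wise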
GLSL functions
Here is a list of common GLSL functions that you can use: mix, clamp, min, max, dot, cross, normalize, length, pow, sin, cos and texture2D, among others.
GLSL reserved keywords
From the Vertex Shader: gl_Vertex, gl_Normal, gl_TexCoord (inputs*) and gl_Position (output).
From the Fragment Shader: gl_FragColor (output).
*: as it was stored in the vertices array
Global variables: attribute
Shaders have 3 types of global variables, depending on where the variable comes from: attribute, uniform and varying.
Global variables: Uniform and Varying
Is it possible to have a regular global variable? Yes, by declaring it without any of those qualifiers, but it will be available only in the current shader stage!
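For instance (a minimal sketch; the variable name is arbitrary):

// A plain global with no qualifier: visible only inside this shader stage
float brightness = 1.2;

void main()
{
    gl_FragColor = vec4( vec3(brightness), 1.0 );
}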
[Diagram: CPU–GPU data flow. On the CPU, the Application uploads the Mesh Data (V0,V1,V2,... N0,N1,N2,...) as attributes (gl_Vertex, gl_Normal, gl_TexCoord) and sets uniform variables, which are visible to both shader stages. On the GPU, the Vertex Shader (executed per vertex) writes gl_Position and varying variables; the varyings arrive interpolated at the Fragment Shader (executed per pixel), which writes gl_FragColor to the Framebuffer.]
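As a minimal sketch of how the three qualifiers work together (using the legacy GLSL built-ins from the diagram; the uniform name is an assumption):

// Vertex shader: executed per vertex
uniform mat4 u_viewprojection;  // uploaded from the CPU (assumed name)
varying vec3 v_normal;          // passed on, interpolated, to the fragment shader

void main()
{
    v_normal = gl_Normal;       // attribute coming from the mesh data
    gl_Position = u_viewprojection * gl_Vertex;
}

// Fragment shader: executed per pixel
varying vec3 v_normal;          // arrives interpolated across the triangle

void main()
{
    // Visualize the normal as a color (remapped from [-1,1] to [0,1])
    gl_FragColor = vec4( normalize(v_normal) * 0.5 + 0.5, 1.0 );
}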
Using Shaders
The most common case would be to replicate how light behaves in the real world by coding an algorithm to paint every pixel of a primitive in 3D.
[Image: the same mesh rendered with different shaders]
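As an illustration, a minimal diffuse (Lambert) fragment shader could look like this sketch (the uniform names are assumptions):

varying vec3 v_normal;      // interpolated normal from the vertex shader
uniform vec3 u_light_dir;   // normalized direction towards the light (assumed)
uniform vec3 u_color;       // base surface color (assumed)

void main()
{
    // Lambertian diffuse: brightness depends on the angle between
    // the surface normal and the light direction
    float ndotl = max( dot( normalize(v_normal), u_light_dir ), 0.0 );
    gl_FragColor = vec4( u_color * ndotl, 1.0 );
}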
Using shaders
First step: 2D shaders; e.g. create images or apply effects to already existing images
Using shaders: Draw 2D formulas
The result of the equation defines the color of the pixel:
R = f(x,y) G = f(x,y) B = f(x,y)
Examples: https://tamats.com/apps/mathnimatics/
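A minimal sketch of such a fragment shader (the u_resolution uniform, holding the viewport size, is an assumption):

uniform vec2 u_resolution;  // viewport size in pixels (assumed name)

void main()
{
    // Normalize the pixel coordinates to the [0,1] range
    vec2 uv = gl_FragCoord.xy / u_resolution;

    // Each channel is a function of the pixel position
    float r = uv.x;
    float g = uv.y;
    float b = 0.5 + 0.5 * sin( uv.x * 10.0 );
    gl_FragColor = vec4( r, g, b, 1.0 );
}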
Using shaders
To render meshes (from quads to other 3D meshes) we have to get some things clear first:
// Enable shader and upload needed variables from CPU
// ...
mesh->Render();
Using shaders
When rendering 3D meshes, we need some variables coming from the CPU: the model and the view-projection matrices.
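A sketch of the corresponding vertex shader transform (the uniform names are assumptions, matching the framework convention used below):

uniform mat4 u_model;           // object-to-world transform (assumed name)
uniform mat4 u_viewprojection;  // world-to-clip transform (assumed name)

void main()
{
    // Local space -> world space -> clip space
    vec4 world_pos = u_model * gl_Vertex;
    gl_Position = u_viewprojection * world_pos;
}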
Using shaders
The framework encapsulates the lower level OpenGL calls, so your code for creating a shader in your application will be VERY simplified:
Shader * shader = Shader::Get("shader.vs","shader.fs");
When rendering, there are some steps that must be followed once the shader is created:
shader->Enable();
shader->SetMatrix44(...);
mesh->Render();
shader->Disable();
Clearing buffers
Since OpenGL now controls the framebuffer, we must call some of its functions before starting to draw anything (already being called in the framework).
// Set the background color of the framebuffer (0..1 range!)
glClearColor(r, g, b, alpha);
// Clear the window and the depth buffer (occlusions)!
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor is called once, on window creation; glClear is called every frame in the main loop, before Application::Render.
Depth Occlusions
Since OpenGL now controls the framebuffer, we must call some of its functions before starting to draw anything.
// Enable depth testing for occlusions
glEnable( GL_DEPTH_TEST );
// The depth test will pass if the incoming Z is LESS than or EQUAL to the stored Z of the pixel
glDepthFunc(GL_LEQUAL);
Uploading data to the GPU
Again, the framework encapsulates the lower level OpenGL calls to upload data to the GPU (that will be used in your shader):
// Call variable specific method to upload data
shader->SetFloat("u_value", 1.0);
shader->SetMatrix44("u_viewprojection", viewprojection);
shader->SetTexture("u_texture", texture);
Once the data is at the right place, you can render the mesh (Mesh class has a method to render itself using the GPU, check the framework!).
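Putting the previous steps together, a per-frame render could look like this sketch (uniform names and the model/viewprojection/texture variables are assumptions; the framework methods are the ones shown above):

void Application::Render()
{
    // Clear the color and depth buffers and enable occlusion testing
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glEnable( GL_DEPTH_TEST );

    shader->Enable();

    // Upload the per-frame data used by the shader
    shader->SetMatrix44("u_model", model);
    shader->SetMatrix44("u_viewprojection", viewprojection);
    shader->SetTexture("u_texture", texture);

    // Draw the mesh with the currently enabled shader
    mesh->Render();

    shader->Disable();
}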
Full Screen Quad Shader
Rendering a full screen quad is very simple since we can avoid the projection of its vertices.
The 6 vertices composing both triangles are in the [-1..1] range, so we already have them in clip space → no projection needed in the vertex shader!
[Figure: full-screen quad in clip space, built from two triangles: (-1,1), (1,1), (1,-1) and (-1,-1), (-1,1), (1,-1).]
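In code, those six clip-space positions could be stored as a plain vertex array (a sketch; the framework's Mesh class may store them differently):

// Two triangles covering the whole screen in clip space (x, y pairs)
float quad_vertices[] = {
    -1.0f,  1.0f,   1.0f,  1.0f,   1.0f, -1.0f,   // first triangle
    -1.0f, -1.0f,  -1.0f,  1.0f,   1.0f, -1.0f,   // second triangle
};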
Full Screen Quad Shader
VS:
// Store the UVs to use later interpolated in the fragment shader
varying vec2 v_uv;

void main()
{
    // Set vertex uvs
    v_uv = ...;

    // Output clip-space
    gl_Position = ...;
}
FS:
varying vec2 v_uv;

void main()
{
    gl_FragColor = ...;
}
Reading textures from shaders
You can read pixels from an image passed to the shader (a texture) using the GLSL function texture2D:
vec4 color = texture2D( u_texture, uv );
Quad texture shader
FS:
// Receive the uvs interpolated from the vertex shader
varying vec2 v_uvs;

// Receive the texture as a sampler2D from our application
uniform sampler2D u_texture;

void main()
{
    // Fetch the sampler
    vec4 texture_color = texture2D( u_texture, v_uvs );

    // Assign the color to the pixel
    gl_FragColor = texture_color;
}
Applying effects
You can add visual effects before sending the final color to the framebuffer:
Chromatic aberration
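For instance, a minimal chromatic aberration sketch samples each color channel with a slightly shifted UV coordinate (the u_aberration uniform is an assumption):

varying vec2 v_uvs;
uniform sampler2D u_texture;
uniform float u_aberration;  // offset strength, e.g. 0.005 (assumed name)

void main()
{
    // Sample each channel with a small horizontal offset to split the colors
    float r = texture2D( u_texture, v_uvs + vec2(u_aberration, 0.0) ).r;
    float g = texture2D( u_texture, v_uvs ).g;
    float b = texture2D( u_texture, v_uvs - vec2(u_aberration, 0.0) ).b;
    gl_FragColor = vec4( r, g, b, 1.0 );
}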
Applying effects and transformations
FAQ
Useful resources