1 of 72

Ambient Occlusion

Javi Agenjo 2020

2 of 72

Introduction

Until now we haven't paid much attention to the ambient component of the light.

We just assumed it was constant and added it to the amount of light in our scene.

But this simplification does not work well in many situations.

Let's see how we can improve it.

3 of 72

Ambient light

Ambient light is all the light that doesn't come from a specific known position in our scene but from all directions, simulating how light bounces around multiple times.

It is used to fill the scene; otherwise some areas would remain pitch black.

The problem with ambient light is that, because it is constant, it doesn't convey much information about the scene: neither depth nor material information.

4 of 72

Interior

Using a constant ambient makes renders feel very flat, as areas outside direct light all receive the same amount of illumination.

And we will always have areas lit only by ambient light in our scenes.

5 of 72

Corners

In the real world, areas with only ambient light do not have the same amount of illumination at every pixel: pixels in the corners should receive less light (they are more occluded) than pixels in the middle of a wall.

These darker areas allow us to perceive the shape of objects better, even in situations without direct light.

6 of 72

Floor contact

When talking about shadows we insisted on their importance for grounding objects to the floor.

If an object is not in direct light, the missing shadow makes it feel like it is floating.

Even objects that are not in direct light have some sort of shadow around them.

7 of 72

Overcast exteriors

And in exteriors, when it is cloudy and the sunlight is scattered by the clouds, there is no way to solve this with regular Phong shading, as there is no single point of light.

And using only ambient light wouldn't look realistic.

8 of 72

Self Occlusions

Also check this picture: some areas are darker because they are self-occluded (creases, wrinkles, corners, etc.) or occluded by touching objects.

We cannot assume that the ambient light is the same in those areas as in the rest.

9 of 72

constant ambient

10 of 72

ambient occlusion

11 of 72

Ambient Occlusion

12 of 72

Ambient Occlusion

The reason why some points of our environments are darker is that the amount of ambient light they can receive is lower due to the morphology of their surroundings.

If a point is surrounded by other surfaces blocking its view, it is less likely that ambient light will reach that position.

We call the fraction of ambient light received by a point its Ambient Occlusion factor.

13 of 72

Ray-tracing Ambient Occlusion

Solving this problem with a raytracer is trivial.

We just cast rays in all directions (up to a maximum length) from the point where we want to evaluate it, and check whether they collide with other objects.

The more rays that collide, the less ambient light this pixel should receive.

The amount of ambient light is then computed as:

num. of rays not colliding / total num. of rays
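A minimal C++-style sketch of that ratio (randomHemisphereDirection and scene.raycast are hypothetical stand-ins for whatever sampling and intersection routines your raytracer provides):

//fraction of rays that escape without hitting anything = ambient occlusion factor
float computeAO(const Vector3& position, const Vector3& normal,
                const Scene& scene, int num_rays, float max_distance)
{
    int not_colliding = 0;
    for (int i = 0; i < num_rays; ++i)
    {
        //random direction over the hemisphere around the normal (hypothetical helper)
        Vector3 dir = randomHemisphereDirection(normal);
        //returns true if the ray hits something within max_distance (hypothetical helper)
        if (!scene.raycast(position, dir, max_distance))
            not_colliding++;
    }
    return (float)not_colliding / (float)num_rays;
}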

14 of 72

Ray distance

When checking occlusions by casting rays we can define a maximum distance. The further away rays are allowed to collide, the bigger the occluded areas will be.

The problem with large AO shadows is that they tend to make the scene look dusty, so it is important to control the AO ray length.

AO generated with long rays

AO generated with short rays

15 of 72

Number of Rays

But how many rays should we cast? The area around a point contains infinite directions.

We could settle on a fixed number, but the lower that number is, the more information our rays will miss.

And that could produce spatial discrepancies that end up showing as noise in our image.

But increasing the number of rays adds computational cost, which will affect our rendering performance.

16 of 72

AO + Direct Light

Ambient occlusion is important because there are usually big areas of our scene that direct light never reaches, and having a solution for ambient occlusion makes them look more realistic.

17 of 72

Ambient Occlusion

18 of 72

Ambient Occlusion

19 of 72

Casting rays from the GPU?

Casting rays (also known as raytracing) is a feature still not available on everyone's GPU (only the latest RTX cards support it), so if we want a solution that works on any GPU we must consider other approaches.

Keep in mind that if we are going to precompute and store the scene's Ambient Occlusion, we can still rely on a raytracing solution, as final users won't have to compute it.

20 of 72

Precomputed AO

For many years games relied on precomputing the Ambient Occlusion using ray-tracing solutions in 3D editors like Blender (mostly CPU-based).

The process can take several minutes and the result is stored in textures containing the AO factor. The bigger the resolution of the texture the more detail it will have, but AO tends to be low-frequency, so high-res textures are not necessary.

As always, it only works for static geometry, the texture resolution has an impact, and it won't take into account whether dynamic objects are close to other surfaces.

21 of 72

Baked AO for scene objects

This is still used by many games to solve self-ambient occlusion (how objects occlude themselves). It requires a special set of UVs per mesh that stores independent texture regions per triangle.
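Applying such a baked map at render time is just a texture fetch; a minimal GLSL sketch, assuming the mesh carries a second UV set v_uv2 and the baked map is bound as u_ao_texture:

//modulate the ambient term by the baked AO stored in the dedicated UV set
float baked_ao = texture( u_ao_texture, v_uv2 ).x;
vec3 ambient = u_ambient_light * baked_ao;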

Still, this is not perfect, as the texture doesn't take into account occlusion from other objects in the scene. It focuses on occlusions that come from the object's own mesh, which are important, but occlusions from other objects matter too.

Check how the floor below the car in the right image is not occluded by the car, because it is a different object in the scene.

22 of 72

Negative Lights and AO Decals

Some video games fake ambient occlusion by creating negative lights (lights that subtract color instead of adding it) or by using decals.

This way they can darken specific areas (like the contact between a character and the floor) by carefully placing them in the scene.

This is used mostly for characters and dynamic objects, where other solutions would be too costly.

23 of 72

Realtime AO

It would be great if there were some sort of data structure on the GPU, accessible from the shader, that gave us information about the scene structure, so we could use it to compute occlusions per pixel, at least for the area inside our view...

But we do have that structure (sort of): it is called the depth buffer. It is an approximation of the scene geometry only for the visible area, but it comes for free as we already created it while rendering the scene.

And it is independent of the number of objects in our scene. Let's see how we can use it.

24 of 72

Current Pipeline

Collect Renderables → Generate Shadowmaps → Generate GBuffers → SSAO → Illumination Pass → Tonemapper & Gamma

25 of 72

Screen Space Ambient Occlusion

26 of 72

Screen Space Ambient Occlusion

The first real-time implementation of Ambient Occlusion was presented by Vladimir Kajalin in 2007, developed for the game Crysis (by Crytek).

His idea was to use the depth buffer as an approximation of the scene to compute occlusions, hence the name Screen-Space AO.

By comparing the Z of a pixel with the Z of the pixels surrounding it, we can estimate whether other objects are occluding this pixel.

And because the surrounding pixels can belong to other objects, it also solves inter-object occlusion.

27 of 72

SSAO

28 of 72

Insufficient Information

The main problem with using the Z buffer as a scene geometry approximation is that it doesn't contain all the information necessary to solve occlusions of nearby pixels.

Check the depth buffer in the top image: we see there is something close to the camera, but it could be a pole or a wall that extends far away. We can only see the surface facing the camera, so that's the only depth we have.

In some cases we may have enough information to solve some occlusions, but in others what we have is not enough, so the occlusions won't be correct.

29 of 72

Missing Occlusions

If we don't have all the depth information, we end up creating occlusions in areas that shouldn't be occluded.

To mitigate this we can limit the depth difference we accept when deciding that a pixel is occluding, but that could leave unoccluded pixels in areas that should be occluded.

30 of 72

Border Occlusions

Another problem with using the Z buffer is that at the edges of the frame we do not have surrounding information.

So the algorithm will produce noticeably wrong occlusions at the borders of the frame.

This can be solved by rendering the scene to a slightly bigger FBO with a larger camera FOV and cropping the edges at the end, so we have depth information outside of our view.

31 of 72

Sampling

For every pixel we must read the pixels around it to compute the occlusion, but that area could be too big to read every single pixel, and if we make the area too small we lose occlusions from far-away objects.

One way to mitigate this is to sample only some pixels, not all of them, using some random distribution.

But this could cause visible noise in the AO results.

32 of 72

Sampling Kernels

Another problem when fetching pixels around the current pixel to compute the AO is that if we use pure noise the image will have temporal noise, and if we use a screen-position-dependent noise the AO will change when the camera moves, creating ugly flickering.

So the best solution is to use a fixed-pattern kernel to fetch samples, but this can produce visible patterns in the resulting buffer.

33 of 72

Rotating Kernel

One way to avoid the patterns produced by the point distribution is to rotate the kernel points at every pixel by a different angle around the front axis.

Here is a way to construct a rotation matrix around an arbitrary axis.

//random value from uv
float rand(vec2 co)
{
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453123);
}

//create rotation matrix from arbitrary axis and angle
mat4 rotationMatrix( vec3 axis, float angle )
{
    axis = normalize(axis);
    float s = sin(angle);
    float c = cos(angle);
    float oc = 1.0 - c;
    return mat4(
        oc * axis.x * axis.x + c,          oc * axis.x * axis.y - axis.z * s, oc * axis.z * axis.x + axis.y * s, 0.0,
        oc * axis.x * axis.y + axis.z * s, oc * axis.y * axis.y + c,          oc * axis.y * axis.z - axis.x * s, 0.0,
        oc * axis.z * axis.x - axis.y * s, oc * axis.y * axis.z + axis.x * s, oc * axis.z * axis.z + c,          0.0,
        0.0,                               0.0,                               0.0,                               1.0 );
}

34 of 72

Blurring the SSAO

We can solve the noisiness and the patterns of the AO by computing it in a separate texture and applying a post-processing blur.

Blurs tend to be slow, depending on the kernel size, but for small kernels it is fine.

Blurring can also produce AO spills, as the AO will leak into areas where it shouldn't. This can be fixed with depth-aware blurs, but for now let's stick with a regular blur.

In this link there are some tricks related to blurring the SSAO results.
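As a minimal sketch, a simple (non depth-aware) box blur over the AO texture could look like this in GLSL; u_ao_texture and u_iRes are assumed uniform names:

//average a 5x5 neighborhood of the AO texture
float blurred = 0.0;
for(int x = -2; x <= 2; ++x)
    for(int y = -2; y <= 2; ++y)
        blurred += texture( u_ao_texture, v_uv + vec2(x, y) * u_iRes ).x;
blurred /= 25.0;
FragColor = vec4( vec3(blurred), 1.0 );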

35 of 72

Self-occlusions

The problem with checking only random points inside a sphere around the pixel is that usually half of them will fall inside the surface. This is like saying that a point can occlude itself.

This produces a grey veil over almost every pixel, considerably reducing the amount of ambient light.

It is important to fix this issue.

36 of 72

SSAO+

So the next step to improve SSAO is to take into account not only the depth of the surrounding pixels but also the orientation of the pixel where we are computing the AO.

To do this we need to know the normal of the pixel. This is why deferred rendering comes in handy, as it stores the per-pixel normal in a separate buffer.

The algorithm is the same as SSAO, but the hemisphere of samples is rotated according to the normal.

SSAO

SSAO+

37 of 72

Horizon Based AO

HBAO is a better implementation of SSAO+ developed by NVidia.

The problem with SSAO is that it is not very accurate in the way it computes the occlusion among nearby samples, as two aligned samples will add occlusion twice when should be just once.

HBAO introduces the concept of horizon, which is the line that separates the occluded light from the non occluded light for a given pixel.

It also uses raymarching to find a better approximation, so the cost is higher from previous SSAO.

38 of 72

HBAO algorithm

The algorithm splits the computation into vertical slices centered around the normal.

For each of them, it computes the horizon angle using ray-marching along the tangent. This ensures that the amount of received light is correct.

Finally, using the angle of every slice, it computes the average occlusion.

A better explanation can be found in these slides.

39 of 72

Downsampled buffer

SSAO and HBAO have a high computational cost.

If we are going to apply a blur anyway, it may make more sense to compute the SSAO into a lower-resolution texture.

This way we compute fewer pixels, which improves performance, and when applied to the final frame the difference won't be noticeable thanks to the low frequencies of the AO.
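As a minimal sketch, assuming the same FBO API used elsewhere in these slides, halving the resolution is just a matter of the create call:

//allocate the SSAO target at half the frame resolution
ssao_fbo = new FBO();
ssao_fbo->create( window_width / 2, window_height / 2 );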

40 of 72

Cache Efficiency

Using fixed kernels when sampling the depth has some benefits: because memory accesses are aligned, the cache will be more consistent and efficient, as adjacent pixels are likely to share the same data.

One problem with fixed kernels is that they produce visible patterns, since adjacent pixels share similar information.

We can fix this by changing the order of the samples between adjacent pixels (using a 90-degree rotation), but this reduces cache efficiency, as GPUs process pixels in parallel and each pixel will require different data.

We could instead apply a small jitter to every sample, so the samples stay mostly aligned but are not exactly the same.

41 of 72

HBAO+

NVidia improved their own HBAO by taking into account how the hardware handles caches.

It separates the pixels into four different half-resolution buffers and applies a different kernel to each of them.

Then, at the end, it combines the buffers into the final SSAO texture.

More info in this presentation.

42 of 72

Temporal Reprojection

Another important property of AO is that, because it is a world-space property, it must be coherent between frames (the same XYZ coordinate should have the same AO across frames).

This is great because it means we can reuse information from previous frames to compute the current pixel's AO and save some computation.

This can be done by projecting the current XYZ position into the previous frame (using the previous frame's viewprojection) and reprojecting it using the previous frame's depth buffer. If the resulting P(i-1) is far from the current P(i), ignore it; otherwise reuse the AO.

More info in these slides.
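A rough GLSL sketch of that reprojection (the uniform names u_prev_viewprojection, u_prev_depth_texture and u_prev_ao_texture are assumptions, not part of the slides):

//project the current world position into the previous frame
vec4 prev_proj = u_prev_viewprojection * vec4(worldpos, 1.0);
prev_proj.xyz /= prev_proj.w;                  //to normalized device coordinates
vec2 prev_uv = prev_proj.xy * 0.5 + vec2(0.5); //to [0..1] uv space
//compare the reprojected depth with the depth stored last frame
float prev_depth = texture( u_prev_depth_texture, prev_uv ).x;
if( abs(prev_depth - (prev_proj.z * 0.5 + 0.5)) < 0.001 )
    ao = mix( ao, texture( u_prev_ao_texture, prev_uv ).x, 0.9 ); //reuse previous AO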

43 of 72

Using Signed Distance Fields

Some engines use Signed Distance Fields to compute the ambient occlusion.

This works because AO is low-frequency and can be approximated with low-resolution SDFs.

Engines like Unreal can generate an SDF of the whole scene, while games like The Last of Us used an analytic approximation of character bodies to add AO to their characters and surroundings.

Interactive tutorial here.
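For intuition, here is a common way of approximating AO from an SDF by stepping along the normal and checking how much geometry intrudes into each step; sceneSDF() is a hypothetical function returning the signed distance of the scene at a point:

//sketch: AO from a signed distance field, sampling along the normal
float sdfAO(vec3 pos, vec3 normal)
{
    float occlusion = 0.0;
    float scale = 1.0;
    for(int i = 1; i <= 5; ++i)
    {
        float dist = 0.1 * float(i);                 //step away from the surface
        float sdf = sceneSDF( pos + normal * dist ); //distance to the nearest surface
        occlusion += (dist - sdf) * scale;           //nearby geometry reduces visibility
        scale *= 0.5;                                //farther samples weigh less
    }
    return clamp(1.0 - occlusion, 0.0, 1.0);
}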

44 of 72

Voxel Ambient Occlusion

To avoid all the problems related to the missing information in the depth buffer, Nvidia proposed using a voxelization of the scene instead of the depth buffer. This approach is called Voxel Ambient Occlusion (VXAO).

Raymarching a volume has a higher computational cost, but it can be reduced by using an optimized data structure.

This is much more costly, but it solves the problems of incorrect occlusions.

45 of 72

Raytraced AO

With the arrival of raytracing GPUs, the AO problem can be solved easily (at a much higher computational cost).

One benefit of AO is its temporal coherence, as the AO of a scene position usually does not change, so raytraced AO can take advantage of this by reprojecting samples from previous frames.

46 of 72

NNAO

And the final solution simply uses Neural Networks to compute the AO.

Not much to say here: train a network using raytraced AO as ground truth, with the Normal and Depth buffers as input, and use the resulting weights to compute the AO.

Here is a paper explaining it.

47 of 72

Bent Normals

Some engines add the option to use an extra map containing a special normalmap called a Bent Normal map.

The idea is to store, for every pixel of the surface, the direction of least occlusion (the direction from which most ambient light will be received).

Generating these maps requires special tools, while using them is quite simple.

More info here.
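Using such a map could look roughly like this in GLSL, assuming the bent normal is stored in world space and we have some irradiance environment to sample (texture and uniform names are assumptions):

//fetch the direction of least occlusion and remap it from [0..1] to [-1..1]
vec3 bent_normal = texture( u_bent_normal_texture, v_uv ).xyz * 2.0 - vec3(1.0);
bent_normal = normalize( bent_normal );
//gather the ambient light from the least occluded direction
vec3 ambient = texture( u_irradiance_environment, bent_normal ).rgb;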

48 of 72

Implementing SSAO+

49 of 72

General Idea

The algorithm works like this:

For each fragment of a screen-filling quad, calculate an occlusion factor based on the depth values surrounding the fragment:

• The occlusion factor is used to reduce or nullify the fragment's ambient lighting component.

• The occlusion factor is obtained by taking multiple depth samples in a sphere (a sample kernel surrounding the fragment position) and comparing them with the current fragment's depth value.

• The number of samples that have a higher depth value than the fragment's depth represents the occlusion factor.

50 of 72

Depth + Normal

To implement SSAO we need the depth of every pixel.

If we plan to code SSAO+ we will also need its normal.

We already have this information stored in the GBuffers of our deferred renderer.

Because we will be fetching the world position of every pixel, it might be worth creating a temporary buffer that stores the world position of every pixel (using float buffers) to avoid reconstructing it every time, but let's use the reprojection for now.

51 of 72

AO Buffers

Because we will need the final AO when computing the ambient illumination in our deferred illumination pass, we must store it in a temporary buffer, a texture.

This buffer can have a lower resolution than the screen, as it won't contain high-frequency information.

We do not need 3 channels; we could use a single-channel texture if we wanted, but for now let's stick with 3 to avoid problems with old GPUs that do not support rendering to a single channel.

//let's create an FBO to render the AO inside
ssao_fbo = new FBO();
ssao_fbo->create( window_width, window_height );

//maybe we want to create also one for the blur, in this case just create a texture
ssao_blur = new Texture();
ssao_blur->create( window_width, window_height );

52 of 72

Uniform Samples

We will need some random positions to fetch points around the worldpos of our pixel.

We will use this function, which generates random points inside a sphere with a better distribution than using pure random values.

There are better distribution functions, but for now let's use this one.

std::vector<Vector3> generateSpherePoints(int num, float radius, bool hemi)
{
    std::vector<Vector3> points;
    points.resize(num);
    for (int i = 0; i < num; i += 1)
    {
        Vector3& p = points[i];
        float u = random();
        float v = random();
        float theta = u * 2.0 * PI;
        float phi = acos(2.0 * v - 1.0);
        float r = cbrt( random() * 0.9 + 0.1 ) * radius;
        float sinTheta = sin(theta);
        float cosTheta = cos(theta);
        float sinPhi = sin(phi);
        float cosPhi = cos(phi);
        p.x = r * sinPhi * cosTheta;
        p.y = r * sinPhi * sinTheta;
        p.z = r * cosPhi;
        if (hemi && p.z < 0)
            p.z *= -1.0;
    }
    return points;
}

53 of 72

Concentrated Samples

Some people like to use an unevenly distributed cloud of points where more points are close to the center.

This can help improve the quality of the SSAO at the cost of more noise.

float scale = float(i) / num;
scale = lerp(0.1f, 1.0f, scale * scale);
p *= scale;

54 of 72

Texture filtering

Because we are going to read from random positions in the depth buffer (due to the random vector offsets) we need to enable bilinear filtering.

Otherwise some strange artifacts could appear at the edges.

Also disable the use of mipmaps, as they won't be necessary.

//bind the texture we want to change
gbuffers_fbo->depth_texture->bind();

//disable using mipmaps
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );

//enable bilinear filtering
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

55 of 72

Calling the shader

Now we must create the shader and pass all the required info.

Besides sending the depth buffer and the inverse viewprojection (necessary to reconstruct the worldpos of every pixel), we also pass the viewprojection (required to know the uv in the depth buffer of any given worldpos) and the random positions.

We can also pass a radius to have more control over the SSAO area.

//start rendering inside the ssao texture
ssao_fbo->bind();

//get the shader for SSAO (remember to create it using the atlas)
Shader* shader = Shader::Get("ssao");
shader->enable();

//send info to reconstruct the world position
shader->setUniform("u_inverse_viewprojection", invvp);
shader->setTexture("u_depth_texture", gbuffers_fbo->depth_texture, 0);

//we need the pixel size so we can center the samples
shader->setUniform("u_iRes", Vector2(1.0/(float)depth_texture->width, 1.0/(float)depth_texture->height));

//we will need the viewprojection to obtain the uv in the depthtexture of any random position of our world
shader->setUniform("u_viewprojection", camera->viewprojection_matrix);
shader->setUniform("u_radius", ssao_radius ); //our sphere radius

//send random points so we can fetch around
shader->setUniform3Array("u_points", (float*)&random_points[0], random_points.size());

//render fullscreen quad
quad->render(GL_TRIANGLES);

//stop rendering to the texture
ssao_fbo->unbind();

56 of 72

From Depth to Worldpos

Inside our SSAO shader, which will be executed for every pixel of the screen, we need to compute the AO factor.

First we need to know the world position of every pixel.

We can use the same approach we used for deferred rendering.

In this case, for the screen position, we use the uvs directly, but we could use gl_FragCoord, which gives us the position of the pixel in viewport coordinates.

//we want to center the sample in the center of the pixel
vec2 uv = v_uv + u_iRes * 0.5;

//read depth from depth buffer
float depth = texture( u_depth_texture, uv ).x;

//ignore pixels in the background
if(depth >= 1.0)
{
    FragColor = vec4(1.0);
    return;
}

//create screenpos with the right depth
vec4 screen_position = vec4(uv * 2.0 - vec2(1.0), depth * 2.0 - 1.0, 1.0);

//reproject
vec4 proj_worldpos = u_inverse_viewprojection * screen_position;
vec3 worldpos = proj_worldpos.xyz / proj_worldpos.w;

57 of 72

Compute AO factor

Now the idea is to check how many of the points around this one are inside or outside the geometry described by the depth buffer.

For each of them we read its depth in the Z buffer and compare it with the point's own depth: if the true depth is smaller than the point's depth, the point is inside, so it reduces the amount of AO.

Finally we can use this AO factor as the color of the pixel, as it is a factor between 0 and 1.

//lets use 64 samples
const int samples = 64;
int num = samples; //num samples that are outside (not occluded)

//for every sample around the point
for( int i = 0; i < samples; ++i )
{
    //compute its world position using the random point
    vec3 p = worldpos + u_points[i] * u_radius;

    //find the uv in the depth buffer of this point
    vec4 proj = u_viewprojection * vec4(p, 1.0);
    proj.xy /= proj.w; //convert to clipspace from homogeneous

    //apply a tiny bias to its z before converting to clip-space
    proj.z = (proj.z - 0.005) / proj.w;
    proj.xyz = proj.xyz * 0.5 + vec3(0.5); //to [0..1]

    //read p true depth
    float pdepth = texture( u_depth_texture, proj.xy ).x;

    //compare true depth with its depth
    if( pdepth < proj.z ) //if true depth smaller, it is inside
        num--; //remove this point from the list of visible ones
}

//finally, compute the AO factor as the ratio of visible points
float ao = float(num) / float(samples);

58 of 72

Results

If we display our SSAO buffer on screen, it should look like this.

It is not a great SSAO, but we are getting closer.

Clearly we are darkening our scene a lot with this approach; this is due to using spheres instead of hemispheres.

Also, some pixels are darkening areas that are too far away.

59 of 72

Oriented Hemispheres

Now, instead of using spheres, we will use hemispheres oriented according to the normal of the pixel.

For that purpose we must also send the normal buffer of our scene to the shader, and make sure that all the points have a positive Z.

Then we will rotate every point according to the normal before adding it to the world position.

60 of 72

Orienting Hemispheres

The tricky part is orienting every point of our hemisphere according to the normal of the pixel.

The simplest way is to check on which side of a plane oriented by the normal the random point falls.

If the point is on the positive side of the plane it is fine; otherwise we invert it so it falls on the positive side of the hemisphere.

We can do this because the points and the normal are all in world space.

vec3 random_point = u_points[i];

//check in which side of the normal it falls
if(dot(N, random_point) < 0.0)
    random_point *= -1.0;

61 of 72

Rotating points

The other option is to create a rotation matrix to convert from tangent space to world space, send a hemisphere of points, and then orient them using this rotation matrix.

We can build the rotation matrix from the normal and the position, but there is a problem: this function requires a tangent vector.

We used the uvs of the mesh when we did the same for the normalmap, but here we do not have the uvs of the mesh; we have the uvs of the quad, which don't align with the scene, but we can use them for now.

//from this github repo
mat3 cotangent_frame(vec3 N, vec3 p, vec2 uv)
{
    // get edge vectors of the pixel triangle
    vec3 dp1 = dFdx( p );
    vec3 dp2 = dFdy( p );
    vec2 duv1 = dFdx( uv );
    vec2 duv2 = dFdy( uv );

    // solve the linear system
    vec3 dp2perp = cross( dp2, N );
    vec3 dp1perp = cross( N, dp1 );
    vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;

    // construct a scale-invariant frame
    float invmax = inversesqrt( max( dot(T,T), dot(B,B) ) );
    return mat3( T * invmax, B * invmax, N );
}

//to create the matrix33 to convert from tangent to world
mat3 rotmat = cotangent_frame( normal, worldpos, uv );

//rotating a point is easy
vec3 rotated_point = rotmat * point_to_rotate;

62 of 72

Result

This looks better as it takes into account the normal.

63 of 72

Blur results

Although our SSAO image wasn't very noisy, here is an example of the result if we blur it.

64 of 72

Problem with lowres and blur

The problem with using a lower-resolution texture for the SSAO, or with blurring it, is that you will get these ugly artifacts at the edges.

To avoid them you need a depth-aware blur and upsampling shader that reads the scene depth and the low-res depth and compares them; if they are similar, the texel is valid.

More info here
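A minimal sketch of such a depth-aware blur in GLSL, assuming a linearization helper like the depthToLinear shown in the range-check slide and the usual u_depth_texture, u_ao_texture and u_iRes uniforms:

//depth-aware blur: only average AO texels whose depth is close to the center depth
float center_depth = depthToLinear( texture( u_depth_texture, v_uv ).x );
float total = 0.0;
float total_weight = 0.0;
for(int x = -2; x <= 2; ++x)
    for(int y = -2; y <= 2; ++y)
    {
        vec2 offset_uv = v_uv + vec2(x, y) * u_iRes;
        float sample_depth = depthToLinear( texture( u_depth_texture, offset_uv ).x );
        //reject samples that belong to a different surface
        float weight = abs(sample_depth - center_depth) < 0.01 ? 1.0 : 0.0;
        total += texture( u_ao_texture, offset_uv ).x * weight;
        total_weight += weight;
    }
float blurred_ao = total / max(total_weight, 1.0);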

65 of 72

Combine with ambient

Combining it with the ambient light is easy: we pass the texture to our final render and fetch the AO factor for each pixel from the AO texture, using the pixel's screen position as UVs.

If we want more control over the dark areas, we can add a function to manipulate the factor's intensity so it is less linear.

//we need the uv for the pixel in screen position
vec2 screenuv = gl_FragCoord.xy * u_iRes;

//read the ao_factor for this pixel
float ao_factor = texture( u_ao_texture, screenuv ).x;

//we could play with the curve to have more control
ao_factor = pow( ao_factor, 3.0 );

//weight the ambient light by it
final_light = u_ambient_light * ao_factor;

66 of 72

Improvements: Range check

There are improvements to make, like checking whether an occluder is too far from the pixel and applying less occlusion in that case, to avoid the dark halos around objects close to the camera.

To do so we should check the distance between both depths, but because they are not in linear space we cannot compare them directly.

So we first linearize both depth values; then we can subtract them to see how far apart they are, and count the occlusion only if the distance is below some threshold.

float depthToLinear(float z)
{
    return near * (z + 1.0) / (far + near - z * (far - near));
}

//linearize both depths
pdepth = depthToLinear(pdepth);
float projz = depthToLinear(proj.z);
float diff = pdepth - projz;

//check how far it is
if( diff < 0.0 && abs(diff) < 0.001 )
    num--;

67 of 72

Improvements: Better sampling

Right now all samples are aligned with the camera, as we rotate them according to the front vector.

This can produce visible artifacts, which can easily be reduced by rotating the samples by a random per-pixel rotation.

This will introduce noise, but that can easily be fixed with the blur.
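A sketch of such a random rotation, reusing the rand() and rotationMatrix() helpers from the earlier slide (rotating around the pixel normal here is one possible choice, not the only option):

//rotate every kernel point by a per-pixel random angle before using it
float angle = rand( gl_FragCoord.xy ) * 6.283185; //random angle in [0, 2*PI]
mat4 rot = rotationMatrix( normal, angle );
vec3 random_point = (rot * vec4(u_points[i], 0.0)).xyz;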

68 of 72

NO AO

69 of 72

SSAO

70 of 72

Raytraced AO

71 of 72

Conclusions

Having Ambient Occlusion is important to give depth and realism to our scenes.

But it has a big cost, depending on the number of samples.

If we use a Screen Space solution there will be many artifacts due to the lack of relevant information.

72 of 72

References

Next Chapter: Irradiance ▶️