One of the biggest disadvantages of Monte Carlo methods is a relatively slow convergence rate.

One of the fundamental goals in researching Monte Carlo methods is to find or design efficient estimators.

Variance reduction techniques include importance sampling, stratified sampling, control variates, and adaptive sampling.

1. Multiple importance sampling  

Direct Illumination:

Generating uniform random directions is not the best approach for approximating the illumination integral:

For instance, if we are reflecting an environment on a glossy material, it seems intuitively more effective to sample directions around the specular direction (i.e. the mirror reflection direction), since much of the reflected light originates from these directions.


   

BRDFs -

   Ward BRDF - visible edge darkening

   Ashikhmin-Shirley (anisotropic Phong-like BRDF) - slight visible edge darkening

   Halfway Vector Disk BRDF - energy conserving, anisotropic, no edge darkening

The variance of the estimator depends on the density p(x) from which random samples are drawn. If we choose the density p(x) intelligently, the variance of the estimator is reduced. This is called importance sampling. p(x) is called the importance density and wi = f(Xi)/p(Xi) is the importance weight.

The best possible sampling density is p(x) = c·|f(x)|, where the proportionality constant c = 1 / ∫ |f(x)| dx normalizes p to integrate to one.

If we choose an importance density p(x) that has a similar shape to f(x), the variance can be reduced. It is also important to choose an importance density p such that it is simple and efficient to evaluate. In practice, p can be designed by doing some of the following:

  1. Discard or approximate some parts of f(x) such that function g(x) = f(x)p(x) can be integrated analytically.
  2. Construct a low dimensional discrete approximation of f(x).
  3. Approximate f(x) by using Taylor expansion.
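As a concrete sketch of the importance weights wi = f(Xi)/p(Xi) in action (the integrand and density below are my own illustration, not from the notes): we estimate ∫₀¹ eˣ dx = e − 1 with a hypothetical density p(x) = (1 + x)/1.5, whose shape roughly follows the integrand, sampled by inverting its CDF:

```python
import math
import random

def importance_estimate(f, p, sample_p, n):
    """Monte Carlo estimate of ∫ f using importance density p.

    Each draw Xi ~ p contributes the importance weight f(Xi) / p(Xi)."""
    return sum(f(x) / p(x) for x in (sample_p() for _ in range(n))) / n

f = math.exp                         # integrand on [0, 1]; exact integral is e - 1
p = lambda x: (1.0 + x) / 1.5        # normalized density shaped like f
# Inverse-CDF sampling for p: P(x) = (x + x²/2)/1.5, so x = sqrt(1 + 3ξ) - 1
sample_p = lambda: math.sqrt(1.0 + 3.0 * random.random()) - 1.0

random.seed(0)
est = importance_estimate(f, p, sample_p, 200_000)
print(est)  # close to e - 1 ≈ 1.71828
```

Because f/p stays nearly constant over [0, 1], the weights have low variance, which is the whole point of choosing p with a similar shape to f.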

BRDF-based Importance Sampling

While various ways exist to create uniformly distributed random samples, such as pseudo-random or quasi-random number generators, we must convert these uniform values so that they are proportional or nearly proportional to the BRDF.

Inverse Transform Sampling

As one method for warping the uniform distribution, we can convert the PDF into a Cumulative Distribution Function (a CDF describes the probability that a real-valued random variable X with a given probability distribution will be found at a value less than or equal to x; see http://en.wikipedia.org/wiki/Cumulative_distribution_function). Intuitively, think of a CDF as a mapping from a uniform distribution to a PDF-proportional distribution.

In the discrete case, where there are only a finite number of samples, we can define the CDF by stacking each sample. For instance, if we divide all possible sampling directions for rendering into 4 discrete directions, where one of the sample directions, S2, is known a priori to be more important, then the probabilities of the 4 samples can be stacked together as depicted:

 

(Figure: PDF-proportional mapping of the stacked probabilities.) In this case, if a random number is chosen uniformly, where any value between 0 and 1 is equally likely, then more numbers will map to the important sample S2, and thus that direction will be sampled more often.
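The stacked-CDF idea above can be sketched in a few lines. The four probabilities below are hypothetical, chosen so that S2 dominates, just as in the discrete example:

```python
import bisect
import itertools
import random

# Hypothetical probabilities for the four sample directions; S2 is most important.
probs = {"S1": 0.1, "S2": 0.6, "S3": 0.2, "S4": 0.1}

# "Stack" the probabilities into a discrete CDF: [0.1, 0.7, 0.9, 1.0]
names = list(probs)
cdf = list(itertools.accumulate(probs.values()))

def sample_direction(xi):
    """Map a uniform random number xi in [0, 1) to a sample direction."""
    return names[bisect.bisect_right(cdf, xi)]

random.seed(1)
counts = {name: 0 for name in names}
for _ in range(10_000):
    counts[sample_direction(random.random())] += 1
print(counts)  # S2 is chosen roughly 60% of the time
```

Because more of the unit interval maps to S2, that direction is sampled most often, exactly as the stacked diagram suggests.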

For continuous one-dimensional PDFs, we must map a uniform distribution of numbers to an infinite set of possible PDF-proportional samples. This way, we can generate a random number from a uniform distribution and be more likely to obtain an important sample direction, just as more random values would map to the important sample S2. We can obtain the position of a sample Θ within the CDF by stacking all the previous samples before Θ on top of each other via integration,

P(Θ) = ∫₀^Θ p(θ) dθ.

To obtain the PDF-proportional sample from a random number, we set P(Θ) equal to a random value ξ and solve for Θ. In general, we denote the mapping from the random value to the sample direction distributed according to the PDF as P⁻¹(ξ).
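For a concrete one-dimensional example (the density p(x) = 2x is my own illustration, not from the notes): its CDF is P(x) = ∫₀ˣ 2t dt = x², so setting P(x) = ξ and solving gives P⁻¹(ξ) = √ξ:

```python
import math
import random

# Hypothetical 1-D density p(x) = 2x on [0, 1].
# CDF: P(x) = x², so the inverse mapping is P⁻¹(ξ) = √ξ.
def inverse_cdf(xi):
    return math.sqrt(xi)

random.seed(2)
samples = [inverse_cdf(random.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # E[x] under p is ∫ x · 2x dx = 2/3
```

Uniform random values near 1 map to a dense cluster of samples near x = 1, where p is largest, which is exactly the warping the inverse transform is meant to produce.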

Phong BRDF Sampling

We can represent the glossy component mathematically as cosⁿ(θs), where θs is the angle between the sample direction and the specular direction and n is the shininess of the surface.

To convert this material function into a PDF, we separate out the exponentiated cosine lobe from the illumination integral. The other components of the integrand are ignored for computational efficiency and mathematical simplicity. This results in a PDF in terms of the angles around the specular direction, p(θs, φs). Here, θs and φs are the spherical coordinates of the sample direction in a coordinate frame where the specular direction is the z-axis. To formulate this PDF, we must first normalize the cosine lobe to integrate to one,

p(θs, φs) = ((n + 1) / 2π) cosⁿ(θs) sin(θs).

The extra sine appears when converting from solid angles to spherical coordinates. Often, this normalization term is also included in the BRDF to ensure the reflectance model is energy conserving.
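Assuming the normalization works out to p(θs, φs) = ((n + 1) / 2π) cosⁿ(θs) sin(θs), a quick numeric check (a sketch, not from the notes) confirms the PDF integrates to one over the hemisphere for several shininess values:

```python
import math

def phong_pdf(theta, phi, n):
    # Normalized cosine lobe: (n + 1)/(2π) · cosⁿ(θ) · sin(θ); no φ dependence.
    return (n + 1) / (2 * math.pi) * math.cos(theta) ** n * math.sin(theta)

def integrate_pdf(n, steps=2000):
    """Midpoint-rule integral of the PDF over θ ∈ [0, π/2], φ ∈ [0, 2π].

    The φ integral contributes a factor of 2π since the PDF is constant in φ."""
    dtheta = (math.pi / 2) / steps
    return sum(
        phong_pdf((i + 0.5) * dtheta, 0.0, n) for i in range(steps)
    ) * dtheta * 2 * math.pi

for n in (1, 10, 100):
    print(n, round(integrate_pdf(n), 4))  # each ≈ 1.0
```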

Sample Directions

To generate the two-dimensional sample direction (θs, φs) according to this PDF, it is best to find each dimension of the sample direction separately.

We need a PDF for each dimension so we can apply

P(x) = ∫₀^x p(x′) dx′

to convert the PDF into a CDF and create a partial sample direction (the CDF of a continuous random variable X can be expressed as the integral of its probability density function: http://en.wikipedia.org/wiki/Cumulative_distribution_function).

To accomplish this, the marginal probability of the θs direction, p(θs), can be separated from p(θs, φs) by integrating the PDF across the entire domain of φs,

p(θs) = ∫₀^2π p(θs, φs) dφs = (n + 1) cosⁿ(θs) sin(θs).

This one-dimensional PDF can now be used to generate θs. Given the value of θs, we find the PDF for φs using the conditional probability p(φs | θs) from Bayes' theorem, defined as

p(φs | θs) = p(θs, φs) / p(θs) = 1 / (2π).
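Putting the pieces together (the conditional PDF for φs works out to the constant 1/(2π), since the joint PDF has no φ dependence), inverting the marginal CDF P(θ) = 1 - cos^(n+1)(θ) yields the familiar closed forms cos(θs) = ξ₁^(1/(n+1)) and φs = 2πξ₂. A sketch of the resulting sampling routine, under these assumptions:

```python
import math
import random

def sample_phong_lobe(n, xi1, xi2):
    """Sample (θs, φs) around the specular direction for a cosⁿ lobe.

    Inverting P(θ) = 1 - cos^(n+1)(θ) gives cos(θs) = ξ₁^(1/(n+1));
    φs is uniform on [0, 2π) since p(φs | θs) = 1/(2π)."""
    theta_s = math.acos(xi1 ** (1.0 / (n + 1)))
    phi_s = 2.0 * math.pi * xi2
    return theta_s, phi_s

random.seed(3)
n = 50
thetas = [sample_phong_lobe(n, random.random(), random.random())[0]
          for _ in range(100_000)]
mean_theta = sum(thetas) / len(thetas)
# Sharper lobes (larger n) concentrate samples near the specular direction.
print(mean_theta)
```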