1 of 12

Neural Differentiable Rendering

CMSC838B: Project checkpoint, Nov 22

2 of 12

What has been done

  • Developed the system in Mitsuba 2
    • Automatic differentiation (AD)
    • C++ core with Python bindings
    • Uses CUDA wavefronts: no Python-side for loops except in the training loop
    • ~2K lines of code
  • Ground-truth generation
  • Training on a toy example: a Cornell box with diffuse surfaces
  • An example inverse problem solved with our method

  • Initial results: the hypothesis holds. A neural network can represent the gradient field.
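The "no for loops" point can be illustrated with a small sketch. NumPy stands in for the Enoki/CUDA wavefront backend, and the shading function is a hypothetical stand-in, not the actual system code:

```python
import numpy as np

# Wavefront-style evaluation: instead of looping over rays one at a time,
# every stage operates on the whole array of rays at once (as Enoki does
# on the GPU).

def shade_looped(dirs, albedo):
    # Scalar reference: one Python iteration per ray (slow).
    out = np.empty(len(dirs))
    for i in range(len(dirs)):
        cos_theta = max(dirs[i] @ np.array([0.0, 0.0, 1.0]), 0.0)
        out[i] = albedo * cos_theta
    return out

def shade_wavefront(dirs, albedo):
    # Vectorized: the same computation over the full wavefront at once.
    cos_theta = np.maximum(dirs @ np.array([0.0, 0.0, 1.0]), 0.0)
    return albedo * cos_theta

rng = np.random.default_rng(0)
dirs = rng.normal(size=(1024, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

assert np.allclose(shade_looped(dirs, 0.7), shade_wavefront(dirs, 0.7))
```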

3 of 12

Cornell box: Primal Rendering

Path tracing ground truth

4 of 12

Cornell box: Derivatives - Ground Truth

Forward-mode AD

Finite differences
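The cross-check behind these two ground-truth panels can be sketched in a few lines: forward-mode AD (here a minimal dual-number implementation, standing in for Mitsuba 2's AD) against central finite differences, on a toy scalar function standing in for the renderer's dependence on a scene parameter:

```python
# Minimal forward-mode AD via dual numbers, checked against central
# finite differences. f() is a hypothetical stand-in for the renderer.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

def f(x):
    return 3.0 * x * x + 2.0 * x + 1.0

x0 = 0.7
ad = f(Dual(x0, 1.0)).dot                 # forward mode: seed tangent with 1.0
h = 1e-5
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)    # central finite differences
assert abs(ad - fd) < 1e-6                # both approximate df/dx = 6*x + 2
```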

5 of 12

Cornell box: Derivatives - Ground Truth

Forward-mode AD

Finite differences

6 of 12

Results

Figure panels: Truth, Res, LHS, RHS

  • Our result is slightly brighter than the ground truth
  • There is likely a bug somewhere
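A quick diagnostic for the over-brightness is to compare mean intensities and inspect the per-pixel ratio image: a near-constant ratio suggests a global scale bug (e.g. a missing normalization factor), while high variance suggests a spatially varying error. A hedged sketch, with hypothetical image arrays:

```python
import numpy as np

def brightness_report(ours, truth, eps=1e-8):
    # Per-pixel ratio; eps guards against division by zero in dark pixels.
    ratio = (ours + eps) / (truth + eps)
    return {
        "global_ratio": float(ours.mean() / truth.mean()),
        "ratio_std": float(ratio.std()),  # ~0 => uniform scale factor
    }

# Synthetic stand-ins: a uniform 5% over-brightness.
truth = np.random.default_rng(2).random((8, 8))
ours = 1.05 * truth

rep = brightness_report(ours, truth)
assert abs(rep["global_ratio"] - 1.05) < 1e-6
assert rep["ratio_std"] < 1e-3            # bias is global, not spatial
```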

7 of 12

Inverse problem

Initial state

Target
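A toy version of this inverse problem can be sketched end to end: recover a scalar albedo from a target image by gradient descent on an L2 image loss. Here `render()` is a hypothetical stand-in for the differentiable renderer, with a hand-written gradient:

```python
import numpy as np

def render(albedo, base):
    # Trivially differentiable stand-in "renderer".
    return albedo * base

rng = np.random.default_rng(1)
base = rng.random((16, 16))               # fixed scene response
target = render(0.8, base)                # target image (true albedo = 0.8)

albedo = 0.2                              # initial state
lr = 0.5
for _ in range(200):
    img = render(albedo, base)
    # d/d_albedo of mean((img - target)^2) = mean(2 * (img - target) * base)
    grad = np.mean(2.0 * (img - target) * base)
    albedo -= lr * grad

assert abs(albedo - 0.8) < 1e-3           # parameter recovered
```

In the actual system, the hand-written `grad` line is what the neural gradient field (or Mitsuba 2's AD) replaces.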

8 of 12

9 of 12

Optimization process: Ours vs. Mitsuba AD

Log-error of the parameter
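The plotted quantity is log10 of the absolute parameter error per iteration, which turns roughly geometric convergence into a straight line. A minimal sketch with hypothetical optimizer iterates:

```python
import numpy as np

theta_star = 0.8
history = [0.2, 0.5, 0.65, 0.725, 0.7625]   # hypothetical iterates
log_err = [np.log10(abs(t - theta_star)) for t in history]

# Error halves each step, so the log-error decreases by a constant.
assert all(b < a for a, b in zip(log_err, log_err[1:]))
```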

10 of 12

Performance

Method            Error of parameter   Time (ms/it)
Biased AD         5.93399e-02          76.9
Unbiased AD       1.53764e-04          129.1
Ours (large net)  2.01029e-05          217.9
Ours (small net)  4.63818e-05          92.6

11 of 12

Next step: non-linear gradients for specular surfaces

Primal rendering

Derivatives (finite differences)

12 of 12

Next steps: long term

Also:

  • Comparison to more baselines: radiative backpropagation
  • Fine-tuning our model after every update and making it performance-optimal
    • Importance sampling the residual
  • Scenes with more complex geometry and materials
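"Importance sampling the residual" can be sketched as follows: draw the next fine-tuning batch with probability proportional to each sample's current residual, so the network concentrates on where it is worst. The residuals below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
residual = np.abs(rng.normal(size=1000))   # hypothetical per-sample residuals
residual[:10] *= 50.0                      # a few samples dominate the error

p = residual / residual.sum()              # sampling distribution ∝ residual
batch = rng.choice(len(residual), size=256, p=p, replace=True)

# The 10 high-residual samples (1% of the pool) should be heavily
# over-represented in the batch.
frac_hard = np.mean(batch < 10)
assert frac_hard > 0.1
```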

  • Thanks. Questions?