1 of 7

Speaker: Jingwei Ma

2 of 7

Why so fast?

  • MLP tailored for GPUs (paper) (API)
  • Multiresolution hash encoding

3 of 7

Why so fast?

  • MLP tailored for GPUs (paper) (API)
    • Written in CUDA
    • [Resolves] memory traffic bottleneck (see the sketch below)
      • Slower global memory: only the MLP's inputs and outputs
      • Faster on-chip memory: intermediate activations
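
A rough NumPy sketch of the memory-traffic point (illustrative only, not the tiny-cuda-nn CUDA kernels; all names and sizes are made up): evaluating layer by layer materializes every intermediate activation for the whole batch, while a "fused" evaluation pushes one small chunk of inputs through all layers at once, the analogue of keeping activations in fast on-chip memory and touching slow global memory only for the MLP's inputs and outputs.

import numpy as np

def mlp_layer_by_layer(x, weights):
    # Every layer's output is materialized for the whole batch,
    # analogous to round-tripping activations through slow global memory.
    a = x
    for W in weights:
        a = np.maximum(a @ W, 0.0)
    return a

def mlp_fused_chunks(x, weights, chunk=128):
    # Push a small chunk of samples through *all* layers before moving on,
    # the analogue of keeping intermediates on chip and touching global
    # memory only for the network's inputs and outputs.
    out = np.empty((x.shape[0], weights[-1].shape[1]), dtype=x.dtype)
    for start in range(0, x.shape[0], chunk):
        a = x[start:start + chunk]            # one read of the inputs
        for W in weights:
            a = np.maximum(a @ W, 0.0)        # intermediates stay "on chip"
        out[start:start + chunk] = a          # one write of the outputs
    return out

Both functions compute the same results (up to floating-point rounding); the only difference is how often intermediate activations would leave fast memory.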

4 of 7

Why so fast?

  • MLP tailored for GPUs (paper) (API)
    • Written in CUDA
    • [Resolves] memory traffic bottleneck
      • Slower global memory: only the MLP's inputs and outputs
      • Faster on-chip memory: intermediate activations
  • Multiresolution hash encoding (see the sketch after this list)
    • Hash tables
      • Fast lookups
      • Parallelize querying of multiresolution hash table
    • Smaller MLPs
      • Density MLP: 1 hidden layer, size = 64
      • Color MLP: 2 hidden layers, size = 64
    • Trainable embeddings + grid arrangement -> each sample only updates the few table entries it touches
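
A minimal 2D sketch of the multiresolution hash encoding (plain Python, not the paper's CUDA implementation; the level count, table size, and feature width are made up, while the XOR-of-primes hash follows the form used in the paper): at each resolution level, the four cell corners around x are hashed into that level's table of trainable feature vectors, the looked-up features are bilinearly interpolated, and the per-level results are concatenated.

import numpy as np

PRIMES = (1, 2654435761)   # per-dimension constants of the spatial hash

def spatial_hash(ix, iy, table_size):
    return ((ix * PRIMES[0]) ^ (iy * PRIMES[1])) % table_size

def encode_2d(x, tables, resolutions):
    # x: point in [0, 1)^2; tables[l]: (T, F) trainable embeddings for level l;
    # resolutions[l]: grid resolution N_l. Returns the concatenated features.
    feats = []
    for table, n in zip(tables, resolutions):
        T = table.shape[0]
        gx, gy = x[0] * n, x[1] * n
        x0, y0 = int(gx), int(gy)
        tx, ty = gx - x0, gy - y0
        # fetch the 4 surrounding grid vertices through the hash table
        c00 = table[spatial_hash(x0,     y0,     T)]
        c10 = table[spatial_hash(x0 + 1, y0,     T)]
        c01 = table[spatial_hash(x0,     y0 + 1, T)]
        c11 = table[spatial_hash(x0 + 1, y0 + 1, T)]
        # bilinear interpolation of the corner features
        feats.append(c00 * (1 - tx) * (1 - ty) + c10 * tx * (1 - ty)
                     + c01 * (1 - tx) * ty + c11 * tx * ty)
    return np.concatenate(feats)

# Example: 4 levels (coarse to fine), 2**14 entries per table, 2 features each
rng = np.random.default_rng(0)
resolutions = [16, 32, 64, 128]
tables = [rng.uniform(-1e-4, 1e-4, size=(2**14, 2)) for _ in resolutions]
y = encode_2d(np.array([0.3, 0.7]), tables, resolutions)   # shape (8,)

The concatenated y is the coarse-to-fine feature vector that the small MLPs on the following slides consume.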

5 of 7

[Method] 2D version

Coarse-to-fine hash encoding (x → y)

Small MLPs (y → c, σ)

6 of 7

[Method] 2D version

Coarse-to-fine hash encoding (x → y)

Small MLPs (y → c, σ)
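
A hedged NumPy sketch of the second stage (y → c, σ), matching the layer counts and widths listed earlier but with everything else (input width, activations, the missing view-direction input) simplified and made up:

import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

rng = np.random.default_rng(0)
in_dim = 32                                # width of the hash-encoded input y
def init(shape):
    return rng.normal(0.0, 0.1, size=shape)

# Density MLP: 1 hidden layer of 64 neurons -> 16 outputs (sigma + features)
density_W = [init((in_dim, 64)), init((64, 16))]
# Color MLP: 2 hidden layers of 64 neurons -> RGB
color_W = [init((16, 64)), init((64, 64)), init((64, 3))]

def density_mlp(y):
    out = relu(y @ density_W[0]) @ density_W[1]
    return out[0], out          # sigma (activation omitted) + features for color

def color_mlp(feat):
    h = relu(relu(feat @ color_W[0]) @ color_W[1])
    return 1.0 / (1.0 + np.exp(-(h @ color_W[2])))   # RGB in [0, 1]

y = rng.normal(size=in_dim)     # would come from the hash encoding (x -> y)
sigma, feat = density_mlp(y)
c = color_mlp(feat)             # c, sigma then go into volume rendering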

7 of 7

Comparison to prior work

  • Instant NGP vs. voxel hashing (paper, Niessner et al. 2013 -> fuses depth maps into a 3D model)
    • Multiresolution vs. predefined voxel grid
    • Hashing voxel vertices + interpolation vs. hashing voxels
    • Implicit vs. explicit handling of collisions
      • Fewer collisions at low-resolution grids (and more at high-resolution ones)
      • Hash function
        • Simultaneous collisions across several resolutions are unlikely
      • On collision -> average the gradients (toy example below)
        • More important samples dominate the average (e.g. an occupied surface point)
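
A toy Python illustration of the last two bullets (table size, vertices, gradients, and learning rate are all made up): two grid vertices that hash to the same slot both contribute gradients to that single entry, so the entry's update is dominated by the sample with the larger gradient, e.g. a visible, occupied point rather than empty space.

import numpy as np

T = 8                                       # tiny hash table: 8 slots, 2 features
table = np.zeros((T, 2))

def spatial_hash(ix, iy, table_size=T):
    return ((ix * 1) ^ (iy * 2654435761)) % table_size

# Two different grid vertices that happen to collide in this tiny table
v_occupied, v_empty = (3, 5), (7, 1)
slot = spatial_hash(*v_occupied)
assert slot == spatial_hash(*v_empty)       # both map to the same entry

# Pretend backprop produced these gradients w.r.t. the looked-up features
grad_occupied = np.array([1.0, 0.8])        # large: visible surface point
grad_empty = np.array([-0.05, 0.02])        # small: empty space barely matters

# The colliding gradients are averaged into the shared entry, so the update
# mostly follows the more important sample.
lr = 0.1
table[slot] -= lr * (grad_occupied + grad_empty) / 2
print(table[slot])                          # ~= -lr * grad_occupied / 2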