Master’s Practical Course:

3D Scanning and Spatial Learning

Tobias Kirschstein, Simon Giebenhain

SS 2024

Our Team

Tobias Kirschstein

Simon Giebenhain

3D Scanning

& Spatial Learning

What we do: Photorealistic Avatars

What we do: Human Head Geometry

What we do: 3D Scanning Setup

What we do: Multi-view Video Capture Setup

Organization

Course Format

  • Teams of 2-4 students
  • Project assignment at the kick-off meeting at the start of the semester
  • Weekly meetings where teams present progress: Thursday, 14:00-16:00, R. 01.07.023 (in person)
  • Communication via Slack
  • At semester end: Project Presentation + Report

Grading

  • Individual evaluation for each student

  • Project work – 70%
  • Final presentation – 15%
  • Final report – 15%

Application

  • Requirements:
    • Introduction to Deep Learning (IN2346)
    • Experience in 3D Vision / Graphics (e.g., IN2354)

  • Apply through the matching system: matching.in.tum.de
    • 09.02. - 14.02.

  • Send grade sheets + motivation to get higher priority (optional)
    • Fill in this form by 17.02.

Possible Projects

1. Fitting of 3D Morphable Models (3DMMs)

  • Fit a 3DMM (e.g., FLAME) to multi-view videos
  • Use the tracked mesh as the basis for avatar reconstruction

[1] Li et al.: Learning a model of facial shape and expression from 4D scans (2017)

[2] Lombardi et al.: Mixture of Volumetric Primitives for Efficient Neural Rendering (2021)

2. Intuitive Animation

  • User interaction via landmarks / user strokes
  • Audio-driven

[1] Tena et al.: Interactive region-based linear 3d face models (2011)

[2] Neumann et al.: Sparse localized deformation components (2013)

[3] Cudeiro et al.: Capture, Learning, and Synthesis of 3D Speaking Styles (2019)

3. Multi-View Stereo via Inverse Rendering

  • MVS with 3D laser scans as a prior

Differentiable Rendering: nvdiffrast [1]

Neural Surface Rendering: NeuS [2]

[1] Laine et al.: Modular Primitives for High-Performance Differentiable Rendering (2020)

[2] Wang et al.: NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction (2021)

4. Hair Reconstruction

  • Reconstruction of individual hair strands
  • Physical simulation of hair movement

[1] Nam et al.: Strand-accurate Multi-view Hair Capture (2019)

[2] Rosu et al.: Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images (2022)

X. Your own ideas

  • Any 3D topic you are super excited about?
  • Maybe you want to scan / record something in our setups?

Hope to see you in the group!

3D Scanning and Spatial Learning

If interested: presentations of last semester’s projects take place on Thursday, 08.02.2024, 10:00 - 12:00, room 01.07.014