Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling
Hyunbae Kim
Animatable human avatar
Limitations of meshes and point clouds
Limitations of implicit representations (NeRF)
3DGS
Contribution of Animatable Gaussians
Preliminary: 3DGS
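As a quick sketch of the 3DGS preliminary: each Gaussian carries a center μ, a rotation R and per-axis scales s that define its covariance Σ = R S Sᵀ Rᵀ, plus a color and opacity; rendering alpha-composites Gaussians front to back. A minimal NumPy illustration (not the paper's CUDA rasterizer; function names are mine):

```python
import numpy as np

def gaussian_covariance(R, s):
    """3DGS covariance: Sigma = R S S^T R^T, with S = diag(s)."""
    S = np.diag(s)
    return R @ S @ S.T @ R.T

def gaussian_density(x, mu, Sigma):
    """Unnormalized Gaussian: exp(-0.5 (x-mu)^T Sigma^{-1} (x-mu))."""
    d = x - mu
    return np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d)

def alpha_blend(colors, alphas):
    """Front-to-back compositing: C = sum_i c_i a_i prod_{j<i}(1 - a_j)."""
    C = np.zeros(3)
    T = 1.0  # accumulated transmittance
    for c, a in zip(colors, alphas):
        C += T * a * c
        T *= (1.0 - a)
    return C
```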
Overview
Learning Parametric Template
Goal: Reconstruct a canonical geometric model as the template
Represent the canonical character as an SDF and color field instantiated by an MLP
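The idea of an MLP-instantiated SDF and color field can be sketched as a tiny network mapping a canonical 3D point to a signed distance and an RGB color. A toy NumPy version, purely for illustration (the paper's network is far larger and trained, not randomly initialized):

```python
import numpy as np

rng = np.random.default_rng(0)

class CanonicalField:
    """Toy MLP: canonical 3D point -> (SDF value, RGB color).
    Hidden size and initialization are illustrative assumptions."""
    def __init__(self, hidden=64):
        self.W1 = rng.normal(0, 0.1, (3, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, 4))  # 1 SDF + 3 RGB channels
        self.b2 = np.zeros(4)

    def __call__(self, x):
        h = np.maximum(x @ self.W1 + self.b1, 0.0)   # ReLU hidden layer
        out = h @ self.W2 + self.b2
        sdf = out[..., 0]                            # signed distance
        rgb = 1.0 / (1.0 + np.exp(-out[..., 1:]))    # sigmoid -> [0,1] color
        return sdf, rgb
```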
Template-guided Parameterization
Pose-dependent Gaussian Maps
- StyleUNet, a StyleGAN-based CNN
- Front and back pose-dependent Gaussian maps
- View direction map
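Each Gaussian map stores per-pixel Gaussian parameters that are read out into a splat set. A small sketch of that readout, where the channel layout (position, color, scale, opacity) is my assumption for illustration, not the paper's exact parameterization:

```python
import numpy as np

def gaussians_from_map(gauss_map, valid_mask):
    """Read per-pixel Gaussian parameters from a pose-dependent Gaussian map.
    gauss_map: (H, W, 10) array; valid_mask: (H, W) bool of template pixels.
    Assumed channel layout: 0:3 position, 3:6 color, 6:9 scale, 9 opacity."""
    params = gauss_map[valid_mask]              # (N, 10) valid pixels only
    return {
        'pos':     params[:, 0:3],
        'color':   params[:, 3:6],
        'scale':   np.exp(params[:, 6:9]),               # keep scales positive
        'opacity': 1.0 / (1.0 + np.exp(-params[:, 9])),  # sigmoid -> (0,1)
    }
```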
LBS of 3D Gaussians
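Linear blend skinning of the Gaussians warps each canonical Gaussian into the posed space by blending per-joint rigid transforms with skinning weights; the blended rotation also reorients each Gaussian. A minimal NumPy sketch (the interface and the rotation handling are simplifying assumptions):

```python
import numpy as np

def lbs_gaussians(mu, rot, weights, joint_R, joint_t):
    """Linear blend skinning of 3D Gaussians (sketch).
    mu: (N,3) canonical centers; rot: (N,3,3) canonical orientations;
    weights: (N,K) skinning weights; joint_R: (K,3,3); joint_t: (K,3)."""
    # Blend per-joint rigid transforms with the skinning weights
    A_R = np.einsum('nk,kij->nij', weights, joint_R)   # (N,3,3)
    A_t = weights @ joint_t                            # (N,3)
    mu_posed = np.einsum('nij,nj->ni', A_R, mu) + A_t  # warp centers
    rot_posed = A_R @ rot                              # reorient Gaussians
    return mu_posed, rot_posed
```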
Training: Loss
(ϕₗ is a layer of the pretrained CNN, e.g., VGG16)
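The training loss combines a pixel-wise reconstruction term with a perceptual term over pretrained CNN features. A hedged reconstruction of its general form (the weights λ and the exact norm are my notation, not taken from the paper):

```latex
\mathcal{L} \;=\; \lambda_{1}\,\bigl\lVert \hat{I} - I \bigr\rVert_{1}
\;+\; \lambda_{\mathrm{perc}} \sum_{l} \bigl\lVert \phi_{l}(\hat{I}) - \phi_{l}(I) \bigr\rVert_{1}
```

Here \(\hat{I}\) is the rendered image, \(I\) the ground truth, and \(\phi_{l}\) a layer of the pretrained CNN (e.g., VGG16), as stated above.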
Results
Results: Comparison with body-only avatars
Results: Comparison with AvatarReX
Results: Quantitative comparison
Ablation: Parametric Template
Ablation: Backbones
Ablation: Pose Projection
Comparison panels: w/o vs. w/
Limitations