1 of 34

Learning Controllable Adaptive Simulation for Multi-resolution Physics


Tailin Wu

Postdoctoral Scholar, Computer Science @ Stanford University

Stanford Data for Sustainability Conference 2023

Collaborators: Takashi Maruyama, Qingqing Zhao, Gordon Wetzstein, Jure Leskovec

2 of 34

Simulation is a core task in science


3 of 34

Problem definition and significance

Problem definition: Simulate a multi-resolution system in an accurate and efficient way.

Significance: Many physical systems in science and engineering are multi-resolution: parts of the system are highly dynamic and need to be resolved at a fine-grained resolution, while other parts are more static.

Weather prediction

Galaxy formation

Laser-plasma particle acceleration

4 of 34

Preliminaries: Classical Solvers


Classical solvers:

Based on Partial Differential Equations (PDEs). Discretize the PDE, then use finite difference, finite element, finite volume, etc. to evolve the system.

Pros and challenges:

  • Pros: (1) based on first principles and interpretable, (2) accurate, (3) come with error guarantees.
  • Challenges: slow and computationally expensive, due to

(1) Small time intervals to ensure numerical stability, or the use of implicit methods.

(2) For multi-resolution systems, typically needing to resolve down to the finest required resolution.

(Figure: discretization onto a mesh/grid, with a discrete time index and discrete cell id.)
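The stability constraint above can be made concrete with a minimal sketch (ours, not from the slides): an explicit finite-difference solver for the 1D heat equation, where the stability (CFL-like) condition dt ≤ dx²/(2ν) is exactly what forces the small time interval.

```python
import numpy as np

def step_heat(u, nu, dx, dt):
    """One explicit Euler step with a 3-point Laplacian (periodic BCs)."""
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * nu * lap

nu, dx = 0.1, 0.1
dt = 0.5 * dx**2 / (2 * nu)  # half the stability limit, to stay safely stable
u = np.sin(2 * np.pi * np.linspace(0, 1, 11)[:-1])  # 10 periodic cells
for _ in range(100):
    u = step_heat(u, nu, dx, dt)  # diffusion gradually damps the wave
```

Doubling `dt` past the stability limit makes this explicit scheme blow up, which is why classical solvers either take tiny steps or switch to implicit methods.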

5 of 34

Preliminaries: Classical Solvers


Classical solvers:

Based on Partial Differential Equations (PDEs)

Limitations of today’s methods for multi-resolution physics:

  • Many solvers are based on a fixed grid/mesh.
  • Adaptive Mesh Refinement (AMR) can adaptively update the local spatial resolution. But since it is built on classical solvers, it shares similar drawbacks (e.g. speed), and it relies on heuristics without directly optimizing the computational cost.


6 of 34

Preliminaries: Deep learning-based surrogate models


Recently, deep learning-based surrogate models have emerged as an attractive alternative to replace or complement classical solvers. They:

  • Offer speedup via:
    • Larger spatial grid spacing
    • Larger time intervals
    • Explicit forward steps

Limitations of today’s methods for multi-resolution physics:

  • They focus on learning the evolution with low prediction error, without directly optimizing the computational cost.
  • Almost all works learn the evolution model on a fixed grid or mesh.
  • A few exceptions exist, e.g. MeshGraphNets [1], which uses supervision from a classical AMR solver.
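The speedup sources above can be illustrated with a toy sketch (ours; the "model" is a fixed linear operator standing in for a trained network): a surrogate replaces the PDE step with a single explicit forward map u_t → u_{t+Δt}, where Δt can be much larger than a classical solver's stability limit.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.eye(16) * 0.95 + 0.01 * rng.standard_normal((16, 16))  # stand-in for learned weights

def surrogate_step(u):
    return W @ u  # one forward pass advances the state by a large time interval

u = rng.standard_normal(16)
trajectory = [u]
for _ in range(5):                      # autoregressive rollout
    trajectory.append(surrogate_step(trajectory[-1]))
```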

7 of 34

Our contribution

We introduced the first deep learning-based surrogate model that jointly learns the evolution and optimizes the computational cost.

(Wu et al., ICLR 2023, spotlight)

The key component is a GNN-based reinforcement learning (RL) agent, which learns to coarsen or refine the mesh to achieve a controllable tradeoff between prediction error and computational cost.

8 of 34

Outline


  • Background: multi-resolution physics simulation and prior methods
  • Method: Learning Controllable Adaptive Simulation for Multi-resolution Physics (LAMP)
  • Experiments

9 of 34

Architecture

The policy network predicts the number of edges to refine or coarsen, and which edges to refine or coarsen.
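A minimal sketch of this two-headed policy (ours, not the paper's code): one head picks K, the number of edges to act on; the other scores each edge, and the top-K scored edges are chosen. A trained GNN would produce these logits from the mesh graph; here they are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_edges = 8
count_logits = rng.standard_normal(n_edges + 1)  # K can be 0..n_edges
edge_logits = rng.standard_normal(n_edges)       # per-edge scores

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

K = int(np.argmax(softmax(count_logits)))        # head 1: how many edges
chosen = np.argsort(edge_logits)[::-1][:K]       # head 2: which edges (top-K)
```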


11 of 34

Architecture

The policy network predicts the number of edges to refine or coarsen, and which edges to refine or coarsen.

The evolution model evolves the system while keeping the mesh topology fixed.


13 of 34

Architectural backbone: MeshGraphNets [1]


[1] Pfaff et al. ICLR 2021

14 of 34

Action space: refinement and coarsening

(1) Refining an edge

(2) Coarsening an edge

There are also constraints that must be satisfied, e.g. if two edges lie on the same face, they cannot both be refined or coarsened.
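A hedged sketch of these two ingredients (ours; the vertex/face data layout is our assumption, not the paper's): refining an edge of a triangle mesh by inserting a midpoint vertex and splitting each incident face, plus a simple version of the "one edge per face" constraint check.

```python
def refine_edge(vertices, faces, edge):
    """Split edge (a, b): add its midpoint, split each incident triangle in two."""
    a, b = edge
    mid = tuple((va + vb) / 2 for va, vb in zip(vertices[a], vertices[b]))
    vertices.append(mid)
    m = len(vertices) - 1
    new_faces = []
    for f in faces:
        if a in f and b in f:                    # face contains the split edge
            c = next(v for v in f if v not in (a, b))
            new_faces += [(a, m, c), (m, b, c)]  # two smaller triangles
        else:
            new_faces.append(f)
    return vertices, new_faces

def conflict(e1, e2, faces):
    """Constraint: two edges sharing a face cannot both be refined/coarsened."""
    return any(set(e1) <= set(f) and set(e2) <= set(f) for f in faces)

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
faces = [(0, 1, 2)]
verts, faces = refine_edge(verts, faces, (0, 1))
```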

15 of 34

Learning the evolution model

The loss is based on the multi-step prediction error of the evolution model, compared with the ground-truth.

(Figure: at each time step, the predicted mesh is compared with the ground-truth mesh.)
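A minimal sketch of the multi-step loss (ours; `model` is a stand-in linear map, not the actual evolution model): roll the model forward from the initial state and accumulate the MSE against the ground truth at every step.

```python
import numpy as np

def multi_step_loss(model, u0, ground_truth):
    """Average MSE of an S-step autoregressive rollout vs. S ground-truth snapshots."""
    u, loss = u0, 0.0
    for u_true in ground_truth:
        u = model(u)                         # feed the prediction back in
        loss += np.mean((u - u_true) ** 2)
    return loss / len(ground_truth)

model = lambda u: 0.9 * u
u0 = np.ones(4)
gt = [0.9 * u0, 0.81 * u0, 0.729 * u0]       # exact trajectory -> zero loss
loss = multi_step_loss(model, u0, gt)
```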

16 of 34

Learning the policy: reward

Reward:

Reward is based on the improvement of both error and computational cost.

    • Error is the multi-step prediction error
    • Computational cost is measured by number of vertices in the mesh

[1] Sutton, et al. NIPS 1999

The tradeoff parameter β is also an input to the policy.

(Figure: the system is rolled out for S steps to compute the reward.)
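A schematic reward function (ours; the exact weighting in the paper may differ): the improvement in multi-step error is traded off against the improvement in cost (number of mesh vertices), with the tradeoff controlled by β.

```python
def reward(err_before, err_after, n_nodes_before, n_nodes_after, beta):
    """Reward = weighted sum of relative error improvement and cost improvement."""
    d_error = (err_before - err_after) / err_before        # relative error reduction
    d_cost = (n_nodes_before - n_nodes_after) / n_nodes_before  # relative cost reduction
    return (1 - beta) * d_error + beta * d_cost

# Example: 20% error reduction, 10% fewer nodes, balanced beta
r = reward(err_before=1.0, err_after=0.8, n_nodes_before=100, n_nodes_after=90, beta=0.5)
```

Larger β weights the cost term more, matching the behavior described later (more coarsening, less refinement).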

17 of 34

Learning the policy: actor objective

Objective for the actor (REINFORCE with an entropy regularizer; sg denotes stop-gradient):

$L_{\text{actor}} = -\,\mathrm{sg}(\hat{A})\,\log \pi_\theta(a \mid s) \;-\; \lambda\, \mathcal{H}(\pi_\theta)$

where $\hat{A}$ is the advantage, $\log \pi_\theta(a \mid s)$ is the log probability of taking the action, and $\mathcal{H}$ is the entropy regularizer.
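A minimal numerical sketch of this objective (ours): in plain numpy the advantage enters as a constant, which plays the role of the stop-gradient sg(·) on the slide.

```python
import numpy as np

def actor_loss(log_probs, advantages, probs, entropy_coef=0.01):
    """REINFORCE policy-gradient loss with an entropy bonus for exploration."""
    pg = -np.mean(advantages * log_probs)             # -sg(A) * log pi(a|s)
    entropy = -np.sum(probs * np.log(probs + 1e-12))  # H(pi)
    return pg - entropy_coef * entropy

probs = np.array([0.7, 0.2, 0.1])
loss = actor_loss(np.log(probs), advantages=np.array([1.0, -0.5, 0.2]), probs=probs)
```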

18 of 34

Learning the policy: critic objective

Objective for the critic (value function):

$L_{\text{critic}} = \big(V_\phi(s) - R^{\text{target}}\big)^2$

Here, the value target $R^{\text{target}}$ does not use a bootstrapped estimate of $V$, which would assume an infinite horizon. Instead, we use the reward defined as the improvement in error and computation within S steps of rollout.
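A tiny sketch of this finite-horizon target (ours): the critic regresses onto the observed S-step reward directly, with no bootstrapped $V(s_{t+S})$ term appended.

```python
import numpy as np

def critic_loss(value_pred, s_step_rewards):
    """Squared error against the finite-horizon target (no bootstrapping)."""
    target = float(np.sum(s_step_rewards))  # just the S-step reward, no V(s_{t+S})
    return (value_pred - target) ** 2

loss = critic_loss(value_pred=0.5, s_step_rewards=[0.1, 0.2, 0.1])
```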

19 of 34

Experiment 1

(1) Burgers’ equation (from the benchmark in [1])


[1] Brandstetter, Johannes, Daniel Worrall, and Max Welling. "Message passing neural PDE solvers." ICLR 2022

20 of 34

Experiment 1

Example rollout:

  • Refines more near the shock front, and coarsens more in static regions.
  • With a larger β, which focuses more on reducing computation, it refines less and coarsens more.

(Figure: added cells and removed cells are highlighted in the rollout.)

22 of 34

Result table:

Compared to state-of-the-art models (FNO, MP-PDE) and strong baselines, our model achieves a large error reduction (33.4% on average).

23 of 34

Result table:

Compared to the ablation LAMP (no remeshing), our full model reduces error by 49.1%, with only a modest increase in computational cost.

24 of 34

Experiment 1

Fig. 3 shows the average error and number of nodes over full test trajectories. We see that with increasing β, LAMP improves the Pareto frontier over other models.

25 of 34

Experiment 2: results


26 of 34

Experiment 2: example visualization


ground-truth (fine-grained)

MeshGraphNets + GT remeshing

MSE: 5.91e-4

27 of 34

Experiment 2: example visualization


LAMP + heuristic remeshing

ground-truth (fine-grained)

MSE: 6.38e-4

28 of 34

Experiment 2: example visualization


LAMP + no remeshing

ground-truth (fine-grained)

MSE: 6.13e-4

29 of 34

Experiment 2: example visualization


ground-truth (fine-grained)

LAMP

MSE: 5.80e-4

30 of 34

Summary

  • Multi-resolution is a key characteristic of physical simulations.
    • Challenge: how to simulate accurately and efficiently.
  • We introduced LAMP, the first deep learning-based surrogate model that jointly learns the evolution and optimizes the computational cost.

31 of 34

Summary

3. Experiments on a 1D PDE and mesh-based simulations demonstrate LAMP’s capability, outperforming the previous state-of-the-art.

Paper:

Code:

32 of 34

Future opportunities

1. Larger simulations: up to millions to billions of nodes

2. Inverse design for real-world engineering

Welcome collaborations (email Tailin Wu, tailin@cs.stanford.edu)!

33 of 34


34 of 34


Other examples: