1 of 41

ICRA Workshop 2021

Legged Robot Autonomy

Jessy Grizzle & Maani Ghaffari


2 of 41

Outline

  • Cassie Autonomous Locomotion

  • Multi-Task Learning Multi-Layer Bayesian Mapping Framework

  • MIT mini-Cheetah

  • UM Campus Woodlands


3 of 41

Why Autonomy?


Replace Humans in Dangerous Environments

Scientific Exploration in the Wild and Space

Collaborate with Humans in Our Daily Life

4 of 41

Cassie with Torso

  • Custom torso
  • 32-beam Velodyne LiDAR
  • Intel RGB-D Camera
  • Jetson GPU
  • Power supply and battery
  • Approx. 11 kg

  • Cassie alone: 33 kg


5 of 41


Cassie: Fully Autonomous

(summer and fall 2019, sidewalk)

6 of 41


Summer 2019

7 of 41

Cassie: Fully Autonomous

(summer and fall 2019, sidewalk)

Simple navigation: no right turns; avoid obstacles (e.g., a Spin scooter)

No Loop Closures

Pure Odometry

(not bragging, just an unfortunate fact; a full SLAM pipeline is coming soon)

8 of 41


Zero Dynamics, Pendulum Models, and Angular Momentum in Feedback Control of Bipedal Locomotion

arXiv 2021

One of our superpowers: feedback control for agile motions

9 of 41

Autonomous Operation with Cassie

(More challenging terrain)

  • June 2021


10–17 of 41

Autonomy Stack

[Block diagram of the autonomy stack, built up step by step across slides 10–17]
18 of 41

Multi-Task Learning Multi-Layer Bayesian Mapping Framework

  • A Unified Segmentation Model via MTL
  • Multi-Layer Bayesian Map Inference


Lu Gan (UMich)

Youngji Kim (KAIST)

19 of 41

A Unified Segmentation Model via MTL

  • Traditional robotic maps contain only geometric information and typically take raw sensor data (LiDAR, sonar, radar, cameras) directly as input.


20 of 41

A Unified Segmentation Model via MTL

  • Semantic mapping adds a reasoning block that interprets sensor data for higher-level scene understanding.


21 of 41

A Unified Segmentation Model via MTL

  • Design a unified deep neural network (DNN) model as the reasoning block of a multi-layer mapping system via multi-task learning (MTL).


22 of 41

Multi-Task Network with Attention Mechanisms

  • Architecture Design
    • Hard-parameter sharing
    • Soft-parameter sharing

  • Works with any ResNet-based feed-forward network (see the sketch below)
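As a concrete illustration of the hard-parameter-sharing option, here is a minimal PyTorch sketch of one shared encoder feeding two task-specific heads. The ResNet-18 backbone, class count, and head sizes are illustrative assumptions, not the configuration used in this work.

# Minimal sketch of hard-parameter sharing: one shared ResNet encoder,
# two task-specific heads (semantic classes + traversability).
# Backbone, class count, and head sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class TwoHeadSegNet(nn.Module):
    def __init__(self, num_semantic_classes=19):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])   # shared layers
        self.semantic_head = nn.Conv2d(512, num_semantic_classes, 1)  # task-specific
        self.traversability_head = nn.Conv2d(512, 1, 1)               # task-specific

    def forward(self, x):
        feats = self.encoder(x)
        size = x.shape[-2:]
        sem = F.interpolate(self.semantic_head(feats), size, mode="bilinear", align_corners=False)
        trav = F.interpolate(self.traversability_head(feats), size, mode="bilinear", align_corners=False)
        return sem, trav  # per-pixel semantic logits, per-pixel traversability logit

sem, trav = TwoHeadSegNet()(torch.randn(1, 3, 128, 256))
print(sem.shape, trav.shape)  # torch.Size([1, 19, 128, 256]) torch.Size([1, 1, 128, 256])

Soft-parameter sharing would instead give each task its own parameters with learned cross-task connections; the attention mechanisms mentioned above gate shared features per task.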


23 of 41

A Unified Segmentation Model via MTL

  • A Multi-Task Network with Attention Mechanisms
    • Model Objective in general

    • Model Objective in segmentation (hedged reconstructions of both objectives below)
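The objective equations on this slide did not survive extraction. The following LaTeX gives a generic form consistent with multi-task segmentation training; the task weights and loss choices are assumptions, not necessarily the exact objective used here:

\mathcal{L}_{\text{MTL}}(\theta) \;=\; \sum_{i=1}^{T} \lambda_i \,\mathcal{L}_i\big(\theta_{\text{shared}}, \theta_i\big)

For the two segmentation tasks, a pixel-wise cross-entropy over the K semantic classes plus a binary cross-entropy for traversability:

\mathcal{L} \;=\; -\lambda_{\text{sem}} \sum_{p}\sum_{k=1}^{K} y_{p,k}\,\log \hat{y}_{p,k}
\;-\; \lambda_{\text{trav}} \sum_{p}\Big[t_p \log \hat{t}_p + (1 - t_p)\log\big(1 - \hat{t}_p\big)\Big]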


24 of 41

A Unified Segmentation Model via MTL

  • Our MTL for Semantic and Traversability Segmentation
    • Implementation via DeepLabv3+ architecture with WideResNet38 backbone.


[Network diagram: shared layers, semantic-specific layers, traversability-specific layers]

25 of 41

A Unified Segmentation Model via MTL

  • Our MTL for Semantic and Traversability Segmentation:
    • Supervised semantic segmentation task.
    • Self-supervised generation of pixel-wise traversability ground-truth images.


26 of 41

Multi-Layer Bayesian Map Inference

  • Closed-form Bayesian inference via conjugate distributions (sketched below)
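A minimal sketch of the conjugate update, assuming a per-cell Categorical likelihood with a Dirichlet prior (standard notation; the paper's exact symbols may differ):

p(\theta) = \mathrm{Dir}(\theta \mid \alpha_1, \dots, \alpha_K), \qquad p(y = k \mid \theta) = \theta_k

\Rightarrow\quad p(\theta \mid y_{1:N}) = \mathrm{Dir}(\theta \mid \alpha_1 + n_1, \dots, \alpha_K + n_K), \qquad
\mathbb{E}[\theta_k \mid y_{1:N}] = \frac{\alpha_k + n_k}{\sum_{j}(\alpha_j + n_j)}

where n_k counts the measurements of class k in the cell, so the posterior is maintained simply by accumulating counts.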


27 of 41

Multi-Layer Bayesian Map Inference

  • Continuous Semantic-Traversability Mapping
    • Reduce the semantic Categorical posterior to traversability measurements that follow a common Bernoulli distribution

    • Assume measurements are independent

    • Final posterior: closed form (a hedged reconstruction follows)
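The equations for these bullets were images on the original slide. One consistent reading, assuming each semantic measurement is mapped to a binary traversable/non-traversable observation t_i \in \{0,1\} with a shared Bernoulli parameter \phi and a conjugate Beta prior, is:

p(t_i \mid \phi) = \phi^{t_i}(1-\phi)^{1-t_i}, \qquad p(\phi) = \mathrm{Beta}(\phi \mid a, b)

p(\phi \mid t_{1:N}) \;\propto\; p(\phi)\prod_{i=1}^{N} p(t_i \mid \phi)
\;=\; \mathrm{Beta}\Big(\phi \,\Big|\, a + \textstyle\sum_i t_i,\; b + N - \sum_i t_i\Big)

i.e., independence lets the Bernoulli measurements be folded into a Beta posterior by simple counting, mirroring the Dirichlet update on the semantic layer.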


28 of 41

Bayesian Spatial Kernel Semantic Mapping

  • Categorical measurement likelihood.

  • The training set of labeled measurements.

  • Posterior over the per-cell class probabilities.

  • Conjugate prior: a Dirichlet prior leads to closed-form recursive Bayesian inference.
  • Continuous semantic map via Bayesian kernel inference: exploits local correlations present in the environment and supports queries at arbitrary resolutions (a hedged sketch of the update follows).
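A hedged sketch of the kernel update, following this group's Bayesian spatial kernel semantic mapping line of work (exact notation may differ): the Dirichlet concentration parameters at a query point x_* accumulate nearby measurements weighted by a compactly supported spatial kernel k(\cdot,\cdot):

\alpha_k^{*} \;=\; \alpha_k^{0} \;+\; \sum_{i} k(x_*, x_i)\, \mathbf{1}\{y_i = k\}

so a cell with no direct hits still receives probability mass from nearby labeled points, which is what allows querying the map at arbitrary resolutions.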


29 of 41

Open Source Software

  • Paper will be on arXiv soon!
  • Implementation Details
    • MTL network: PyTorch implementation, pretrained on Cityscapes dataset

https://github.com/ganlumomo/semantic-segmentation

    • MLM algorithm: C++ implementation with the Robot Operating System (ROS)

https://github.com/ganlumomo/MultilayerMapping


30 of 41

[Lu’s mapping video]


31 of 41

Mini Cheetah with Sensor Suite

  • Custom sensor suite
  • Jetson AGX Xavier
  • Intel RealSense Depth Camera D455
  • Power supply and battery


32 of 41

Mini Cheetah - UMich Forest


33 of 41

UMich Forest - ORB SLAM2 Lost Track


34 of 41

UMich Forest - ORB SLAM2 Lost Track

  • State-of-the-art SLAM systems fail in perceptually degraded situations.

  • Hence the need for accurate proprioceptive tracking to support state estimation.


35 of 41

Contact Aided Invariant EKF on Mini Cheetah

  • Deep Multi-Modal Contact Estimation
  • Lightweight convolutional neural network for contact classification (a sketch follows)
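A minimal sketch of what such a lightweight contact classifier could look like in PyTorch. The input window (proprioceptive channels over a short history), channel count, layer widths, and the per-leg binary output are illustrative assumptions, not the network reported in this work.

# Minimal sketch of a lightweight 1-D CNN contact classifier.
# Input: a window of proprioceptive channels (joint positions/velocities, IMU).
# Output: a contact probability per leg. All sizes here are assumptions.
import torch
import torch.nn as nn

class ContactNet(nn.Module):
    def __init__(self, in_channels=30, num_legs=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),               # pool over the time window
        )
        self.classifier = nn.Linear(64, num_legs)  # one logit per leg

    def forward(self, x):                          # x: (batch, channels, window)
        z = self.features(x).squeeze(-1)
        return torch.sigmoid(self.classifier(z))   # per-leg contact probabilities

probs = ContactNet()(torch.randn(8, 30, 50))       # 8 windows of 50 samples each
print(probs.shape)                                 # torch.Size([8, 4])

The estimated contacts then gate which foot positions the invariant EKF treats as stationary (next slides); the problem could equally be posed as classification over all joint contact states of the four legs rather than per leg.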


36 of 41

Contact Aided Invariant EKF on Mini Cheetah
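As a reminder of the filter structure (a hedged summary of contact-aided invariant EKF designs in this group's earlier work; the exact state used on Mini Cheetah may differ), the state is an element of a matrix Lie group collecting orientation, velocity, position, and the contact-point position, with IMU biases tracked alongside:

X_t \;=\;
\begin{bmatrix}
R_t & v_t & p_t & d_t \\
0_{1\times 3} & 1 & 0 & 0 \\
0_{1\times 3} & 0 & 1 & 0 \\
0_{1\times 3} & 0 & 0 & 1
\end{bmatrix}

where R_t is the body orientation, v_t the velocity, p_t the position, and d_t the estimated contact-point position; the network's contact estimates decide when a leg's forward-kinematic measurement of d_t is used as a stationary landmark.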


37 of 41

Contact Data Sets


  • MIT Mini Cheetah
  • 500,000 data points
  • Grass and concrete (more terrains in progress)
  • Ground truth trajectory:
    • Motion capture system
    • RGB-D SLAM

[Photos: concrete and grass test terrains]

38 of 41

Invariant EKF on Concrete Data Set


39 of 41

Invariant EKF on Uneven Grass Data Set


40 of 41

UM Campus Woodlands

  • New semantic classes: “walkability”, tree limbs, dense vs. sparse brush, etc.
  • Integrate Multi-Layer Semantic Map with Feedback Controller.


41 of 41

Acknowledgment

The Mini Cheetah is sponsored by the MIT Biomimetic Robotics Lab and NAVER LABS. This work was supported by the Toyota Research Institute (TRI) and by NSF.
