1 of 14

Large Mars Model

Umaa Rebbapragada (JPL PI)

Hannah Kerner (ASU, PI), Mirali Purohit (ASU Grad Student), Steven Lu (Co-I, JPL), Serina Diniega (Co-I, JPL)

March 26, 2024

NASA SMD AI Workshop 2024

2 of 14

Team

Jet Propulsion Laboratory

Umaa Rebbapragada

Data Scientist & Group Supervisor

Machine Learning & Instrument autonomy (MLIA)

Steven Lu

Data Scientist & Planetary Data Service (PDS) Imaging Node Technologist

Hannah Kerner

Assistant Professor, School of Computing and Augmented Intelligence

Mirali Purohit

Ph.D. Candidate, School of Computing and Augmented Intelligence

Arizona State University

Serina Diniega

Planetary Geologist

jpl.nasa.gov

3 of 14

Machine Learning (ML) Applied to Martian Datasets

| Science Task | Year | Datasets | Paper | Data Release |
|---|---|---|---|---|
| DL-Based Terrain Classification for Rover Missions (SPOC) | 2016 | MSL Navcam | Rothrock et al., AIAA SPACE | - |
| Surface Change Detection using Conv. AutoEncoders and Transfer Learning | 2019 | CTX, HiRISE | Kerner et al., IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens. | - |
| Novelty Detection in Multi-spectral Data | 2019 | MSL Mastcam | Kerner et al., AAAI | Zenodo |
| DoMars16K for Landform Classification on Planetary Surfaces | 2020 | CTX | Wilhelm et al., Remote Sensing | Zenodo |
| Deep Mars: CNN Classification of rover and orbital images | 2018, 2021 | HiRISE, Mastcam, MAHLI | Wagstaff et al., AAAI (2 papers) | Zenodo |
| AI4Mars: CNN Mars terrain classification | 2021 | MSL Navcam & Mastcam, MER Navcam | Swan et al., CVPRW | NASA Open Data Portal |
| Global Map of Martian Frost | 2022 | CTX, HiRISE, MCS, THEMIS, CRISM | Doran et al., PSIDA | JPL Dataverse |
| S5Mars for Semantic Segmentation on Mars | 2023 | MSL Mastcam | Zhang et al., arXiv:2207.01200 | S5Mars.github.io |
| Cone Segmentation | 2023 | CTX | Purohit et al., arXiv:2311.08657 | Zenodo |
| Map of Martian Frost Cap | 2023 | MARCI | Acharya et al., Icarus | - |


4 of 14

PDS Imaging Node Content-based Search

  • Enable fast community access to surface / rover features
  • CNNs adapted from AlexNet
      • HiRISENet (crater, dark dune, slope streak, impact ejecta, etc.)
      • MSLNet (float rock, layered rock, drill target, etc.)
  • Published datasets; benchmarked performance
      • 65K HiRISE image tiles; 7K MSL image tiles
      • Labeling done via crowdsourcing or expert volunteers, with a minimum of 3 labels per image
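The minimum-3-labels policy implies an aggregation step before an image enters a training catalog. A minimal sketch of majority voting with that policy; the `aggregate_labels` helper and its tie-breaking rule are illustrative, not the actual PDS pipeline:

```python
from collections import Counter

def aggregate_labels(votes, min_labels=3):
    """Majority-vote a list of per-image label strings.

    Returns (label, agreement) or None if the image has too few votes.
    Ties are left unresolved (None) so they can be routed back to experts.
    """
    if len(votes) < min_labels:
        return None  # needs more annotators before it enters the catalog
    counts = Counter(votes)
    (top, n), *rest = counts.most_common()
    if rest and rest[0][1] == n:
        return None  # tie: no majority, send back for re-labeling
    return top, n / len(votes)

# Three annotators agree 2-to-1 on "crater":
print(aggregate_labels(["crater", "crater", "dark dune"]))
# ('crater', 0.6666666666666666)
```

Keeping the agreement fraction alongside the winning label makes it easy to filter the catalog to high-consensus tiles later.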


MSL Rover Data

HiRISE Data


5 of 14

Global Map of Martian Frost

[Workflow diagram: Global Frost Map pipeline]
  • Visible data (HiRISE, CTX) → Convolutional Neural Net (CNN) → posterior frost probability in [0, 1]
  • Spectra (CRISM) and landforms (THEMIS) → CNN → posterior probability
  • Thermal data (MCS) → Gaussian Process Regression (GPR) → uniform temperature grid
  • Posteriors are adjusted against the temperature grid and combined into the Global Frost Map
  • Each frost map = (lat, lon, Ls, frost confidence)
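The adjustment step can be sketched as damping the CNN's frost posterior where the GPR temperature grid says the surface is too warm to hold frost. The logistic prior, the 150 K frost point, and the example map entry below are all illustrative assumptions, not the published pipeline:

```python
import math

def temperature_prior(temp_k, frost_point_k=150.0, softness_k=5.0):
    """Probability that frost can persist; drops smoothly above the frost point.

    The logistic shape and 150 K / 5 K constants are placeholders for
    whatever calibration the real pipeline uses.
    """
    return 1.0 / (1.0 + math.exp((temp_k - frost_point_k) / softness_k))

def adjusted_posterior(cnn_posterior, temp_k):
    """Down-weight the CNN frost probability where the surface is too warm."""
    p = cnn_posterior * temperature_prior(temp_k)
    return min(max(p, 0.0), 1.0)  # keep the result in [0, 1]

# One record of the final product: (lat, lon, solar longitude Ls, confidence).
# The coordinates here are made up for illustration.
frost_map_entry = (-49.2, 316.8, 143.5, adjusted_posterior(0.92, 145.0))
```

The key property is that the fused confidence stays in [0, 1] and can only be reduced, never raised, by the thermal prior.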


6 of 14

Global Map of Martian Frost

  • Inception V3 CNNs fine-tuned
  • Published datasets; benchmarked performance
      • ~30K labeled HiRISE tiles
      • ~88K labeled CTX tiles
  • Labeling done via expert volunteers, with a minimum of 3 labels per image
      • Several months of labeling per dataset
      • 2nd round of labeling after poor performance was discovered in the Southern hemisphere
  • CTX data to be released in 2024


Labelbox interface for annotating images

HiRISE Data


7 of 14

Current Workflow for Mars Models

  • Each team works independently on their use cases

  • Must curate their own training catalogs, typically on the order of 10K images

  • High cost and effort to produce training catalogs


[Workflow diagram: per-team pipelines]
  • MRO archive → Catalog 1 / 2 / 3 (each curated by one team) → DL Network 1 / 2 / 3 → fine-tuning → three separate fine-tuned models → test images
  • Team 1 wants impact-excavated sub-surface ice anywhere on Mars
  • Team 2 wants new dust devil tracks in hi-res Mars imagery
  • Team 3 wants defrosted gullies


8 of 14

Driving Questions: Can a Custom Foundation Model…

  • Improve performance over current benchmarks?

  • Significantly reduce the number of training examples needed on downstream tasks?

  • Enable few- or zero-shot learning?


9 of 14

Potential of Foundation Model

  • Eliminates hours of painstaking labeling

  • Eliminates the need for custom classifiers and workflows

  • Lets science users directly upload images for fine-tuning

  • Allows users to customize image retrieval

  • Lets PDS accommodate an arbitrary number of content-based searches vs. the static set currently offered


[Workflow diagram: shared-model pipeline]
  • MRO archive → pre-training → one Large Mars Model → Image set 1 / 2 / 3 (small, per team) → three fine-tuned models → test images
  • Team 1 wants impact-excavated sub-surface ice anywhere on Mars
  • Team 2 wants new dust devil tracks in hi-res Mars imagery
  • Team 3 wants defrosted gullies
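The cost saving in this workflow comes from the shared frozen encoder: each team's "fine-tuned model" can be as light as a nearest-centroid head over embeddings built from a handful of labeled images. A minimal sketch; the toy 2-D "embeddings", class names, and helper names are all illustrative stand-ins for real Large Mars Model features:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length embedding vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def fit_centroids(labeled_embeddings):
    """labeled_embeddings: {class_name: [embedding, ...]} from a few examples."""
    return {name: centroid(vecs) for name, vecs in labeled_embeddings.items()}

def classify(embedding, centroids):
    """Assign the class whose centroid is nearest in the embedding space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda name: dist2(embedding, centroids[name]))

# Hypothetical "defrosted gully" head built from just four labeled images:
heads = fit_centroids({
    "defrosted gully": [[0.9, 0.1], [0.8, 0.2]],
    "other terrain":   [[0.1, 0.9], [0.2, 0.8]],
})
print(classify([0.7, 0.3], heads))  # defrosted gully
```

No gradient updates are needed per team, which is what makes an arbitrary number of content-based searches feasible on top of one shared model.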


10 of 14

Large Mars Model

  • JPL Strategic University Research Partnership (SURP) FY24 Award

  • Leverage expertise of ASU Prof. Hannah Kerner’s Lab

  • Use benchmarked use cases to evaluate effectiveness of Large Mars Model


11 of 14

Masked Auto-Encoder (MAE)

  • Self-supervised learning

  • Learns by masking out image patches and minimizing the error of reconstructing them

  • Using a vision transformer (ViT) backbone


He et al., CVPR 2022
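The MAE objective can be sketched in a few lines. The ViT encoder and decoder are omitted; only the two pieces that define the method are shown, random patch masking (75% ratio, as in He et al.) and MSE computed on the masked patches only:

```python
import random

def mask_patches(patches, mask_ratio=0.75, rng=None):
    """Randomly hide a fraction of patches, as in the MAE pre-training setup.

    Returns (visible, hidden) index lists. Only `visible` patches are fed
    to the ViT encoder; the decoder is trained to reconstruct the patches
    at the `hidden` indices. Default rng is seeded for reproducibility.
    """
    rng = rng or random.Random(0)
    idx = list(range(len(patches)))
    rng.shuffle(idx)
    n_hidden = int(len(patches) * mask_ratio)
    return sorted(idx[n_hidden:]), sorted(idx[:n_hidden])

def reconstruction_loss(pred, target, hidden):
    """Mean squared error over the masked (hidden) patches only."""
    errs = [(p - t) ** 2 for i in hidden for p, t in zip(pred[i], target[i])]
    return sum(errs) / len(errs)

# A 4x4 grid of (flattened) patches: 12 of 16 are hidden at a 0.75 ratio.
patches = [[float(i)] for i in range(16)]
visible, hidden = mask_patches(patches)
```

Restricting the loss to hidden patches is what forces the encoder to build representations that generalize, rather than copy visible pixels through.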


12 of 14

Planned Work


Bring current benchmarks to SOTA

Self-supervised Pre-training

Validation: ML & science-specific

Label Efficiency & Zero-shot Performance


13 of 14

Applying SOTA to Benchmarks

Martian Frost

| Model | Pre-training | Fine-tuning | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|
| Inception V3 | - | Random initialization of weights | 0.9112 | 0.9168 | 0.9112 | 0.9088 |
| Inception V3 | ImageNet | Pre-trained model used as a feature extractor | 0.9067 | 0.909 | 0.9067 | 0.9048 |
| Inception V3 | ImageNet | End-to-end fine-tuning | 0.9577 | 0.9594 | 0.9577 | 0.9572 |
| ViT | - | Random initialization of weights | 0.8167 | 0.8328 | 0.8167 | 0.8197 |
| ViT | DoMars16 | End-to-end fine-tuning | 0.9015 | 0.901 | 0.9015 | 0.9009 |
| ViT | ImageNet | End-to-end fine-tuning | 0.9859 | 0.986 | 0.9859 | 0.986 |

HiRISENet

| Model | Pre-training | Fine-tuning | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|
| Inception V3 | - | Random initialization of weights | 0.6626 | 0.8598 | 0.6626 | 0.7233 |
| Inception V3 | ImageNet | Pre-trained model used as a feature extractor | 0.7485 | 0.8526 | 0.7485 | 0.7844 |
| Inception V3 | ImageNet | End-to-end fine-tuning | 0.7172 | 0.899 | 0.7172 | 0.7723 |
| ViT | - | Random initialization of weights | 0.431 | 0.4579 | 0.4236 | 0.4401 |
| ViT | DoMars16 | End-to-end fine-tuning | 0.4646 | 0.7995 | 0.4646 | 0.5475 |
| ViT | ImageNet | End-to-end fine-tuning | 0.8728 | 0.9218 | 0.8728 | 0.8847 |

  • Upgrading from Inception V3 to a ViT increased performance

  • Plan to explore other supervised pre-training variants before moving on to our self-supervised MAE strategy
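A note on reading the tables: Accuracy and Recall coincide in nearly every row, which is a signature of support-weighted averaging, since the support-weighted mean of per-class recalls is algebraically equal to overall accuracy. A minimal check of that identity (the labels below are hypothetical, not the benchmark data):

```python
from collections import Counter

def weighted_recall(y_true, y_pred):
    """Support-weighted average of per-class recall.

    Sum over classes of (n_c / N) * (TP_c / n_c) = (sum of TP_c) / N,
    which is exactly the overall accuracy.
    """
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        score += (n / total) * (hits / n)
    return score

y_true = ["frost", "frost", "frost", "clear", "clear"]
y_pred = ["frost", "frost", "clear", "clear", "frost"]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
assert abs(weighted_recall(y_true, y_pred) - accuracy) < 1e-12  # both 0.6
```

Weighted precision and F1 do not collapse to accuracy this way, which is why those columns differ from it in the tables.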


14 of 14
