1 of 41

Introduction and Workshop Overview

Ujaval Gandhi

ujaval@spatialthoughts.com

GEE Dynamic World Workshop

2 of 41

Introduction to Dynamic World

3 of 41

What is Dynamic World?

4 of 41

5 of 41

Dynamic World: Near Real Time land cover data

Global Land Cover “Bands”

10m resolution based on ESA Sentinel-2

Near Real Time: 2–5 days globally, for seasonal and recent events

Per-pixel probabilities across 9 classes

Free, Open License model & dataset


Dataset and AI Model

Dynamic World, Near real-time global 10m land use land cover mapping

Christopher F. Brown, …Rebecca Moore, Alex Tait

Data Descriptor | 15 April 2022

6 of 41

Dynamic World

  • A freely available, openly licensed global land cover dataset by Google and WRI, launched in June 2022.
  • Based on ESA Sentinel-2 imagery.
  • Land cover predictions are generated by a neural network trained on a large amount of global training data.
  • The model classifies each input image into 9 land cover classes.
    • The output is a per-pixel probability for each of the 9 classes.
  • The entire Sentinel-2 archive from 2015 onwards has been classified.
  • The model runs in near real-time on new scenes, and the results are available in the data catalog.

7 of 41

Near Real-time Data Production

Dynamic World data is available from June 23, 2015 up to 2–5 days ago.

8 of 41

The 9 land cover classes: Bare Ground, Built-up Areas, Snow/Ice, Water, Crops, Flooded Vegetation, Shrub/Scrub, Trees, Grass

9 of 41

Dynamic World Dataset

  • 14.8 PB in the Google Earth Engine Data Catalog
  • 17.8M total Dynamic World assets, and counting
  • 1M+ CPU hours to produce
  • 5640+ new Dynamic World assets per day

10 of 41

Per-pixel probabilities across 9 classes

  • Water: 0.0381
  • Trees: 0.4944
  • Grass: 0.0382
  • Flooded vegetation: 0.0296
  • Crops: 0.0342
  • Shrub/Scrub: 0.0711
  • Built: 0.0355
  • Bare Ground: 0.0679
  • Snow/Ice: 0.1911
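The discrete Dynamic World label is simply the class with the highest probability. A minimal plain-JavaScript sketch (not Earth Engine code) using the rounded values above; the lowercase band-style names are for illustration:

```javascript
// The 9 Dynamic World probability values for the example pixel,
// rounded to 4 decimal places. Together they sum to ~1.
const probabilities = {
  water: 0.0381,
  trees: 0.4944,
  grass: 0.0382,
  flooded_vegetation: 0.0296,
  crops: 0.0342,
  shrub_and_scrub: 0.0711,
  built: 0.0355,
  bare: 0.0679,
  snow_and_ice: 0.1911,
};

// Pick the class with the highest probability (an argmax), which is
// what a discrete per-pixel label corresponds to.
function topClass(probs) {
  return Object.entries(probs).reduce((best, entry) =>
    entry[1] > best[1] ? entry : best
  )[0];
}

console.log(topClass(probabilities)); // 'trees'
```

For this pixel the model is most confident in Trees (0.4944), but note the non-trivial Snow/Ice probability (0.1911): the full probability vector carries information that a single discrete label throws away.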

11 of 41

Time-Series of Class Probabilities

12 of 41

“map with you, not for you”

A good mental model for Dynamic World is NOT to think of it as a land cover product, but as a dataset that provides 9 additional bands of land cover information for each Sentinel-2 image, which you can refine to build a locally relevant land cover map or change detection model.
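In that spirit, you need not simply take the most probable class for each pixel; you can impose locally tuned rules on the probability bands. A plain-JavaScript sketch of one possible refinement, where the 0.4 threshold, the helper names, and the reduced 5-class example pixel are hypothetical, for illustration only:

```javascript
// Refine a pixel's label with a locally tuned rule instead of a plain
// argmax: only call a pixel 'built' when the model is reasonably
// confident; otherwise fall back to the most probable remaining class.
// The 0.4 threshold is a hypothetical value you would tune for your region.
const BUILT_THRESHOLD = 0.4;

function refineLabel(probs) {
  if (probs.built >= BUILT_THRESHOLD) {
    return 'built';
  }
  // Argmax over the remaining classes.
  let best = null;
  for (const [name, p] of Object.entries(probs)) {
    if (name === 'built') continue;
    if (best === null || p > best[1]) best = [name, p];
  }
  return best[0];
}

// A pixel where 'built' narrowly wins the argmax but sits below the
// confidence threshold, so the refined label falls back to 'trees'.
const pixel = { water: 0.05, trees: 0.3, grass: 0.1, built: 0.35, bare: 0.2 };
console.log(refineLabel(pixel)); // 'trees'
```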

13 of 41

Dynamic World Training Data

14 of 41

Spatial Context

  • The receptive field of the Dynamic World model is ~21 pixels across.
  • Reflectance values from up to 100 m away from a target 10 m pixel are used for prediction.
  • Pros:
    • Greatly reduces the “salt-and-pepper” effect typically seen when pixels are classified in isolation.
    • Captures phenomena such as urbanization.
  • Cons:
    • Certain classes lose fine-grained detail (i.e., they appear blobby).
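The 100 m figure follows directly from the receptive field: a 21-pixel window centred on the target extends 10 pixels to each side, and each Sentinel-2 pixel is 10 m across. A quick arithmetic check in plain JavaScript:

```javascript
// A 21-pixel-wide window centred on the target pixel reaches
// (21 - 1) / 2 = 10 pixels to each side; at Sentinel-2's 10 m
// resolution that is 100 m of spatial context in every direction.
const receptiveFieldPixels = 21;
const pixelSizeMeters = 10;

const reachMeters = ((receptiveFieldPixels - 1) / 2) * pixelSizeMeters;
console.log(reachMeters); // 100
```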

15 of 41

Sentinel-2 Composite (2019)

16 of 41

Sentinel-2 Composite (2020)

17 of 41

ESA WorldCover 2020 (Bare/Sparse Vegetation)

18 of 41

Sentinel-2 Composite (2019)

19 of 41

Sentinel-2 Composite (2020)

20 of 41

Dynamic World (built)

21 of 41

Temporal Context

  • The Dynamic World model needs to generate near real-time predictions.
  • Each image is classified without knowledge of previous or subsequent images.
  • It does NOT have any temporal context.
  • Cons:
    • ‘crops’ are often labeled as ‘bare’ or ‘grass’, and vice-versa.
    • Users need to add their own temporal context by aggregating multiple Dynamic World images over time. [Xu, P. et al. (2024)]
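For example, averaging the per-image probabilities over a window before picking a class can stabilize the label. A plain-JavaScript sketch with hypothetical values, where the single-date winner flips between ‘bare’ and ‘crops’ (as often happens for fields that lie bare between growing seasons) but the temporal mean settles on ‘crops’:

```javascript
// Hypothetical 'crops' vs 'bare' probabilities for one pixel on three
// Sentinel-2 dates; the single-date argmax flips between classes.
const series = [
  { crops: 0.35, bare: 0.45, grass: 0.2 }, // bare wins on this date
  { crops: 0.55, bare: 0.25, grass: 0.2 }, // crops wins
  { crops: 0.5, bare: 0.3, grass: 0.2 },   // crops wins
];

// Average each class's probability across the time series.
function meanProbabilities(images) {
  const mean = {};
  for (const image of images) {
    for (const [name, p] of Object.entries(image)) {
      mean[name] = (mean[name] || 0) + p / images.length;
    }
  }
  return mean;
}

// Argmax over the aggregated probabilities.
function topClass(probs) {
  return Object.entries(probs).reduce((best, e) => (e[1] > best[1] ? e : best))[0];
}

console.log(topClass(meanProbabilities(series))); // 'crops'
```

In Earth Engine, the same idea amounts to reducing a filtered collection of Dynamic World images with a mean before deriving a label.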

22 of 41

23 of 41

24 of 41

25 of 41

26 of 41

27 of 41

Summary

  • Dynamic World is different from other pixel-based classification datasets.
    • ‘built’, not ‘urban’
    • ‘crops’, not ‘cropland’
  • Use the Dynamic World probability bands as inputs to your models.
  • Class probabilities have a much closer correlation with land cover than spectral bands or spectral indices.
  • You can process and aggregate Dynamic World probability time-series to derive your own products.

28 of 41

Module Overview

Module 1: Change Detection
  • JavaScript Basics
  • Creating Composites
  • Importing Data
  • Computation with Images
  • Raster to Vector Conversion
  • Export

Module 2: Supervised Classification
  • Introduction to Machine Learning
  • Collecting training data
  • Classifying images
  • Accuracy Assessment
  • Exporting Results

Modules 3 and 4: Time-Series and Earth Engine Apps
  • Processing Time-Series
  • Creating Charts
  • Building User Interfaces in Earth Engine
  • Publishing your first Earth Engine App

29 of 41

Module 1: Change Detection

30 of 41

New Urban Areas: 2020 vs. 2022

31 of 41

Module 2: Supervised Classification

32 of 41

Sentinel-2 Image → Training Samples → Classified Image

33 of 41

Module 3: Exploring Time Series

34 of 41

35 of 41

Module 4: Earth Engine Apps

36 of 41

37 of 41

38 of 41

Let’s get coding

39 of 41

What is your favorite programming language?

40 of 41

Image Source: Reddit

41 of 41

JavaScript vs. Python

  • You are learning the Earth Engine API.
  • The API is exactly the same regardless of the language you choose.
    • With a few small caveats.
  • The JavaScript API is the most mature and the easiest to get started with.
    • No installation required
    • No need to worry about authentication
    • Very easy to share scripts and ask for help
    • Building and deploying apps is very easy
  • The Python API is much more powerful.
    • Integrates with other data science libraries for data processing and plotting
    • Lets you automate launching and managing Exports
  • You can easily convert any JavaScript code to Python when needed.