A.I. + X.R. for Garden Design
Lance Legel, CEO
Workshop on A.I. + X.R. for Gardens
INTELLIGENT CULTIVATION OF URBAN ECOSYSTEMS
DECEMBER 1, 2023 @ UNIVERSITY OF FLORIDA
Develop A.I. models for selecting and placing plants anywhere around the world
TECHNICAL GOALS
Review: iNaturalist Research-Grade Observations (29 million in-situ photos of plants with GPS coordinates, worldwide; "Research-Grade" means 2 out of 3 reviewers agree on a taxon)
Review: Global Biodiversity Information Facility, https://GBIF.org (1000s of databases on plant properties)
e.g. University of Florida Herbarium
Review: Global Biodiversity Information Facility (Query any geography for local plants)
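GBIF's occurrence-search API supports exactly this kind of geographic query via latitude/longitude range parameters. A minimal sketch that builds such a query URL for plants (GBIF backbone `kingdomKey=6` is Plantae; the bounding box here is a hypothetical one roughly around Gainesville, FL):

```python
def gbif_occurrence_url(lat_range, lon_range, limit=20):
    """Build a GBIF occurrence-search URL for plant records
    inside a latitude/longitude bounding box."""
    base = "https://api.gbif.org/v1/occurrence/search"
    params = {
        "kingdomKey": 6,  # Plantae in the GBIF backbone taxonomy
        "decimalLatitude": f"{lat_range[0]},{lat_range[1]}",
        "decimalLongitude": f"{lon_range[0]},{lon_range[1]}",
        "limit": limit,
    }
    query = "&".join(f"{k}={v}" for k, v in params.items())
    return f"{base}?{query}"

# Hypothetical example: plants observed around Gainesville, FL
url = gbif_occurrence_url((29.6, 29.7), (-82.4, -82.3))
```

Fetching that URL (e.g. with `requests.get(url).json()`) returns a paged JSON list of occurrence records, each carrying a species name and coordinates.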
Results: Dataset for 3co A.I. Species Recommender (2.6 million images from 33,000 most common species worldwide)
Results: Dataset for 3co v. 0.1 Garden Designer (100 thousand+ garden photos from 250 largest urban areas in U.S.)
Demo: DinoV2 Vision Transformer from Meta (depth estimation on test image)
https://dinov2.metademolab.com
Review: DinoV2 Vision Transformer (2023) from Meta (Visual A.I. model pre-trained on 142 million images)
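The ViT-L/14 variant shown later in these slides embeds images as non-overlapping 14×14-pixel patches (the `Conv2d` with `kernel_size=(14, 14)`, `stride=(14, 14)`), so input dimensions must be a multiple of 14. A quick sanity check of the resulting token count:

```python
def vit_token_count(image_size, patch_size=14, cls_token=True):
    """Number of tokens a ViT processes: one per non-overlapping
    patch, plus an optional [CLS] token."""
    h, w = image_size
    assert h % patch_size == 0 and w % patch_size == 0, "resize/pad to a multiple of 14"
    return (h // patch_size) * (w // patch_size) + (1 if cls_token else 0)

n = vit_token_count((224, 224))  # 16 x 16 = 256 patches + [CLS] = 257
```

This is also why the augmentation stack shown later includes a resize-and-pad step with 14 as a parameter.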
Bigger is better: larger models (up to 1.1 billion trainable parameters) keep improving as they scale…
Figure from “DINOv2: Learning Robust Visual Features without Supervision”
Results: Plant A.I. Model from 3co (Part 1 of 2, DinoV2 with data augmentation…)
DinoVisionTransformerClassifier(
  (data_augmentation): Compose(
    RandomApply([
      RandomResizedCrop(size=(image_dimension, image_dimension)),
      RandomZoomOut(side_range=(1.0, 2.0))
    ], p=0.5),
    ResizeAndPad(target_size, 14),
    ColorJitter(brightness=.3, hue=.04),
    RandomRotation(360),
    RandomHorizontalFlip(),
    RandomVerticalFlip(),
    ToTensor()
  )
  (transformer): DinoVisionTransformer(
    (patch_embed): PatchEmbed(
      (proj): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14))
      (norm): Identity()
    )
    (blocks): ModuleList(
      (0-23): 24 x NestedTensorBlock(
        (norm1): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
        (attn): MemEffAttention(
          (qkv): Linear(in_features=1024, out_features=3072, bias=True)
          (attn_drop): Dropout(p=0.0, inplace=False)
          (proj): Linear(in_features=1024, out_features=1024, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
        )
        (ls1): LayerScale()
        (drop_path1): Identity()
        (norm2): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
        (mlp): Mlp(
          (fc1): Linear(in_features=1024, out_features=4096, bias=True)
          (act): GELU(approximate='none')
          (fc2): Linear(in_features=4096, out_features=1024, bias=True)
          (drop): Dropout(p=0.0, inplace=False)
        )
        (ls2): LayerScale()
        (drop_path2): Identity()
      )
    )
    ...
Results: Plant A.I. Model from 3co (Part 2 of 2, … with GPS encoding and classifier layers for 33,000 species)
    ...
    (norm): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
    (head): Identity()
  )
  (geo_encoder_1): Sequential(
    (0): Linear(in_features=3, out_features=348, bias=True)
    (1): ReLU()
    (2): LayerNorm((348,), eps=1e-05, elementwise_affine=True)
    (3): ResNormLayer(
      (nonlin1): ReLU()
      (nonlin2): ReLU()
      (norm_fn1): LayerNorm((348,), eps=1e-05, elementwise_affine=True)
      (norm_fn2): LayerNorm((348,), eps=1e-05, elementwise_affine=True)
      (w1): Linear(in_features=348, out_features=348, bias=True)
      (w2): Linear(in_features=348, out_features=348, bias=True)
    )
  )
  (classifier): Sequential(
    (0): Linear(in_features=1372, out_features=1372, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=1372, out_features=33701, bias=True)
  )
)
Demo: Plant A.I. Model from 3co (~80% accurate in testing after training for 5 days on an NVIDIA RTX 3090 GPU)
1 Lavandula angustifolia 95.2%
2 Lavandula stoechas 2.24%
3 Lavandula pedunculata 0.98%
4 Lavandula latifolia 0.89%
5 Lavandula dentata 0.47%
6 Salvia officinalis 0.04%
7 Calluna vulgaris 0.04%
8 Salvia yangii 0.01%
9 Lavandula buchii 0.01%
10 Muscari botryoides 0.00%
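Ranked lists like the one above come from normalizing the classifier's output over all 33,701 classes and keeping the top k. A minimal sketch, assuming a softmax over the final logits (the logits and the 3-species label set here are made up for illustration):

```python
import numpy as np

def top_k_predictions(logits, labels, k=3):
    """Softmax over logits, then return the k most probable labels
    with their probabilities, highest first."""
    z = logits - np.max(logits)               # stabilize the exponentials
    probs = np.exp(z) / np.sum(np.exp(z))
    order = np.argsort(probs)[::-1][:k]       # indices of the k largest
    return [(labels[i], float(probs[i])) for i in order]

labels = ["Lavandula angustifolia", "Lavandula stoechas", "Salvia officinalis"]
preds = top_k_predictions(np.array([4.0, 1.0, 0.5]), labels, k=2)
```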
Demo: Plant A.I. Model from 3co (~80% accurate in testing after training for 5 days on an NVIDIA RTX 3090 GPU)
1 Cotinus coggygria 83.88%
2 Cenchrus alopecuroides 6.19%
3 Elsholtzia ciliata 1.88%
4 Agastache scrophulariifolia 1.40%
5 Miscanthus sinensis 0.56%
6 Phragmites australis 0.52%
7 Solidago sempervirens 0.42%
8 Veronicastrum virginicum 0.39%
9 Zizania aquatica 0.37%
10 Reynoutria japonica 0.33%
http://files.thehighline.org.s3.amazonaws.com/pdf/High_Line_Full_Plant_List.pdf
= correct genus / species