Segmentation of challenging microscopy images
Yaroslava Lochman
Acknowledgement
Dmytro Hrishko
Yaroslava Lochman
Illya Stolpakov
Matus Chladek
Dmytro Fishman
Motivation
Automated medical image processing & analysis
Fast & accurate results
Fast disease diagnostics
Presentation pipeline
- Problem statement
- Approaches
- Microscopy images overview
- Processing steps
- Architecture details
- Losses
- Evaluation metrics
- Final model details
- Evaluation scores
Problem statement
Semantic segmentation
[Figure: a 400×750 microscopy image → Model → segmentation mask]
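Semantic segmentation assigns a class label to every pixel: an H×W image goes in, an H×W label mask comes out. A minimal sketch of that input/output contract (random NumPy arrays stand in for a real image and model; the sizes follow the slide's 400×750 example):

```python
import numpy as np

# Hypothetical sizes matching the slide's example image.
H, W, C = 400, 750, 3           # input: height, width, RGB channels
NUM_CLASSES = 2                 # binary segmentation: background vs. foreground

image = np.random.rand(H, W, C)             # stand-in for a microscopy image

# A segmentation model outputs one score per class per pixel ...
logits = np.random.rand(H, W, NUM_CLASSES)  # stand-in for model(image)

# ... and the predicted mask is the per-pixel argmax over classes.
mask = logits.argmax(axis=-1)

print(mask.shape)   # (400, 750) -- one class label per input pixel
```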
Approaches
“Classic” approaches
Single-Histogram Class Models (2006)
TextonForest (2008)
Random Forest (2008)
Deep learning era (CNNs)
FCN (2014)
UNet (2015)
SegNet (2015)
PSPNet (2016)
LinkNet (2017)
DeepLab (2017)
FC-DenseNet or Tiramisu (2017)
TernausNet (2018)
Data insights
Processing microscopy images
Microscopy images
[Example microscopy images]
Split: 27 train / 8 val
Train-time augmentation
Motivation: alternate the input images during training
Libraries used:
Final: random crop 256x256
✅ Flip
✅ Affine transformations
✅ Translation
✅ Contrast / Brightness
Grayscale
✅ HSV
✅ Gaussian Noise
Median / Gaussian Blur
✅ Sharpen / Emboss
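The key constraint behind all of these spatial augmentations is that the image and its mask must be transformed identically, or the pixel-wise labels drift off their pixels. A minimal NumPy sketch of the random 256×256 crop plus a random flip (an illustration only, not the unnamed libraries the slide refers to):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop_flip(image, mask, size=256):
    """Random size x size crop plus random horizontal flip.

    The same crop window and flip decision are applied to both the
    image and the mask so pixel-wise labels stay aligned.
    """
    h, w = image.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    image = image[top:top + size, left:left + size]
    mask = mask[top:top + size, left:left + size]
    if rng.random() < 0.5:           # flip half the time
        image = image[:, ::-1]       # flip along the width axis
        mask = mask[:, ::-1]
    return image, mask

img = rng.random((400, 750, 3))           # stand-in microscopy image
msk = rng.integers(0, 2, (400, 750))      # stand-in binary mask
aug_img, aug_msk = random_crop_flip(img, msk)
print(aug_img.shape, aug_msk.shape)       # (256, 256, 3) (256, 256)
```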
Train-time augmentation. Examples
[Example augmented 256×256 crops]
Model
Architecture overview & training
Semantic segmentation
Encoder (downsampling)
Pooling and convolutions with different padding, stride, and dilation
(blue maps are inputs, cyan maps are outputs):
- padding ‘same’, stride 2, no dilation
- no padding, no stride, dilation 2
- no padding, no stride, no dilation (“simple” convolution)
- pooling 2×2, stride 2
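The effect of padding, stride, and dilation on the feature-map size follows one arithmetic rule: output = ⌊(i + 2p − d(k−1) − 1) / s⌋ + 1. A small sketch checking it against the encoder variants listed above (the 256-pixel input size is just an example):

```python
import math

def conv_out_size(i, k, s=1, p=0, d=1):
    """Spatial output size of a convolution:
    i = input size, k = kernel, s = stride, p = padding, d = dilation."""
    return math.floor((i + 2 * p - d * (k - 1) - 1) / s) + 1

# 'same' padding with stride 2: halves the resolution
print(conv_out_size(256, k=3, s=2, p=1))   # 128
# no padding, no stride, dilation 2: the 3x3 kernel covers a 5x5 window
print(conv_out_size(256, k=3, d=2))        # 252
# "simple" convolution (no padding, stride 1): shrinks by k - 1
print(conv_out_size(256, k=3))             # 254
# 2x2 pooling with stride 2 follows the same rule with k=2, s=2
print(conv_out_size(256, k=2, s=2))        # 128
```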
Decoder (upsampling)
- no padding, no stride, no dilation (“simple” convolution)
- transposed convolutions (deconvolutions): padding, stride 2
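A transposed convolution inverts the plain-convolution size arithmetic: output = s(i − 1) + k − 2p, so a stride-2 transposed convolution doubles the spatial resolution. A quick sketch (kernel/padding choices here are illustrative, not taken from the slides):

```python
def tconv_out_size(i, k, s=1, p=0):
    """Spatial output size of a transposed convolution
    (the inverse of the plain-convolution size formula)."""
    return s * (i - 1) + k - 2 * p

# Two common stride-2 upsampling configurations, both doubling 128 -> 256:
print(tconv_out_size(128, k=4, s=2, p=1))   # 256
print(tconv_out_size(128, k=2, s=2))        # 256
```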
More about architecture
Final 1x1 convolution + softmax
Skip connection
TernausNet
Metrics. Loss function
Pixel-wise binary cross entropy loss
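"Pixel-wise" means the binary cross entropy is computed at every pixel and averaged over the whole mask. A minimal NumPy sketch (the example masks are made up for illustration):

```python
import numpy as np

def pixelwise_bce(y_true, y_pred, eps=1e-7):
    """Mean binary cross entropy over all pixels.

    y_true: binary ground-truth mask; y_pred: predicted probabilities.
    """
    y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

y = np.array([[1, 0], [1, 1]], dtype=float)     # toy 2x2 ground truth
p = np.array([[0.9, 0.1], [0.8, 0.6]], dtype=float)  # toy predictions
print(pixelwise_bce(y, p))   # ~0.236: the 0.6 pixel dominates the loss
```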
Metrics. Evaluation
Precision & Recall: Precision = TP / (TP + FP), Recall = TP / (TP + FN)
Dice: D(Y, P) = 2|Y ∩ P| / (|Y| + |P|)
F-score (generalization): Fβ = (1 + β²) · Precision · Recall / (β² · Precision + Recall)
Jaccard: J(Y, P) = |Y ∩ P| / |Y ∪ P|
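For binary masks, all four metrics reduce to counting true/false positives and false negatives over pixels. A sketch computing them side by side on a toy pair of flattened masks (my own illustration, not the evaluation code behind the slides):

```python
import numpy as np

def seg_metrics(y, p):
    """Precision, recall, Dice, and Jaccard for binary masks
    y (ground truth) and p (prediction)."""
    tp = np.sum((y == 1) & (p == 1))
    fp = np.sum((y == 0) & (p == 1))
    fn = np.sum((y == 1) & (p == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)    # = 2|Y ∩ P| / (|Y| + |P|)
    jaccard = tp / (tp + fp + fn)         # = |Y ∩ P| / |Y ∪ P|
    return precision, recall, dice, jaccard

y = np.array([1, 1, 1, 0, 0, 0])   # toy flattened ground-truth mask
p = np.array([1, 1, 0, 1, 0, 0])   # toy flattened prediction
# tp=2, fp=1, fn=1 -> precision = recall = Dice = 2/3, Jaccard = 1/2
print(seg_metrics(y, p))
```

Note the printed values also satisfy D = 2J / (1 + J), the standard relation between Dice and Jaccard.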
Metrics. Evaluation
Jaccard index, or Intersection over Union (IoU):
J(Y, P) = |Y ∩ P| / |Y ∪ P|
[Figure: Venn diagrams of ground truth Y and prediction P illustrating Precision, Recall, and Dice]
Metrics. Dice & Jaccard losses and further improvements
Dice & Jaccard are often used in addition to BCE,
e.g. soft Jaccard as an addition to BCE (used in TernausNet)
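The hard Jaccard index is not differentiable, so the "soft" version replaces set membership with predicted probabilities: the intersection becomes Σ yᵢpᵢ and the union Σ yᵢ + Σ pᵢ − Σ yᵢpᵢ. A sketch of combining it with BCE (the unweighted combination below is my simplification; TernausNet uses a weighted form):

```python
import numpy as np

def soft_jaccard(y_true, y_prob, eps=1e-7):
    """Differentiable Jaccard: probabilities stand in for hard counts."""
    intersection = np.sum(y_true * y_prob)
    union = np.sum(y_true) + np.sum(y_prob) - intersection
    return (intersection + eps) / (union + eps)

def bce(y_true, y_prob, eps=1e-7):
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_prob)
                    + (1 - y_true) * np.log(1 - y_prob))

def combined_loss(y_true, y_prob):
    # BCE plus a term that vanishes as the soft Jaccard approaches 1.
    return bce(y_true, y_prob) - np.log(soft_jaccard(y_true, y_prob))

y = np.array([1.0, 1.0, 0.0, 0.0])   # toy flattened ground truth
p = np.array([0.9, 0.8, 0.2, 0.1])   # toy predicted probabilities
print(combined_loss(y, p))
```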
Results
Baseline results
Mean IoU: 0.79
Mean precision: 0.87
Mean recall: 0.91
Final results
Mean IoU: 0.88
Mean precision: 0.94
Mean recall: 0.93
Accumulated results
Final training details
Architecture: vanilla UNet
Loss: BCE
Adam optimizer
Learning rate 0.0002
Batch size 16
Different augmentations described earlier
200 epochs
Thank you!