This spreadsheet is an attempt to compile the many open source ML tools available for pathology images: code, Jupyter notebooks, pretrained models, and datasets (see the tabs across the bottom). Have you open sourced your own project? Have you used someone else's publicly accessible code or data that others might benefit from? I would greatly appreciate your help in expanding and updating this list. Additions or corrections? This spreadsheet is editable by all. Please add new resources or correct info in existing rows. Questions or comments? Email heather@pixelscientia.com

Row | Category | Name & Link | Framework | License | Description | Comments |
---|---|---|---|---|---|---|
3 | annotation | QuickAnnotator | PyTorch | BSD 3-Clause Clear | Rapidly bootstrap annotation creation for digital pathology projects by helping identify images and small regions of interest to annotate | |
4 | annotation | NuClick | PyTorch | | CNN-based approach to speed up collecting annotations for microscopic objects requiring minimum interaction from the annotator | |
5 | annotation | HistomicsUI | | Apache-2.0 | Web-based user interface for visualizing and annotating whole-slide images (part of the Digital Slide Archive) | |
6 | annotation | MONAI Label | PyTorch | Apache 2.0 | MONAI Label implements three interaction models for AI-assisted annotation: Segmentation Nuclei, DeepEdit, and NuClick | |
7 | anomaly detection | P-CEAD | | Apache-2.0 | Anomaly detection using a progressive autoencoder for inpainting | |
8 | augmentation | he-auto-augment | TensorFlow | | H&E-tailored RandAugment: automatic data augmentation policy selection for H&E-stained histopathology | |
9 | augmentation | Stain Mix-up | | | Stain Mix-Up: Domain Generalization for Histopathology Images, used as an image augmentation technique | |
10 | augmentation | style-transfer-for-digital-pathology | PyTorch | | Learning domain-agnostic visual representation for computational pathology using medically-irrelevant style transfer augmentation | |
11 | augmentation + stain normalization | stainlib | | MIT | Augmentation & normalization of H&E images | |
12 | cell segmentation | FewShotCellSegmentation | PyTorch | MIT | Few-shot microscopy image cell segmentation | |
13 | cell segmentation | Cell-DETR | PyTorch | MIT | Attention-Based Transformers for Instance Segmentation of Cells in Microstructures | |
14 | co-registration | HistoReg | | | Framework for registration of sequential digitized histology slices | |
15 | co-registration | DeepHistReg | PyTorch | Apache-2.0 | ||
16 | domain adversarial training | H&E-adversarial CNN | PyTorch | |||
17 | graph NN | Histocartography | PyTorch | AGPL-3.0 | Python library for graph-based computational pathology | |
18 | graph NN | Patch-GCN | PyTorch | GPL-3.0 | Context-Aware Survival Prediction using Patch-based Graph Convolutional Networks | |
19 | graph NN | DeepLIIF | PyTorch | | Deep Learning Inferred Multiplex ImmunoFluorescence for IHC Image Quantification | https://deepliif.org ; AI-ready datasets, code, and ImageJ/QuPath plugin available on GitHub |
20 | MIL | MONAI | PyTorch | Apache 2.0 | ||
21 | MIL | DeepMed | Fastai | | Predicts any "label" directly from digitized pathology slides. Common use cases that can be reproduced by this pipeline include prediction of microsatellite instability in colorectal cancer, mutations in lung cancer, and subtypes of renal cell carcinoma | |
22 | MIL | Slideflow | PyTorch & Tensorflow | GPL-3.0 | Slideflow provides a unified API for building and testing deep learning models for digital pathology, supporting both Tensorflow and PyTorch. For MIL models, Slideflow uses CLAM (described separately below) but with a separate/optimized slide-reading & data processing framework. | |
23 | MIL | CLAM | PyTorch | GPL-3.0 | Weakly-supervised method that uses attention-based learning to automatically identify sub-regions of high diagnostic value in order to accurately classify the whole slide, while also utilizing instance-level clustering over the representative regions identified to constrain and refine the feature space | |
24 | MIL | SparseConvMIL | PyTorch | AGPL-3.0 | Sparse Convolutional Context-Aware Multiple Instance Learning for Whole Slide Image Classification | |
25 | MIL | DT-MIL | PyTorch | Apache-2.0 | DT-MIL: Deformable Transformer for Multi-instance Learning on Histopathological Image | |
26 | MIL | HIA | PyTorch | | Histopathology Image Analysis | |
27 | MIL | TransMIL | PyTorch | | TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification | |
28 | MIL | Additive MIL: Intrinsically Interpretable MIL | PyTorch | | A simple formulation of MIL models, which enables interpretability while maintaining similar predictive performance. Additive MIL models enable spatial credit assignment such that the contribution of each region in the image can be exactly computed and visualized. | |
29 | MIL | TransPath | PyTorch | Apache-2.0 | Transformer-based Unsupervised Contrastive Learning for Histopathological Image Classification | |
30 | MIL survival | MCAT | PyTorch | GPL-3.0 | Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images | |
31 | multiplex | SIMPLI | | | Platform-agnostic pipeline for the analysis of highly multiplexed histological imaging data | |
32 | nuclei segmentation | MONAI HoVerNet | PyTorch | Apache 2.0 | Nuclei segmentation and classification based on HoVerNet; MONAI bundle and tutorial available | |
33 | nuclei segmentation | MONAI UNet | PyTorch | Apache 2.0 | Nuclei segmentation based on UNet | |
34 | nuclei segmentation | Triple U-net | PyTorch | | Triple U-net: Hematoxylin-aware Nuclei Segmentation with Progressive Dense Feature Aggregation | |
35 | nuclei segmentation | NucleiSegNet | TensorFlow | Apache-2.0 | Robust deep learning architecture for the nuclei segmentation of liver cancer histopathology images | |
36 | nuclei segmentation + classification | HoVer-Net | PyTorch | MIT | Simultaneous Nuclear Instance Segmentation and Classification in H&E Histology Images | |
37 | nuclei segmentation | StarDist | TensorFlow | BSD-3-Clause | Object Detection with Star-convex Shapes | |
38 | nuclei segmentation + classification | Sonnet | TensorFlow | | A self-guided ordinal regression neural network for segmentation and classification of nuclei in large-scale multi-tissue histology images | |
39 | QC | HistoQC | | BSD 3-Clause Clear | HistoQC is an open-source quality control tool for digital pathology slides | |
40 | QC | PathProfiler | PyTorch | GPL-3.0 | PathProfiler: Quality Assessment of Histopathology Whole-Slide Image Cohorts | |
41 | QC | histopath_failure_modes | | | Failure Mode Analysis of Deep Learning Histopathology Models | |
42 | QC | Artifact | | MIT | Generation of synthetic artefacts for digital pathology | |
43 | QC | Histopathology Artifact Detection | PyTorch | | Artifact detection in hematoxylin and eosin histopathology images using a GAN-inspired classifier | |
44 | QC | PathProfiler | | BSD 3-clause | Identify and delineate artifacts; discover cohort-level outliers | |
45 | QC | Slideflow | PyTorch & Tensorflow | GPL-3.0 | Basic QC with Gaussian blur filtering and Otsu's thresholding, also supports arbitrary boolean masks for slide-level filtering. Slide reading performed with Libvips backend. | |
46 | tissue segmentation | Deep Multi-Magnification Network | PyTorch | | Deep Multi-Magnification Networks for multi-class breast cancer image segmentation | |
47 | segmentation | TCGA Segmentation | PyTorch | AGPL-3.0 | Software system containing an end-to-end Whole Slide Imaging pre-processing pipeline from The Cancer Genome Atlas download documents, as well as a complete implementation of deep learning tumor segmentation from WSI binary labels as detailed in "Weakly supervised multiple instance learning histopathological tumor segmentation". | |
49 | semantic segmentation | HistoSegNet | Keras | MIT | Semantic Segmentation of Histological Tissue Type in Whole Slide Images | |
50 | semantic segmentation | HookNet | TensorFlow | MIT | HookNet - multi-resolution convolutional neural networks for semantic segmentation | |
52 | SSL | self-supervised | PyTorch | MIT | PyTorch Lightning implementation of the following self-supervised representation learning methods: MoCo, MoCo v2, SimCLR, BYOL, EqCo, VICReg | |
53 | SSL | CS-CO | PyTorch | MIT | Self-supervised visual representation learning for histopathological images | |
54 | SSL | SSL CR Histo | PyTorch | MIT | Self-Supervised driven Consistency Training for Annotation Efficient Histopathology Image Analysis | |
55 | SSL | HISSL | PyTorch, VISSL | MIT | HISSL stands for Histology Self-supervised learning. Self-supervised learning using VISSL and DLUP. Easy efficient SSL pre-training on tiles or WSIs with common SSL methods. Includes docker image and step-by-step execution to easily reproduce DeepSMILE. | |
56 | SSL | Self-Supervised-ViT-Path | PyTorch | GPL-3.0 | Self-supervised vision transformer for histopathology | |
57 | stain normalization | stainTransfer using CycleGAN | PyTorch | | CycleGAN for image-to-image translation | |
58 | stain normalization | StainGAN | PyTorch | | StainGAN implementation based on Cycle-Consistency Concept | |
59 | stain normalization | DSCSI-GAN | PyTorch | | Stain Style Transfer of Histopathology Images via Structure-Preserved Generative Learning | |
60 | stain normalization | Stain-to-Stain Translation | Keras | | Pix2Pix-based Stain-to-Stain Translation: A Solution for Robust Stain Normalization in Histopathology Images Analysis | |
61 | stain normalization | torchstain | PyTorch | MIT | Stain normalization tools for histological analysis and computational pathology | implements Macenko |
62 | stain normalization | Slideflow | PyTorch & Tensorflow | GPL-3.0 | End-to-end deep learning toolkit; includes PyTorch-native, Tensorflow-native, and numpy stain normalization implementations for the Reinhard (Fast, Fast-Mask, Mask) and Macenko normalizers, plus sklearn and SPAMS implementations of the Vahadane normalizer, with benchmark comparisons of all stain normalization methods in the documentation | |
63 | stain normalization + augmentation | StainTools | | MIT | Tools for tissue image stain normalisation and augmentation | |
64 | stain separation | Tissue-Dependent Stain Separation | PyTorch | | Unsupervised Deep Learning for Stain Separation and Artifact Detection in Histopathology Images | |
65 | validation | REEToolbox | PyTorch | Measuring and improving the robustness of ML models. REEToolbox uses adversarial transforms - data transforms that are adversarially optimised to fool a model - to generate challenging transformations of input data | ||
66 | WSI Processing | MONAI | PyTorch | Apache 2.0 | MONAI (Medical Open Network for Artificial Intelligence) is a framework for AI in medical imaging based on PyTorch. | |
67 | WSI processing | TIAToolbox | PyTorch | BSD 3-clause | CPath tools for data loading, pre-processing, model inference, post-processing and visualization | |
68 | WSI processing | Histo-fetch | TensorFlow | GPL-3.0 | Histo-fetch samples patch locations stochastically from WSI datasets during network training, executing preprocessing and common data augmentation operations on the CPU while the GPU simultaneously executes training operations | |
69 | WSI processing | PathML | PyTorch | GPL-2.0 | PathML is a toolbox to facilitate machine learning workflows for high-resolution whole-slide pathology images. This includes modular pipelines for preprocessing, PyTorch DataLoaders for training and benchmarking machine learning model performance on standardized datasets, support for sharing preprocessing pipelines, pretrained models, and more. | |
70 | WSI processing | PathML | PyTorch | GPL-3.0 | Python library for performing deep learning image analysis on whole-slide images (WSIs), including deep tissue, artefact, and background filtering, tile extraction, model inference, model evaluation and more | Distinct project from the PathML listed above (same name, different codebase) |
71 | WSI processing | DLUP | framework-agnostic | Apache-2.0 | DLUP (Deep Learning Utilities for Pathology) offers a set of utilities to ease the process of running deep learning algorithms on whole slide images. This includes preprocessing, masking, multiple file format backends, multiscale dataset classes, reading tiled datasets from regions of interest directly from a whole-slide image, and more. | |
72 | WSI processing | DigiPathAI | TensorFlow | MIT | A software application built on top of openslide for viewing whole slide images (WSI) and performing pathological analysis | |
73 | WSI processing | Slideflow | PyTorch & Tensorflow | GPL-3.0 | End-to-end deep learning toolkit, including various slide-level processing functions (Otsu's thresholding, Gaussian blur filtering, arbitrary Boolean masks) and tile-level processing functions (brightness and hue filtering, stain normalization, resizing). Images can be stored as raw PNG/JPG images or in binary TFRecord format (cross-compatible with Tensorflow/PyTorch). Slide reading performed with Libvips backend. | |
74 | WSI processing | wholeslidedata | Agnostic | MIT | A package for working with whole-slide data including a fast batch iterator that can be used to train deep learning models. | |
75 | WSI processing + augmentation | HistoClean | | AGPL-3.0 | HistoClean is a tool for the preprocessing and augmentation of images used in deep learning models | |
76 | | MSINet | PyTorch | | Deep learning model for the prediction of microsatellite instability in colorectal cancer | |
77 | | HistomicsTK | | Apache-2.0 | A Python toolkit for pathology image analysis algorithms. | |
78 | | QuPath | | GPL-3.0 | Open source software for bioimage analysis | |
79 | SSL segmentation | | PyTorch | | Self-supervised domain transfer learning model, e.g. from a public to a local dataset. Any head can be added on top to fine-tune on a specific task (here, patch-based classification for tissue segmentation). Only a small annotated dataset is needed. (Oral at MIDL 2021) | pre-trained models are available |
81 | nucleus classification | NuCLS (https://sites.google.com/view/nucls/home) | | | | |
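
Several of the MIL entries above (CLAM, TransMIL, Additive MIL, SparseConvMIL) share the same core idea: pool a bag of tile embeddings into a single slide-level representation using learned attention weights. The sketch below is a generic, minimal illustration of that attention-pooling step in plain PyTorch; it is not the implementation used by any of the listed repositories, and the feature dimension, tile count, and class count are placeholders for the example.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Generic attention-based MIL pooling: each tile embedding receives a
    learned attention weight, and the slide embedding is their weighted sum."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, tile_embeddings):            # (n_tiles, dim)
        scores = self.attention(tile_embeddings)   # (n_tiles, 1)
        weights = torch.softmax(scores, dim=0)     # attention over the bag
        slide_embedding = (weights * tile_embeddings).sum(dim=0)  # (dim,)
        return slide_embedding, weights.squeeze(-1)

# Hypothetical usage: 1000 tile features from a frozen encoder, 512-d each
tile_features = torch.randn(1000, 512)
pooling = AttentionMILPooling()
slide_vector, attention = pooling(tile_features)
logits = nn.Linear(512, 2)(slide_vector)  # slide-level classification head
```

The attention weights double as a heatmap: mapping each weight back to its tile's slide coordinates is how attention-MIL tools visualize which regions drove the prediction.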
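
Similarly, the QC rows that mention Otsu's thresholding and Gaussian blur filtering (e.g. the Slideflow QC entry) boil down to building a tissue mask on a low-resolution thumbnail. The sketch below is a generic version of that idea using scikit-image, not any listed tool's implementation; the blur sigma and the random placeholder thumbnail are assumptions made for the example.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian, threshold_otsu

def tissue_mask(thumbnail_rgb, blur_sigma=2.0):
    """Boolean tissue mask for a low-resolution slide thumbnail.

    Background in brightfield scans is bright, so pixels darker than the
    Otsu threshold (after a light Gaussian blur to suppress scanner noise)
    are treated as tissue.
    """
    gray = rgb2gray(thumbnail_rgb)            # float image in [0, 1]
    blurred = gaussian(gray, sigma=blur_sigma)
    threshold = threshold_otsu(blurred)
    return blurred < threshold                # True where tissue (darker) is

# Placeholder thumbnail for illustration; in practice read one at low
# magnification, e.g. with OpenSlide's get_thumbnail().
thumbnail = np.random.rand(256, 256, 3)
mask = tissue_mask(thumbnail)
print("tissue fraction:", mask.mean())
```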