Include README.md and add citation info for architectures, methods, dataset, and framework to the checkpoint
- README.md +52 -3
- adaptation_plan.json +33 -3
- checkpoint_final.pth +2 -2
README.md
CHANGED
@@ -1,3 +1,52 @@
----
-license: cc-by-4.0
----
+---
+license: cc-by-4.0
+datasets:
+- AnonRes/OpenMind
+pipeline_tag: image-feature-extraction
+tags:
+- medical
+---
+
+# OpenMind Benchmark 3D SSL Models
+
+> **Model from the paper**: [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)
+> **Pre-training codebase used to create checkpoint**: [MIC-DKFZ/nnssl](https://github.com/MIC-DKFZ/nnssl)
+> **Dataset**: [AnonRes/OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind)
+> **Downstream (segmentation) fine-tuning**: [TaWald/nnUNet](https://github.com/TaWald/nnUNet)
+
+---
+
+
+
+## 🔍 Overview
+
+This repository hosts pre-trained checkpoints from the **OpenMind** benchmark:
+📄 **"An OpenMind for 3D medical vision self-supervised learning"**
+([arXiv:2412.17041](https://arxiv.org/abs/2412.17041)) — the first extensive benchmark study of **self-supervised learning (SSL)** on **3D medical imaging** data.
+
+The models were pre-trained with various SSL methods on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale, standardized collection of public brain MRI datasets.
+
+**These models are not recommended to be used as-is.** Instead, we recommend the downstream fine-tuning pipelines for **segmentation** and **classification**, available in the [adaptation repository](https://github.com/TaWald/nnUNet).
+*While direct download is possible, we recommend using the auto-download of the respective fine-tuning repositories.*
+
+---
+
+## 🧠 Model Variants
+
+We release SSL checkpoints for two backbone architectures:
+
+- **ResEnc-L**: A CNN-based encoder [[link1](https://arxiv.org/abs/2410.23132), [link2](https://arxiv.org/abs/2404.09556)]
+- **Primus-M**: A transformer-based encoder [[Primus paper](https://arxiv.org/abs/2503.01835)]
+
+Each encoder has been pre-trained using the following SSL techniques:
+
+| Method | Description |
+|---------------|-------------|
+| [Volume Contrastive (VoCo)](https://arxiv.org/abs/2402.17300) | Global contrastive learning in 3D volumes |
+| [VolumeFusion (VF)](https://arxiv.org/abs/2306.16925) | Spatial fusion-based SSL |
+| [Models Genesis (MG)](https://www.sciencedirect.com/science/article/pii/S1361841520302048) | Classic 3D self-reconstruction |
+| [Masked Autoencoders (MAE)](https://openaccess.thecvf.com/content/CVPR2022/html/He_Masked_Autoencoders_Are_Scalable_Vision_Learners_CVPR_2022_paper) | Patch masking and reconstruction |
+| [Spark 3D (S3D)](https://arxiv.org/abs/2410.23132) | 3D adaptation of the SparK framework |
+| [SimMIM](https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html) | Simple masked reconstruction |
+| [SwinUNETR SSL](https://arxiv.org/abs/2111.14791) | Transformer-based pre-training |
+| [SimCLR](https://arxiv.org/abs/2002.05709) | Contrastive learning baseline |
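The README points users to the fine-tuning repositories, but a checkpoint can also be fetched and inspected directly. Below is a minimal sketch using `huggingface_hub` and `torch`; the `repo_id` is a hypothetical placeholder, not a confirmed repository name:

```python
# Sketch: direct download and inspection of a pre-trained checkpoint.
# The repo_id below is a hypothetical placeholder -- substitute the real model repo.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="AnonRes/OpenMind-ResEncL-MAE",  # placeholder repo id
    filename="checkpoint_final.pth",
)
# weights_only=False because, per this commit, the checkpoint carries metadata
# (citation info) alongside the raw tensors.
checkpoint = torch.load(ckpt_path, map_location="cpu", weights_only=False)
print(sorted(checkpoint.keys()))
```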
adaptation_plan.json
CHANGED
@@ -54,9 +54,9 @@
     1
   ],
   "patch_size": [
-
-
-
+    64,
+    64,
+    64
   ]
 }
},
@@ -75,5 +75,35 @@
   "encoder.stem.convs.0.all_modules.0"
 ],
 "key_to_lpe": null,
+"citations": [
+  {
+    "type": "Architecture",
+    "name": "ResEncL",
+    "bibtex_citations": [
+      "@inproceedings{isensee2024nnu,\n title={nnu-net revisited: A call for rigorous validation in 3d medical image segmentation},\n author={Isensee, Fabian and Wald, Tassilo and Ulrich, Constantin and Baumgartner, Michael and Roy, Saikat and Maier-Hein, Klaus and Jaeger, Paul F},\n booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},\n pages={488--498},\n year={2024},\n organization={Springer}\n }"
+    ]
+  },
+  {
+    "type": "Pretraining Method",
+    "name": "Masked Auto Encoder",
+    "bibtex_citations": [
+      "@article{wald2024revisiting,\n title={Revisiting MAE pre-training for 3D medical image segmentation},\n author={Wald, Tassilo and Ulrich, Constantin and Lukyanenko, Stanislav and Goncharov, Andrei and Paderno, Alberto and Maerkisch, Leander and J{\"a}ger, Paul F and Maier-Hein, Klaus},\n journal={arXiv preprint arXiv:2410.23132},\n year={2024}\n}"
+    ]
+  },
+  {
+    "type": "Pre-Training Dataset",
+    "name": "OpenMind",
+    "bibtex_citations": [
+      "@article{wald2024openmind,\n title={An OpenMind for 3D medical vision self-supervised learning},\n author={Wald, Tassilo and Ulrich, Constantin and Suprijadi, Jonathan and Ziegler, Sebastian and Nohel, Michal and Peretzke, Robin and K{\"o}hler, Gregor and Maier-Hein, Klaus H},\n journal={arXiv preprint arXiv:2412.17041},\n year={2024}\n }\n "
+    ]
+  },
+  {
+    "type": "Framework",
+    "name": "nnssl",
+    "bibtex_citations": [
+      "@article{wald2024revisiting,\n title={Revisiting MAE pre-training for 3D medical image segmentation},\n author={Wald, Tassilo and Ulrich, Constantin and Lukyanenko, Stanislav and Goncharov, Andrei and Paderno, Alberto and Maerkisch, Leander and J{\"a}ger, Paul F and Maier-Hein, Klaus},\n journal={arXiv preprint arXiv:2410.23132},\n year={2024}\n}"
+    ]
+  }
+],
 "trainer_name": "BaseMAETrainer_BS8"
 }
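The new `citations` array makes the checkpoint's provenance machine-readable. A minimal sketch of collecting the BibTeX entries from the plan, assuming the key sits at the level shown in the hunk (adjust the lookup if your copy nests it differently):

```python
# Sketch: gather the BibTeX entries recorded in adaptation_plan.json,
# e.g. to paste into a references.bib file.
import json

with open("adaptation_plan.json") as f:
    plan = json.load(f)

for entry in plan.get("citations", []):  # assumes "citations" is a top-level key
    print(f"% {entry['type']}: {entry['name']}")
    for bibtex in entry["bibtex_citations"]:
        print(bibtex, end="\n\n")
```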
checkpoint_final.pth
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:eaf4a965c2a3293c502d67013b555dff0bb56e3dfdb1b4c6fca7dcf61b281b95
+size 491198734
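Git LFS records the file's SHA-256 digest as the pointer `oid`, so a downloaded checkpoint can be verified against the pointer above. A minimal sketch:

```python
# Sketch: verify checkpoint_final.pth against the oid in the LFS pointer.
import hashlib

EXPECTED_OID = "eaf4a965c2a3293c502d67013b555dff0bb56e3dfdb1b4c6fca7dcf61b281b95"

sha = hashlib.sha256()
with open("checkpoint_final.pth", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_OID, "digest mismatch: re-download the file"
print("checkpoint verified")
```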