---
license: cc-by-4.0
datasets:
- AnonRes/OpenMind
pipeline_tag: image-feature-extraction
tags:
- medical
---

# OpenMind Benchmark 3D SSL Models

> **Model from the paper**: [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)  
> **Pre-training codebase used to create checkpoint**: [MIC-DKFZ/nnssl](https://github.com/MIC-DKFZ/nnssl)  
> **Dataset**: [AnonRes/OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind)  
> **Downstream (segmentation) fine-tuning**: [TaWald/nnUNet](https://github.com/TaWald/nnUNet)

---

![OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind/resolve/main/assets/OpenMindDataset.png)

## Overview

This repository hosts pre-trained checkpoints from the **OpenMind** benchmark:  
📄 **An OpenMind for 3D medical vision self-supervised learning** by Wald, T., Ulrich, C., Suprijadi, J., Ziegler, S., Nohel, M., Peretzke, R., ... & Maier-Hein, K. H. (2024)  
([arXiv:2412.17041](https://arxiv.org/abs/2412.17041)) — the first extensive benchmark study for **self-supervised learning (SSL)** on **3D medical imaging** data.

Each model was pre-trained using a particular SSL method on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale, standardized collection of public brain MRI datasets.

**We do not recommend using these models as-is for feature extraction.** Instead, adapt them with the downstream fine-tuning frameworks for **segmentation** and **classification**, available in the [adaptation repository](https://github.com/TaWald/nnUNet).
*While manual download is possible, we recommend the fine-tuning repository's auto-download feature: provide the Hugging Face repository URL instead of a local checkpoint path.*
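
If you do need the raw checkpoint files, a minimal sketch using the `huggingface_hub` client is shown below. The `repo_id` is a placeholder, not a confirmed ID; substitute the actual model repository ID shown on this page.

```python
# Minimal sketch: manually downloading checkpoint files with huggingface_hub.
# NOTE: the repo_id below is a placeholder; replace it with this model
# repository's actual ID as shown on the Hugging Face page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="AnonRes/OpenMind-Models")
print(f"Checkpoint files downloaded to: {local_dir}")
```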

---

## Model Variants

We release SSL checkpoints for two backbone architectures:

- **ResEnc-L**: A CNN-based encoder [[a](https://arxiv.org/abs/2410.23132), [b](https://arxiv.org/abs/2404.09556)]  
- **Primus-M**: A transformer-based encoder [[Primus paper](https://arxiv.org/abs/2503.01835)]

Each encoder has been pre-trained using one of the following SSL techniques:

| Method        | Description |
|---------------|-------------|
| [Volume Contrastive (VoCo)](https://arxiv.org/abs/2402.17300) | Contrastive pretraining method for 3D volumes |
| [VolumeFusion (VF)](https://arxiv.org/abs/2306.16925)         | Spatial volume-fusion-based segmentation SSL method |
| [Models Genesis (MG)](https://www.sciencedirect.com/science/article/pii/S1361841520302048) | Reconstruction- and denoising-based pretraining method |
| [Masked Autoencoders (MAE)](https://openaccess.thecvf.com/content/CVPR2022/html/He_Masked_Autoencoders_Are_Scalable_Vision_Learners_CVPR_2022_paper.html) | Default reconstruction-based pretraining method (see the sketch below the table) |
| [Spark 3D (S3D)](https://arxiv.org/abs/2410.23132) | Sparse reconstruction-based pretraining method (CNN only) |
| [SimMIM](https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html) | Simple masked-reconstruction pretraining method (Transformer only) |
| [SwinUNETR SSL](https://arxiv.org/abs/2111.14791) | Rotation-, contrastive-, and reconstruction-based pre-training method |
| [SimCLR](https://arxiv.org/abs/2002.05709) | 2D contrastive learning baseline transferred to 3D |
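
To make the reconstruction-based objective concrete, here is a minimal, illustrative sketch of one MAE-style masked-pretraining step on a 3D volume. This is **not** the nnssl implementation: the patch size, masking ratio, and the `encoder`/`decoder` callables are placeholder assumptions for illustration only.

```python
# Illustrative MAE-style masked-reconstruction step for a 3D volume.
# NOT the nnssl implementation: PATCH, MASK_RATIO, and the encoder/decoder
# interfaces below are placeholder assumptions.
import torch

PATCH = 16         # assumed cubic patch edge length
MASK_RATIO = 0.75  # assumed masking ratio (as in the 2D MAE paper)

def patchify(vol: torch.Tensor) -> torch.Tensor:
    """(B, 1, D, H, W) volume -> (B, N, PATCH**3) flat patches."""
    b, c, d, h, w = vol.shape
    p = PATCH
    vol = vol.reshape(b, c, d // p, p, h // p, p, w // p, p)
    vol = vol.permute(0, 2, 4, 6, 3, 5, 7, 1)
    return vol.reshape(b, -1, p * p * p * c)

def mae_step(vol, encoder, decoder):
    patches = patchify(vol)                        # (B, N, P^3)
    b, n, _ = patches.shape
    keep = int(n * (1 - MASK_RATIO))
    perm = torch.rand(b, n, device=vol.device).argsort(dim=1)
    keep_idx = perm[:, :keep]                      # indices of visible patches
    visible = torch.gather(
        patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1])
    )
    latent = encoder(visible)                      # encode visible patches only
    pred = decoder(latent, keep_idx, n)            # reconstruct all N patches
    mask = torch.ones(b, n, device=vol.device)
    mask.scatter_(1, keep_idx, 0.0)                # 1 = masked; loss only there
    loss = ((pred - patches) ** 2).mean(-1)        # per-patch MSE
    return (loss * mask).sum() / mask.sum()
```

The key design point shared by the masked methods in the table (MAE, S3D, SimMIM) is that the reconstruction loss is computed only on the masked positions, so the encoder cannot solve the task by copying visible voxels.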