---
license: cc-by-4.0
datasets:
- AnonRes/OpenMind
pipeline_tag: image-feature-extraction
tags:
- medical
---

# OpenMind Benchmark 3D SSL Models

> **Model from the paper**: [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)  
> **Pre-training codebase used to create checkpoint**: [MIC-DKFZ/nnssl](https://github.com/MIC-DKFZ/nnssl)  
> **Dataset**: [AnonRes/OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind)  
> **Downstream (segmentation) fine-tuning**: [TaWald/nnUNet](https://github.com/TaWald/nnUNet)

---

![OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind/resolve/main/assets/OpenMindDataset.png)

## 🔍 Overview

This repository hosts pre-trained checkpoints from the **OpenMind** benchmark:  
📄 **"An OpenMind for 3D medical vision self-supervised learning"**  
([arXiv:2412.17041](https://arxiv.org/abs/2412.17041)), the first extensive benchmark study of **self-supervised learning (SSL)** on **3D medical imaging** data.

The models were pre-trained using various SSL methods on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale, standardized collection of public brain MRI datasets.

**These models are not intended to be used as-is.** Instead, we recommend fine-tuning them with the downstream pipelines for **segmentation** and **classification** available in the [adaptation repository](https://github.com/TaWald/nnUNet).
*Direct download is possible, but we recommend relying on the automatic checkpoint download built into the respective fine-tuning repositories.*
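For completeness, a direct download can be scripted with the `huggingface_hub` client. The repo-id naming scheme below is an assumption for illustration only (the actual repository ids may differ); the fine-tuning repositories resolve the correct checkpoints automatically.

```python
def checkpoint_repo_id(backbone: str, ssl_method: str) -> str:
    """Compose a hypothetical Hub repo id for one backbone/SSL combination.

    NOTE: the "AnonRes/<backbone>-<method>" scheme is assumed for this
    sketch and is not guaranteed to match the published repositories.
    """
    return f"AnonRes/{backbone}-{ssl_method}"


def download_checkpoint(backbone: str, ssl_method: str) -> str:
    """Fetch every file of the chosen checkpoint repo; returns the local path."""
    # pip install huggingface_hub
    from huggingface_hub import snapshot_download

    return snapshot_download(repo_id=checkpoint_repo_id(backbone, ssl_method))


# Example (performs a network download, names assumed):
# local_dir = download_checkpoint("ResEnc-L", "MAE")
```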

---

## 🧠 Model Variants

We release SSL checkpoints for two backbone architectures:

- **ResEnc-L**: A CNN-based encoder [[link1](https://arxiv.org/abs/2410.23132), [link2](https://arxiv.org/abs/2404.09556)]  
- **Primus-M**: A transformer-based encoder [[Primus paper](https://arxiv.org/abs/2503.01835)]

Each encoder has been pre-trained using the following SSL techniques:

| Method        | Description |
|---------------|-------------|
| [Volume Contrastive (VoCo)](https://arxiv.org/abs/2402.17300) | Global contrastive learning in 3D volumes |
| [VolumeFusion (VF)](https://arxiv.org/abs/2306.16925)         | Spatial fusion-based SSL |
| [Models Genesis (MG)](https://www.sciencedirect.com/science/article/pii/S1361841520302048) | Classic 3D self-reconstruction |
| [Masked Autoencoders (MAE)](https://openaccess.thecvf.com/content/CVPR2022/html/He_Masked_Autoencoders_Are_Scalable_Vision_Learners_CVPR_2022_paper) | Patch masking and reconstruction |
| [Spark 3D (S3D)](https://arxiv.org/abs/2410.23132) | 3D adaptation of Spark framework |
| [SimMIM](https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html) | Simple masked reconstruction |
| [SwinUNETR SSL](https://arxiv.org/abs/2111.14791) | Transformer-based pre-training |
| [SimCLR](https://arxiv.org/abs/2002.05709) | Contrastive learning baseline |