---
license: cc-by-4.0
datasets:
  - AnonRes/OpenMind
pipeline_tag: image-feature-extraction
tags:
  - medical
---

# OpenMind Benchmark 3D SSL Models

- Paper: *An OpenMind for 3D medical vision self-supervised learning*
- Pre-training codebase used to create the checkpoint: MIC-DKFZ/nnssl
- Dataset: AnonRes/OpenMind
- Downstream (segmentation) fine-tuning: TaWald/nnUNet


# OpenMind

πŸ” Overview

This repository hosts pre-trained checkpoints from the OpenMind benchmark:
📄 *An OpenMind for 3D medical vision self-supervised learning* (arXiv:2412.17041), the first extensive benchmark study of self-supervised learning (SSL) on 3D medical imaging data.

The models were pre-trained using various SSL methods on the OpenMind Dataset, a large-scale, standardized collection of public brain MRI datasets.

These checkpoints are not intended to be used as-is. Instead, we recommend fine-tuning them for segmentation or classification with the pipelines available in the adaptation repository. While direct download is possible, the respective fine-tuning repositories can download the appropriate checkpoint automatically.


## 🧠 Model Variants

We release SSL checkpoints for two backbone architectures. Each encoder has been pre-trained with each of the following SSL techniques:

| Method | Description |
| --- | --- |
| Volume Contrastive (VoCo) | Global contrastive learning in 3D volumes |
| VolumeFusion (VF) | Spatial fusion-based SSL |
| Models Genesis (MG) | Classic 3D self-reconstruction |
| Masked Autoencoders (MAE) | Patch masking and reconstruction |
| Spark 3D (S3D) | 3D adaptation of the Spark framework |
| SimMIM | Simple masked reconstruction |
| SwinUNETR SSL | Transformer-based pre-training |
| SimCLR | Contrastive learning baseline |
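To illustrate what the masking-based methods in the table (MAE, SimMIM, Spark 3D) operate on, here is a minimal NumPy sketch of random patch masking applied to a 3D volume. The patch size and mask ratio are illustrative defaults, not the values used in the benchmark, and the helper name is ours:

```python
import numpy as np


def mask_3d_patches(volume: np.ndarray, patch: int = 16,
                    mask_ratio: float = 0.75, seed: int = 0):
    """Zero out a random fraction of non-overlapping 3D patches (MAE-style).

    Returns the masked volume and a boolean grid marking masked patches.
    """
    rng = np.random.default_rng(seed)
    d, h, w = (s // patch for s in volume.shape)
    n_patches = d * h * w
    masked_ids = rng.choice(n_patches, size=int(n_patches * mask_ratio),
                            replace=False)
    out = volume.copy()
    mask = np.zeros(n_patches, dtype=bool)
    mask[masked_ids] = True
    for idx in masked_ids:
        z, y, x = np.unravel_index(idx, (d, h, w))
        # Zero the voxels of this patch; the SSL model must reconstruct them.
        out[z * patch:(z + 1) * patch,
            y * patch:(y + 1) * patch,
            x * patch:(x + 1) * patch] = 0
    return out, mask.reshape(d, h, w)
```

In a reconstruction-based SSL method, the encoder sees only the surviving patches and a decoder is trained to predict the masked voxels from them.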