Improve model card links and structure
This PR improves the model card by:
- Adding a link to the Hugging Face Papers page for the MedVAE paper.
- Improving the structure and readability of the model card by using more descriptive headings.
- Incorporating a more detailed usage example from the GitHub README.

The model card already includes the necessary metadata (`library_name`, `license`, `pipeline_tag`) and a link to the GitHub repository.
README.md (changed):

---
library_name: medvae
license: mit
pipeline_tag: image-to-image
---

# MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders

The model was presented in the paper [MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders](https://huggingface.co/papers/2502.14753).

**Abstract:** Medical images are acquired at high resolutions with large fields of view in order to capture fine-grained features necessary for clinical decision-making. Consequently, training deep learning models on medical images can incur large computational costs. In this work, we address the challenge of downsizing medical images in order to improve downstream computational efficiency while preserving clinically-relevant features. We introduce MedVAE, a family of six large-scale 2D and 3D autoencoders capable of encoding medical images as downsized latent representations and decoding latent representations back to high-resolution images. We train MedVAE autoencoders using a novel two-stage training approach with 1,052,730 medical images. Across diverse tasks obtained from 20 medical image datasets, we demonstrate that (1) utilizing MedVAE latent representations in place of high-resolution images when training downstream models can lead to efficiency benefits (up to 70x improvement in throughput) while simultaneously preserving clinically-relevant features and (2) MedVAE can decode latent representations back to high-resolution images with high fidelity. Our work demonstrates that large-scale, generalizable autoencoders can help address critical efficiency challenges in the medical domain.

## Model Description

MedVAE is a family of six large-scale, generalizable 2D and 3D variational autoencoders (VAEs) designed for medical imaging. It is trained on over one million medical images across multiple anatomical regions and modalities. MedVAE autoencoders encode medical images as downsized latent representations and decode latent representations back to high-resolution images. Across diverse tasks obtained from 20 medical image datasets, we demonstrate that utilizing MedVAE latent representations in place of high-resolution images when training downstream models can lead to efficiency benefits (up to 70x improvement in throughput) while simultaneously preserving clinically-relevant features.

| Total Compression Factor | Channels | Dimensions | Modalities | Anatomies | Config File | Model File |
|---|---|---|---|---|---|---|
| 16 | 1 | 2D | X-ray | Chest, Breast (FFDM) | [medvae_4x1.yaml](model_weights/medvae_4x1.yaml) | [vae_4x_1c_2D.ckpt](model_weights/vae_4x_1c_2D.ckpt) |
| 16 | 3 | 2D | X-ray | Chest, Breast (FFDM) | [medvae_4x3.yaml](model_weights/medvae_4x3.yaml) | [vae_4x_3c_2D.ckpt](model_weights/vae_4x_3c_2D.ckpt) |
| 64 | 1 | 2D | X-ray | Chest, Breast (FFDM) | [medvae_8x1.yaml](model_weights/medvae_8x1.yaml) | [vae_8x_1c_2D.ckpt](model_weights/vae_8x_1c_2D.ckpt) |
| 64 | 3 | 2D | X-ray | Chest, Breast (FFDM) | [medvae_8x4.yaml](model_weights/medvae_8x4.yaml) | [vae_8x_4c_2D.ckpt](model_weights/vae_8x_4c_2D.ckpt) |
| 64 | 1 | 3D | MRI, CT | Whole-Body | [medvae_4x1.yaml](model_weights/medvae_4x1.yaml) | [vae_4x_1c_3D.ckpt](model_weights/vae_4x_1c_3D.ckpt) |
| 512 | 1 | 3D | MRI, CT | Whole-Body | [medvae_8x1.yaml](model_weights/vae_8x1.yaml) | [vae_8x_1c_3D.ckpt](model_weights/vae_8x_1c_3D.ckpt) |

Note: Model weights and checkpoints are located in the `model_weights` folder.
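The "Total Compression Factor" column is consistent with the per-axis downsampling factor in the config name raised to the number of spatial dimensions (e.g. an 8x model in 3D gives 8³ = 512). A minimal sketch of that arithmetic (the helper functions below are illustrative only, not part of the `medvae` API):

```python
# Total compression = (per-axis downsampling) ** (number of spatial dims).
# This reproduces every factor in the table above: 16, 64, 64, 512.
def total_compression(per_axis: int, spatial_dims: int) -> int:
    return per_axis ** spatial_dims

def latent_spatial_shape(image_shape, per_axis):
    # Illustrative: each spatial axis shrinks by the same per-axis factor.
    return tuple(s // per_axis for s in image_shape)

print(total_compression(4, 2))              # 16  (2D, 4x per axis)
print(total_compression(8, 2))              # 64  (2D, 8x per axis)
print(total_compression(4, 3))              # 64  (3D, 4x per axis)
print(total_compression(8, 3))              # 512 (3D, 8x per axis)
print(latent_spatial_shape((512, 512), 8))  # (64, 64)
```

So, for example, an 8x 2D model turns a 512×512 X-ray into a 64×64 latent, which is the source of the downstream efficiency gains described above.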

## Installation

To install MedVAE, you can simply run:

```bash
pip install medvae
```

For an editable installation, use the following commands to clone and install this repository:

```bash
git clone https://github.com/StanfordMIMI/MedVAE.git
cd MedVAE
pip install -e .[dev]
```

## Usage

A simple example using the `medvae` library for inference:

```python
import torch
from medvae import MVAE

fpath = "documentation/data/mmg_data/isJV8hQ2hhJsvEP5rdQNiy.png"  # Replace with your image path
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = MVAE(model_name="medvae_4_3_2d", modality="xray").to(device)
img = model.apply_transform(fpath).to(device)

model.requires_grad_(False)
model.eval()

with torch.no_grad():
    latent = model(img)
```

We also provide an easy-to-use CLI inference tool:

```bash
medvae_inference -i INPUT_FOLDER -o OUTPUT_FOLDER -model_name MED_VAE_MODEL -modality MODALITY
```
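For scripted batch runs, the same invocation can be assembled from Python. A minimal sketch, with assumptions flagged: the folder names are placeholders, `build_medvae_cmd` is an illustrative helper (not part of `medvae`), and actually executing the command requires `medvae` to be installed:

```python
import shlex

def build_medvae_cmd(input_dir: str, output_dir: str,
                     model_name: str, modality: str) -> list[str]:
    # Mirrors the CLI shown above:
    # medvae_inference -i INPUT_FOLDER -o OUTPUT_FOLDER -model_name ... -modality ...
    return [
        "medvae_inference",
        "-i", input_dir,
        "-o", output_dir,
        "-model_name", model_name,
        "-modality", modality,
    ]

# Placeholder folders and an example model name from the usage snippet above.
cmd = build_medvae_cmd("./xrays", "./latents", "medvae_4_3_2d", "xray")
print(shlex.join(cmd))
# medvae_inference -i ./xrays -o ./latents -model_name medvae_4_3_2d -modality xray
```

The resulting list can be passed to `subprocess.run(cmd, check=True)` once the package is installed.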

For more detailed instructions, refer to the [GitHub repository](https://github.com/StanfordMIMI/MedVAE).

## Citation

If you use MedVAE, please cite the original paper:

```bibtex
...
}
```

For questions, please open an issue on the [GitHub repository](https://github.com/StanfordMIMI/MedVAE).