<p align="center">
 <!-- <h2 align="center">📻 DepthFM: Fast Monocular Depth Estimation with Flow Matching</h2> -->
 <h2 align="center"><img src=assets/figures/radio.png width=28> DepthFM: Fast Monocular Depth Estimation with Flow Matching</h2>
 <p align="center"> 
    Ming Gui<sup>*</sup> · Johannes Schusterbauer<sup>*</sup> · Ulrich Prestel · Pingchuan Ma
 </p><p align="center"> 
    Dmytro Kotovenko · Olga Grebenkova · Stefan A. Baumann · Vincent Tao Hu · Björn Ommer
 </p>
 <p align="center"> 
    <b>CompVis Group @ LMU Munich</b>
 </p>
 <p align="center"> 
    <b>AAAI 2025</b>
 </p>
  <p align="center"> <sup>*</sup> <i>equal contribution</i> </p>
</p>


[![Website](assets/figures/badge-website.svg)](https://depthfm.github.io)
[![Paper](https://img.shields.io/badge/arXiv-PDF-b31b1b)](https://arxiv.org/abs/2403.13788)


![Cover](/assets/figures/dfm-cover.png)


## 📻 Overview

We present **DepthFM**, a state-of-the-art, versatile, and fast monocular depth estimation model. DepthFM is efficient and can synthesize realistic depth maps within *a single inference step*. Beyond conventional depth estimation, DepthFM also demonstrates state-of-the-art capabilities on downstream tasks such as depth inpainting and depth-conditional synthesis.

With our work, we demonstrate the successful transfer of the strong image prior of a foundation image synthesis diffusion model (Stable Diffusion v2-1) to a flow matching model. Instead of starting from noise, our flow matching model maps directly from the input image to the depth map.
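
To make the image-to-depth flow concrete, here is a minimal, illustrative sketch of integrating a learned vector field with a few Euler steps, starting from the image latent instead of noise. The `vector_field` callable and latent shapes are placeholders, not the actual DepthFM implementation.

```python
import torch

def euler_flow(vector_field, x_img, num_steps=2):
    """Integrate dx/dt = v(x, t) from the image latent (t=0) to the depth latent (t=1).

    `vector_field` stands in for a learned network v(x, t); in DepthFM the flow
    is trained between image and depth rather than between noise and data.
    """
    x = x_img.clone()
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * vector_field(x, t)  # one Euler step along the learned flow
    return x

# Toy check with a dummy vector field standing in for the trained network
dummy_v = lambda x, t: torch.zeros_like(x)
depth_latent = euler_flow(dummy_v, torch.randn(1, 4, 64, 64), num_steps=2)
```

Because the source distribution is already the input image rather than pure noise, only a handful of integration steps are needed at inference time.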


## 🛠️ Setup

This setup was tested with `Ubuntu 22.04.4 LTS`, `CUDA Version: 12.4`, and `Python 3.10.12`.

First, clone the GitHub repository:

```bash
git clone git@github.com:CompVis/depth-fm.git
cd depth-fm
```

Then download the weights via

```bash
wget https://ommer-lab.com/files/depthfm/depthfm-v1.ckpt -P checkpoints/
```

Now you can either set up a virtual environment and install all required packages with `pip`

```bash
pip install -r requirements.txt
```

or, if you prefer `conda`, create the environment via

```bash
conda env create -f environment.yml
```

Now you should be able to listen to DepthFM! 📻 🎶


## 🚀 Usage

You can either use the notebook `inference.ipynb` or run the Python script `inference.py` as follows:

```bash
python inference.py \
   --num_steps 2 \
   --ensemble_size 4 \
   --img assets/dog.png \
   --ckpt checkpoints/depthfm-v1.ckpt
```

The `--num_steps` argument sets the number of function evaluations (ODE solver steps). We find that our model already gives very good results with as few as one or two steps. Ensembling further improves performance; set the ensemble size via the `--ensemble_size` argument (see the sketch below). Currently, the inference code only supports a batch size of one for ensembling.
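
For intuition, here is a hedged sketch of what ensembling amounts to: the stochastic depth prediction is repeated several times and the results are averaged. The `predict_fn` callable is hypothetical and merely stands in for a wrapper around the DepthFM model; see `inference.py` for the actual interface.

```python
import torch

@torch.no_grad()
def ensemble_depth(predict_fn, image, num_steps=2, ensemble_size=4):
    """Run the (stochastic) depth predictor several times and average the results.

    `predict_fn` is a hypothetical callable wrapping the DepthFM model;
    check `inference.py` for the real prediction API.
    """
    preds = [predict_fn(image, num_steps=num_steps) for _ in range(ensemble_size)]
    return torch.stack(preds, dim=0).mean(dim=0)

# Toy check with a dummy predictor standing in for the trained model
dummy_predict = lambda im, num_steps: torch.rand(1, 1, *im.shape[-2:])
depth = ensemble_depth(dummy_predict, torch.rand(1, 3, 384, 384))
```

Averaging independent samples reduces the variance of the stochastic estimate, which is why a larger `--ensemble_size` tends to improve accuracy at a proportionally higher inference cost.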

## 📈 Results

Our quantitative analysis shows that, despite being substantially more efficient, DepthFM performs on par with or even outperforms the current state-of-the-art generative depth estimator Marigold **zero-shot** on a range of benchmark datasets. Below is a quantitative comparison of DepthFM against other affine-invariant depth estimators on several benchmarks.

![Results](/assets/figures/sota-comparison.jpg)



## Trend

[![Star History Chart](https://api.star-history.com/svg?repos=CompVis/depth-fm&type=Date)](https://star-history.com/#CompVis/depth-fm&Date)




## 🎓 Citation

Please cite our paper:

```bibtex
@misc{gui2024depthfm,
      title={DepthFM: Fast Monocular Depth Estimation with Flow Matching}, 
      author={Ming Gui and Johannes Schusterbauer and Ulrich Prestel and Pingchuan Ma and Dmytro Kotovenko and Olga Grebenkova and Stefan Andreas Baumann and Vincent Tao Hu and Björn Ommer},
      year={2024},
      eprint={2403.13788},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```