UniGeo: Taming Video Diffusion for Unified Consistent Geometry Estimation
Abstract
Video generation models built on diffusion priors achieve superior estimation of global geometric attributes and reconstruction, benefiting from inter-frame consistency and joint training on attributes that share the same correspondence.
Recently, methods leveraging diffusion model priors to assist monocular geometric estimation (e.g., depth and normal) have gained significant attention due to their strong generalization ability. However, most existing works estimate geometric properties within the camera coordinate system of individual video frames, neglecting the inherent ability of diffusion models to determine inter-frame correspondence. In this work, we demonstrate that, through appropriate design and fine-tuning, the intrinsic consistency of video generation models can be effectively harnessed for consistent geometric estimation. Specifically, we 1) select geometric attributes in the global coordinate system that share the same correspondence with video frames as the prediction targets, 2) introduce a novel and efficient conditioning method by reusing positional encodings, and 3) enhance performance through joint training on multiple geometric attributes that share the same correspondence. Our method achieves superior performance in predicting global geometric attributes from videos and can be applied directly to reconstruction tasks. Even when trained solely on static video data, our approach exhibits the potential to generalize to dynamic video scenes.
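The conditioning idea in point 2 can be illustrated with a minimal sketch. The names and shapes below are hypothetical (the paper does not specify its implementation here); the sketch only shows the core mechanism: the same positional encodings are injected into both the conditioning (RGB) tokens and the geometry-target tokens, so that attention can match tokens that depict the same spatio-temporal location across the two streams without any extra conditioning parameters.

```python
import numpy as np

def sinusoidal_pe(n_positions: int, dim: int) -> np.ndarray:
    """Standard sinusoidal positional encoding table of shape (n_positions, dim)."""
    pos = np.arange(n_positions)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000.0 ** (2 * i / dim))
    pe = np.zeros((n_positions, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def embed_with_shared_pe(rgb_tokens: np.ndarray, geo_tokens: np.ndarray):
    """Reuse one PE table for both streams (hypothetical helper).

    rgb_tokens: (N, D) tokens from the conditioning video frames.
    geo_tokens: (N, D) tokens of the geometry target (e.g., normals, coordinates).
    Both receive identical positional encodings, so token k in each stream
    carries the same position signal and can be associated by attention.
    """
    n, d = rgb_tokens.shape
    pe = sinusoidal_pe(n, d)
    return rgb_tokens + pe, geo_tokens + pe
```

In a transformer-based diffusion backbone, the two encoded streams would then be concatenated along the sequence axis for joint attention; because no new embedding layers are introduced, the conditioning is essentially free in parameters.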
Community
UniGeo utilizes video diffusion models to jointly estimate geometric properties, such as surface normals and coordinates, from either multi-view images or video sequences.
Project page: https://sunyangtian.github.io/UniGeo-web/
Code: https://github.com/SunYangtian/UniGeo
Also, a unified framework for geometry estimation and evaluation has been released in the repo, providing a convenient interface for various datasets and methods. It supports fair comparison with the 3R series (DUSt3R, etc.) by aligning the outputs and evaluation scripts.
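A key step in such fair comparisons is aligning predictions to ground truth before scoring, since many methods output geometry only up to scale (and sometimes shift). The function below is a generic sketch of the standard least-squares scale/shift alignment used in depth evaluation; it is not taken from the UniGeo repo, whose exact interface may differ.

```python
import numpy as np

def align_scale_shift(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Fit scale s and shift t minimizing ||s * pred + t - gt||^2,
    then return the aligned prediction. A common preprocessing step
    before computing depth/pointmap error metrics."""
    p = pred.ravel()
    g = gt.ravel()
    A = np.stack([p, np.ones_like(p)], axis=1)  # design matrix [pred, 1]
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t
```

With both the baseline (e.g., DUSt3R) and the evaluated method passed through the same alignment before metric computation, scale ambiguity no longer favors either side.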
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- GeoMan: Temporally Consistent Human Geometry Estimation using Image-to-Video Diffusion (2025)
- NormalCrafter: Learning Temporally Consistent Normals from Video Diffusion Priors (2025)
- Geo4D: Leveraging Video Generators for Geometric 4D Scene Reconstruction (2025)
- Vivid4D: Improving 4D Reconstruction from Monocular Video by Video Inpainting (2025)
- POMATO: Marrying Pointmap Matching with Temporal Motion for Dynamic 3D Reconstruction (2025)
- GaussVideoDreamer: 3D Scene Generation with Video Diffusion and Inconsistency-Aware Gaussian Splatting (2025)
- Mono3R: Exploiting Monocular Cues for Geometric 3D Reconstruction (2025)