DelinQu committed on
Commit 2f1963e · 1 Parent(s): b9e9937

Update README

Files changed (3):
  1. README.md +129 -0
  2. dataset.png +3 -0
  3. dataset_statistic.png +3 -0
README.md ADDED
---
size_categories:
- 100B<n<1T
---

# Dataset Card for LiveScene

<p align="center">
  <img src="dataset.png" width="80%" title="Overview of the OmniSim and InterReal Datasets">
</p>

## Dataset Description

The dataset consists of two parts: the **InterReal** dataset, captured with the Polycam app on an iPhone 15 Pro, and the **OmniSim** dataset, generated with the OmniGibson simulator. In total, the dataset provides **28 interactive subsets** containing 2 million samples across multiple modalities, including RGB, depth, segmentation, camera trajectories, interaction variables, and object captions. This makes it suitable for a range of tasks involving both real-world and simulated environments.

<p align="center">
  <img src="dataset_statistic.png" width="80%" title="Statistics of the OmniSim and InterReal Datasets">
</p>

### Dataset Sources

- **[Paper](https://arxiv.org/abs/2406.16038)**
- **[Demo](https://livescenes.github.io/)**

## Uses

### Direct Use

To download the entire dataset, follow these steps:

```bash
git lfs install
git clone https://huggingface.co/datasets/IPEC-COMMUNITY/LiveScene

# Merge the split parts (if necessary)
cat {scene_name}_part_* > {scene_name}.tar.gz

# Extract the archive
tar -xzf {scene_name}.tar.gz
```
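The merge-and-extract step can also be scripted in Python, which is convenient on platforms without `cat`. This is a minimal sketch; `merge_and_extract` is a hypothetical helper, and `scene_name` stands in for an actual subset name:

```python
import glob
import tarfile
from pathlib import Path

def merge_and_extract(scene_name: str, out_dir: str = ".") -> None:
    """Concatenate split archive parts (if any) and extract the tarball."""
    archive = Path(f"{scene_name}.tar.gz")
    parts = sorted(glob.glob(f"{scene_name}_part_*"))
    if parts:  # only merge when the archive was downloaded in parts
        with archive.open("wb") as out:
            for part in parts:
                out.write(Path(part).read_bytes())
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(out_dir)
```

Sorting the part names mirrors what the shell glob does, so the parts are concatenated in order.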

If you only want to download a specific subset, use the following code:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="IPEC-COMMUNITY/LiveScene",
    filename="OmniSim/{scene_name}.tar.gz",
    repo_type="dataset",
    local_dir=".",
)
```
After downloading, extract the subset with:

```bash
tar -xzf {scene_name}.tar.gz
```

## Dataset Structure

```
.
|-- InterReal
|   `-- {scene_name}.tar.gz
|       |-- depth
|       |   `-- xxx.npy
|       |-- images
|       |   `-- xxx.jpg
|       |-- images_2
|       |-- images_4
|       |-- images_8
|       |-- masks
|       |   `-- xxx.npy
|       |-- key_frame_value.yaml
|       |-- mapping.yaml
|       `-- transforms.json
`-- OmniSim
    `-- {scene_name}.tar.gz
        |-- depth
        |   `-- xxx.npy
        |-- images
        |   `-- xxx.png
        |-- mask
        |   `-- xxx.npy
        |-- key_frame_value.yaml
        |-- mapping.yaml
        `-- transforms.json
```
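An extracted subset can be loaded with standard tools; below is a minimal sketch. The `load_subset` helper is hypothetical, and file names such as `000000.npy` stand in for the `xxx.*` placeholders above:

```python
import json
from pathlib import Path

import numpy as np

def load_subset(root_dir: str) -> dict:
    """Load depth maps, object masks, and camera transforms from one subset."""
    root = Path(root_dir)
    depths = {p.stem: np.load(p) for p in sorted((root / "depth").glob("*.npy"))}
    # InterReal uses "masks", OmniSim uses "mask"; handle either layout
    masks_dir = root / "masks" if (root / "masks").exists() else root / "mask"
    masks = {p.stem: np.load(p) for p in sorted(masks_dir.glob("*.npy"))}
    with (root / "transforms.json").open() as f:
        transforms = json.load(f)  # camera poses, Nerfstudio-style
    return {"depth": depths, "mask": masks, "transforms": transforms}
```

Keys are frame stems, so a depth map and its mask for the same frame share a key.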


## Dataset Creation

### Curation Rationale

To our knowledge, existing view-synthesis datasets for interactive scene rendering are largely limited to a handful of interactive objects, because they require substantial manual annotation of object masks and states, which makes scaling to real scenarios with multi-object interactions impractical. To bridge this gap, we construct two scene-level, high-quality annotated datasets, **OmniSim** and **InterReal**, to advance research on reconstructing and understanding interactive scenes.

### Data Collection and Processing

#### Scene Assets and Generation Pipeline for OmniSim

We generate the synthetic dataset with the OmniGibson simulator. It comprises 20 interactive scenes drawn from 7 scene models: *#rs*, *#ihlen*, *#beechwood*, *#merom*, *#pomaria*, *#wainscott*, and *#benevolence*. The scenes feature various interactive objects, including cabinets, refrigerators, doors, and drawers, each with different hinge joints.

We configure the simulator camera with a focal length of 8, an aperture of 20, and a resolution of 1024 × 1024. By varying the rotation vector of each joint of the articulated objects, we can observe different motion states of the objects. From multiple camera trajectories and viewpoints, we generated 20 high-definition subsets, each consisting of RGB images, depth, camera trajectory, interactive object masks, and the corresponding object state quantities relative to their "closed" state at each time step.
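Assuming the focal length and aperture follow the usual Omniverse-style convention (both in the same unit, with the aperture being the horizontal film size), the corresponding pinhole intrinsics can be sketched as follows; `pinhole_intrinsics` is an illustrative helper, not part of the released code:

```python
import numpy as np

def pinhole_intrinsics(focal: float, aperture: float,
                       width: int, height: int) -> np.ndarray:
    """Build a 3x3 pinhole intrinsic matrix from film-style camera parameters.

    Assumes `focal` and `aperture` share the same unit and `aperture` is the
    horizontal film size, as in Omniverse-style cameras (an assumption here).
    """
    fx = focal / aperture * width        # focal length in pixels
    fy = fx                              # square pixels
    cx, cy = width / 2.0, height / 2.0   # principal point at image centre
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

K = pinhole_intrinsics(focal=8.0, aperture=20.0, width=1024, height=1024)
```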
103
+
104
+ The data is obtained through the following steps:
105
+ - The scene model is loaded, and the respective objects are selected, with motion trajectories set for each joint.
106
+ - Keyframes are set for camera movement in the scene, and smooth trajectories are obtained through interpolation.
107
+ - The simulator is then initiated, and the information captured by the camera at each moment is recorded.
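The keyframe-interpolation step above can be sketched as follows. This is a minimal illustration using linear interpolation of camera positions; the actual pipeline may use smoother splines, and `interpolate_trajectory` is a hypothetical helper:

```python
import numpy as np

def interpolate_trajectory(keyframes: np.ndarray, n_steps: int) -> np.ndarray:
    """Interpolate a dense camera path through (N, 3) keyframe positions."""
    key_t = np.linspace(0.0, 1.0, len(keyframes))  # keyframe timestamps
    t = np.linspace(0.0, 1.0, n_steps)             # dense sample timestamps
    # Interpolate each spatial axis independently
    return np.stack([np.interp(t, key_t, keyframes[:, d]) for d in range(3)],
                    axis=1)

keys = np.array([[0.0, 0.0, 1.5], [1.0, 0.5, 1.5], [2.0, 0.0, 1.8]])
path = interpolate_trajectory(keys, n_steps=100)
```

The dense path passes exactly through each keyframe, and the simulator would then render one frame per sample along it.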

#### Scene Assets and Generation Pipeline for InterReal

InterReal is captured primarily with the Polycam app on an Apple iPhone 15 Pro. We selected 8 everyday scenes and placed various interactive objects in each, including transformers, laptops, and microwaves. We recorded 8 videos at a frame rate of 5 FPS, capturing 700 to 1000 frames per video.

The dataset was processed through the following steps:

- manual object movement and keyframe capture;
- OBJ file export and pose optimization using Polycam;
- conversion to a dataset of RGB images and transformation matrices using Nerfstudio;
- mask generation for each object in each scene using SAM with corresponding prompts, and state-quantity labeling for certain keyframes.


## Citation

If you find our work useful, please consider citing us!

```bibtex
@article{livescene2024,
  title   = {LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Rendering and Control},
  author  = {Delin Qu and Qizhi Chen and Pingrui Zhang and Xianqiang Gao and Bin Zhao and Zhigang Wang and Dong Wang and Xuelong Li},
  year    = {2024},
  journal = {arXiv preprint arXiv:2406.16038}
}
```
dataset.png ADDED

Git LFS Details
  • SHA256: 872e44dbf617d5e88de287e7eeeb5e12d31cdd5b5286e6d5eadc86d27bcd1dc9
  • Pointer size: 133 Bytes
  • Size of remote file: 46.5 MB

dataset_statistic.png ADDED

Git LFS Details
  • SHA256: 7bcffe16d1be367aab71e65c5d0fda342c68483808b6cb7af978d5ac65a22f11
  • Pointer size: 131 Bytes
  • Size of remote file: 537 kB