---
pretty_name: GS2E
paperswithcode_id: robust-e-nerf-synthetic-event-dataset
license: cc-by-4.0
viewer: false
size_categories:
- 1K<n<10K
---

# 📦 GS2E: Gaussian Splatting is an Effective Data Generator for Event Stream Generation

> *Submission to the NeurIPS 2025 Datasets & Benchmarks (D&B) Track, under review.*

## 🧾 Dataset Summary

**GS2E** (Gaussian Splatting for Event stream Extraction) is a synthetic multi-view event dataset designed to support high-fidelity 3D scene understanding, novel view synthesis, and event-based neural rendering. Unlike previous video-driven or graphics-only event datasets, GS2E leverages 3D Gaussian Splatting (3DGS) to generate geometry-consistent, photorealistic RGB frames from sparse camera poses, followed by physically informed event simulation with adaptive contrast-threshold modeling. The dataset enables scalable, controllable, and sensor-faithful generation of realistic event streams with aligned RGB and camera pose data.

## 📚 Dataset Description

Event cameras offer unique advantages, such as low latency, high temporal resolution, and high dynamic range, which make them ideal for 3D reconstruction and SLAM under rapid motion and challenging lighting. However, the lack of large-scale, geometry-consistent event datasets has hindered the development of event-driven and hybrid RGB-event methods.

GS2E addresses this gap by synthesizing event data from sparse, static RGB images. Using 3D Gaussian Splatting (3DGS), we reconstruct high-fidelity 3D scenes and generate dense camera trajectories to render blur-free and motion-blurred sequences. These sequences are then processed by a physically grounded event simulator, incorporating adaptive contrast thresholds that vary across scenes and motion profiles.
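
To make the contrast-threshold event model concrete, the sketch below shows a heavily simplified, frame-based event generator in Python: events fire wherever the per-pixel log-intensity change since the last event exceeds a threshold, and the threshold values can be sampled per sequence to mimic the adaptive contrast thresholds described above. This is an illustrative toy, not the actual GS2E/ESIM simulator; the names (`simulate_events`, `C_pos`, `C_neg`) are hypothetical, and a real simulator additionally interpolates event timestamps between frames and models sensor noise.

```python
import numpy as np

def simulate_events(frames, timestamps, C_pos=0.25, C_neg=0.25, eps=1e-3):
    """Toy contrast-threshold event generator (illustrative only).

    frames:      sequence of grayscale frames in [0, 1], each of shape (H, W)
    timestamps:  frame timestamps in seconds, same length as frames
    C_pos/C_neg: positive/negative contrast thresholds; sampling these per
                 scene or sequence mimics adaptive thresholds
    Returns an (N, 4) array of events (t, x, y, polarity).
    """
    log_ref = np.log(frames[0] + eps)  # per-pixel reference log intensity
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log(frame + eps)
        diff = log_cur - log_ref
        pos = diff >= C_pos            # brightness increased enough
        neg = diff <= -C_neg           # brightness decreased enough
        for mask, polarity in ((pos, 1), (neg, -1)):
            ys, xs = np.nonzero(mask)
            events.extend((t, x, y, polarity) for x, y in zip(xs, ys))
        # reset the reference only where an event fired
        fired = pos | neg
        log_ref[fired] = log_cur[fired]
    return np.asarray(events, dtype=np.float64)
```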

The dataset includes:

* **21 multi-view event sequences** across **7 scenes** and **3 difficulty levels** (easy/medium/hard)
* Per-frame photorealistic RGB renderings (clean and motion-blurred)
* Ground-truth camera poses
* Geometry-consistent synthetic event streams
* Intrinsics and camera paths consistent with the NeRF Synthetic dataset

The result is a simulation-friendly yet physically informed dataset for training and evaluating event-based 3D reconstruction, localization, SLAM, and novel view synthesis.

If you use this synthetic event dataset for your work, please cite:

```bibtex
TBD
```

## Dataset Structure and Contents

This synthetic event dataset is organized first by scene, then by level of difficulty. Each sequence recording is given in the form of a [ROS bag](http://wiki.ros.org/rosbag) named `esim.bag`, with the following data streams:

| ROS Topic | Data | Publishing Rate (Hz) |
| :--- | :--- | :--- |
| `/cam0/events` | Events | - |
| `/cam0/pose` | Camera pose | 1000 |
| `/imu` | IMU measurements with simulated noise | 1000 |
| `/cam0/image_raw` | RGB image | 250 |
| `/cam0/depthmap` | Depth map | 10 |
| `/cam0/optic_flow` | Optical flow map | 10 |
| `/cam0/camera_info` | Camera intrinsics and lens distortion parameters | 10 |

Each recording is obtained by running the improved ESIM with the associated `esim.conf` configuration file, which references the camera intrinsics configuration files `pinhole_mono_nodistort_f={1111, 1250}.yaml` and the camera trajectory CSV files `{hemisphere, sphere}_spiral-rev=4[...].csv`.
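
For quick inspection, such a bag can be read with the standard ROS 1 `rosbag` Python API. The snippet below is only a sketch: it assumes a ROS 1 environment in which the `dvs_msgs/EventArray` and `geometry_msgs/PoseStamped` message definitions (as published by ESIM) are available, and it simply collects events and poses into Python lists; adjust the topic names using the table above.

```python
import rosbag  # requires a ROS 1 installation with the dvs_msgs package

events, poses = [], []
with rosbag.Bag("esim.bag") as bag:
    for topic, msg, _ in bag.read_messages(topics=["/cam0/events", "/cam0/pose"]):
        if topic == "/cam0/events":
            # dvs_msgs/EventArray: each event carries x, y, ts and polarity
            for e in msg.events:
                events.append((e.ts.to_sec(), e.x, e.y, 1 if e.polarity else -1))
        else:
            # assumed geometry_msgs/PoseStamped: position + orientation quaternion
            p, q = msg.pose.position, msg.pose.orientation
            poses.append((msg.header.stamp.to_sec(),
                          p.x, p.y, p.z, q.x, q.y, q.z, q.w))

print(f"read {len(events)} events and {len(poses)} poses")
```

If a full ROS installation is not available, the pure-Python [`rosbags`](https://pypi.org/project/rosbags/) package can be used as an alternative bag reader.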

The validation and test views of each scene are given in the `views/` folder, which is structured according to the NeRF synthetic dataset (except for the depth and normal maps). These views are rendered from the scene Blend-files given in the `scenes/` folder. Specifically, we create a [Conda](https://docs.conda.io/en/latest/) environment with [Blender as a Python module](https://docs.blender.org/api/current/info_advanced_blender_as_bpy.html) installed, according to [these instructions](https://github.com/wengflow/rpg_esim#blender), and run the `bpy_render_views.py` Python script to render the evaluation views.
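
Because `views/` follows the NeRF synthetic dataset layout, the evaluation views can be loaded in the usual way from the per-split `transforms_*.json` files. The sketch below is a minimal example under that assumption (the scene folder name and the exact split file name are placeholders): it reads the shared horizontal field of view and the 4x4 camera-to-world pose of every frame.

```python
import json
from pathlib import Path

import numpy as np

scene_dir = Path("views/<scene>")                    # placeholder scene folder
with open(scene_dir / "transforms_test.json") as f:  # NeRF-synthetic-style split file
    meta = json.load(f)

camera_angle_x = meta["camera_angle_x"]              # shared horizontal FoV (radians)

views = []
for frame in meta["frames"]:
    image_path = scene_dir / (frame["file_path"] + ".png")
    c2w = np.array(frame["transform_matrix"])        # 4x4 camera-to-world pose
    views.append((image_path, c2w))

print(f"loaded {len(views)} views, fov_x = {camera_angle_x:.4f} rad")
```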

## Setup

1. Install [Git LFS](https://git-lfs.com/) according to the [official instructions](https://github.com/git-lfs/git-lfs?utm_source=gitlfs_site&utm_medium=installation_link&utm_campaign=gitlfs#installing).
2. Set up Git LFS for your user account with:
   ```bash
   git lfs install
   ```
3. Clone this dataset repository into the desired destination directory with:
   ```bash
   git lfs clone https://huggingface.co/datasets/wengflow/robust-e-nerf
   ```
4. To minimize disk usage, you may remove the `.git/` folder. However, doing so complicates pulling future changes from this upstream dataset repository.
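
If you prefer not to use Git LFS, the files can typically also be fetched with the `huggingface_hub` Python client. The snippet below is a sketch; the `repo_id` is a placeholder that must be replaced with this repository's actual ID on the Hub.

```python
from huggingface_hub import snapshot_download

# Replace the placeholder repo_id with this dataset repository's ID on the Hub.
local_dir = snapshot_download(repo_id="<user>/<dataset>", repo_type="dataset")
print("Dataset downloaded to", local_dir)
```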