
Sphere360
Sphere360 is a comprehensive dataset of paired 360-degree videos and spatial audio content sourced from YouTube. The collection contains over 103,000 matched 360-degree video and audio clips, representing a total of 288 hours of immersive content. This repository includes both the curated dataset and the essential web crawling and data processing tools used for its compilation.
Copyright
The video data utilized in this study were sourced from the YouTube platform. All videos are copyrighted by their respective creators and owners. The videos included in this research adhere to YouTube's terms of service and, where applicable, to Creative Commons licenses. Specifically, videos under a Creative Commons license have been attributed to their original authors in accordance with the license terms (CC BY 4.0).
For videos not governed by a Creative Commons license, we acknowledge that they are protected by copyright and are used exclusively for academic research purposes. No commercial use of these videos or content is intended. The use of these videos falls under the fair use doctrine for educational and research purposes, as permitted by copyright law.
All channel information contained in the dataset is recorded in dataset/channels.csv.
Dataset Split
The dataset split configuration can be found in the dataset/split
directory, containing:
- Training set: ~100.5k samples
- Test set: ~3k samples
- Each sample duration: 10 seconds
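As a rough sanity check, the ~288 hours of content stated above follows directly from the split sizes and the 10-second clip length (the counts below are the approximate figures from this card; exact counts live in dataset/split):

```python
# Approximate split sizes from this card; exact counts are in dataset/split.
train_samples = 100_500
test_samples = 3_000
clip_seconds = 10

total_hours = (train_samples + test_samples) * clip_seconds / 3600
print(round(total_hours, 1))  # ~287.5, consistent with the ~288 hours stated above
```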
Toolset Environment
Python Environment
Data Crawling:
- Python Version: 3.10
- Requirements: toolset/crawl/requirements.txt
Data Cleaning:
- Python Version: 3.10
- Requirements: toolset/clean/requirements.txt
YouTube API
- Apply for a YouTube API Key from Google Cloud Console
- Insert the obtained key into:
```python
# Location: toolset/crawl/core/build.py
__API_KEY = "YOUR_YOUTUBE_API_KEY_HERE"  # Enter your YouTube API key here
```
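For reference, a minimal sketch of the kind of keyword search request the crawler issues against the public YouTube Data API v3. The endpoint and parameters below are the API's own; the exact call site in toolset/crawl may differ:

```python
from urllib.parse import urlencode

API_KEY = "YOUR_YOUTUBE_API_KEY_HERE"  # the key configured in build.py

def build_search_url(query: str, max_results: int = 50) -> str:
    """Build a YouTube Data API v3 search URL for a keyword query."""
    params = {
        "part": "snippet",
        "q": query,              # e.g. "firework spatial audio 360"
        "type": "video",
        "maxResults": max_results,
        "key": API_KEY,
    }
    return "https://www.googleapis.com/youtube/v3/search?" + urlencode(params)

print(build_search_url("firework spatial audio 360"))
```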
FFmpeg
This project uses FFmpeg for audio/video data processing. Please configure the FFmpeg environment.
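As an illustration (not the project's exact pipeline), a typical ffmpeg invocation for cutting a 10-second clip while leaving the 4-channel first-order ambisonic (FOA) audio track untouched can be sketched as:

```python
import subprocess

def cut_clip(src: str, start: float, dst: str, duration: float = 10.0) -> list[str]:
    """Build (and optionally run) an ffmpeg command that cuts one clip.

    -c copy stream-copies both streams, so the 4-channel FOA audio
    passes through without re-encoding.
    """
    cmd = [
        "ffmpeg", "-y",
        "-ss", str(start),    # seek to the clip start (seconds)
        "-i", src,
        "-t", str(duration),  # clip length in seconds
        "-c", "copy",         # no re-encode: keep video codec and FOA channels intact
        dst,
    ]
    # subprocess.run(cmd, check=True)  # uncomment when ffmpeg is on PATH
    return cmd

print(cut_clip("input.mp4", 30.0, "clip_0030.mp4"))
```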
yt-dlp (Optional)
To use the download scripts provided in this repository, please configure the yt-dlp environment.
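A minimal sketch of driving yt-dlp from Python, assuming the yt-dlp package is installed; the repository's own download scripts may invoke the yt-dlp CLI instead, and the option values here are illustrative:

```python
# Sketch only: assumes `pip install yt-dlp`. The repository's scripts may differ.

def build_ydl_opts(out_dir: str) -> dict:
    """yt-dlp options requesting the best video plus best (multi-channel) audio."""
    return {
        "format": "bestvideo+bestaudio/best",
        "outtmpl": f"{out_dir}/%(id)s.%(ext)s",  # name files by YouTube video id
        "merge_output_format": "mp4",
    }

opts = build_ydl_opts("downloads")
print(opts["outtmpl"])

# To actually download:
# from yt_dlp import YoutubeDL
# with YoutubeDL(opts) as ydl:
#     ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])
```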
Data Crawling
The general workflow for data crawling is as follows:
1. Use formatted keywords for search, combining specific event labels (e.g., `firework`, `cat`, `waterfall`) with qualifying terms (e.g., `spatial audio 360`) to ensure class diversity and retrieve more 360° and FOA content.
2. Implement two-stage data crawling:
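Before the two stages, the keyword construction can be sketched as a cross product of event labels and qualifying terms (the lists below are illustrative, not the project's full vocabulary):

```python
from itertools import product

event_labels = ["firework", "cat", "waterfall"]            # example labels from this card
qualifiers = ["spatial audio 360", "ambisonic 360 video"]  # second qualifier is illustrative

queries = [f"{label} {qual}" for label, qual in product(event_labels, qualifiers)]
print(len(queries))  # 3 labels x 2 qualifiers = 6 search queries
print(queries[0])    # "firework spatial audio 360"
```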
Stage 1: Channel-Based Crawling
Use a large-scale pass to identify relevant channels. Detailed process:
- Identify channels that appear in search results more than a specified threshold count
- Sample and download from these channels, then perform quality verification (manually or using the cleaning pipelines) to separate high-quality channels from unusable ones
- Obtain video lists from the high-quality channels and proceed with downloading
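The channel-threshold step above can be sketched as counting how often each channel appears across search results (the field names and threshold value here are illustrative):

```python
from collections import Counter

def frequent_channels(search_results: list[dict], threshold: int = 3) -> set[str]:
    """Return channels appearing in the search results at least `threshold` times."""
    counts = Counter(item["channel_id"] for item in search_results)
    return {ch for ch, n in counts.items() if n >= threshold}

results = [
    {"video_id": "a1", "channel_id": "UC_nature"},
    {"video_id": "a2", "channel_id": "UC_nature"},
    {"video_id": "a3", "channel_id": "UC_nature"},
    {"video_id": "b1", "channel_id": "UC_music"},
]
print(frequent_channels(results))  # {'UC_nature'}
```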
Stage 2: Video-Based Crawling
- Filter out videos from unusable channels in search results
- Screen remaining videos (manually or using cleaning pipelines)
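Excluding videos from unusable channels is a simple set-membership filter (field names are illustrative):

```python
def drop_unusable(search_results: list[dict], unusable_channels: set[str]) -> list[dict]:
    """Keep only videos whose channel is not on the unusable list."""
    return [v for v in search_results if v["channel_id"] not in unusable_channels]

results = [
    {"video_id": "a1", "channel_id": "UC_nature"},
    {"video_id": "b1", "channel_id": "UC_music"},
]
kept = drop_unusable(results, {"UC_music"})
print([v["video_id"] for v in kept])  # ['a1']
```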
For detailed workflow and script usage, please refer to docs/crawl.md.
Data Cleaning
The cleaning pipeline primarily consists of four dimensions:
- Silent Filtering: Filters out silent audio segments
- Static Frame Filtering: Removes static or nearly static videos
- Audio-Visual Matching Filtering: Eliminates videos with audio-visual mismatches (e.g., those containing background music, voiceovers, or post-production audio)
- Voice Detection Filtering: Filters out videos containing human speech
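As an illustration of the first dimension, silent-segment filtering can be sketched as an RMS energy threshold; the -40 dBFS cutoff is an assumption, not the pipeline's actual value:

```python
import math

def is_silent(samples: list[float], threshold_db: float = -40.0) -> bool:
    """Flag a mono segment as silent if its RMS level falls below threshold_db dBFS.

    `samples` holds floats in [-1, 1]; a full-scale sine sits around -3 dBFS.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0.0:
        return True  # digital silence
    return 20 * math.log10(rms) < threshold_db

quiet = [0.0] * 16000  # one second of digital silence at 16 kHz
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]  # ~-9 dBFS
print(is_silent(quiet), is_silent(tone))  # True False
```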
For detailed workflow and script usage, please refer to docs/clean.md.
Acknowledgments
This project is built upon the following resources and open-source projects:
Data Sources
- All video and audio data were sourced from YouTube; the contributing channels are listed in dataset/channels.csv.
Data Cleaning Dependencies
- Our project utilizes an end-to-end speech recognition model to achieve Voice Detection Filtering.
- Our project employs a cross-modal alignment model to implement Audio-Visual Matching Filtering.
- SenseVoice (Replace with actual link): an advanced speech understanding toolkit, licensed under [License Type] (e.g., Apache 2.0). Its end-to-end speech recognition model was instrumental in generating textual metadata for this project.