---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: subset
    dtype: string
  - name: speaker
    dtype: string
  - name: label
    dtype: string
  - name: original_name
    dtype: string
  splits:
  - name: train
    num_bytes: 24343526.0
    num_examples: 887
  download_size: 22452898
  dataset_size: 24343526.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
tags:
- audio
- animal-vocalization
- birdsong
- zebra-finch
- perceptual-similarity
- benchmark
- zero-shot
- vocsim
- avian-perceptual-judgment
- audio-perceptual-judgment
size_categories:
- n<1K
pretty_name: VocSim - Avian Perception Alignment
---

# Dataset Card for VocSim - Avian Perception Alignment

## Dataset Description

This dataset is part of the **VocSim benchmark** and is designed to evaluate how well neural audio embeddings align with biological perceptual judgments of similarity. It is built from the data of **Zandberg et al. (2024)**, which include recordings of zebra finch (*Taeniopygia guttata*) song syllables together with results from behavioral experiments (probe and triplet tasks) measuring the birds' perception of syllable similarity.

The dataset allows researchers to:

1. Extract features/embeddings from the song syllables using various computational models.
2. Compute pairwise distances between these embeddings.
3. Compare the resulting computational similarity matrices against the avian perceptual judgments recorded in the accompanying `probes.csv` and `triplets.csv` files.

This facilitates the development and benchmarking of audio representations that better capture biologically relevant acoustic features. A minimal sketch of the evaluation workflow is shown below.

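The triplet comparison in step 3 can be scored as the fraction of perceptual triplets whose ordering is reproduced by the embedding distances. The sketch below is illustrative rather than the benchmark's official scoring code: it assumes an embedding matrix, a matching list of `original_name` identifiers, and a triplets table with `Anchor`/`Positive`/`Negative` columns as described under *Included Files*, and it uses NumPy/SciPy/pandas as extra dependencies.

```python
import numpy as np
from scipy.spatial.distance import cdist

def triplet_agreement(embeddings, names, triplets_df):
    """Fraction of perceptual triplets for which the Anchor is embedded
    closer to the Positive than to the Negative (cosine distance)."""
    idx = {name: i for i, name in enumerate(names)}          # original_name -> row in the embedding matrix
    dist = cdist(embeddings, embeddings, metric="cosine")     # pairwise distance matrix
    hits, total = 0, 0
    for anchor, pos, neg in triplets_df[["Anchor", "Positive", "Negative"]].itertuples(index=False):
        if anchor in idx and pos in idx and neg in idx:       # skip triplets referencing missing audio
            hits += dist[idx[anchor], idx[pos]] < dist[idx[anchor], idx[neg]]
            total += 1
    return hits / total if total else float("nan")
```
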
**Included Files:**

* Hugging Face `Dataset` object containing audio file paths and metadata.
* `probes.csv`: results from perceptual probe trials (`sound_id`, `left`, `right`, `decision`, etc.). Filtered to include only rows for which all referenced audio files exist.
* `triplets.csv`: results from perceptual triplet trials (`Anchor`, `Positive`, `Negative`, `diff`, etc.). Filtered to include only rows for which all referenced audio files exist.
* `missing_audio_files.txt` (optional): lists identifiers from the original CSVs for which no corresponding audio file was found.

A sketch of how to load these files is shown below.

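The dataset and the accompanying CSV files can be pulled from the Hub in a few lines. This is a minimal sketch; the repository id below is a placeholder assumption and should be replaced with the actual Hub repository name.

```python
import pandas as pd
from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Placeholder repository id -- substitute the actual Hub repo name.
REPO_ID = "your-org/vocsim-avian-perception"

# Audio + metadata as a Hugging Face Dataset.
ds = load_dataset(REPO_ID, split="train")

# Perceptual judgment tables shipped alongside the audio data.
probes = pd.read_csv(hf_hub_download(REPO_ID, "probes.csv", repo_type="dataset"))
triplets = pd.read_csv(hf_hub_download(REPO_ID, "triplets.csv", repo_type="dataset"))
```
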
## Dataset Structure

### Data Instances

A typical example in the dataset looks like this:

```python
{
    'audio': {
        'path': '/path/to/datasets/avian_perception/wavs/ZF_M_123_syllable_A.wav',
        'array': array([-0.00024414, -0.00048828, ..., 0.00024414], dtype=float32),
        'sampling_rate': 16000
    },
    'subset': 'avian_perception',
    'index': 42,
    'speaker': 'ZF_M_123',
    'label': 'ZF_M_123',                        # label is set to the speaker ID for this dataset
    'original_name': 'ZF_M_123_syllable_A.wav'  # identifier as used in the CSV files
}
```

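For a quick baseline, the embeddings in step 1 can be as simple as time-averaged log-mel spectrograms computed from the `audio` field. The sketch below is only an illustrative feature extractor, not one of the benchmark's models, and it reuses the placeholder repository id from the loading sketch above; `librosa` is an assumed extra dependency.

```python
import numpy as np
import librosa
from datasets import load_dataset

ds = load_dataset("your-org/vocsim-avian-perception", split="train")  # placeholder repo id

def logmel_embedding(example, n_mels=64):
    """Time-averaged log-mel spectrogram of one syllable."""
    audio = example["audio"]
    mel = librosa.feature.melspectrogram(
        y=np.asarray(audio["array"], dtype=np.float32),
        sr=audio["sampling_rate"],
        n_mels=n_mels,
    )
    return librosa.power_to_db(mel).mean(axis=1)             # shape: (n_mels,)

names = [ex["original_name"] for ex in ds]                   # identifiers matching the CSVs
embeddings = np.stack([logmel_embedding(ex) for ex in ds])   # one row per syllable
```

The resulting `embeddings` and `names` can be passed directly to the `triplet_agreement` sketch above.
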
## Citation Information

If you use this dataset in your work, please cite both the VocSim benchmark paper and the original source data paper:

```bibtex
@unpublished{vocsim2025,
  title  = {VocSim: Zero-Shot Audio Similarity Benchmark for Neural Embeddings},
  author = {Anonymous},
  year   = {2025},
  note   = {Submitted manuscript}
}

@article{zandberg2024bird,
  author    = {Zandberg, Lies and Morfi, Veronica and George, Julia M. and Clayton, David F. and Stowell, Dan and Lachlan, Robert F.},
  title     = {Bird song comparison using deep learning trained from avian perceptual judgments},
  journal   = {PLoS Computational Biology},
  volume    = {20},
  number    = {8},
  year      = {2024},
  month     = {aug},
  pages     = {e1012329},
  doi       = {10.1371/journal.pcbi.1012329},
  publisher = {Public Library of Science}
}
```