# Bird3M Dataset

## Dataset Description
Bird3M is the first synchronized, multi-modal, multi-individual dataset designed for comprehensive behavioral analysis of freely interacting birds, specifically zebra finches, in naturalistic settings. It addresses the critical need for benchmark datasets that integrate precisely synchronized multi-modal recordings to support tasks such as 3D pose estimation, multi-animal tracking, sound source localization, and vocalization attribution. The dataset facilitates research in machine learning, neuroscience, and ethology by enabling the development of robust, unified models for long-term tracking and interpretation of complex social behaviors.
### Purpose
Bird3M addresses a gap among publicly available datasets for multi-modal animal behavior analysis by providing:
- A benchmark for unified machine learning models tackling multiple behavioral tasks.
- A platform for exploring efficient multi-modal information fusion.
- A resource for ethological studies linking movement, vocalization, and social context to uncover neural and evolutionary mechanisms.
## Dataset Structure

The dataset is organized into three splits: `train`, `val`, and `test`, each provided as a Hugging Face `Dataset` object. Each row corresponds to a single bird instance in a video frame, with associated multi-modal data.
### Accessing Splits

```python
from datasets import load_dataset

dataset = load_dataset("anonymous-submission000/bird3m")
train_dataset = dataset["train"]
val_dataset = dataset["val"]
test_dataset = dataset["test"]
```
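As a quick check after loading, you can print the number of rows in each split:

```python
# Number of examples per split
print({name: ds.num_rows for name, ds in dataset.items()})
```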
### Dataset Fields

Each example includes the following fields:

- `bird_id` (`string`): Unique identifier for the bird instance (e.g., "bird_1").
- `back_bbox_2d` (`Sequence[float64]`): 2D bounding box for the back view, format `[x_min, y_min, x_max, y_max]`.
- `back_keypoints_2d` (`Sequence[float64]`): 2D keypoints for the back view, format `[x1, y1, v1, x2, y2, v2, ...]`, where `v` is visibility (0: not labeled, 1: labeled but invisible, 2: visible); a parsing sketch follows this list.
- `back_view_boundary` (`Sequence[int64]`): Back view boundary, format `[x, y, width, height]`.
- `bird_name` (`string`): Biological identifier (e.g., "b13k20_f").
- `video_name` (`string`): Video file identifier (e.g., "BP_2020-10-13_19-44-38_564726_0240000").
- `frame_name` (`string`): Frame filename (e.g., "img00961.png").
- `frame_path` (`Image`): Path to the frame image (`.png`), loaded as a PIL Image.
- `keypoints_3d` (`Sequence[Sequence[float64]]`): 3D keypoints, format `[[x1, y1, z1], [x2, y2, z2], ...]`.
- `radio_path` (`binary`): Path to radio data (`.npz`), stored as binary.
- `reprojection_error` (`Sequence[float64]`): Reprojection errors for the 3D keypoints.
- `side_bbox_2d` (`Sequence[float64]`): 2D bounding box for the side view.
- `side_keypoints_2d` (`Sequence[float64]`): 2D keypoints for the side view.
- `side_view_boundary` (`Sequence[int64]`): Side view boundary.
- `backpack_color` (`string`): Backpack tag color (e.g., "purple").
- `experiment_id` (`string`): Experiment identifier (e.g., "CopExpBP03").
- `split` (`string`): Dataset split ("train", "val", "test").
- `top_bbox_2d` (`Sequence[float64]`): 2D bounding box for the top view.
- `top_keypoints_2d` (`Sequence[float64]`): 2D keypoints for the top view.
- `top_view_boundary` (`Sequence[int64]`): Top view boundary.
- `video_path` (`Video`): Path to the video clip (`.mp4`), loaded as a Video object.
- `acc_ch_map` (`struct`): Maps accelerometer channels to bird identifiers.
- `acc_sr` (`float64`): Accelerometer sampling rate (Hz).
- `has_overlap` (`bool`): Indicates whether accelerometer events overlap with vocalizations.
- `mic_ch_map` (`struct`): Maps microphone channels to descriptions.
- `mic_sr` (`float64`): Microphone sampling rate (Hz).
- `acc_path` (`Audio`): Path to accelerometer audio (`.wav`), loaded as an Audio signal.
- `mic_path` (`Audio`): Path to microphone audio (`.wav`), loaded as an Audio signal.
- `vocalization` (`list[struct]`): Vocalization events, each with:
  - `overlap_type` (`string`): Overlap/attribution confidence.
  - `has_bird` (`bool`): Indicates whether the event is attributed to a bird.
  - `2ddistance` (`bool`): Indicates whether the minimum 2D keypoint distance is < 20 px.
  - `small_2ddistance` (`float64`): Minimum 2D keypoint distance (px).
  - `voc_metadata` (`Sequence[float64]`): Onset/offset times `[onset_sec, offset_sec]`.
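The per-view 2D keypoints are flat lists of `(x, y, v)` triplets. Below is a minimal parsing sketch; the helper `visible_keypoints` is illustrative and not part of the dataset or its baseline code:

```python
import numpy as np

def visible_keypoints(flat_keypoints):
    """Reshape a flat [x1, y1, v1, x2, y2, v2, ...] list into (N, 3) and
    return the (x, y) coordinates of keypoints marked visible (v == 2)."""
    kps = np.asarray(flat_keypoints, dtype=np.float64).reshape(-1, 3)
    return kps[kps[:, 2] == 2, :2]

# Example (with `example` loaded as shown in the next section):
# top_visible = visible_keypoints(example["top_keypoints_2d"])
```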
## How to Use

### Loading and Accessing Data

```python
from io import BytesIO

import numpy as np
from datasets import load_dataset

# Load dataset
dataset = load_dataset("anonymous-submission000/bird3m")
train_data = dataset["train"]

# Access an example
example = train_data[0]

# Access fields
bird_id = example["bird_id"]
keypoints_3d = example["keypoints_3d"]
top_bbox = example["top_bbox_2d"]
vocalizations = example["vocalization"]

# Load multimedia
image = example["frame_path"]    # PIL Image
video = example["video_path"]    # Video object
mic_audio = example["mic_path"]  # Audio signal
acc_audio = example["acc_path"]  # Audio signal

# Access audio arrays
mic_array = mic_audio["array"]
mic_sr = mic_audio["sampling_rate"]
acc_array = acc_audio["array"]
acc_sr = acc_audio["sampling_rate"]

# Load radio data
radio_bytes = example["radio_path"]
try:
    radio_data = np.load(BytesIO(radio_bytes))
    print("Radio data keys:", list(radio_data.keys()))
except Exception as e:
    print(f"Could not load radio data: {e}")

# Print example info
print(f"Bird ID: {bird_id}")
print(f"Number of 3D keypoints: {len(keypoints_3d)}")
print(f"Top Bounding Box: {top_bbox}")
print(f"Number of vocalization events: {len(vocalizations)}")
if vocalizations:
    first_vocal = vocalizations[0]
    print(f"First vocal event metadata: {first_vocal['voc_metadata']}")
    print(f"First vocal event overlap type: {first_vocal['overlap_type']}")
```
### Example: Extracting a Vocalization Audio Clip

```python
if vocalizations and mic_sr:
    onset, offset = vocalizations[0]["voc_metadata"]
    onset_sample = int(onset * mic_sr)
    offset_sample = int(offset * mic_sr)
    vocal_audio_clip = mic_array[onset_sample:offset_sample]
    print(f"Duration of first vocal clip: {offset - onset:.3f} seconds")
    print(f"Shape of first vocal audio clip: {vocal_audio_clip.shape}")
```
Code Availability: Baseline code is available at https://github.com/anonymoussubmission0000/bird3m.
## Citation

```bibtex
@article{2025bird3m,
  title={Bird3M: A Multi-Modal Dataset for Social Behavior Analysis Tool Building},
  author={tbd},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```