| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card |
|---|---|---|---|---|---|---|---|---|
kothasuhas/llama-3b-gold_prefix_k10000_iter0 | kothasuhas | "2025-04-22T11:31:22Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:31:06Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 11728744
num_examples: 10000
- name: validation
num_bytes: 2424848
num_examples: 1000
download_size: 9046400
dataset_size: 14153592
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
MikeGreen2710/first_100k_location | MikeGreen2710 | "2025-04-22T11:30:03Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:29:53Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: location
dtype: int64
splits:
- name: train
num_bytes: 129600000
num_examples: 100000
download_size: 18353516
dataset_size: 129600000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anshulsc/MTabVQA-GRPO-Spider | anshulsc | "2025-04-22T11:29:17Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T05:27:47Z" | ---
dataset_info:
features:
- name: images
sequence: image
- name: problem
dtype: string
- name: answer
struct:
- name: data
sequence:
sequence: string
splits:
- name: train
num_bytes: 452817298.535
num_examples: 2395
download_size: 294213967
dataset_size: 452817298.535
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Eluza133/A12d12s12 | Eluza133 | "2025-04-22T11:28:16Z" | 3,310 | 0 | [
"license:apache-2.0",
"modality:image",
"modality:video",
"region:us"
] | [] | "2025-02-27T15:03:01Z" | ---
license: apache-2.0
---
|
davnas/library-occupancy | davnas | "2025-04-22T11:26:37Z" | 1,177 | 0 | [
"region:us"
] | [] | "2024-12-10T12:50:21Z" | ---
dataset_info:
features:
- name: CommitTime
dtype: timestamp[ns]
- name: Time
dtype: string
- name: Occupancy_main
dtype: int64
- name: Occupancy_southEast
dtype: int64
- name: Occupancy_north
dtype: int64
- name: Occupancy_south
dtype: int64
- name: Occupancy_angdomen
dtype: int64
- name: Occupancy_newton
dtype: int64
- name: Prediction_date
dtype: timestamp[ns]
splits:
- name: train
num_bytes: 179945
num_examples: 2465
download_size: 26804
dataset_size: 179945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Pendrokar/TTS_Arena | Pendrokar | "2025-04-22T11:26:33Z" | 2,096 | 4 | [
"language:en",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"arena"
] | [] | "2024-10-11T16:52:25Z" | ---
configs:
- config_name: summary
data_files:
- split: rejections
path: tts_arena_vote_summary.tsv
- split: rejections_3m
path: tts_arena_vote_summary_3m.tsv
- split: rejections_all
path: tts_arena_vote_summary_all.tsv
sep: "\t"
language:
- en
tags:
- arena
pretty_name: TTS Spaces Arena Votes
---
[TTS Arena's](https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena) DB is an _SQLite_ DB file. The data above is just a summary query that should be useful to TTS developers for evaluating the faults of their models.
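The summary splits are plain TSV files, so they can be read without the SQLite DB. A minimal sketch of parsing one, where the column names are an assumption inferred from the SQL queries below, not a guaranteed schema:

```python
import csv
import io

# Illustrative stand-in for tts_arena_vote_summary.tsv; the column
# names (spokentext, lang, chosen, rejected, times, lastvote) are an
# assumption based on this card's SQL queries.
sample_tsv = (
    "spokentext\tlang\tchosen\trejected\ttimes\tlastvote\n"
    "Hello world.\ten\tModelA/tts\tModelB/tts\t3\t2025-04-01 12:00:00\n"
)

# Parse the tab-separated rows into dictionaries keyed by column name.
rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
for row in rows:
    print(row["rejected"], "lost", row["times"], "time(s) on:", row["spokentext"])
```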
## Why no audio samples?
Unsafe. The output of uncontrolled HuggingFace Spaces cannot be constantly overseen. While uploads could be safeguarded by running an ASR model over them first, something unwanted may still slip through.
## Useful queries for TTS developers and evaluators
### All votes mentioning specified TTS model:
```sql
SELECT
spokentext, lang, chosen, rejected, count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id
WHERE
    vl.chosen = 'Pendrokar/xVASynth-TTS'
    OR vl.rejected = 'Pendrokar/xVASynth-TTS'
GROUP BY spokentext, chosen, rejected
ORDER BY times DESC, spokentext ASC
LIMIT 0, 49999;
```
### All rejections of specified TTS model against another:
```sql
SELECT
spokentext, lang, chosen, rejected, count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id AND vl.rejected = 'Pendrokar/xVASynth-TTS'
GROUP BY spokentext, chosen
ORDER BY spokentext ASC
LIMIT 0, 49999;
```
### All rejections of a TTS model against another:
**The one used in the dataset viewer.** Note that the `chosen` column may include models that the `rejected` model beat more times. That is also why `votes` may sometimes be even fewer than the number of distinct chosen models.
```sql
SELECT
st.spokentext,
vl.rejected,
COUNT(vl.rejected) - COALESCE(chosen_counts.chosen_count, 0) AS votes,
(COUNT(DISTINCT vl.chosen) || ' ' || GROUP_CONCAT(DISTINCT ' ' || vl.chosen)) AS chosen,
MAX(vl.timestamp) AS lastvote
FROM
votelog vl
JOIN
spokentext st ON vl.id = st.votelog_id
LEFT JOIN (
SELECT
st_inner.spokentext,
vl_inner.chosen,
COUNT(vl_inner.chosen) AS chosen_count
FROM
votelog vl_inner
JOIN
spokentext st_inner ON vl_inner.id = st_inner.votelog_id
GROUP BY
st_inner.spokentext,
vl_inner.chosen
ORDER BY
chosen_count DESC
) AS chosen_counts ON st.spokentext = chosen_counts.spokentext AND vl.rejected = chosen_counts.chosen
GROUP BY
st.spokentext,
vl.rejected
HAVING
votes > 0
AND lastvote BETWEEN datetime('now', '-1 month') AND datetime('now', 'localtime')
ORDER BY
((votes * COUNT(DISTINCT vl.chosen)) / 2) DESC,
COUNT(DISTINCT vl.chosen) DESC,
st.spokentext ASC;
```
If you use this data in your publication, please cite us!
Copy the BibTeX citation to cite this source:
```bibtex
@misc{tts-arena,
title = {Text to Speech Arena - Pendrokar's HF Spaces Fork},
author = {mrfakename and Srivastav, Vaibhav and Fourrier, Clémentine and Pouget, Lucain and Lacombe, Yoach and main and Gandhi, Sanchit},
year = 2024,
publisher = {Hugging Face},
howpublished = "\\url{https://huggingface.co/spaces/TTS-AGI/TTS-Arena}"
}
``` |
nicchio816/x_dataset_111 | nicchio816 | "2025-04-22T11:45:54Z" | 157 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-04-13T16:24:27Z" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** nicchio816/x_dataset_111
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5F1t5ddY4PW34FQBK4iHVi1CbhySSbV5Yr4swpChryepB1pn
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though datasets can be multilingual due to the decentralized nature of their creation.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
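Since the stream has no fixed splits, a custom split can be derived from each record's timestamp. A minimal sketch, where the ISO-8601 format with a trailing `Z` and the cutoff date are illustrative assumptions:

```python
from datetime import datetime

# Toy records standing in for tweets from the continuously updated stream.
records = [
    {"text": "older tweet", "datetime": "2025-04-20T09:00:00Z"},
    {"text": "newer tweet", "datetime": "2025-04-22T10:30:00Z"},
]

def parse(ts: str) -> datetime:
    # Accept a trailing 'Z' by mapping it to an explicit UTC offset.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Everything before the cutoff becomes train, the rest becomes test.
cutoff = parse("2025-04-21T00:00:00Z")
train = [r for r in records if parse(r["datetime"]) < cutoff]
test = [r for r in records if parse(r["datetime"]) >= cutoff]
```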
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{nicchio8162025datauniversex_dataset_111,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={nicchio816},
year={2025},
url={https://huggingface.co/datasets/nicchio816/x_dataset_111},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 0
- **Date Range:** 2025-04-22T11:45:53Z to 2025-04-22T11:45:53Z
- **Last Updated:** 2025-04-22T11:45:53Z
### Data Distribution
- Tweets with hashtags: 0.00%
- Tweets without hashtags: 100.00%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-04-21T15:28:02Z | 952903 | 952903 |
| 2025-04-21T16:34:00Z | 883651 | 1836554 |
| 2025-04-21T17:41:26Z | 888964 | 2725518 |
| 2025-04-21T18:49:51Z | 901683 | 3627201 |
| 2025-04-21T19:59:16Z | 966991 | 4594192 |
| 2025-04-21T21:08:03Z | 974205 | 5568397 |
| 2025-04-21T22:17:19Z | 975262 | 6543659 |
| 2025-04-21T23:25:51Z | 975876 | 7519535 |
| 2025-04-22T00:34:32Z | 976149 | 8495684 |
| 2025-04-22T01:44:17Z | 977365 | 9473049 |
| 2025-04-22T02:41:59Z | 990568 | 10463617 |
| 2025-04-22T03:21:59Z | 985716 | 11449333 |
| 2025-04-22T04:00:51Z | 982225 | 12431558 |
| 2025-04-22T04:39:29Z | 979298 | 13410856 |
| 2025-04-22T05:17:59Z | 969267 | 14380123 |
| 2025-04-22T05:56:55Z | 967362 | 15347485 |
| 2025-04-22T06:35:47Z | 958869 | 16306354 |
| 2025-04-22T07:15:02Z | 959513 | 17265867 |
| 2025-04-22T07:53:25Z | 968789 | 18234656 |
| 2025-04-22T08:31:37Z | 971638 | 19206294 |
| 2025-04-22T09:10:11Z | 979484 | 20185778 |
| 2025-04-22T09:49:04Z | 981272 | 21167050 |
| 2025-04-22T10:28:06Z | 982880 | 22149930 |
| 2025-04-22T11:06:38Z | 982778 | 23132708 |
| 2025-04-22T11:45:46Z | 981714 | 24114422 |
| 2025-04-22T11:45:53Z | 0 | 24114422 |
|
nicchio816/reddit_dataset_111 | nicchio816 | "2025-04-22T11:45:53Z" | 283 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-04-21T15:20:14Z" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** nicchio816/reddit_dataset_111
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5F1t5ddY4PW34FQBK4iHVi1CbhySSbV5Yr4swpChryepB1pn
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: mostly English, though datasets can be multilingual due to the decentralized nature of their creation.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{nicchio8162025datauniversereddit_dataset_111,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={nicchio816},
year={2025},
url={https://huggingface.co/datasets/nicchio816/reddit_dataset_111},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 24114422
- **Date Range:** 2025-03-18T20:00:00Z to 2025-04-21T20:00:00Z
- **Last Updated:** 2025-04-22T11:45:50Z
### Data Distribution
- Posts: 7.79%
- Comments: 92.21%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/AskReddit | 70459 | 0.29% |
| 2 | r/politics | 67009 | 0.28% |
| 3 | r/wallstreetbets | 60396 | 0.25% |
| 4 | r/worldnews | 34129 | 0.14% |
| 5 | r/teenagers | 33314 | 0.14% |
| 6 | r/europe | 32889 | 0.14% |
| 7 | r/canada | 30825 | 0.13% |
| 8 | r/gaming | 29651 | 0.12% |
| 9 | r/AITAH | 29435 | 0.12% |
| 10 | r/pcmasterrace | 29148 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-04-21T15:28:02Z | 952903 | 952903 |
| 2025-04-21T16:34:00Z | 883651 | 1836554 |
| 2025-04-21T17:41:26Z | 888964 | 2725518 |
| 2025-04-21T18:49:51Z | 901683 | 3627201 |
| 2025-04-21T19:59:16Z | 966991 | 4594192 |
| 2025-04-21T21:08:03Z | 974205 | 5568397 |
| 2025-04-21T22:17:19Z | 975262 | 6543659 |
| 2025-04-21T23:25:51Z | 975876 | 7519535 |
| 2025-04-22T00:34:32Z | 976149 | 8495684 |
| 2025-04-22T01:44:17Z | 977365 | 9473049 |
| 2025-04-22T02:41:59Z | 990568 | 10463617 |
| 2025-04-22T03:21:59Z | 985716 | 11449333 |
| 2025-04-22T04:00:51Z | 982225 | 12431558 |
| 2025-04-22T04:39:29Z | 979298 | 13410856 |
| 2025-04-22T05:17:59Z | 969267 | 14380123 |
| 2025-04-22T05:56:55Z | 967362 | 15347485 |
| 2025-04-22T06:35:47Z | 958869 | 16306354 |
| 2025-04-22T07:15:02Z | 959513 | 17265867 |
| 2025-04-22T07:53:25Z | 968789 | 18234656 |
| 2025-04-22T08:31:37Z | 971638 | 19206294 |
| 2025-04-22T09:10:11Z | 979484 | 20185778 |
| 2025-04-22T09:49:04Z | 981272 | 21167050 |
| 2025-04-22T10:28:06Z | 982880 | 22149930 |
| 2025-04-22T11:06:38Z | 982778 | 23132708 |
| 2025-04-22T11:45:46Z | 981714 | 24114422 |
|
OpenGVLab/InternVL-Data | OpenGVLab | "2025-04-22T11:45:37Z" | 119 | 22 | [
"task_categories:image-to-text",
"task_categories:question-answering",
"language:multilingual",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"modality:image",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"region:us"
] | [
"image-to-text",
"question-answering"
] | "2025-04-12T07:26:00Z" | ---
language:
- multilingual
license: cc-by-4.0
task_categories:
- image-to-text
- question-answering
size_categories:
- 10M<n<100M
---
# InternVL-Data
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>
## Introduction
Welcome to the InternVL3 Open Dataset! This dataset is designed to support research and development in the field of multimodal large language models (MLLMs), specifically for tasks involving image, text, and video understanding. The dataset is composed of data collected from various sources, including curated open-source datasets, self-synthesized datasets, and data gathered from the internet.
Our first phase plan is to release the SFT data for InternVL2.5 and InternVL3. We will continue uploading the data over the next two to four weeks, starting with the SFT data for InternVL2.5, followed by the SFT data for InternVL3. We kindly ask for your patience as we continue to release the data in the coming weeks.
## Data List
### InternVL2.5-SFT
TODO
### InternVL3-SFT
TODO
## License
This dataset is released under the CC BY 4.0 License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{zhu2025internvl3,
title={InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models},
author={Zhu, Jinguo and Wang, Weiyun and Chen, Zhe and Liu, Zhaoyang and Ye, Shenglong and Gu, Lixin and Duan, Yuchen and Tian, Hao and Su, Weijie and Shao, Jie and others},
journal={arXiv preprint arXiv:2504.10479},
year={2025}
}
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}
``` |
pooja-gani/qasper-sentence-classification | pooja-gani | "2025-04-22T11:45:30Z" | 23 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2025-04-21T08:35:32Z" | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 9641942
num_examples: 3733
- name: dev
num_bytes: 4899712
num_examples: 1882
download_size: 3083617
dataset_size: 14541654
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
|
yhaha/EmoVoice-DB | yhaha | "2025-04-22T11:45:15Z" | 0 | 0 | [
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"arxiv:2504.12867",
"region:us",
"Emotional_TTS"
] | [] | "2025-04-22T02:17:37Z" | ---
license: mit
language:
- en
tags:
- Emotional_TTS
size_categories:
- 10K<n<100K
---
# Dataset Card for EmoVoice-DB
## Overview of EmoVoice-DB
EmoVoice-DB is an English emotional speech dataset featuring fine-grained emotion labels expressed through natural language descriptions. This dataset contains over 20,000 emotionally expressive speech samples, each annotated with detailed and precise emotional descriptions, totaling approximately 40 hours of audio. EmoVoice-DB is built using synthetic data generated by the powerful [GPT-4o](https://platform.openai.com/docs/models/gpt-4o) and [GPT-4o-audio](https://platform.openai.com/docs/models/gpt-4o-audio-preview) models.
The EmoVoice-DB dataset spans seven core emotion categories— angry, happy, sad, surprised, disgusted, fearful, and neutral—with a balanced distribution of samples across all emotional classes. It features a diverse range of textual content, including novel excerpts, dialogue, and observational phrases. Additionally, the dataset includes speech samples of five distinct speaker timbres, enhancing the diversity of vocal expression. All emotional speech samples are synthesized using the advanced GPT-4o-audio model, ensuring precise emotional control, strong expressiveness, and human-level naturalness. A detailed statistical overview and examples of the dataset are provided in Table below. EmoVoice-DB provides a valuable resource for advancing research in fields such as emotional speech synthesis, speech emotion recognition, and emotion analysis.
## Statistics and Examples of EmoVoice-DB Dataset
| Emotion | Count | Duration (h) | Text Example | Emotion Description Example |
|------------|-------|--------------|-------------------------------------------------------------------------|---------------------------------------------------------------------|
| Angry | 3486 | 5.76 | Wobbly tables ruin everything! | Expressing aggravated displeasure and discontent. |
| Happy | 3269 | 6.02 | You did an AMAZING job on the presentation! | Expressing supportive joy and pride in someone's accomplishment. |
| Sad | 3174 | 6.94 | Cracked earth stretches for miles, nothing GREEN to soothe the eye. | Conveying a pervasive sense of desolation and despair. |
| Surprised | 3072 | 5.67 | The curtain rose without warning, revealing impossible colors and shapes. | Evoking an excited and bewildered wonder in a rising, quickened cadence. |
| Fearful | 2961 | 5.52 | Moonlight glinted off the knife, casting shadows that DANCED like ghosts. | Emanating a chilling foreboding, underscored by a quivering voice. |
| Disgusted | 2950 | 5.59 | How could anyone EVER think that brown and pink match! | Expressing a moment of incredulous disdain and distaste. |
| Neutral | 3188 | 4.95 | Leaves rustled in the evening breeze, swaying gently to unseen rhythms. | Emanating a peaceful, contemplative atmosphere. |
| **Sum** | **22100** | **40.45** | | |
## Dataset Split
| Split | \#Instances |
|---------------|------------------------------|
| Train | 63150 (21050 speech) |
| Validation | 350 |
| Test | 700 |
## Dataset Instance
```
{
"key": "gpt4o_388_angry_ash",
"source_text": "The kettle SCREAMED as it reached boiling point, mirroring my inner tension.", # Text
"target_text": "The kettle SCREAMED as it reached boiling point, mirroring my inner tension.", # Text
"emotion": "angry", # Coarse emotion category
 "emotion_text_prompt": "Parallel emotions with rising heat, an audible cry of pent emotion.", # Fine-grained emotion description
"target_wav": "EmoVoice-DB/angry/gpt4o_388_angry_ash.wav", # Ground truth speech
"answer_cosyvoice_speech_token": [626, 3094, 96, 441, 167,...], # 50HZ CosyVoice Semantic Token
 "neutral_speaker_wav": "EmoVoice-DB/neutral/gpt4o_23948_neutral_ash.wav" # Prompt speech for inference (test.jsonl only)
}
```
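An instance like the one above can be read line by line from a JSONL split file. A minimal sketch, where the file name and the reduced field set are assumptions based on the example instance:

```python
import json

# One JSONL line standing in for an entry of a split file such as
# test.jsonl; only a subset of the fields shown above is used here.
line = json.dumps({
    "key": "gpt4o_388_angry_ash",
    "emotion": "angry",
    "emotion_text_prompt": "Parallel emotions with rising heat.",
    "target_wav": "EmoVoice-DB/angry/gpt4o_388_angry_ash.wav",
})

# Each line is an independent JSON object.
entry = json.loads(line)
print(entry["emotion"], "->", entry["target_wav"])
```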
## Dataset Creation
Step 1: Generating text and emotional descriptions: Pairs of texts and corresponding emotional descriptions are generated using the GPT-4o model.
Step 2: Generating emotion speech: Emotional speech samples are generated by prompting GPT-4o-audio model using both text and emotion descriptions constructed earlier.
Step 3: Post-processing: Samples with high WER are filtered out.
Step 4: Data augmentation: GPT-4o is leveraged to rephrase emotional descriptions while maintaining the original meanings. For each data entry, we generate two rephrased versions, resulting in three semantically equivalent but lexically diverse descriptions per emotional speech sample.
(For more details, please refer to the paper.)
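The post-processing step (Step 3) can be sketched as a WER filter between the source text and an ASR transcript. This is an illustrative sketch, not the authors' implementation; the 0.1 threshold is an assumption:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Classic dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Keep only samples whose ASR transcript closely matches the source text.
samples = [
    {"text": "the kettle screamed", "asr": "the kettle screamed"},
    {"text": "the kettle screamed", "asr": "a cat yelled loudly"},
]
kept = [s for s in samples if wer(s["text"], s["asr"]) <= 0.1]
```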
## Paper and Citation
[EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting](https://arxiv.org/abs/2504.12867).
```
@article{yang2025emovoice,
title={EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting},
author={Yang, Guanrou and Yang, Chen and Chen, Qian and Ma, Ziyang and Chen, Wenxi and Wang, Wen and Wang, Tianrui and Yang, Yifan and Niu, Zhikang and Liu, Wenrui and others},
journal={arXiv preprint arXiv:2504.12867},
year={2025}
}
```
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
## Contact
[email protected] |
martinaianaro99/SC_ViLT_L4_F | martinaianaro99 | "2025-04-22T11:45:03Z" | 7 | 0 | [
"region:us"
] | [] | "2025-04-16T09:05:20Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int32
- name: token_type_ids
sequence: int32
- name: labels
sequence: int32
- name: pixel_values
sequence:
sequence:
sequence: float32
- name: masked_indices
sequence: int32
- name: metadata
struct:
- name: chunk_index
dtype: int64
- name: include_only_masked_tokens_images
dtype: bool
- name: logic_name
dtype: string
- name: mask_images_equally
dtype: bool
splits:
- name: SC_L4_img_F_chunk0
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk1
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk2
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk3
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk4
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk5
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk6
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk7
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk8
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk9
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk10
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk11
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk12
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk13
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk14
num_bytes: 1810438036
num_examples: 1020
- name: SC_L4_img_F_chunk15
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk16
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk17
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk18
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk19
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk20
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk21
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk22
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk23
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk24
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk25
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk26
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk27
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk28
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk29
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk30
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk31
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk32
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk33
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk34
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk35
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk36
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk37
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk38
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk39
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk40
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk41
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk42
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk43
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk44
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk45
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk46
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk47
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk48
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk49
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk50
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk51
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk52
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk53
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk54
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk55
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk56
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk57
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk58
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk59
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk60
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk61
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk62
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk63
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk64
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk65
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk66
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk67
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk68
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk69
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk70
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk71
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk72
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk73
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk74
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk75
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk76
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk77
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk78
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk79
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk80
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk81
num_bytes: 1805113219
num_examples: 1017
- name: SC_L4_img_F_chunk82
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk83
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk84
num_bytes: 1810438036
num_examples: 1020
- name: SC_L4_img_F_chunk85
num_bytes: 1808663097
num_examples: 1019
- name: SC_L4_img_F_chunk86
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk87
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk88
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk89
num_bytes: 1810438036
num_examples: 1020
- name: SC_L4_img_F_chunk90
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk91
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk92
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk93
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk94
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk95
num_bytes: 1810438036
num_examples: 1020
- name: SC_L4_img_F_chunk96
num_bytes: 1806888158
num_examples: 1018
- name: SC_L4_img_F_chunk97
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk98
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk99
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk100
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk101
num_bytes: 1808663097
num_examples: 1019
- name: SC_L4_img_F_chunk102
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk103
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk104
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk105
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk106
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk107
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk108
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk109
num_bytes: 1810438036
num_examples: 1020
- name: SC_L4_img_F_chunk110
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk111
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk112
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk113
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk114
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk115
num_bytes: 1810438036
num_examples: 1020
- name: SC_L4_img_F_chunk116
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk117
num_bytes: 1808663097
num_examples: 1019
- name: SC_L4_img_F_chunk118
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk119
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk120
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk121
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk122
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk123
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk124
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk125
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk126
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk127
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk128
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk129
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk130
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk131
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk132
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk133
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk134
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk135
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk136
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk137
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk138
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk139
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk140
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk141
num_bytes: 1810438036
num_examples: 1020
- name: SC_L4_img_F_chunk142
num_bytes: 1808663097
num_examples: 1019
- name: SC_L4_img_F_chunk143
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk144
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk145
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk146
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk147
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk148
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk149
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk150
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk151
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk152
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk153
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk154
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk155
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk156
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk157
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk158
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk159
num_bytes: 1817537792
num_examples: 1024
- name: SC_L4_img_F_chunk160
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk161
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk162
num_bytes: 1810438036
num_examples: 1020
- name: SC_L4_img_F_chunk163
num_bytes: 1813987914
num_examples: 1022
- name: SC_L4_img_F_chunk164
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk165
num_bytes: 1815762853
num_examples: 1023
- name: SC_L4_img_F_chunk166
num_bytes: 1808663097
num_examples: 1019
- name: SC_L4_img_F_chunk167
num_bytes: 1812212975
num_examples: 1021
- name: SC_L4_img_F_chunk168
num_bytes: 1808663097
num_examples: 1019
- name: SC_L4_img_F_chunk169
num_bytes: 1813987914
num_examples: 1022
download_size: 6655954535
dataset_size: 308663710559
configs:
- config_name: default
data_files:
- split: SC_L4_img_F_chunk0
path: data/SC_L4_img_F_chunk0-*
- split: SC_L4_img_F_chunk1
path: data/SC_L4_img_F_chunk1-*
- split: SC_L4_img_F_chunk2
path: data/SC_L4_img_F_chunk2-*
- split: SC_L4_img_F_chunk3
path: data/SC_L4_img_F_chunk3-*
- split: SC_L4_img_F_chunk4
path: data/SC_L4_img_F_chunk4-*
- split: SC_L4_img_F_chunk5
path: data/SC_L4_img_F_chunk5-*
- split: SC_L4_img_F_chunk6
path: data/SC_L4_img_F_chunk6-*
- split: SC_L4_img_F_chunk7
path: data/SC_L4_img_F_chunk7-*
- split: SC_L4_img_F_chunk8
path: data/SC_L4_img_F_chunk8-*
- split: SC_L4_img_F_chunk9
path: data/SC_L4_img_F_chunk9-*
- split: SC_L4_img_F_chunk10
path: data/SC_L4_img_F_chunk10-*
- split: SC_L4_img_F_chunk11
path: data/SC_L4_img_F_chunk11-*
- split: SC_L4_img_F_chunk12
path: data/SC_L4_img_F_chunk12-*
- split: SC_L4_img_F_chunk13
path: data/SC_L4_img_F_chunk13-*
- split: SC_L4_img_F_chunk14
path: data/SC_L4_img_F_chunk14-*
- split: SC_L4_img_F_chunk15
path: data/SC_L4_img_F_chunk15-*
- split: SC_L4_img_F_chunk16
path: data/SC_L4_img_F_chunk16-*
- split: SC_L4_img_F_chunk17
path: data/SC_L4_img_F_chunk17-*
- split: SC_L4_img_F_chunk18
path: data/SC_L4_img_F_chunk18-*
- split: SC_L4_img_F_chunk19
path: data/SC_L4_img_F_chunk19-*
- split: SC_L4_img_F_chunk20
path: data/SC_L4_img_F_chunk20-*
- split: SC_L4_img_F_chunk21
path: data/SC_L4_img_F_chunk21-*
- split: SC_L4_img_F_chunk22
path: data/SC_L4_img_F_chunk22-*
- split: SC_L4_img_F_chunk23
path: data/SC_L4_img_F_chunk23-*
- split: SC_L4_img_F_chunk24
path: data/SC_L4_img_F_chunk24-*
- split: SC_L4_img_F_chunk25
path: data/SC_L4_img_F_chunk25-*
- split: SC_L4_img_F_chunk26
path: data/SC_L4_img_F_chunk26-*
- split: SC_L4_img_F_chunk27
path: data/SC_L4_img_F_chunk27-*
- split: SC_L4_img_F_chunk28
path: data/SC_L4_img_F_chunk28-*
- split: SC_L4_img_F_chunk29
path: data/SC_L4_img_F_chunk29-*
- split: SC_L4_img_F_chunk30
path: data/SC_L4_img_F_chunk30-*
- split: SC_L4_img_F_chunk31
path: data/SC_L4_img_F_chunk31-*
- split: SC_L4_img_F_chunk32
path: data/SC_L4_img_F_chunk32-*
- split: SC_L4_img_F_chunk33
path: data/SC_L4_img_F_chunk33-*
- split: SC_L4_img_F_chunk34
path: data/SC_L4_img_F_chunk34-*
- split: SC_L4_img_F_chunk35
path: data/SC_L4_img_F_chunk35-*
- split: SC_L4_img_F_chunk36
path: data/SC_L4_img_F_chunk36-*
- split: SC_L4_img_F_chunk37
path: data/SC_L4_img_F_chunk37-*
- split: SC_L4_img_F_chunk38
path: data/SC_L4_img_F_chunk38-*
- split: SC_L4_img_F_chunk39
path: data/SC_L4_img_F_chunk39-*
- split: SC_L4_img_F_chunk40
path: data/SC_L4_img_F_chunk40-*
- split: SC_L4_img_F_chunk41
path: data/SC_L4_img_F_chunk41-*
- split: SC_L4_img_F_chunk42
path: data/SC_L4_img_F_chunk42-*
- split: SC_L4_img_F_chunk43
path: data/SC_L4_img_F_chunk43-*
- split: SC_L4_img_F_chunk44
path: data/SC_L4_img_F_chunk44-*
- split: SC_L4_img_F_chunk45
path: data/SC_L4_img_F_chunk45-*
- split: SC_L4_img_F_chunk46
path: data/SC_L4_img_F_chunk46-*
- split: SC_L4_img_F_chunk47
path: data/SC_L4_img_F_chunk47-*
- split: SC_L4_img_F_chunk48
path: data/SC_L4_img_F_chunk48-*
- split: SC_L4_img_F_chunk49
path: data/SC_L4_img_F_chunk49-*
- split: SC_L4_img_F_chunk50
path: data/SC_L4_img_F_chunk50-*
- split: SC_L4_img_F_chunk51
path: data/SC_L4_img_F_chunk51-*
- split: SC_L4_img_F_chunk52
path: data/SC_L4_img_F_chunk52-*
- split: SC_L4_img_F_chunk53
path: data/SC_L4_img_F_chunk53-*
- split: SC_L4_img_F_chunk54
path: data/SC_L4_img_F_chunk54-*
- split: SC_L4_img_F_chunk55
path: data/SC_L4_img_F_chunk55-*
- split: SC_L4_img_F_chunk56
path: data/SC_L4_img_F_chunk56-*
- split: SC_L4_img_F_chunk57
path: data/SC_L4_img_F_chunk57-*
- split: SC_L4_img_F_chunk58
path: data/SC_L4_img_F_chunk58-*
- split: SC_L4_img_F_chunk59
path: data/SC_L4_img_F_chunk59-*
- split: SC_L4_img_F_chunk60
path: data/SC_L4_img_F_chunk60-*
- split: SC_L4_img_F_chunk61
path: data/SC_L4_img_F_chunk61-*
- split: SC_L4_img_F_chunk62
path: data/SC_L4_img_F_chunk62-*
- split: SC_L4_img_F_chunk63
path: data/SC_L4_img_F_chunk63-*
- split: SC_L4_img_F_chunk64
path: data/SC_L4_img_F_chunk64-*
- split: SC_L4_img_F_chunk65
path: data/SC_L4_img_F_chunk65-*
- split: SC_L4_img_F_chunk66
path: data/SC_L4_img_F_chunk66-*
- split: SC_L4_img_F_chunk67
path: data/SC_L4_img_F_chunk67-*
- split: SC_L4_img_F_chunk68
path: data/SC_L4_img_F_chunk68-*
- split: SC_L4_img_F_chunk69
path: data/SC_L4_img_F_chunk69-*
- split: SC_L4_img_F_chunk70
path: data/SC_L4_img_F_chunk70-*
- split: SC_L4_img_F_chunk71
path: data/SC_L4_img_F_chunk71-*
- split: SC_L4_img_F_chunk72
path: data/SC_L4_img_F_chunk72-*
- split: SC_L4_img_F_chunk73
path: data/SC_L4_img_F_chunk73-*
- split: SC_L4_img_F_chunk74
path: data/SC_L4_img_F_chunk74-*
- split: SC_L4_img_F_chunk75
path: data/SC_L4_img_F_chunk75-*
- split: SC_L4_img_F_chunk76
path: data/SC_L4_img_F_chunk76-*
- split: SC_L4_img_F_chunk77
path: data/SC_L4_img_F_chunk77-*
- split: SC_L4_img_F_chunk78
path: data/SC_L4_img_F_chunk78-*
- split: SC_L4_img_F_chunk79
path: data/SC_L4_img_F_chunk79-*
- split: SC_L4_img_F_chunk80
path: data/SC_L4_img_F_chunk80-*
- split: SC_L4_img_F_chunk81
path: data/SC_L4_img_F_chunk81-*
- split: SC_L4_img_F_chunk82
path: data/SC_L4_img_F_chunk82-*
- split: SC_L4_img_F_chunk83
path: data/SC_L4_img_F_chunk83-*
- split: SC_L4_img_F_chunk84
path: data/SC_L4_img_F_chunk84-*
- split: SC_L4_img_F_chunk85
path: data/SC_L4_img_F_chunk85-*
- split: SC_L4_img_F_chunk86
path: data/SC_L4_img_F_chunk86-*
- split: SC_L4_img_F_chunk87
path: data/SC_L4_img_F_chunk87-*
- split: SC_L4_img_F_chunk88
path: data/SC_L4_img_F_chunk88-*
- split: SC_L4_img_F_chunk89
path: data/SC_L4_img_F_chunk89-*
- split: SC_L4_img_F_chunk90
path: data/SC_L4_img_F_chunk90-*
- split: SC_L4_img_F_chunk91
path: data/SC_L4_img_F_chunk91-*
- split: SC_L4_img_F_chunk92
path: data/SC_L4_img_F_chunk92-*
- split: SC_L4_img_F_chunk93
path: data/SC_L4_img_F_chunk93-*
- split: SC_L4_img_F_chunk94
path: data/SC_L4_img_F_chunk94-*
- split: SC_L4_img_F_chunk95
path: data/SC_L4_img_F_chunk95-*
- split: SC_L4_img_F_chunk96
path: data/SC_L4_img_F_chunk96-*
- split: SC_L4_img_F_chunk97
path: data/SC_L4_img_F_chunk97-*
- split: SC_L4_img_F_chunk98
path: data/SC_L4_img_F_chunk98-*
- split: SC_L4_img_F_chunk99
path: data/SC_L4_img_F_chunk99-*
- split: SC_L4_img_F_chunk100
path: data/SC_L4_img_F_chunk100-*
- split: SC_L4_img_F_chunk101
path: data/SC_L4_img_F_chunk101-*
- split: SC_L4_img_F_chunk102
path: data/SC_L4_img_F_chunk102-*
- split: SC_L4_img_F_chunk103
path: data/SC_L4_img_F_chunk103-*
- split: SC_L4_img_F_chunk104
path: data/SC_L4_img_F_chunk104-*
- split: SC_L4_img_F_chunk105
path: data/SC_L4_img_F_chunk105-*
- split: SC_L4_img_F_chunk106
path: data/SC_L4_img_F_chunk106-*
- split: SC_L4_img_F_chunk107
path: data/SC_L4_img_F_chunk107-*
- split: SC_L4_img_F_chunk108
path: data/SC_L4_img_F_chunk108-*
- split: SC_L4_img_F_chunk109
path: data/SC_L4_img_F_chunk109-*
- split: SC_L4_img_F_chunk110
path: data/SC_L4_img_F_chunk110-*
- split: SC_L4_img_F_chunk111
path: data/SC_L4_img_F_chunk111-*
- split: SC_L4_img_F_chunk112
path: data/SC_L4_img_F_chunk112-*
- split: SC_L4_img_F_chunk113
path: data/SC_L4_img_F_chunk113-*
- split: SC_L4_img_F_chunk114
path: data/SC_L4_img_F_chunk114-*
- split: SC_L4_img_F_chunk115
path: data/SC_L4_img_F_chunk115-*
- split: SC_L4_img_F_chunk116
path: data/SC_L4_img_F_chunk116-*
- split: SC_L4_img_F_chunk117
path: data/SC_L4_img_F_chunk117-*
- split: SC_L4_img_F_chunk118
path: data/SC_L4_img_F_chunk118-*
- split: SC_L4_img_F_chunk119
path: data/SC_L4_img_F_chunk119-*
- split: SC_L4_img_F_chunk120
path: data/SC_L4_img_F_chunk120-*
- split: SC_L4_img_F_chunk121
path: data/SC_L4_img_F_chunk121-*
- split: SC_L4_img_F_chunk122
path: data/SC_L4_img_F_chunk122-*
- split: SC_L4_img_F_chunk123
path: data/SC_L4_img_F_chunk123-*
- split: SC_L4_img_F_chunk124
path: data/SC_L4_img_F_chunk124-*
- split: SC_L4_img_F_chunk125
path: data/SC_L4_img_F_chunk125-*
- split: SC_L4_img_F_chunk126
path: data/SC_L4_img_F_chunk126-*
- split: SC_L4_img_F_chunk127
path: data/SC_L4_img_F_chunk127-*
- split: SC_L4_img_F_chunk128
path: data/SC_L4_img_F_chunk128-*
- split: SC_L4_img_F_chunk129
path: data/SC_L4_img_F_chunk129-*
- split: SC_L4_img_F_chunk130
path: data/SC_L4_img_F_chunk130-*
- split: SC_L4_img_F_chunk131
path: data/SC_L4_img_F_chunk131-*
- split: SC_L4_img_F_chunk132
path: data/SC_L4_img_F_chunk132-*
- split: SC_L4_img_F_chunk133
path: data/SC_L4_img_F_chunk133-*
- split: SC_L4_img_F_chunk134
path: data/SC_L4_img_F_chunk134-*
- split: SC_L4_img_F_chunk135
path: data/SC_L4_img_F_chunk135-*
- split: SC_L4_img_F_chunk136
path: data/SC_L4_img_F_chunk136-*
- split: SC_L4_img_F_chunk137
path: data/SC_L4_img_F_chunk137-*
- split: SC_L4_img_F_chunk138
path: data/SC_L4_img_F_chunk138-*
- split: SC_L4_img_F_chunk139
path: data/SC_L4_img_F_chunk139-*
- split: SC_L4_img_F_chunk140
path: data/SC_L4_img_F_chunk140-*
- split: SC_L4_img_F_chunk141
path: data/SC_L4_img_F_chunk141-*
- split: SC_L4_img_F_chunk142
path: data/SC_L4_img_F_chunk142-*
- split: SC_L4_img_F_chunk143
path: data/SC_L4_img_F_chunk143-*
- split: SC_L4_img_F_chunk144
path: data/SC_L4_img_F_chunk144-*
- split: SC_L4_img_F_chunk145
path: data/SC_L4_img_F_chunk145-*
- split: SC_L4_img_F_chunk146
path: data/SC_L4_img_F_chunk146-*
- split: SC_L4_img_F_chunk147
path: data/SC_L4_img_F_chunk147-*
- split: SC_L4_img_F_chunk148
path: data/SC_L4_img_F_chunk148-*
- split: SC_L4_img_F_chunk149
path: data/SC_L4_img_F_chunk149-*
- split: SC_L4_img_F_chunk150
path: data/SC_L4_img_F_chunk150-*
- split: SC_L4_img_F_chunk151
path: data/SC_L4_img_F_chunk151-*
- split: SC_L4_img_F_chunk152
path: data/SC_L4_img_F_chunk152-*
- split: SC_L4_img_F_chunk153
path: data/SC_L4_img_F_chunk153-*
- split: SC_L4_img_F_chunk154
path: data/SC_L4_img_F_chunk154-*
- split: SC_L4_img_F_chunk155
path: data/SC_L4_img_F_chunk155-*
- split: SC_L4_img_F_chunk156
path: data/SC_L4_img_F_chunk156-*
- split: SC_L4_img_F_chunk157
path: data/SC_L4_img_F_chunk157-*
- split: SC_L4_img_F_chunk158
path: data/SC_L4_img_F_chunk158-*
- split: SC_L4_img_F_chunk159
path: data/SC_L4_img_F_chunk159-*
- split: SC_L4_img_F_chunk160
path: data/SC_L4_img_F_chunk160-*
- split: SC_L4_img_F_chunk161
path: data/SC_L4_img_F_chunk161-*
- split: SC_L4_img_F_chunk162
path: data/SC_L4_img_F_chunk162-*
- split: SC_L4_img_F_chunk163
path: data/SC_L4_img_F_chunk163-*
- split: SC_L4_img_F_chunk164
path: data/SC_L4_img_F_chunk164-*
- split: SC_L4_img_F_chunk165
path: data/SC_L4_img_F_chunk165-*
- split: SC_L4_img_F_chunk166
path: data/SC_L4_img_F_chunk166-*
- split: SC_L4_img_F_chunk167
path: data/SC_L4_img_F_chunk167-*
- split: SC_L4_img_F_chunk168
path: data/SC_L4_img_F_chunk168-*
- split: SC_L4_img_F_chunk169
path: data/SC_L4_img_F_chunk169-*
---
|
MultiBridge/LnNor_raw | MultiBridge | "2025-04-22T11:44:48Z" | 0 | 0 | [
"language:no",
"language:en",
"language:pl",
"license:cc-by-4.0",
"region:us"
] | [] | "2025-01-29T19:38:49Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-4.0
language:
- 'no'
- en
- pl
pretty_name: LnNorRaw
---
# Dataset Card for the LnNor Corpus
<!-- This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). -->
A multilingual dataset of high-quality speech recordings in Norwegian, English, and Polish, designed for research into cross-linguistic influence, multilingual language acquisition, and applications in NLP and speech processing such as ASR, TTS, and linguistic variability modeling. The dataset includes 2,783 recordings, totaling 101 hours, with a size of 50.1 GB. These recordings capture phonological, syntactic, and semantic variability through structured tasks like reading, picture description, and spontaneous conversation.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Magdalena Wrembel, Krzysztof Hwaszcz, Agnieszka Pludra, Anna Skałba, Jarosław Weckwerth, Kamil Malarski, Zuzanna Ewa Cal, Hanna Kędzierska, Tristan Czarnecki-Verner, Anna Balas, Kamil Kaźmierski, Sylwiusz Żychliński, Justyna Gruszecka
- **Funded by:** EEA Financial Mechanism and Norwegian Financial Mechanism
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** Norwegian, English, Polish
- **License:** Creative Commons Attribution 4.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://adim.web.amu.edu.pl/en/lnnor-corpus/
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
- **Multilingual ASR training:** Supports building and evaluating ASR systems for multilingual and code-switching scenarios.
- **Linguistic modeling:** Enables research on phonological, syntactic, and semantic variability in multilingual contexts.
- **TTS and speech synthesis:** Provides diverse phonetic data for training multilingual text-to-speech models.
- **Cross-linguistic NLP research:** Facilitates studies on L3 acquisition and cross-linguistic influence in multilinguals.
### Out-of-Scope Use
- **Privacy-violating applications:** The dataset is anonymized and must not be used for speaker identification or biometric analysis tasks.
- **Non-supported languages:** The dataset is tailored for Norwegian, English, and Polish only.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The recordings are systematically labeled using a structured format: **PROJECT_SPEAKER ID_LANGUAGE STATUS_TASK**.
Each component of the label provides specific details:
- **PROJECT:** The project under which the data was collected. Possible values:
- **A** for ADIM,
- **C** for CLIMAD.
- **SPEAKER ID:** A unique 8-character identifier assigned to each speaker.
- **LANGUAGE STATUS:** The language used in the recording and its status for the speaker; examples:
- **L1PL** (Polish as L1),
- **L2EN** (English as L2),
- **L3NO** (Norwegian as L3).
- **TASK:** The type of speech task recorded. Examples include:
- **WR** (word reading),
- **SR** (sentence reading),
- **TR** (text reading "The North Wind and the Sun"),
- **PD** (picture description),
- **ST** (story telling),
- **VT** (video story telling),
- **VD** (video description),
- **TP/TE** (translation from Polish/English into Norwegian).
If a task type was repeated, sequential numbers (e.g., SR1, SR2) are appended to distinguish iterations.
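As a sketch, the naming scheme above can be parsed mechanically; the keys in the returned dict are my own labels, not defined by the corpus:

```python
import re

LABEL_RE = re.compile(
    r"^(?P<project>[AC])_"                       # A = ADIM, C = CLIMAD
    r"(?P<speaker>\w{8})_"                       # 8-character speaker ID
    r"(?P<status>L(?:\d|n))(?P<lang>[A-Z]{2})_"  # e.g. L3NO -> status L3, language NO
    r"(?P<task>[A-Z]{2})(?P<iter>\d*)$"          # e.g. SR2 -> task SR, 2nd iteration
)

def parse_label(label: str) -> dict:
    """Split a PROJECT_SPEAKERID_LANGUAGESTATUS_TASK label into its parts."""
    m = LABEL_RE.match(label)
    if m is None:
        raise ValueError(f"not a valid LnNor recording label: {label!r}")
    parts = m.groupdict()
    parts["iter"] = int(parts["iter"]) if parts["iter"] else None
    return parts
```

For example, `parse_label("A_PL012345_L3NO_SR2")` (a made-up speaker ID) yields project `A`, speaker `PL012345`, Norwegian as L3, and the second sentence-reading iteration.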
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was developed to advance research in multilingualism and third language (L3) acquisition, with a specific focus on Norwegian, English, and Polish. Its primary aim is to enable studies on cross-linguistic influence, phonological, syntactic and semantic variability, and multilingual language processing. It supports the development of technologies such as multilingual ASR, TTS, and NLP systems.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
The dataset was collected as part of two research projects, CLIMAD (Cross-linguistic Influence in Multilingualism across Domains: Phonology and Syntax) and ADIM (Across-domain Investigations in Multilingualism: Modeling L3 Acquisition in Diverse Settings), which focused on cross-linguistic influence and L3 acquisition in multilingual settings. The dataset comprises recordings from 231 speakers across three languages: Norwegian, English, and Polish. Speakers include L1 Polish learners of Norwegian, L1 English and L1 Norwegian natives, and L2/L3/Ln speakers of English and Norwegian. Speech was elicited using a range of tasks such as word, sentence, and text readings, picture descriptions, video story retelling, and socio-phonetic interviews. Metadata is based on the Language History Questionnaire and includes age, gender, language proficiency, exposure, and other sociolinguistic factors.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Data were recorded between 2021 and 2024 using Shure SM-35 unidirectional cardioid microphones and Marantz PMD620 recorders, ensuring minimal noise interference. Recordings were captured at 48 kHz, 16-bit resolution [TO BE CONFIRMED]. Some of the recordings were annotated with orthographic and/or phonetic transcriptions and aligned at a word and phoneme level. Metadata includes speaker characteristics, language status (L1, L2, L3/Ln), task type, and audio details.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Source data producers include:
- Polish L1 speakers learning Norwegian as L3/Ln in formal and naturalistic contexts,
- native speakers of Norwegian and English as control groups,
- speakers of English and Norwegian as L2/L3/Ln with diverse L1 backgrounds.
### Annotations
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
The dataset includes the following types of annotations:
- Orthographic transcriptions (available for selected recordings)
- Phonetic transcriptions (available for selected recordings)
- Word-level alignments (available for selected recordings)
- Phoneme-level alignments (available for selected recordings)
- Speaker metadata (available for all recordings)
- speaker ID, age, gender, education, current residence, language proficiency (native and additional languages), language status (L1, L2, L3/Ln)
- Audio metadata (available for all recordings)
- recording ID, task type (e.g., word reading, sentence reading), sampling rate
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
The annotation process combined both automated and manual methods. It consisted of the following steps:
- Orthographic transcriptions: For Polish and English recordings, transcriptions were generated using an STT tool [NAME NEEDS TO BE ADDED] or created manually by linguists with a high level of proficiency in the respective languages. Norwegian transcriptions were entirely human-generated to ensure high accuracy.
- Phonetic transcriptions: Phonetic transcriptions were automatically generated using WebMAUS. The output was encoded in SAMPA (Speech Assessment Methods Phonetic Alphabet), ensuring consistency and compatibility with downstream processing.
- Alignments: Word- and phoneme-level alignments were created using WebMAUS, which produced TextGrids that aligned the transcriptions with corresponding audio files.
- Speaker metadata: The speaker metadata were collected before the recording sessions through the Linguistic History Questionnaire (LHQ) and supplementary forms provided to participants. These forms were designed to capture detailed linguistic and demographic information, ensuring a comprehensive profile of each speaker.
- Audio metadata: The audio metadata were automatically captured during the recording process by the equipment used for data collection and embedded into the corresponding audio files.
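WebMAUS delivers its word- and phoneme-level alignments as Praat TextGrids. As a rough sketch (assuming the standard long text format and no escaped quotes inside labels; this is not an official parser), the labelled intervals can be pulled out with a regular expression:

```python
import re

# One interval in Praat's long TextGrid format looks like:
#     xmin = 0
#     xmax = 0.4
#     text = "hello"
# File- and tier-level xmin/xmax pairs are not followed by a text line,
# so they do not match this pattern.
_INTERVAL = re.compile(
    r'xmin\s*=\s*([\d.]+)\s*'
    r'xmax\s*=\s*([\d.]+)\s*'
    r'text\s*=\s*"([^"]*)"'
)

def read_word_intervals(textgrid_text: str):
    """Return (start, end, label) triples for labelled intervals,
    skipping empty (pause) intervals."""
    return [
        (float(start), float(end), label)
        for start, end, label in _INTERVAL.findall(textgrid_text)
        if label.strip()
    ]
```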
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
The annotations were created under the supervision of a team of linguists and language experts from the Faculty of English at Adam Mickiewicz University in Poznań, Wrocław University of Science and Technology, and the University of Szczecin, all of whom were members of the CLIMAD and ADIM projects. The annotators had extensive experience in transcription, phonetic analysis, and linguistic research in Polish, English, and Norwegian. Their role in the annotation process included:
- providing expertise in phonetic analysis and transcription techniques,
- supervising the use of automated tools such as WebMAUS for phonetic transcriptions and alignments,
- generating transcriptions for recordings that featured languages with limited support in STT tools (i.e., Norwegian) or contained challenging audio (overlapping speech or atypical pronunciations that required careful transcription),
- validating a subset of annotations to ensure high-quality outputs for critical data points.
While the majority of annotations were generated using automated tools, the annotators’ oversight ensured consistency and accuracy across the dataset.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
[More Information Needed]
## Dataset Card Authors
Agnieszka Pludra
Izabela Krysińska
Piotr Kabaciński
## Dataset Card Contact
[email protected]
[email protected]
[email protected] |
giannhskp/medline_ru_en_backtranslation_filtered_60.0_wmt22-cometkiwi-da | giannhskp | "2025-04-22T11:44:46Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:44:40Z" | ---
dataset_info:
features:
- name: ru
dtype: string
- name: en
dtype: string
- name: comet_score
dtype: float64
splits:
- name: train
num_bytes: 7508052.681046153
num_examples: 15703
download_size: 4899600
dataset_size: 7508052.681046153
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Kyleyee/train_data_HH_sft_CompletionOnly | Kyleyee | "2025-04-22T11:44:43Z" | 13 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"trl"
] | [] | "2025-04-22T03:10:25Z" | ---
tags:
- trl
---
# HH-RLHF-Helpful-Base SFT Dataset
This dataset duplicates each sample into two: `chosen` and `rejected` become separate examples under the `output` column, and `prompt` is renamed to `instruction`.
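That mapping can be sketched as follows (a minimal stdlib-only illustration; the sample record is invented, and the column names match the description above):

```python
def split_preference_pair(example):
    """Turn one preference record into two SFT records:
    `chosen` and `rejected` each become an `output`,
    and `prompt` is renamed to `instruction`."""
    return [
        {"instruction": example["prompt"], "output": example["chosen"]},
        {"instruction": example["prompt"], "output": example["rejected"]},
    ]

# Toy record in the HH-RLHF-Helpful-Base layout (invented values)
pair = {
    "prompt": "How do I boil an egg?",
    "chosen": "Place the egg in boiling water for 7-10 minutes.",
    "rejected": "I don't know.",
}

sft_examples = split_preference_pair(pair)
print(len(sft_examples))  # 2
```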
|
MikeGreen2710/first_100k_agriculture_forestry | MikeGreen2710 | "2025-04-22T11:44:30Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:44:26Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: agriculture_forestry
dtype: int64
splits:
- name: train
num_bytes: 129600000
num_examples: 100000
download_size: 18341525
dataset_size: 129600000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
agents-course/certificates | agents-course | "2025-04-22T11:43:52Z" | 30,176 | 43 | [
"license:apache-2.0",
"modality:image",
"region:us"
] | [] | "2025-02-06T08:17:59Z" | ---
license: apache-2.0
---
|
efwkjn/dataset | efwkjn | "2025-04-22T11:43:26Z" | 3,459 | 0 | [
"region:us"
] | [] | "2025-04-12T22:57:56Z" | ---
viewer: false
---
Processed Whisper training data. Final-pass data mix. |
LLM-EDA/qwen_7B_pairs.json | LLM-EDA | "2025-04-22T11:43:26Z" | 19 | 0 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"question-answering"
] | "2025-04-21T12:04:42Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- code
size_categories:
- 1K<n<10K
---
An example preference-pairs dataset for DPO. This dataset was generated by prompting fine-tuned qwen_7B. Check https://github.com/CatIIIIIIII/VeriPrefer for usage. |
tomap1410/TrivialIndicator | tomap1410 | "2025-04-22T11:43:25Z" | 118 | 0 | [
"region:us"
] | [] | "2025-04-21T19:05:14Z" | ---
dataset_info:
features:
- name: task
dtype: string
- name: goals
dtype: int64
- name: description
dtype: string
- name: complete
dtype: string
- name: store_place
dtype: string
- name: email_working
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 124
num_examples: 1
download_size: 3169
dataset_size: 124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
trnguyenai01/TrivialIndicator | trnguyenai01 | "2025-04-22T11:43:20Z" | 112 | 0 | [
"region:us"
] | [] | "2025-04-21T19:05:10Z" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 17044001
num_examples: 6850
download_size: 6773709
dataset_size: 17044001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TeoGchx/beat_with_latents | TeoGchx | "2025-04-22T11:43:03Z" | 69 | 0 | [
"region:us"
] | [] | "2025-04-22T08:14:43Z" | ---
dataset_info:
features:
- name: motion_tokens
sequence:
sequence:
sequence: int64
- name: speech_tokens
sequence:
sequence:
sequence: int64
- name: motion_latents
sequence:
sequence:
sequence: float64
- name: speech_latents
sequence:
sequence:
sequence: float64
- name: beat_motion
struct:
- name: betas
sequence:
sequence: float64
- name: expressions
sequence:
sequence: float64
- name: gender
dtype: string
- name: mocap_frame_rate
dtype: int64
- name: model
dtype: string
- name: poses
sequence:
sequence: float64
- name: trans
sequence:
sequence: float64
splits:
- name: val
num_bytes: 5548424308
num_examples: 106
- name: train.1
num_bytes: 8123645036
num_examples: 182
- name: train.2
num_bytes: 7693824380
num_examples: 182
- name: train.3
num_bytes: 6905286808
num_examples: 182
- name: train.4
num_bytes: 9567519408
num_examples: 182
- name: train.5
num_bytes: 8748141744
num_examples: 182
download_size: 35606709754
dataset_size: 46586841684
configs:
- config_name: default
data_files:
- split: val
path: data/val-*
- split: train.1
path: data/train.1-*
- split: train.2
path: data/train.2-*
- split: train.3
path: data/train.3-*
- split: train.4
path: data/train.4-*
- split: train.5
path: data/train.5-*
---
|
LLM-EDA/pyra_medium | LLM-EDA | "2025-04-22T11:42:57Z" | 23 | 0 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"question-answering"
] | "2025-04-21T11:31:11Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- code
size_categories:
- 1K<n<10K
---
Filtered dataset of https://huggingface.co/datasets/LLM-EDA/pyra for RL. Keep only code with more than 50 lines. Check https://github.com/CatIIIIIIII/VeriPrefer for usage. |
davnas/occupancy_perc | davnas | "2025-04-22T11:42:31Z" | 2,714 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2024-12-21T17:11:13Z" | ---
dataset_info:
features:
- name: index
dtype: string
- name: KTH Library
dtype: int64
- name: South-East Gallery
dtype: int64
- name: North Gallery
dtype: int64
- name: South Gallery
dtype: int64
- name: Ångdomen
dtype: int64
- name: Newton
dtype: int64
splits:
- name: train
num_bytes: 3910183
num_examples: 55073
download_size: 670449
dataset_size: 3910183
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/auto-rm-k10000-lr0.001-epochs1-er1-0 | kothasuhas | "2025-04-22T11:42:26Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:42:10Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 34046295
num_examples: 10000
- name: validation
num_bytes: 8574979
num_examples: 1000
download_size: 26466555
dataset_size: 42621274
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
LLM-EDA/pyra_tb | LLM-EDA | "2025-04-22T11:42:16Z" | 17 | 0 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"question-answering"
] | "2025-04-21T12:15:41Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- code
size_categories:
- 1K<n<10K
---
This is the corresponding testbench data for pyra_medium (https://huggingface.co/datasets/LLM-EDA/pyra_medium). Check https://github.com/CatIIIIIIII/VeriPrefer for usage. |
RyanYr/brm-dapo-qwen2.5math-7B-base-lr5e-7-beta0.01_matheval | RyanYr | "2025-04-22T11:41:48Z" | 295 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2025-04-08T15:50:27Z" | ---
dataset_info:
features:
- name: data_source
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: responses
sequence: string
- name: gt_ans
dtype: string
- name: extracted_solution
sequence: string
- name: rm_scores
sequence: bool
- name: avg_accuracy
dtype: float64
- name: pass_accuracy
dtype: bool
- name: cons_accuracy
dtype: float64
splits:
- name: train
num_bytes: 3103830
num_examples: 30
- name: '40'
num_bytes: 3769764
num_examples: 30
- name: '2480'
num_bytes: 3267455
num_examples: 30
- name: '2440'
num_bytes: 3168555
num_examples: 30
- name: '2400'
num_bytes: 3161284
num_examples: 30
- name: '2360'
num_bytes: 3189203
num_examples: 30
- name: '2320'
num_bytes: 3205885
num_examples: 30
- name: '2280'
num_bytes: 3257951
num_examples: 30
- name: '2240'
num_bytes: 3291126
num_examples: 30
- name: '2200'
num_bytes: 3213537
num_examples: 30
- name: '2160'
num_bytes: 3109956
num_examples: 30
- name: '2120'
num_bytes: 3124236
num_examples: 30
- name: '2080'
num_bytes: 3177284
num_examples: 30
- name: '2040'
num_bytes: 3278167
num_examples: 30
- name: '2000'
num_bytes: 3236770
num_examples: 30
- name: '1960'
num_bytes: 3239933
num_examples: 30
- name: '1920'
num_bytes: 3290885
num_examples: 30
- name: '1880'
num_bytes: 3312243
num_examples: 30
- name: '1840'
num_bytes: 3237138
num_examples: 30
- name: '1800'
num_bytes: 3173552
num_examples: 30
- name: '1760'
num_bytes: 3333255
num_examples: 30
- name: '1720'
num_bytes: 3301038
num_examples: 30
- name: '1680'
num_bytes: 3236810
num_examples: 30
- name: '1640'
num_bytes: 3277238
num_examples: 30
- name: '1620'
num_bytes: 3315933
num_examples: 30
- name: '1600'
num_bytes: 3339073
num_examples: 30
- name: '1560'
num_bytes: 3366952
num_examples: 30
- name: '1520'
num_bytes: 3184370
num_examples: 30
- name: '1480'
num_bytes: 3307446
num_examples: 30
- name: '1440'
num_bytes: 3274455
num_examples: 30
- name: '1400'
num_bytes: 3297891
num_examples: 30
- name: '1360'
num_bytes: 3268157
num_examples: 30
- name: '1320'
num_bytes: 3253084
num_examples: 30
- name: '1280'
num_bytes: 3215998
num_examples: 30
- name: '1240'
num_bytes: 3337983
num_examples: 30
- name: '1200'
num_bytes: 3226344
num_examples: 30
- name: '1160'
num_bytes: 3254055
num_examples: 30
- name: '1120'
num_bytes: 3366505
num_examples: 30
- name: '1080'
num_bytes: 3357140
num_examples: 30
- name: '1040'
num_bytes: 3344619
num_examples: 30
- name: '1000'
num_bytes: 3251026
num_examples: 30
- name: '960'
num_bytes: 3314508
num_examples: 30
- name: '920'
num_bytes: 3288608
num_examples: 30
- name: '880'
num_bytes: 3350946
num_examples: 30
- name: '840'
num_bytes: 3225488
num_examples: 30
- name: '800'
num_bytes: 3403626
num_examples: 30
- name: '760'
num_bytes: 3435757
num_examples: 30
download_size: 66320030
dataset_size: 153937059
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: '1680'
path: data/1680-*
- split: '1640'
path: data/1640-*
- split: '1620'
path: data/1620-*
- split: '1600'
path: data/1600-*
- split: '1560'
path: data/1560-*
- split: '1520'
path: data/1520-*
- split: '40'
path: data/40-*
- split: '1720'
path: data/1720-*
- split: '1840'
path: data/1840-*
- split: '1800'
path: data/1800-*
- split: '1760'
path: data/1760-*
- split: '1920'
path: data/1920-*
- split: '1880'
path: data/1880-*
- split: '1960'
path: data/1960-*
- split: '2000'
path: data/2000-*
- split: '2040'
path: data/2040-*
- split: '2080'
path: data/2080-*
- split: '2120'
path: data/2120-*
- split: '2200'
path: data/2200-*
- split: '2160'
path: data/2160-*
- split: '2280'
path: data/2280-*
- split: '2240'
path: data/2240-*
- split: '2320'
path: data/2320-*
- split: '2360'
path: data/2360-*
- split: '2400'
path: data/2400-*
- split: '2440'
path: data/2440-*
- split: '2480'
path: data/2480-*
- split: '1480'
path: data/1480-*
- split: '1440'
path: data/1440-*
- split: '1400'
path: data/1400-*
- split: '1360'
path: data/1360-*
- split: '1320'
path: data/1320-*
- split: '1280'
path: data/1280-*
- split: '1240'
path: data/1240-*
- split: '1200'
path: data/1200-*
- split: '1160'
path: data/1160-*
- split: '1120'
path: data/1120-*
- split: '1080'
path: data/1080-*
- split: '1040'
path: data/1040-*
- split: '1000'
path: data/1000-*
- split: '960'
path: data/960-*
- split: '920'
path: data/920-*
- split: '880'
path: data/880-*
- split: '840'
path: data/840-*
- split: '800'
path: data/800-*
- split: '760'
path: data/760-*
---
|
KakologArchives/KakologArchives | KakologArchives | "2025-04-22T11:41:45Z" | 5,599,728 | 15 | [
"task_categories:text-classification",
"language:ja",
"license:mit",
"region:us"
] | [
"text-classification"
] | "2023-05-12T13:31:56Z" | ---
pretty_name: ニコニコ実況 過去ログアーカイブ
license: mit
language:
- ja
task_categories:
- text-classification
---
# Niconico Jikkyo Kakolog Archive

The Niconico Jikkyo Kakolog Archive is a dataset collecting all past-log comments from [Niconico Jikkyo](https://jk.nicovideo.jp), from the start of the service up to the present.

Back in December 2020, Niconico Jikkyo was [relaunched as an official channel within Niconico Live](https://blog.nicovideo.jp/niconews/143148.html).
With this change, the old system, in operation since November 2009, was shut down (effectively ending the service). While support on consumer devices such as torne and BRAVIA ended across the board, roughly 11 years of past logs, full of live reactions from the time, were about to be lost along with it.

Members of 5ch's DTV board therefore launched a project to archive 11 years of past logs for all channels before the old Niconico Jikkyo shut down. After some twists and turns, Nekopanda managed to perfectly capture about 11 years of past logs for every channel, including radio and BS broadcasts, and the loss of 11 years of logs into the digital void was averted.

However, since the old API was retired, past logs can no longer be fetched via an API, and because the archive totals about 150 GB, finding the range of logs you want is no longer as easy as it once was.

Meanwhile, in the new Niconico Jikkyo, which now runs as an official channel within Niconico Live, timeshifts (the equivalent of past logs in the old Niconico Jikkyo) can only be watched for up to three weeks, after which the past logs become unavailable.
Free members also have to reserve timeshifts in advance, so the old convenience has been lost.

We believe that the comments posted to Niconico Jikkyo about Japanese TV broadcasts are historically valuable material that succinctly reflects the public mood and spirit of their time.

To preserve all Niconico Jikkyo past logs for posterity, this dataset contains all old Niconico Jikkyo logs up to 2020-12-15 as distributed by Nekopanda, plus logs from the new Niconico Jikkyo (including community live-commentary programs), and, since 2024-06-10, the current day's logs from [NX-Jikkyo](https://nx-jikkyo.tsukumijima.net/), the alternative comment server for live commentary, collected every five minutes and merged continuously.

There is also an [API](https://jikkyo.tsukumijima.net/) for fetching past logs easily.
Feel free to use it as well.
## Dataset Structure
### Builder Config
| Key | Value Type | Default Value | Description |
| --------------- | ---------- | ------------- | ----------- |
| channel_id | string | None | ID of the Niconico Jikkyo channel whose past logs to fetch (all channels if omitted) |
| year | int | None | Year of the past logs to fetch (all years if omitted) |
| number_of_files | int | None | Number of past-log files to fetch (all files if omitted) |
### Data Splits
| Split | Approximate Size | Description |
| ------- | ---------------- | ----------- |
| sample | 1GB | As a sample, fetches all past-log comments posted to TOKYO MX (ID: jk9) during 2022. About 1 GB in size. |
| all | 190GB | Fetches all past-log comments for all channels and all periods. Note that this is over 190 GB. |
### Data Fields
| Field | Type | Description |
| --------------- | -------- | ----------- |
| thread | string | Thread ID of the comment |
| no | int64 | Comment number |
| vpos | int64 | Playback position of the comment, counted from the thread ID (in 1/100 s) |
| date | int64 | UNIX timestamp of the comment's post time |
| date_usec | int64 | Fractional (sub-second) part of the comment's post time |
| user_id | string | User ID (anonymized when the 184 command is specified, and shuffled after about a week) |
| mail | string | Comment commands (e.g. 184, red naka big; may be omitted) |
| premium | boolean | True if the commenting user is a premium member |
| anonymity | boolean | True if the comment is anonymous |
| content | string | Comment body (note that multi-line comments, such as ASCII art, occasionally appear) |
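For instance, `date` and `date_usec` combine into a full timestamp (a small stdlib-only sketch; it assumes `date_usec` is in microseconds, as the name suggests, and the sample values are invented):

```python
from datetime import datetime, timezone, timedelta

JST = timezone(timedelta(hours=9))  # Japan Standard Time

def comment_time(date: int, date_usec: int) -> datetime:
    """Combine the `date` UNIX timestamp and its sub-second part
    `date_usec` (assumed microseconds) into a JST datetime."""
    return datetime.fromtimestamp(date + date_usec / 1_000_000, tz=JST)

# 2022-01-01T00:00:00Z plus 0.5 s, rendered in JST
print(comment_time(1640995200, 500000).isoformat())  # 2022-01-01T09:00:00.500000+09:00
```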
## Example
```python
from datasets import load_dataset
dataset = load_dataset('KakologArchives/KakologArchives', 'all', channel_id='jk211', year=2023, number_of_files=10)
for data in dataset['train']:
print(data)
```
## Licensing Information
[MIT License](https://opensource.org/license/mit/)
|
LLM-EDA/pyra | LLM-EDA | "2025-04-22T11:41:44Z" | 18 | 0 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"question-answering"
] | "2025-04-21T11:23:36Z" | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
---
Filtered dataset sourced from https://huggingface.co/datasets/bnadimi/PyraNet-Verilog for SFT. Keep only high-quality data. Check https://github.com/CatIIIIIIII/VeriPrefer for usage. |
gmongaras/CC12M_and_Imagenet21K_Recap_Highqual_512 | gmongaras | "2025-04-22T11:41:44Z" | 202 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2025-04-21T13:33:57Z" | ---
dataset_info:
features:
- name: image
dtype: binary
- name: class
dtype: string
- name: id
dtype: string
- name: recaption
dtype: string
- name: recaption_short
dtype: string
- name: height
dtype: int64
- name: width
dtype: int64
- name: aspect_ratio
dtype: float64
- name: bucket_size
dtype: string
splits:
- name: train
num_bytes: 11190480250
num_examples: 42444
download_size: 11179133656
dataset_size: 11190480250
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hf-doc-build/doc-build-dev | hf-doc-build | "2025-04-22T11:41:26Z" | 124,970 | 4 | [
"license:mit",
"region:us",
"documentation"
] | [] | "2022-11-08T09:03:37Z" | ---
license: mit
tags:
- documentation
pretty_name: HF Documentation (PRs)
viewer: false
---
This is a dataset which contains the docs from all the PRs that update one of the docs from https://huggingface.co/docs.
It is automatically updated by this [github action](https://github.com/huggingface/doc-builder/blob/main/.github/workflows/build_pr_documentation.yml) from the [doc-builder](https://github.com/huggingface/doc-builder) repo. |
intelsense/openhermes-en2bn-messages-2 | intelsense | "2025-04-22T11:40:51Z" | 4,152 | 0 | [
"region:us"
] | [] | "2025-04-09T15:15:41Z" | ---
dataset_info:
features:
- name: custom_instruction
dtype: 'null'
- name: topic
dtype: 'null'
- name: model_name
dtype: 'null'
- name: model
dtype: 'null'
- name: skip_prompt_formatting
dtype: bool
- name: category
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: 'null'
- name: views
dtype: 'null'
- name: language
dtype: 'null'
- name: id
dtype: string
- name: title
dtype: 'null'
- name: idx
dtype: 'null'
- name: hash
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: system_prompt
dtype: 'null'
- name: source
dtype: string
- name: system_message
dtype: string
- name: human_message
dtype: string
- name: gpt_message
dtype: string
- name: system_message_bn
dtype: string
- name: human_message_bn
dtype: string
- name: gpt_message_bn
dtype: string
splits:
- name: train
num_bytes: 824145818
num_examples: 112200
download_size: 296351792
dataset_size: 824145818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tomap1410/RiskIndicator | tomap1410 | "2025-04-22T11:40:50Z" | 111 | 0 | [
"region:us"
] | [] | "2025-04-21T19:17:00Z" | ---
dataset_info:
features:
- name: task
dtype: string
- name: goals
dtype: int64
- name: description
dtype: string
- name: complete
dtype: string
- name: store_place
dtype: string
- name: email_working
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 122
num_examples: 1
download_size: 3159
dataset_size: 122
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
beyzaatay/saglikVeriseti | beyzaatay | "2025-04-22T11:40:45Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:26:30Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: response
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 36596114.409395054
num_examples: 59316
- name: test
num_bytes: 4066440.590604943
num_examples: 6591
download_size: 24447889
dataset_size: 40662555.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
trnguyenai01/RiskIndicator | trnguyenai01 | "2025-04-22T11:40:35Z" | 111 | 0 | [
"region:us"
] | [] | "2025-04-21T19:16:55Z" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 18008418
num_examples: 6550
download_size: 7149123
dataset_size: 18008418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
opentargets/ot-release-metrics | opentargets | "2025-04-22T11:40:28Z" | 248 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | "2024-08-15T14:19:23Z" | ---
license: apache-2.0
dataset_info:
features:
- name: value
dtype: float64
- name: datasourceId
dtype: string
- name: variable
dtype: string
- name: field
dtype: string
- name: runId
dtype: string
splits:
- name: train
num_bytes: 673444
num_examples: 6383
download_size: 37395
dataset_size: 673444
configs:
- config_name: default
data_files:
- split: train
path: metrics/train-*
---
Repository for the Open Targets Platform release metrics.
Each file is indexed by a `runId`. The format of this field depends on the type of the run:
| Run type | OT_RELEASE format | Example metrics output name |
|-------------------|-------------------|-----------------------------|
| Pre-ETL | YY.MM_pre | 23.12_pre |
| Post-ETL, regular | YY.MM | 23.12_2023-10-31 |
| Post-ETL, PPP | partners/YY.MM | 23.12_ppp_2023-11-24 |
👉 We have built a dashboard to explore these metrics and help in the QC of a new Open Targets release: https://open-targets-metrics.streamlit.app/
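As an illustration, the three run types can be told apart from a metrics output name alone (the regular expressions below are inferred from the table above and are an assumption, not part of the official tooling):

```python
import re

# Patterns inferred from the "Example metrics output name" column (assumed, not official)
PRE_ETL = re.compile(r"^\d{2}\.\d{2}_pre$")
POST_ETL_PPP = re.compile(r"^\d{2}\.\d{2}_ppp_\d{4}-\d{2}-\d{2}$")
POST_ETL_REGULAR = re.compile(r"^\d{2}\.\d{2}_\d{4}-\d{2}-\d{2}$")

def run_type(run_id: str) -> str:
    """Classify a runId by its format; PPP is checked before regular
    because both end in a YYYY-MM-DD date."""
    if PRE_ETL.match(run_id):
        return "pre-ETL"
    if POST_ETL_PPP.match(run_id):
        return "post-ETL, PPP"
    if POST_ETL_REGULAR.match(run_id):
        return "post-ETL, regular"
    return "unknown"

print(run_type("23.12_pre"))             # pre-ETL
print(run_type("23.12_2023-10-31"))      # post-ETL, regular
print(run_type("23.12_ppp_2023-11-24"))  # post-ETL, PPP
```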
|
tlf123/act_test2 | tlf123 | "2025-04-22T11:40:23Z" | 20 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | "2025-04-22T01:05:27Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1107,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
hf-doc-build/doc-build | hf-doc-build | "2025-04-22T11:39:48Z" | 321,207 | 9 | [
"license:mit",
"region:us"
] | [] | "2022-10-24T15:39:05Z" | ---
license: mit
pretty_name: Generated Docs for HF
viewer: false
---
This repo contains all the docs published on https://huggingface.co/docs.
The docs are generated with https://github.com/huggingface/doc-builder.
<!-- comment to trigger webhook.= --> |
intelsense/dolphin-flan5m-en2bn | intelsense | "2025-04-22T11:39:16Z" | 3,014 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2025-04-05T12:07:32Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction_bn
dtype: string
- name: input_bn
dtype: string
- name: output_bn
dtype: string
splits:
- name: train
num_bytes: 112402002
num_examples: 20880
download_size: 47696338
dataset_size: 112402002
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andrewzamai/CNADFTD-ADNI2NIFD-AN-fold-523-subtypes-betweenT-2.25p-1804-NoLR-testset-T3RepXMRI-DNT | andrewzamai | "2025-04-22T11:39:12Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:39:09Z" | ---
dataset_info:
features:
- name: subject
dtype: string
- name: txt_report
dtype: string
- name: gold_diagnosis
dtype: string
splits:
- name: test
num_bytes: 964942
num_examples: 549
download_size: 155471
dataset_size: 964942
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
nit1607/tech_full_article_and_summary | nit1607 | "2025-04-22T11:38:51Z" | 178 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2025-03-25T12:19:50Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: question_id
dtype: int64
- name: base_question_with_prefix
dtype: string
- name: base_question
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: SourceSection
dtype: string
- name: TargetSection
dtype: string
splits:
- name: train
num_bytes: 12379287
num_examples: 19021
download_size: 2077278
dataset_size: 12379287
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
IcarusWizard/AIME-NoB | IcarusWizard | "2025-04-22T11:37:29Z" | 51 | 0 | [
"task_categories:reinforcement-learning",
"language:en",
"license:mit",
"arxiv:2404.18896",
"region:us"
] | [
"reinforcement-learning"
] | "2025-04-10T14:31:14Z" | ---
license: mit
task_categories:
- reinforcement-learning
language:
- en
---
# Data Card
## Motivation
> **For what purpose was the dataset created?**
The dataset was created for the experiment section of our paper "Overcoming Knowledge Barriers: Online Imitation Learning from Observation with Pretrained World Models" to pretrain world models and do imitation learning. We release the datasets for the community as a common test bench for similar problems.
Code available at https://github.com/IcarusWizard/AIME-NoB.
> **Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**
The dataset is collected by [Xingyuan Zhang](https://icaruswizard.github.io/) during his Ph.D. at Machine Learning Research Lab at Volkswagen AG.
## Uses
> **Has the dataset been used for any tasks already?**
Yes, the dataset has been used in our AIME-NoB paper for pretraining world models and imitation learning from observation.
> **Is there a repository that links to any or all papers or systems that use the dataset?**
No.
> **What (other) tasks could the dataset be used for?**
The datasets can also be used for offline reinforcement learning.
> **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?**
No. Everything from the simulator is recorded in the dataset.
> **Are there tasks for which the dataset should not be used?**
Not at the moment.
## Data description
> **What data does each instance consist of? What is the format of it?**
Every dataset consists of a certain number of trajectories, and each trajectory is stored as a separate `.hdf5` file.
The `.hdf5` file can be loaded with `h5py.File`, which gives you a dictionary-like structure with each entry as a `np.ndarray`.
The dictionary contains both the proprioceptions and the images for each time step.
Note: the key `pre_action` holds the action taken by the agent one time step earlier, which leads to the current observation; hence `pre_action` at the first time step is all zeros.
> **Are there recommended data splits (e.g., training, development/validation, testing)?**
Each dataset is self-contained; we don't have recommended data splits inside it.
> **Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?**
Yes, the datasets are self-contained.
> **Is there any example code for loading the dataset?**
```python
import os
from aime_nob.data import SequenceDataset
from aime_nob.utils import DATA_PATH
dataset_name = 'walker-plan2explore-buffer'
dataset = SequenceDataset(os.path.join(DATA_PATH, dataset_name), horizon=50, overlap=True)
```
## Data Creation
The buffer datasets for DMC are collected by running the plan2explore algorithm on each environment with the visual setup for 2000 trajectories and taking the replay buffer. The resulting dataset has 2005 trajectories in total due to the initial warmup with 5 random trajectories. For example, you can collect the `walker-plan2explore-buffer` dataset with `python train_scripts/train_plan2explore.py env=walker environment_setup=visual`.
The MetaWorld expert datasets are collected by rolling out the trained policies from [tdmpc2](https://www.tdmpc2.com/models) for 50 trajectories.
## Distribution
> **How will the dataset will be distributed (e.g., tarball on website, API, GitHub)?**
The datasets will be hosted on the [Hugging Face Hub](https://huggingface.co/datasets/IcarusWizard/AIME-NoB).
> **When will the dataset be distributed?**
May 2024.
> **Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?**
CC BY 4.0.
> **Have any third parties imposed IP-based or other restrictions on the data associated with the instances?**
No.
## Maintenance
> **Who will be supporting/hosting/maintaining the dataset?**
Xingyuan Zhang will maintain this dataset. You can contact him with [email protected].
> **Will there be an erratum? If yes, how should people get access to that?**
There won't.
> **Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?**
Not planned, but we may act on requests from the community.
> **Will older versions of the dataset continue to be supported/hosted/maintained?**
Yes.
> **If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?**
The dataset is free to use; people can build their own work on it and release it themselves.
## Additional Information
### Version
Version 1.0, the initial release.
### Dataset Curators
The dataset is collected by [Xingyuan Zhang](https://icaruswizard.github.io/) during his Ph.D. at Machine Learning Research Lab at Volkswagen AG.
### Licensing Information
© 2024. This work is licensed under a [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
If you find the datasets useful, please cite our paper.
```BibTeX
@misc{zhang2024overcoming,
title={Overcoming Knowledge Barriers: Online Imitation Learning from Observation with Pretrained World Models},
author={Xingyuan Zhang and Philip Becker-Ehmck and Patrick van der Smagt and Maximilian Karl},
year={2024},
eprint={2404.18896},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
hettc/polkadot-elections | hettc | "2025-04-22T11:37:01Z" | 57 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | "2025-04-20T10:19:57Z" | ---
license: apache-2.0
---
|
konwoo/auto-rm-erall-k100000-lr1e-5-epochs1-er1-0 | konwoo | "2025-04-22T11:36:42Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:35:57Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 333744350
num_examples: 100000
- name: validation
num_bytes: 8574979
num_examples: 1000
download_size: 215651483
dataset_size: 342319329
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
tbetton/yourbench_example | tbetton | "2025-04-22T11:36:03Z" | 23 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2025-04-14T09:24:18Z" | ---
dataset_info:
- config_name: chunked
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
- name: chunks
list:
- name: chunk_id
dtype: string
- name: chunk_text
dtype: string
- name: multihop_chunks
list:
- name: chunk_ids
sequence: string
- name: chunks_text
sequence: string
- name: chunk_info_metrics
list:
- name: avg_token_length
dtype: float64
- name: bigram_diversity
dtype: float64
- name: flesch_reading_ease
dtype: float64
- name: gunning_fog
dtype: float64
- name: perplexity
dtype: float64
- name: token_count
dtype: float64
- name: unique_token_ratio
dtype: float64
- name: chunking_model
dtype: string
splits:
- name: train
num_bytes: 1523472
num_examples: 5
download_size: 664856
dataset_size: 1523472
- config_name: ingested
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
splits:
- name: train
num_bytes: 414378
num_examples: 5
download_size: 222360
dataset_size: 414378
- config_name: lighteval
features:
- name: question
dtype: string
- name: ground_truth_answer
dtype: string
- name: question_category
dtype: string
- name: kind
dtype: string
- name: estimated_difficulty
dtype: int64
- name: citations
sequence: string
- name: document_id
dtype: string
- name: chunk_ids
sequence: string
- name: question_generating_model
dtype: string
- name: chunks
sequence: string
- name: document
dtype: string
splits:
- name: train
num_bytes: 1082888
num_examples: 22
download_size: 252805
dataset_size: 1082888
- config_name: multi_hop_questions
features:
- name: document_id
dtype: string
- name: source_chunk_ids
sequence: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: citations
sequence: string
- name: raw_response
dtype: string
splits:
- name: train
num_bytes: 37805
num_examples: 8
download_size: 20859
dataset_size: 37805
- config_name: single_shot_questions
features:
- name: chunk_id
dtype: string
- name: document_id
dtype: string
- name: question
dtype: string
- name: self_answer
dtype: string
- name: estimated_difficulty
dtype: int64
- name: self_assessed_question_type
dtype: string
- name: generating_model
dtype: string
- name: thought_process
dtype: string
- name: raw_response
dtype: string
- name: citations
sequence: string
splits:
- name: train
num_bytes: 76814
num_examples: 14
download_size: 39553
dataset_size: 76814
- config_name: summarized
features:
- name: document_id
dtype: string
- name: document_text
dtype: string
- name: document_filename
dtype: string
- name: document_metadata
struct:
- name: file_size
dtype: int64
- name: raw_document_summary
dtype: string
- name: document_summary
dtype: string
- name: summarization_model
dtype: string
splits:
- name: train
num_bytes: 427204
num_examples: 5
download_size: 241001
dataset_size: 427204
configs:
- config_name: chunked
data_files:
- split: train
path: chunked/train-*
- config_name: ingested
data_files:
- split: train
path: ingested/train-*
- config_name: lighteval
data_files:
- split: train
path: lighteval/train-*
- config_name: multi_hop_questions
data_files:
- split: train
path: multi_hop_questions/train-*
- config_name: single_shot_questions
data_files:
- split: train
path: single_shot_questions/train-*
- config_name: summarized
data_files:
- split: train
path: summarized/train-*
---
|
kothasuhas/auto-rm-k10000-lr0.01-epochs1-er1-0 | kothasuhas | "2025-04-22T11:35:40Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:35:36Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 34046295
num_examples: 10000
- name: validation
num_bytes: 8574979
num_examples: 1000
download_size: 26471527
dataset_size: 42621274
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
davnas/real-time-library-occupancy | davnas | "2025-04-22T11:35:31Z" | 4,140 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2024-12-09T23:45:08Z" | ---
dataset_info:
features:
- name: time
dtype: string
- name: KTH Library
dtype: int64
- name: South-East Gallery
dtype: int64
- name: North Gallery
dtype: int64
- name: South Gallery
dtype: int64
- name: Ångdomen
dtype: int64
- name: Newton
dtype: int64
splits:
- name: train
num_bytes: 3932974
num_examples: 55394
download_size: 656772
dataset_size: 3932974
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
prithivMLmods/IndoorOutdoorNet-20K | prithivMLmods | "2025-04-22T11:35:03Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | "2025-04-22T04:02:16Z" | ---
license: apache-2.0
---
|
latentcanon/HistVis | latentcanon | "2025-04-22T11:35:02Z" | 421 | 0 | [
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"modality:image",
"doi:10.57967/hf/5066",
"region:us"
] | [] | "2025-03-31T14:56:06Z" | ---
license: cc-by-nc-4.0
language:
- en
size_categories:
- 10K<n<100K
---
# 📚 HistVis Dataset
**HistVis** is a dataset designed to evaluate how text-to-image models represent cultural and historical variations in human activities. It contains images generated by multiple models across temporal prompts and activity categories.
## 📋 Dataset Structure
The main metadata is stored in [`dataset.csv`](./dataset.csv), with one row per image. Below is a description of each column:
| Column | Description |
|--------|-------------|
| `image_path` | **Relative path** to the image file within the repository. These correspond to generations by different models under specific historical prompts. |
| `model` | The name of the text-to-image model used to generate the image (e.g., `Flux_Schnell`, `SD_3`, `SD_XL`). |
| `historical_period` | The historical era or century the prompt refers to (e.g., `19th_century`, `1920s`). This is the temporal condition imposed in the prompt. |
| `universal_human_activity` | The prompt used to describe the universal human activity, such as "a person listening to music" or "a person laughing with a friend". |
| `category` | The broader conceptual category of the human activity (e.g., `Health and Well-being`, `Art`, `Music`). This groups related prompts under common cultural dimensions. |
## 🧾 Prompt Format
Each image in the dataset was generated using the following prompt template:
> **"a [universal_human_activity] in the [historical_period]"**
For example:
- "a person listening to music in the 1960s"
- "a person laughing with a friend in the 19th century"
## 💻 Using the Dataset
You can access the HistVis dataset using the Hugging Face Datasets library. Below are examples showing how to load and explore the dataset.
### Basic Usage
```python
from datasets import load_dataset
import pandas as pd
# Load the dataset metadata (CSV only)
dataset = load_dataset('csv', data_files='https://huggingface.co/datasets/latentcanon/HistVis/resolve/main/dataset.csv')
# Set pandas to display full content without truncation
pd.set_option('display.max_colwidth', None)
# Report the dataset size
print(f"Dataset contains {len(dataset['train'])} entries")
# To see all entries
df = pd.DataFrame(dataset['train'])
print(df)
# Or access any specific entry
print(f"Entry #42: {dataset['train'][42]}") |
thexForce/grounding_ontology | thexForce | "2025-04-22T11:34:46Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:25:11Z" | ---
dataset_info:
features:
- name: ontology_4bad7
dtype: string
splits:
- name: train
num_bytes: 1336
num_examples: 1
download_size: 7055
dataset_size: 1336
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
konwoo/auto-rm-erall-k200000-lr1e-4-epochs1-er1-0 | konwoo | "2025-04-22T11:33:48Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:33:25Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 670224855
num_examples: 200000
- name: validation
num_bytes: 8574979
num_examples: 1000
download_size: 427472101
dataset_size: 678799834
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
kothasuhas/auto-rm-k2000-lr0.00001-epochs1-er0-0 | kothasuhas | "2025-04-22T11:33:28Z" | 0 | 0 | [
"region:us"
] | [] | "2025-04-22T11:04:56Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: validation
num_bytes: 8574979
num_examples: 1000
- name: train
num_bytes: 4885952
num_examples: 2000
download_size: 11215059
dataset_size: 13460931
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
intelsense/smol-magpie-ultra-bn | intelsense | "2025-04-22T11:32:48Z" | 4,989 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2025-03-23T19:39:55Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
- name: difficulty
dtype: string
- name: quality
dtype: string
- name: reward_model_score
dtype: float64
- name: conversation_tokens
dtype: int64
splits:
- name: train
num_bytes: 304175886
num_examples: 18230
download_size: 103465962
dataset_size: 304175886
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Trelis/touch-rugby-o4-mini-5k_chunks-2_chunks | Trelis | "2025-04-22T11:32:48Z" | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | "2025-04-21T11:39:23Z" | ---
dataset_info:
features:
- name: document
dtype: string
- name: chunk_id
dtype: int64
- name: chunk_text
dtype: string
- name: is_table
dtype: bool
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: evaluation_criteria
dtype: string
- name: difficulty
dtype: int64
- name: category
dtype: string
- name: model
dtype: string
splits:
- name: train
num_bytes: 210239
num_examples: 29
download_size: 38747
dataset_size: 210239
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
StormKing99/x_dataset_8191 | StormKing99 | "2025-04-22T11:32:19Z" | 77,413 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-01-26T04:23:40Z" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** StormKing99/x_dataset_8191
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5CDsAAsUBDzucJv3GgPdsi1EDBgqdgpRGsm396nqDd3RVx4u
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: the datasets are mostly English, but can be multilingual due to the decentralized way they are created.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
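Since the dataset ships without fixed splits, one simple approach is to split on the `datetime` field. A minimal sketch with stand-in rows (the real data would come from the hosted parquet files, and the exact timestamp format should be checked against the data):

```python
from datetime import datetime

# Stand-in rows mirroring the card's fields; replace with rows loaded
# from the dataset's parquet files.
rows = [
    {"text": "tweet a", "datetime": "2025-01-22"},
    {"text": "tweet b", "datetime": "2025-01-30"},
    {"text": "tweet c", "datetime": "2025-02-05"},
]

# Train on everything before a cutoff date, evaluate on the rest.
cutoff = datetime(2025, 2, 1)
parse = lambda r: datetime.strptime(r["datetime"], "%Y-%m-%d")
train = [r for r in rows if parse(r) < cutoff]
test = [r for r in rows if parse(r) >= cutoff]
print(len(train), len(test))  # 2 1
```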
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{StormKing992025datauniversex_dataset_8191,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={StormKing99},
year={2025},
url={https://huggingface.co/datasets/StormKing99/x_dataset_8191},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 150425608
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-07T00:00:00Z
- **Last Updated:** 2025-02-12T18:47:42Z
### Data Distribution
- Tweets with hashtags: 42.10%
- Tweets without hashtags: 57.90%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 85002227 | 57.31% |
| 2 | #riyadh | 1046251 | 0.71% |
| 3 | #zelena | 785878 | 0.53% |
| 4 | #tiktok | 615020 | 0.41% |
| 5 | #bbb25 | 382300 | 0.26% |
| 6 | #ad | 358410 | 0.24% |
| 7 | #jhope_at_galadespiècesjaunes | 234370 | 0.16% |
| 8 | #bbmzansi | 194620 | 0.13% |
| 9 | #pr | 189419 | 0.13% |
| 10 | #trump | 182679 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T04:23:37Z | 2098210 | 2098210 |
| 2025-01-26T04:24:18Z | 2162522 | 4260732 |
| 2025-01-29T17:24:35Z | 30495898 | 34756630 |
| 2025-02-02T05:41:30Z | 28962209 | 63718839 |
| 2025-02-05T17:59:56Z | 29099416 | 92818255 |
| 2025-02-09T06:21:50Z | 29023092 | 121841347 |
| 2025-02-12T18:47:42Z | 28584261 | 150425608 |
|