Dataset Viewer

datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
jigsaw-rl/visualpuzzles | jigsaw-rl | "2025-04-17T07:12:12" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:52:55" | ---
configs:
- config_name: default
data_files:
- split: test
path: test.parquet
--- |
li0612/my-distiset-e9c973b4 | li0612 | "2025-04-17T07:09:53" | 0 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_categories:question-answering",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | [
"text-generation",
"text2text-generation",
"text-retrieval",
"question-answering"
] | "2025-04-17T07:09:44" | ---
size_categories: n<1K
task_categories:
- text-generation
- text2text-generation
- text-retrieval
- question-answering
dataset_info:
features:
- name: context
dtype: 'null'
- name: question
dtype: string
- name: response
dtype: 'null'
splits:
- name: train
num_bytes: 272
num_examples: 10
download_size: 1534
dataset_size: 272
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-e9c973b4
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/li0612/my-distiset-e9c973b4/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/li0612/my-distiset-e9c973b4/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"context": null,
"question": "What is none?",
"response": null
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("li0612/my-distiset-e9c973b4", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("li0612/my-distiset-e9c973b4")
```
</details>
|
dzinampini/potato-leaf-diseases | dzinampini | "2025-04-17T07:09:44" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:09:29" | ---
dataset_info:
features:
- name: image
dtype: string
- name: label
dtype:
class_label:
names:
'0': bacteria
'1': early_blight
'2': fungi
'3': healthy
'4': late_blight
'5': nematode
'6': pest
'7': phytopthora
'8': virus
- name: class_names
sequence: string
splits:
- name: train
num_bytes: 1432718
num_examples: 9148
- name: validation
num_bytes: 179170
num_examples: 1144
- name: test
num_bytes: 179168
num_examples: 1144
download_size: 125540
dataset_size: 1791056
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Gurwinder/notable-dataset | Gurwinder | "2025-04-17T07:09:37" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:08:57" | ---
dataset_info:
features:
- name: image
dtype: image
- name: table
dtype: string
splits:
- name: train
num_bytes: 232718912.0
num_examples: 2000
- name: val
num_bytes: 49165568.0
num_examples: 500
download_size: 278965330
dataset_size: 281884480.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
---
|
li0612/my-distiset-96b19d22 | li0612 | "2025-04-17T07:08:54" | 0 | 0 | [
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:08:38" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': low-priority-match
'1': no-match
'2': high-priority-match
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 871
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
edaydin0405/bilgiislem | edaydin0405 | "2025-04-17T07:08:31" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:51:54" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 100528.15789473684
num_examples: 102
- name: test
num_bytes: 11826.842105263158
num_examples: 12
download_size: 73812
dataset_size: 112355.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Amylyx/x_dataset_232 | Amylyx | "2025-04-17T07:08:08" | 1,673 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-04-03T15:43:56" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** Amylyx/x_dataset_232
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** lmdcd_dataserver
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: mostly English, though content may be multilingual due to the decentralized way the data is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
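For instance, a minimal sketch of a timestamp-based split, assuming a `train` split name and ISO-8601 `datetime` strings (both are assumptions, not documented guarantees):
```python
from datasets import load_dataset

ds = load_dataset("Amylyx/x_dataset_232", split="train")  # split name assumed

cutoff = "2025-03-21T00:00:00Z"  # arbitrary example cutoff; ISO strings sort chronologically
older = ds.filter(lambda row: row["datetime"] < cutoff)
newer = ds.filter(lambda row: row["datetime"] >= cutoff)
```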
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{Amylyx2025datauniversex_dataset_232,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={Amylyx},
year={2025},
url={https://huggingface.co/datasets/Amylyx/x_dataset_232},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 1100
- **Date Range:** 2025-03-14T00:00:00Z to 2025-03-28T00:00:00Z
- **Last Updated:** 2025-04-17T07:08:06Z
### Data Distribution
- Tweets with hashtags: 100.00%
- Tweets without hashtags: 0.00%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | #btc | 2 | 3.92% |
| 2 | #ليلة_القدر | 2 | 3.92% |
| 3 | #budnolollabr | 2 | 3.92% |
| 4 | #ذووووي_احتياجات_ينخاكم_الفزعه | 2 | 3.92% |
| 5 | #managingforprofit | 1 | 1.96% |
| 6 | #boi | 1 | 1.96% |
| 7 | #eliminatoriasendsports | 1 | 1.96% |
| 8 | #pr | 1 | 1.96% |
| 9 | #ポケモンスリープ | 1 | 1.96% |
| 10 | #trabajosinsueldo | 1 | 1.96% |
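The `stats.json` file mentioned above can be fetched directly from the repository; a minimal sketch using `huggingface_hub` (the file's schema is not documented here):
```python
import json

from huggingface_hub import hf_hub_download

# Downloads stats.json from the dataset repo and parses it.
path = hf_hub_download(repo_id="Amylyx/x_dataset_232", filename="stats.json", repo_type="dataset")
with open(path) as f:
    stats = json.load(f)
```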
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-04-03T16:18:40Z | 50 | 50 |
| 2025-04-03T17:44:08Z | 50 | 100 |
| 2025-04-03T17:44:59Z | 50 | 150 |
| 2025-04-04T10:42:06Z | 50 | 200 |
| 2025-04-05T03:40:06Z | 50 | 250 |
| 2025-04-05T20:40:06Z | 50 | 300 |
| 2025-04-06T13:44:08Z | 50 | 350 |
| 2025-04-07T06:42:07Z | 50 | 400 |
| 2025-04-07T23:40:06Z | 50 | 450 |
| 2025-04-08T16:39:27Z | 50 | 500 |
| 2025-04-09T12:24:05Z | 50 | 550 |
| 2025-04-10T05:22:05Z | 50 | 600 |
| 2025-04-10T22:20:05Z | 50 | 650 |
| 2025-04-11T15:18:05Z | 50 | 700 |
| 2025-04-12T08:16:05Z | 50 | 750 |
| 2025-04-13T01:14:07Z | 50 | 800 |
| 2025-04-13T18:12:05Z | 50 | 850 |
| 2025-04-14T11:10:07Z | 50 | 900 |
| 2025-04-15T04:08:11Z | 50 | 950 |
| 2025-04-15T21:06:05Z | 50 | 1000 |
| 2025-04-16T14:10:06Z | 50 | 1050 |
| 2025-04-17T07:08:06Z | 50 | 1100 |
|
VincentG1234/EPC-dataset | VincentG1234 | "2025-04-17T07:08:08" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:04:05" | ---
dataset_info:
features:
- name: pdf_path
dtype: string
- name: page
dtype: int64
- name: data
struct:
- name: carbon_emission_rating
dtype: string
- name: carbon_emission_score
dtype: string
- name: current_epc_label
dtype: string
- name: current_epc_score
dtype: string
- name: potential_carbon_emission_rating
dtype: string
- name: potential_carbon_emission_score
dtype: string
- name: potential_epc_label
dtype: string
- name: potential_epc_score
dtype: string
- name: image
dtype: binary
splits:
- name: train
num_bytes: 13588805
num_examples: 30
download_size: 13502342
dataset_size: 13588805
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Amylyx/reddit_dataset_232 | Amylyx | "2025-04-17T07:08:03" | 826 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-04-03T15:43:55" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** Amylyx/reddit_dataset_232
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** lmdcd_dataserver
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: mostly English, though content may be multilingual due to the decentralized way the data is collected.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
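As an illustration, posts and comments can be separated with the `dataType` field; the split name and the exact label values below are assumptions:
```python
from datasets import load_dataset

ds = load_dataset("Amylyx/reddit_dataset_232", split="train")  # split name assumed
posts = ds.filter(lambda row: row["dataType"] == "post")       # label values assumed
comments = ds.filter(lambda row: row["dataType"] == "comment")
```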
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{Amylyx2025datauniversereddit_dataset_232,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={Amylyx},
year={2025},
url={https://huggingface.co/datasets/Amylyx/reddit_dataset_232},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 1000
- **Date Range:** 2025-03-14T00:00:00Z to 2025-03-28T00:00:00Z
- **Last Updated:** 2025-04-17T07:08:02Z
### Data Distribution
- Posts: 6.60%
- Comments: 93.40%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/AskReddit | 236 | 23.60% |
| 2 | r/politics | 40 | 4.00% |
| 3 | r/pics | 27 | 2.70% |
| 4 | r/AITAH | 24 | 2.40% |
| 5 | r/europe | 23 | 2.30% |
| 6 | r/nba | 21 | 2.10% |
| 7 | r/soccer | 20 | 2.00% |
| 8 | r/CollegeBasketball | 19 | 1.90% |
| 9 | r/wallstreetbets | 18 | 1.80% |
| 10 | r/SluttyConfessionsDesi | 18 | 1.80% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-04-03T17:44:56Z | 50 | 50 |
| 2025-04-04T10:42:02Z | 50 | 100 |
| 2025-04-05T03:40:02Z | 50 | 150 |
| 2025-04-05T20:40:02Z | 50 | 200 |
| 2025-04-06T13:44:03Z | 50 | 250 |
| 2025-04-07T06:42:03Z | 50 | 300 |
| 2025-04-07T23:40:02Z | 50 | 350 |
| 2025-04-08T16:39:10Z | 50 | 400 |
| 2025-04-09T12:24:02Z | 50 | 450 |
| 2025-04-10T05:22:02Z | 50 | 500 |
| 2025-04-10T22:20:01Z | 50 | 550 |
| 2025-04-11T15:18:02Z | 50 | 600 |
| 2025-04-12T08:16:01Z | 50 | 650 |
| 2025-04-13T01:14:02Z | 50 | 700 |
| 2025-04-13T18:12:02Z | 50 | 750 |
| 2025-04-14T11:10:02Z | 50 | 800 |
| 2025-04-15T04:08:05Z | 50 | 850 |
| 2025-04-15T21:06:01Z | 50 | 900 |
| 2025-04-16T14:10:02Z | 50 | 950 |
| 2025-04-17T07:08:02Z | 50 | 1000 |
|
Ratchet315/test_dataset1 | Ratchet315 | "2025-04-17T07:08:01" | 0 | 0 | [
"task_categories:robotics",
"region:us",
"phosphobot",
"so100",
"phospho-dk1"
] | [
"robotics"
] | "2025-04-17T07:07:58" |
---
tags:
- phosphobot
- so100
- phospho-dk1
task_categories:
- robotics
---
# test_dataset1
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
|
kothasuhas/tinystories_10k_docs | kothasuhas | "2025-04-17T07:07:44" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:07:36" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8986259.154496541
num_examples: 10000
- name: validation
num_bytes: 449312.95772482705
num_examples: 500
download_size: 4996427
dataset_size: 9435572.112221368
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
kothasuhas/tinystories_640k_docs | kothasuhas | "2025-04-17T07:07:41" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:07:11" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 575120585.8877786
num_examples: 640000
- name: validation
num_bytes: 449312.95772482705
num_examples: 500
download_size: 304322009
dataset_size: 575569898.8455034
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
twei11/node1_round_59 | twei11 | "2025-04-17T07:07:10" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:07:02" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 7171451
num_examples: 1800
download_size: 3510116
dataset_size: 7171451
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dannnnthemannnn/Multi-choice-Continuous-Test-Remax-14B-v5-generations | dannnnthemannnn | "2025-04-17T07:07:09" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:06:57" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: question_id
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 7437127
num_examples: 768
download_size: 3736854
dataset_size: 7437127
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/Shrutilipi_Hindi_resampled_44100_chunk_15 | SayantanJoker | "2025-04-17T07:05:41" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:01:41" | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 6037887014.0
num_examples: 10000
download_size: 6019260505
dataset_size: 6037887014.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chenggong1995/math3to5-100 | chenggong1995 | "2025-04-17T07:05:41" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:05:22" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: solution
dtype: string
- name: type
dtype: string
- name: solution_hint
dtype: string
splits:
- name: train
num_bytes: 151270.79207920792
num_examples: 100
download_size: 90525
dataset_size: 151270.79207920792
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NUSTM/ECF | NUSTM | "2025-04-17T07:04:25" | 65 | 2 | [
"language:en",
"license:gpl-3.0",
"modality:text",
"region:us",
"emotion-cause-analysis"
] | null | "2024-10-12T08:22:03" | ---
license: gpl-3.0
language:
- en
tags:
- emotion-cause-analysis
---
# Emotion-Cause-in-Friends (ECF)
For the task of Multimodal Emotion-Cause Pair Extraction in Conversations, we construct ECF, a multimodal conversational emotion-cause dataset containing 9,794 multimodal emotion-cause pairs among 13,619 utterances from the *Friends* sitcom.
For more details, please refer to our GitHub:
- [Multimodal Emotion-Cause Pair Extraction in Conversations](https://github.com/NUSTM/MECPE/tree/main/data)
- [SemEval-2024 Task 3](https://github.com/NUSTM/SemEval-2024_ECAC)
## Dataset Statistics
| Item | Train | Dev | Test | Total |
| ------------------------------- | ----- | ----- | ----- | ------ |
| Conversations | 1001 | 112 | 261 | 1,374 |
| Utterances | 9,966 | 1,087 | 2,566 | 13,619 |
| Emotion (utterances) | 5,577 | 668 | 1,445 | 7,690 |
| Emotion-cause (utterance) pairs | 7,055 | 866 | 1,873 | 9,794 |
## About Multimodal Data
⚠️ Due to potential copyright issues with the TV show "Friends", we do not provide pre-segmented video clips.
If you need to utilize multimodal data, you may consider the following options:
1. Use the acoustic and visual features we provide:
- [`audio_embedding_6373.npy`](https://drive.google.com/file/d/1EhU2jFSr_Vi67Wdu1ARJozrTJtgiQrQI/view?usp=share_link): the embedding table composed of the 6373-dimensional acoustic features of each utterance, extracted with openSMILE
- [`video_embedding_4096.npy`](https://drive.google.com/file/d/1NGSsiQYDTqgen_g9qndSuha29JA60x14/view?usp=share_link): the embedding table composed of the 4096-dimensional visual features of each utterance, extracted with 3D-CNN
- Please note that the above features cover only the original ECF (1.0) dataset; the SemEval evaluation data is not included. If needed, you can contact us, and we will do our best to release new features.
2. Since ECF is constructed based on the MELD dataset, you can download the raw video clips from [MELD](https://github.com/declare-lab/MELD).
Most utterances in ECF align with MELD. However, **we have made certain modifications to MELD's raw data while constructing ECF, including but not limited to editing utterance text, adjusting timestamps, and adding or removing utterances**. Therefore, some timestamps provided in ECF have been corrected, and there are also new utterances that cannot be found in MELD. Given this, we recommend option (3) if feasible.
3. Download the raw videos of _Friends_ from the website, and use the FFmpeg toolkit to extract audio-visual clips of each utterance based on the timestamps we provide (see the sketch below).
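A minimal sketch of option (3), driving FFmpeg from Python. The episode file name and timestamps below are hypothetical placeholders; the real values come from the ECF annotations.
```python
import subprocess

def extract_clip(video_path: str, start: str, end: str, out_path: str) -> None:
    """Cut the clip [start, end] out of an episode video with ffmpeg."""
    subprocess.run(
        [
            "ffmpeg", "-i", video_path,
            "-ss", start, "-to", end,  # utterance timestamps from the ECF annotations
            "-c", "copy",              # stream copy; re-encode instead for frame-accurate cuts
            out_path,
        ],
        check=True,
    )

# Hypothetical example; replace with real episode files and annotation timestamps.
extract_clip("friends_s01e01.mp4", "00:03:12.400", "00:03:15.900", "utt_0001.mp4")
```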
## Citation
If you find ECF useful for your research, please cite our paper using the following BibTeX entries:
```
@ARTICLE{wang2023multimodal,
author={Wang, Fanfan and Ding, Zixiang and Xia, Rui and Li, Zhaoyu and Yu, Jianfei},
journal={IEEE Transactions on Affective Computing},
title={Multimodal Emotion-Cause Pair Extraction in Conversations},
year={2023},
volume={14},
number={3},
pages={1832-1844},
doi = {10.1109/TAFFC.2022.3226559}
}
@InProceedings{wang2024SemEval,
author={Wang, Fanfan and Ma, Heqing and Xia, Rui and Yu, Jianfei and Cambria, Erik},
title={SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations},
booktitle={Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)},
month={June},
year={2024},
address={Mexico City, Mexico},
publisher={Association for Computational Linguistics},
pages={2022--2033},
url = {https://aclanthology.org/2024.semeval2024-1.273}
}
```
|
whucedar/amoros_prof_vocab_02 | whucedar | "2025-04-17T07:04:22" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:54:42" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 97702560.0
num_examples: 442
- name: test
num_bytes: 48755585.0
num_examples: 218
download_size: 145924995
dataset_size: 146458145.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
yinyue27/RefRef | yinyue27 | "2025-04-17T07:03:06" | 509 | 2 | [
"task_categories:image-to-3d",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"region:us",
"code"
] | [
"image-to-3d"
] | "2024-11-02T14:43:29" | ---
license: cc-by-4.0
task_categories:
- image-to-3d
language:
- en
tags:
- code
pretty_name: RefRef
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: image
dtype: image
- name: depth
dtype: image
- name: mask
dtype: image
- name: transform_matrix
sequence:
sequence: float64
length: 4
length: 4
- name: rotation
dtype: float32
splits:
- name: textured_cube_scene
num_bytes: 673141617.0
num_examples: 300
download_size: 280778834
dataset_size: 673141617.0
configs:
- config_name: default
data_files:
- split: textured_cube_scene
path: data/textured_cube_scene-*
---
# RefRef: A Synthetic Dataset and Benchmark for Reconstructing Scenes with Refractive and Reflective Objects (haven't uploaded everything yet)
## Overview
**RefRef** is a synthetic dataset and benchmark designed for the task of reconstructing scenes with complex refractive and reflective objects. Our dataset consists of 50 objects categorized based on their geometric and material complexity: single-material convex objects, single-material non-convex objects, and multi-material non-convex objects, where the materials have different colors, opacities, and refractive indices.
Each object is placed in two distinct bounded environments and one unbounded environment, resulting in 150 unique scenes with diverse geometries, material properties, and backgrounds.
Our dataset provides a controlled setting for evaluating and developing 3D reconstruction and novel view synthesis methods that handle complex optical effects.
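Based on the configuration in the YAML header above, the currently uploaded split can be loaded with `datasets`; the split name comes from that header, and more scenes may be added later:
```python
from datasets import load_dataset

ds = load_dataset("yinyue27/RefRef", split="textured_cube_scene")
sample = ds[0]
# Feature names from the YAML header: image, depth, mask, transform_matrix, rotation
print(sample["rotation"])
```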
## Directory Structure
```plaintext
RefRef_Dataset/
├── README.md
├── dataset_info/ # Metadata and dataset description files
│ ├── object_list.txt
│ ├── scene_list.txt
│ └── IoR_info.json # IoR values mapped to each object
├── image_data/ # Rendered images, depth maps, and masks for each object
│ ├── textured_cube_scene/
│ │ └── {single-material_convex, single-material_non-convex, multiple-materials_non-convex}/
│ │ └── {object_name}/
│ │ ├── train/ # Training set
│ │ │ ├── r_0.png # RGB image
│ │ │ ├── r_0_depth_0000.png # Depth map
│ │ │ ├── r_0_mask_0000.png # Mask
│ │ │ ├── r_1.png
│ │ │ ├── r_1_depth_0000.png
│ │ │ ├── r_1_mask_0000.png
│ │ │ └── ...
│ │ ├── val/ # Validation set
│ │ ├── test/ # Testing set
│ │ ├── transforms_train.json
│ │ ├── transforms_val.json
│ │ └── transforms_test.json
│ ├── textured_sphere_scene/
│ │ └── ...
│ ├── environment_map_scene/
│ └── ...
├── mesh_files/ # 3D mesh files (.ply format) for each object
│ └── {single-material_convex, single-material_non-convex, multiple-materials_non-convex}/
│ └── ...
├── blender_files/ # Blender source files for each object, organised by scene
│ ├── bgpanels_cube/ # Background panels for cube scene
│ ├── bgpanels_sphere/ # Background panels for sphere scene
│ └── {textured_cube_scene, textured_sphere_scene}/
│ └── ...
└── benchmarks/ # Benchmark results from various methods
├── oracle_method/
├── Zip-NeRF/
├── Ray Deformation/
├── MS-NeRF/
├── NeUS/
└── ...
```
## Object and Scenes
The dataset includes 50 objects categorized into three groups based on their complexity, material composition, and shape:
- `single-convex/`(18 scenes): Objects with convex geometries, each composed of a single refractive material, such as transparent cubes, balls, and pyramids.
- `single-non-convex/`(40 scenes): Objects with non-convex geometries, each composed of a single refractive material, such as animal sculptures, glass jars, light bulbs, candle holders, and magnifiers.
- `multiple-non-convex/`(42 scenes): Objects with non-convex geometries, each composed of multiple refractive materials or a combination of refractive and opaque materials, such as reed diffusers, a glass of wine, and flasks filled with chemical liquid.
Each object is placed in three distinct scenes:
- `textured_cube_scene/`: Objects placed within a bounded textured cube environment.
- `textured_sphere_scene/`: Objects placed within a bounded textured sphere environment.
- `environment_map_scene/`: Objects placed in an unbounded environment map background.
## IoR Information
A single JSON file, `IoR_info.json`, is provided in the `dataset_info/` directory, mapping each component of each object to its Index of Refraction (IoR) value.
Example format for `IoR_info.json`:
```json
{
"cube": 1.5,
"diamond": 2.418,
"wine_glass": {"glass": 1.5, "alcohol": 1.36},
"water_pitcher": {"glass": 1.5, "water": 1.333, "ice": 1.309}
...
}
```
|
qr12138/reddit_dataset_170 | qr12138 | "2025-04-17T07:03:06" | 886 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-03-15T09:24:09" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** qr12138/reddit_dataset_170
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5Dc1nTgKrrJxRfrtRZVwMNTJup7t3ML57upG9nsbpy1PuXDM
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: mostly English, though content may be multilingual due to the decentralized way the data is collected.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{qr121382025datauniversereddit_dataset_170,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={qr12138},
year={2025},
url={https://huggingface.co/datasets/qr12138/reddit_dataset_170},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 11214024
- **Date Range:** 2021-03-17T00:00:00Z to 2025-04-17T00:00:00Z
- **Last Updated:** 2025-04-17T07:03:04Z
### Data Distribution
- Posts: 4.10%
- Comments: 95.90%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/politics | 576375 | 5.14% |
| 2 | r/AskReddit | 467046 | 4.16% |
| 3 | r/wallstreetbets | 451827 | 4.03% |
| 4 | r/worldnews | 318203 | 2.84% |
| 5 | r/AmItheAsshole | 183759 | 1.64% |
| 6 | r/gaming | 180148 | 1.61% |
| 7 | r/relationship_advice | 175034 | 1.56% |
| 8 | r/AITAH | 173749 | 1.55% |
| 9 | r/NoStupidQuestions | 167331 | 1.49% |
| 10 | r/nfl | 159976 | 1.43% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-03-15T09:24:12Z | 9490 | 9490 |
| 2025-03-16T03:27:40Z | 46400 | 55890 |
| 2025-03-16T20:44:09Z | 22551 | 78441 |
| 2025-03-17T15:11:41Z | 138743 | 217184 |
| 2025-03-18T08:28:48Z | 306549 | 523733 |
| 2025-03-19T02:33:41Z | 67170 | 590903 |
| 2025-03-19T20:48:06Z | 68876 | 659779 |
| 2025-03-20T14:48:33Z | 94965 | 754744 |
| 2025-03-21T08:49:03Z | 166017 | 920761 |
| 2025-03-22T02:52:04Z | 349195 | 1269956 |
| 2025-03-22T20:07:40Z | 282604 | 1552560 |
| 2025-03-23T14:08:49Z | 296900 | 1849460 |
| 2025-03-24T08:09:43Z | 322174 | 2171634 |
| 2025-03-25T02:10:44Z | 395826 | 2567460 |
| 2025-03-25T20:13:15Z | 356459 | 2923919 |
| 2025-03-26T14:18:44Z | 367177 | 3291096 |
| 2025-03-27T08:26:50Z | 393641 | 3684737 |
| 2025-03-28T02:37:53Z | 393410 | 4078147 |
| 2025-03-28T20:54:01Z | 341540 | 4419687 |
| 2025-03-29T15:07:33Z | 282226 | 4701913 |
| 2025-03-31T02:35:30Z | 401806 | 5103719 |
| 2025-03-31T20:36:09Z | 183053 | 5286772 |
| 2025-04-01T14:51:45Z | 180812 | 5467584 |
| 2025-04-02T08:58:59Z | 220374 | 5687958 |
| 2025-04-03T04:12:21Z | 256538 | 5944496 |
| 2025-04-03T23:55:23Z | 215115 | 6159611 |
| 2025-04-04T18:38:08Z | 178532 | 6338143 |
| 2025-04-05T12:33:31Z | 173281 | 6511424 |
| 2025-04-06T06:50:59Z | 327093 | 6838517 |
| 2025-04-07T01:12:32Z | 112467 | 6950984 |
| 2025-04-07T19:40:14Z | 435175 | 7386159 |
| 2025-04-08T17:11:24Z | 326486 | 7712645 |
| 2025-04-09T16:21:48Z | 516436 | 8229081 |
| 2025-04-10T10:22:39Z | 306826 | 8535907 |
| 2025-04-11T04:23:45Z | 456582 | 8992489 |
| 2025-04-11T22:16:56Z | 225454 | 9217943 |
| 2025-04-12T16:30:20Z | 242178 | 9460121 |
| 2025-04-13T10:41:18Z | 365485 | 9825606 |
| 2025-04-14T04:42:34Z | 424670 | 10250276 |
| 2025-04-14T23:04:31Z | 290242 | 10540518 |
| 2025-04-15T17:46:39Z | 331422 | 10871940 |
| 2025-04-16T12:18:04Z | 310592 | 11182532 |
| 2025-04-17T07:03:04Z | 31492 | 11214024 |
|
RaphaelLiu/PusaV0.5_Training | RaphaelLiu | "2025-04-17T07:03:03" | 894 | 1 | [
"license:apache-2.0",
"modality:video",
"arxiv:2410.03160",
"region:us"
] | null | "2025-04-09T07:39:44" | ---
license: apache-2.0
---
# PusaV0.5 Training Dataset
[Code Repository](https://github.com/Yaofang-Liu/Pusa-VidGen) | [Model Hub](https://huggingface.co/RaphaelLiu/Pusa-V0.5) | [Training Toolkit](https://github.com/Yaofang-Liu/Mochi-Full-Finetuner) | [Dataset](https://huggingface.co/datasets/RaphaelLiu/PusaV0.5_Training) | [Paper](https://arxiv.org/abs/2410.03160) | [Follow on X](https://x.com/stephenajason) | [Xiaohongshu](https://www.xiaohongshu.com/explore/67f898dc000000001c008339?source=webshare&xhsshare=pc_web&xsec_token=ABAhG8mltqyMxL9kI0eRxwj7EwiW7MFYH2oPl4n8ww0OM=&xsec_source=pc_share)
## Dataset Overview
This repository contains the pre-encoded training dataset used for fine-tuning the [Pusa-V0.5](https://github.com/Yaofang-Liu/Pusa-VidGen) video generation model. The dataset consists of 52,695 pre-encoded latent samples derived from [VIDGEN-1M](https://huggingface.co/datasets/Fudan-FUXI/VIDGEN-1M), though Pusa-V0.5 was trained on only 16,000 samples from this dataset.
## Dataset Structure
The dataset is organized into two main directories:
```
PusaV0.5_Training/
videos/
xxxx.latent.pt # Pre-encoded video latents
xxxx.latent.pt
...
captions/
xxxx.embed.pt # Pre-encoded text embeddings
xxxx.embed.pt
...
```
- **videos/**: Contains pre-encoded video latents in PyTorch tensor format. The corresponding source videos (`.mp4` files) are also provided in `videos/`; you may check them out for more details.
- **captions/**: Contains corresponding text embeddings for each video
## Dataset Details
- **Total Samples**: 52,695 video-text embedding pairs
- **Source**: Randomly sampled from [VIDGEN-1M](https://huggingface.co/datasets/Fudan-FUXI/VIDGEN-1M)
- **Format**: Pre-encoded latents (.pt files) ready for training
- **Used in Pusa-V0.5**: 16,000 samples from this dataset were used to train the released Pusa-V0.5 model
## Usage
### Download the Dataset
```bash
huggingface-cli download RaphaelLiu/PusaV0.5_Training --repo-type dataset --local-dir <path_to_dataset_directory>
```
### Using with Mochi-Full-Finetuner
This dataset is designed to work seamlessly with the [Mochi-Full-Finetuner](https://github.com/Yaofang-Liu/Mochi-Full-Finetuner) repository for training Pusa or Mochi models:
```bash
python -u /path/to/src/genmo/mochi_preview/train_pusa.py \
--world_size=8 \
--model_dir="/path/to/model/directory" \
--data_path="/path/to/PusaV0.5_Training/videos"
```
Note: When specifying `--data_path`, provide only the path to the videos directory. The training script will automatically locate the captions directory by replacing "videos" with "captions" in the base path.
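A minimal sketch of that pairing convention, with a hypothetical sample name (the `.latent.pt` to `.embed.pt` suffix swap is inferred from the layout above):
```python
# The training script derives the caption path from the latent path.
latent_path = "/path/to/PusaV0.5_Training/videos/00001.latent.pt"
embed_path = latent_path.replace("videos", "captions").replace(".latent.pt", ".embed.pt")
print(embed_path)  # -> /path/to/PusaV0.5_Training/captions/00001.embed.pt
```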
## Creating Your Own Dataset
If you wish to create your own dataset in the same format, follow the instructions in the [Mochi LoRA Training repository](https://github.com/genmoai/mochi/tree/main/demos/fine_tuner). Your dataset should match the structure shown above, with corresponding latent and embedding files for each sample.
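For orientation, a minimal sketch of writing one latent/embedding pair in that layout; the tensor shapes are arbitrary placeholders, not the real Mochi encoder shapes:
```python
import os

import torch

os.makedirs("my_dataset/videos", exist_ok=True)
os.makedirs("my_dataset/captions", exist_ok=True)

latent = torch.randn(4, 8, 16, 16)  # placeholder for a VAE-encoded video latent
embed = torch.randn(77, 1024)       # placeholder for a text embedding

torch.save(latent, "my_dataset/videos/00001.latent.pt")
torch.save(embed, "my_dataset/captions/00001.embed.pt")
```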
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{Liu2025pusa,
title={Pusa: Thousands Timesteps Video Diffusion Model},
author={Yaofang Liu and Rui Liu},
year={2025},
url={https://github.com/Yaofang-Liu/Pusa-VidGen},
}
```
```bibtex
@article{liu2024redefining,
title={Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach},
author={Liu, Yaofang and Ren, Yumeng and Cun, Xiaodong and Artola, Aitor and Liu, Yang and Zeng, Tieyong and Chan, Raymond H and Morel, Jean-michel},
journal={arXiv preprint arXiv:2410.03160},
year={2024}
}
```
|
macrocosm-os/macrobench-bittensor-01 | macrocosm-os | "2025-04-17T07:02:16" | 2,094 | 2 | [
"license:mit",
"region:us"
] | null | "2025-02-05T11:18:22" | ---
configs:
- config_name: '20241001'
data_files:
- path: 20241001/miner_evaluations.parquet
split: '20241001'
- config_name: '20241002'
data_files:
- path: 20241002/miner_evaluations.parquet
split: '20241002'
- config_name: '20241003'
data_files:
- path: 20241003/miner_evaluations.parquet
split: '20241003'
- config_name: '20241004'
data_files:
- path: 20241004/miner_evaluations.parquet
split: '20241004'
- config_name: '20241005'
data_files:
- path: 20241005/miner_evaluations.parquet
split: '20241005'
- config_name: '20241006'
data_files:
- path: 20241006/miner_evaluations.parquet
split: '20241006'
- config_name: '20241007'
data_files:
- path: 20241007/miner_evaluations.parquet
split: '20241007'
- config_name: '20241008'
data_files:
- path: 20241008/miner_evaluations.parquet
split: '20241008'
- config_name: '20241009'
data_files:
- path: 20241009/miner_evaluations.parquet
split: '20241009'
- config_name: '20241010'
data_files:
- path: 20241010/miner_evaluations.parquet
split: '20241010'
- config_name: '20241011'
data_files:
- path: 20241011/miner_evaluations.parquet
split: '20241011'
- config_name: '20241012'
data_files:
- path: 20241012/miner_evaluations.parquet
split: '20241012'
- config_name: '20241013'
data_files:
- path: 20241013/miner_evaluations.parquet
split: '20241013'
- config_name: '20241014'
data_files:
- path: 20241014/miner_evaluations.parquet
split: '20241014'
- config_name: '20241015'
data_files:
- path: 20241015/miner_evaluations.parquet
split: '20241015'
- config_name: '20241016'
data_files:
- path: 20241016/miner_evaluations.parquet
split: '20241016'
- config_name: '20241017'
data_files:
- path: 20241017/miner_evaluations.parquet
split: '20241017'
- config_name: '20241018'
data_files:
- path: 20241018/miner_evaluations.parquet
split: '20241018'
- config_name: '20241019'
data_files:
- path: 20241019/miner_evaluations.parquet
split: '20241019'
- config_name: '20241020'
data_files:
- path: 20241020/miner_evaluations.parquet
split: '20241020'
- config_name: '20241021'
data_files:
- path: 20241021/miner_evaluations.parquet
split: '20241021'
- config_name: '20241022'
data_files:
- path: 20241022/miner_evaluations.parquet
split: '20241022'
- config_name: '20241023'
data_files:
- path: 20241023/miner_evaluations.parquet
split: '20241023'
- config_name: '20241024'
data_files:
- path: 20241024/miner_evaluations.parquet
split: '20241024'
- config_name: '20241025'
data_files:
- path: 20241025/miner_evaluations.parquet
split: '20241025'
- config_name: '20241026'
data_files:
- path: 20241026/miner_evaluations.parquet
split: '20241026'
- config_name: '20241027'
data_files:
- path: 20241027/miner_evaluations.parquet
split: '20241027'
- config_name: '20241028'
data_files:
- path: 20241028/miner_evaluations.parquet
split: '20241028'
- config_name: '20241029'
data_files:
- path: 20241029/miner_evaluations.parquet
split: '20241029'
- config_name: '20241030'
data_files:
- path: 20241030/miner_evaluations.parquet
split: '20241030'
- config_name: '20241031'
data_files:
- path: 20241031/miner_evaluations.parquet
split: '20241031'
- config_name: '20241101'
data_files:
- path: 20241101/miner_evaluations.parquet
split: '20241101'
- config_name: '20241102'
data_files:
- path: 20241102/miner_evaluations.parquet
split: '20241102'
- config_name: '20241103'
data_files:
- path: 20241103/miner_evaluations.parquet
split: '20241103'
- config_name: '20241104'
data_files:
- path: 20241104/miner_evaluations.parquet
split: '20241104'
- config_name: '20241105'
data_files:
- path: 20241105/miner_evaluations.parquet
split: '20241105'
- config_name: '20241106'
data_files:
- path: 20241106/miner_evaluations.parquet
split: '20241106'
- config_name: '20241107'
data_files:
- path: 20241107/miner_evaluations.parquet
split: '20241107'
- config_name: '20241108'
data_files:
- path: 20241108/miner_evaluations.parquet
split: '20241108'
- config_name: '20241109'
data_files:
- path: 20241109/miner_evaluations.parquet
split: '20241109'
- config_name: '20241110'
data_files:
- path: 20241110/miner_evaluations.parquet
split: '20241110'
- config_name: '20241111'
data_files:
- path: 20241111/miner_evaluations.parquet
split: '20241111'
- config_name: '20241112'
data_files:
- path: 20241112/miner_evaluations.parquet
split: '20241112'
- config_name: '20241113'
data_files:
- path: 20241113/miner_evaluations.parquet
split: '20241113'
- config_name: '20241114'
data_files:
- path: 20241114/miner_evaluations.parquet
split: '20241114'
- config_name: '20241115'
data_files:
- path: 20241115/miner_evaluations.parquet
split: '20241115'
- config_name: '20241116'
data_files:
- path: 20241116/miner_evaluations.parquet
split: '20241116'
- config_name: '20241117'
data_files:
- path: 20241117/miner_evaluations.parquet
split: '20241117'
- config_name: '20241118'
data_files:
- path: 20241118/miner_evaluations.parquet
split: '20241118'
- config_name: '20241119'
data_files:
- path: 20241119/miner_evaluations.parquet
split: '20241119'
- config_name: '20241120'
data_files:
- path: 20241120/miner_evaluations.parquet
split: '20241120'
- config_name: '20241121'
data_files:
- path: 20241121/miner_evaluations.parquet
split: '20241121'
- config_name: '20241122'
data_files:
- path: 20241122/miner_evaluations.parquet
split: '20241122'
- config_name: '20241123'
data_files:
- path: 20241123/miner_evaluations.parquet
split: '20241123'
- config_name: '20241124'
data_files:
- path: 20241124/miner_evaluations.parquet
split: '20241124'
- config_name: '20241125'
data_files:
- path: 20241125/miner_evaluations.parquet
split: '20241125'
- config_name: '20241126'
data_files:
- path: 20241126/miner_evaluations.parquet
split: '20241126'
- config_name: '20241127'
data_files:
- path: 20241127/miner_evaluations.parquet
split: '20241127'
- config_name: '20241128'
data_files:
- path: 20241128/miner_evaluations.parquet
split: '20241128'
- config_name: '20241129'
data_files:
- path: 20241129/miner_evaluations.parquet
split: '20241129'
- config_name: '20241130'
data_files:
- path: 20241130/miner_evaluations.parquet
split: '20241130'
- config_name: '20241201'
data_files:
- path: 20241201/miner_evaluations.parquet
split: '20241201'
- config_name: '20241202'
data_files:
- path: 20241202/miner_evaluations.parquet
split: '20241202'
- config_name: '20241203'
data_files:
- path: 20241203/miner_evaluations.parquet
split: '20241203'
- config_name: '20241204'
data_files:
- path: 20241204/miner_evaluations.parquet
split: '20241204'
- config_name: '20241205'
data_files:
- path: 20241205/miner_evaluations.parquet
split: '20241205'
- config_name: '20241206'
data_files:
- path: 20241206/miner_evaluations.parquet
split: '20241206'
- config_name: '20241207'
data_files:
- path: 20241207/miner_evaluations.parquet
split: '20241207'
- config_name: '20241208'
data_files:
- path: 20241208/miner_evaluations.parquet
split: '20241208'
- config_name: '20241209'
data_files:
- path: 20241209/miner_evaluations.parquet
split: '20241209'
- config_name: '20241210'
data_files:
- path: 20241210/miner_evaluations.parquet
split: '20241210'
- config_name: '20241211'
data_files:
- path: 20241211/miner_evaluations.parquet
split: '20241211'
- config_name: '20241212'
data_files:
- path: 20241212/miner_evaluations.parquet
split: '20241212'
- config_name: '20241213'
data_files:
- path: 20241213/miner_evaluations.parquet
split: '20241213'
- config_name: '20241214'
data_files:
- path: 20241214/miner_evaluations.parquet
split: '20241214'
- config_name: '20241215'
data_files:
- path: 20241215/miner_evaluations.parquet
split: '20241215'
- config_name: '20241216'
data_files:
- path: 20241216/miner_evaluations.parquet
split: '20241216'
- config_name: '20241217'
data_files:
- path: 20241217/miner_evaluations.parquet
split: '20241217'
- config_name: '20241218'
data_files:
- path: 20241218/miner_evaluations.parquet
split: '20241218'
- config_name: '20241219'
data_files:
- path: 20241219/miner_evaluations.parquet
split: '20241219'
- config_name: '20241220'
data_files:
- path: 20241220/miner_evaluations.parquet
split: '20241220'
- config_name: '20241221'
data_files:
- path: 20241221/miner_evaluations.parquet
split: '20241221'
- config_name: '20241222'
data_files:
- path: 20241222/miner_evaluations.parquet
split: '20241222'
- config_name: '20241223'
data_files:
- path: 20241223/miner_evaluations.parquet
split: '20241223'
- config_name: '20241224'
data_files:
- path: 20241224/miner_evaluations.parquet
split: '20241224'
- config_name: '20241225'
data_files:
- path: 20241225/miner_evaluations.parquet
split: '20241225'
- config_name: '20241226'
data_files:
- path: 20241226/miner_evaluations.parquet
split: '20241226'
- config_name: '20241227'
data_files:
- path: 20241227/miner_evaluations.parquet
split: '20241227'
- config_name: '20241228'
data_files:
- path: 20241228/miner_evaluations.parquet
split: '20241228'
- config_name: '20241229'
data_files:
- path: 20241229/miner_evaluations.parquet
split: '20241229'
- config_name: '20241230'
data_files:
- path: 20241230/miner_evaluations.parquet
split: '20241230'
- config_name: '20241231'
data_files:
- path: 20241231/miner_evaluations.parquet
split: '20241231'
- config_name: '20250101'
data_files:
- path: 20250101/miner_evaluations.parquet
split: '20250101'
- config_name: '20250102'
data_files:
- path: 20250102/miner_evaluations.parquet
split: '20250102'
- config_name: '20250103'
data_files:
- path: 20250103/miner_evaluations.parquet
split: '20250103'
- config_name: '20250104'
data_files:
- path: 20250104/miner_evaluations.parquet
split: '20250104'
- config_name: '20250105'
data_files:
- path: 20250105/miner_evaluations.parquet
split: '20250105'
- config_name: '20250106'
data_files:
- path: 20250106/miner_evaluations.parquet
split: '20250106'
- config_name: '20250107'
data_files:
- path: 20250107/miner_evaluations.parquet
split: '20250107'
- config_name: '20250108'
data_files:
- path: 20250108/miner_evaluations.parquet
split: '20250108'
- config_name: '20250109'
data_files:
- path: 20250109/miner_evaluations.parquet
split: '20250109'
- config_name: '20250110'
data_files:
- path: 20250110/miner_evaluations.parquet
split: '20250110'
- config_name: '20250111'
data_files:
- path: 20250111/miner_evaluations.parquet
split: '20250111'
- config_name: '20250112'
data_files:
- path: 20250112/miner_evaluations.parquet
split: '20250112'
- config_name: '20250113'
data_files:
- path: 20250113/miner_evaluations.parquet
split: '20250113'
- config_name: '20250114'
data_files:
- path: 20250114/miner_evaluations.parquet
split: '20250114'
- config_name: '20250115'
data_files:
- path: 20250115/miner_evaluations.parquet
split: '20250115'
- config_name: '20250116'
data_files:
- path: 20250116/miner_evaluations.parquet
split: '20250116'
- config_name: '20250117'
data_files:
- path: 20250117/miner_evaluations.parquet
split: '20250117'
- config_name: '20250118'
data_files:
- path: 20250118/miner_evaluations.parquet
split: '20250118'
- config_name: '20250119'
data_files:
- path: 20250119/miner_evaluations.parquet
split: '20250119'
- config_name: '20250120'
data_files:
- path: 20250120/miner_evaluations.parquet
split: '20250120'
- config_name: '20250121'
data_files:
- path: 20250121/miner_evaluations.parquet
split: '20250121'
- config_name: '20250122'
data_files:
- path: 20250122/miner_evaluations.parquet
split: '20250122'
- config_name: '20250123'
data_files:
- path: 20250123/miner_evaluations.parquet
split: '20250123'
- config_name: '20250124'
data_files:
- path: 20250124/miner_evaluations.parquet
split: '20250124'
- config_name: '20250125'
data_files:
- path: 20250125/miner_evaluations.parquet
split: '20250125'
- config_name: '20250126'
data_files:
- path: 20250126/miner_evaluations.parquet
split: '20250126'
- config_name: '20250127'
data_files:
- path: 20250127/miner_evaluations.parquet
split: '20250127'
- config_name: '20250128'
data_files:
- path: 20250128/miner_evaluations.parquet
split: '20250128'
- config_name: '20250129'
data_files:
- path: 20250129/miner_evaluations.parquet
split: '20250129'
- config_name: '20250130'
data_files:
- path: 20250130/miner_evaluations.parquet
split: '20250130'
- config_name: '20250131'
data_files:
- path: 20250131/miner_evaluations.parquet
split: '20250131'
- config_name: '20250201'
data_files:
- path: 20250201/miner_evaluations.parquet
split: '20250201'
- config_name: '20250202'
data_files:
- path: 20250202/miner_evaluations.parquet
split: '20250202'
- config_name: '20250203'
data_files:
- path: 20250203/miner_evaluations.parquet
split: '20250203'
- config_name: '20250204'
data_files:
- path: 20250204/miner_evaluations.parquet
split: '20250204'
- config_name: '20250205'
data_files:
- path: 20250205/miner_evaluations.parquet
split: '20250205'
- config_name: '20250206'
data_files:
- path: 20250206/miner_evaluations.parquet
split: '20250206'
- config_name: '20250207'
data_files:
- path: 20250207/miner_evaluations.parquet
split: '20250207'
- config_name: '20250208'
data_files:
- path: 20250208/miner_evaluations.parquet
split: '20250208'
- config_name: '20250209'
data_files:
- path: 20250209/miner_evaluations.parquet
split: '20250209'
- config_name: '20250210'
data_files:
- path: 20250210/miner_evaluations.parquet
split: '20250210'
- config_name: '20250211'
data_files:
- path: 20250211/miner_evaluations.parquet
split: '20250211'
- config_name: '20250212'
data_files:
- path: 20250212/miner_evaluations.parquet
split: '20250212'
- config_name: '20250213'
data_files:
- path: 20250213/miner_evaluations.parquet
split: '20250213'
- config_name: '20250214'
data_files:
- path: 20250214/miner_evaluations.parquet
split: '20250214'
- config_name: '20250215'
data_files:
- path: 20250215/miner_evaluations.parquet
split: '20250215'
- config_name: '20250216'
data_files:
- path: 20250216/miner_evaluations.parquet
split: '20250216'
- config_name: '20250217'
data_files:
- path: 20250217/miner_evaluations.parquet
split: '20250217'
- config_name: '20250218'
data_files:
- path: 20250218/miner_evaluations.parquet
split: '20250218'
- config_name: '20250219'
data_files:
- path: 20250219/miner_evaluations.parquet
split: '20250219'
- config_name: '20250220'
data_files:
- path: 20250220/miner_evaluations.parquet
split: '20250220'
- config_name: '20250221'
data_files:
- path: 20250221/miner_evaluations.parquet
split: '20250221'
- config_name: '20250222'
data_files:
- path: 20250222/miner_evaluations.parquet
split: '20250222'
- config_name: '20250223'
data_files:
- path: 20250223/miner_evaluations.parquet
split: '20250223'
- config_name: '20250224'
data_files:
- path: 20250224/miner_evaluations.parquet
split: '20250224'
- config_name: '20250225'
data_files:
- path: 20250225/miner_evaluations.parquet
split: '20250225'
- config_name: '20250226'
data_files:
- path: 20250226/miner_evaluations.parquet
split: '20250226'
- config_name: '20250227'
data_files:
- path: 20250227/miner_evaluations.parquet
split: '20250227'
- config_name: '20250228'
data_files:
- path: 20250228/miner_evaluations.parquet
split: '20250228'
- config_name: '20250301'
data_files:
- path: 20250301/miner_evaluations.parquet
split: '20250301'
- config_name: '20250302'
data_files:
- path: 20250302/miner_evaluations.parquet
split: '20250302'
- config_name: '20250303'
data_files:
- path: 20250303/miner_evaluations.parquet
split: '20250303'
- config_name: '20250304'
data_files:
- path: 20250304/miner_evaluations.parquet
split: '20250304'
- config_name: '20250305'
data_files:
- path: 20250305/miner_evaluations.parquet
split: '20250305'
- config_name: '20250306'
data_files:
- path: 20250306/miner_evaluations.parquet
split: '20250306'
- config_name: '20250307'
data_files:
- path: 20250307/miner_evaluations.parquet
split: '20250307'
- config_name: '20250308'
data_files:
- path: 20250308/miner_evaluations.parquet
split: '20250308'
- config_name: '20250309'
data_files:
- path: 20250309/miner_evaluations.parquet
split: '20250309'
- config_name: '20250310'
data_files:
- path: 20250310/miner_evaluations.parquet
split: '20250310'
- config_name: '20250311'
data_files:
- path: 20250311/miner_evaluations.parquet
split: '20250311'
- config_name: '20250312'
data_files:
- path: 20250312/miner_evaluations.parquet
split: '20250312'
- config_name: '20250313'
data_files:
- path: 20250313/miner_evaluations.parquet
split: '20250313'
- config_name: '20250314'
data_files:
- path: 20250314/miner_evaluations.parquet
split: '20250314'
- config_name: '20250315'
data_files:
- path: 20250315/miner_evaluations.parquet
split: '20250315'
- config_name: '20250316'
data_files:
- path: 20250316/miner_evaluations.parquet
split: '20250316'
- config_name: '20250317'
data_files:
- path: 20250317/miner_evaluations.parquet
split: '20250317'
- config_name: '20250318'
data_files:
- path: 20250318/miner_evaluations.parquet
split: '20250318'
- config_name: '20250319'
data_files:
- path: 20250319/miner_evaluations.parquet
split: '20250319'
- config_name: '20250320'
data_files:
- path: 20250320/miner_evaluations.parquet
split: '20250320'
- config_name: '20250321'
data_files:
- path: 20250321/miner_evaluations.parquet
split: '20250321'
- config_name: '20250322'
data_files:
- path: 20250322/miner_evaluations.parquet
split: '20250322'
- config_name: '20250323'
data_files:
- path: 20250323/miner_evaluations.parquet
split: '20250323'
- config_name: '20250324'
data_files:
- path: 20250324/miner_evaluations.parquet
split: '20250324'
- config_name: '20250325'
data_files:
- path: 20250325/miner_evaluations.parquet
split: '20250325'
- config_name: '20250326'
data_files:
- path: 20250326/miner_evaluations.parquet
split: '20250326'
- config_name: '20250327'
data_files:
- path: 20250327/miner_evaluations.parquet
split: '20250327'
- config_name: '20250328'
data_files:
- path: 20250328/miner_evaluations.parquet
split: '20250328'
- config_name: '20250329'
data_files:
- path: 20250329/miner_evaluations.parquet
split: '20250329'
- config_name: '20250330'
data_files:
- path: 20250330/miner_evaluations.parquet
split: '20250330'
- config_name: '20250331'
data_files:
- path: 20250331/miner_evaluations.parquet
split: '20250331'
- config_name: '20250401'
data_files:
- path: 20250401/miner_evaluations.parquet
split: '20250401'
- config_name: '20250402'
data_files:
- path: 20250402/miner_evaluations.parquet
split: '20250402'
- config_name: '20250403'
data_files:
- path: 20250403/miner_evaluations.parquet
split: '20250403'
- config_name: '20250404'
data_files:
- path: 20250404/miner_evaluations.parquet
split: '20250404'
- config_name: '20250405'
data_files:
- path: 20250405/miner_evaluations.parquet
split: '20250405'
- config_name: '20250406'
data_files:
- path: 20250406/miner_evaluations.parquet
split: '20250406'
- config_name: '20250407'
data_files:
- path: 20250407/miner_evaluations.parquet
split: '20250407'
- config_name: '20250408'
data_files:
- path: 20250408/miner_evaluations.parquet
split: '20250408'
- config_name: '20250409'
data_files:
- path: 20250409/miner_evaluations.parquet
split: '20250409'
- config_name: '20250410'
data_files:
- path: 20250410/miner_evaluations.parquet
split: '20250410'
- config_name: '20250411'
data_files:
- path: 20250411/miner_evaluations.parquet
split: '20250411'
- config_name: '20250412'
data_files:
- path: 20250412/miner_evaluations.parquet
split: '20250412'
- config_name: '20250413'
data_files:
- path: 20250413/miner_evaluations.parquet
split: '20250413'
- config_name: '20250414'
data_files:
- path: 20250414/miner_evaluations.parquet
split: '20250414'
- config_name: '20250415'
data_files:
- path: 20250415/miner_evaluations.parquet
split: '20250415'
- config_name: '20250416'
data_files:
- path: 20250416/miner_evaluations.parquet
split: '20250416'
- config_name: '20250417'
data_files:
- path: 20250417/miner_evaluations.parquet
split: '20250417'
last_updated: '20250417'
license: mit
--- |
hirundo-io/bbq-physical-unbias-multi-choice | hirundo-io | "2025-04-17T07:01:41" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-15T16:31:20" | ---
dataset_info:
features:
- name: question
dtype: string
- name: correct_answer
dtype: string
- name: incorrect_answers
sequence: string
splits:
- name: train
num_bytes: 349746
num_examples: 788
download_size: 38951
dataset_size: 349746
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chenggong1995/NuminaMath-TIR-100 | chenggong1995 | "2025-04-17T07:01:37" | 73 | 0 | [
"region:us"
] | null | "2025-02-28T02:43:17" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 460058
num_examples: 100
- name: test
num_bytes: 461331
num_examples: 99
download_size: 432654
dataset_size: 921389
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
WitchesSocialStream/AozoraDivr | WitchesSocialStream | "2025-04-17T07:01:27" | 2,524 | 4 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-08-18T10:22:11" | ---
license: cc-by-4.0
viewer: false
---

## Data Formats
We present the data as-is, with minimal enrichment.
- Any cryptographic CIDs are stripped as they do not possess any useful textual data.
### Changelog
- 21/11/24:
- Fixed Videos being uploaded as `null`
- Code is more robust. Should be less prone to dropouts.
- Did some code refactoring...
- ...and subsequently broke some MiracleSpec messages...
- ...But it has been fixed.
- 25/11/24:
- Fixed: Follow and block actions didn't have `chg` values associated previously, making it hard to determine if the user followed or unfollowed. This has been fixed.
- 27/11/24:
- Started to ignore certain miracle road spec data. A list is shown below with the reasoning.
- We reject external "Link" data / anything not tied to bluesky.
- 13/12/24:
- ~~No changes but just a word of caution: **There might be leaked keys.** I haven't been acting on them based on the basis of "If you post it, you better fix it." policy.~~
- As a countermeasure against future occurrences, I've blocked the HF Forums bridgy bot. Future bridgy bots may be blocked as well.
- 07/02/25:
- New cover image.
- Cleaned up front header bits.
### Blocks
Ignored `$type` / `Miracle Roads`:
- `jp.5leaf.sync.mastodon` (Reason: Sync from mastodon.)
Ignored Users:
- `did:plc:pgryn3ephfd2xgft23qokfzt` (Reason: Bridgy bot for HF Forums to bluesky. People keep accidentally leaking the HF tokens.)
### Streams
The firehose is split into ~~2~~ 3 jsonl files for your usage:
- `..._atproto_interactions.jsonl`: Contains interaction events, such as likes, follows, reposts and blocks
- `..._atproto_general.jsonl`: Contains posts and replies. Used to contain accounts & identities but they have been moved to `_accounts.jsonl`
- `..._atproto_accounts.jsonl`: Accounts & identities.
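For example, here is a minimal sketch of iterating over one day's dump and dispatching by `typ`; the folder name and file prefixes are assumptions, only the `_atproto_*.jsonl` suffixes above come from the naming scheme:
```python
import json
from pathlib import Path

def iter_records(day_dir: str):
    """Yield parsed records from each of the day's three jsonl streams."""
    for path in sorted(Path(day_dir).glob("*_atproto_*.jsonl")):
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:
                    yield json.loads(line)

# Print a preview of every post/reply in one (hypothetical) day's dump
for rec in iter_records("20250417"):
    if rec.get("typ") in ("post", "reply") and rec.get("pst"):
        print(rec["usr"]["did"], rec["pst"]["txt"][:80])
```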
### Common Fields
- `typ`: Represents the data **typ**e.
- `usr`: Which **Us**e**r** is it from. Represented in the `Usernames` format below.
- `rid`: Record Key. Use this to access data from bsky api.
The most basic root construct will typically contain:
```json
{
"typ": "<Type>",
"usr": {
"did": "did:plc:ssd5xwqvrdrxyu2br7sfjwzy",
},
}
```
Usernames are constructed in the following format:
```json
{
"did": "did:plc:4hqjfn7m6n5hno3doamuhgef",
}
```
`did`: `Decentralized ID`. Consider this as `ID` for most cases and it points to a unique ID.
`nms`: **[DEPRECATED!]** `Usernames`. This can be either a string or a list of strings; do not blindly assume it will only be a string! Generally, though, it should just be a string.
- **`nms`** will not be provided in future firehose archives. Turns out PLC directory didn't like me.
For most cases, expect the did to describe a user.
### Blobs
Blobs represent media content. Typically you can tell it's a blob if it has a `mime` field and a `cid`.
```json
{
"mime": "image/jpeg",
"size": 891965,
"cid": "bafkreifu35fvx45eyldhpoyb3zgtb5dobvjfpw5kkeexwxefrfpzye2pji"
}
```
Given the user account is this:
```json
{
"typ": "account",
"usr": {
"did": "did:plc:lri5xcv6ogaldxkigm32wa57",
"avy": {
"mime": "image/jpeg",
"size": 226086,
"cid": "bafkreif3z2y2rfrfcjt4rwwps4ib7q7qywrdt76bw6dmj5ebqefgllpima"
},
"bnr": null,
"crt": 1723726663.57,
"dsc": "――あなたの日常に、AIの籠った音色を。\n\n▼思い出や日常、希望をお聞かせください。その想いを曲にいたします。\nhttps://forms.gle/rF2iqwXxabfVEifd7",
"dsp": "雪白ゆっち feat.AI Creator"
}
}
```
Construct the avy url like so:
Template: `https://bsky.social/xrpc/com.atproto.sync.getBlob?did=<usr.did>&cid=<usr.avy.cid>`
A full link looks like this: `https://bsky.social/xrpc/com.atproto.sync.getBlob?did=did:plc:lri5xcv6ogaldxkigm32wa57&cid=bafkreif3z2y2rfrfcjt4rwwps4ib7q7qywrdt76bw6dmj5ebqefgllpima`
Yes, I did spend a while trying to figure out why it was not working.
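In code, the construction above might look like this (a small sketch; the field names follow this dataset's schema):
```python
def blob_url(did: str, blob: dict) -> str:
    """Build a com.atproto.sync.getBlob URL from a did and a Blob dict."""
    return (
        "https://bsky.social/xrpc/com.atproto.sync.getBlob"
        f"?did={did}&cid={blob['cid']}"
    )

# The avatar from the account example above:
usr = {
    "did": "did:plc:lri5xcv6ogaldxkigm32wa57",
    "avy": {
        "mime": "image/jpeg",
        "size": 226086,
        "cid": "bafkreif3z2y2rfrfcjt4rwwps4ib7q7qywrdt76bw6dmj5ebqefgllpima",
    },
}
print(blob_url(usr["did"], usr["avy"]))
```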
### Posts (Simple)
Posts can get rather complicated. Here's a sample of a simple post.
```json
{
"typ": "post",
"usr": {
"did": "did:plc:ssd5xwqvrdrxyu2br7sfjwzy",
},
"rid": "3kzyon77od52v",
"chg": "create",
"tst": 1723987630.494,
"pst": {
"txt": "✔✔✔On Aug 18, 2024, 11:59 AM(UTC). According to Binance Market Data, Bitcoin has crossed the 60,000 USDT benchmark and is now trading at 60,006.578125 USDT, with a narrowed 1.49% increase in 24 hours.👀👀",
"emb": null,
"fct": [],
"lbl": [],
"lng": [],
"tgs": [],
"rpl": null
}
}
```
- `tst`: Contains the timestamp in unix float time.
- `chg`: Change type. Typically either `create` or `delete` for posts. `change` for allowing Direct Messages.
- `rid`: Record Key. Use this to access data from bsky api.
- `pst`: Contains the actual posted data.
### Posts (Complex)
As for replies and other fields, here's a more complex example.
```json
{
"typ": "reply",
"usr": {
"did": "did:plc:4hqjfn7m6n5hno3doamuhgef",
"nms": "yui.syui.ai"
},
"rid": "3kzyotm2hzq2d",
"chg": "create",
"tst": 1723987844.937,
"pst": {
"txt": "https://card.syui.ai/baiser \nbaiser\njoin : baiser.blue [IIT]\nten : 1000\naiten : 21037247\n---\n[1-7]\nten d : shuffle[IIT☑]\nten p : post\n---\n",
"emb": null,
"fct": [
{
"typ": "@",
"val": "https://card.syui.ai/baiser",
"rng": [
0,
27
]
}
],
"lbl": [],
"lng": [],
"tgs": [],
"rpl": {
"typ": "post",
"usr": {
"did": "did:plc:vok247eewjmbmo3kxaizct2i",
"nms": "baiser.blue"
},
"rid": "3kzyotbooo22c",
"rrt": {
"typ": "post",
"usr": {
"did": "did:plc:vok247eewjmbmo3kxaizct2i",
"nms": "baiser.blue"
},
"rid": "3kzyosf6atg2v"
}
}
}
}
```
- `fct`: Stands for Facets:
- `typ`: The facet type. (`tag`,`link`,`mention`)
- `val`: The facet value. Note that this can be a `Username` dict when `typ` == `mention`
- `rng`: Byte range. AFAIK this indexes into the UTF-8 byte encoding of the text, but I can be wrong. Follow atproto's docs for this; see the sketch after this list.
- `lbl`: Labels. A list of strings, though typically an empty list for firehose streams. Labels are sent separately, firehose stream-wise.
- `lng`: Languages. Either an list (Can be empty) or a string.
- `tgs`: "Additional hashtags, in addition to any included in post text and facets."
- `rpl`: The post that the current post is replying to.
- *Note:* The reply post is not enriched with the actual post.
- `typ`/`usr`/`rid`: [Refer to the simple posts section.](#posts-simple)
- `rrt`: Root post. Can be `null` if root post is the same as the `rpl` post `rid`.
- `emb`: Any rich embed.
- Embed primarily has around 5 types
1. Images
- A list of images.
- Each image contains: `img` (BlobRef), `alt` (Alt Text), `isz` (Size)
3. Video
- A Video
- Contains the following fields: `vid`, `alt` (Alt Text), `isz` (Size), `cpt` (Captions, Dictionary with of key for languages and a BlobRef for value)
4. External (Outside bluesky)
- Typically webpages and the like
5. w/ Record (A post that has a link to another person)
6. Same as 5 but with Images.
- TL;DR: Embeds are complicated.
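As referenced in the `rng` bullet above, here is a sketch of slicing facet values out of a post. It assumes the ranges index into the UTF-8 byte encoding of `txt`, so verify against atproto's docs before relying on it:
```python
def facet_slices(pst: dict):
    """Yield (facet_type, substring) pairs cut from the post text by byte range."""
    raw = pst["txt"].encode("utf-8")  # assumption: ranges are UTF-8 byte offsets
    for fct in pst.get("fct", []):
        start, end = fct["rng"]
        yield fct["typ"], raw[start:end].decode("utf-8", errors="replace")

# Using the complex post example above:
pst = {
    "txt": "https://card.syui.ai/baiser \nbaiser\njoin : baiser.blue [IIT]",
    "fct": [{"typ": "@", "val": "https://card.syui.ai/baiser", "rng": [0, 27]}],
}
for typ, text in facet_slices(pst):
    print(typ, text)  # @ https://card.syui.ai/baiser
```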
### Accounts
```json
{
"typ": "account",
"usr": {
"did": "did:plc:cj3ngde5wbljf5sh33g7zsdz",
"avy": {
"mime": "image/jpeg",
"size": 79776,
"cid": "bafkreiczz2spptgturm43r33impbkcar4tmdmnh34pqkp2tynlztbxmw7a"
},
"bnr": {
"mime": "image/jpeg",
"size": 748930,
"cid": "bafkreigb5l3u32quxzhpbca6bnrunfdau3m4bp6fdntmj2lwec3erkssty"
},
"crt": null,
"dsc": "こっちでは、主に練習中の下手なイラスト・ゲーム関系とかを投稿していきたいな〜\n\n最推しのねくろさんの配信を見るといやされる( ◠‿◠ )",
"dsp": "しろっつ🖤🐐👑"
}
}
```
For Accounts, the `usr` field is more filled. In addition to `did`, there are other fields like:
- `avy`/`bnr`: either a `Blob` or null. Refer to [Blobs](#blobs) section above.
- `crt`: Account Creation time. Can be null!
- `dsc`: Profile Bio / Blurb Section.
- `dsp`: Display name.
### Reconstructing to an AtUri
For `post` and `reply` types, take the following values and combine them into the following URL:
`at://<usr.did>/app.bsky.feed.post/<rid>`
Replies are just posts.
For `repost` and `like` types, it's similar but a bit different:
- Reposts: `at://<usr.did>/app.bsky.feed.repost/<rid>`
- likes: `at://<usr.did>/app.bsky.feed.like/<rid>`
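A sketch of that mapping in code, using the record fields described above:
```python
COLLECTIONS = {
    "post": "app.bsky.feed.post",
    "reply": "app.bsky.feed.post",   # replies are just posts
    "repost": "app.bsky.feed.repost",
    "like": "app.bsky.feed.like",
}

def at_uri(rec: dict) -> str:
    """Rebuild the at:// URI for a post/reply/repost/like record."""
    return f"at://{rec['usr']['did']}/{COLLECTIONS[rec['typ']]}/{rec['rid']}"
```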
### Enrichment of replies
```
curl -L -X GET 'https://public.api.bsky.app/xrpc/app.bsky.feed.getPosts?uris=at://did:plc:4hqjfn7m6n5hno3doamuhgef/app.bsky.feed.post/3kzyotm2hzq2d' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <TOKEN>'
```
### "Miracle Spec"
Recently, some creative folks have started adding their own data to the atproto stream. Some notable examples I saw are:
- `com.whtwnd.blog.entry` (https://whtwnd.com/about)
- `space.aoisora.bookmark` (https://bsky.app/profile/mimonelu.net/post/3l4ta2mdqwe2s)
As of 01/10/24, I've added support for those. They are labeled as "MiracleRoad!" for `typ` and only contain the raw record data.
### Illegal Spec Followers
In other words, we also capture content that failed to follow specs. Like this:
```json
{
"typ": "IllegalSpecFollowerAkaFixYourShit",
"record": {
"text": "任某(男,31歲),被行拘! ",
"$type": "app.bsky.feed.post",
"embed": {
"uri": "https://www.headline01.com/a/Xio3zSUuGvX7J1jCSG_F5g-51479340.html",
"$type": "app.bsky.embed.external#main",
"external": {
"uri": "https://www.headline01.com/a/Xio3zSUuGvX7J1jCSG_F5g-51479340.html",
"thumb": {
"ref": "bafkreidrfrfluqo26yy4pemkcpgug2p5sea3xrwh3schfnns5owa7gbwvm",
"size": 86924,
"$type": "blob",
"mimeType": "image/jpeg"
},
"title": "任某(男,31歲),被行拘!",
"description": ""
}
},
"createdAt": "2024-08-18T14:05:19.645644Z"
}
}
```
Lines marked as `IllegalSpecFollowerAkaFixYourShit` should be ignored in general though. Content isn't great anyway.
## Changes
**[01/09/24]**
Removed mentions of `nms`. We stopped resolving DIDs after 01/09/24 as it appears that I'm slamming PLC directory too much lol. Sorry!
**[04/09/24]**
Fixed video embeds, as they started to crash the scraper, resulting in some missing data.
## Various Notes
### Recommendations
For getting a more proper stream of posts, it's recommended to keep track of users + posts in an index cache.
Then again, you can just fetch a list from bsky api directly lol.
Do consider reading up on bsky docs and atproto docs.
### Docs Nonsense
When the bluesky docs say: "...Implemented by PDS".
You should probably use the following base url: `https://bsky.social/xrpc/`
### Deletions
UnActions ("unpost","unlike","unrepost") only contain `rid` as the record key.
### License
For everyone out there, data is meant to be free, unlike some previous licenses I used. This one is free for grabs, aka `CC-BY-4.0`.
For Big Corps wanting to use it: Sure. As long as you cite this dataset + the `CC-BY-4.0` license. Be nice to people who came before you and did the work.
### Citations
We would much love academia to cite this dataset. Be nice please `:)`
```tex
@misc{bskyaozora,
title = {Aozora Diving: diving into the sea of atproto and bluesky network },
author = {KaraKaraWitch},
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/WitchesSocialStream/bluesky-Aozora-Diving}},
}
``` |
hirundo-io/bbq-gender-unbias-multi-choice | hirundo-io | "2025-04-17T07:01:02" | 5 | 0 | [
"region:us"
] | null | "2025-04-15T16:33:19" | ---
dataset_info:
features:
- name: question
dtype: string
- name: correct_answer
dtype: string
- name: incorrect_answers
sequence: string
splits:
- name: train
num_bytes: 950618
num_examples: 2836
download_size: 101614
dataset_size: 950618
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
trungnam299/reddit_dataset_246 | trungnam299 | "2025-04-17T07:00:55" | 908 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-03-17T02:01:36" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** trungnam299/reddit_dataset_246
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5Gy7jZ36YgkpmbB9jDza41Uk7VQzsa4JkABC82FZJfmw2HnH
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
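For example, a hedged sketch of a time-based split with the `datasets` library; the split name and cutoff are assumptions, and it relies on the `datetime` strings sorting lexicographically (ISO format):
```python
from datasets import load_dataset

ds = load_dataset("trungnam299/reddit_dataset_246", split="train")

cutoff = "2025-04-01"  # arbitrary example cutoff
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train), len(test))
```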
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{trungnam2992025datauniversereddit_dataset_246,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={trungnam299},
year={2025},
url={https://huggingface.co/datasets/trungnam299/reddit_dataset_246},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 25321315
- **Date Range:** 2009-12-10T00:00:00Z to 2025-04-17T00:00:00Z
- **Last Updated:** 2025-04-17T07:00:48Z
### Data Distribution
- Posts: 23.74%
- Comments: 76.26%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/wallstreetbets | 446861 | 1.76% |
| 2 | r/politics | 205578 | 0.81% |
| 3 | r/worldnews | 129700 | 0.51% |
| 4 | r/Bitcoin | 118258 | 0.47% |
| 5 | r/CryptoCurrency | 111387 | 0.44% |
| 6 | r/canada | 59751 | 0.24% |
| 7 | r/nba | 58779 | 0.23% |
| 8 | r/nfl | 51694 | 0.20% |
| 9 | r/soccer | 45527 | 0.18% |
| 10 | r/CryptoMarkets | 45419 | 0.18% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-03-24T08:39:31Z | 19401603 | 19401603 |
| 2025-03-25T03:36:40Z | 200415 | 19602018 |
| 2025-03-25T22:36:36Z | 210775 | 19812793 |
| 2025-03-26T17:47:42Z | 196898 | 20009691 |
| 2025-03-27T12:37:28Z | 226566 | 20236257 |
| 2025-03-28T07:22:45Z | 213614 | 20449871 |
| 2025-03-29T01:48:51Z | 178712 | 20628583 |
| 2025-03-29T20:20:04Z | 158499 | 20787082 |
| 2025-03-30T15:34:42Z | 205782 | 20992864 |
| 2025-03-30T17:18:13Z | 2949 | 20995813 |
| 2025-03-31T10:47:11Z | 215551 | 21211364 |
| 2025-04-01T05:22:24Z | 173900 | 21385264 |
| 2025-04-02T00:01:11Z | 137365 | 21522629 |
| 2025-04-02T19:40:38Z | 168851 | 21691480 |
| 2025-04-03T14:43:36Z | 212914 | 21904394 |
| 2025-04-04T07:52:11Z | 231441 | 22135835 |
| 2025-04-05T03:29:16Z | 176492 | 22312327 |
| 2025-04-05T22:12:36Z | 155027 | 22467354 |
| 2025-04-06T15:22:53Z | 175069 | 22642423 |
| 2025-04-07T08:26:16Z | 169236 | 22811659 |
| 2025-04-08T02:09:18Z | 165144 | 22976803 |
| 2025-04-08T19:56:31Z | 159196 | 23135999 |
| 2025-04-09T13:05:47Z | 185870 | 23321869 |
| 2025-04-10T08:15:49Z | 187322 | 23509191 |
| 2025-04-11T03:30:11Z | 180883 | 23690074 |
| 2025-04-11T22:35:46Z | 176001 | 23866075 |
| 2025-04-12T17:41:43Z | 204551 | 24070626 |
| 2025-04-13T13:05:30Z | 245043 | 24315669 |
| 2025-04-14T06:21:19Z | 216455 | 24532124 |
| 2025-04-14T23:40:13Z | 153627 | 24685751 |
| 2025-04-15T20:16:26Z | 153725 | 24839476 |
| 2025-04-16T13:54:32Z | 258882 | 25098358 |
| 2025-04-17T07:00:48Z | 222957 | 25321315 |
|
hirundo-io/bbq-gender-bias-multi-choice | hirundo-io | "2025-04-17T07:00:55" | 7 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-15T16:33:06" | ---
dataset_info:
features:
- name: question
dtype: string
- name: correct_answer
dtype: string
- name: incorrect_answers
sequence: string
splits:
- name: train
num_bytes: 503566
num_examples: 2836
download_size: 56172
dataset_size: 503566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hirundo-io/bbq-physical-bias-multi-choice | hirundo-io | "2025-04-17T07:00:47" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-15T16:32:40" | ---
dataset_info:
features:
- name: question
dtype: string
- name: correct_answer
dtype: string
- name: incorrect_answers
sequence: string
splits:
- name: train
num_bytes: 199004
num_examples: 788
download_size: 21806
dataset_size: 199004
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yobro4619/ChartQA_processed | yobro4619 | "2025-04-17T07:00:22" | 37 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-15T19:03:57" | ---
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: ground_truth
sequence: string
- name: code_descriptions
dtype: string
splits:
- name: test
num_bytes: 61116234.83
num_examples: 1509
download_size: 59783725
dataset_size: 61116234.83
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
YADHU1234/nllb_4M_mono | YADHU1234 | "2025-04-17T06:59:40" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:59:07" | ---
dataset_info:
features:
- name: corrupted
dtype: string
- name: original
dtype: string
splits:
- name: train
num_bytes: 1122911076
num_examples: 3866688
download_size: 517649874
dataset_size: 1122911076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.4_num-company_2_dataset_1_for_gen_8 | HungVu2003 | "2025-04-17T06:58:55" | 11 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-13T22:15:34" | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4274743
num_examples: 10000
download_size: 2187628
dataset_size: 4274743
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.4_num-company_2_dataset_0_for_gen_8 | HungVu2003 | "2025-04-17T06:56:54" | 8 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-13T22:12:43" | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2297557
num_examples: 10000
download_size: 1237510
dataset_size: 2297557
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/tinystories_20M_tokens | kothasuhas | "2025-04-17T06:56:31" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:56:20" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 67765149.85337244
num_examples: 75000
- name: validation
num_bytes: 903535.3313782992
num_examples: 1000
download_size: 36343497
dataset_size: 68668685.18475074
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
kothasuhas/tinystories_320M_tokens | kothasuhas | "2025-04-17T06:56:29" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:56:07" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 85384088.81524926
num_examples: 94500
- name: validation
num_bytes: 903535.3313782992
num_examples: 1000
download_size: 45678667
dataset_size: 86287624.14662756
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
kadirnar/Emilia-All-Ja-Orpheus | kadirnar | "2025-04-17T06:55:10" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-16T22:54:23" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: labels
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 10942545178
num_examples: 1323823
download_size: 3504047173
dataset_size: 10942545178
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
YADHU1234/nllb_mono | YADHU1234 | "2025-04-17T06:53:24" | 0 | 0 | [
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:53:04" | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Hkang/summarize_sft-test_lm-EleutherAI_pythia-1b_seed-42_numex-250_lr3e8_14K-BON_64 | Hkang | "2025-04-17T06:53:24" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:53:07" | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_input_ids
sequence: int64
- name: query_attention_mask
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_input_ids
sequence: int64
- name: reference_response_attention_mask
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_input_ids
sequence: int64
- name: query_reference_response_attention_mask
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: model_response
dtype: string
splits:
- name: test
num_bytes: 6851755
num_examples: 250
download_size: 1149918
dataset_size: 6851755
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
omegalabsinc/omega-multimodal | omegalabsinc | "2025-04-17T10:00:54" | 193,748 | 52 | [
"task_categories:video-text-to-text",
"task_categories:video-classification",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:image-to-video",
"task_categories:image-feature-extraction",
"task_categories:visual-question-answering",
"task_categories:audio-classification",
"task_categories:audio-to-audio",
"task_categories:text-to-audio",
"task_categories:text-to-image",
"task_categories:text-to-speech",
"task_categories:text-to-video",
"license:mit",
"modality:video",
"region:us",
"multimodal",
"AGI",
"video",
"anytoany"
] | [
"video-text-to-text",
"video-classification",
"image-classification",
"image-to-text",
"image-to-video",
"image-feature-extraction",
"visual-question-answering",
"audio-classification",
"audio-to-audio",
"text-to-audio",
"text-to-image",
"text-to-speech",
"text-to-video"
] | "2024-03-07T01:35:38" | ---
license: mit
task_categories:
- video-text-to-text
- video-classification
- image-classification
- image-to-text
- image-to-video
- image-feature-extraction
- visual-question-answering
- audio-classification
- audio-to-audio
- text-to-audio
- text-to-image
- text-to-speech
- text-to-video
tags:
- multimodal
- AGI
- video
- anytoany
---
# OMEGA Labs Bittensor Subnet: Multimodal Dataset for AGI Research
[](https://omegatron.ai)
## Introduction
The OMEGA Labs Bittensor Subnet Dataset is a groundbreaking resource for accelerating Artificial General Intelligence (AGI) research and development. This dataset, powered by the Bittensor decentralized network, aims to be the world's largest multimodal dataset, capturing the vast landscape of human knowledge and creation.
With over 1 million hours of footage and 30 million+ 2-minute video clips, the OMEGA Labs dataset will offer unparalleled scale and diversity, covering 50+ scenarios and 15,000+ action phrases. By leveraging state-of-the-art models to translate video components into a unified latent space, this dataset enables the development of powerful AGI models and has the potential to transform various industries.
## Key Features
- 🌍 **Constant Stream of Fresh Data**: The OMEGA dataset is constantly updated with new entries scraped by miners on Bittensor's decentralized AI network. We estimate that within a few weeks, we can get to 5M+ new videos added daily.
- 📈 **Rich Data**: In addition to scale, we are focused on scraping relevant, high quality data. Using [ImageBind](https://imagebind.metademolab.com/demo) embeddings of the submitted videos and corresponding captions, miners are rewarded based on three factors:
- **Diversity**: The further away each new datapoint is from existing datapoints (judged by embedding cosine similarity), the higher the reward
- **Richness**: The more detailed the caption (judged by cosine similarity between video and submitted caption), the higher the reward
- **Relevance**: Miners are asked to scrape data pertaining to handpicked categories, pertinent for building video understanding and training world models.
- 🧠 **Latent Representations**: ImageBind embeddings for the video, audio, and caption are pre-computed
- 🤖 **Empowering Digital Agents**: Enables the development of intelligent agents that can navigate complex workflows and assist users across platforms.
- 📊 **Flexible Metadata**: Filter the dataset to find clips relevant to topics you would like to train on or filter by your desired cosine similarities
## Dataset Structure
The OMEGA Labs Bittensor Subnet Dataset consists of the following columns:
- `video_id`: Unique identifier for each video clip.
- `youtube_id`: The original YouTube video ID.
- `description`: Description of the video content.
- `views`: Number of views the original YouTube video has received.
- `start_time`: Start time of the video clip within the original video.
- `end_time`: End time of the video clip within the original video.
- `video_embed`: Latent representation of the video content.
- `audio_embed`: Latent representation of the audio content.
- `description_embed`: Latent representation of the video description.
- `description_relevance_score`: Relevance score of the video description to the content.
- `query_relevance_score`: Relevance score of the video to the search query.
- `query`: The search query used to retrieve the video.
- `submitted_at`: Timestamp of when the video was added to the dataset.
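For example, a minimal sketch of streaming the dataset and filtering on the precomputed relevance scores (the split name and threshold are assumptions):
```python
from datasets import load_dataset

ds = load_dataset("omegalabsinc/omega-multimodal", split="train", streaming=True)

for row in ds:
    if row["description_relevance_score"] > 0.3:
        print(row["video_id"], row["query"], row["description_relevance_score"])
        break
```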
## Applications
The OMEGA Labs Bittensor Subnet Dataset empowers researchers and developers to push the boundaries of AGI by providing a vast and diverse resource for training and testing multimodal models. Some potential applications include:
- **Unified Representation Learning**: Train powerful models that can learn unified representations across modalities.
- **Any-to-Any Models**: Develop models capable of translating between different modalities, such as generating videos from text descriptions or vice versa.
- **Digital Agents**: Create intelligent agents that can navigate complex workflows and assist users across platforms.
- **Immersive Gaming**: Build realistic gaming environments with rich physics and interactions.
- **Video Understanding**: Advance the state-of-the-art in video processing tasks such as transcription, motion analysis, object detection, and emotion recognition.
## Say hi!
If you're interested in getting in touch, reach out to us on [Twitter](https://twitter.com/omegalabsai)!
You can also visit our [Github](https://github.com/omegalabsinc/omegalabs-bittensor-subnet/tree/main) to learn more about how our scraping is done!
And if you'd like to learn more about Bittensor, join the [Discord](https://discord.gg/6yZpQ9KV)! |
anhhhhhhhhhhhhhh/speech_podcast | anhhhhhhhhhhhhhh | "2025-04-17T10:00:50" | 39 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-11T01:29:04" | ---
license: apache-2.0
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.4_num-company_2_dataset_0_for_gen_12 | HungVu2003 | "2025-04-17T10:00:27" | 7 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-14T01:18:10" | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2546266
num_examples: 10000
download_size: 1309204
dataset_size: 2546266
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BornSaint/orpo-dpo-mix-40k_portuguese | BornSaint | "2025-04-17T10:00:09" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T08:23:48" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 48877
num_examples: 10
download_size: 28525
dataset_size: 48877
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davanstrien/dataset_cards_with_metadata | davanstrien | "2025-04-17T09:59:53" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T09:48:47" | ---
dataset_info:
features:
- name: datasetId
dtype: large_string
- name: author
dtype: large_string
- name: last_modified
dtype: timestamp[us, tz=UTC]
- name: downloads
dtype: int64
- name: likes
dtype: int64
- name: tags
large_list: large_string
- name: task_categories
large_list: large_string
- name: createdAt
dtype: timestamp[us, tz=UTC]
- name: card
dtype: large_string
splits:
- name: train
num_bytes: 275573
num_examples: 119
download_size: 90540
dataset_size: 275573
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tomap1410/FullyStockManagement | tomap1410 | "2025-04-17T09:59:37" | 0 | 0 | [
"region:us"
] | null | "2025-04-17T06:44:56" | ---
dataset_info:
features:
- name: task
dtype: string
- name: goals
dtype: int64
- name: description
dtype: string
- name: complete
dtype: string
- name: store_place
dtype: string
- name: email_working
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1092
num_examples: 11
download_size: 3268
dataset_size: 1092
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gunnybd01/Fully40000_60000 | gunnybd01 | "2025-04-17T09:59:34" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-16T20:34:47" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 2582496
num_examples: 2150
download_size: 898975
dataset_size: 2582496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tomap1410/VolumeStockManagement | tomap1410 | "2025-04-17T09:59:17" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T07:46:08" | ---
dataset_info:
features:
- name: task
dtype: string
- name: goals
dtype: int64
- name: description
dtype: string
- name: complete
dtype: string
- name: store_place
dtype: string
- name: email_working
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 631
num_examples: 6
download_size: 3266
dataset_size: 631
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ngtranAI1/Volume90000_100000 | ngtranAI1 | "2025-04-17T09:59:13" | 0 | 0 | [
"region:us"
] | null | "2025-04-17T04:19:13" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 1154106
num_examples: 1000
download_size: 434407
dataset_size: 1154106
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gunnybd01/Fully60000_80000 | gunnybd01 | "2025-04-17T09:59:00" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-16T20:39:24" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 2478356
num_examples: 2050
download_size: 862940
dataset_size: 2478356
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HueyWoo/turtlesim_agent_dataset4 | HueyWoo | "2025-04-17T09:58:59" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T09:23:06" | ---
dataset_info:
features:
- name: tools
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: name
dtype: string
splits:
- name: train
num_bytes: 8544
num_examples: 10
download_size: 5595
dataset_size: 8544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ngtranAI1/Volume120000_132433 | ngtranAI1 | "2025-04-17T09:58:55" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T04:20:38" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 1039873
num_examples: 900
download_size: 398426
dataset_size: 1039873
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gunnybd01/Fully80000_100000 | gunnybd01 | "2025-04-17T09:58:30" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T05:53:50" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 866051
num_examples: 1000
download_size: 304170
dataset_size: 866051
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tomap1410/TrendStockManagement | tomap1410 | "2025-04-17T09:58:28" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T06:40:42" | ---
dataset_info:
features:
- name: task
dtype: string
- name: goals
dtype: int64
- name: description
dtype: string
- name: complete
dtype: string
- name: store_place
dtype: string
- name: email_working
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 698
num_examples: 7
download_size: 3215
dataset_size: 698
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyentn1410/Trend110000_120000 | nguyentn1410 | "2025-04-17T09:58:26" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T05:45:29" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 310050
num_examples: 300
download_size: 116222
dataset_size: 310050
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Arururu12/UR5e_Gello_Clean_up_the_cups | Arururu12 | "2025-04-17T09:58:17" | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2025-04-17T09:57:53" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "ur5e_gello",
"total_episodes": 9,
"total_frames": 3653,
"total_tasks": 1,
"total_videos": 18,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:9"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_motor_0",
"main_motor_1",
"main_motor_2",
"main_motor_3",
"main_motor_4",
"main_motor_5",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_motor_0",
"main_motor_1",
"main_motor_2",
"main_motor_3",
"main_motor_4",
"main_motor_5",
"main_gripper"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 10.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 10.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
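Since the episode data is plain parquet exposed through the `data_files` glob above, one hedged way to peek at it is via the `datasets` library (the split name is an assumption; videos are stored separately and are not decoded this way):
```python
from datasets import load_dataset

ds = load_dataset("Arururu12/UR5e_Gello_Clean_up_the_cups", split="train")
frame = ds[0]  # one timestep of one episode
print(frame["episode_index"], frame["frame_index"], frame["action"])
```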
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
huylaughmad/chatbot-data | huylaughmad | "2025-04-17T09:58:04" | 196 | 0 | [
"language:vi",
"license:cc-by-4.0",
"size_categories:n<1K",
"region:us",
"chatbot",
"dental-services",
"vietnamese"
] | null | "2025-04-16T01:13:32" | ---
license: cc-by-4.0
language:
- vi
tags:
- chatbot
- dental-services
- vietnamese
pretty_name: Chatbot Data
size_categories:
- n<1K
---
# Chatbot Data

This dataset contains structured information about dental clinic services, designed for use in chatbot applications. It includes details about the clinic, pricing for adult and child services, additional consultation information, service processes, questions for consultation, promotions, and synonyms for services and severity levels.

## Dataset Overview

- **Source**: Dental clinic services data.
- **Language**: Vietnamese.
- **Format**: CSV.
- **Size**: 1 file with 193 rows.

## Splits

- `train`: Contains all data (`train.csv`).

## Features

- `category` (string): The main category of the service or information (e.g., `clinic_info`, `adult_services`, `synonyms`).
- `subcategory` (string): Subcategory of the service (e.g., địa chỉ, trám răng).
- `subcategory_level_2` (string): Further subcategory level (e.g., trám răng (composite)).
- `content` (string): Detailed description of the service, pricing, or synonyms (e.g., `- **Địa chỉ**: 160-162 Trần Phú, P. Vĩnh Thanh Vân, Tp. Rạch Giá, Kiên Giang`).
- `is_synonym` (bool): Indicates if the entry is a synonym (`True`) or not (`False`).

## Usage

This dataset can be used to power a chatbot for dental clinic services, providing information on pricing, procedures, and synonyms for user queries. To load the dataset using the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("huylaughmad/chatbot-data", split="train")
print(dataset[0])
```

## Notes

- The dataset is in Vietnamese, with some special characters (e.g., đ). Ensure proper UTF-8 encoding when processing.
- The `is_synonym` column contains boolean values (`True`/`False`). Ensure these are correctly parsed as booleans.
- The dataset is structured hierarchically with categories and subcategories for easy navigation.

## License

This dataset is licensed under CC BY 4.0. It is provided for personal and research use. Please contact the dataset owner for commercial usage permissions. |
intelsense/dolphin-flan5m-en2bn | intelsense | "2025-04-17T09:57:49" | 1,827 | 0 | [
"size_categories:10K<n<100K",
"modality:text",
"region:us"
] | null | "2025-04-05T12:07:32" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction_bn
dtype: string
- name: input_bn
dtype: string
- name: output_bn
dtype: string
splits:
- name: train
num_bytes: 79116007
num_examples: 14690
download_size: 33532533
dataset_size: 79116007
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Heubub/Tunisian-Proverbs-with-Image-Associations-A-Cultural-and-Linguistic-Dataset | Heubub | "2025-04-17T09:57:42" | 114 | 0 | [
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:text-generation",
"language:ar",
"language:en",
"license:cc-by-4.0",
"region:us",
"text-image pairs",
"proverbs",
"culture",
"heritage",
"generative",
"prompt"
] | [
"text2text-generation",
"translation",
"text-generation"
] | "2025-04-13T17:56:22" | ---
license: cc-by-4.0
task_categories:
- text2text-generation
- translation
- text-generation
language:
- ar
- en
tags:
- text-image pairs
- proverbs
- culture
- heritage
- generative
- prompt
size_categories:
- n<1K
dataset_info:
features:
- name: tunisan_proverb
dtype: string
- name: proverb_arabic_explaination
dtype: string
- name: context
dtype: string
- name: caption
dtype: string
- name: caption_formal
dtype: string
- name: dynamic
dtype: string
- name: prompt
dtype: string
- name: image_path_1
dtype: image
- name: image_path_2
dtype: image
- name: image_path_3
dtype: image
- name: image_path_4
dtype: image
- name: clip_scores
dtype: float32
configs:
- config_name: default
data_files:
- dataset.csv
description: >
This configuration contains Tunisian proverbs with corresponding textual
explanations and up to four AI-generated image associations per entry,
covering cultural and linguistic insight.
citation: |
@misc{heubub2025tunisianproverbs,
author = {Abderrahim Habiba & Ouamani Fadoua},
title = {Tunisian Proverbs with Image Associations: A Cultural and Linguistic Dataset},
year = {2025},
howpublished = {\url{https://huggingface.co/datasets/Heubub/Tunisian-Proverbs-with-Image-Associations-A-Cultural-and-Linguistic-Dataset}},
note = {CC-BY 4.0 License}
}
---
<h1>Tunisian Proverbs with Image Associations: A Cultural and Linguistic Dataset </h1>
## Description
This dataset explores the rich oral tradition of Tunisian proverbs in text form, pairing each proverb with a contextual explanation, English translations (both word-for-word and a dynamic target-language equivalent), an automatically generated prompt, and AI-generated visual interpretations.
It bridges linguistic, cultural, and visual modalities, making it valuable for tasks in cross-cultural NLP, generative art, and multimodal learning for low-resource languages such as the Tunisian Arabic dialect.
## Some Selections
<table>
<tr>
<td align="center">
<img src="images/text_image_dataset_000/proverb_418_image_1.png" width="250"/><br/>
<b>ظل راجل ولا ظل حيط</b><br/>
</td>
<td align="center">
<img src="images/text_image_dataset_0000000/proverb_1230_image_0.png" width="250"/><br/>
<b>الملح و الصحبة</b><br/>
</td>
<td align="center">
<img src="images/text_image_dataset_00000/proverb_605_image_0.png" width="250"/><br/>
<b>كل قرده في عين امه غزال</b><br/>
</td>
<td align="center">
<img src="images/text_image_dataset_0/proverb_55_image_0.png" width="250"/><br/>
<b>قلبه أبيض كالحليب</b><br/>
</td>
<td align="center">
<img src="images/text_image_dataset_0/proverb_202_image_1.png" width="250"/><br/>
<b>اضرب القطوسة تتأدب العروسة</b><br/>
</td>
<td align="center">
<img src="images/text_image_dataset_0/proverb_209_image_0.png" width="250"/><br/>
<b>اللي وجهها يفجعها مال بوها ينفعها</b><br/>
</td>
</tr>
</table>
## Objectives
<ul>
<li>Preserve and promote intangible cultural heritage from underrepresented languages.</li>
<li>Create an open-access, FAIR-compliant resource to support Generative AI, NLP, and multimodal ML in low-resource languages like Tunisian Arabic.</li>
<li>Provide a dataset suitable for translation, text and image generation, proverb understanding, visual grounding, and educational tools.</li>
</ul>
<h2>Dataset Structure</h2>
<ul>
<li>A <strong>Tunisian proverb</strong> in dialectal Arabic.</li>
<li>An <strong>explanation</strong> of its meaning.</li>
<li><strong>Contextual tags</strong>.</li>
<li><strong>English translations</strong> in different styles:
<ul>
<li>Informal translation</li>
<li>Formal interpretation</li>
</ul>
</li>
<li>A <strong>text-to-image automated prompt</strong> used to generate 4 unique images.</li>
<li>Four associated <strong>image paths</strong>.</li>
<li>A <strong>CLIP score</strong> indicating relevance of the images to the proverb.</li>
</ul>
## Language & Cultural Focus:
<ul>
<li>Dialect: Tunisian Arabic (Derja or Tounsi)</li>
<li>Language variety: Tunisian Arabic and English</li>
</ul>
## How to Use
To load the dataset in Google Colab, you can use the datasets library from Hugging Face:
```python
from datasets import load_dataset
import numpy as np
from google.colab.patches import cv2_imshow
# Load the dataset
dataset = load_dataset("Heubub/Tunisian-Proverbs-with-Image-Associations-A-Cultural-and-Linguistic-Dataset")
# Get the first sample from the 'train' split
sample = dataset["train"][0]
# Extract the proverb, the prompt, and the images (e.g., the first image)
proverb = sample["tunisan_proverb"]
prompt = sample["prompt"]
image_path_1 = sample["image_path_1"]
print(f"Proverb: {proverb}")
print(f"Prompt: {prompt}")
img_bgr = np.array(image_path_1)[:, :, ::-1]
cv2_imshow(img_bgr)
```
## Citation
If you use this dataset, please cite it as follows:
```bibtex
@misc{heubub2025tunisianproverbs,
  author = {Abderrahim Habiba and Ouamani Fadoua},
  title = {Tunisian Proverbs with Image Associations: A Cultural and Linguistic Dataset},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/Heubub/Tunisian-Proverbs-with-Image-Associations-A-Cultural-and-Linguistic-Dataset}},
  note = {CC-BY 4.0 License}
}
``` |
clarkmaio/Ooops_dataset | clarkmaio | "2025-04-17T09:57:36" | 3,792 | 0 | [
"license:mit",
"region:us"
] | null | "2024-12-30T00:14:36" | ---
license: mit
---
# Ooops dataset
Collection of snapshots obtained from the [Finntraffic API](https://www.digitraffic.fi/en/marine-traffic/).
You can access the data using `polars` or `duckdb`.
```python
import polars as pl
scan = pl.scan_parquet('hf://datasets/clarkmaio/Ooops_dataset/vessels_location/20250101_vessels_location.pq')
data = (scan
.filter(
pl.col('country')=='Russia'
)
.select(['mmsi', 'latitude', 'longitude', 'country', 'timestamp_hourly'])
.collect())
data.head()
```
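The same query with `duckdb`, as a minimal sketch (it assumes your DuckDB build can resolve `hf://` paths; the path and columns mirror the `polars` example above):
```python
import duckdb
# Query the same parquet snapshot directly with SQL
data = duckdb.sql("""
    SELECT mmsi, latitude, longitude, country, timestamp_hourly
    FROM 'hf://datasets/clarkmaio/Ooops_dataset/vessels_location/20250101_vessels_location.pq'
    WHERE country = 'Russia'
""").df()
data.head()
``` |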
Apples96/text8-hackernews-combined | Apples96 | "2025-04-17T09:57:04" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-17T09:57:04" | ---
license: apache-2.0
---
|
intelsense/openhermes-en2bn-messages-2 | intelsense | "2025-04-17T09:56:44" | 2,047 | 0 | [
"region:us"
] | null | "2025-04-09T15:15:41" | ---
dataset_info:
features:
- name: custom_instruction
dtype: 'null'
- name: topic
dtype: 'null'
- name: model_name
dtype: 'null'
- name: model
dtype: 'null'
- name: skip_prompt_formatting
dtype: bool
- name: category
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: 'null'
- name: views
dtype: 'null'
- name: language
dtype: 'null'
- name: id
dtype: string
- name: title
dtype: 'null'
- name: idx
dtype: 'null'
- name: hash
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: system_prompt
dtype: 'null'
- name: source
dtype: string
- name: system_message
dtype: string
- name: human_message
dtype: string
- name: gpt_message
dtype: string
- name: system_message_bn
dtype: string
- name: human_message_bn
dtype: string
- name: gpt_message_bn
dtype: string
splits:
- name: train
num_bytes: 480736454
num_examples: 68970
download_size: 182685777
dataset_size: 480736454
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KakologArchives/KakologArchives | KakologArchives | "2025-04-17T09:56:40" | 5,740,354 | 15 | [
"task_categories:text-classification",
"language:ja",
"license:mit",
"region:us"
] | [
"text-classification"
] | "2023-05-12T13:31:56" | ---
pretty_name: ニコニコ実況 過去ログアーカイブ
license: mit
language:
- ja
task_categories:
- text-classification
---
# Niconico Jikkyo Past Log Archive (ニコニコ実況 過去ログアーカイブ)
The Niconico Jikkyo Past Log Archive is a dataset collecting every past-log comment posted to [Niconico Jikkyo](https://jk.nicovideo.jp) from the start of the service to the present.
In December 2020, Niconico Jikkyo was [relaunched as an official channel within Niconico Live](https://blog.nicovideo.jp/niconews/143148.html).
With this change, the old system, in operation since November 2009, was discontinued (a de facto end of service); support on consumer devices such as torne and BRAVIA ended across the board, and the roughly 11 years of past logs packed with the raw voices of the time were set to be lost with it.
In response, residents of 5ch's DTV board led a plan to archive the 11 years of past logs for all channels before the old Niconico Jikkyo shut down. After many twists and turns, Nekopanda captured complete past logs for every channel, including radio and BS broadcasts, covering about 11 years, so the loss of 11 years of past logs into the electronic sea was averted.
However, since the old API was retired, past logs can no longer be fetched via an API, and because the archive totals about 150 GB, finding the range of logs you want within it is nowhere near as easy as it used to be.
Meanwhile, on the new Niconico Jikkyo, which moved to an official channel within Niconico Live, timeshifts (the equivalent of past logs on the old Niconico Jikkyo) can only be watched for up to three weeks, after which the past logs become unavailable.
Regular members also have to reserve a timeshift in advance, so the old convenience has been lost.
We believe that the comments about Japanese TV broadcasts posted to Niconico Jikkyo are historically valuable material that succinctly reflects the social climate and background of their era.
To preserve all of Niconico Jikkyo's past logs for posterity, this dataset contains all past logs of the old Niconico Jikkyo up to 2020/12/15 as distributed by Nekopanda, plus the new Niconico Jikkyo including community live programs; since 2024/06/10 it has also collected, once every five minutes, the current day's past logs from [NX-Jikkyo](https://nx-jikkyo.tsukumijima.net/), an alternative comment server for live commentary, reflecting them continuously.
There is also an [API](https://jikkyo.tsukumijima.net/) for fetching past logs easily.
Feel free to make use of it as well.
## Dataset Structure
### Builder Config
| Key | Value Type | Default Value | Description |
| --------------- | ---------- | ------------- | ----------- |
| channel_id | string | None | ID of the Niconico Jikkyo channel whose past logs to fetch (all channels if omitted) |
| year | int | None | Year of the past logs to fetch (all years if omitted) |
| number_of_files | int | None | Number of past-log files to fetch (all files if omitted) |
### Data Splits
| Split | Approximate Size | Description |
| ------- | ---------------- | ----------- |
| sample | 1GB | As a sample, fetches all past-log comments posted to TOKYO MX (ID: jk9) during 2022. Roughly 1 GB. |
| all | 190GB | Fetches all past-log comments for every channel and every period. Beware: this exceeds 190 GB. |
### Data Fields
| Field | Type | Description |
| --------------- | -------- | ----------- |
| thread | string | Thread ID of the comment |
| no | int64 | Comment number |
| vpos | int64 | Playback position of the comment, counted from the thread start (in 1/100 s) |
| date | int64 | UNIX timestamp of the comment post time |
| date_usec | int64 | Sub-second part of the comment post time |
| user_id | string | User ID (anonymized when the 184 command is specified, and reshuffled after about a week) |
| mail | string | Comment commands (e.g. 184, red naka big; sometimes omitted) |
| premium | boolean | True if the commenting user is a premium member |
| anonymity | boolean | True if the comment is anonymous |
| content | string | Comment body (beware of rare multi-line comments such as ASCII art) |
## Example
```python
from datasets import load_dataset
dataset = load_dataset('KakologArchives/KakologArchives', 'all', channel_id='jk211', year=2023, number_of_files=10)
for data in dataset['train']:
print(data)
```
## Licensing Information
[MIT License](https://opensource.org/license/mit/)
|
tomap1410/StockMomentum | tomap1410 | "2025-04-17T09:56:18" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T09:04:52" | ---
dataset_info:
features:
- name: task
dtype: string
- name: goals
dtype: int64
- name: description
dtype: string
- name: complete
dtype: string
- name: store_place
dtype: string
- name: email_working
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 698
num_examples: 6
download_size: 3273
dataset_size: 698
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TRANNGUYENAI/StockMomentum70000_90000 | TRANNGUYENAI | "2025-04-17T09:56:14" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T09:56:12" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 79029
num_examples: 50
download_size: 33937
dataset_size: 79029
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
efwkjn/dataset | efwkjn | "2025-04-17T09:56:01" | 3,513 | 0 | [
"region:us"
] | null | "2025-04-12T22:57:56" | ---
viewer: false
---
Processed Whisper training data. Final-pass data mix. |
ishani29/mahakumbh-flan-t5 | ishani29 | "2025-04-17T09:54:55" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T09:54:47" | ---
dataset_info:
features:
- name: Title
dtype: string
- name: Link
dtype: string
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1976918.862275449
num_examples: 851
- name: test
num_bytes: 350781.1377245509
num_examples: 151
download_size: 1087400
dataset_size: 2327700.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
great0001/llama3_0 | great0001 | "2025-04-17T09:54:49" | 1,696 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-01T19:19:42" | ---
dataset_info:
features:
- name: date
dtype: string
- name: data
struct:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 66009238
num_examples: 15816
download_size: 27974668
dataset_size: 66009238
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ishani29/mahakumbh-news-summarization | ishani29 | "2025-04-17T09:54:43" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-16T22:02:00" | ---
dataset_info:
features:
- name: Title
dtype: string
- name: Link
dtype: string
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1976918.862275449
num_examples: 851
- name: test
num_bytes: 350781.1377245509
num_examples: 151
download_size: 1087400
dataset_size: 2327700.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
gunnybd01/Fully100000_120000 | gunnybd01 | "2025-04-17T09:54:35" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T05:38:59" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 827274
num_examples: 950
download_size: 288414
dataset_size: 827274
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TRANNGUYENAI/StockMomentum50000_60000 | TRANNGUYENAI | "2025-04-17T09:54:20" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-16T04:34:34" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 4326627
num_examples: 2750
download_size: 1548828
dataset_size: 4326627
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
abokinala/sputnik_100_28_pick_place_surface | abokinala | "2025-04-17T09:54:20" | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"device.so100",
"collection.sputnik_100",
"operator.abokinala"
] | [
"robotics"
] | "2025-04-17T09:53:51" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- device.so100
- collection.sputnik_100
- operator.abokinala
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 20,
"total_frames": 5971,
"total_tasks": 1,
"total_videos": 60,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.usb_front": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.side_view": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
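A rough loading sketch (assuming the `LeRobotDataset` class and import path from the LeRobot repository; this example is not part of the original card):
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
# Load the dataset by repo id; frames follow the feature schema above
dataset = LeRobotDataset("abokinala/sputnik_100_28_pick_place_surface")
frame = dataset[0]
print(frame["action"].shape, frame["observation.state"].shape)
```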
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
hf-doc-build/doc-build-dev | hf-doc-build | "2025-04-17T09:53:39" | 122,604 | 4 | [
"license:mit",
"region:us",
"documentation"
] | null | "2022-11-08T09:03:37" | ---
license: mit
tags:
- documentation
pretty_name: HF Documentation (PRs)
viewer: false
---
This is a dataset containing the docs from all the PRs that update one of the docs from https://huggingface.co/docs.
It is automatically updated by this [github action](https://github.com/huggingface/doc-builder/blob/main/.github/workflows/build_pr_documentation.yml) from the [doc-builder](https://github.com/huggingface/doc-builder) repo. |
hf-doc-build/doc-build | hf-doc-build | "2025-04-17T09:53:28" | 218,723 | 9 | [
"license:mit",
"region:us"
] | null | "2022-10-24T15:39:05" | ---
license: mit
pretty_name: Generated Docs for HF
viewer: false
---
This repo contains all the docs published on https://huggingface.co/docs.
The docs are generated with https://github.com/huggingface/doc-builder.
<!-- comment to trigger webhook.= --> |
intelsense/healix_360 | intelsense | "2025-04-17T09:53:26" | 2,283 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-03-09T09:38:09" | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: processed_text
dtype: string
splits:
- name: train
num_bytes: 644983655
num_examples: 832000
download_size: 341681739
dataset_size: 644983655
---
|
sevenc-nanashi/kiiteitte | sevenc-nanashi | "2025-04-17T09:52:31" | 830 | 0 | [
"size_categories:100K<n<1M",
"format:json",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2025-03-03T09:58:01" | ---
configs:
- config_name: default
data_files:
- split: "2023"
path: histories/2023/*.jsonl
- split: "2024"
path: histories/2024/*.jsonl
- split: "2025"
path: histories/2025/*.jsonl
- config_name: all_histories
data_files:
- split: "all_histories"
path: histories/*/*.jsonl
size_categories:
- 10K<n<100K
dataset_info:
- config_name: default
features:
- name: video_id
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: thumbnail
dtype: string
- name: date
dtype: timestamp[s]
- name: new_faves
dtype: int32
- name: spins
dtype: int32
- name: pickup_user_url
dtype: string
- name: pickup_user_name
dtype: string
- name: pickup_user_icon
dtype: string
- name: pickup_playlist_url
dtype: string
- config_name: all_histories
features:
- name: video_id
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: thumbnail
dtype: string
- name: date
dtype: timestamp[s]
- name: new_faves
dtype: int32
- name: spins
dtype: int32
- name: pickup_user_url
dtype: string
- name: pickup_user_name
dtype: string
- name: pickup_user_icon
dtype: string
- name: pickup_playlist_url
dtype: string
---
# Kiiteitte history
The complete song-selection history collected so far by [Kiiteitte](https://github.com/sevenc-nanashi/kiiteitte-web).
Updated once an hour.
## Schema
```jsonc
{
  // Video ID
  "video_id": "sm44670499",
  // Title
  "title": "library->w4nderers / 足立レイ、つくよみちゃん",
  // Uploader
  "author": "名無し。",
  // Thumbnail URL
  "thumbnail": "https://nicovideo.cdn.nimg.jp/thumbnails/44670499/44670499.91820835",
  // Date and time the song was selected
  "date": "2025-02-22 12:51:51",
  // Number of newly added favorites; null if unknown
  "new_faves": 5,
  // Number of users who spun along; null if unknown
  "spins": 13,
  // URL of the pickup-playlist user; null if the song was selected from outside a pickup playlist
  "pickup_user_url": "https://kiite.jp/user/vocahai_3939",
  // Name of the pickup-playlist user; null if the song was selected from outside a pickup playlist
  "pickup_user_name": "どこかのボカ廃",
  // Icon URL of the pickup-playlist user; null if the song was selected from outside a pickup playlist
  "pickup_user_icon": "https://kiite.jp/img/icon-user.jpg",
  // URL of the pickup playlist; null if the song was selected from outside a pickup playlist
  "pickup_playlist_url": "https://kiite.jp/playlist/0CbV8bnUxq",
}
```
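A minimal loading sketch (assuming the `datasets` library and the year-named splits declared in the YAML above):
```python
from datasets import load_dataset
# Splits are named by year: "2023", "2024", "2025"
dataset = load_dataset("sevenc-nanashi/kiiteitte", split="2024")
print(dataset[0]["title"], dataset[0]["date"])
```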
|
zkpbeats/reddit_ds_100415 | zkpbeats | "2025-04-17T09:52:24" | 679 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-04-03T12:12:07" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** zkpbeats/reddit_ds_100415
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5H3AggXAqErtsYWdn5A2cnf2MhkVS45HzqyErD3VxoDGWuxC
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: English. The data is mostly English but can be multilingual due to the decentralized way it is created.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
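For example, a time-based split can be carved out with the `datasets` library (a sketch, assuming `datetime` is an ISO-formatted string as described above):
```python
from datasets import load_dataset
# Load the full stream, then split on the `datetime` field
ds = load_dataset("zkpbeats/reddit_ds_100415", split="train")
train = ds.filter(lambda row: row["datetime"] < "2025-04-01")
test = ds.filter(lambda row: row["datetime"] >= "2025-04-01")
print(len(train), len(test))
```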
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{zkpbeats2025datauniversereddit_ds_100415,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={zkpbeats},
year={2025},
url={https://huggingface.co/datasets/zkpbeats/reddit_ds_100415},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 3809257
- **Date Range:** 2025-03-05T00:00:00Z to 2025-04-17T00:00:00Z
- **Last Updated:** 2025-04-17T09:52:22Z
### Data Distribution
- Posts: 1.69%
- Comments: 25.46%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/worldnews | 45869 | 4.44% |
| 2 | r/mildlyinteresting | 29148 | 2.82% |
| 3 | r/wallstreetbets | 26946 | 2.61% |
| 4 | r/Millennials | 18138 | 1.75% |
| 5 | r/redscarepod | 11965 | 1.16% |
| 6 | r/Gamingcirclejerk | 11426 | 1.10% |
| 7 | r/BravoRealHousewives | 10649 | 1.03% |
| 8 | r/CrazyFuckingVideos | 10438 | 1.01% |
| 9 | r/Grimdank | 9557 | 0.92% |
| 10 | r/mexico | 9144 | 0.88% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-04-03T11:50:44Z | 210554 | 210554 |
| 2025-04-03T11:52:19Z | 206249 | 416803 |
| 2025-04-03T11:53:58Z | 204074 | 620877 |
| 2025-04-03T11:55:36Z | 210761 | 831638 |
| 2025-04-03T11:57:11Z | 202795 | 1034433 |
| 2025-04-03T11:58:47Z | 228184 | 1262617 |
| 2025-04-03T12:00:22Z | 210899 | 1473516 |
| 2025-04-03T12:01:57Z | 204861 | 1678377 |
| 2025-04-03T12:03:35Z | 219572 | 1897949 |
| 2025-04-03T12:05:11Z | 216640 | 2114589 |
| 2025-04-03T12:06:39Z | 160498 | 2275087 |
| 2025-04-03T12:08:09Z | 166653 | 2441740 |
| 2025-04-03T12:09:37Z | 167136 | 2608876 |
| 2025-04-03T12:11:04Z | 166162 | 2775038 |
| 2025-04-03T12:13:03Z | 380438 | 3155476 |
| 2025-04-03T14:39:28Z | 6425 | 3161901 |
| 2025-04-03T17:01:39Z | 6679 | 3168580 |
| 2025-04-03T19:23:53Z | 7357 | 3175937 |
| 2025-04-03T21:46:21Z | 7852 | 3183789 |
| 2025-04-04T00:08:44Z | 5140 | 3188929 |
| 2025-04-04T02:31:20Z | 5171 | 3194100 |
| 2025-04-04T04:54:23Z | 5775 | 3199875 |
| 2025-04-04T07:16:23Z | 3838 | 3203713 |
| 2025-04-04T09:38:21Z | 2899 | 3206612 |
| 2025-04-04T12:00:26Z | 3628 | 3210240 |
| 2025-04-04T14:23:28Z | 6001 | 3216241 |
| 2025-04-04T16:45:46Z | 5832 | 3222073 |
| 2025-04-04T19:08:23Z | 6231 | 3228304 |
| 2025-04-04T21:30:56Z | 6569 | 3234873 |
| 2025-04-04T23:53:58Z | 5373 | 3240246 |
| 2025-04-05T02:16:15Z | 4243 | 3244489 |
| 2025-04-05T04:38:16Z | 4651 | 3249140 |
| 2025-04-05T07:00:19Z | 3495 | 3252635 |
| 2025-04-05T09:22:32Z | 3338 | 3255973 |
| 2025-04-05T11:45:03Z | 2452 | 3258425 |
| 2025-04-05T14:07:17Z | 4328 | 3262753 |
| 2025-04-05T16:29:57Z | 3689 | 3266442 |
| 2025-04-05T18:52:26Z | 5991 | 3272433 |
| 2025-04-05T21:15:03Z | 6854 | 3279287 |
| 2025-04-05T23:38:07Z | 6984 | 3286271 |
| 2025-04-06T02:00:27Z | 6991 | 3293262 |
| 2025-04-06T04:22:39Z | 4799 | 3298061 |
| 2025-04-06T06:44:50Z | 2661 | 3300722 |
| 2025-04-06T09:07:31Z | 3574 | 3304296 |
| 2025-04-06T11:29:54Z | 2172 | 3306468 |
| 2025-04-06T13:52:27Z | 4199 | 3310667 |
| 2025-04-06T16:14:59Z | 5858 | 3316525 |
| 2025-04-06T18:37:35Z | 5348 | 3321873 |
| 2025-04-06T21:00:19Z | 4912 | 3326785 |
| 2025-04-06T23:22:45Z | 4708 | 3331493 |
| 2025-04-07T01:45:19Z | 6301 | 3337794 |
| 2025-04-07T04:08:02Z | 5742 | 3343536 |
| 2025-04-07T06:30:16Z | 3658 | 3347194 |
| 2025-04-07T08:53:18Z | 2885 | 3350079 |
| 2025-04-07T11:16:48Z | 3487 | 3353566 |
| 2025-04-07T13:47:16Z | 4379 | 3357945 |
| 2025-04-07T16:12:00Z | 5713 | 3363658 |
| 2025-04-07T18:34:44Z | 8129 | 3371787 |
| 2025-04-07T20:57:17Z | 5487 | 3377274 |
| 2025-04-07T23:21:52Z | 6493 | 3383767 |
| 2025-04-08T01:50:29Z | 5170 | 3388937 |
| 2025-04-08T04:13:51Z | 8351 | 3397288 |
| 2025-04-08T06:36:57Z | 6843 | 3404131 |
| 2025-04-08T08:41:00Z | 2766 | 3406897 |
| 2025-04-08T09:50:27Z | 768 | 3407665 |
| 2025-04-08T12:13:12Z | 4904 | 3412569 |
| 2025-04-08T14:35:34Z | 4912 | 3417481 |
| 2025-04-08T16:58:23Z | 6852 | 3424333 |
| 2025-04-08T19:22:08Z | 9543 | 3433876 |
| 2025-04-08T21:45:20Z | 8265 | 3442141 |
| 2025-04-09T00:07:59Z | 6245 | 3448386 |
| 2025-04-09T02:31:24Z | 6627 | 3455013 |
| 2025-04-09T04:53:56Z | 4828 | 3459841 |
| 2025-04-09T07:16:33Z | 3892 | 3463733 |
| 2025-04-09T09:38:54Z | 4915 | 3468648 |
| 2025-04-09T12:02:24Z | 5319 | 3473967 |
| 2025-04-09T14:25:07Z | 4554 | 3478521 |
| 2025-04-09T16:48:02Z | 7621 | 3486142 |
| 2025-04-09T19:11:33Z | 7758 | 3493900 |
| 2025-04-09T21:34:00Z | 7341 | 3501241 |
| 2025-04-09T23:56:44Z | 6971 | 3508212 |
| 2025-04-10T02:19:25Z | 4208 | 3512420 |
| 2025-04-10T04:41:32Z | 5183 | 3517603 |
| 2025-04-10T07:03:53Z | 3823 | 3521426 |
| 2025-04-10T09:26:10Z | 4381 | 3525807 |
| 2025-04-10T11:48:29Z | 2557 | 3528364 |
| 2025-04-10T14:11:02Z | 5006 | 3533370 |
| 2025-04-10T16:33:29Z | 5322 | 3538692 |
| 2025-04-10T18:56:09Z | 8797 | 3547489 |
| 2025-04-10T21:18:32Z | 7802 | 3555291 |
| 2025-04-10T23:40:49Z | 6387 | 3561678 |
| 2025-04-11T02:03:18Z | 6742 | 3568420 |
| 2025-04-11T04:25:48Z | 5316 | 3573736 |
| 2025-04-11T06:48:21Z | 3208 | 3576944 |
| 2025-04-11T09:10:58Z | 3525 | 3580469 |
| 2025-04-11T11:33:09Z | 2446 | 3582915 |
| 2025-04-11T13:55:16Z | 5780 | 3588695 |
| 2025-04-11T16:18:19Z | 4603 | 3593298 |
| 2025-04-11T18:40:49Z | 6254 | 3599552 |
| 2025-04-11T21:04:04Z | 7102 | 3606654 |
| 2025-04-11T23:26:31Z | 6921 | 3613575 |
| 2025-04-12T01:48:50Z | 6846 | 3620421 |
| 2025-04-12T04:11:13Z | 5135 | 3625556 |
| 2025-04-12T06:33:18Z | 3085 | 3628641 |
| 2025-04-12T08:55:40Z | 3350 | 3631991 |
| 2025-04-12T11:17:56Z | 3300 | 3635291 |
| 2025-04-12T13:40:01Z | 4321 | 3639612 |
| 2025-04-12T16:02:33Z | 7240 | 3646852 |
| 2025-04-12T18:25:06Z | 5949 | 3652801 |
| 2025-04-12T20:47:44Z | 6256 | 3659057 |
| 2025-04-12T23:10:37Z | 5369 | 3664426 |
| 2025-04-13T01:33:00Z | 6485 | 3670911 |
| 2025-04-13T03:55:52Z | 5391 | 3676302 |
| 2025-04-13T06:18:04Z | 3830 | 3680132 |
| 2025-04-13T08:40:14Z | 3512 | 3683644 |
| 2025-04-13T11:02:59Z | 3418 | 3687062 |
| 2025-04-13T13:26:11Z | 5500 | 3692562 |
| 2025-04-13T15:48:49Z | 5493 | 3698055 |
| 2025-04-13T18:11:50Z | 7394 | 3705449 |
| 2025-04-13T20:37:01Z | 6159 | 3711608 |
| 2025-04-13T22:59:54Z | 5187 | 3716795 |
| 2025-04-14T01:22:35Z | 5111 | 3721906 |
| 2025-04-14T03:44:47Z | 5143 | 3727049 |
| 2025-04-14T06:08:05Z | 3821 | 3730870 |
| 2025-04-14T08:36:02Z | 3212 | 3734082 |
| 2025-04-14T11:04:28Z | 4720 | 3738802 |
| 2025-04-14T13:26:57Z | 5039 | 3743841 |
| 2025-04-14T15:50:38Z | 6988 | 3750829 |
| 2025-04-14T18:13:36Z | 8539 | 3759368 |
| 2025-04-14T20:36:13Z | 7420 | 3766788 |
| 2025-04-14T22:58:44Z | 6463 | 3773251 |
| 2025-04-15T01:21:10Z | 3726 | 3776977 |
| 2025-04-15T03:43:52Z | 5115 | 3782092 |
| 2025-04-15T06:43:06Z | 5232 | 3787324 |
| 2025-04-15T09:04:46Z | 459 | 3787783 |
| 2025-04-15T11:08:07Z | 354 | 3788137 |
| 2025-04-15T13:38:11Z | 740 | 3788877 |
| 2025-04-15T16:02:08Z | 880 | 3789757 |
| 2025-04-15T18:26:29Z | 888 | 3790645 |
| 2025-04-15T20:52:51Z | 847 | 3791492 |
| 2025-04-15T23:16:39Z | 877 | 3792369 |
| 2025-04-16T01:42:04Z | 713 | 3793082 |
| 2025-04-16T04:04:31Z | 795 | 3793877 |
| 2025-04-16T06:26:24Z | 461 | 3794338 |
| 2025-04-16T08:52:25Z | 436 | 3794774 |
| 2025-04-16T10:03:14Z | 175 | 3794949 |
| 2025-04-16T12:26:51Z | 521 | 3795470 |
| 2025-04-16T14:49:10Z | 819 | 3796289 |
| 2025-04-16T17:13:40Z | 932 | 3797221 |
| 2025-04-16T19:36:27Z | 901 | 3798122 |
| 2025-04-16T21:59:22Z | 789 | 3798911 |
| 2025-04-17T00:21:24Z | 689 | 3799600 |
| 2025-04-17T02:44:40Z | 694 | 3800294 |
| 2025-04-17T05:07:20Z | 586 | 3800880 |
| 2025-04-17T07:30:12Z | 4648 | 3805528 |
| 2025-04-17T09:52:22Z | 3729 | 3809257 |
|
MilyaShams/multi_nli-ru_10k | MilyaShams | "2025-04-17T09:51:56" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T09:51:50" | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 2847083.1812222223
num_examples: 7999
- name: validation
num_bytes: 356285.8187777778
num_examples: 1001
- name: test
num_bytes: 350769
num_examples: 1000
download_size: 1939594
dataset_size: 3554138.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
davnas/library-occupancy | davnas | "2025-04-17T09:51:39" | 1,746 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-10T12:50:21" | ---
dataset_info:
features:
- name: CommitTime
dtype: timestamp[ns]
- name: Time
dtype: string
- name: Occupancy_main
dtype: int64
- name: Occupancy_southEast
dtype: int64
- name: Occupancy_north
dtype: int64
- name: Occupancy_south
dtype: int64
- name: Occupancy_angdomen
dtype: int64
- name: Occupancy_newton
dtype: int64
- name: Prediction_date
dtype: timestamp[ns]
splits:
- name: train
num_bytes: 179945
num_examples: 2465
download_size: 26804
dataset_size: 179945
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zkpbeats/reddit_ds_684447 | zkpbeats | "2025-04-17T09:51:18" | 755 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2025-04-03T12:09:12" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** zkpbeats/reddit_ds_684447
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5E7EpVEXpKBhiJsatUbQwhkQRzLSB2j8GgwwbWoLnYpJmpQn
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: English. The data is mostly English but can be multilingual due to the decentralized way it is created.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{zkpbeats2025datauniversereddit_ds_684447,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={zkpbeats},
year={2025},
url={https://huggingface.co/datasets/zkpbeats/reddit_ds_684447},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 4018209
- **Date Range:** 2025-03-05T00:00:00Z to 2025-04-17T00:00:00Z
- **Last Updated:** 2025-04-17T09:51:17Z
### Data Distribution
- Posts: 2.34%
- Comments: 36.89%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/teenagers | 70144 | 4.45% |
| 2 | r/Helldivers | 67909 | 4.31% |
| 3 | r/formula1 | 49319 | 3.13% |
| 4 | r/technology | 48438 | 3.07% |
| 5 | r/SipsTea | 39755 | 2.52% |
| 6 | r/wallstreetbets | 36881 | 2.34% |
| 7 | r/Superstonk | 32231 | 2.04% |
| 8 | r/boxoffice | 23146 | 1.47% |
| 9 | r/CasualUK | 22661 | 1.44% |
| 10 | r/ClashOfClans | 22355 | 1.42% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-04-03T11:50:44Z | 210554 | 210554 |
| 2025-04-03T11:52:19Z | 206249 | 416803 |
| 2025-04-03T11:53:58Z | 204074 | 620877 |
| 2025-04-03T11:55:36Z | 210761 | 831638 |
| 2025-04-03T11:57:11Z | 202795 | 1034433 |
| 2025-04-03T11:58:47Z | 228184 | 1262617 |
| 2025-04-03T12:00:22Z | 210899 | 1473516 |
| 2025-04-03T12:01:57Z | 204861 | 1678377 |
| 2025-04-03T12:03:35Z | 219572 | 1897949 |
| 2025-04-03T12:05:11Z | 216640 | 2114589 |
| 2025-04-03T12:06:39Z | 160498 | 2275087 |
| 2025-04-03T12:08:09Z | 166653 | 2441740 |
| 2025-04-03T12:09:37Z | 167136 | 2608876 |
| 2025-04-03T14:37:16Z | 6012 | 2614888 |
| 2025-04-03T16:59:27Z | 8021 | 2622909 |
| 2025-04-03T19:21:39Z | 5134 | 2628043 |
| 2025-04-03T21:44:03Z | 8218 | 2636261 |
| 2025-04-04T00:06:31Z | 4549 | 2640810 |
| 2025-04-04T02:29:04Z | 3893 | 2644703 |
| 2025-04-04T04:52:12Z | 3339 | 2648042 |
| 2025-04-04T07:14:12Z | 3585 | 2651627 |
| 2025-04-04T09:36:09Z | 2926 | 2654553 |
| 2025-04-04T11:58:15Z | 2273 | 2656826 |
| 2025-04-04T14:21:16Z | 4070 | 2660896 |
| 2025-04-04T16:43:31Z | 7070 | 2667966 |
| 2025-04-04T19:06:00Z | 5014 | 2672980 |
| 2025-04-04T21:28:34Z | 5496 | 2678476 |
| 2025-04-04T23:51:43Z | 6492 | 2684968 |
| 2025-04-05T02:14:05Z | 4765 | 2689733 |
| 2025-04-05T04:36:05Z | 3603 | 2693336 |
| 2025-04-05T06:58:08Z | 3641 | 2696977 |
| 2025-04-05T09:20:21Z | 3285 | 2700262 |
| 2025-04-05T11:42:50Z | 3567 | 2703829 |
| 2025-04-05T14:05:04Z | 4652 | 2708481 |
| 2025-04-05T16:27:43Z | 6703 | 2715184 |
| 2025-04-05T18:50:12Z | 4309 | 2719493 |
| 2025-04-05T21:12:49Z | 4931 | 2724424 |
| 2025-04-05T23:35:48Z | 6682 | 2731106 |
| 2025-04-06T01:58:15Z | 5677 | 2736783 |
| 2025-04-06T04:20:25Z | 4914 | 2741697 |
| 2025-04-06T06:42:38Z | 4107 | 2745804 |
| 2025-04-06T09:05:17Z | 2916 | 2748720 |
| 2025-04-06T11:27:41Z | 3423 | 2752143 |
| 2025-04-06T13:50:12Z | 4382 | 2756525 |
| 2025-04-06T16:12:46Z | 7841 | 2764366 |
| 2025-04-06T18:35:22Z | 6755 | 2771121 |
| 2025-04-06T20:58:06Z | 7553 | 2778674 |
| 2025-04-06T23:20:29Z | 4694 | 2783368 |
| 2025-04-07T01:42:58Z | 4013 | 2787381 |
| 2025-04-07T04:05:48Z | 4205 | 2791586 |
| 2025-04-07T06:28:00Z | 4865 | 2796451 |
| 2025-04-07T08:51:04Z | 4575 | 2801026 |
| 2025-04-07T11:14:33Z | 2976 | 2804002 |
| 2025-04-07T13:43:59Z | 6746 | 2810748 |
| 2025-04-07T16:09:27Z | 7351 | 2818099 |
| 2025-04-07T18:32:30Z | 7438 | 2825537 |
| 2025-04-07T20:55:03Z | 5858 | 2831395 |
| 2025-04-07T23:19:23Z | 5595 | 2836990 |
| 2025-04-08T01:47:28Z | 4501 | 2841491 |
| 2025-04-08T04:11:35Z | 5201 | 2846692 |
| 2025-04-08T06:34:42Z | 4027 | 2850719 |
| 2025-04-08T07:25:07Z | 819500 | 3670219 |
| 2025-04-08T09:49:20Z | 3467 | 3673686 |
| 2025-04-08T12:12:06Z | 2923 | 3676609 |
| 2025-04-08T14:34:28Z | 3340 | 3679949 |
| 2025-04-08T16:57:15Z | 7115 | 3687064 |
| 2025-04-08T19:20:48Z | 6129 | 3693193 |
| 2025-04-08T21:44:13Z | 5425 | 3698618 |
| 2025-04-09T00:06:51Z | 4591 | 3703209 |
| 2025-04-09T02:30:18Z | 5063 | 3708272 |
| 2025-04-09T04:52:49Z | 4510 | 3712782 |
| 2025-04-09T07:15:25Z | 2576 | 3715358 |
| 2025-04-09T09:37:46Z | 3146 | 3718504 |
| 2025-04-09T12:01:12Z | 2624 | 3721128 |
| 2025-04-09T14:24:00Z | 5747 | 3726875 |
| 2025-04-09T16:46:55Z | 5781 | 3732656 |
| 2025-04-09T19:10:05Z | 6126 | 3738782 |
| 2025-04-09T21:32:52Z | 5814 | 3744596 |
| 2025-04-09T23:55:35Z | 5109 | 3749705 |
| 2025-04-10T02:18:17Z | 4401 | 3754106 |
| 2025-04-10T04:40:25Z | 3962 | 3758068 |
| 2025-04-10T07:02:46Z | 3661 | 3761729 |
| 2025-04-10T09:25:04Z | 2521 | 3764250 |
| 2025-04-10T11:47:21Z | 3076 | 3767326 |
| 2025-04-10T14:09:56Z | 5727 | 3773053 |
| 2025-04-10T16:32:17Z | 6677 | 3779730 |
| 2025-04-10T18:55:00Z | 7279 | 3787009 |
| 2025-04-10T21:17:25Z | 5033 | 3792042 |
| 2025-04-10T23:39:42Z | 6182 | 3798224 |
| 2025-04-11T02:02:12Z | 5227 | 3803451 |
| 2025-04-11T04:24:43Z | 3812 | 3807263 |
| 2025-04-11T06:47:16Z | 3407 | 3810670 |
| 2025-04-11T09:09:52Z | 3223 | 3813893 |
| 2025-04-11T11:32:04Z | 2816 | 3816709 |
| 2025-04-11T13:54:09Z | 3479 | 3820188 |
| 2025-04-11T16:17:11Z | 7176 | 3827364 |
| 2025-04-11T18:39:42Z | 7644 | 3835008 |
| 2025-04-11T21:02:54Z | 5167 | 3840175 |
| 2025-04-11T23:25:23Z | 4410 | 3844585 |
| 2025-04-12T01:47:44Z | 5528 | 3850113 |
| 2025-04-12T04:09:52Z | 5178 | 3855291 |
| 2025-04-12T06:32:12Z | 4863 | 3860154 |
| 2025-04-12T08:54:33Z | 2591 | 3862745 |
| 2025-04-12T11:16:50Z | 3386 | 3866131 |
| 2025-04-12T13:38:55Z | 3952 | 3870083 |
| 2025-04-12T16:01:27Z | 6808 | 3876891 |
| 2025-04-12T18:24:00Z | 7035 | 3883926 |
| 2025-04-12T20:46:38Z | 6413 | 3890339 |
| 2025-04-12T23:09:25Z | 7285 | 3897624 |
| 2025-04-13T01:31:53Z | 5743 | 3903367 |
| 2025-04-13T03:54:44Z | 3964 | 3907331 |
| 2025-04-13T06:16:57Z | 3439 | 3910770 |
| 2025-04-13T08:39:04Z | 3634 | 3914404 |
| 2025-04-13T11:01:52Z | 3084 | 3917488 |
| 2025-04-13T13:25:02Z | 3285 | 3920773 |
| 2025-04-13T15:47:42Z | 6797 | 3927570 |
| 2025-04-13T18:10:43Z | 8090 | 3935660 |
| 2025-04-13T20:35:52Z | 4613 | 3940273 |
| 2025-04-13T22:58:44Z | 4753 | 3945026 |
| 2025-04-14T01:21:29Z | 4582 | 3949608 |
| 2025-04-14T03:43:38Z | 4468 | 3954076 |
| 2025-04-14T06:06:53Z | 3461 | 3957537 |
| 2025-04-14T08:34:34Z | 3781 | 3961318 |
| 2025-04-14T11:03:16Z | 2441 | 3963759 |
| 2025-04-14T13:25:50Z | 3427 | 3967186 |
| 2025-04-14T15:49:29Z | 5023 | 3972209 |
| 2025-04-14T18:12:27Z | 6004 | 3978213 |
| 2025-04-14T20:35:03Z | 6375 | 3984588 |
| 2025-04-14T22:57:38Z | 4204 | 3988792 |
| 2025-04-15T01:19:58Z | 6034 | 3994826 |
| 2025-04-15T03:42:43Z | 3592 | 3998418 |
| 2025-04-15T06:40:46Z | 269 | 3998687 |
| 2025-04-15T09:03:42Z | 555 | 3999242 |
| 2025-04-15T11:05:57Z | 333 | 3999575 |
| 2025-04-15T13:35:26Z | 744 | 4000319 |
| 2025-04-15T15:59:57Z | 917 | 4001236 |
| 2025-04-15T18:24:19Z | 884 | 4002120 |
| 2025-04-15T20:50:33Z | 802 | 4002922 |
| 2025-04-15T23:14:28Z | 882 | 4003804 |
| 2025-04-16T01:38:45Z | 705 | 4004509 |
| 2025-04-16T02:48:42Z | 221 | 4004730 |
| 2025-04-16T05:11:06Z | 577 | 4005307 |
| 2025-04-16T07:32:59Z | 457 | 4005764 |
| 2025-04-16T10:02:08Z | 495 | 4006259 |
| 2025-04-16T12:25:44Z | 568 | 4006827 |
| 2025-04-16T14:48:04Z | 817 | 4007644 |
| 2025-04-16T17:12:34Z | 926 | 4008570 |
| 2025-04-16T19:35:22Z | 904 | 4009474 |
| 2025-04-16T21:58:12Z | 872 | 4010346 |
| 2025-04-17T00:20:19Z | 757 | 4011103 |
| 2025-04-17T02:43:34Z | 668 | 4011771 |
| 2025-04-17T05:06:13Z | 620 | 4012391 |
| 2025-04-17T07:29:08Z | 3007 | 4015398 |
| 2025-04-17T09:51:17Z | 2811 | 4018209 |
|
amyf/hacker_news_score | amyf | "2025-04-17T09:50:51" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T09:50:38" | ---
dataset_info:
features:
- name: title
dtype: string
- name: score
dtype: int64
- name: time
dtype: timestamp[ns]
- name: url
dtype: string
splits:
- name: train
num_bytes: 211111120
num_examples: 1500000
download_size: 144585746
dataset_size: 211111120
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyentn1410/Trend70000_90000 | nguyentn1410 | "2025-04-17T09:50:32" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T04:05:45" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 1115782
num_examples: 800
download_size: 409079
dataset_size: 1115782
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
macwiatrak/bacbench-strain-clustering-dna | macwiatrak | "2025-04-17T09:50:32" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-17T09:01:06" | ---
dataset_info:
features:
- name: genome_name
dtype: string
- name: contig_name
sequence: string
- name: dna_sequence
dtype: string
- name: start
sequence:
sequence: string
- name: end
sequence:
sequence: string
- name: locus_tag
sequence:
sequence: string
- name: strand
sequence:
sequence: string
- name: genome_completeness
dtype: string
- name: genome_lineage
dtype: string
- name: genome_sample_accession
dtype: string
- name: genome_study_accession
dtype: string
- name: country
dtype: string
- name: family
dtype: string
- name: genus
dtype: string
- name: species
dtype: string
splits:
- name: test
num_bytes: 169603550566
num_examples: 60710
download_size: 78407309865
dataset_size: 169603550566
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
TRANNGUYENAI/StockMomentum60000_70000 | TRANNGUYENAI | "2025-04-17T09:49:44" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-16T15:37:09" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 5981571
num_examples: 3750
download_size: 2145093
dataset_size: 5981571
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gunnybd01/Fully20000_40000 | gunnybd01 | "2025-04-17T09:49:41" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-04-16T20:21:02" | ---
dataset_info:
features:
- name: reports
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 4481477
num_examples: 3150
download_size: 1590906
dataset_size: 4481477
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|