---
language:
- en
license: cc-by-sa-3.0
tags:
- treecorpus
- treecorpuscleaned
- wikipedia
- encyclopedia
- knowledge-base
- factual-knowledge
- training-data
- conversational-ai
- nlp
- language-model
- text-corpus
- qa-dataset
- structured-data
- clean-text
pretty_name: 'TreeCorpusCleaned: Basic Wikipedia Text Cleanup for AI Models'
size_categories:
- 1M<n<10M
---
# TreeCorpusCleaned
TreeCorpusCleaned is a modestly improved version of the TreeCorpus dataset, with additional basic cleaning applied to reduce Wikipedia markup artifacts. It provides a slightly cleaner version of Wikipedia content for training AI models.
## Dataset Statistics
- **Size**: 26.27 GB (26,272,580,250 bytes)
- **Examples**: 2,882,766 articles
- **Download Size**: 13.33 GB (13,326,529,312 bytes)
- **Language**: English
## Data Structure
Each entry in the dataset contains the following fields (a loading example follows the list):
- `id` (string): Unique Wikipedia article identifier
- `title` (string): Article title
- `text` (string): Text with basic additional cleanup
- `url` (string): Source Wikipedia URL
- `timestamp` (string): Processing timestamp
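The snippet below shows one way to load the dataset and inspect these fields with the Hugging Face `datasets` library. The repository id `TreeCorpusCleaned` is a placeholder; substitute the actual Hub path where the dataset is hosted.

```python
from datasets import load_dataset

# Load the train split ("TreeCorpusCleaned" is a placeholder repository id).
dataset = load_dataset("TreeCorpusCleaned", split="train")

# Each record exposes the fields documented above.
example = dataset[0]
print(example["id"])          # unique Wikipedia article identifier
print(example["title"])       # article title
print(example["url"])         # source Wikipedia URL
print(example["timestamp"])   # processing timestamp
print(example["text"][:500])  # first 500 characters of the cleaned text
```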
## Basic Additional Cleaning
TreeCorpusCleaned includes some simple improvements over the original TreeCorpus (illustrated by the sketch after this list):
- **Basic Reference Cleanup**: Additional removal of common reference markers
- **Simple Template Handling**: Removal of some remaining wiki template markup
- **Formatting Standardization**: Basic standardization of text formatting
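The exact cleaning rules live in the processing pipeline; the sketch below is only a hypothetical illustration of the kind of regex-based cleanup described above (reference markers, leftover templates, whitespace normalization), not the code used to build the dataset.

```python
import re

def basic_cleanup(text: str) -> str:
    """Illustrative cleanup of common Wikipedia markup artifacts.

    A rough approximation of the steps listed above, not the actual
    TreeCorpusCleaned pipeline.
    """
    # Reference cleanup: drop bracketed citation markers such as [1] or [23].
    text = re.sub(r"\[\d+\]", "", text)
    # Template handling: strip simple leftover {{...}} templates.
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)
    # Formatting standardization: collapse repeated spaces and blank lines.
    text = re.sub(r"[ \t]+", " ", text)
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```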
## Usage
This dataset is suitable for the following uses (a streaming example follows the list):
- Training language models with slightly cleaner Wikipedia text
- NLP tasks that benefit from reduced markup artifacts
- Projects requiring basic preprocessing of Wikipedia content
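Because the full dataset is roughly 26 GB, streaming can be more practical than a full download for many of these uses. A minimal sketch with the `datasets` streaming mode, again using a placeholder repository id:

```python
from datasets import load_dataset

# Stream records without downloading the full ~26 GB to disk.
# "TreeCorpusCleaned" is a placeholder for the actual Hub repository id.
stream = load_dataset("TreeCorpusCleaned", split="train", streaming=True)

for i, article in enumerate(stream):
    print(article["title"])
    if i >= 4:  # show the first five titles
        break
```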
## License and Citation
TreeCorpusCleaned is derived from Wikipedia content available under the CC BY-SA 3.0 license. When using this dataset, please provide appropriate attribution to both this dataset and Wikipedia.
## Dataset Configuration
The dataset is configured with a default split (a direct-loading example follows the list):
- Split name: train
- Data files pattern: data/train-*
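If you work from the raw shards rather than the Hub loader, the split can be rebuilt from the data files pattern directly. This sketch assumes the shards are Parquet files, which is common for Hub datasets but not stated above; adjust the builder name if the format differs.

```python
from datasets import load_dataset

# Build the train split from the shard pattern directly (assumes Parquet shards).
dataset = load_dataset("parquet", data_files={"train": "data/train-*"}, split="train")
```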
## Creation Process
TreeCorpusCleaned was created by applying additional basic cleaning steps to the original TreeCorpus dataset to remove common markup artifacts and improve text quality.