language:
- en
license: cc-by-sa-3.0
tags:
- treecorpus
- treecorpuscleaned
- wikipedia
- encyclopedia
- knowledge-base
- factual-knowledge
- training-data
- conversational-ai
- nlp
- language-model
- text-corpus
- qa-dataset
- structured-data
- clean-text
pretty_name: 'TreeCorpusCleaned: Basic Wikipedia Text Cleanup for AI Models'
size_categories:
- 10M<n<100M
TreeCorpusCleaned
TreeCorpusCleaned is a modestly improved version of the TreeCorpus dataset, with additional basic cleaning that reduces Wikipedia markup artifacts. It provides a slightly cleaner copy of Wikipedia content for training AI models.
Dataset Statistics
- Size: 26.27 GB (26,272,580,250 bytes)
- Examples: 2,882,766 articles
- Download Size: 13.33 GB (13,326,529,312 bytes)
- Language: English
Data Structure
Each entry in the dataset contains the following fields (see the loading example below):
- id (string): Unique Wikipedia article identifier
- title (string): Article title
- text (string): Article text with basic additional cleanup
- url (string): Source Wikipedia URL
- timestamp (string): Processing timestamp
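These fields can be inspected directly with the datasets library. This is a minimal sketch: the repository id is a placeholder for wherever the dataset is hosted, and streaming is used only to avoid downloading the full shards up front.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub id hosting TreeCorpusCleaned.
ds = load_dataset("your-org/TreeCorpusCleaned", split="train", streaming=True)

record = next(iter(ds))
print(record["id"], record["title"], record["url"], record["timestamp"])
print(record["text"][:300])  # first few hundred characters of the cleaned article text
```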
Basic Additional Cleaning
TreeCorpusCleaned includes some simple improvements over the original TreeCorpus (an illustrative sketch of this kind of cleanup follows the list):
- Basic Reference Cleanup: Additional removal of common reference markers
- Simple Template Handling: Some extra template cleanup
- Formatting Standardization: Basic text formatting standardization
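As a rough illustration only, the snippet below sketches the kind of regex-based cleanup these steps describe. It is not the exact pipeline used to build the dataset; the patterns are assumptions about typical leftover wiki markup.

```python
import re

def basic_cleanup(text: str) -> str:
    """Illustrative cleanup of the kind described above; not the actual pipeline."""
    # Basic reference cleanup: drop bracketed citation markers such as [1] or [citation needed].
    text = re.sub(r"\[\d+\]|\[citation needed\]", "", text, flags=re.IGNORECASE)
    # Simple template handling: strip leftover {{...}} template fragments.
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)
    # Formatting standardization: collapse repeated spaces and excess blank lines.
    text = re.sub(r"[ \t]+", " ", text)
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```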
Usage
This dataset is suitable for the following uses (a minimal preprocessing example follows the list):
- Training language models with slightly cleaner Wikipedia text
- NLP tasks that benefit from reduced markup artifacts
- Projects requiring basic preprocessing of Wikipedia content
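For instance, a language-model preprocessing pass might stream the corpus and tokenize the text field. A minimal sketch, assuming a placeholder repository id and an arbitrary tokenizer (GPT-2 here); adapt both to your setup.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder repo id and tokenizer choice; swap in the actual Hub id and your model's tokenizer.
ds = load_dataset("your-org/TreeCorpusCleaned", split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = ds.map(
    tokenize,
    batched=True,
    remove_columns=["id", "title", "text", "url", "timestamp"],
)
```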
License and Citation
TreeCorpusCleaned is derived from Wikipedia content available under the CC BY-SA 3.0 license. When using this dataset, please provide appropriate attribution to both this dataset and Wikipedia.
Dataset Configuration
The dataset is configured with a single default split (see the loading sketch after this list):
- Split name: train
- Data files pattern: data/train-*
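The default configuration resolves the data/train-* shards into a single train split, so the two calls below should be equivalent; the repository id is again a placeholder.

```python
from datasets import load_dataset

# Default config: the data/train-* pattern is exposed as the "train" split.
train = load_dataset("your-org/TreeCorpusCleaned", split="train")

# Equivalent explicit form, pointing at the same shard pattern listed in the card.
train_explicit = load_dataset(
    "your-org/TreeCorpusCleaned",
    data_files={"train": "data/train-*"},
    split="train",
)
```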
Creation Process
TreeCorpusCleaned was created by applying some additional basic cleaning steps to the original TreeCorpus dataset to remove common artifacts and improve text quality.