---
license: apache-2.0
task_categories:
- text-classification
tags:
- text-moderation
language:
  - en
  - de
  - fr
  - es
  - it
  - sv
  - fi
  - pl
  - cs
  - lv
  - zh
  - ja
  - ko
  - ru
  - uk
  - be
  - kk
---

# Text-Moderation-Multilingual

A comprehensive multilingual text moderation dataset combining multiple high-quality sources for training robust content moderation classifiers.

## Dataset Summary

This dataset aggregates text moderation data from multiple sources to create a large-scale, diverse training corpus for content moderation systems. It includes text samples labeled across multiple harmful content categories, supporting both multilingual and English-specific moderation use cases.

**Total Size:** ~1.6M entries (1,621,500 across splits)  
**Languages:** Multilingual (primary focus on English)  
**Task:** Multi-label text classification for content moderation

## Dataset Structure

### Data Fields

- `prompt` (string): The input text to be classified
- `S` (int): Sexual content (0 = safe, 1 = harmful)
- `H` (int): Hate speech (0 = safe, 1 = harmful)  
- `V` (int): Violence (0 = safe, 1 = harmful)
- `HR` (int): Harassment (0 = safe, 1 = harmful)
- `SH` (int): Self-harm (0 = safe, 1 = harmful)
- `S3` (int): Sexual content involving minors (0 = safe, 1 = harmful)
- `H2` (int): Threatening hate speech (hate/threatening) (0 = safe, 1 = harmful)
- `V2` (int): Graphic violence (violence/graphic) (0 = safe, 1 = harmful)
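
The eight flags can be collapsed into a single multi-hot vector per example, which is the input shape most multi-label classifiers expect. A minimal sketch (the column order below is a convention chosen for illustration, not something the dataset mandates):

```python
# Collect the eight binary label columns into one multi-hot vector per example.
LABEL_COLUMNS = ["S", "H", "V", "HR", "SH", "S3", "H2", "V2"]

def to_label_vector(example):
    """Add a `labels` field containing the eight flags as a list of floats."""
    example["labels"] = [float(example[col]) for col in LABEL_COLUMNS]
    return example

# Usage with a loaded split (see the loading example further below):
# train_data = train_data.map(to_label_vector)
```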

### Data Splits

- **Train:** 1,459,350 samples
- **Validation:** 162,150 samples

*Note: The split was created with a 90/10 train/validation ratio using random seed 42.*
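
For reference, the mechanics of such a split can be reproduced with `Dataset.train_test_split`; this is a sketch of the procedure described in the note above, not the exact script used to produce the published splits:

```python
from datasets import load_dataset

# Illustrative only: re-create a 90/10 split with seed 42 from the train split.
ds = load_dataset("KoalaAI/Text-Moderation-Multilingual", split="train")
resplit = ds.train_test_split(test_size=0.10, seed=42)
train_data, val_data = resplit["train"], resplit["test"]
print(len(train_data), len(val_data))
```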

## Source Datasets

This dataset combines and harmonizes data from:

- **[ifmain's multilingual dataset](https://huggingface.co/datasets/ifmain/text-moderation-02-multilingual)** - Multilingual moderation examples
- **[OpenAI's English evaluation dataset](https://huggingface.co/datasets/mmathys/openai-moderation-api-evaluation)** - High-quality English evaluation samples  
- **[ifmain's English dataset](https://huggingface.co/datasets/ifmain/text-moderation-01)** - English moderation examples

## Intended Use

### Primary Use Cases
- Training text moderation classifiers (a minimal setup sketch follows this list)
- Benchmarking content moderation systems
- Research into automated content moderation
- Multi-label classification model development
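
A minimal sketch of the first use case, setting up a multi-label classifier with Hugging Face `transformers`; the checkpoint, sequence length, and batch size are illustrative choices, not recommendations:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABEL_COLUMNS = ["S", "H", "V", "HR", "SH", "S3", "H2", "V2"]

# Illustrative checkpoint; any multilingual encoder could be substituted.
checkpoint = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=len(LABEL_COLUMNS),
    problem_type="multi_label_classification",  # BCE loss over the 8 flags
)

def preprocess(example):
    enc = tokenizer(example["prompt"], truncation=True, max_length=256)
    enc["labels"] = [float(example[c]) for c in LABEL_COLUMNS]
    return enc

dataset = load_dataset("KoalaAI/Text-Moderation-Multilingual")
encoded = dataset.map(preprocess, remove_columns=dataset["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="moderation-model",
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
# trainer.train()  # uncomment to run
```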

### Out-of-Scope Uses
- This dataset is **not intended** for any purpose other than building, evaluating, and researching content moderation systems
- Should not be used to generate harmful content
- Not suitable for general text classification tasks outside of moderation

## Considerations for Using the Data

### Content Warning
This dataset contains examples of harmful content including hate speech, harassment, violence, and other potentially disturbing material. Users should exercise appropriate caution when working with this data.

### Bias and Limitations
- The dataset reflects the biases present in the source datasets
- Content moderation standards may vary across different platforms and cultures
- Label consistency across merged datasets may vary
- Primarily English-focused despite multilingual components

### Ethical Considerations
- This dataset should only be used to improve content moderation and safety systems
- Researchers and developers should implement appropriate safeguards when working with this data
- The goal is to reduce harmful content online, not to amplify it

## Example Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("KoalaAI/Text-Moderation-Multilingual")

# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]

# Example entry
print(train_data[0])
# {
#   'prompt': 'Example text...',
#   'S': 0, 'H': 0, 'V': 0, 'HR': 0, 
#   'SH': 0, 'S3': 0, 'H2': 0, 'V2': 0
# }
```
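
To gauge how imbalanced the categories are before training, the label columns can be summarized directly. A short follow-up to the example above, assuming `train_data` from that snippet:

```python
import pandas as pd

# Convert the train split to a DataFrame and report the positive rate per category.
df = train_data.to_pandas()
label_columns = ["S", "H", "V", "HR", "SH", "S3", "H2", "V2"]
print(df[label_columns].mean().sort_values(ascending=False))
```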

## Dataset Creation

### Curation Process
1. Source datasets were identified and downloaded
2. Data was harmonized to use consistent labeling schema
3. Entries were merged and deduplicated where appropriate (a rough sketch follows this list)
4. Train/validation split was created using stratified sampling
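
As referenced in step 3, a rough illustration of what prompt-level deduplication could look like; the exact normalization and merge rules used upstream are not published here:

```python
import pandas as pd

# Illustrative deduplication: keep the first occurrence of each normalized prompt.
def deduplicate(frames: list[pd.DataFrame]) -> pd.DataFrame:
    merged = pd.concat(frames, ignore_index=True)
    normalized = merged["prompt"].str.strip().str.lower()
    return merged.loc[~normalized.duplicated()].reset_index(drop=True)
```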

### Quality Control
- Labels were preserved from original high-quality sources
- Data integrity checks were performed during the merging process
- Consistent schema applied across all entries

## License

Please refer to the licenses of the individual source datasets:
- Check the ifmain datasets for their respective licensing terms
- OpenAI evaluation dataset licensing applies to that portion
- Usage should comply with all source dataset requirements

## Citation

If you use this dataset, please cite this compilation along with the original source datasets:

```bibtex
@misc{text-moderation-multilingual,
  title={Text-Moderation-Multilingual: A Multilingual Text Moderation Dataset},
  author={KoalaAI},
  year={2025},
  note={Aggregated from ifmain's and OpenAI's moderation datasets}
}
```

## Contact

For questions about this dataset compilation, please open an issue on this repository.

---

**Disclaimer:** This dataset is provided for research and safety purposes only. Users are responsible for ensuring ethical use and compliance with applicable laws and regulations.