---
language: pt
license: cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
size_categories:
- 100M<n<1B
dataset_info:
  features:
  - name: url
    dtype: string
  - name: content_languages
    dtype: string
  - name: warc_filename
    dtype: string
  - name: warc_record_offset
    dtype: int64
  - name: warc_record_length
    dtype: int64
  - name: text
    dtype: string
  - name: crawl_timestamp
    dtype: string
  splits:
  - name: train
    num_bytes: 1087519823221
    num_examples: 342818651
  download_size: 1087713663056
  dataset_size: 1087519823221
pretty_name: Canarim
---

<p align="center">
  <img width="250" alt="Camarim Logo" src="https://raw.githubusercontent.com/DominguesM/Canarim-Instruct-PTBR/main/assets/canarim.png">
</p>

<p align="center">
  <a href="https://huggingface.co/datasets/dominguesm/canarim">[🐱 HuggingFace]</a>
</p>

<hr>


# Canarim: A Large-Scale Dataset of Web Pages in the Portuguese Language

## Introduction

Canarim is a dataset of over 342 million Portuguese-language documents sourced from multiple Common Crawl snapshots. At nearly 1 terabyte, it is one of the most extensive collections of Portuguese-language data available. The corpus has undergone an initial URL-based deduplication pass; further text-based deduplication and filtering of potentially harmful content are planned. The data, originally HTML, was converted to Markdown with the `Trafilatura` library to improve readability and quality. Canarim aims to be a valuable resource for NLP research, particularly for Portuguese-language applications, helping to fill the gap in large-scale, high-quality data for languages other than English.
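
The exact deduplication pipeline is not described here, but as a rough sketch, URL-based deduplication amounts to keeping the first record seen per normalized URL. The `normalize_url` helper below and its normalization rules are a hypothetical illustration, not Canarim's actual code:

```python
# Hypothetical sketch of URL-based deduplication; the real Canarim
# pipeline may normalize and compare URLs differently.
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Lowercase the scheme/host and drop the fragment so trivially
    different URLs compare equal."""
    parts = urlsplit(url)
    return urlunsplit(
        (parts.scheme.lower(), parts.netloc.lower(), parts.path, parts.query, "")
    )

def dedupe_by_url(records):
    """Yield only the first record seen for each normalized URL."""
    seen = set()
    for record in records:
        key = normalize_url(record["url"])
        if key not in seen:
            seen.add(key)
            yield record
```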

## Dataset Structure

### Data Instances

An example looks as follows:

```json
{
  "url": "...",
  "content_languages": "por",
  "warc_filename": "crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00352.warc.gz",
  "warc_record_offset": 971279893,
  "warc_record_length": 3873,
  "text": "...",
  "crawl_timestamp": "2023-02-02T20:28:21Z"
}
```

### Data Fields

- `url`: URL of the page
- `content_languages`: Language of the page
- `warc_filename`: Name of the WARC file
- `warc_record_offset`: Offset of the WARC record
- `warc_record_length`: Length of the WARC record
- `text`: Text of the page, in Markdown format
- `crawl_timestamp`: Timestamp of the crawl
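
The `warc_filename`, `warc_record_offset`, and `warc_record_length` fields make each row traceable to its source crawl. As a sketch (not part of any official Canarim tooling), a single record can be fetched from Common Crawl's public bucket with an HTTP range request, since each WARC record is an independently gzipped member; the values below come from the example instance above:

```python
# Sketch: fetch one WARC record from Common Crawl using the dataset's
# `warc_filename`, `warc_record_offset`, and `warc_record_length` fields.
import gzip

import requests

def fetch_warc_record(warc_filename: str, offset: int, length: int) -> str:
    url = f"https://data.commoncrawl.org/{warc_filename}"
    # Request exactly the bytes of this record (HTTP ranges are inclusive).
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    response = requests.get(url, headers=headers, timeout=60)
    response.raise_for_status()
    # Each Common Crawl WARC record is its own gzip member, so the
    # requested slice decompresses independently.
    return gzip.decompress(response.content).decode("utf-8", errors="replace")

record = fetch_warc_record(
    "crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/"
    "CC-MAIN-20230202200542-20230202230542-00352.warc.gz",
    971279893,
    3873,
)
print(record[:300])  # WARC headers, then the raw HTTP response and HTML
```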

## Text Extraction Overview

Canarim uses the [`Trafilatura`](https://trafilatura.readthedocs.io) library to extract textual content from the raw HTML and convert it to Markdown. Trafilatura preserves key structural elements, such as titles, subtitles, and bold and italic formatting, as Markdown, retaining the original document structure. During extraction it discards comments and other non-essential page elements, keeping only the main body of each web page.
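
A minimal sketch of this extraction step, assuming a recent Trafilatura release (`>= 1.9`) in which `output_format="markdown"` is available:

```python
# Sketch of the HTML -> Markdown conversion described above.
import trafilatura

# Download a page (any URL works; this one is just a placeholder).
html = trafilatura.fetch_url("https://example.com/")

markdown_text = trafilatura.extract(
    html,
    output_format="markdown",  # keep titles, bold, and italics as Markdown
    include_comments=False,    # drop comment sections, as described above
)
print(markdown_text)
```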

<br>

<p align="center">
  <img width="800" alt="Text Extraction Example" src="https://raw.githubusercontent.com/DominguesM/canarim/main/assets/canarim-text-extraction-preview.png">
</p>
<p align="center">
  <a href="https://g1.globo.com/ac/acre/natureza/amazonia/noticia/2023/01/03/para-comemorar-40-anos-do-parque-zoobotanico-da-ufac-livro-vai-reunir-depoimentos-de-envolvidos-no-inicio-do-projeto.ghtml" target="_blank">Original Web Page</a> and 
  <a href="https://github.com/DominguesM/canarim/blob/main/assets/extracted_text.md" target="_blank">Extracted Text</a>
</p>

## Usage

Below is an example of how to quickly explore a few samples from the dataset using the `datasets` library.

```python
!pip install -q datasets

from datasets import load_dataset

ds = load_dataset(
    "dominguesm/canarim",
    # Load only the `train` split
    split="train",
    # Load only the shards matching the prefix `train/data-0019`
    # and the suffix `-of-00192.arrow`
    data_files="train/data-0019*-of-00192.arrow",
    # Stream the data instead of downloading the whole dataset up front
    streaming=True,
)

# Keep only the examples whose `url` starts with `https://g1.globo.com/`
ds_globo = ds.filter(
    lambda example: example["url"].startswith("https://g1.globo.com/")
)

# Materialize the first 10 examples that pass the filter
data = list(ds_globo.take(10))

print(data[0])

# {
#     "url": "https://g1.globo.com/ac/acre/(...)",
#     "content_languages": "por",
#     "warc_filename": "crawl-data/CC-MAIN-2023-06/segments/1674764499919.70/warc/CC-MAIN-20230201081311-20230201111311-00552.warc.gz",
#     "warc_record_offset": 281625400,
#     "warc_record_length": 192934,
#     "text": "Parque Zoobotânico da Ufac guarda uma grande variedade espécies de árvores em Rio Branco — Foto: Arquivo/Ufac (...)",
#     "crawl_timestamp": "2023-02-01T10:38:52Z"
# }
```
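
Streaming mode keeps this kind of exploration cheap: the train split is roughly 1 TB, so iterating lazily over a subset of shards avoids downloading the entire dataset.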

## Dataset Statistics

| Split | # Samples   | Size (bytes)      | Size (GB) |
| ----- | ----------- | ----------------- | --------- |
| Train | 342,818,651 | 1,087,519,823,221 | 1,087.52  |

## Citing

If you use Canarim in your research, please cite the following:
  
```bibtex
@misc{maicon_domingues_2024,
  author    = {Maicon Domingues},
  title     = {canarim (Revision 640e079)},
  year      = {2024},
  url       = {https://huggingface.co/datasets/dominguesm/canarim},
  doi       = {10.57967/hf/1605},
  publisher = {Hugging Face}
}
```

## License

This dataset is licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license. You can use the dataset for any purpose, but you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

## Contact

For any questions or suggestions, please contact [Maicon Domingues](https://nlp.rocks/).