alea-institute committed
Commit 3537940 · verified · 1 parent: ab687fb

Update README and config files - README.md

Files changed (1): README.md (+6, −5)
README.md CHANGED
@@ -30,6 +30,12 @@ configs:
 
 This dataset is part of the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project, which provides copyright-clean training resources for large language models.
 
+## Dataset Details
+
+- **Format**: Parquet files containing document text and metadata
+- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
+- **Tokenizer**: The `tokens` field uses the [kl3m-004-128k-cased](https://huggingface.co/alea-institute/kl3m-004-128k-cased) tokenizer, a case-sensitive 128K vocabulary tokenizer optimized for legal, financial, and enterprise documents
+
 ## Abstract
 
 Practically all large language models have been pre-trained on data that is subject to global uncertainty related to copyright infringement and breach of contract. This creates potential risk for users and developers due to this uncertain legal status. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline that minimizes risks related to copyright or breach of contract.
@@ -44,11 +50,6 @@ The foundation of this project is a corpus of over 132 million documents and tri
 
 All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms. We are committed to continuing this project in furtherance of a more ethical, legal, and sustainable approach to the development and use of AI models.
 
-## Dataset Details
-
-- **Format**: Parquet files containing document text and metadata
-- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
-
 ## Legal Basis
 
 This dataset is fully compliant with copyright law and contractual terms. The content is included based on the following legal foundation:
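
The added Dataset Details section pairs the Parquet `tokens` field with the kl3m-004-128k-cased tokenizer. A minimal sketch of reading one record and decoding its tokens is below; the dataset repository id is a hypothetical placeholder, and it assumes the shards load via the `datasets` library and that `tokens` is a flat list of token ids.

```python
# Minimal sketch, not the project's official loader.
# Assumptions: the Parquet shards are exposed as a Hugging Face dataset
# (the repo id below is a placeholder) and `tokens` holds a flat list of
# ids produced by the kl3m-004-128k-cased tokenizer.
from datasets import load_dataset
from transformers import AutoTokenizer

DATASET_ID = "alea-institute/kl3m-data-snapshot"  # hypothetical id; substitute the actual dataset repo

# Stream to avoid downloading every shard up front.
dataset = load_dataset(DATASET_ID, split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("alea-institute/kl3m-004-128k-cased")

record = next(iter(dataset))
print(sorted(record.keys()))                      # document text and metadata fields
print(tokenizer.decode(record["tokens"])[:200])   # first 200 characters of the decoded text
```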