---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
    - name: idx
      dtype: int64
  splits:
    - name: train
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
language:
  - en
pretty_name: 'Pretokenized Dolma: Pre-tokenized, Pre-shuffled Dolma'
size_categories:
  - 100B<n<1T
---

# Pretokenized Dolma

A pre-tokenized, pre-shuffled version of [Dolma](https://huggingface.co/datasets/allenai/dolma). Documents are tokenized and chunked into fixed-length sequences, each ending with an end-of-sequence (EOS) token. After tokenization, we shuffled and evenly sampled from the token stream to create 100 uniform shards. These were then further divided into 10,000 smaller shards to support fast loading and parallel training. Only full-length sequences are retained to ensure consistency across samples.

The dataset is stored as Parquet files, each containing token sequences under the key `input_ids`. We release the exact scripts used to create this dataset in our [pico-lm/pico-dataset](https://github.com/pico-lm/pico-dataset) GitHub repo.

### Usage

```
from datasets import load_dataset

dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)
```
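Because the split is streamed, examples arrive one at a time. The sketch below shows one way to collate them into training batches; the batch size, the use of PyTorch, and the helper name `batches` are illustrative choices, not part of the dataset itself.

```
import torch
from datasets import load_dataset

# Stream the train split so the corpus never has to be fully downloaded up front.
dataset = load_dataset("pico-lm/pretokenized-dolma", split="train", streaming=True)

def batches(stream, batch_size=8):
    """Group streamed examples into rectangular (batch_size, seq_len) tensors."""
    buffer = []
    for example in stream:
        buffer.append(example["input_ids"])
        if len(buffer) == batch_size:
            # Only full-length sequences are stored, so stacking needs no padding.
            yield torch.tensor(buffer, dtype=torch.long)
            buffer = []

for batch in batches(dataset):
    print(batch.shape)  # (batch_size, sequence_length)
    break
```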
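If you prefer to work with the raw Parquet shards directly (for example, after downloading the repository with `huggingface_hub`), a sketch along these lines should work. The shard filename below is hypothetical and only meant to match the `data/train-*` pattern from the config.

```
import pyarrow.parquet as pq

# Hypothetical shard name matching the data/train-* pattern; substitute a real file.
shard_path = "data/train-00000-of-10000.parquet"

table = pq.read_table(shard_path, columns=["input_ids"])
sequences = table.column("input_ids").to_pylist()  # list of token-id lists
print(len(sequences), len(sequences[0]))
```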