---
datasets:
  - monology/pile-uncopyrighted
  - MiniLLM/pile-tokenized
language:
  - en
library_name: transformers
license: apache-2.0
metrics:
  - accuracy
pipeline_tag: text-generation
---

# Ref-Pretrain-Qwen-104M

[paper](https://arxiv.org/abs/2410.17215) | code

Ref-Pretrain-Qwen-104M is a 104M-parameter model with the Qwen architecture, conventionally pre-trained from scratch on the Pile for 5B tokens.
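
Since it is a standard `transformers` causal LM, it can be loaded with the usual `Auto*` classes. A minimal sketch, assuming the hub id `MiniLLM/Ref-Pretrain-Qwen-104M` (inferred from the model name; adjust to the actual repo path):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/Ref-Pretrain-Qwen-104M"  # assumed hub id; check the repo path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Greedy generation from a short prompt
inputs = tokenizer("The Pile is a", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```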

We also open-source the tokenized pre-training corpus for reproducibility.
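A hedged sketch of pulling that corpus (`MiniLLM/pile-tokenized`, listed in the metadata above) with the `datasets` library; the split name is an assumption, so check the dataset card:

```python
from datasets import load_dataset

# Stream to avoid downloading the full corpus up front
ds = load_dataset("MiniLLM/pile-tokenized", split="train", streaming=True)
print(next(iter(ds)))  # one pre-tokenized example
```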

It serves as the reference model in the MiniPLM knowledge distillation framework, where it is used to construct the refined pre-training corpus. That data is then used to train the MiniPLM models.
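
As a rough illustration (not the authors' code; see the paper for the actual procedure), MiniPLM-style difference sampling compares how likely a large teacher and this small reference model find each document, and keeps documents the teacher prefers. The teacher id below is hypothetical, and in practice both models must share a tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MiniLLM/Ref-Pretrain-Qwen-104M")  # assumed id
ref_model = AutoModelForCausalLM.from_pretrained("MiniLLM/Ref-Pretrain-Qwen-104M")
teacher = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B")  # hypothetical teacher

@torch.no_grad()
def avg_logprob(model, text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model(ids, labels=ids)  # HF shifts labels internally for causal LMs
    return -out.loss.item()       # mean per-token log-probability

doc = "An example document from the pre-training corpus."
# Documents the teacher finds much more likely than the reference model
# score higher and are favored when building the refined corpus.
score = avg_logprob(teacher, doc) - avg_logprob(ref_model, doc)
```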

## Evaluation

MiniPLM models achieve better performance given the same training compute and scale well across model sizes (see the paper for the full results).

## Citation

```bibtex
@article{miniplm,
    title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
    author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
    journal={arXiv preprint arXiv:2410.17215},
    year={2024}
}
```