---
library_name: transformers
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
  - generated_from_trainer
model-index:
  - name: PSCManual Pre Trained Model
    results: []
---

# PSCManual Pre Trained Model

This model is a continued pre-training (CPT) version of TinyLlama/TinyLlama-1.1B-Chat-v1.0, further trained on the NHSN 2025 Patient Safety Component Manual.

## Intended uses & limitations

This is a Continued Pre-Training (CPT) model designed to function primarily as an autocomplete system. It was developed as an experimental exercise in knowledge injection: the base model was further pre-trained on the NHSN 2025 Patient Safety Component Manual. It is not intended for production use. Its outputs may be suboptimal because the training corpus falls far short of the Chinchilla scaling guideline of roughly 20 tokens per parameter; for a 1.1B-parameter model that would imply on the order of 1.1B × 20 ≈ 22 billion training tokens, vastly more text than a single manual provides.
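
As a rough illustration of the intended autocomplete-style use, the sketch below loads the checkpoint with 🤗 Transformers and continues a prompt from the manual's domain. The repository id `sharadsin/PSCManual_CPT_Model` and the prompt text are assumptions for the example, not part of the training setup.

```python
# Minimal autocomplete-style usage sketch (repo id and prompt are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sharadsin/PSCManual_CPT_Model"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "A central line-associated bloodstream infection (CLABSI) is defined as"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```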

## Training procedure

CPT (Continued Pre-Training) for knowledge injection: the base checkpoint was further trained on raw text from the manual with the standard causal language modeling objective.
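
A minimal sketch of that kind of data preparation is shown below, assuming the manual has been extracted to a plain-text file named `pscmanual_2025.txt` (a hypothetical path) and packed into fixed-length blocks; the exact preprocessing used for this model is not documented here.

```python
# Sketch: pack raw manual text into fixed-length blocks for causal-LM CPT.
# The file name and block size are assumptions, not the author's exact setup.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
raw = load_dataset("text", data_files={"train": "pscmanual_2025.txt"})

block_size = 2048  # TinyLlama's context length

def tokenize(batch):
    return tokenizer(batch["text"])

def group_texts(examples):
    # Concatenate all token ids, then split into contiguous blocks.
    concatenated = sum(examples["input_ids"], [])
    total = (len(concatenated) // block_size) * block_size
    input_ids = [concatenated[i : i + block_size] for i in range(0, total, block_size)]
    # For causal LM, labels mirror the inputs (the model shifts them internally).
    return {"input_ids": input_ids, "labels": [ids[:] for ids in input_ids]}

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
lm_dataset = tokenized.map(
    group_texts, batched=True, remove_columns=tokenized["train"].column_names
)
```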

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32 (2 per device × 4 GPUs × 4 gradient accumulation steps)
  • total_eval_batch_size: 8 (2 per device × 4 GPUs)
  • optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes, `OptimizerNames.ADAMW_BNB`) with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 16
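
These values map fairly directly onto 🤗 `TrainingArguments`; the sketch below reconstructs them for a 4-GPU run. Settings not listed above (the output directory and logging frequency) are assumptions.

```python
# Sketch of TrainingArguments mirroring the values listed above.
# output_dir and logging_steps are assumptions; 2 per device × 4 GPUs
# × 4 gradient-accumulation steps gives the total train batch size of 32.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pscmanual-cpt",      # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    max_steps=16,
    warmup_steps=10,
    lr_scheduler_type="cosine",
    optim="adamw_bnb_8bit",          # OptimizerNames.ADAMW_BNB
    seed=42,
    logging_steps=1,                 # assumed
)

# trainer = Trainer(model=model, args=args, train_dataset=lm_dataset["train"])
# trainer.train()
```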

### Framework versions

  • Transformers 4.50.0
  • PyTorch 2.5.0+cu121
  • Datasets 3.4.1
  • Tokenizers 0.21.1