LoRA Adapter for TinyLlama-1.1B-Chat specialized on Motorcycle Repair QA
This repository contains LoRA adapter weights fine-tuned from the base model TinyLlama/TinyLlama-1.1B-Chat-v1.0. The goal was to enhance the model's knowledge and question-answering capabilities within the domain of motorcycle repair and maintenance, while leveraging the efficiency of the compact TinyLlama architecture.
This adapter was trained using QLoRA for memory efficiency.
Model Description
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Adapter Task: Question Answering / Instruction Following on Motorcycle Repair topics.
- Fine-tuning Method: QLoRA (4-bit quantization) via trl's SFTTrainer.
- Dataset: cahlen/cdg-motorcycle-repair-qa-data-85x10 (880 synthetically generated QA pairs).
Key Features
- Domain Specialization: Improved performance on questions related to motorcycle repair compared to the base model.
- Efficiency: Builds upon the small and efficient TinyLlama (1.1B parameters). The adapter itself is only ~580MB.
- QLoRA Trained: Enables loading the base model in 4-bit precision for reduced memory footprint during inference.
How to Use
You need to load the base model (TinyLlama/TinyLlama-1.1B-Chat-v1.0) and then apply this LoRA adapter on top. Ensure you have transformers, peft, accelerate, and bitsandbytes installed.
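For example (a typical setup; exact versions are not pinned here, and bitsandbytes assumes a CUDA-capable environment):

pip install transformers peft accelerate bitsandbytes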
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline
from peft import PeftModel
# --- Configuration ---
base_model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "cahlen/tinyllama-motorcycle-repair-qa-adapter" # This is the adapter you are using
device_map = "auto"
# --- Load Tokenizer ---
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
# --- Configure Quantization ---
use_4bit = True  # Set to False if not using 4-bit
compute_dtype = torch.float16  # Default compute dtype
quantization_config = None
if use_4bit and torch.cuda.is_available():
    print("Using 4-bit quantization")
    quantization_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=compute_dtype,
        bnb_4bit_use_double_quant=False,
    )
else:
    print("Not using 4-bit quantization or CUDA not available")
    compute_dtype = torch.float32  # Fall back to float32 on CPU
# --- Load Base Model ---
print(f"Loading base model: {base_model_id}")
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=quantization_config,
    device_map=device_map,
    trust_remote_code=True,
    torch_dtype=compute_dtype,  # Matches the compute dtype chosen above
)
base_model.config.use_cache = True
# --- Load LoRA Adapter ---
print(f"Loading LoRA adapter: {adapter_id}")
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
print("Adapter loaded successfully.")
# --- Prepare Prompt ---
# Example prompt
topic = "Brake System"
question = "What are the signs of worn brake pads?"
system_prompt = "You are a helpful assistant knowledgeable about motorcycle repair."
user_query = f"Topic: {topic}\nQuestion: {question}"
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_query},
]
formatted_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # Appends the <|assistant|> turn marker
)
print(f"--- Prompt ---\n{formatted_prompt}")
# --- Generate Response ---
print("Generating...")
pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
result = pipe(formatted_prompt)
# --- Print Response ---
print("\n--- Output ---")
print(result[0]['generated_text'])
# Extract only the assistant's response
assistant_response = result[0]['generated_text'][len(formatted_prompt):].strip()
print("\n--- Assistant Only ---")
print(assistant_response)
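Optionally, the adapter can be merged into a full-precision copy of the base model so that inference no longer requires peft. A minimal sketch, assuming enough memory for the fp16 base weights (roughly 2.2 GB for 1.1B parameters) and a hypothetical output directory; note that merging is done on a non-quantized copy of the base model, not the 4-bit one loaded above:

# --- Optional: Merge Adapter into Base Model ---
# Merge LoRA weights into a non-quantized fp16 copy of the base model
base_fp16 = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map=device_map,
)
merged = PeftModel.from_pretrained(base_fp16, adapter_id).merge_and_unload()
merged.save_pretrained("tinyllama-motorcycle-merged")     # hypothetical path
tokenizer.save_pretrained("tinyllama-motorcycle-merged")  # keep tokenizer alongside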
Training Details
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Dataset: cahlen/cdg-motorcycle-repair-qa-data-85x10 (880 examples)
- Training Method: QLoRA using trl.SFTTrainer
- QLoRA Config: 4-bit NF4 quantization, float16 compute dtype
- LoRA Config: r=64, lora_alpha=16, lora_dropout=0.1, target modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
- Training Arguments (a configuration sketch follows this list):
- Epochs: 1
- Batch Size (per device): 4
- Gradient Accumulation: 2 (Effective Batch Size: 8)
- Optimizer: PagedAdamW (32-bit)
- Learning Rate: 2e-4
- LR Scheduler: Cosine
- Sequence Length: 512
- Precision: Mixed (FP16)
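The exact training script is not part of this repository. The following is a minimal sketch of how the hyperparameters above map onto peft, transformers, and trl, assuming a trl release where SFTTrainer accepts dataset_text_field and max_seq_length directly (newer versions move these into SFTConfig), and assuming the dataset exposes a single formatted text column named "text"; the output directory is likewise hypothetical.

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

# LoRA configuration matching the values listed above
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# 4-bit NF4 base model, as in the QLoRA config above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
dataset = load_dataset("cahlen/cdg-motorcycle-repair-qa-data-85x10", split="train")

training_args = TrainingArguments(
    output_dir="./results",         # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,  # effective batch size 8
    optim="paged_adamw_32bit",
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    fp16=True,
)
trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",      # assumption: dataset exposes a "text" column
    max_seq_length=512,
)
trainer.train()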
Example Usage & Responses
Here are some examples of prompts and the responses generated by this fine-tuned adapter:
Example 1: Tire Pressure
<|system|>
You are a helpful assistant knowledgeable about motorcycle repair.</s>
<|user|>
Topic: Tire Maintenance
Question: What is the first step in checking tire pressure?</s>
<|assistant|>
The first step is to check the tire pressure, then check the tire pressure gauge, and finally check the tire tread depth.
Example 2: Oil Change Frequency
<|system|>
You are a helpful assistant knowledgeable about motorcycle repair.</s>
<|user|>
Topic: Oil Maintenance
Question: How often should I change my motorcycle oil?</s>
<|assistant|>
It is recommended to change your motorcycle oil every 5,000 to 10,000 miles, or as recommended by the manufacturer.
Example 3: Worn Brake Pads
<|system|>
You are a helpful assistant knowledgeable about motorcycle repair.</s>
<|user|>
Topic: Brake System
Question: What are the signs of worn brake pads?</s>
<|assistant|>
Worn brake pads can be felt in the brake pedal, resulting in a rough or jerky braking action, or a noticeable decrease in braking performance.
Example 4: General Knowledge Retention
<|system|>
You are a helpful assistant knowledgeable about motorcycle repair.</s>
<|user|>
Topic: Geography
Question: What is the capital of France?</s>
<|assistant|>
The capital of France is Paris.
Limitations and Bias
- This model inherits limitations and potential biases from the base TinyLlama model and the synthetically generated dataset.
- The motorcycle repair knowledge, while enhanced, is based on AI-generated data and has not been verified by human experts. Do not rely solely on this model for critical repair decisions. Always consult official service manuals and qualified mechanics.
- Performance on topics outside of motorcycle repair may be degraded compared to the base model.
Citation
If you use this adapter, please cite the base model and consider citing this repository:
@misc{cahlen_tinyllama_motorcycle_repair_qa_adapter,
  author = {Cahlen},
  title = {LoRA Adapter for TinyLlama-1.1B-Chat specialized on Motorcycle Repair QA},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/cahlen/tinyllama-motorcycle-repair-qa-adapter}}
}
@misc{zhang2024tinyllama,
  title = {TinyLlama: An Open-Source Small Language Model},
  author = {Peiyuan Zhang and Guangtao Zeng and Tianduo Wang and Wei Lu},
  year = {2024},
  eprint = {2401.02385},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}