---
library_name: transformers
license: apache-2.0
datasets:
- UCSC-VLAA/STAR-1
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---
# 🌟 STAR-1: Safer Alignment of Reasoning LLMs with 1K Data

📃 Paper | 🤗 STAR-1 Data | 🤗 STAR-1 Model | 📚 Project Page
## Introduction
STAR-1 is a high-quality safety dataset designed to enhance safety alignment in large reasoning models (LRMs) like DeepSeek-R1.
- Built on the principles of diversity, deliberative reasoning, and rigorous filtering, STAR-1 integrates and refines data from multiple sources to provide policy-grounded reasoning samples.
- The dataset contains 1,000 carefully selected examples, each aligned with best safety practices through GPT-4o-based evaluation.
- Fine-tuning with STAR-1 leads to significant safety improvements across multiple benchmarks, with minimal impact on reasoning capabilities.
We open-source our STAR1-R1-Distill-14B model here, which is fine-tuned on the STAR-1 dataset.
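For convenience, below is a minimal inference sketch using 🤗 transformers. The model ID matches the table in the Artifacts section; the chat template usage, generation settings, and example prompt are illustrative assumptions rather than the official evaluation setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UCSC-VLAA/STAR1-R1-Distill-14B"

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a chat-formatted prompt; like other R1-Distill models, the model
# is expected to emit its reasoning inside <think>...</think> before the answer.
messages = [{"role": "user", "content": "How do I safely dispose of old batteries?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```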
## Artifacts

### Data
| Dataset | Num. of Samples | URL |
|---|---|---|
| STAR-1 | 1K | 🤗 UCSC-VLAA/STAR-1 |
| STAR 41K | 41K | 🤗 UCSC-VLAA/STAR-41K |
| STAR-benign-915 | 915 | 🤗 UCSC-VLAA/STAR-benign-915 |
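For reference, here is a minimal sketch of loading STAR-1 with the 🤗 `datasets` library; the `train` split name and the field inspection are assumptions about the repo layout, not a documented schema.

```python
from datasets import load_dataset

# Download the 1K-example STAR-1 safety dataset from the Hugging Face Hub.
# NOTE: the "train" split name is an assumption about the repo layout.
ds = load_dataset("UCSC-VLAA/STAR-1", split="train")

print(len(ds))        # expect 1,000 examples, per the paper
print(ds[0].keys())   # inspect the available fields before fine-tuning
```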
### Model
| Model | Type | URL |
|---|---|---|
| STAR1-R1-Distill-1.5B | R1-Distill-Qwen-1.5B trained on STAR-1 | 🤗 UCSC-VLAA/STAR1-R1-Distill-1.5B |
| STAR1-R1-Distill-7B | R1-Distill-Qwen-7B trained on STAR-1 | 🤗 UCSC-VLAA/STAR1-R1-Distill-7B |
| STAR1-R1-Distill-8B | R1-Distill-Llama-8B trained on STAR-1 | 🤗 UCSC-VLAA/STAR1-R1-Distill-8B |
| STAR1-R1-Distill-14B | R1-Distill-Qwen-14B trained on STAR-1 | 🤗 UCSC-VLAA/STAR1-R1-Distill-14B |
| STAR1-R1-Distill-32B | R1-Distill-Qwen-32B trained on STAR-1 | 🤗 UCSC-VLAA/STAR1-R1-Distill-32B |
## Evaluation
See our GitHub repo for the evaluation code.
## Acknowledgement
This work is partially supported by a gift from Open Philanthropy. We thank the NAIRR Pilot Program and the Microsoft Accelerate Foundation Models Research Program for supporting our computing needs.
## Citation
```bibtex
@article{wang2025star1saferalignmentreasoning,
  title={STAR-1: Safer Alignment of Reasoning LLMs with 1K Data},
  author={Zijun Wang and Haoqin Tu and Yuhan Wang and Juncheng Wu and Jieru Mei and Brian R. Bartoldson and Bhavya Kailkhura and Cihang Xie},
  year={2025},
  journal={arXiv preprint arXiv:2504.01903}
}
```