---
license: cc-by-4.0
language:
- en
base_model: Qwen/Qwen2-VL-7B-Instruct
---
# Safe-o1-V Model Card π€β¨
## Model Overview π
`Safe-o1-V` is a multi-modal language model, built on `Qwen/Qwen2-VL-7B-Instruct`, that introduces a **self-monitoring thinking process** to detect and filter unsafe content during reasoning, yielding more robust safety performance 🔒.
---
## Features and Highlights π
- **Safety First** π: Through a self-monitoring mechanism, it detects potential unsafe content in the thinking process in real-time, ensuring outputs consistently align with ethical and safety standards.
- **Enhanced Robustness** π‘: Compared to traditional models, `Safe-o1-V` performs more stably in complex scenarios, reducing unexpected "derailments."
- **User-Friendly** 😊: Designed to serve as a trustworthy conversational partner across a wide range of application scenarios, striking a balance between helpfulness and harmlessness.
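
The self-monitoring idea above can be illustrated with a toy sketch: a monitor scores each intermediate reasoning chunk and drops chunks it flags before they reach the final answer. This is purely illustrative and is **not** Safe-o1-V's actual mechanism; the `unsafe_score` keyword heuristic and the threshold are hypothetical stand-ins for a learned safety classifier.

```python
# Toy sketch of a self-monitoring thinking loop (illustrative only; the
# real model uses a learned monitor, not a keyword list).

UNSAFE_KEYWORDS = {"exploit", "weapon"}

def unsafe_score(chunk: str) -> float:
    """Hypothetical stand-in for a learned safety classifier:
    fraction of words in the chunk that hit the unsafe list."""
    words = chunk.lower().split()
    return sum(w in UNSAFE_KEYWORDS for w in words) / max(len(words), 1)

def monitored_generate(thought_chunks, threshold=0.1):
    """Keep only the reasoning chunks the monitor deems safe."""
    safe = []
    for chunk in thought_chunks:
        if unsafe_score(chunk) < threshold:
            safe.append(chunk)
        # Flagged chunks are filtered out before the final answer is composed.
    return " ".join(safe)

print(monitored_generate(
    ["Let me think step by step.", "Build a weapon like this ..."]
))
```

In the real model this monitoring happens inside the thinking process itself rather than as a post-hoc filter, but the control flow (score each step, suppress flagged steps) is the same shape.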

---
## Usage π
You can load `Safe-o1-V` using the Hugging Face `transformers` library:
```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Safe-o1-V is built on Qwen2-VL, so it is loaded with the vision-language
# classes; AutoModelForCausalLM does not support the Qwen2-VL architecture.
processor = AutoProcessor.from_pretrained("PKU-Alignment/Safe-o1-V")
model = Qwen2VLForConditionalGeneration.from_pretrained("PKU-Alignment/Safe-o1-V")

# Text-only example; images can also be passed through the processor.
messages = [{"role": "user", "content": [{"type": "text", "text": "Hello, World!"}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```