---
model_name: QwenMedic-v1
language: en
license: apache-2.0
pipeline_tag: text-generation
tags:
- medical
- clinical
- question-answering
- summarization
- decision-support
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
- jtatman/medical-sci-instruct-1m-sharegpt
---
## Model Card: QwenMedic-v1
<p align="center">
<img
src="https://huggingface.co/ross-dev/QwenMedic-v1/resolve/main/assets/model_image.png"
alt="Model Image"
width="350"
/>
</p>
### Overview
QwenMedic-v1 is a medical-specialty adaptation of the Qwen3-1.7B causal language model, fine-tuned for clinical reasoning and instruction-following tasks. It was trained for **1 epoch** on two curated medical datasets to improve diagnostic Q&A and clinical summarization.
### Base Model
- **Architecture:** Qwen3-1.7B (28 layers, 16 Q / 8 KV attention heads, 32,768-token context)
- **Parameters:** 1.7 billion
- **Quantization:** float16 and int4 supported (a 4-bit loading sketch follows this list)
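Int4 loading works through `bitsandbytes`; below is a minimal sketch assuming an NF4 recipe, since the exact quantization settings used for any published int4 weights are not specified here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ross-dev/QwenMedic-v1"

# assumed 4-bit recipe; any bitsandbytes-compatible config works
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```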
### Fine-Tuning Data
1. **Medical Reasoning SFT** (`FreedomIntelligence/medical-o1-reasoning-SFT`)
- Chain-of-thought reasoning examples on verifiable medical problems
- Language: English
- Split used: `train`
2. **General Medical Instruction** (`jtatman/medical-sci-instruct-1m-sharegpt`)
- Conversational Q&A prompts across medical topics
   - Sampled the first 100,000 examples via `train[:100000]` (see the loading sketch after this list)
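A minimal loading sketch with the `datasets` library; the `"en"` config name and any downstream mixing or prompt-formatting steps are assumptions, since only the splits are documented above:

```python
from datasets import load_dataset

# chain-of-thought reasoning examples ("en" config assumed from the language note)
reasoning = load_dataset(
    "FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train"
)

# conversational medical Q&A, restricted to the first 100,000 examples
instruct = load_dataset(
    "jtatman/medical-sci-instruct-1m-sharegpt", split="train[:100000]"
)

print(len(reasoning), len(instruct))
```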
### Training Configuration
- **Framework:** PyTorch + Hugging Face Transformers
- **Optimizer:** AdamW
- **Learning Rate:** 2 × 10⁻⁵
- **Batch Size:** 16 (with gradient accumulation)
- **Precision:** bfloat16 mixed precision
- **Hardware:** NVIDIA RTX 3090 (24 GB)
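A hedged reconstruction of these settings with `TrainingArguments`; the per-device batch / gradient-accumulation split and the output path are assumptions, since only the effective batch size of 16 is stated:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwenmedic-v1-sft",   # hypothetical output path
    num_train_epochs=1,              # single epoch, per the Overview
    learning_rate=2e-5,              # AdamW is the Trainer's default optimizer
    per_device_train_batch_size=4,   # assumed split of the effective batch of 16
    gradient_accumulation_steps=4,   # 4 x 4 = 16 effective batch size
    bf16=True,                       # bfloat16 mixed precision
)
```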
### Intended Uses
- Clinical question answering & differential diagnosis
- Summarization of patient notes
- Medical education & decision support
### Limitations & Risks
- May produce **hallucinations** or plausible-sounding but incorrect advice
- **Biases** due to training-data coverage
- **Not FDA-approved**—should not replace professional medical judgment
- Avoid feeding **patient-identifiable** data without proper de-identification
### Summary of Final Training Metrics
| Metric | Step | Smoothed | Raw Value |
|------------------:|-----:|---------:|----------:|
| **Epoch** | 1539 | 0.9979 | 0.9997 |
| **Gradient Norm** | 1539 | 0.3882 | 0.3974 |
| **Learning Rate** | 1539 | — | 0 |
| **Training Loss** | 1539 | 1.5216 | 1.4703 |

The final learning rate of 0 is consistent with a schedule that decays to zero by the end of the single training epoch.
### Inference Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwenMedic-v1"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
# prepare the model input
prompt = (
    "A 55-year-old male with Type 2 diabetes presents with sudden chest pain "
    "and diaphoresis. What are the top differential diagnoses?"
)
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # switches between thinking and non-thinking modes (default: True)
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# run generation (the large max_new_tokens budget leaves room for the thinking trace)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# split the reasoning trace from the final answer
try:
    # reverse search for token id 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
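Setting `enable_thinking=False` in `apply_chat_template` skips the reasoning trace when a direct answer is preferred; token id 151668 is the `</think>` marker in the Qwen3 tokenizer, which is why the parsing step above searches for it from the end of the output.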
### Contact
- **Creator:** Andre Ross
- **Company:** Ross Technologies
- **Email:** [email protected]