# Fino1-14B
Fino1-14B is a fine-tuned version of Qwen2.5-14B-Instruct, designed to improve performance on financial reasoning tasks. The model was trained with SFT and RF on TheFinAI/Fino1_Reasoning_Path_FinQA_v2 to strengthen its financial reasoning capabilities. See our paper at https://arxiv.org/abs/2502.08127 for details.
## Model Details
- Model Name: Fino1-14B
- Base Model: Qwen2.5-14B-Instruct
- Fine-Tuned On: TheFinAI/Fino1_Reasoning_Path_FinQA_v2, derived from multiple financial datasets (see the loading sketch below)
- Training Method: SFT and RF
- Objective: Enhance performance on specific tasks such as financial mathematical reasoning
- Tokenizer: Inherited from Qwen/Qwen2.5-14B-Instruct
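
The fine-tuning data is published on the Hub, so it can be inspected directly with the `datasets` library. A minimal sketch follows; the `train` split name is an assumption, so check the dataset card for the actual splits and fields:

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub
# (assumes a "train" split; field names may differ from what you expect)
ds = load_dataset("TheFinAI/Fino1_Reasoning_Path_FinQA_v2", split="train")

print(ds)      # column names and number of rows
print(ds[0])   # one reasoning-path example
```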
## Training Configuration
- Training Hardware: GPU: [e.g., 4xH100]
- Batch Size: [e.g., 16]
- Learning Rate: [e.g., 2e-5]
- Epochs: [e.g., 3]
- Optimizer: [e.g., AdamW, LAMB] (see the configuration sketch below)
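
For reference, here is a minimal sketch of how the bracketed example values above could be expressed as `transformers.TrainingArguments`. These are the placeholder values from this card, not the verified training recipe for Fino1-14B:

```python
from transformers import TrainingArguments

# Hypothetical configuration mirroring the example values listed above;
# the actual hyperparameters used to train Fino1-14B may differ.
training_args = TrainingArguments(
    output_dir="fino1-14b-sft",       # hypothetical output path
    per_device_train_batch_size=16,   # [e.g., 16]
    learning_rate=2e-5,               # [e.g., 2e-5]
    num_train_epochs=3,               # [e.g., 3]
    optim="adamw_torch",              # [e.g., AdamW]
    bf16=True,                        # common choice on H100-class GPUs
)
```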
## Usage

To use Fino1-14B with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub
model_name = "TheFinAI/Fino1-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Run a simple generation
input_text = "What is the result of 3-5?"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
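
Because the base model is an instruct model, prompts formatted with the inherited chat template will likely give better results than raw text. A hedged sketch, reusing the `tokenizer` and `model` from the snippet above; the system prompt and the example question are assumptions:

```python
# Build a chat-formatted prompt via the inherited Qwen2.5 chat template
messages = [
    {"role": "system", "content": "You are a helpful financial reasoning assistant."},  # assumed system prompt
    {"role": "user", "content": "A company's revenue grew from $120M to $150M. What is the growth rate?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```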
## Citation

If you use this model in your research, please cite:
```bibtex
@article{qian2025fino1,
  title={Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance},
  author={Qian, Lingfei and Zhou, Weipeng and Wang, Yan and Peng, Xueqing and Huang, Jimin and Xie, Qianqian},
  journal={arXiv preprint arXiv:2502.08127},
  year={2025}
}
```