nielsr (HF Staff) committed
Commit b889e97 · verified · 1 Parent(s): be0eb77

Improve model card with paper link and updated citation


This PR improves the model card by:

- Adding a direct link to the paper in the introduction.
- Updating the BibTeX citation to match the GitHub README (new title and entry type, with the year corrected to 2025 instead of 2024).

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -1,8 +1,8 @@
 ---
-library_name: transformers
-license: apache-2.0
 datasets:
 - liuhaotian/LLaVA-Instruct-150K
+library_name: transformers
+license: apache-2.0
 pipeline_tag: image-text-to-text
 ---
 
@@ -10,7 +10,7 @@ pipeline_tag: image-text-to-text
 
 ```LLaVA-MORE``` enhances the well-known LLaVA architecture by integrating the use of LLaMA 3.1 as the language model. We are publicly releasing the checkpoints for stages one and two for the first model with 8B parameters.
 
-In this model space, you will find the stage two (finetuning) weights of LLaVA-MORE LLaMA 3.1 8B.
+In this model space, you will find the stage two (finetuning) weights of LLaVA-MORE LLaMA 3.1 8B, as described in [this paper](https://huggingface.co/papers/2503.15621).
 
 For more information, visit our [LLaVA-MORE](https://github.com/aimagelab/LLaVA-MORE) repository.
 
@@ -25,10 +25,10 @@ python -u llava/eval/run_llava.py
 If you make use of our work, please cite our repo:
 
 ```bibtex
-@misc{cocchi2024llavamore,
-  title={{LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1}},
-  author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
-  url={https://github.com/aimagelab/LLaVA-MORE},
-  year={2024}
+@inproceedings{cocchi2025llavamore,
+  title={{LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning}},
+  author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Baraldi, Lorenzo and Cornia, Marcella and Cucchiara, Rita},
+  booktitle={arxiv},
+  year={2025}
 }
 ```
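Since the updated front matter declares `library_name: transformers` and `pipeline_tag: image-text-to-text`, and the hunk context above surfaces the repository's `python -u llava/eval/run_llava.py` entry point, here is a minimal Python sketch of the surrounding workflow. The repo id and every flag passed to the script are assumptions modeled on upstream LLaVA, not something this commit documents; treat the LLaVA-MORE README as authoritative.

```python
# Sketch only: download the stage-two checkpoint, then hand it to the repository's
# inference script.  The repo id and all flags below are assumptions (modeled on
# upstream LLaVA); check the LLaVA-MORE README for the exact command.
import subprocess

from huggingface_hub import snapshot_download

# Pull the full checkpoint from the Hub (illustrative repo id).
local_dir = snapshot_download(repo_id="aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning")

# Call the entry point referenced in the diff's hunk header.
subprocess.run(
    [
        "python", "-u", "llava/eval/run_llava.py",
        "--model-path", local_dir,          # assumed flag, as in upstream LLaVA
        "--image-file", "example.jpg",      # assumed flag
        "--query", "Describe this image.",  # assumed flag
    ],
    check=True,
)
```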