Update README.md
README.md
@@ -85,7 +85,7 @@ Note: other environments may also work.
 - `gqa`: [GQA images](https://nlp.stanford.edu/data/gqa/images.zip).
 - `llava_cap`: [images](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/blob/main/images.zip).
 - `v3det`: The V3Det dataset can be downloaded from [opendatalab](https://opendatalab.com/V3Det/V3Det).
-- Our generated jsonls can be found [huggingface](https://huggingface.co/fushh7/LLMDet) or [modelscope](https://modelscope.cn/models/fushh7/LLMDet).
+- Our generated jsonls can be found in [huggingface](https://huggingface.co/fushh7/LLMDet) or [modelscope](https://modelscope.cn/models/fushh7/LLMDet).
 - For other evaluation datasets, please refer to [MM-GDINO](https://github.com/open-mmlab/mmdetection/blob/main/configs/mm_grounding_dino/dataset_prepare.md).

 ### 5 Usage

@@ -112,7 +112,7 @@ If you find our work helpful for your research, please consider citing our paper

 ```
 @article{fu2025llmdet,
-title={
+title={LLMDet: Learning Strong Open-Vocabulary Object Detectors under the Supervision of Large Language Models},
 author={Fu, Shenghao and Yang, Qize and Mo, Qijie and Yan, Junkai and Wei, Xihan and Meng, Jingke and Xie, Xiaohua and Zheng, Wei-Shi},
 journal={arXiv preprint arXiv:2501.18954},
 year={2025}