---
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: single_presence
path: data/single_presence-*
- split: co_occurrence
path: data/co_occurrence-*
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
- name: boxed_image
dtype: image
- name: box_xmin
dtype: float64
- name: box_xmax
dtype: float64
- name: box_ymin
dtype: float64
- name: box_ymax
dtype: float64
- name: is_coco
dtype: int64
- name: label_name
dtype: string
- name: co_occurrence
dtype: int64
- name: written_descriptions
sequence: string
- name: spoken_descriptions
sequence: string
splits:
- name: single_presence
num_bytes: 174917975.06868687
num_examples: 492
- name: co_occurrence
num_bytes: 367184530.93131316
num_examples: 993
download_size: 429876515
dataset_size: 542102506
---
# 💬 VLM-REG: Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation
## Overview
Referring Expression Generation (REG)—the task of producing a concise and unambiguous description that allows a listener to identify a target object—lies at the heart of pragmatic communication in vision-language systems. However, existing benchmarks suffer from two major limitations:
- Data leakage in RefCOCO/RefCOCO+, which raises concerns about evaluation contamination, especially for VLMs trained on MSCOCO.
- Lack of spoken data, despite the fact that real-world referring is often real-time and spontaneous, unlike written language, which benefits from planning and revision.
To address these gaps, we introduce RefOI, a curated dataset built from the OpenImages V7 Instance Segmentation validation set.
Key features:
- 1,485 real-world object instances, split nearly evenly between COCO (744) and non-COCO (741) classes.
- Includes both single-presence and co-occurrence images for each class.
- Each instance is annotated with 3 written and 2 spoken human referring expressions.
Using RefOI, we evaluate several state-of-the-art VLMs and uncover three tiers of pragmatic failure:
- Ambiguity: Generated expressions often fail to uniquely identify the referent.
- Redundancy: Models include excessive or irrelevant details, violating principles of informativeness and efficiency.
- Misalignment: Model preferences diverge from human pragmatics, favoring visual complexity over minimal spatial cues.
For token-level annotation of referring expressions, see the companion dataset RefOI-TLHF, which provides minimal span supervision for both human- and model-generated descriptions.
## Dataset Schema and Split
### Data Fields
Each entry in the dataset contains the following fields:
- `image`: The original image file.
- `mask`: A binary segmentation mask isolating the target object.
- `boxed_image`: The original image overlaid with a red bounding box highlighting the target object.
- `box_xmin`, `box_xmax`, `box_ymin`, `box_ymax`: The normalized bounding-box coordinates.
- `is_coco`: A binary flag (1 for COCO-class, 0 for non-COCO).
- `label_name`: The object's class label (e.g., "muffin," "giraffe").
- `co_occurrence`: The number of same-class instances in the image (1 = no distractors; >1 = multiple).
- `written_descriptions`: Three human-typed referring expressions.
- `spoken_descriptions`: Two human-spoken expressions (transcribed and optionally corrected by annotators).
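For orientation, the sketch below loads one record and rescales the box coordinates to pixels. It is an illustration, not official loading code, and it assumes the `box_*` values are normalized to the `[0, 1]` range relative to image width and height.

```python
from datasets import load_dataset

ds = load_dataset("Seed42Lab/RefOI", split="single_presence")
ex = ds[0]

# `image` is decoded to a PIL.Image by the `datasets` Image feature.
width, height = ex["image"].size

# Assumption: box_* coordinates are normalized to [0, 1]; rescale to pixels.
x0, x1 = ex["box_xmin"] * width, ex["box_xmax"] * width
y0, y1 = ex["box_ymin"] * height, ex["box_ymax"] * height

print(ex["label_name"], ex["is_coco"], ex["co_occurrence"])
print(f"box (px): ({x0:.0f}, {y0:.0f}) -> ({x1:.0f}, {y1:.0f})")
print(ex["written_descriptions"])  # three typed expressions
print(ex["spoken_descriptions"])   # two transcribed spoken expressions
```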
### Dataset Split
- `single_presence` (`co_occurrence = 1`): Only one object of the target class appears (no same-class distractors in the image).
- `co_occurrence` (`co_occurrence > 1`): Multiple objects of the same class appear in the image, introducing potential referential ambiguity.
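As a quick sanity check on these semantics (a sketch only, using the loading pattern shown in the Usage section below), one can confirm that every `single_presence` example reports exactly one instance of its class:

```python
from datasets import load_dataset

ds_single = load_dataset("Seed42Lab/RefOI", split="single_presence")

# Every example in this split should have no same-class distractors.
assert all(c == 1 for c in ds_single["co_occurrence"])
```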
## Usage
```python
from datasets import load_dataset

# only one object of the class
ds_single = load_dataset("Seed42Lab/RefOI", split="single_presence")
# multiple objects of the class
ds_multi = load_dataset("Seed42Lab/RefOI", split="co_occurrence")

print(ds_single[0])
print(ds_multi[0])
```
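Building on the loaders above, the hedged sketch below narrows the `co_occurrence` split to its more ambiguous scenes; the threshold of three same-class instances is an arbitrary illustration, not a value used in the paper.

```python
# Focus on scenes with three or more same-class instances (heavier ambiguity).
ds_hard = ds_multi.filter(lambda ex: ex["co_occurrence"] >= 3)
print(len(ds_hard))

# Compare a typed and a spoken description for the same target.
ex = ds_hard[0]
print(ex["label_name"])
print(ex["written_descriptions"][0])
print(ex["spoken_descriptions"][0])
```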
## Experiments
We compare multiple models across standard metrics, listener-based accuracy, and human judgment. Humans outperform all models by large margins (e.g., >90% vs. ~50%). Automatic metrics such as BLEU and CIDEr show poor correlation with human judgment, frequently ranking verbose models higher. Even listener-based scores (REC) fail to consistently match human preferences, indicating that existing metrics do not capture pragmatic competence effectively.
| Model | Instr. | BLEU-1 | BLEU-4 | ROUGE-1 | ROUGE-L | METEOR | CIDEr | SPICE | BERT | CLIP | REC | Human | Irrel% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLaVA-7B | Dft. | 13.27 | 1.60 | 18.09 | 16.30 | 19.29 | 2.10 | 10.50 | 85.51 | 79.02 | 32.41 | 39.46 | 87.30 |
| | Brf. | 28.74 | 6.05 | 36.46 | 35.50 | 19.15 | 10.80 | 24.59 | 89.02 | 70.72 | 25.51 | 30.57 | 41.95 |
| LLaVA-13B | Dft. | 8.17 | 1.07 | 11.98 | 10.94 | 16.89 | 0.77 | 7.92 | 84.61 | 79.85 | 30.13 | 46.40 | 91.85 |
| | Brf. | 28.96 | 5.81 | 36.44 | 35.64 | 20.13 | 8.14 | 21.63 | 88.42 | 72.99 | 28.92 | 32.53 | 49.65 |
| LLaVA-34B | Dft. | 6.29 | 0.78 | 9.82 | 9.11 | 16.15 | 0.07 | 7.61 | 84.39 | 79.86 | 33.42 | 46.53 | 92.90 |
| | Brf. | 28.55 | 6.38 | 32.99 | 31.67 | 20.48 | 9.60 | 16.50 | 88.50 | 74.95 | 35.24 | 36.77 | 56.11 |
| XComposer | Dft. | 5.25 | 0.65 | 8.38 | 7.81 | 14.58 | 3.10 | 6.37 | 84.11 | 79.86 | 38.06 | 52.19 | 92.81 |
| | Brf. | 13.59 | 2.17 | 17.77 | 16.69 | 19.95 | 5.52 | 10.63 | 85.52 | 79.66 | 38.47 | 51.65 | 80.36 |
| MiniCPM-V | Dft. | 6.38 | 0.67 | 9.86 | 8.78 | 15.28 | 0.05 | 6.30 | 84.29 | 80.38 | 37.93 | 45.12 | 92.97 |
| | Brf. | 16.03 | 3.15 | 19.56 | 18.19 | 18.77 | 6.36 | 11.16 | 86.29 | 78.55 | 35.04 | 45.79 | 72.87 |
| GLaMM | Dft. | 15.01 | 3.32 | 16.69 | 16.29 | 11.49 | 9.08 | 3.90 | 86.42 | 58.26 | 5.78 | 3.84 | 74.68 |
| | Brf. | 18.46 | 4.45 | 20.92 | 20.46 | 14.18 | 10.48 | 4.44 | 86.65 | 58.60 | 5.72 | 4.85 | 70.52 |
| CogVLM | Dft. | 31.13 | 8.70 | 33.89 | 32.32 | 23.50 | 41.62 | 24.09 | 89.78 | 66.54 | 33.29 | 26.67 | 26.39 |
| | Brf. | 31.39 | 8.69 | 34.70 | 32.94 | 24.87 | 41.41 | 24.74 | 90.00 | 69.15 | 38.80 | 33.53 | 29.88 |
| GPT-4o | Dft. | 7.47 | 0.85 | 11.61 | 10.43 | 17.39 | 0.03 | 7.21 | 84.57 | 80.81 | 41.29 | 59.80 | 89.81 |
| | Brf. | 25.30 | 5.78 | 28.76 | 27.36 | 19.02 | 8.17 | 15.31 | 88.11 | 76.58 | 40.08 | 51.72 | 52.75 |
| Human | Spk. | 66.18 | 22.58 | 70.15 | 66.45 | 48.28 | 112.04 | 42.35 | 93.89 | 71.60 | 64.56 | 92.20 | 9.15 |
| | Wrt. | - | - | - | - | - | - | - | - | 70.43 | 63.69 | 89.29 | 7.29 |
Model performance under different Instr. (Instruction) settings: the Dft. (Default) prompt and the Brf. (Brief) prompt. All model predictions are evaluated against the Human Wrt. (Written) expressions as reference texts; Human Spk. (Spoken) data is likewise scored against the human-written data. Irrel% is the percentage of irrelevant words in the referring expressions of examples judged successful.
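For reference-based scoring outside the paper's exact pipeline, the sketch below shows one possible way to compute BLEU and ROUGE against the three human-written references with the Hugging Face `evaluate` library; the predictions are placeholders to be replaced by your model's outputs.

```python
import evaluate
from datasets import load_dataset

ds = load_dataset("Seed42Lab/RefOI", split="co_occurrence")

# Placeholder predictions: substitute your model's generated expressions.
predictions = ["the object on the left" for _ in range(len(ds))]
references = ds["written_descriptions"]  # three human-written references each

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
print(bleu.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=references))
```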
## Recommended Use of Our Dataset
The `RefOI` dataset is designed for fine-grained REG/REC analysis. It distinguishes between COCO and non-COCO classes, and between scenes with single presence vs. co-occurrence of the same class.
We encourage users to leverage these distinctions for deeper insights and invite community contributions to expand non-COCO annotations.
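One hedged way to act on this recommendation is to bucket the data into the four analysis cells (COCO vs. non-COCO, single presence vs. co-occurrence); the sketch below is illustrative rather than an official protocol.

```python
from datasets import load_dataset

counts = {}
for split in ["single_presence", "co_occurrence"]:
    ds = load_dataset("Seed42Lab/RefOI", split=split)
    for flag, name in [(1, "coco"), (0, "non_coco")]:
        subset = ds.filter(lambda ex, flag=flag: ex["is_coco"] == flag)
        counts[(split, name)] = len(subset)

print(counts)  # example count per analysis cell
```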
## Citation
If you find our dataset helpful, please cite our work:
```bibtex
@misc{ma2025visionlanguagemodelspragmaticallycompetent,
      title={Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation},
      author={Ziqiao Ma and Jing Ding and Xuejun Zhang and Dezhi Luo and Jiahe Ding and Sihan Xu and Yuchen Huang and Run Peng and Joyce Chai},
      year={2025},
      eprint={2504.16060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.16060},
}
```