
💬 VLM-REG: Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation

📃 Paper | 🏠 Homepage

Overview

Referring Expression Generation (REG)—the task of producing a concise and unambiguous description that allows a listener to identify a target object—lies at the heart of pragmatic communication in vision-language systems. However, existing benchmarks suffer from two major limitations:

  1. Data leakage in RefCOCO/RefCOCO+, which raises concerns about evaluation contamination, especially for VLMs trained on MSCOCO.
  2. Lack of spoken data, even though real-world referring is typically real-time and spontaneous, unlike written language, which benefits from planning and revision.

To address these gaps, we introduce RefOI, a curated dataset built from the OpenImages V7 Instance Segmentation validation set.

Key features:

  • 1,485 real-world object instances, split nearly evenly between COCO (744) and non-COCO (741) classes.
  • Both single-presence and co-occurrence images for each class.
  • Each instance annotated with 3 written and 2 spoken human referring expressions.

Using RefOI, we evaluate several state-of-the-art VLMs and uncover three tiers of pragmatic failure:

  • Ambiguity: Generated expressions often fail to uniquely identify the referent.
  • Redundancy: Models include excessive or irrelevant details, violating principles of informativeness and efficiency.
  • Misalignment: Model preferences diverge from human pragmatics, favoring visual complexity over minimal spatial cues.


For token-level annotation of referring expressions, see the companion dataset RefOI-TLHF, which provides minimal span supervision for both human- and model-generated descriptions.

Dataset Schema and Split

Data Fields

Each entry in the dataset contains the following fields:

  • image (image): The original image.
  • mask (image): A binary segmentation mask isolating the target object.
  • boxed_image (image): The original image overlaid with a red bounding box highlighting the target object.
  • box_xmin, box_xmax, box_ymin, box_ymax (float64): Normalized bounding‑box coordinates in [0, 1], relative to the image width and height (see the sketch after this list for converting them to pixels).
  • is_coco (int64): A binary flag (1 for COCO classes, 0 for non‑COCO).
  • label_name (string): The object’s class label (e.g., “muffin,” “giraffe”).
  • co_occurrence (int64): The number of same‑class instances in the image (1 = no distractors; >1 = multiple).
  • written_descriptions (sequence of strings): Three human‑typed referring expressions.
  • spoken_descriptions (sequence of strings): Two human‑spoken expressions (transcribed and optionally corrected by annotators).
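
To make the coordinate convention concrete, here is a minimal sketch of working with these fields, assuming the image column decodes to a PIL image (the default behavior of the datasets library):

from datasets import load_dataset

# Load one split (splits are described in the next section).
ds = load_dataset("Seed42Lab/RefOI", split="single_presence")
ex = ds[0]

img = ex["image"]  # decoded to a PIL image by the datasets library
w, h = img.size

# Box coordinates are stored as fractions of width/height,
# so multiply by the pixel dimensions to denormalize.
left, right = ex["box_xmin"] * w, ex["box_xmax"] * w
top, bottom = ex["box_ymin"] * h, ex["box_ymax"] * h

crop = img.crop((left, top, right, bottom))  # pixel region of the target object
print(ex["label_name"], ex["written_descriptions"])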

Dataset Split

  • single_presence (co_occurrence = 1):
    Only one object of the target class appears (no same‑class distractors in the image).

  • co_occurrence (co_occurrence > 1):
    Multiple objects of the same class appear in the image, introducing potential referential ambiguity.

Usage

from datasets import load_dataset

# split with exactly one object of the target class (no distractors)
ds_single = load_dataset("Seed42Lab/RefOI", split="single_presence")
# split with multiple objects of the target class (potential ambiguity)
ds_multi = load_dataset("Seed42Lab/RefOI", split="co_occurrence")

print(ds_single[0])
print(ds_multi[0])

Experiments

We compare multiple models across standard metrics, listener-based accuracy, and human judgment. Humans outperform all models by large margins (e.g., >90% vs. ~50%). Automatic metrics such as BLEU and CIDEr show poor correlation with human judgment, frequently ranking verbose models higher. Even listener-based scores (REC) fail to consistently match human preferences, indicating that existing metrics do not capture pragmatic competence effectively.

| Model | Instr. | BLEU-1 | BLEU-4 | ROUGE-1 | ROUGE-L | METEOR | CIDEr | SPICE | BERT | CLIP | REC | Human | Irrel% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLaVA-7B | Dft. | 13.27 | 1.60 | 18.09 | 16.30 | 19.29 | 2.10 | 10.50 | 85.51 | 79.02 | 17.28 | 39.46 | 87.30 |
| LLaVA-7B | Brf. | 28.74 | 6.05 | 36.46 | 35.50 | 19.15 | 10.80 | 24.59 | 89.02 | 70.72 | 13.58 | 30.57 | 41.95 |
| LLaVA-13B | Dft. | 8.17 | 1.07 | 11.98 | 10.94 | 16.89 | 0.77 | 7.92 | 84.61 | 79.85 | 15.27 | 46.40 | 91.85 |
| LLaVA-13B | Brf. | 28.96 | 5.81 | 36.44 | 35.64 | 20.13 | 8.14 | 21.63 | 88.42 | 72.99 | 15.33 | 32.53 | 49.65 |
| LLaVA-34B | Dft. | 6.29 | 0.78 | 9.82 | 9.11 | 16.15 | 0.07 | 7.61 | 84.39 | 79.86 | 16.21 | 46.53 | 92.90 |
| LLaVA-34B | Brf. | 28.55 | 6.38 | 32.99 | 31.67 | 20.48 | 9.60 | 16.50 | 88.50 | 74.95 | 17.22 | 36.77 | 56.11 |
| XComposer | Dft. | 5.25 | 0.65 | 8.38 | 7.81 | 14.58 | 3.10 | 6.37 | 84.11 | 79.86 | 18.56 | 52.19 | 92.81 |
| XComposer | Brf. | 13.59 | 2.17 | 17.77 | 16.69 | 19.95 | 5.52 | 10.63 | 85.52 | 79.66 | 18.36 | 51.65 | 80.36 |
| MiniCPM-V | Dft. | 6.38 | 0.67 | 9.86 | 8.78 | 15.28 | 0.05 | 6.30 | 84.29 | 80.38 | 19.10 | 45.12 | 92.97 |
| MiniCPM-V | Brf. | 16.03 | 3.15 | 19.56 | 18.19 | 18.77 | 6.36 | 11.16 | 86.29 | 78.55 | 17.15 | 45.79 | 72.87 |
| GLaMM | Dft. | 15.01 | 3.32 | 16.69 | 16.29 | 11.49 | 9.08 | 3.90 | 86.42 | 58.26 | 3.70 | 3.84 | 74.68 |
| GLaMM | Brf. | 18.46 | 4.45 | 20.92 | 20.46 | 14.18 | 10.48 | 4.44 | 86.65 | 58.60 | 3.77 | 4.85 | 70.52 |
| CogVLM | Dft. | 31.13 | 8.70 | 33.89 | 32.32 | 23.50 | 41.62 | 24.09 | 89.78 | 66.54 | 15.97 | 26.67 | 26.39 |
| CogVLM | Brf. | 31.39 | 8.69 | 34.70 | 32.94 | 24.87 | 41.41 | 24.74 | 90.00 | 69.15 | 18.06 | 33.53 | 29.88 |
| GPT-4o | Dft. | 7.47 | 0.85 | 11.61 | 10.43 | 17.39 | 0.03 | 7.21 | 84.57 | 80.81 | 21.65 | 59.80 | 89.81 |
| GPT-4o | Brf. | 25.30 | 5.78 | 28.76 | 27.36 | 19.02 | 8.17 | 15.31 | 88.11 | 76.58 | 19.03 | 51.72 | 52.75 |
| Human | Spk. | 66.18 | 22.58 | 70.15 | 66.45 | 48.28 | 112.04 | 42.35 | 93.89 | 71.60 | 30.46 | 92.20 | 9.15 |
| Human | Wrt. | - | - | - | - | - | - | - | - | 70.43 | 30.06 | 89.29 | 7.29 |

Model performance under different Instr. (Instruction) settings: the Dft. (Default) prompt and the Brf. (Brief) prompt. All model predictions are evaluated against the Human Wrt. (Written) expressions as reference texts; the Human Spk. (Spoken) row is likewise scored against the written data. Irrel% is the percentage of irrelevant words in the referring expressions of examples judged successful.
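
As an illustration of how such reference-based scores are computed on this dataset, here is a minimal sketch using the Hugging Face evaluate library; the placeholder predictions are hypothetical stand-ins for real model outputs:

import evaluate
from datasets import load_dataset

ds = load_dataset("Seed42Lab/RefOI", split="single_presence")

# Hypothetical placeholder predictions; substitute real model outputs here.
predictions = ["the object in the image"] * len(ds)

# Each example carries three written references, enabling multi-reference BLEU.
references = ds["written_descriptions"]

bleu = evaluate.load("bleu")
scores = bleu.compute(predictions=predictions, references=references)
# A high n-gram overlap does not guarantee that a listener can find the referent.
print(scores["bleu"])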

Recommended Use of Our Dataset

The RefOI dataset is designed for fine-grained REG/REC analysis. It distinguishes between COCO and non-COCO classes, and between scenes with single presence vs. co-occurrence of the same class. We encourage users to leverage these distinctions for deeper insights and invite community contributions to expand non-COCO annotations.
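
For example, a small sketch (using the standard datasets filter API with the column names from the schema above) isolates non-COCO instances that face several same-class distractors:

from datasets import load_dataset

ds = load_dataset("Seed42Lab/RefOI", split="co_occurrence")

# Keep non-COCO classes (is_coco == 0) with at least three same-class
# distractors (co_occurrence > 3), a harder disambiguation setting.
subset = ds.filter(lambda ex: ex["is_coco"] == 0 and ex["co_occurrence"] > 3)
print(len(subset), "examples")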

Citation

If you find our dataset helpful, please cite our work:

@misc{ma2025visionlanguagemodelspragmaticallycompetent,
      title={Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation}, 
      author={Ziqiao Ma and Jing Ding and Xuejun Zhang and Dezhi Luo and Jiahe Ding and Sihan Xu and Yuchen Huang and Run Peng and Joyce Chai},
      year={2025},
      eprint={2504.16060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.16060}, 
}