---
license: mit
language:
  - en
pretty_name: W-Bench
size: 10,000 instances
---

What is it?

W-Bench is the first holistic benchmark that incorporates four types of image editing techniques to assess the robustness of watermarking methods. Eleven representative watermarking methods are evaluated on W-Bench. The benchmark contains 10,000 images sourced from datasets including COCO, Flickr, and ShareGPT4V.

Dataset Structure

The evaluation set is divided into six categories (a short sketch for inspecting this layout on the Hub follows the list):

  • 1,000 samples for stochastic regeneration
  • 1,000 samples for deterministic regeneration
  • 1,000 samples for global editing
  • 5,000 samples for local editing (divided into five sets of 1,000 images each, with mask sizes ranging from 10% to 60% of the image area)
  • 1,000 samples for image-to-video generation
  • 1,000 samples for testing conventional distortion
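
As a quick sanity check of this layout, you can list the repository files and count images per top-level folder. This is only a sketch: it assumes one folder per category and PNG/JPEG image files, as with the DET_INVERSION_1K folder used in the download example below.

from collections import Counter
from huggingface_hub import HfApi

# Count image files under each top-level folder of the dataset repository.
files = HfApi().list_repo_files("Shilin-LU/W-Bench", repo_type="dataset")
counts = Counter(
    path.split("/")[0]
    for path in files
    if path.lower().endswith((".png", ".jpg", ".jpeg"))
)
for folder, n in sorted(counts.items()):
    print(f"{folder}: {n} images")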

How to download and use 🍷 W-Bench

Using huggingface_hub

from huggingface_hub import snapshot_download

# Download a single subset (here, the images in the DET_INVERSION_1K folder)
# instead of the full 10,000-image benchmark.
folder = snapshot_download(
    repo_id="Shilin-LU/W-Bench",
    repo_type="dataset",
    local_dir="./W-Bench/",
    allow_patterns="DET_INVERSION_1K/image/*",
)

For faster downloads, install the hf_transfer extra (pip install "huggingface_hub[hf_transfer]") and set the environment variable HF_HUB_ENABLE_HF_TRANSFER=1.
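
If you prefer to enable this from Python rather than exporting the variable in your shell, a minimal sketch is shown below; it assumes hf_transfer is already installed and sets the flag before huggingface_hub is imported, since the flag is read at import time.

import os

# huggingface_hub reads HF_HUB_ENABLE_HF_TRANSFER when it is imported, so set
# the flag before the import (or export it in your shell instead), then call
# snapshot_download exactly as in the snippet above.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download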

Using datasets

from datasets import load_dataset

# Stream samples on the fly instead of downloading the full dataset up front.
wbench = load_dataset("Shilin-LU/W-Bench", streaming=True)
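
Streaming returns an iterable dataset, so you can inspect a sample without materializing the whole benchmark. A minimal sketch, assuming a "train" split (check wbench.keys() for the splits actually exposed by the repository):

# "train" is an assumption; print(list(wbench.keys())) to see the real split names.
sample = next(iter(wbench["train"]))
print(sample)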

Citation Information

Paper on arXiv