---
license: mit
language:
- en
pretty_name: W-Bench
size: 10,000 instances (for evaluation)
description: |
  The W-Bench is designed for the evaluation of image watermarking models, containing photographs sourced from the COCO, Flickr, and ShareGPT4V datasets.
---

# What is it?
W-Bench is the first holistic benchmark that incorporates four types of image editing techniques to assess the robustness of watermarking methods. Eleven representative watermarking methods are evaluated on it. W-Bench contains 10,000 images sourced from datasets such as COCO, Flickr, and ShareGPT4V.

# Dataset Structure

The evaluation set is divided into six categories (a snippet for listing the corresponding repository folders follows this list):
- 1,000 samples for stochastic regeneration
- 1,000 samples for deterministic regeneration
- 1,000 samples for global editing
- 5,000 samples for local editing (divided into five sets of 1,000 images each, with mask sizes ranging from 10% to 60% of the image area)
- 1,000 samples for image-to-video generation
- 1,000 samples for testing conventional distortion
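
Before downloading, you can list the repository contents to see which top-level folder holds each category. A minimal sketch using `huggingface_hub`; the folder names are whatever the repository actually uses:

```python
from huggingface_hub import HfApi

# List every file in the dataset repository and collect the
# top-level folders, which correspond to the evaluation categories.
api = HfApi()
files = api.list_repo_files("Shilin-LU/W-Bench", repo_type="dataset")
top_level_folders = sorted({path.split("/")[0] for path in files if "/" in path})
print(top_level_folders)
```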

# How to download and use 🍷 W-Bench

## Using `huggingface_hub`

```python
from huggingface_hub import snapshot_download

# Download only the DET_INVERSION_1K images into ./W-Bench/
folder = snapshot_download(
    "Shilin-LU/W-Bench",
    repo_type="dataset",
    local_dir="./W-Bench/",
    allow_patterns="DET_INVERSION_1K/image/*",
)
```

For faster downloads, install the optional transfer backend with `pip install "huggingface_hub[hf_transfer]"` and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.
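
A minimal sketch for enabling it from Python; the variable has to be set before `huggingface_hub` is imported:

```python
import os

# Enable the Rust-based hf_transfer backend for faster downloads.
# Requires: pip install "huggingface_hub[hf_transfer]"
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download  # import after setting the variable
```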

## Using `datasets`

```python
from datasets import load_dataset
wbench = load_dataset("Shilin-LU/W-Bench", streaming=True)
```
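
Streaming fetches examples lazily instead of downloading the full repository up front. A minimal sketch for peeking at one example; the exposed split names depend on how the repository is laid out, so they are inspected rather than assumed:

```python
# Show the available splits, then pull a single streamed example.
splits = list(wbench.keys())
print(splits)
sample = next(iter(wbench[splits[0]]))
print(sample.keys())
```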

# Citation Information
The paper is available on [arXiv](https://arxiv.org/abs/2410.18775).
