---
languages:
- en
task_categories:
- multiple-choice
task_ids:
- multiple-choice-coreference-resolution
size_categories:
- n<1K
paperswithcode_id: null
---
Dataset Card for "mwsc"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: http://decanlp.com
- Repository: More Information Needed
- Paper: The Natural Language Decathlon: Multitask Learning as Question Answering (https://arxiv.org/abs/1806.08730)
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 0.02 MB
- Size of the generated dataset: 0.04 MB
- Total amount of disk used: 0.06 MB
Dataset Summary
Examples taken from the Winograd Schema Challenge modified to ensure that answers are a single word from the context. This modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing.
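To get a feel for the data, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch; it assumes the dataset is available on the Hub under the id `mwsc` used in this card.

```python
from datasets import load_dataset

# Load all splits of the modified Winograd Schema Challenge.
mwsc = load_dataset("mwsc")

# Each example pairs a sentence with a question whose answer is a single
# word taken from the context, chosen among the listed options.
example = mwsc["validation"][0]
print(example["sentence"])
print(example["question"])
print(example["options"], "->", example["answer"])
```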
Supported Tasks and Leaderboards
Languages
Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
Data Instances
default
- Size of downloaded dataset files: 0.02 MB
- Size of the generated dataset: 0.04 MB
- Total amount of disk used: 0.06 MB
An example of 'validation' looks as follows.
{
"answer": "example",
"options": ["test", "example"],
"question": "What is this sentence?",
"sentence": "This is a example sentence."
}
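Since every answer is one of the listed options, a model can be scored by exact match between its chosen option and the `answer` field. The sketch below illustrates such an evaluation loop, using a random-choice baseline as a stand-in for a real model's predictions.

```python
import random

from datasets import load_dataset

validation = load_dataset("mwsc", split="validation")

# Toy baseline: pick an option at random; swap `guess` for a real model's
# prediction to evaluate it the same way.
random.seed(0)
correct = 0
for ex in validation:
    guess = random.choice(ex["options"])
    correct += int(guess == ex["answer"])

print(f"Accuracy: {correct / len(validation):.2%}")
```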
Data Fields
The data fields are the same among all splits.
default
- `sentence`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
- `answer`: a `string` feature.
Data Splits
| name    | train | validation | test |
|---------|-------|------------|------|
| default | 80    | 82         | 100  |
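The split sizes in the table above can be checked programmatically; a small sketch assuming the `datasets` library:

```python
from datasets import load_dataset

mwsc = load_dataset("mwsc")

# Expected sizes from the table above: train=80, validation=82, test=100.
for split in ("train", "validation", "test"):
    print(split, len(mwsc[split]))
```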
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@article{McCann2018decaNLP,
title={The Natural Language Decathlon: Multitask Learning as Question Answering},
author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher},
journal={arXiv preprint arXiv:1806.08730},
year={2018}
}
Contributions
Thanks to @thomwolf, @lewtun, @ghomasHudson, @lhoestq for adding this dataset.