---
annotations_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
language_creators:
- found
languages:
- en
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: 'PLOD: An Abbreviation Detection Dataset'
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# PLOD: An Abbreviation Detection Dataset
This is the repository for the PLOD dataset, submitted to LREC 2022. The dataset supports building sequence-labelling models for the task of abbreviation detection.
### Dataset
We provide two variants of our dataset, Filtered and Unfiltered; both are described in our paper.
1. The Filtered version can be accessed via [Hugging Face Datasets](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered), and a [CoNLL-format version is available here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).
2. The Unfiltered version can be accessed via [Hugging Face Datasets](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered), and a [CoNLL-format version is available here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).
# Dataset Card for PLOD-unfiltered
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Diptesh Kanojia](mailto:[email protected])
### Dataset Summary
The PLOD dataset is an English-language dataset of abbreviations and their long forms tagged in text. It was collected from PLOS journal articles, which index abbreviations and their long forms in the text. The dataset was created to support the natural language processing task of abbreviation detection and covers the scientific domain.
### Supported Tasks and Leaderboards
This dataset primarily supports the abbreviation detection task. It has also been tested on the train+dev split provided by the Acronym Detection shared task organized as part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point comprises an `id`, the set of `tokens` present in the text, a set of `pos_tags` for the corresponding tokens obtained via spaCy, and a set of `ner_tags`, which are limited to `AC` for acronyms and `LF` for long forms.
An example from the dataset:
```python
{'id': '1',
 'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
 'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
 'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
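To make the tag indices concrete, the sketch below pairs tokens with labels for an instance shaped like the example. The integer-to-label mapping here is an assumption inferred from this example; the authoritative mapping is in the dataset's feature definitions, so verify it before relying on it.

```python
# Pair tokens with their NER labels for an instance shaped like the example.
# ASSUMPTION: the index-to-label mapping below is inferred, not confirmed;
# check the dataset's feature definitions for the authoritative list.
LABELS = {0: "B-O", 1: "B-AC", 2: "I-AC", 3: "B-LF", 4: "I-LF"}

instance = {
    "tokens": ["risk", "ratios", "(", "RRs", ")"],
    "ner_tags": [3, 4, 0, 1, 0],
}

# Align each token with its decoded label.
pairs = [(tok, LABELS[tag]) for tok, tag in zip(instance["tokens"], instance["ner_tags"])]

# Collect the tokens tagged as acronyms.
acronyms = [tok for tok, lab in pairs if lab.endswith("AC")]
print(acronyms)
```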
### Data Fields
- `id`: the row identifier for the data point.
- `tokens`: the tokens contained in the text.
- `pos_tags`: the part-of-speech tag for each corresponding token, obtained via spaCy.
- `ner_tags`: the abbreviation/long-form tag for each corresponding token.
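Downstream users typically want whole long-form spans rather than per-token tags. A minimal sketch, assuming (as in the example instance) that index `3` marks the beginning of a long form and `4` its continuation:

```python
def long_form_spans(tokens, tags, b_lf=3, i_lf=4):
    """Group contiguous long-form tags into whitespace-joined text spans."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == b_lf:                      # a new long form starts here
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == i_lf and current:        # continuation of the open span
            current.append(tok)
        else:                                # anything else closes the span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["inverse", "probability", "of", "treatment", "weights", "(", "IPTW", ")"]
tags = [3, 4, 4, 4, 4, 0, 1, 0]
print(long_form_spans(tokens, tags))
```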
### Data Splits
|            | Train  | Valid | Test  |
| ---------- | ------ | ----- | ----- |
| Filtered   | 112652 | 24140 | 24140 |
| Unfiltered | 113860 | 24399 | 24399 |
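For orientation, the split sizes in the table correspond to roughly a 70/15/15 train/valid/test ratio, which can be checked directly:

```python
# Split sizes for PLOD-unfiltered, taken from the table above.
splits = {"train": 113860, "valid": 24399, "test": 24399}
total = sum(splits.values())

# Proportion of the whole dataset held by each split.
ratios = {name: round(n / total, 2) for name, n in splits.items()}
print(ratios)  # roughly a 70/15/15 split
```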
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was extracted from PLOS journal articles online and then tokenized and normalized.
#### Who are the source language producers?
PLOS journals
## Additional Information
### Dataset Curators
The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, and Constantin Orasan.
### Licensing Information
CC-BY-SA 4.0
### Citation Information
[Needs More Information]
### Installation
We use a custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training with any pre-trained language model available in the :rocket: [HuggingFace repository](https://huggingface.co/).
Please see the instructions on these websites to set up your own custom training with our dataset.
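As a rough sketch of what such a setup looks like, a spaCy v3 training config swaps a transformer component into the NER pipeline. The excerpt below is illustrative only (the model name and corpus paths are assumptions); in practice you would generate and customize a full config with `python -m spacy init config`:

```ini
; Illustrative excerpt of a spaCy v3 config.cfg for transformer-based NER.
; Model name and corpus paths are assumptions; adjust to your setup.
[paths]
train = "corpus/train.spacy"
dev = "corpus/dev.spacy"

[nlp]
lang = "en"
pipeline = ["transformer","ner"]

[components.transformer]
factory = "transformer"

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-large"
```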
### Model(s)
A working model is available [at this link](https://huggingface.co/surrey-nlp/en_abbreviation_detection_roberta_lar).
On the page linked above, the model can be tried directly in the browser via the Inference API; we have provided some examples for testing.
#### Usage (in Python)
The Hugging Face model page linked above includes instructions for using this model locally in Python.