# model details
default_model_name: "ArneBinder/sam-pointer-bart-base-v0.3.1"
default_model_revision: "d090d5385380692933e8a3bc466236e3a905492d"
# Whether to handle segmented entities in the document. If true, labeled_spans are
# converted to labeled_multi_spans, merging fragments that are connected by
# binary_relations with the label "parts_of_same". This requires the networkx
# package to be installed.
handle_parts_of_same: true
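The merge described above can be sketched as a connected-components problem: spans are nodes, "parts_of_same" relations are edges, and each component becomes one multi-span. The actual implementation relies on networkx; the sketch below uses a plain stdlib traversal, and the span ids and offsets are purely illustrative.

```python
from collections import defaultdict

# Illustrative spans (id -> character slice) and one "parts_of_same" relation
# linking two fragments of the same segmented ADU.
spans = {
    "s1": (0, 5),    # first fragment of a segmented ADU
    "s2": (20, 30),  # second fragment of the same ADU
    "s3": (40, 45),  # an unsegmented ADU
}
parts_of_same = [("s1", "s2")]

# Build an adjacency list from the relations.
adjacency = defaultdict(set)
for a, b in parts_of_same:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Each connected component of the graph yields one multi-span.
multi_spans, seen = [], set()
for span_id in spans:
    if span_id in seen:
        continue
    component, stack = [], [span_id]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        component.append(current)
        stack.extend(adjacency[current])
    multi_spans.append(sorted(spans[s] for s in component))

print(multi_spans)  # → [[(0, 5), (20, 30)], [(40, 45)]]
```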
# Split the document text into sections that are processed separately.
default_split_regex: "\n\n\n+"
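The effect of `default_split_regex` is straightforward: sections are separated wherever three or more consecutive newlines occur. A minimal sketch (the helper name is illustrative):

```python
import re

# Mirrors default_split_regex above: three or more consecutive newlines
# mark a section boundary.
DEFAULT_SPLIT_REGEX = r"\n\n\n+"

def split_into_sections(text):
    """Split document text into sections that are processed separately."""
    return [part for part in re.split(DEFAULT_SPLIT_REGEX, text) if part.strip()]

doc = "Section A\n\n\nSection B\n\n\n\nSection C"
print(split_into_sections(doc))  # → ['Section A', 'Section B', 'Section C']
```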
# retriever details
default_retriever_config_path: "configs/retriever/related_span_retriever_with_relations_from_other_docs.yaml"
default_min_similarity: 0.95
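Conceptually, `default_min_similarity` gates retrieval: only candidate spans whose similarity score meets the threshold are returned. The function and field names below are illustrative, not the retriever's real API; the real configuration lives in the YAML file referenced above.

```python
# Hypothetical sketch of threshold-based filtering of retrieval hits,
# where each hit is a (span_text, similarity_score) pair.
def filter_hits(hits, min_similarity=0.95):
    return [(span, score) for span, score in hits if score >= min_similarity]

hits = [("closely related claim", 0.97), ("loosely related span", 0.80)]
print(filter_hits(hits))  # → [('closely related claim', 0.97)]
```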
# data import details
default_arxiv_id: "1706.03762"
default_load_pie_dataset_kwargs:
  path: "pie/sciarg"
  name: "resolve_parts_of_same"
  split: "train"
# for better readability in the UI
render_mode_captions:
  displacy: "displaCy + highlighted arguments"
  pretty_table: "Pretty Table"
layer_caption_mapping:
  labeled_multi_spans: "adus"
  binary_relations: "relations"
  labeled_partitions: "partitions"
relation_name_mapping:
  supports_reversed: "supported by"
  contradicts_reversed: "contradicts"
default_render_mode: "displacy"
default_render_kwargs:
  entity_options:
    # keys must be uppercase because the spaCy rendering function converts
    # labels to uppercase before looking them up
    colors:
      OWN_CLAIM: "#009933"
      BACKGROUND_CLAIM: "#99ccff"
      DATA: "#993399"
    colors_hover:
      selected: "#ffa"
      # tail options for relations
      tail:
        # green
        supports: "#9f9"
        # red
        contradicts: "#f99"
        # do not highlight
        parts_of_same: null
      head: null # "#faf"
      other: null
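The uppercase-keys requirement can be seen in a minimal sketch, assuming these colors end up in the `options` argument of spaCy's `displacy.render`: displaCy uppercases entity labels before looking up their colors, so lowercase keys would never match. The helper below is illustrative, not part of the demo's code.

```python
# Colors as configured above; keys are uppercase on purpose.
entity_colors = {
    "OWN_CLAIM": "#009933",
    "BACKGROUND_CLAIM": "#99ccff",
    "DATA": "#993399",
}

def color_for(label):
    # displaCy uppercases labels before the color lookup, so a lowercase
    # label like "own_claim" still resolves to the right color.
    return entity_colors.get(label.upper())

print(color_for("own_claim"))  # → "#009933"
```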
example_text: >
  Scholarly Argumentation Mining (SAM) has recently gained attention due to its
  potential to help scholars with the rapid growth of published scientific literature.
  It comprises two subtasks: argumentative discourse unit recognition (ADUR) and
  argumentative relation extraction (ARE), both of which are challenging since they
  require e.g. the integration of domain knowledge, the detection of implicit statements,
  and the disambiguation of argument structure.
  While previous work focused on dataset construction and baseline methods for
  specific document sections, such as abstract or results, full-text scholarly argumentation
  mining has seen little progress. In this work, we introduce a sequential pipeline model
  combining ADUR and ARE for full-text SAM, and provide a first analysis of the
  performance of pretrained language models (PLMs) on both subtasks.
  We establish a new SotA for ADUR on the Sci-Arg corpus, outperforming the previous best
  reported result by a large margin (+7% F1). We also present the first results for ARE, and
  thus for the full AM pipeline, on this benchmark dataset. Our detailed error analysis reveals
  that non-contiguous ADUs as well as the interpretation of discourse connectors pose major
  challenges and that data annotation needs to be more consistent.