---
language: "en"
tags:
- gpt2
- exbert
- commonsense
- semeval2020
- comve
license: "mit"
datasets:
- ComVE
metrics:
- bleu
widget:
- text: "Chicken can swim in water. <|continue|>"
---

# ComVE-gpt2-medium
## Model description

A model fine-tuned on the Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080), using a causal language modeling (CLM) objective.
The model generates a reason why a given natural language statement is against commonsense.
## Intended uses & limitations

You can use the raw model to generate reasons why natural language statements are against commonsense.

#### How to use

You can use this model directly to generate a reason why a given statement is against commonsense with the [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script, or from Python as in the sketch below.

*Note:* make sure that you are using version `2.4.1` of the `transformers` package. Newer versions have an issue in text generation that makes the model repeat the last generated token again and again.
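As an alternative to the script, here is a minimal Python sketch of the same generation flow; the decoding settings are illustrative rather than the exact `generate.sh` configuration, and the model ID is taken from the ExBERT link at the bottom of this card.

```python
# Minimal generation sketch (assumes transformers==2.4.1, as noted above).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "aliosm/ComVE-gpt2-medium"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# The statement is followed by the <|continue|> separator used during fine-tuning,
# mirroring the widget example in the front matter.
prompt = "Chicken can swim in water. <|continue|>"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Illustrative sampling settings; generate.sh may use different ones.
output = model.generate(
    input_ids,
    max_length=64,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)

# The generated reason should be the text that follows the separator.
text = tokenizer.decode(output[0].tolist())
print(text.split("<|continue|>")[-1].strip())
```

If the output degenerates into a single repeated token, double-check the installed `transformers` version as noted above.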
#### Limitations and bias

The model is usually biased toward negating the input sentence instead of producing a factual reason.

## Training data

The model is initialized from the [gpt2-medium](https://github.com/huggingface/transformers/blob/master/model_cards/gpt2-README.md) model and fine-tuned on the [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset, which contains 10K against-commonsense sentences, each paired with three reference reasons.

## Training procedure

Each against-commonsense statement is concatenated with its reference reason, using `<|continue|>` as a separator, and the model is then fine-tuned with the CLM objective (see the sketch below).
The model was trained on an Nvidia Tesla P100 GPU on the Google Colab platform with a learning rate of 5e-5, 5 epochs, a maximum sequence length of 128, and a batch size of 64.
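For concreteness, a training string can be assembled roughly as follows; the helper name, the example pair, and the exact whitespace around the separator are assumptions for illustration, not taken from the original preprocessing code.

```python
# Sketch of building a CLM training string from a (statement, reason) pair.
# The pair below is made up; the real ComVE files have their own column layout.
SEPARATOR = "<|continue|>"

def build_training_text(statement, reason):
    # One against-commonsense statement joined to one of its reference reasons.
    return "{} {} {}".format(statement.strip(), SEPARATOR, reason.strip())

print(build_training_text("Chicken can swim in water.", "Chickens are unable to swim."))
# -> Chicken can swim in water. <|continue|> Chickens are unable to swim.
```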
<center>
<img src="https://i.imgur.com/xKbrwBC.png">
</center>

## Eval results

The model achieved fifth place with BLEU scores of 16.7153/16.1187 and third place with a human evaluation score of 1.94 on the SemEval2020 Task4: Commonsense Validation and Explanation development and testing sets.
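As a rough illustration only, BLEU against the three reference reasons per statement can be computed along these lines; `sacrebleu` and the placeholder strings below are assumptions, and the official task scoring scripts may differ.

```python
# Hedged BLEU sketch: each hypothesis is scored against three reference reasons.
# The strings below are placeholders, not the actual development/test data.
import sacrebleu

hypotheses = ["Chicken can't swim."]
references = [
    ["Chickens are unable to swim."],    # first reference reason per statement
    ["A chicken cannot swim."],          # second reference reason
    ["Chickens do not swim in water."],  # third reference reason
]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 4))
```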
These are some examples generated by the model:

| Against Commonsense Statement | Generated Reason |
|:-----------------------------------------------------:|:--------------------------------------------:|
| Chicken can swim in water. | Chicken can't swim. |
| shoes can fly | Shoes are not able to fly. |
| Chocolate can be used to make a coffee pot | Chocolate is not used to make coffee pots. |
| you can also buy tickets online with an identity card | You can't buy tickets with an identity card. |
| a ball is square and can roll | A ball is round and cannot roll. |
| You can use detergent to dye your hair. | Detergent is used to wash clothes. |
| you can eat mercury | mercury is poisonous |
| A gardener can follow a suspect | gardener is not a police officer |
| cars can float in the ocean just like a boat | Cars are too heavy to float in the ocean. |
| I am going to work so I can lose money. | Working is not a way to lose money. |

### BibTeX entry and citation info

```bibtex
@article{fadel2020justers,
  title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation},
  author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik},
  year={2020}
}
```

<a href="https://huggingface.co/exbert/?model=aliosm/ComVE-gpt2-medium">
  <img width="300px" src="https://hf-dinosaur.huggingface.co/exbert/button.png">
</a>