|
---
license: apache-2.0
language:
- en
metrics:
- spearmanr
base_model:
- FacebookAI/roberta-large
---
|
|
|
# Model Card for GCSE |
|
|
|
<p align="center">
    <a href="https://github.com/aleversn/GCSE">
        <img alt="Static Badge" src="https://img.shields.io/badge/GCSE-black?logo=github">
    </a>
</p>
|
|
|
[Model](https://huggingface.co/aleversn/GCSE-RoBERTa-large/) | [Paper](https://arxiv.org/abs/2409.12887) | [Code](https://github.com/aleversn/GCSE) |
|
|
|
### Model Checkpoints |
|
|
|
We release our model checkpoints on Hugging Face, as listed below:
|
| Model | Avg. STS |
| :-------------------------------------------------------------------------------- | :------: |
| [aleversn/GCSE-BERT-base](https://huggingface.co/aleversn/GCSE-BERT-base) | 81.98 |
| [aleversn/GCSE-BERT-large](https://huggingface.co/aleversn/GCSE-BERT-large) | 83.07 |
| [aleversn/GCSE-RoBERTa-base](https://huggingface.co/aleversn/GCSE-RoBERTa-base) | 82.12 |
| [aleversn/GCSE-RoBERTa-large](https://huggingface.co/aleversn/GCSE-RoBERTa-large) | 83.82 |
|
|
|
### Usage |
|
|
|
```python
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and model from the same checkpoint.
tokenizer = AutoTokenizer.from_pretrained("aleversn/GCSE-RoBERTa-large")
model = AutoModel.from_pretrained("aleversn/GCSE-RoBERTa-large")
```
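
Continuing from the snippet above, here is a minimal sketch of how sentence embeddings might be extracted and compared. It assumes `[CLS]` (first-token) pooling, as is common for SimCSE-style encoders; the pooling strategy used by GCSE itself may differ, so treat this as illustrative rather than the official inference recipe.

```python
import torch

sentences = ["A man is playing a guitar.", "Someone is playing an instrument."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    outputs = model(**inputs)

# Assumption: take the first-token hidden state as the sentence embedding.
embeddings = outputs.last_hidden_state[:, 0]

# Cosine similarity between the two sentences.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```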