Add new SentenceTransformer model
- .gitattributes +1 -0
- 1_Pooling/config.json +10 -0
- README.md +136 -0
- config.json +50 -0
- config_sentence_transformers.json +10 -0
- model.safetensors +3 -0
- modules.json +20 -0
- sentence_bert_config.json +4 -0
- special_tokens_map.json +51 -0
- tokenizer.json +3 -0
- tokenizer_config.json +62 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json
ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
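
This pooling config selects CLS pooling: the sentence embedding is the hidden state of the first (`<s>`/CLS) token rather than a mean over all tokens. A minimal, framework-level sketch of what that means (illustrative tensors, not this repo's code):

```python
import torch

# (batch, seq_len, hidden) output of the transformer; random stand-in values
token_embeddings = torch.randn(2, 128, 768)

# pooling_mode_cls_token=true: keep only the first token's hidden state
sentence_embeddings = token_embeddings[:, 0, :]  # shape (2, 768)
```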
README.md
ADDED
@@ -0,0 +1,136 @@
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- phobert
- vietnamese
- sentence-embedding
license: apache-2.0
language:
- vi
metrics:
- pearsonr
- spearmanr
---
## Model Description:
[**vietnamese-document-embedding**](https://huggingface.co/dangvantuan/vietnamese-document-embedding) is a document embedding model for Vietnamese with a context length of up to 8192 tokens. It is a specialized long-context text-embedding model trained specifically for Vietnamese, built upon [gte-multilingual](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) and trained using Multi-Negative Ranking Loss, Matryoshka2dLoss, and SimilarityLoss.

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: VietnameseModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Training and Fine-tuning process
The model underwent a two-stage training and fine-tuning process, each stage tailored to enhance its ability to generate precise and contextually relevant sentence embeddings for Vietnamese. Below is an outline of these stages:
### Stage 1: Training NLI on the XNLI dataset
- Dataset: [XNLI-vn](https://huggingface.co/datasets/xnli/viewer/vi)
- Method: Training with Multi-Negative Ranking Loss and Matryoshka2dLoss. This stage focused on improving the model's ability to discern and rank nuanced differences in sentence semantics (see the sketch below).
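
A minimal sketch of such a setup with the `sentence-transformers` losses named above, assuming anchor/positive pairs built from XNLI entailments and starting from the gte-multilingual checkpoint named in the description; the example pairs, batch size, and Matryoshka dimensions are illustrative, not taken from this card:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# Anchor/positive pairs, e.g. premise -> entailed hypothesis from XNLI-vn
train_examples = [
    InputExample(texts=["Hà Nội là thủ đô của Việt Nam", "Thủ đô của Việt Nam là Hà Nội"]),
    InputExample(texts=["Tôi thích đọc sách", "Đọc sách là sở thích của tôi"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# In-batch-negatives ranking loss, wrapped in the 2D Matryoshka objective
base_loss = losses.MultipleNegativesRankingLoss(model)
train_loss = losses.Matryoshka2dLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```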
### Stage 2: Fine-tuning for Semantic Textual Similarity on STS Benchmark
- Dataset: [STSB-vn](https://huggingface.co/datasets/doanhieung/vi-stsbenchmark)
- Method: Fine-tuning specifically for the semantic textual similarity benchmark using Siamese BERT-Networks configured with the `sentence-transformers` library. This stage honed the model's precision in capturing semantic similarity across various types of Vietnamese texts (a sketch follows below).
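
A hedged sketch of this Siamese STS fine-tuning, assuming the classic `CosineSimilarityLoss` objective on similarity scores normalized to [0, 1]; the pairs, labels, and hyperparameters here are illustrative:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("dangvantuan/vietnamese-document-embedding", trust_remote_code=True)

# Sentence pairs with similarity labels normalized to [0, 1], as in STSB-vn
train_examples = [
    InputExample(texts=["Hà Nội là thủ đô của Việt Nam", "Thủ đô của Việt Nam là Hà Nội"], label=0.95),
    InputExample(texts=["Hà Nội là thủ đô của Việt Nam", "Đà Nẵng là thành phố du lịch"], label=0.20),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Siamese regression objective on the cosine similarity of the two embeddings
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```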


## Usage:

Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```bash
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Hà Nội là thủ đô của Việt Nam", "Đà Nẵng là thành phố du lịch"]

model = SentenceTransformer('dangvantuan/vietnamese-document-embedding', trust_remote_code=True)
embeddings = model.encode(sentences)
print(embeddings)
```
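
Since `config_sentence_transformers.json` sets `similarity_fn_name` to cosine, pairwise scores can be computed directly from the embeddings above (a sketch; requires sentence-transformers >= 3.0):

```python
# Pairwise cosine similarities between the two sentences above
similarities = model.similarity(embeddings, embeddings)
print(similarities)  # 2x2 tensor; the diagonal is ~1.0 since embeddings are normalized
```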

## Evaluation
The model can be evaluated as follows on the [Vietnamese STS Benchmark data](https://huggingface.co/datasets/doanhieung/vi-stsbenchmark).

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
from sentence_transformers.readers import InputExample
from datasets import load_dataset

model = SentenceTransformer('dangvantuan/vietnamese-document-embedding', trust_remote_code=True)

def convert_dataset(dataset):
    dataset_samples = []
    for df in dataset:
        score = float(df['score']) / 5.0  # Normalize score to range 0 ... 1
        inp_example = InputExample(texts=[df['sentence1'], df['sentence2']], label=score)
        dataset_samples.append(inp_example)
    return dataset_samples

# Loading the dataset for evaluation
vi_sts = load_dataset("doanhieung/vi-stsbenchmark")["train"]
df_dev = vi_sts.filter(lambda example: example['split'] == 'dev')
df_test = vi_sts.filter(lambda example: example['split'] == 'test')

# Convert the dataset for evaluation

# For the dev set:
dev_samples = convert_dataset(df_dev)
val_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name='sts-dev')
val_evaluator(model, output_path="./")

# For the test set:
test_samples = convert_dataset(df_test)
test_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(test_samples, name='sts-test')
test_evaluator(model, output_path="./")
```


### Metric for all datasets of [Semantic Textual Similarity on STS Benchmark](https://huggingface.co/datasets/anti-ai/ViSTS)

**Spearman score**

| Model | STSB | STS12 | STS13 | STS14 | STS15 | STS16 | SICK | Mean |
|-------|------|-------|-------|-------|-------|-------|------|------|
| [dangvantuan/vietnamese-embedding](https://huggingface.co/dangvantuan/vietnamese-embedding) | 84.84 | 79.04 | 85.30 | 81.38 | 87.06 | 79.95 | 79.58 | 82.45 |
| [dangvantuan/vietnamese-embedding-LongContext](https://huggingface.co/dangvantuan/vietnamese-embedding-LongContext) | 85.25 | 75.77 | 83.82 | 81.69 | 88.48 | 81.50 | 78.20 | 82.10 |

## Citation

@article{reimers2019sentence,
  title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author={Reimers, Nils and Gurevych, Iryna},
  journal={https://arxiv.org/abs/1908.10084},
  year={2019}
}

@article{zhang2024mgte,
  title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval},
  author={Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Wen and Dai, Ziqi and Tang, Jialong and Lin, Huan and Yang, Baosong and Xie, Pengjun and Huang, Fei and others},
  journal={arXiv preprint arXiv:2407.19669},
  year={2024}
}

@article{li2023towards,
  title={Towards general text embeddings with multi-stage contrastive learning},
  author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
  journal={arXiv preprint arXiv:2308.03281},
  year={2023}
}

@article{li20242d,
  title={2D Matryoshka Sentence Embeddings},
  author={Li, Xianming and Li, Zongxi and Li, Jing and Xie, Haoran and Li, Qing},
  journal={arXiv preprint arXiv:2402.14776},
  year={2024}
}
config.json
ADDED
@@ -0,0 +1,50 @@
{
  "_name_or_path": "Chow05/fine-tune-embedding-v2",
  "architectures": [
    "VietnameseModel"
  ],
  "attention_probs_dropout_prob": 0.0,
  "auto_map": {
    "AutoConfig": "dangvantuan/Vietnamese_impl--configuration.VietnameseConfig",
    "AutoModel": "dangvantuan/Vietnamese_impl--modeling.VietnameseModel",
    "AutoModelForMaskedLM": "dangvantuan/Vietnamese_impl--modeling.VietnameseForMaskedLM",
    "AutoModelForMultipleChoice": "dangvantuan/Vietnamese_impl--modeling.VietnameseForMultipleChoice",
    "AutoModelForQuestionAnswering": "dangvantuan/Vietnamese_impl--modeling.VietnameseForQuestionAnswering",
    "AutoModelForSequenceClassification": "dangvantuan/Vietnamese_impl--modeling.VietnameseForSequenceClassification",
    "AutoModelForTokenClassification": "dangvantuan/Vietnamese_impl--modeling.VietnameseForTokenClassification"
  },
  "classifier_dropout": 0.0,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "layer_norm_type": "layer_norm",
  "logn_attention_clip1": false,
  "logn_attention_scale": false,
  "max_position_embeddings": 8192,
  "model_type": "Vietnamese",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pack_qkv": true,
  "pad_token_id": 1,
  "position_embedding_type": "rope",
  "rope_scaling": {
    "factor": 8.0,
    "type": "ntk"
  },
  "rope_theta": 20000,
  "torch_dtype": "float32",
  "transformers_version": "4.47.0",
  "type_vocab_size": 1,
  "unpad_inputs": false,
  "use_memory_efficient_attention": false,
  "vocab_size": 250048
}
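
The `auto_map` entries above route the `Auto*` classes to the custom `Vietnamese*` implementations, which is why `trust_remote_code=True` is required. A hedged sketch of loading the raw backbone outside sentence-transformers, assuming it returns standard `last_hidden_state` outputs:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dangvantuan/vietnamese-document-embedding")
backbone = AutoModel.from_pretrained(
    "dangvantuan/vietnamese-document-embedding",
    trust_remote_code=True,  # pulls the custom VietnameseModel code via auto_map
)

inputs = tokenizer("Hà Nội là thủ đô của Việt Nam", return_tensors="pt")
hidden_states = backbone(**inputs).last_hidden_state  # (1, seq_len, 768)
```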
config_sentence_transformers.json
ADDED
@@ -0,0 +1,10 @@
{
  "__version__": {
    "sentence_transformers": "3.3.1",
    "transformers": "4.47.0",
    "pytorch": "2.5.1+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95ff07f5909468355f321b6e771af472fedd32ffb5ecb07456039ccc736d8d03
size 1221487872
modules.json
ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
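
modules.json wires up the three-stage pipeline shown in the README's architecture dump. As a sketch, the equivalent manual assembly in sentence-transformers (parameter values taken from this repo; the `model_args`/`tokenizer_args` plumbing for remote code is an assumption):

```python
from sentence_transformers import SentenceTransformer, models

word = models.Transformer(
    "dangvantuan/vietnamese-document-embedding",
    max_seq_length=8192,
    model_args={"trust_remote_code": True},      # custom VietnameseModel code
    tokenizer_args={"trust_remote_code": True},
)
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="cls")
normalize = models.Normalize()

model = SentenceTransformer(modules=[word, pooling, normalize])
```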
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 8192,
  "do_lower_case": false
}
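
The 8192-token cap here is what `SentenceTransformer` exposes as `max_seq_length`; longer inputs are truncated at encode time. A quick check (a sketch, assuming the model is loaded as in the README):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dangvantuan/vietnamese-document-embedding", trust_remote_code=True)
print(model.max_seq_length)  # 8192, read from sentence_bert_config.json
```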
special_tokens_map.json
ADDED
@@ -0,0 +1,51 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa7a6ad87a7ce8fe196787355f6af7d03aee94d19c54a5eb1392ed18c8ef451a
size 17082988
tokenizer_config.json
ADDED
@@ -0,0 +1,62 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "250001": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "max_length": 8192,
  "model_max_length": 8192,
  "pad_to_multiple_of": null,
  "pad_token": "<pad>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "</s>",
  "stride": 0,
  "tokenizer_class": "XLMRobertaTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "<unk>"
}