--- language: - de - en - es - fr - it - nl - pl - pt - ru - zh tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:51741 - loss:CoSENTLoss base_model: RomainDarous/pre_training_original_model widget: - source_sentence: Starsza para azjatycka pozuje z noworodkiem przy stole obiadowym. sentences: - Koszykarz ma zamiar zdobyć punkty dla swojej drużyny. - Grupa starszych osób pozuje wokół stołu w jadalni. - Możliwe, że układ słoneczny taki jak nasz może istnieć poza galaktyką. - source_sentence: Englisch arbeitet überall mit Menschen, die Dinge kaufen und verkaufen, und in der Gastfreundschaft und im Tourismusgeschäft. sentences: - Ich bin in Maharashtra (einschließlich Mumbai) und Andhra Pradesh herumgereist, und ich hatte kein Problem damit, nur mit Englisch auszukommen. - 'Ein griechischsprachiger Sklave (δούλος, doulos) würde seinen Herrn, glaube ich, κύριος nennen (translit: kurios; Herr, Herr, Herr, Herr; Vokativform: κύριε).' - Das Paar lag auf dem Bett. - source_sentence: Si vous vous comprenez et comprenez votre ennemi, vous aurez beaucoup plus de chances de gagner n'importe quelle bataille. sentences: - 'Outre les probabilités de gagner une bataille théorique, cette citation a une autre signification : l''importance de connaître/comprendre les autres.' - Une femme et un chien se promènent ensemble. - Un homme joue de la guitare. - source_sentence: Un homme joue de la harpe. sentences: - Une femme joue de la guitare. - une femme a un enfant. - Un groupe de personnes est debout et assis sur le sol la nuit. - source_sentence: Dois cães a lutar na neve. sentences: - Dois cães brincam na neve. - Pode sempre perguntar, então é a escolha do autor a aceitar ou não. - Um gato está a caminhar sobre chão de madeira dura. 
datasets: - PhilipMay/stsb_multi_mt pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine model-index: - name: SentenceTransformer based on RomainDarous/pre_training_original_model results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts eval type: sts-eval metrics: - type: pearson_cosine value: 0.649351613026743 name: Pearson Cosine - type: spearman_cosine value: 0.6712113629733555 name: Spearman Cosine - type: pearson_cosine value: 0.6648874938903813 name: Pearson Cosine - type: spearman_cosine value: 0.6859979455545288 name: Spearman Cosine - type: pearson_cosine value: 0.6574990404767099 name: Pearson Cosine - type: spearman_cosine value: 0.6819347305734045 name: Spearman Cosine - type: pearson_cosine value: 0.6482851200513846 name: Pearson Cosine - type: spearman_cosine value: 0.6739057551228634 name: Spearman Cosine - type: pearson_cosine value: 0.657747388798702 name: Pearson Cosine - type: spearman_cosine value: 0.6797522820481435 name: Spearman Cosine - type: pearson_cosine value: 0.580138787555855 name: Pearson Cosine - type: spearman_cosine value: 0.6025843591291092 name: Spearman Cosine - type: pearson_cosine value: 0.6445711160678915 name: Pearson Cosine - type: spearman_cosine value: 0.6738244742184887 name: Spearman Cosine - type: pearson_cosine value: 0.6060638359389463 name: Pearson Cosine - type: spearman_cosine value: 0.6210827296807453 name: Spearman Cosine - type: pearson_cosine value: 0.6672294139281439 name: Pearson Cosine - type: spearman_cosine value: 0.6864882079409924 name: Spearman Cosine - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.6279093972489541 name: Pearson Cosine - type: spearman_cosine value: 0.6320355986028895 name: Spearman Cosine - type: pearson_cosine value: 0.6433522116833627 name: Pearson Cosine - type: spearman_cosine value: 0.658000076471118 name: Spearman Cosine - type: pearson_cosine value: 0.6271929274305698 name: Pearson Cosine - type: spearman_cosine value: 0.6229896619978917 name: Spearman Cosine - type: pearson_cosine value: 0.6391062028706688 name: Pearson Cosine - type: spearman_cosine value: 0.6417698712729121 name: Spearman Cosine - type: pearson_cosine value: 0.622947898324511 name: Pearson Cosine - type: spearman_cosine value: 0.6179788172853071 name: Spearman Cosine - type: pearson_cosine value: 0.5903164175964553 name: Pearson Cosine - type: spearman_cosine value: 0.5887507390354803 name: Spearman Cosine - type: pearson_cosine value: 0.640080846863563 name: Pearson Cosine - type: spearman_cosine value: 0.6391082728350455 name: Spearman Cosine - type: pearson_cosine value: 0.6172821161239198 name: Pearson Cosine - type: spearman_cosine value: 0.6180296923884917 name: Spearman Cosine - type: pearson_cosine value: 0.6607896399210559 name: Pearson Cosine - type: spearman_cosine value: 0.6616750284666137 name: Spearman Cosine --- # SentenceTransformer based on RomainDarous/pre_training_original_model This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [RomainDarous/pre_training_original_model](https://huggingface.co/RomainDarous/pre_training_original_model) on the [multi_stsb_de](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_es](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_fr](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), 
[multi_stsb_it](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_nl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_pl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_pt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_ru](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) and [multi_stsb_zh](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) datasets. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [RomainDarous/pre_training_original_model](https://huggingface.co/RomainDarous/pre_training_original_model)
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
  - [multi_stsb_de](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
  - [multi_stsb_es](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
  - [multi_stsb_fr](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
  - [multi_stsb_it](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
  - [multi_stsb_nl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
  - [multi_stsb_pl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
  - [multi_stsb_pt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
  - [multi_stsb_ru](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
  - [multi_stsb_zh](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
- **Languages:** de, en, es, fr, it, nl, pl, pt, ru, zh

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("RomainDarous/multists_finetuned_original_model")
# Run inference
sentences = [
    'Dois cães a lutar na neve.',
    'Dois cães brincam na neve.',
    'Pode sempre perguntar, então é a escolha do autor a aceitar ou não.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 512]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Evaluation

### Metrics

#### Semantic Similarity

* Datasets: `sts-eval`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test` and `sts-test`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | sts-eval   | sts-test   |
|:--------------------|:-----------|:-----------|
| pearson_cosine      | 0.6494     | 0.6608     |
| **spearman_cosine** | **0.6712** | **0.6617** |

#### Semantic Similarity

* Dataset: `sts-eval`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value     |
|:--------------------|:----------|
| pearson_cosine      | 0.6649    |
| **spearman_cosine** | **0.686** |

#### Semantic Similarity

* Dataset: `sts-eval`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.6575     |
| **spearman_cosine** | **0.6819** |

#### Semantic Similarity

* Dataset: `sts-eval`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.6483     |
| **spearman_cosine** | **0.6739** |

#### Semantic Similarity

* Dataset: `sts-eval`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.6577     |
| **spearman_cosine** | **0.6798** |

#### Semantic Similarity

* Dataset: `sts-eval`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.5801     |
| **spearman_cosine** | **0.6026** |

#### Semantic Similarity

* Dataset: `sts-eval`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.6446     |
| **spearman_cosine** | **0.6738** |

#### Semantic Similarity

* Dataset: `sts-eval`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.6061     |
|
**spearman_cosine** | **0.6211** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6672 | | **spearman_cosine** | **0.6865** | ## Training Details ### Training Datasets #### multi_stsb_de * Dataset: [multi_stsb_de](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:---------------------------------------------------------------|:--------------------------------------------------------------------------|:--------------------------------| | Ein Flugzeug hebt gerade ab. | Ein Flugzeug hebt gerade ab. | 1.0 | | Ein Mann spielt eine große Flöte. | Ein Mann spielt eine Flöte. | 0.7599999904632568 | | Ein Mann streicht geriebenen Käse auf eine Pizza. | Ein Mann streicht geriebenen Käse auf eine ungekochte Pizza. | 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_es * Dataset: [multi_stsb_es](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:----------------------------------------------------------------|:----------------------------------------------------------------------|:--------------------------------| | Un avión está despegando. | Un avión está despegando. | 1.0 | | Un hombre está tocando una gran flauta. | Un hombre está tocando una flauta. | 0.7599999904632568 | | Un hombre está untando queso rallado en una pizza. | Un hombre está untando queso rallado en una pizza cruda. 
| 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_fr * Dataset: [multi_stsb_fr](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:---------------------------------------------------------------------|:--------------------------------| | Un avion est en train de décoller. | Un avion est en train de décoller. | 1.0 | | Un homme joue d'une grande flûte. | Un homme joue de la flûte. | 0.7599999904632568 | | Un homme étale du fromage râpé sur une pizza. | Un homme étale du fromage râpé sur une pizza non cuite. | 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_it * Dataset: [multi_stsb_it](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------| | Un aereo sta decollando. | Un aereo sta decollando. | 1.0 | | Un uomo sta suonando un grande flauto. | Un uomo sta suonando un flauto. | 0.7599999904632568 | | Un uomo sta spalmando del formaggio a pezzetti su una pizza. | Un uomo sta spalmando del formaggio a pezzetti su una pizza non cotta. 
| 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_nl * Dataset: [multi_stsb_nl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------|:--------------------------------------------------------------------|:--------------------------------| | Er gaat een vliegtuig opstijgen. | Er gaat een vliegtuig opstijgen. | 1.0 | | Een man speelt een grote fluit. | Een man speelt fluit. | 0.7599999904632568 | | Een man smeert geraspte kaas op een pizza. | Een man strooit geraspte kaas op een ongekookte pizza. | 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_pl * Dataset: [multi_stsb_pl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:------------------------------------------------------------------------|:--------------------------------| | Samolot wystartował. | Samolot wystartował. | 1.0 | | Człowiek gra na dużym flecie. | Człowiek gra na flecie. | 0.7599999904632568 | | Mężczyzna rozsiewa na pizzy rozdrobniony ser. | Mężczyzna rozsiewa rozdrobniony ser na niegotowanej pizzy. 
| 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_pt * Dataset: [multi_stsb_pt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------| | Um avião está a descolar. | Um avião aéreo está a descolar. | 1.0 | | Um homem está a tocar uma grande flauta. | Um homem está a tocar uma flauta. | 0.7599999904632568 | | Um homem está a espalhar queijo desfiado numa pizza. | Um homem está a espalhar queijo desfiado sobre uma pizza não cozida. | 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_ru * Dataset: [multi_stsb_ru](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:------------------------------------------------|:---------------------------------------------------------------------|:--------------------------------| | Самолет взлетает. | Взлетает самолет. | 1.0 | | Человек играет на большой флейте. | Человек играет на флейте. | 0.7599999904632568 | | Мужчина разбрасывает сыр на пиццу. | Мужчина разбрасывает измельченный сыр на вареную пиццу. 
| 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_zh * Dataset: [multi_stsb_zh](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:------------------------------|:----------------------------------|:--------------------------------| | 一架飞机正在起飞。 | 一架飞机正在起飞。 | 1.0 | | 一个男人正在吹一支大笛子。 | 一个人在吹笛子。 | 0.7599999904632568 | | 一名男子正在比萨饼上涂抹奶酪丝。 | 一名男子正在将奶酪丝涂抹在未熟的披萨上。 | 0.7599999904632568 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Datasets #### multi_stsb_de * Dataset: [multi_stsb_de](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:-------------------------------------------------------------|:-----------------------------------------------------------|:-------------------------------| | Ein Mann mit einem Schutzhelm tanzt. | Ein Mann mit einem Schutzhelm tanzt. | 1.0 | | Ein kleines Kind reitet auf einem Pferd. | Ein Kind reitet auf einem Pferd. | 0.949999988079071 | | Ein Mann verfüttert eine Maus an eine Schlange. | Der Mann füttert die Schlange mit einer Maus. 
| 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_es * Dataset: [multi_stsb_es](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:----------------------------------------------------------------------|:---------------------------------------------------------------------|:-------------------------------| | Un hombre con un casco está bailando. | Un hombre con un casco está bailando. | 1.0 | | Un niño pequeño está montando a caballo. | Un niño está montando a caballo. | 0.949999988079071 | | Un hombre está alimentando a una serpiente con un ratón. | El hombre está alimentando a la serpiente con un ratón. | 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_fr * Dataset: [multi_stsb_fr](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:-------------------------------------------------------------------------|:----------------------------------------------------------------------------|:-------------------------------| | Un homme avec un casque de sécurité est en train de danser. | Un homme portant un casque de sécurité est en train de danser. | 1.0 | | Un jeune enfant monte à cheval. | Un enfant monte à cheval. | 0.949999988079071 | | Un homme donne une souris à un serpent. | L'homme donne une souris au serpent. 
| 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_it * Dataset: [multi_stsb_it](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:------------------------------------------------------------------|:---------------------------------------------------------------|:-------------------------------| | Un uomo con l'elmetto sta ballando. | Un uomo che indossa un elmetto sta ballando. | 1.0 | | Un bambino piccolo sta cavalcando un cavallo. | Un bambino sta cavalcando un cavallo. | 0.949999988079071 | | Un uomo sta dando da mangiare un topo a un serpente. | L'uomo sta dando da mangiare un topo al serpente. | 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_nl * Dataset: [multi_stsb_nl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------|:-----------------------------------------------------|:-------------------------------| | Een man met een helm is aan het dansen. | Een man met een helm is aan het dansen. | 1.0 | | Een jong kind rijdt op een paard. | Een kind rijdt op een paard. | 0.949999988079071 | | Een man voedt een muis aan een slang. | De man voert een muis aan de slang. 
| 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_pl * Dataset: [multi_stsb_pl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:---------------------------------------------------|:---------------------------------------------------|:-------------------------------| | Tańczy mężczyzna w twardym kapeluszu. | Tańczy mężczyzna w twardym kapeluszu. | 1.0 | | Małe dziecko jedzie na koniu. | Dziecko jedzie na koniu. | 0.949999988079071 | | Człowiek karmi węża myszką. | Ten człowiek karmi węża myszką. | 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_pt * Dataset: [multi_stsb_pt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:------------------------------------------------------------|:-----------------------------------------------------------|:-------------------------------| | Um homem de chapéu duro está a dançar. | Um homem com um capacete está a dançar. | 1.0 | | Uma criança pequena está a montar a cavalo. | Uma criança está a montar a cavalo. | 0.949999988079071 | | Um homem está a alimentar um rato a uma cobra. | O homem está a alimentar a cobra com um rato. 
| 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_ru * Dataset: [multi_stsb_ru](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:------------------------------------------------------|:----------------------------------------------|:-------------------------------| | Человек в твердой шляпе танцует. | Мужчина в твердой шляпе танцует. | 1.0 | | Маленький ребенок едет верхом на лошади. | Ребенок едет на лошади. | 0.949999988079071 | | Мужчина кормит мышь змее. | Человек кормит змею мышью. | 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### multi_stsb_zh * Dataset: [multi_stsb_zh](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: sentence1, sentence2, and score * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | | | | * Samples: | sentence1 | sentence2 | score | |:---------------------------|:--------------------------|:-------------------------------| | 一个戴着硬帽子的人在跳舞。 | 一个戴着硬帽的人在跳舞。 | 1.0 | | 一个小孩子在骑马。 | 一个孩子在骑马。 | 0.949999988079071 | | 一个人正在用老鼠喂蛇。 | 那人正在给蛇喂老鼠。 | 1.0 | * Loss: [CoSENTLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 4 - `warmup_ratio`: 0.1 #### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
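The hyperparameters above are the Hugging Face `TrainingArguments` dump for this run. As a rough, hedged illustration only (not the exact training script), the sketch below shows how a comparable fine-tuning run could be wired up with `SentenceTransformerTrainer`, `CoSENTLoss`, and one of the `PhilipMay/stsb_multi_mt` splits listed earlier; the released model combines all nine language configurations, and the dataset column names (`sentence1`, `sentence2`, `similarity_score`) follow the public dataset and may differ from the preprocessing actually used.

```python
# Minimal training sketch, assuming the public stsb_multi_mt column layout.
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("RomainDarous/pre_training_original_model")

# One of the nine training datasets; this model was trained on all nine languages.
train_ds = load_dataset("PhilipMay/stsb_multi_mt", name="de", split="train")
eval_ds = load_dataset("PhilipMay/stsb_multi_mt", name="de", split="dev")

def normalize(batch):
    # The raw dataset stores similarity_score in [0, 5]; the card's "score" column is scaled to [0, 1].
    batch["score"] = [s / 5.0 for s in batch["similarity_score"]]
    return batch

train_ds = train_ds.map(normalize, batched=True, remove_columns=["similarity_score"])
eval_ds = eval_ds.map(normalize, batched=True, remove_columns=["similarity_score"])

args = SentenceTransformerTrainingArguments(
    output_dir="multists_finetuned_original_model",
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_ratio=0.1,
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss=CoSENTLoss(model),  # scale=20.0 and pairwise_cos_sim are the defaults
)
trainer.train()
```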
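The `sts-eval_spearman_cosine` and `sts-test_spearman_cosine` columns in the training logs below come from Sentence Transformers' `EmbeddingSimilarityEvaluator`, as described in the Evaluation section above. The following is a minimal sketch of how such an evaluation could be reproduced on one `stsb_multi_mt` dev split; the split, config name, and 0-5 score rescaling are assumptions based on the public dataset rather than the exact evaluator configuration used for this card.

```python
# Minimal evaluation sketch: scoring the fine-tuned model on the German STS-B dev split.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

model = SentenceTransformer("RomainDarous/multists_finetuned_original_model")

dev = load_dataset("PhilipMay/stsb_multi_mt", name="de", split="dev")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=dev["sentence1"],
    sentences2=dev["sentence2"],
    scores=[s / 5.0 for s in dev["similarity_score"]],  # rescale 0-5 gold scores to 0-1
    main_similarity=SimilarityFunction.COSINE,
    name="sts-eval-de",
)

results = evaluator(model)
# Returns a dict of metrics, e.g. keys like 'sts-eval-de_pearson_cosine' and 'sts-eval-de_spearman_cosine'.
print(results)
```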
### Training Logs

| Epoch | Step  | Training Loss | multi stsb de loss | multi stsb es loss | multi stsb fr loss | multi stsb it loss | multi stsb nl loss | multi stsb pl loss | multi stsb pt loss | multi stsb ru loss | multi stsb zh loss | sts-eval_spearman_cosine | sts-test_spearman_cosine |
|:-----:|:-----:|:-------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------------:|:------------------------:|
| 1.0   | 3240  | 4.6594        | 4.6488             | 4.6520             | 4.6401             | 4.6637             | 4.6435             | 4.6943             | 4.6786             | 4.6902             | 4.6578             | 0.5620                   | -                        |
| 2.0   | 6480  | 4.4285        | 4.6860             | 4.6755             | 4.6796             | 4.6655             | 4.6472             | 4.7655             | 4.6910             | 4.7783             | 4.6939             | 0.6592                   | -                        |
| 3.0   | 9720  | 4.1541        | 4.9416             | 5.0391             | 4.9025             | 4.9229             | 4.9449             | 5.0618             | 5.0057             | 5.0001             | 4.9986             | 0.6764                   | -                        |
| 4.0   | 12960 | 3.8671        | 5.3776             | 5.5136             | 5.3842             | 5.3216             | 5.3303             | 5.4847             | 5.4591             | 5.3623             | 5.4139             | 0.6865                   | 0.6617                   |

### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.3.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### CoSENTLoss

```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```