CrossEncoder based on almanach/camembertv2-base

This is a Cross Encoder model finetuned from almanach/camembertv2-base using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: almanach/camembertv2-base
  • Maximum Sequence Length: 1024 tokens
  • Number of Output Labels: 1 label
  • Model Size: 112M parameters (F32 safetensors)
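
A quick sketch to confirm these settings after loading; the attribute paths assume the standard Sentence Transformers CrossEncoder internals (a Hugging Face tokenizer and sequence-classification model underneath):

from sentence_transformers import CrossEncoder

model = CrossEncoder("tomaarsen/reranker-camembertv2-base-fr-lambda")
print(model.tokenizer.model_max_length)  # expected: 1024
print(model.model.config.num_labels)     # expected: 1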

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: https://huggingface.co/tomaarsen/reranker-camembertv2-base-fr-lambda

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-camembertv2-base-fr-lambda")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
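
Each entry in ranks pairs a corpus_id (the index of the document in the input list) with its score, sorted from most to least relevant. A small usage sketch, reusing the documents from the example above:

documents = [
    'There are on average between 55 and 80 calories in an egg depending on its size.',
    'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
    'Most of the calories in an egg come from the yellow yolk in the center.',
]
for entry in ranks:
    print(f"{entry['score']:.4f}\t{documents[entry['corpus_id']]}")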

Evaluation

Metrics

Cross Encoder Reranking

Evaluated on the swim_ir_dev set (see the Training Logs below); the value in parentheses is the change relative to the baseline ranking.

Metric    Value
map       0.6059 (+0.1333)
mrr@10    0.6052 (+0.1371)
ndcg@10   0.6217 (+0.1206)
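
A minimal sketch (not the evaluation script behind these numbers; the query, documents, and relevance labels are hypothetical) of computing an ndcg@10-style score from the model's predictions with scikit-learn:

from sentence_transformers import CrossEncoder
from sklearn.metrics import ndcg_score

model = CrossEncoder("tomaarsen/reranker-camembertv2-base-fr-lambda")

query = "Combien de calories dans un oeuf ?"
documents = [
    "Un oeuf contient en moyenne entre 55 et 80 calories selon sa taille.",
    "La plupart des calories d'un oeuf viennent du jaune.",
    "La tour Eiffel mesure 330 metres de haut.",
]
relevance = [1, 1, 0]  # hypothetical binary relevance labels

scores = model.predict([(query, doc) for doc in documents])
# ndcg_score expects 2D arrays with one row per query
print(ndcg_score([relevance], [scores], k=10))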

Training Details

Training Dataset

Unnamed Dataset

  • Size: 100,000 training samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:

    Column  Type    Details
    query   string  min: 0 characters, mean: 37.74 characters, max: 157 characters
    docs    list    size: 6 elements
    labels  list    size: 6 elements
  • Samples (document strings truncated):
    query docs labels
    ["L'ambitus exigé par le rôle-titre est plus problématique : la plus haute note est un "si" aigu, ce qui n'est pas anormal pour une soprano ou une mezzo-soprano, alors que la plus basse est un "sol" bémol grave dans le registre alto (et normalement au-dessous du registre d'une mezzo-soprano "standard"). Compte tenu d'une telle tessiture, qui ressemble à celle de nombreux rôles de mezzo comme Carmen et Amneris, on pourrait croire qu'un soprano aigu n'est pas essentiel à la pièce, mais c'est bien le contraire ; la plupart des sopranos graves qui ont abordé ce rôle ont imposé un tel effort à leur voix tout au long de l'opéra, qu'elles se retrouvaient épuisées au moment de la scène finale (la partie la plus éprouvante pour le rôle-titre). Ce rôle est l'exemple classique de la différence qui existe entre tessiture et ambitus : tandis que des mezzos peuvent exécuter une note aigüe (comme dans "Carmen"), ou même soutenir temporairement une tessiture tendue, il est impossible pour un... [1, 0, 0, 0, 0, ...]
    ["Les saisons 2 à 6 sont produites par Télé-Vision V Inc., filiale de Groupe Télé-Vision Inc. Lors de la saison d'hiver 2006, l'émission était animée par Isabelle Maréchal et Virginie Coossa. Pour les saisons 3,4,5 Marie Plourde a remplacé Isabelle Maréchal, alors que Virginie Coossa est demeurée coanimatrice. Lors des saisons 5 et 6, Kim Rusk, la gagnante de la saison 3, était la coanimatrice. La saison 6 sera animée par Pierre-Yves Lord.", ',{"type": "ExternalData", "service":"geoshape","ids": "Q40","properties": {"fill":"#FF0000","stroke-width":0,"description": "Autriche"}}]', '! scope=col width="10%" Pages ! scope=col width="25%"
    ["En 1963, Bernard et Françoise Moitessier quittent le port de Marseille, pour un voyage de noces. Ils prennent le détroit de Gibraltar et se dirigent vers les îles Canaries où il retrouve Pierre Deshumeurs, le compagnon du "Snark". Les enfants de Françoise les rejoignent le temps des vacances scolaires. Les Moitessier poursuivent ensuite vers les Antilles, puis le canal de Panama, avant de s'arrêter longuement dans l'archipel des Galápagos, où certaines îles reculées de toutes civilisations accueillent une faune et une flore exceptionnelles qui retiennent l'attention du couple. Ils rejoignent ensuite la Polynésie française où ils restent plusieurs mois.", ',{"type": "ExternalData", "service":"geoshape","ids": "Q40","properties": {"fill":"#FF0000","stroke-width":0,"description": "Autriche"}}]', '! scope=col width="10%" Pages ! scope=col width="25%"
  • Loss: LambdaLoss (a construction sketch follows this list) with these parameters:
    {
        "weighting_scheme": "sentence_transformers.cross_encoder.losses.LambdaLoss.NDCGLoss2PPScheme",
        "k": null,
        "sigma": 1.0,
        "eps": 1e-10,
        "reduction_log": "binary",
        "activation_fct": "torch.nn.modules.linear.Identity",
        "mini_batch_size": 8
    }
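
A minimal construction sketch for this loss, using the LambdaLoss and NDCGLoss2PPScheme import paths given in the parameter dump above; the remaining arguments are left at the values listed:

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import LambdaLoss
from sentence_transformers.cross_encoder.losses.LambdaLoss import NDCGLoss2PPScheme

# Start from the base model with a single output label, as in this card
model = CrossEncoder("almanach/camembertv2-base", num_labels=1, max_length=1024)
loss = LambdaLoss(
    model,
    weighting_scheme=NDCGLoss2PPScheme(),
    k=None,
    sigma=1.0,
    eps=1e-10,
    mini_batch_size=8,
)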
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • load_best_model_at_end: True
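
A minimal sketch of a training run with these hyperparameters, assuming the CrossEncoderTrainer API from the Sentence Transformers version listed under Framework Versions; output_dir and the dataset variables are placeholders, and model and loss are the objects from the loss sketch above:

from sentence_transformers.cross_encoder import (
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)

args = CrossEncoderTrainingArguments(
    output_dir="reranker-camembertv2-base-fr-lambda",  # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
)
trainer = CrossEncoderTrainer(
    model=model,                  # CrossEncoder from the loss sketch above
    args=args,
    train_dataset=train_dataset,  # dataset with query, docs, labels columns
    eval_dataset=eval_dataset,    # placeholder held-out split
    loss=loss,
)
trainer.train()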

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss swim_ir_dev_ndcg@10
-1 -1 - 0.0784 (-0.4228)
0.0002 1 2.0475 -
0.016 100 2.065 -
0.032 200 1.9662 -
0.048 300 0.9965 -
0.064 400 0.7667 -
0.08 500 0.6547 0.5961 (+0.0950)
0.096 600 0.5899 -
0.112 700 0.5331 -
0.128 800 0.4637 -
0.144 900 0.4826 -
0.16 1000 0.4249 0.6012 (+0.1000)
0.176 1100 0.4271 -
0.192 1200 0.4071 -
0.208 1300 0.3594 -
0.224 1400 0.401 -
0.24 1500 0.4171 0.5900 (+0.0888)
0.256 1600 0.3728 -
0.272 1700 0.3242 -
0.288 1800 0.3665 -
0.304 1900 0.3367 -
0.32 2000 0.3259 0.6134 (+0.1122)
0.336 2100 0.381 -
0.352 2200 0.3289 -
0.368 2300 0.3234 -
0.384 2400 0.3794 -
0.4 2500 0.3322 0.6070 (+0.1058)
0.416 2600 0.3139 -
0.432 2700 0.3427 -
0.448 2800 0.3162 -
0.464 2900 0.2899 -
0.48 3000 0.3571 0.6166 (+0.1155)
0.496 3100 0.3312 -
0.512 3200 0.3082 -
0.528 3300 0.2839 -
0.544 3400 0.3649 -
0.56 3500 0.325 0.6108 (+0.1097)
0.576 3600 0.3042 -
0.592 3700 0.2785 -
0.608 3800 0.3095 -
0.624 3900 0.3053 -
0.64 4000 0.293 0.6131 (+0.1119)
0.656 4100 0.2987 -
0.672 4200 0.2675 -
0.688 4300 0.2977 -
0.704 4400 0.2881 -
0.72 4500 0.2862 0.6186 (+0.1174)
0.736 4600 0.2996 -
0.752 4700 0.2724 -
0.768 4800 0.2442 -
0.784 4900 0.2923 -
0.8 5000 0.2691 0.6217 (+0.1206) *
0.816 5100 0.3042 -
0.832 5200 0.2654 -
0.848 5300 0.3059 -
0.864 5400 0.2571 -
0.88 5500 0.2741 0.6174 (+0.1162)
0.896 5600 0.3009 -
0.912 5700 0.2669 -
0.928 5800 0.2272 -
0.944 5900 0.2673 -
0.96 6000 0.2674 0.6194 (+0.1182)
0.976 6100 0.2551 -
0.992 6200 0.2981 -
-1 -1 - 0.6217 (+0.1206)
  • The row marked with * denotes the saved checkpoint.

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.680 kWh
  • Carbon Emitted: 0.264 kg of CO2
  • Hours Used: 1.921 hours

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 3.5.0.dev0
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

LambdaLoss

@inproceedings{wang2018lambdaloss,
    title = "The LambdaLoss Framework for Ranking Metric Optimization",
    author = "Wang, Xuanhui and Li, Cheng and Golbandi, Nadav and Bendersky, Michael and Najork, Marc",
    booktitle = "Proceedings of the 27th ACM International Conference on Information and Knowledge Management",
    pages = "1313--1322",
    year = "2018",
}