CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: microsoft/MiniLM-L12-H384-uncased
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
- Training Dataset: ms_marco
- Language: en
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-log")

# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ],
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
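In a typical pipeline the cross encoder rescores a candidate list produced by a cheaper first-stage retriever. The following is a minimal sketch of that pattern; the candidate documents are hypothetical examples, and top_k and return_documents are standard arguments of CrossEncoder.rank.

from sentence_transformers import CrossEncoder

model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-log")

query = "How many calories in an egg"
# Candidates from any first-stage retriever (BM25, bi-encoder, ...); hypothetical example documents.
candidates = [
    "There are on average between 55 and 80 calories in an egg depending on its size.",
    "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
    "Most of the calories in an egg come from the yellow yolk in the center.",
]

# Keep only the 2 highest-scoring candidates and return their text alongside the scores.
reranked = model.rank(query, candidates, top_k=2, return_documents=True)
for hit in reranked:
    print(f"{hit['score']:.4f}  {hit['text']}")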
Evaluation
Metrics
Cross Encoder Reranking
- Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100 and NanoNQ_R100
- Evaluated with CrossEncoderRerankingEvaluator with these parameters (a minimal usage sketch follows the results table below): { "at_k": 10, "always_rerank_positives": true }
Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
---|---|---|---|
map | 0.4552 (-0.0344) | 0.3062 (+0.0452) | 0.5268 (+0.1072) |
mrr@10 | 0.4395 (-0.0380) | 0.4933 (-0.0065) | 0.5314 (+0.1047) |
ndcg@10 | 0.5067 (-0.0338) | 0.3140 (-0.0110) | 0.5767 (+0.0761) |
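The same evaluator can be run on a custom reranking benchmark. The sketch below is not the exact setup used for the table above: the toy samples and their query/positive/documents schema are assumptions based on the evaluator's documented input format, while at_k and always_rerank_positives mirror the parameters listed above.

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-log")

# Hypothetical toy samples: each entry pairs a query with its known positives
# and the candidate documents to rerank.
samples = [
    {
        "query": "How many calories in an egg",
        "positive": ["There are on average between 55 and 80 calories in an egg depending on its size."],
        "documents": [
            "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
            "There are on average between 55 and 80 calories in an egg depending on its size.",
            "Most of the calories in an egg come from the yellow yolk in the center.",
        ],
    },
]

evaluator = CrossEncoderRerankingEvaluator(
    samples=samples,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # MAP / MRR@10 / NDCG@10 style metrics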
Cross Encoder Nano BEIR
- Dataset: NanoBEIR_R100_mean
- Evaluated with CrossEncoderNanoBEIREvaluator with these parameters (a sketch for re-running this evaluation follows the table below): { "dataset_names": [ "msmarco", "nfcorpus", "nq" ], "rerank_k": 100, "at_k": 10, "always_rerank_positives": true }
Metric | Value |
---|---|
map | 0.4294 (+0.0393) |
mrr@10 | 0.4881 (+0.0201) |
ndcg@10 | 0.4658 (+0.0104) |
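A minimal sketch of re-running this evaluation. It assumes the evaluator is importable from sentence_transformers.cross_encoder.evaluation and accepts the parameters listed above as constructor arguments, as the parameter dump suggests; the NanoBEIR subsets are downloaded automatically by the evaluator.

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-log")

# Rerank the top-100 candidates on the three NanoBEIR subsets used above.
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)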
Training Details
Training Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 78,704 training samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
 | query | docs | labels |
---|---|---|---|
type | string | list | list |
details | min: 12 characters, mean: 34.12 characters, max: 144 characters | min: 3 elements, mean: 6.50 elements, max: 10 elements | min: 3 elements, mean: 6.50 elements, max: 10 elements |
- Samples:
query: what animals were first on land
docs:
["Archaebacteria were the first animals to live on earth before any other organism. Archaebacteria have to be able live where there's no oxygen because there was no oxygen when they first appeared. juryrocket · 5 years ago. http://www.fossilmall.com/Cambrian_Shado... http://www.crystalinks.com/oldestanimal.... The first vertebrates on land were Ichthyostega and Acanthostega, early tetrapods that were ancestral to the amphibians.", 'One of the first four-legged creatures that walked on land moved like a fish known as a mudskipper, a study shows-dragging itself along on its forelimbs like crutches. The creature lived in floodplains on what is now Greenland during a period known geologically as the Devonian period-about 360 to 410 million years ago. Instead, the creature, known as Ichthyostega, would have hauled itself up on its front limbs like it was on crutches. Previously, scientists believed the creature would have walked like a salamander.The first 3D modelling of it showed that Ear...
labels: [1, 0, 0, 0, 0, ...]

query: government contingency fund definition
docs:
['A consolidated fund or the consolidated revenue fund is the term used for the main bank account of the government in many of the countries in the Commonwealth of Nations. Contents. The Westminster Parliament provides a sum of money annually to provide a budget for the Scottish Government and fund the operation of the Scottish Parliament and the salaries for judges of Scottish courts. This money is transferred from the UK Consolidated Fund into an account known as the Scottish Consolidated Fund.', 'According to Washington State law, the fund balance in the Contingency Fund is limited to 37.5 cents per $1,000 assessed valuation. For 2009, the legal limit is $3.86 million. Per the adopted Contingency Fund budget policy, the target balance is 10% of the General Fund’s budgeted expenditures, which corresponds to $2.34 million in 2009. Restoring the Contingency Fund to its target level will constitute the Council’s highest funding priority following the final draw needed to address the rev...
labels: [1, 0, 0, 0, 0, ...]

query: is malignant hypertension curable
docs:
['Malignant hypertension: Introduction. Malignant hypertension: Malignant hypertension is a condition characterized by very high blood pressure and swelling of the optic nerve. This type of hypertension is more common in people with kidney problems such as narrowed kidney blood vessels. Prognosis for Malignant hypertension. Prognosis for Malignant hypertension: The prognosis depends on how soon treatment is delivered after the onset of the condition. Prompt treatment can limit or prevent complications such as organ damage due to the high blood pressure. More about prognosis of Malignant hypertension.', "In many people, high blood pressure is the main cause of malignant hypertension. Missing doses of blood pressure medications can also cause it. In addition, there are certain medical conditions that can cause it. Normal blood pressure is below 140/90. A person with malignant hypertension has a blood pressure that's typically above 180/120. Malignant hypertension should be treated as a m...
labels: [1, 0, 0, 0, 0, ...]
- Loss: PListMLELoss with these parameters: { "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": null, "respect_input_order": true }
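The parameter dump above maps onto the loss constructor roughly as follows. This is a minimal sketch: the import path follows the dotted names shown above, and the keyword arguments mirror the listed parameters.

import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses.PListMLELoss import PListMLELoss, PListMLELambdaWeight

model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Position-aware ListMLE over each (query, docs, labels) sample: the documents are
# scored by the model and the listwise likelihood is weighted by rank position.
loss = PListMLELoss(
    model=model,
    lambda_weight=PListMLELambdaWeight(),
    activation_fct=torch.nn.Identity(),
    mini_batch_size=None,
    respect_input_order=True,
)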
Evaluation Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 1,000 evaluation samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
 | query | docs | labels |
---|---|---|---|
type | string | list | list |
details | min: 12 characters, mean: 34.01 characters, max: 86 characters | min: 1 elements, mean: 5.50 elements, max: 10 elements | min: 1 elements, mean: 5.50 elements, max: 10 elements |
- Samples:
query docs labels what hormone triggers ovulation
['From Wikipedia, the free encyclopedia. Luteinizing hormone (LH, also known as lutropin and sometimes lutrophin) is a hormone produced by gonadotropic cells in the anterior pituitary gland. In females, an acute rise of LH ( LH surge ) triggers ovulation and development of the corpus luteum. Luteinizing hormone (LH, also known as lutropin and sometimes lutrophin) is a hormone produced by gonadotropic cells in the anterior pituitary gland', 'Luteinising hormone (LH) is made by the pituitary gland and stimulates the mature egg to be released from the ovary, this is called ovulation. So, to answer the question: … Luteinising Hormone (LH). ', '1 The so–called LH surge causes the release of the egg from the ovary and you‘re ovulating. 2 Ovulation normally occurs 24 to 36 hours after the LH surge, which is why LH is a good predictor for peak fertility. Ovulation. The level of estrogen in your body is still increasing and it eventually causes a rapid rise in luteinising hormone (often cal...
labels: [1, 0, 0, 0, 0, ...]

query: how much are bills season tickets
docs:
['Last year, that cost $720, $80 for nine games. This year, it averages $90 over 10 games, $900 for the season. The Bills also announced Thursday that they will debut a variable ticket-pricing plan for preseason and regular-season games.', 'Bills Extra Points Credit Cardholders enjoy 20% off every purchase at NFLShop.com, a flexible financing option when renewing season tickets on the card AND they earn DOUBLE POINTS for every $1 spent on Bills tickets, inside Ralph Wilson Stadium, and at the Bills Store.', 'Buy Buffalo Bills tickets to join fellow fans in filling Ralph Wilson Stadium on game day. From the collisions on the field to the fans singing the Shout Song in the stands, attending a Bills home game is truly a one-of-a-kind experience.', 'Buffalo Bills Tickets. The Buffalo Bills hit the court this season eager to win. Get your NFL tickets to see the Buffalo Bills organization play Football in Ralph Wilson Stadium. Get you Buffalo Bills tickets today and get ready to make some no...
labels: [1, 0, 0, 0, 0, ...]

query: cost of materials to build a garage
docs:
['Cost to build a new 2 car garage will vary from $30 to $41 per square foot for standard construction including labor cost and materials prices.', '1 Upgrading the quality of materials can bump the cost to $55 a square foot, or $13,200 for a minimum single-car structure, $21,000 for a two-car garage and $47,000 or more for the dream version.', 'For the building of the garages, the typical costs will include: 1 Two car-According to Hanley Wood and their Remodeling magazine, the cost to construct a standard two-car garage addition is $58,432. 2 This translates to $86 per square foot; and.', '1 Kits with pre-cut or pre-fab materials to build a steel or wood garage run $5,000-$14,000 or more, depending on the size of the structure and the quality of materials.', 'If you are in an area with higher-than-average cost of living you can use $50 to $55. If there are complications or you prefer top quality materials and components the price can go up.So, generally speaking, a 24 ft.sq., two ca...
labels: [1, 0, 0, 0, 0, ...]
- Loss: PListMLELoss with these parameters: { "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": null, "respect_input_order": true }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- seed: 12
- bf16: True
- load_best_model_at_end: True
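The non-default values above correspond to standard training arguments. A rough sketch of how they might be passed, assuming the CrossEncoderTrainingArguments API from recent sentence-transformers releases and a hypothetical output_dir; the resulting args object would then feed a CrossEncoderTrainer together with the ms_marco splits, the PListMLELoss, and the NanoBEIR evaluator described elsewhere in this card.

from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments

args = CrossEncoderTrainingArguments(
    output_dir="reranker-msmarco-minilm-plistmle",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    load_best_model_at_end=True,
)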
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 12
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
---|---|---|---|---|---|---|---|
-1 | -1 | - | - | 0.0364 (-0.5040) | 0.2321 (-0.0930) | 0.0300 (-0.4706) | 0.0995 (-0.3559) |
0.0002 | 1 | 1.9527 | - | - | - | - | - |
0.0508 | 250 | 1.8465 | - | - | - | - | - |
0.1016 | 500 | 1.6905 | 1.6672 | 0.0443 (-0.4961) | 0.2497 (-0.0754) | 0.0663 (-0.4344) | 0.1201 (-0.3353) |
0.1525 | 750 | 1.6665 | - | - | - | - | - |
0.2033 | 1000 | 1.6623 | 1.6569 | 0.0822 (-0.4582) | 0.2517 (-0.0734) | 0.1883 (-0.3124) | 0.1741 (-0.2813) |
0.2541 | 1250 | 1.6604 | - | - | - | - | - |
0.3049 | 1500 | 1.6486 | 1.6432 | 0.3926 (-0.1478) | 0.2794 (-0.0457) | 0.3401 (-0.1606) | 0.3373 (-0.1180) |
0.3558 | 1750 | 1.6472 | - | - | - | - | - |
0.4066 | 2000 | 1.6436 | 1.6382 | 0.4531 (-0.0873) | 0.2831 (-0.0419) | 0.4918 (-0.0089) | 0.4093 (-0.0460) |
0.4574 | 2250 | 1.6449 | - | - | - | - | - |
0.5082 | 2500 | 1.6419 | 1.6362 | 0.4299 (-0.1105) | 0.2850 (-0.0400) | 0.5065 (+0.0058) | 0.4071 (-0.0482) |
0.5591 | 2750 | 1.6449 | - | - | - | - | - |
0.6099 | 3000 | 1.6436 | 1.6316 | 0.5067 (-0.0338) | 0.3140 (-0.0110) | 0.5767 (+0.0761) | 0.4658 (+0.0104) |
0.6607 | 3250 | 1.641 | - | - | - | - | - |
0.7115 | 3500 | 1.6372 | 1.6321 | 0.5166 (-0.0238) | 0.3161 (-0.0089) | 0.5590 (+0.0584) | 0.4639 (+0.0085) |
0.7624 | 3750 | 1.6388 | - | - | - | - | - |
0.8132 | 4000 | 1.6337 | 1.6294 | 0.4844 (-0.0560) | 0.3146 (-0.0104) | 0.5672 (+0.0665) | 0.4554 (+0.0000) |
0.8640 | 4250 | 1.637 | - | - | - | - | - |
0.9148 | 4500 | 1.638 | 1.6300 | 0.4975 (-0.0430) | 0.3111 (-0.0140) | 0.5655 (+0.0649) | 0.4580 (+0.0026) |
0.9656 | 4750 | 1.6393 | - | - | - | - | - |
-1 | -1 | - | - | 0.5067 (-0.0338) | 0.3140 (-0.0110) | 0.5767 (+0.0761) | 0.4658 (+0.0104) |
- The saved checkpoint corresponds to the row at step 3000 (epoch 0.6099), the best NanoBEIR_R100_mean_ndcg@10 (0.4658); its metrics match the final evaluation row of the table.
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.4.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
PListMLELoss
@inproceedings{lan2014position,
title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
booktitle={UAI},
volume={14},
pages={449--458},
year={2014}
}