# CrossEncoder based on BAAI/bge-reranker-v2-m3
This is a Cross Encoder model finetuned from BAAI/bge-reranker-v2-m3 using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details

### Model Description
- Model Type: Cross Encoder
- Base model: BAAI/bge-reranker-v2-m3
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
### Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
## Usage

### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("ingridchien/harvard-loop-reranker")

# Get scores for pairs of texts
pairs = [
    ['A Hydro Flask in a light brown color with a small hand logo.', 'A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents, appears to be made of metal, and seems to be in new condition with tags still attached.'],
    ['A black smartphone.', 'The image shows four used smartphones, including a white and black Samsung smartphone, a black and silver phone of unknown brand, a white and black Nokia phone, and a white Apple iPhone, all appearing to be between 4 and 5 inches in screen size.'],
    ['A purple pencil case with a unicorn design.', 'A new, mint green hard-shell pencil case with a ribbed texture and a central circular illustration of a unicorn with a rainbow mane.'],
    ['A folded, dark blue umbrella has a slightly crinkled matching fabric case and its handle is still wrapped in clear plastic.', 'There are two blue umbrellas.'],
    ['a black messenger bag with purple stitching.', 'A gray-green backpack with black mesh padding and an orange "NANEU PRO" tag on the side.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'A Hydro Flask in a light brown color with a small hand logo.',
    [
        'A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents, appears to be made of metal, and seems to be in new condition with tags still attached.',
        'The image shows four used smartphones, including a white and black Samsung smartphone, a black and silver phone of unknown brand, a white and black Nokia phone, and a white Apple iPhone, all appearing to be between 4 and 5 inches in screen size.',
        'A new, mint green hard-shell pencil case with a ribbed texture and a central circular illustration of a unicorn with a rainbow mane.',
        'There are two blue umbrellas.',
        'A gray-green backpack with black mesh padding and an orange "NANEU PRO" tag on the side.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
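Cross encoders like this one are typically used as a second-stage reranker behind a fast first-stage retriever. The sketch below shows a minimal retrieve-then-rerank pipeline; the bi-encoder name (`all-MiniLM-L6-v2`), the in-memory corpus, and the query are illustrative assumptions, not part of this model card.

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Stage 1 retriever (assumed bi-encoder, not part of this model card) and stage 2 reranker
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
reranker = CrossEncoder("ingridchien/harvard-loop-reranker")

corpus = [
    "A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents.",
    "A new, mint green hard-shell pencil case with a unicorn illustration.",
    "A gray-green backpack with black mesh padding.",
]
query = "A Hydro Flask in a light brown color with a small hand logo."

# Stage 1: retrieve candidate documents with the bi-encoder
corpus_embeddings = bi_encoder.encode(corpus, convert_to_tensor=True)
query_embedding = bi_encoder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]

# Stage 2: rescore the retrieved candidates with the cross encoder
pairs = [(query, corpus[hit["corpus_id"]]) for hit in hits]
rerank_scores = reranker.predict(pairs)
for hit, score in sorted(zip(hits, rerank_scores), key=lambda x: x[1], reverse=True):
    print(round(float(score), 3), corpus[hit["corpus_id"]])
```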
## Evaluation

### Metrics

#### Cross Encoder Binary Classification
- Dataset: `eval`
- Evaluated with `CEBinaryClassificationEvaluator`
| Metric | Value |
|---|---|
| accuracy | 0.8962 |
| accuracy_threshold | 0.2969 |
| f1 | 0.7976 |
| f1_threshold | 0.2016 |
| precision | 0.7505 |
| recall | 0.8511 |
| average_precision | 0.8669 |
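The thresholds above are the operating points selected by the evaluator on the `eval` split. As a minimal sketch (the pairs and labels below are placeholders, and it assumes the deprecated `CEBinaryClassificationEvaluator` used during training is still importable from `sentence_transformers.cross_encoder.evaluation`), scores from `predict` can be compared against the reported `f1_threshold`, and the same evaluator can be re-run on your own labeled pairs:

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CEBinaryClassificationEvaluator

model = CrossEncoder("ingridchien/harvard-loop-reranker")

# Binary match / no-match decisions using the reported f1_threshold
pairs = [
    ["A black smartphone.", "A white Apple iPhone with a cracked screen."],  # hypothetical pair
    ["A purple pencil case with a unicorn design.", "There are two blue umbrellas."],
]
scores = model.predict(pairs)
F1_THRESHOLD = 0.2016  # from the table above
decisions = [bool(score >= F1_THRESHOLD) for score in scores]
print(list(zip(scores, decisions)))

# Re-running the same evaluator on your own labeled pairs (1 = match, 0 = no match)
labels = [1, 0]  # placeholder labels for the placeholder pairs above
evaluator = CEBinaryClassificationEvaluator(sentence_pairs=pairs, labels=labels, name="eval")
print(evaluator(model))  # accuracy, f1, precision, recall, average_precision, ...
```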
## Training Details

### Training Dataset

#### Unnamed Dataset
- Size: 68,056 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:
  |  | sentence_0 | sentence_1 | label |
  |---|---|---|---|
  | type | string | string | float |
  | details | min: 18 characters<br>mean: 104.96 characters<br>max: 313 characters | min: 15 characters<br>mean: 116.53 characters<br>max: 482 characters | min: 0.0<br>mean: 0.23<br>max: 1.0 |
- Samples:

  | sentence_0 | sentence_1 | label |
  |---|---|---|
  | A Hydro Flask in a light brown color with a small hand logo. | A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents, appears to be made of metal, and seems to be in new condition with tags still attached. | 1.0 |
  | A black smartphone. | The image shows four used smartphones, including a white and black Samsung smartphone, a black and silver phone of unknown brand, a white and black Nokia phone, and a white Apple iPhone, all appearing to be between 4 and 5 inches in screen size. | 0.0 |
  | A purple pencil case with a unicorn design. | A new, mint green hard-shell pencil case with a ribbed texture and a central circular illustration of a unicorn with a rainbow mane. | 0.0 |
- Loss: `BinaryCrossEntropyLoss` with these parameters:

  ```json
  {
      "activation_fn": "torch.nn.modules.linear.Identity",
      "pos_weight": null
  }
  ```
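The `activation_fn` of `torch.nn.modules.linear.Identity` means the model's raw logit is passed straight to the binary cross-entropy (with logits) loss, and `pos_weight: null` means positive pairs are not re-weighted. A minimal sketch of constructing the same loss configuration, assuming the base model as the starting checkpoint:

```python
from torch import nn
from sentence_transformers.cross_encoder import CrossEncoder
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("BAAI/bge-reranker-v2-m3", num_labels=1)

# Identity activation: raw logits go into BCEWithLogitsLoss; pos_weight=None leaves
# positive pairs unweighted, matching the parameters listed above.
loss = BinaryCrossEntropyLoss(model, activation_fn=nn.Identity(), pos_weight=None)
```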
### Training Hyperparameters

#### Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
#### All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
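As a rough illustration of how these settings map onto code, the sketch below wires the listed non-default hyperparameters (steps-based evaluation, batch size 16) into a `CrossEncoderTrainer` run. The output directory, placeholder dataset, and evaluation split are assumptions for demonstration, not the original training setup.

```python
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("BAAI/bge-reranker-v2-m3", num_labels=1)

# Placeholder data with the same columns as the (unreleased) training set
train_dataset = Dataset.from_dict({
    "sentence_0": ["A black smartphone.", "A purple pencil case with a unicorn design."],
    "sentence_1": ["A white Apple iPhone.", "There are two blue umbrellas."],
    "label": [1.0, 0.0],
})
eval_dataset = train_dataset  # placeholder; the original run evaluated a held-out split

args = CrossEncoderTrainingArguments(
    output_dir="outputs/reranker",      # illustrative path
    eval_strategy="steps",              # non-default values listed above
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,                 # defaults reported under "All Hyperparameters"
    learning_rate=5e-5,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=BinaryCrossEntropyLoss(model),
)
trainer.train()
```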
### Training Logs
| Epoch | Step | Training Loss | eval_average_precision |
|---|---|---|---|
| 0.1175 | 500 | 0.3493 | 0.7918 |
| 0.2351 | 1000 | 0.3064 | 0.8216 |
| 0.3526 | 1500 | 0.2832 | 0.8328 |
| 0.4701 | 2000 | 0.2873 | 0.8408 |
| 0.5877 | 2500 | 0.2866 | 0.8502 |
| 0.7052 | 3000 | 0.2797 | 0.8499 |
| 0.8228 | 3500 | 0.2737 | 0.8525 |
| 0.9403 | 4000 | 0.2724 | 0.8563 |
| 1.0 | 4254 | - | 0.8587 |
| 1.0578 | 4500 | 0.2718 | 0.8565 |
| 1.1754 | 5000 | 0.264 | 0.8561 |
| 1.2929 | 5500 | 0.2642 | 0.8584 |
| 1.4104 | 6000 | 0.2604 | 0.8582 |
| 1.5280 | 6500 | 0.2593 | 0.8595 |
| 1.6455 | 7000 | 0.2498 | 0.8628 |
| 1.7630 | 7500 | 0.2515 | 0.8649 |
| 1.8806 | 8000 | 0.2504 | 0.8650 |
| 1.9981 | 8500 | 0.2624 | 0.8643 |
| 2.0 | 8508 | - | 0.8632 |
| 2.1157 | 9000 | 0.2481 | 0.8662 |
| 2.2332 | 9500 | 0.2483 | 0.8661 |
| 2.3507 | 10000 | 0.2543 | 0.8647 |
| 2.4683 | 10500 | 0.2473 | 0.8669 |
### Framework Versions
- Python: 3.12.10
- Sentence Transformers: 5.1.2
- Transformers: 4.57.1
- PyTorch: 2.9.1+cu128
- Accelerate: 1.11.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```