CrossEncoder based on BAAI/bge-reranker-v2-m3

This is a Cross Encoder model finetuned from BAAI/bge-reranker-v2-m3 using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: BAAI/bge-reranker-v2-m3
  • Maximum Sequence Length: 512 tokens
  • Number of Output Labels: 1 label

Model Sources

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("ingridchien/harvard-loop-reranker")
# Get scores for pairs of texts
pairs = [
    ['A Hydro Flask in a light brown color with a small hand logo.', 'A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents, appears to be made of metal, and seems to be in new condition with tags still attached.'],
    ['A black smartphone.', 'The image shows four used smartphones, including a white and black Samsung smartphone, a black and silver phone of unknown brand, a white and black Nokia phone, and a white Apple iPhone, all appearing to be between 4 and 5 inches in screen size.'],
    ['A purple pencil case with a unicorn design.', 'A new, mint green hard-shell pencil case with a ribbed texture and a central circular illustration of a unicorn with a rainbow mane.'],
    ['A folded, dark blue umbrella has a slightly crinkled matching fabric case and its handle is still wrapped in clear plastic.', 'There are two blue umbrellas.'],
    ['a black messenger bag with purple stitching.', 'A gray-green backpack with black mesh padding and an orange "NANEU PRO" tag on the side.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'A Hydro Flask in a light brown color with a small hand logo.',
    [
        'A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents, appears to be made of metal, and seems to be in new condition with tags still attached.',
        'The image shows four used smartphones, including a white and black Samsung smartphone, a black and silver phone of unknown brand, a white and black Nokia phone, and a white Apple iPhone, all appearing to be between 4 and 5 inches in screen size.',
        'A new, mint green hard-shell pencil case with a ribbed texture and a central circular illustration of a unicorn with a rainbow mane.',
        'There are two blue umbrellas.',
        'A gray-green backpack with black mesh padding and an orange "NANEU PRO" tag on the side.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
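
Each corpus_id in the output indexes into the list of candidate texts passed to rank(), and the entries come back sorted by score from most to least similar. Assuming the candidates above are stored in a list named documents (a name introduced here for illustration), the best match is simply:

# `documents` is the same candidate list passed to model.rank(...) above
best_match = documents[ranks[0]["corpus_id"]]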

Evaluation

Metrics

Cross Encoder Binary Classification

Metric Value
accuracy 0.8962
accuracy_threshold 0.2969
f1 0.7976
f1_threshold 0.2016
precision 0.7505
recall 0.8511
average_precision 0.8669
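
The thresholds above can be used to turn relevance scores into binary match/no-match decisions. A minimal sketch, assuming the scores returned by model.predict are on the same scale the evaluator used when computing these metrics:

from sentence_transformers import CrossEncoder

model = CrossEncoder("ingridchien/harvard-loop-reranker")

# accuracy_threshold from the table above; use f1_threshold (0.2016) to favor F1 instead
ACCURACY_THRESHOLD = 0.2969

pairs = [
    ['A black smartphone.', 'There are two blue umbrellas.'],
    ['A Hydro Flask in a light brown color with a small hand logo.',
     'A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents, appears to be made of metal, and seems to be in new condition with tags still attached.'],
]
scores = model.predict(pairs)
decisions = scores >= ACCURACY_THRESHOLD  # boolean array: True means the pair is predicted to match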

Training Details

Training Dataset

Unnamed Dataset

  • Size: 68,056 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    sentence_0 (string): min 18 characters, mean 104.96 characters, max 313 characters
    sentence_1 (string): min 15 characters, mean 116.53 characters, max 482 characters
    label (float): min 0.0, mean 0.23, max 1.0
  • Samples:
    Sample 1
      sentence_0: A Hydro Flask in a light brown color with a small hand logo.
      sentence_1: A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents, appears to be made of metal, and seems to be in new condition with tags still attached.
      label: 1.0
    Sample 2
      sentence_0: A black smartphone.
      sentence_1: The image shows four used smartphones, including a white and black Samsung smartphone, a black and silver phone of unknown brand, a white and black Nokia phone, and a white Apple iPhone, all appearing to be between 4 and 5 inches in screen size.
      label: 0.0
    Sample 3
      sentence_0: A purple pencil case with a unicorn design.
      sentence_1: A new, mint green hard-shell pencil case with a ribbed texture and a central circular illustration of a unicorn with a rainbow mane.
      label: 0.0
  • Loss: BinaryCrossEntropyLoss with these parameters:
    {
        "activation_fn": "torch.nn.modules.linear.Identity",
        "pos_weight": null
    }
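
With an Identity activation and no pos_weight, the model's single output logit is fed directly into binary cross-entropy with logits. A minimal sketch of the underlying computation (the logit values below are made up for illustration):

import torch

logits = torch.tensor([2.3, -1.1])  # hypothetical raw scores for two text pairs
labels = torch.tensor([1.0, 0.0])   # gold values from the label column
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
print(loss)  # scalar loss, averaged over the pairs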
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}
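
Putting the dataset, loss, and non-default hyperparameters together, a minimal training sketch (the output path and the toy one-row splits are placeholders rather than the actual training script; unlisted arguments fall back to the defaults above):

from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("BAAI/bge-reranker-v2-m3", num_labels=1)
loss = BinaryCrossEntropyLoss(model)  # Identity activation, no pos_weight (the defaults)

# Tiny placeholder split in the sentence_0 / sentence_1 / label format described above;
# the real run used the 68,056-sample dataset summarized in the Training Dataset section.
train_dataset = Dataset.from_dict({
    "sentence_0": ["A Hydro Flask in a light brown color with a small hand logo."],
    "sentence_1": ["A large, light-brown Hydro Flask water bottle with a darker tan cap and black accents, appears to be made of metal, and seems to be in new condition with tags still attached."],
    "label": [1.0],
})
eval_dataset = train_dataset  # placeholder; a real run would hold out separate pairs

args = CrossEncoderTrainingArguments(
    output_dir="output/harvard-loop-reranker",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=5e-5,
    eval_strategy="steps",
    seed=42,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()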

Training Logs

Epoch Step Training Loss eval_average_precision
0.1175 500 0.3493 0.7918
0.2351 1000 0.3064 0.8216
0.3526 1500 0.2832 0.8328
0.4701 2000 0.2873 0.8408
0.5877 2500 0.2866 0.8502
0.7052 3000 0.2797 0.8499
0.8228 3500 0.2737 0.8525
0.9403 4000 0.2724 0.8563
1.0 4254 - 0.8587
1.0578 4500 0.2718 0.8565
1.1754 5000 0.264 0.8561
1.2929 5500 0.2642 0.8584
1.4104 6000 0.2604 0.8582
1.5280 6500 0.2593 0.8595
1.6455 7000 0.2498 0.8628
1.7630 7500 0.2515 0.8649
1.8806 8000 0.2504 0.8650
1.9981 8500 0.2624 0.8643
2.0 8508 - 0.8632
2.1157 9000 0.2481 0.8662
2.2332 9500 0.2483 0.8661
2.3507 10000 0.2543 0.8647
2.4683 10500 0.2473 0.8669

Framework Versions

  • Python: 3.12.10
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.1
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.11.0
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}