
BGE M3

by Xenova (registry ID: hf-model--xenova--bge-m3)
Nexus Index
43.2 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 51
R: Recency 86
Q: Quality 50
Tech Context · Vital Performance

Downloads (30 days): 30.1K
FNI Score: 43.2 (audited)
Size class: Tiny
Params: –
Context: –
License: MIT (commercial use permitted)
Model Information Summary · Entity Passport

Registry ID: hf-model--xenova--bge-m3
License: MIT
Provider: huggingface
📜 Cite this model

Academic & Research Attribution

BibTeX

```bibtex
@misc{hf_model__xenova__bge_m3,
  author = {Xenova},
  title = {Bge M3 Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/xenova/bge-m3}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
```
APA Style
Xenova. (2026). Bge M3 [Model]. Free2AITools. https://huggingface.co/xenova/bge-m3

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🤗 HF Download

```bash
huggingface-cli download xenova/bge-m3
```

📦 Install Lib

```bash
pip install -U transformers
```

âš–ī¸ Nexus Index V2.0

43.2
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 51
Recency (R) 86
Quality (Q) 50

đŸ’Ŧ Index Insight

FNI V2.0 for Bge M3: Semantic (S:50), Authority (A:0), Popularity (P:51), Recency (R:86), Quality (Q:50).

Free2AITools Nexus Index

Verification Authority

Unbiased Data Node Refresh: VFS Live
---


Technical Deep Dive

This repository hosts https://huggingface.co/BAAI/bge-m3 with ONNX weights to be compatible with Transformers.js.

Usage (Transformers.js)

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

```bash
npm i @huggingface/transformers
```

You can then use the model to compute embeddings, as follows:

```js
import { pipeline } from '@huggingface/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-m3');

// Compute sentence embeddings
const texts = ["What is BGE M3?", "Definition of BM25"];
const embeddings = await extractor(texts, { pooling: 'cls', normalize: true });
console.log(embeddings);
// Tensor {
//   dims: [ 2, 1024 ],
//   type: 'float32',
//   data: Float32Array(2048) [ -0.0340719036757946, -0.04478546231985092, ... ],
//   size: 2048
// }

console.log(embeddings.tolist()); // Convert embeddings to a JavaScript list
// [
//   [ -0.0340719036757946, -0.04478546231985092, -0.004497686866670847, ... ],
//   [ -0.015383965335786343, -0.041989751160144806, -0.025820579379796982, ... ]
// ]
```
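The flat `Float32Array` above stores both rows of the `[2, 1024]` tensor back-to-back in row-major order, which is why `size` is 2048. A minimal sketch of that layout with toy dimensions (the `row` helper is illustrative, not part of the library):

```js
// Row-major layout: element (i, j) of a [rows, cols] tensor
// sits at flat index i * cols + j.
const dims = [2, 4];
const data = Float32Array.from([0, 1, 2, 3, 10, 11, 12, 13]);

// Extract row i of the flat buffer as a plain JavaScript array.
function row(data, dims, i) {
  const cols = dims[1];
  return Array.from(data.subarray(i * cols, (i + 1) * cols));
}

console.log(row(data, dims, 0)); // [ 0, 1, 2, 3 ]
console.log(row(data, dims, 1)); // [ 10, 11, 12, 13 ]
```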

You can also use the model for retrieval. For example:

```js
import { pipeline, cos_sim } from '@huggingface/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-m3');

// Define query to use for retrieval
const query = 'What is BGE M3?';

// List of documents you want to embed
const texts = [
  'BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.',
  'BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document',
];

// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'cls', normalize: true });

// Compute query embeddings
const query_embeddings = await extractor(query, { pooling: 'cls', normalize: true });

// Sort by cosine similarity score
const scores = embeddings.tolist().map(
  (embedding, i) => ({
    id: i,
    score: cos_sim(query_embeddings.data, embedding),
    text: texts[i],
  })
).sort((a, b) => b.score - a.score);
console.log(scores);
// [
//   { id: 0, score: 0.62532672968664, text: 'BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.' },
//   { id: 1, score: 0.33111060648806, text: 'BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document' },
// ]
```
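Because the embeddings are L2-normalized (`normalize: true`), the cosine similarity computed by `cos_sim` reduces to a plain dot product. A standalone sketch of that equivalence with made-up vectors (no model download needed; `dot` and `normalize` are illustrative helpers, not library functions):

```js
// Dot product of two equal-length numeric vectors.
function dot(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += a[i] * b[i];
  return sum;
}

// Scale a vector to unit (L2) length.
function normalize(v) {
  const norm = Math.sqrt(dot(v, v));
  return v.map((x) => x / norm);
}

const a = normalize([1, 2, 3]);
const b = normalize([2, 3, 4]);

// For unit vectors, cosine similarity equals the dot product.
console.log(dot(a, b).toFixed(4)); // 0.9926
console.log(dot(a, a).toFixed(4)); // 1.0000
```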

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named onnx).
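For the conversion step mentioned above, Optimum provides a CLI exporter. A sketch of the workflow (the output directory name is illustrative, and exporting the full bge-m3 model downloads several GB of weights):

```shell
# Install Optimum with ONNX export extras
pip install "optimum[exporters]"

# Export the original model to ONNX (weights land in ./bge-m3-onnx/)
optimum-cli export onnx --model BAAI/bge-m3 bge-m3-onnx/
```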

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠️ Verify licensing terms against the original source before commercial use (registry metadata lists MIT).

Social Proof

HuggingFace Hub: 30.1K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--xenova--bge-m3
slug: xenova--bge-m3
source: huggingface
author: Xenova
license: MIT
tags: transformers.js, onnx, xlm-roberta, feature-extraction, base_model:baai/bge-m3, base_model:quantized:baai/bge-m3, license:mit, region:us

⚙️ Technical Specs

architecture: null
params billions: null
context length: null
pipeline tag: feature-extraction

📊 Engagement & Metrics

downloads: 30,069
stars: 0
forks: 0

Data indexed from public sources. Updated daily.