You can also use the model for retrieval. For example:
```js
import { pipeline, cos_sim } from '@huggingface/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-m3');

// Define query to use for retrieval
const query = 'What is BGE M3?';

// List of documents you want to embed
const texts = [
    'BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.',
    'BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document',
];

// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'cls', normalize: true });

// Compute query embeddings
const query_embeddings = await extractor(query, { pooling: 'cls', normalize: true });

// Sort by cosine similarity score
const scores = embeddings.tolist().map(
    (embedding, i) => ({
        id: i,
        score: cos_sim(query_embeddings.data, embedding),
        text: texts[i],
    })
).sort((a, b) => b.score - a.score);
console.log(scores);
// [
//   { id: 0, score: 0.62532672968664, text: 'BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.' },
//   { id: 1, score: 0.33111060648806, text: 'BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document' },
// ]
```
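With the documents ranked, retrieving the best match is just a matter of taking the first entry. The snippet below is a minimal continuation of the example above; the 0.5 score threshold is purely illustrative, not a recommended cutoff:

```js
// Take the highest-scoring document as the retrieval result
const bestMatch = scores[0];
console.log(`Best match (score ${bestMatch.score.toFixed(4)}): ${bestMatch.text}`);

// Optionally, keep only documents above an (illustrative) similarity threshold
const confident = scores.filter((s) => s.score > 0.5);
```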
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
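If you want to convert a model yourself, the 🤗 Optimum CLI provides an ONNX export command. A minimal sketch, where the model ID `BAAI/bge-m3` and the output directory are used purely as examples:

```bash
# Export a Hub model to ONNX (model ID and output directory are placeholders)
optimum-cli export onnx --model BAAI/bge-m3 bge-m3_onnx/
```

The generated `*.onnx` files can then be moved into an `onnx/` subfolder of your model repository to match the structure described above.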