🧠 mxbai-embed-large-v1

by mixedbread-ai • Model ID: hf-model--mixedbread-ai--mxbai-embed-large-v1

FNI 18.3 (Audited, Top 80%) • 0.34B Params • 4K Context • 2.2M Downloads • ~1.6GB Est. VRAM (fits 8GB GPUs)

Quick Commands

🦙 Ollama Pull
ollama pull mxbai-embed-large
🤗 HF Download
huggingface-cli download mixedbread-ai/mxbai-embed-large-v1
📦 Install Lib
pip install -U transformers
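Once the library is installed, this model's output vectors are typically compared with cosine similarity for retrieval or clustering. A minimal, dependency-free sketch of that comparison step (the 3-dim toy vectors are stand-ins for real embeddings; actual encoding would go through e.g. `sentence-transformers`, which appears in the model's tags):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for real model embeddings.
query    = [0.2, 0.1, 0.7]
doc_same = [0.2, 0.1, 0.7]   # same direction as the query
doc_orth = [0.7, 0.0, -0.2]  # orthogonal to the query

print(round(cosine_sim(query, doc_same), 4))  # 1.0
print(round(cosine_sim(query, doc_orth), 4))  # 0.0
```

Scores near 1.0 mean near-duplicate meaning; near 0.0 means unrelated content.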
📊 Engineering Specs

Hardware

Parameters
0.34B
Architecture
BertModel
Context Length
4K
Model Size
5.0GB

🧠 Lifecycle

Library
-
Precision
float16
Tokenizer
-

🌐 Identity

Source
HuggingFace
License
Apache 2.0
💾 Est. VRAM Benchmark

~1.6GB


* Technical estimate for FP16/Q4 weights plus a small KV-cache and runtime allowance; long-context batching is not included. For technical reference only.

🕸️ Neural Mesh Hub

Interconnecting Research, Data & Ecosystem


No similar models found.

🔬 Technical Deep Dive


🖥️ Hardware Compatibility

Multi-Tier Validation Matrix

🎮 RTX 3060 / 4060 Ti (Entry, 8GB VRAM): Compatible
🎮 RTX 4070 Super (Mid, 12GB VRAM): Compatible
💻 RTX 4080 / Mac M3 (High, 16GB VRAM): Compatible
🚀 RTX 3090 / 4090 (Pro, 24GB VRAM): Compatible
🏗️ RTX 6000 Ada (Workstation, 48GB VRAM): Compatible
🏭 A100 / H100 (Datacenter, 80GB VRAM): Compatible
ℹ️ Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements.
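The FP16-versus-Q4 gap in the tip above is plain bytes-per-parameter arithmetic. A rough, weight-only sketch (activations, KV cache, and framework overhead excluded; the 0.34B figure comes from the spec table, and 0.5 bytes/param is the usual Q4 approximation):

```python
PARAMS = 0.34e9  # parameter count from the spec table above

def weight_gb(bytes_per_param):
    """Weight-only memory footprint in decimal GB."""
    return PARAMS * bytes_per_param / 1e9

print(f"FP16 (2.0 B/param): {weight_gb(2.0):.2f} GB")  # 0.68 GB
print(f"Q4   (0.5 B/param): {weight_gb(0.5):.2f} GB")  # 0.17 GB
```

For a model this small, even FP16 weights fit comfortably on every tier in the matrix; quantization matters more for multi-billion-parameter models.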


📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • License: Apache 2.0 (per model tags); verify licensing terms before commercial use.
  • Source: Hugging Face.
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__mixedbread_ai__mxbai_embed_large_v1,
  author = {mixedbread-ai},
  title = {mxbai-embed-large-v1},
  year = {2026},
  howpublished = {\url{https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
mixedbread-ai. (2026). mxbai-embed-large-v1 [Model]. Free2AITools. https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology • 📚 Knowledge Base • ℹ️ Verify with original source

🛡️ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id
hf-model--mixedbread-ai--mxbai-embed-large-v1
author
mixedbread-ai
tags
sentence-transformers, onnx, safetensors, openvino, gguf, bert, feature-extraction, mteb, transformers.js, transformers, en, arxiv:2309.12871, license:apache-2.0, model-index, text-embeddings-inference, endpoints_compatible, region:us

⚙️ Technical Specs

architecture
BertModel
params billions
0.34
context length
4,096
vram gb
1.6
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
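Plugging the spec-table parameter count into the disclosed formula reproduces the reported estimate; a quick check:

```python
def estimate_vram_gb(params_billions):
    """VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS), per the manifest above."""
    return params_billions * 0.75 + 0.8 + 0.5

print(f"~{estimate_vram_gb(0.34):.1f} GB")  # matches the 1.6 GB figure above
```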

📊 Engagement & Metrics

likes
745
downloads
2,207,470

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)