🧠 Model

Mixtral-8x7B-v0.1

by mistralai

Model card metadata:

  • library_name: vllm
  • license: apache-2.0
  • language: fr, it, de, es, en
  • tags: moe, mistral-common
  • extra_gated_description: If you want to learn more …

πŸ• Updated 12/19/2025

About

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested. For full details of this model, please read our release blog post.
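To make "Sparse Mixture of Experts" concrete, the toy layer below routes each token to its top 2 of 8 expert feed-forward networks, the routing pattern Mixtral is documented to use. It is an illustrative sketch only: the dimensions, router, and expert definitions are simplified stand-ins, not Mixtral's actual implementation.

```python
# Toy top-2-of-8 mixture-of-experts feed-forward layer (illustration only).
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # per-expert routing logits
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                              # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)              # normalise over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                 # only top-k experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 64)        # 4 tokens, d_model = 64
print(ToyMoE()(tokens).shape)      # torch.Size([4, 64])
```

Because only two experts process each token, the per-token compute is far lower than the total parameter count suggests, which is the core appeal of the sparse design.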

πŸ“ Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size (a rough weights-only estimate is sketched after this list).
  • FNI scores are relative rankings and may change as new models are added.
  • Data source: Hugging Face model page (https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), fetched 2025-12-19, adapter version 3.2.0.
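As a rough illustration of the VRAM caveat above, the sketch below estimates memory for the weights alone at a few common precisions. The 46.7B total parameter count is the published figure for Mixtral-8x7B; treating VRAM as parameters × bytes-per-parameter is a simplifying assumption that ignores the KV cache, activations, and runtime overhead.

```python
# Back-of-envelope VRAM estimate for the weights alone (no KV cache, no
# activations, no framework overhead). 46.7e9 is Mixtral-8x7B's total
# parameter count; the precision options are common deployment choices.
PARAMS = 46.7e9

BYTES_PER_PARAM = {
    "float16 / bfloat16": 2.0,
    "int8 quantization": 1.0,
    "4-bit quantization": 0.5,
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision:>20}: ~{gib:.0f} GiB for the weights")
```

In 16-bit precision this lands near 90 GiB before any per-request memory, which is why multi-GPU serving or quantization is the norm for this model.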

📚 Related Resources

📄 Related Papers

No related papers linked yet. Check the model's official documentation for research papers.

📊 Training Datasets

Training data information not available. Refer to the original model card for details.
