Mixtral-8x22B-v0.1
⚡ Quick Commands

huggingface-cli download mistral-community/mixtral-8x22b-v0.1
pip install -U transformers

Engineering Specs
Est. VRAM Benchmark: ~108 GB*

* Technical estimate for FP16/Q4 weights. Does not include OS overhead or long-context batching. For technical reference only.
Hardware Compatibility
Multi-Tier Validation Matrix

- RTX 3060 / 4060 Ti
- RTX 4070 Super
- RTX 4080 / Mac M3
- RTX 3090 / 4090
- RTX 6000 Ada
- A100 / H100
Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements.
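To put the Pro Tip in numbers, a back-of-the-envelope sketch (assumptions: ~2 bytes per parameter at FP16, ~0.5 bytes at Q4, and the 140.62B parameter count reported in the transparency report below; KV cache and runtime overhead are excluded):

```python
# Rough weight-only memory estimate; ignores KV cache, activations, and overhead.
PARAMS_B = 140.62  # total parameters, in billions (from the spec table below)

BYTES_PER_PARAM = {"FP16": 2.0, "Q4": 0.5}  # Q4 size is approximate

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS_B * nbytes  # billions of params x bytes/param ~= gigabytes
    print(f"{precision}: ~{gb:.0f} GB of weights")
# FP16: ~281 GB of weights
# Q4:   ~70 GB of weights
```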
README
Mixtral-8x22B
[!WARNING] This model checkpoint is provided as-is and might not be up-to-date. Please use the corresponding version from the mistralai organization: https://huggingface.co/mistralai
[!TIP] MistralAI has also uploaded the weights to their own organization, at mistralai/Mixtral-8x22B-v0.1 and mistralai/Mixtral-8x22B-Instruct-v0.1.
[!TIP] Kudos to @v2ray for converting the checkpoints and uploading them in a `transformers`-compatible format. Go give them a follow!
Converted to HuggingFace Transformers format using the script here.
The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
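To make "Sparse Mixture of Experts" concrete, here is a toy sketch of top-2 expert routing, the core idea behind the "8x22B" naming: each token activates only 2 of 8 expert feed-forward networks, so per-token compute is much smaller than the total parameter count suggests. This is a simplified illustration, not Mixtral's actual implementation:

```python
import torch
import torch.nn.functional as F

# Toy top-2 mixture-of-experts layer: 8 expert FFNs, 2 active per token.
n_experts, top_k, d_model = 8, 2, 16
experts = [torch.nn.Linear(d_model, d_model) for _ in range(n_experts)]
router = torch.nn.Linear(d_model, n_experts)  # scores each expert per token

x = torch.randn(4, d_model)                       # 4 tokens
scores, chosen = router(x).topk(top_k, dim=-1)    # pick the 2 best experts
weights = F.softmax(scores, dim=-1)               # normalize their scores

out = torch.zeros_like(x)
for t in range(x.size(0)):                        # combine the chosen experts
    for w, e in zip(weights[t], chosen[t]):
        out[t] += w * experts[int(e)](x[t])
```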
Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads in full precision by default

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, `transformers` loads the model in full precision. To reduce the memory needed to run it, you can use the optimizations available in the HF ecosystem:
In half-precision
Note: float16 precision only works on GPU devices.
```diff
+ import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "mistral-community/Mixtral-8x22B-v0.1"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

  text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

  outputs = model.generate(**inputs, max_new_tokens=20)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
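Even in half precision the weights come to roughly 281 GB (see the arithmetic sketch earlier), so `.to(0)` assumes an unusually large single device. A minimal multi-GPU sketch, assuming `accelerate` is installed, that shards the weights across whatever GPUs are visible instead of pinning everything to device 0:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" lets accelerate place layers across all visible GPUs
# (spilling to CPU RAM if necessary) instead of a single device.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Hello my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```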
Lower precision (8-bit & 4-bit) using `bitsandbytes`
```diff
+ import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "mistral-community/Mixtral-8x22B-v0.1"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

  text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

  outputs = model.generate(**inputs, max_new_tokens=20)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
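The heading above also mentions 8-bit, though the diff only shows 4-bit. Assuming the same `bitsandbytes` path, the 8-bit variant swaps a single flag:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit quantizes weights to ~1 byte per parameter via bitsandbytes:
# about twice the memory of 4-bit, but typically with less quality loss.
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
```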
Load the model with Flash Attention 2
```diff
+ import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "mistral-community/Mixtral-8x22B-v0.1"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)

  text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

  outputs = model.generate(**inputs, max_new_tokens=20)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
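On recent `transformers` releases, `use_flash_attention_2` is deprecated in favor of the `attn_implementation` argument; a sketch of the newer form (Flash Attention 2 also requires half-precision weights, hence the explicit dtype):

```python
import torch
from transformers import AutoModelForCausalLM

model_id = "mistral-community/Mixtral-8x22B-v0.1"

# attn_implementation="flash_attention_2" replaces the deprecated flag;
# FA2 kernels only support fp16/bf16, so a dtype must be set explicitly.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
)
```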
Notice
Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall.
[Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 74.46 |
| AI2 Reasoning Challenge (25-Shot) | 70.48 |
| HellaSwag (10-Shot) | 88.73 |
| MMLU (5-Shot) | 77.81 |
| TruthfulQA (0-shot) | 51.08 |
| Winogrande (5-shot) | 84.53 |
| GSM8k (5-shot) | 74.15 |
Limitations & Considerations
- Benchmark scores may vary based on evaluation methodology and hardware configuration.
- VRAM requirements are estimates; actual usage depends on quantization and batch size.
- FNI scores are relative rankings and may change as new models are added.
- ⚠ License: verify licensing terms before commercial use (the repository's own tags list license:apache-2.0).
- Source: Unknown
Cite this model

Academic & Research Attribution

```bibtex
@misc{hf_model__mistral_community__mixtral_8x22b_v0.1,
  author       = {mistral-community},
  title        = {Mixtral-8x22B-v0.1},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/mistral-community/mixtral-8x22b-v0.1}},
  note         = {Accessed via Free2AITools Knowledge Fortress}
}
```

AI Summary: Based on Hugging Face metadata. Not a recommendation.
Model Transparency Report

Verified data manifest for traceability and transparency.

Identity & Source
- id: hf-model--mistral-community--mixtral-8x22b-v0.1
- author: mistral-community
- tags: transformers, safetensors, mixtral, text-generation, moe, fr, it, de, es, en, license:apache-2.0, model-index, text-generation-inference, endpoints_compatible, deploy:azure, region:us
Technical Specs
- architecture: MixtralForCausalLM
- params (billions): 140.62
- context length: 4,096
- vram (GB): 108
- vram is estimated: true
- vram formula: VRAM ≈ (params × 0.75) + 2 GB (KV) + 0.5 GB (OS); see the worked check below
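A quick arithmetic check of the formula above, using the parameter count from this table (the 0.75 GB-per-billion-parameters factor is this page's blended FP16/Q4 heuristic, not a measurement):

```python
# Reproduce the page's VRAM heuristic: params * 0.75 + 2 GB (KV) + 0.5 GB (OS).
params_b = 140.62                     # parameters, in billions (from the table)
vram_gb = params_b * 0.75 + 2 + 0.5   # weights + KV cache + OS overhead, in GB
print(f"Estimated VRAM: ~{vram_gb:.0f} GB")  # -> ~108 GB, matching the table
```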
Engagement & Metrics
- likes: 672
- downloads: 393