gpt-oss-20b
"Try gpt-oss · Guides · Model card ·..."
⚡ Quick Commands
ollama run gpt-oss-20b huggingface-cli download openai/gpt-oss-20b pip install -U transformers Engineering Specs
⚡ Hardware
🧠 Lifecycle
🌐 Identity
Estimated VRAM: ~17.4GB (technical estimate for FP16/Q4 weights; does not include OS overhead or long-context batching; for technical reference only).
Hardware Compatibility
Multi-Tier Validation Matrix (GPU tiers, listed lowest to highest):
- RTX 3060 / 4060 Ti
- RTX 4070 Super
- RTX 4080 / Mac M3
- RTX 3090 / 4090
- RTX 6000 Ada
- A100 / H100
Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements.
README
Try gpt-oss · Guides · Model card · OpenAI blog
Welcome to the gpt-oss series, OpenAI’s open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- gpt-oss-120b: for production, general purpose, high-reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- gpt-oss-20b: for lower latency and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our harmony response format and should only be used with that format; they will not work correctly otherwise.
[!NOTE] This model card is dedicated to the smaller gpt-oss-20b model. Check out gpt-oss-120b for the larger model.
Highlights
- Permissive Apache 2.0 license: Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
- Configurable reasoning effort: Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
- Full chain-of-thought: Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. The chain-of-thought is not intended to be shown to end users.
- Fine-tunable: Fully customize models to your specific use case through parameter fine-tuning.
- Agentic capabilities: Use the models’ native capabilities for function calling, web browsing, Python code execution, and Structured Outputs.
- MXFP4 quantization: The models were post-trained with MXFP4 quantization of the MoE weights, making gpt-oss-120b run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and gpt-oss-20b run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
Inference examples
Transformers
You can use gpt-oss-120b and gpt-oss-20b with Transformers. If you use the Transformers chat template, it will automatically apply the harmony response format. If you use model.generate directly, you need to apply the harmony format manually using the chat template or use our openai-harmony package.
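If you use model.generate directly, the openai-harmony package can render the prompt for you. A minimal sketch, assuming the API shown in the openai-harmony documentation (the import names and the render_conversation_for_completion call come from there, not from this card):
from openai_harmony import (
    Conversation,
    HarmonyEncodingName,
    Message,
    Role,
    load_harmony_encoding,
)

# Load the harmony encoding that the gpt-oss models were trained on.
encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

# Build a conversation and render it into the token sequence
# expected as the prompt for model.generate.
convo = Conversation.from_messages([
    Message.from_role_and_content(Role.USER, "Explain MXFP4 quantization briefly."),
])
prompt_tokens = encoding.render_conversation_for_completion(convo, Role.ASSISTANT)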
To get started, install the necessary dependencies to set up your environment:
pip install -U transformers kernels torch
Once set up, you can run the model with the snippet below:
from transformers import pipeline

model_id = "openai/gpt-oss-20b"

# Build a text-generation pipeline; the chat template applies
# the harmony response format automatically.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",   # pick the best available dtype
    device_map="auto",    # spread the model across available devices
)
messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The last entry in generated_text is the assistant's reply.
print(outputs[0]["generated_text"][-1])
Alternatively, you can run the model via Transformers Serve to spin up an OpenAI-compatible web server:
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
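Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch using the openai Python package, assuming the default localhost:8000 address from the command above (the API key is an arbitrary placeholder, since the local server does not check it):
from openai import OpenAI

# Point the client at the local Transformers Serve endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "Explain quantum mechanics clearly and concisely."}],
    max_tokens=256,
)
print(response.choices[0].message.content)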
Learn more about how to use gpt-oss with Transformers.
vLLM
vLLM recommends using uv for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-20b
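Since the vLLM server is also OpenAI-compatible, the client sketch shown above for Transformers Serve works here unchanged; point base_url at the vLLM endpoint (port 8000 by default).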
Learn more about how to use gpt-oss with vLLM.
PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our reference implementations in the gpt-oss repository.
Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama. After installing it, run the following commands:
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
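Ollama also exposes an OpenAI-compatible endpoint (http://localhost:11434/v1 by default), so the client sketch from the Transformers Serve section applies here as well; note that the model name in Ollama is gpt-oss:20b.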
Learn more about how to use gpt-oss with Ollama.
LM Studio
If you are using LM Studio, you can use the following command to download the model:
# gpt-oss-20b
lms get openai/gpt-oss-20b
Check out our awesome list for a broader collection of gpt-oss resources and inference partners.
Download the model
You can download the model weights from the Hugging Face Hub using the Hugging Face CLI:
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
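If you prefer doing this from Python, here is a minimal sketch using huggingface_hub (a common alternative to the CLI, not a recipe from this card):
from huggingface_hub import snapshot_download

# Mirrors the CLI command above: fetch only the original/* files
# into a local gpt-oss-20b/ directory.
snapshot_download(
    repo_id="openai/gpt-oss-20b",
    allow_patterns=["original/*"],
    local_dir="gpt-oss-20b/",
)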
Reasoning levels
You can adjust the reasoning level to suit your task across three levels:
- Low: Fast responses for general dialogue.
- Medium: Balanced speed and detail.
- High: Deep and detailed analysis.
The reasoning level can be set in the system prompt, e.g., "Reasoning: high".
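As a concrete sketch, reusing the Transformers pipeline from the inference example above, one way to request high reasoning effort is a system message (the prompt text here is illustrative):
messages = [
    {"role": "system", "content": "Reasoning: high"},  # low | medium | high
    {"role": "user", "content": "Derive the quadratic formula step by step."},
]
outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])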
Tool use
The gpt-oss models are excellent for:
- Web browsing (using built-in browsing tools)
- Function calling with defined schemas (see the sketch after this list)
- Agentic operations like browser tasks
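To illustrate the function-calling item above, here is a hedged sketch against any of the OpenAI-compatible servers set up earlier; the get_weather tool and its schema are hypothetical examples, not part of this card:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical tool schema, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
# If the model chose to call the tool, the call and its
# JSON arguments appear here instead of plain text.
print(response.choices[0].message.tool_calls)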
Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model gpt-oss-20b can be fine-tuned on consumer hardware, whereas the larger gpt-oss-120b can be fine-tuned on a single H100 node.
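As a minimal sketch of parameter-efficient fine-tuning on consumer hardware, using the peft library (one common approach, not an officially prescribed recipe; the hyperparameters are illustrative):
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

# Attach small LoRA adapters instead of updating all 21B weights.
lora_config = LoraConfig(
    r=8,                          # illustrative adapter rank
    lora_alpha=16,
    target_modules="all-linear",  # adapt every linear projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train with your usual Trainer or SFT loop.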
Citation
@misc{openai2025gptoss120bgptoss20bmodel,
title={gpt-oss-120b & gpt-oss-20b Model Card},
author={OpenAI},
year={2025},
eprint={2508.10925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.10925},
}
🛡️ Model Transparency Report
Verified data manifest for traceability and transparency.
🆔 Identity & Source
- id: hf-model--openai--gpt-oss-20b
- author: openai
- tags: transformers, safetensors, gpt_oss, text-generation, vllm, conversational, arxiv:2508.10925, license:apache-2.0, endpoints_compatible, 8-bit, mxfp4, deploy:azure, region:us
⚙️ Technical Specs
- architecture: GptOssForCausalLM
- params (billions): 21.51
- context length: 4,096
- VRAM (GB): 17.4 (estimated)
- VRAM formula: VRAM ≈ (params × 0.75) + 0.8GB (KV) + 0.5GB (OS)
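Plugging the listed parameter count into that formula: 21.51 × 0.75 + 0.8 + 0.5 ≈ 17.4GB, matching the estimate above.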
📊 Engagement & Metrics
- likes: 4,036
- downloads: 8,180,102