
qwq-32b

by qwen â€ĸ Model ID: hf-model--qwen--qwq-32b
FNI 8
Top 85%

"QwQ is the reasoning model of the Qwen series. Compared ......"

🔗 View Source
Audited 8 FNI Score
32.76B Params
131K Context
Hot 56.1K Downloads
H100+ ~28GB Est. VRAM

⚡ Quick Commands

đŸĻ™ Ollama Run
ollama run qwq:32b
🤗 HF Download
huggingface-cli download Qwen/QwQ-32B
đŸ“Ļ Install Lib
pip install -U transformers
📊 Engineering Specs

⚡ Hardware

Parameters
32.76B
Architecture
Qwen2ForCausalLM
Context Length
131K
Model Size
61.0GB

🧠 Lifecycle

Library
-
Precision
float16
Tokenizer
-

🌐 Identity

Source
HuggingFace
License
Apache 2.0
💾 Est. VRAM Benchmark

~27.1GB


* Technical estimation for FP16/Q4 weights. Does not include OS overhead or long-context batching. For Technical Reference Only.

đŸ•¸ī¸ Neural Mesh Hub

Interconnecting Research, Data & Ecosystem

📈 Interest Trend

--

* Real-time activity index across HuggingFace, GitHub and Research citations.

No similar models found.

đŸ”Ŧ Technical Deep Dive

đŸ–Ĩī¸ Hardware Compatibility

Multi-Tier Validation Matrix

🎮 Insufficient

RTX 3060 / 4060 Ti

Entry 8GB VRAM
🎮 Insufficient

RTX 4070 Super

Mid 12GB VRAM
đŸ’ģ Partial (CPU offload)

RTX 4080 / Mac M3

High 16GB VRAM
🚀 Compatible (Q4)

RTX 3090 / 4090

Pro 24GB VRAM
đŸ—ī¸ Compatible (Q4/Q8)

RTX 6000 Ada

Workstation 48GB VRAM
🏭 Compatible (FP16)

A100 / H100

Datacenter 80GB VRAM
â„šī¸ Pro Tip: Compatibility is estimated for 4-bit quantization (Q4); for a 32B model the Q4 weights alone are roughly 18-20GB, so 24GB is a practical minimum. High-precision (FP16) weights or ultra-long context windows will significantly increase VRAM requirements.

README


QwQ-32B


Introduction

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.

This repo contains the QwQ 32B model, which has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
  • Number of Parameters: 32.5B
  • Number of Paramaters (Non-Embedding): 31.0B
  • Number of Layers: 64
  • Number of Attention Heads (GQA): 40 for Q and 8 for KV
  • Context Length: Full 131,072 tokens
    • For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in this section.

Note: For the best experience, please review the usage guidelines before deploying QwQ models.

You can try our demo or access QwQ models via QwenChat.

For more details, please refer to our blog, GitHub, and Documentation.

Requirements

QwQ is based on Qwen2.5, whose code has been merged into the latest Hugging Face transformers. We advise you to use the latest version of transformers.

With transformers<4.37.0, you will encounter the following error:

KeyError: 'qwen2'
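As a quick sanity check before loading the model, the installed version can be gated in code. This is a minimal pure-stdlib sketch under the stated >= 4.37.0 requirement; parse_version and transformers_supports_qwen2 are illustrative names, not transformers APIs.

```python
# Minimal version gate: the "qwen2" architecture requires transformers >= 4.37.0.
from importlib.metadata import PackageNotFoundError, version

MIN_TRANSFORMERS = (4, 37, 0)

def parse_version(ver: str) -> tuple:
    """Parse a dotted release string like '4.37.0' into a tuple of ints."""
    return tuple(int(part) for part in ver.split(".")[:3] if part.isdigit())

def transformers_supports_qwen2() -> bool:
    """Return True if the installed transformers can load the 'qwen2' model type."""
    try:
        installed = version("transformers")
    except PackageNotFoundError:
        return False  # transformers is not installed at all
    return parse_version(installed) >= MIN_TRANSFORMERS

print(parse_version("4.36.2") >= MIN_TRANSFORMERS)  # False: would raise KeyError: 'qwen2'
print(parse_version("4.40.1") >= MIN_TRANSFORMERS)  # True
```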

Quickstart

The following code snippet uses apply_chat_template to show you how to load the tokenizer and model and how to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word \"strawberry\""
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

Usage Guidelines

To achieve optimal performance, we recommend the following settings:

  1. Enforce Thoughtful Output: Ensure the model starts its response with "<think>\n" to prevent it from generating empty thinking content, which can degrade output quality. If you use apply_chat_template with add_generation_prompt=True, this is already handled automatically, but it means the response may lack an opening <think> tag; this is normal behavior.

  2. Sampling Parameters:

    • Use Temperature=0.6, TopP=0.95, MinP=0 instead of Greedy decoding to avoid endless repetitions.
    • Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
    • For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance.
  3. No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This feature is already implemented in apply_chat_template.

  4. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.

    • Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
    • Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g.,\"answer\": \"C\"." in the prompt.
  5. Handle Long Inputs: For inputs exceeding 8,192 tokens, enable YaRN to improve the model's ability to capture long-sequence information effectively.

    For supported frameworks, you could add the following to config.json to enable YaRN:

    {
        ...,
        "rope_scaling": {
            "factor": 4.0,
            "original_max_position_embeddings": 32768,
            "type": "yarn"
        }
    }

    For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.
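Guidelines 3 and 4 above can be sketched with two small helpers: one strips the <think>...</think> block from a finished turn before appending it to conversation history, and one pulls the final \boxed{} answer out of a math response for benchmarking. This is a minimal sketch; strip_thinking and extract_boxed are illustrative names, not transformers APIs.

```python
import re
from typing import Optional

def strip_thinking(turn: str) -> str:
    """Remove <think>...</think> blocks so multi-turn history keeps only the final answer."""
    cleaned = re.sub(r"<think>.*?</think>", "", turn, flags=re.DOTALL)
    # QwQ may omit the opening <think>; also drop everything up to a stray closing tag.
    cleaned = re.sub(r"^.*?</think>", "", cleaned, flags=re.DOTALL)
    return cleaned.strip()

def extract_boxed(response: str) -> Optional[str]:
    """Return the contents of the last \\boxed{...} in a response, or None if absent."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return matches[-1] if matches else None

turn = "<think>\nCount the letters one by one...\n</think>\nThere are 3 r's."
print(strip_thinking(turn))  # There are 3 r's.

math = "So 2 + 2 = 4, and the final answer is \\boxed{4}."
print(extract_boxed(math))   # 4
```

Note that apply_chat_template already performs this history stripping; a helper like this is only needed when managing message lists manually.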

Evaluation & Performance

Detailed evaluation results are reported in this 📑 blog.

For requirements on GPU memory and the respective throughput, see results here.

Citation

If you find our work helpful, feel free to cite us.

@misc{qwq32b,
    title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
    url = {https://qwenlm.github.io/blog/qwq-32b/},
    author = {Qwen Team},
    month = {March},
    year = {2025}
}

@article{qwen2.5,
      title={Qwen2.5 Technical Report}, 
      author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
      journal={arXiv preprint arXiv:2412.15115},
      year={2024}
}

📝 Limitations & Considerations

  • â€ĸ Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • â€ĸ VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • â€ĸ FNI scores are relative rankings and may change as new models are added.
  • ⚠ License Unknown: Verify licensing terms before commercial use.
  • â€ĸ Source: Unknown
📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__qwen__qwq_32b,
  author = {qwen},
  title = {QwQ-32B},
  year = {2026},
  howpublished = {\url{https://huggingface.co/qwen/qwq-32b}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
qwen. (2026). QwQ-32B [Model]. Free2AITools. https://huggingface.co/qwen/qwq-32b
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology â€ĸ 📚 Knowledge Base â€ĸ â„šī¸ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id
hf-model--qwen--qwq-32b
author
qwen
tags
transformers, safetensors, qwen2, text-generation, chat, conversational, en, arxiv:2309.00071, arxiv:2412.15115, base_model:qwen/qwen2.5-32b, base_model:finetune:qwen/qwen2.5-32b, license:apache-2.0, text-generation-inference, endpoints_compatible, deploy:azure, region:us

âš™ī¸ Technical Specs

architecture
Qwen2ForCausalLM
params billions
32.76
context length
131,072
vram gb
27.1
vram is estimated
true
vram formula
VRAM (GB) ≈ params_B × 0.75 + 2 (KV cache) + 0.5 (OS/runtime)
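The stated formula can be checked with a few lines of arithmetic, reproducing the 27.1GB figure above; estimate_vram_gb is an illustrative helper, and the constants (0.75 GB per billion parameters, fixed KV-cache and OS overheads) come straight from the formula rather than from any published tool.

```python
def estimate_vram_gb(params_billions: float, kv_gb: float = 2.0, os_gb: float = 0.5) -> float:
    """Page heuristic: ~0.75 GB per billion params, plus KV-cache and OS overheads."""
    return params_billions * 0.75 + kv_gb + os_gb

print(round(estimate_vram_gb(32.76), 1))  # 27.1
```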

📊 Engagement & Metrics

likes
2,871
downloads
56,109

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)