🧠 QVQ-72B-Preview

by qwen • Model ID: hf-model--qwen--qvq-72b-preview
FNI 6.6 • Top 85%

73.41B Params • 4K Context • 383 Downloads • H100+ ~58GB Est. VRAM

⚡ Quick Commands

🦙 Ollama Run
ollama run qvq-72b-preview
🤗 HF Download
huggingface-cli download Qwen/QVQ-72B-Preview
📦 Install Lib
pip install -U transformers
📊 Engineering Specs

⚡ Hardware

Parameters: 73.41B
Architecture: Qwen2VLForConditionalGeneration
Context Length: 4K
Model Size: 136.7GB

🧠 Lifecycle

Library: -
Precision: float16
Tokenizer: -

🌐 Identity

Source: HuggingFace
License: Open Access
💾 Est. VRAM Benchmark

~57.6GB

* Technical estimate for FP16/Q4 weights. Does not include OS overhead or long-context batching. For technical reference only.
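For reference, this figure follows the simple heuristic disclosed in the Transparency Report at the bottom of this page (VRAM ≈ params × 0.75 + 2GB KV cache + 0.5GB OS reserve). A minimal sketch of that arithmetic, assuming the report's constants; the function name is illustrative, not part of any official tooling:

def estimate_vram_gb(params_billions: float) -> float:
    """Heuristic from the Transparency Report below:
    ~0.75GB per billion parameters, plus ~2GB KV cache
    and ~0.5GB OS/runtime reserve."""
    return params_billions * 0.75 + 2.0 + 0.5

print(f"{estimate_vram_gb(73.41):.1f} GB")  # -> 57.6 GB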

๐Ÿ•ธ๏ธ Neural Mesh Hub

Interconnecting Research, Data & Ecosystem

โšก

๐Ÿ”— Core Ecosystem

๐Ÿ”ฌ

๐Ÿ”ฌ Research & Data

๐Ÿ“ˆ Interest Trend

--

* Real-time activity index across HuggingFace, GitHub and Research citations.

No similar models found.

๐Ÿ”ฌTechnical Deep Dive

Full Specifications [+]
---

๐Ÿš€ What's Next?

โšก Quick Commands

๐Ÿฆ™ Ollama Run
ollama run qvq-72b-preview
๐Ÿค— HF Download
huggingface-cli download qwen/qvq-72b-preview
๐Ÿ“ฆ Install Lib
pip install -U transformers
🖥️ Hardware Compatibility

Estimated fit for 4-bit (Q4) weights (~37GB for 73.41B parameters):

| Tier | Example Hardware | VRAM | Q4 Fit (estimated) |
| --- | --- | --- | --- |
| 🎮 Entry | RTX 3060 / 4060 Ti | 8GB | ❌ Insufficient |
| 🎮 Mid | RTX 4070 Super | 12GB | ❌ Insufficient |
| 💻 High | RTX 4080 / Mac M3 | 16GB | ❌ Insufficient |
| 🚀 Pro | RTX 3090 / 4090 | 24GB | ⚠ Requires CPU offload |
| 🏗️ Workstation | RTX 6000 Ada | 48GB | ✅ Fits |
| 🏭 Datacenter | A100 / H100 | 80GB | ✅ Fits with headroom |

ℹ️ Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements; the FP16 checkpoint alone is 136.7GB and needs a multi-GPU setup.
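A back-of-the-envelope check on the weight footprint alone supports these tiers (a sketch; it ignores KV cache, activations, and runtime overhead, which add several GB more):

# Weight-only memory footprint at common precisions.
PARAMS_B = 73.41  # parameters, in billions

for label, bytes_per_param in [("FP16", 2.0), ("INT8", 1.0), ("Q4", 0.5)]:
    gb = PARAMS_B * bytes_per_param  # billions of params × bytes/param ≈ GB
    print(f"{label}: ~{gb:.0f} GB of weights")

# FP16: ~147 GB -> multi-GPU territory (the 136.7GB "Model Size" above is this same figure in GiB)
# INT8: ~73 GB  -> fits a single 80GB A100/H100
# Q4:   ~37 GB  -> fits 48GB workstation cards, exceeds 24GB consumer cards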

README

QVQ-72B-Preview

Introduction

QVQ-72B-Preview is an experimental research model developed by the Qwen team, focusing on enhancing visual reasoning capabilities.

Performance

| Benchmark | QVQ-72B-Preview | o1-2024-12-17 | gpt-4o-2024-05-13 | Claude3.5 Sonnet-20241022 | Qwen2VL-72B |
| --- | --- | --- | --- | --- | --- |
| MMMU (val) | 70.3 | 77.3 | 69.1 | 70.4 | 64.5 |
| MathVista (mini) | 71.4 | 71.0 | 63.8 | 65.3 | 70.5 |
| MathVision (full) | 35.9 | – | 30.4 | 35.6 | 25.9 |
| OlympiadBench | 20.4 | – | 25.9 | – | 11.2 |

QVQ-72B-Preview achieves strong results across these benchmarks. It scored 70.3% on the Massive Multi-discipline Multimodal Understanding (MMMU) benchmark, showcasing QVQ's powerful ability in multidisciplinary understanding and reasoning. The significant improvement on MathVision highlights the model's progress in mathematical reasoning tasks, and its OlympiadBench score demonstrates an enhanced ability to tackle challenging problems.

But It's Not All Perfect: Acknowledging the Limitations

While QVQ-72B-Preview exhibits promising performance that surpasses expectations, it's important to acknowledge several limitations:

  1. Language Mixing and Code-Switching: The model might occasionally mix different languages or unexpectedly switch between them, potentially affecting the clarity of its responses.
  2. Recursive Reasoning Loops: There's a risk of the model getting caught in recursive reasoning loops, leading to lengthy responses that may not even arrive at a final answer.
  3. Safety and Ethical Considerations: Robust safety measures are needed to ensure reliable and safe performance. Users should exercise caution when deploying this model.
  4. Performance and Benchmark Limitations: Despite the improvements in visual reasoning, QVQ doesn't entirely replace the capabilities of Qwen2-VL-72B. During multi-step visual reasoning, the model might gradually lose focus on the image content, leading to hallucinations. Moreover, QVQ doesn't show significant improvement over Qwen2-VL-72B in basic recognition tasks like identifying people, animals, or plants.

Note: Currently, the model only supports single-round dialogues and image outputs. It does not support video inputs.

Quickstart

We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:

pip install qwen-vl-utils
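Besides public URLs, the image field also accepts local file paths and base64 data URIs (a sketch following the toolkit's Qwen2-VL conventions; the URL, path, and encoded bytes below are placeholders):

# Alternate image input forms accepted by qwen_vl_utils (all placeholders):
url_image = {"type": "image", "image": "https://example.com/demo.png"}
local_image = {"type": "image", "image": "file:///path/to/demo.png"}
base64_image = {"type": "image", "image": "data:image;base64,/9j/..."}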

The following code snippet shows how to use the chat model with transformers and qwen_vl_utils:

from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/QVQ-72B-Preview", torch_dtype="auto", device_map="auto"
)

# default processor
processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview")

# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview", min_pixels=min_pixels, max_pixels=max_pixels)

messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."}
        ],
    },
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/demo.png",
            },
            {"type": "text", "text": "What value should be filled in the blank space?"},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=8192)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
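Note that max_new_tokens is set generously (8192) because the model emits a long step-by-step reasoning trace before its final answer, and the trimming step slices the prompt tokens off each generated sequence so batch_decode returns only the newly generated text.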

Citation

If you find our work helpful, feel free to cite it.

@misc{qvq-72b-preview,
    title = {QVQ: To See the World with Wisdom},
    url = {https://qwenlm.github.io/blog/qvq-72b-preview/},
    author = {Qwen Team},
    month = {December},
    year = {2024}
}

@article{Qwen2VL,
  title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
  author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
  journal={arXiv preprint arXiv:2409.12191},
  year={2024}
}

๐Ÿ“ Limitations & Considerations

  • โ€ข Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • โ€ข VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • โ€ข FNI scores are relative rankings and may change as new models are added.
  • โš  License Unknown: Verify licensing terms before commercial use.
  • โ€ข Source: Unknown
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__qwen__qvq_72b_preview,
  author = {qwen},
  title = {QVQ-72B-Preview},
  year = {2026},
  howpublished = {\url{https://huggingface.co/qwen/qvq-72b-preview}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
qwen. (2026). QVQ-72B-Preview [Model]. Free2AITools. https://huggingface.co/qwen/qvq-72b-preview

๐Ÿ›ก๏ธ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

๐Ÿ†” Identity & Source

id
hf-model--qwen--qvq-72b-preview
author
qwen
tags
transformerssafetensorsqwen2_vlimage-to-textchatimage-text-to-textconversationalenarxiv:2409.12191base_model:qwen/qwen2-vl-72bbase_model:finetune:qwen/qwen2-vl-72blicense:othertext-generation-inferenceendpoints_compatibledeploy:azureregion:us

โš™๏ธ Technical Specs

architecture
Qwen2VLForConditionalGeneration
params billions
73.41
context length
4,096
vram gb
57.6
vram is estimated
true
vram formula
VRAM โ‰ˆ (params * 0.75) + 2GB (KV) + 0.5GB (OS)
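Worked example with this model's parameter count: 73.41 × 0.75 + 2 + 0.5 ≈ 57.6GB, matching the estimate shown above.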

📊 Engagement & Metrics

likes: 609
downloads: 383
