QVQ-72B-Preview
Quick Commands
- ollama run qvq-72b-preview
- huggingface-cli download qwen/qvq-72b-preview
- pip install -U transformers
Engineering Specs
- Est. VRAM: ~57.6 GB*
* Technical estimation for FP16/Q4 weights. Does not include OS overhead or long-context batching. For technical reference only.
Hardware Compatibility
Multi-Tier Validation Matrix
- RTX 3060 / 4060 Ti
- RTX 4070 Super
- RTX 4080 / Mac M3
- RTX 3090 / 4090
- RTX 6000 Ada
- A100 / H100

Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements; see the loading sketch below.
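As a rough, unofficial sketch of what a Q4-style load could look like with transformers and bitsandbytes (assuming bitsandbytes is installed and sufficient GPU memory is available; this is not part of the official QVQ documentation):

# Hypothetical 4-bit loading sketch; the quantization settings here are illustrative assumptions.
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4 bits at load time
    bnb_4bit_compute_dtype=torch.float16,  # run compute in FP16
)

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/QVQ-72B-Preview",
    quantization_config=bnb_config,
    device_map="auto",                     # spread layers across available devices
)
processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview")

Even at 4-bit precision, a 72B model needs on the order of 40 GB for weights alone, so single consumer GPUs from the list above may still require CPU offloading.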
README
QVQ-72B-Preview
Introduction
QVQ-72B-Preview is an experimental research model developed by the Qwen team, focusing on enhancing visual reasoning capabilities.
Performance
| Benchmark | QVQ-72B-Preview | o1-2024-12-17 | gpt-4o-2024-05-13 | Claude3.5 Sonnet-20241022 | Qwen2VL-72B |
|---|---|---|---|---|---|
| MMMU(val) | 70.3 | 77.3 | 69.1 | 70.4 | 64.5 |
| MathVista(mini) | 71.4 | 71.0 | 63.8 | 65.3 | 70.5 |
| MathVision(full) | 35.9 | – | 30.4 | 35.6 | 25.9 |
| OlympiadBench | 20.4 | – | 25.9 | – | 11.2 |
QVQ-72B-Preview achieves strong results across a range of benchmarks. It scored 70.3% on the Multimodal Massive Multi-task Understanding (MMMU) benchmark, showcasing its ability in multidisciplinary understanding and reasoning. The significant improvement on MathVision highlights the model's progress in mathematical reasoning tasks, and its OlympiadBench result demonstrates an enhanced ability to tackle challenging problems.
But It's Not All Perfect: Acknowledging the Limitations
While QVQ-72B-Preview exhibits promising performance that surpasses expectations, it's important to acknowledge several limitations:
- Language Mixing and Code-Switching: The model might occasionally mix different languages or unexpectedly switch between them, potentially affecting the clarity of its responses.
- Recursive Reasoning Loops: There's a risk of the model getting caught in recursive reasoning loops, leading to lengthy responses that may not even arrive at a final answer.
- Safety and Ethical Considerations: Robust safety measures are needed to ensure reliable and safe performance. Users should exercise caution when deploying this model.
- Performance and Benchmark Limitations: Despite the improvements in visual reasoning, QVQ doesn't entirely replace the capabilities of Qwen2-VL-72B. During multi-step visual reasoning, the model might gradually lose focus on the image content, leading to hallucinations. Moreover, QVQ doesn't show significant improvement over Qwen2-VL-72B in basic recognition tasks like identifying people, animals, or plants.
Note: Currently, the model only supports single-round dialogues with image inputs. It does not support video inputs.
Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
pip install qwen-vl-utils
Here we show a code snippet to show you how to use the chat model with transformers and qwen_vl_utils:
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/QVQ-72B-Preview", torch_dtype="auto", device_map="auto"
)
# default processor
processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/QVQ-72B-Preview", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."}
],
},
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/demo.png",
},
{"type": "text", "text": "What value should be filled in the blank space?"},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=8192)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
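As noted above, qwen-vl-utils also accepts local files and base64-encoded images in the same message format. The variants below are a brief sketch; the file path and base64 payload are placeholders, not working values.

# Alternative image sources for the "image" field (placeholders only):
local_file_message = {
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/your/image.png"},  # local file path
        {"type": "text", "text": "Describe this image."},
    ],
}

base64_message = {
    "role": "user",
    "content": [
        {"type": "image", "image": "data:image;base64,<BASE64_STRING>"},  # base64-encoded image
        {"type": "text", "text": "Describe this image."},
    ],
}

Either dictionary can replace the user message in the example above; the rest of the pipeline (apply_chat_template, process_vision_info, generate) stays the same.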
Citation
If you find our work helpful, feel free to give us a cite.
@misc{qvq-72b-preview,
title = {QVQ: To See the World with Wisdom},
url = {https://qwenlm.github.io/blog/qvq-72b-preview/},
author = {Qwen Team},
month = {December},
year = {2024}
}
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
Limitations & Considerations
- Benchmark scores may vary based on evaluation methodology and hardware configuration.
- VRAM requirements are estimates; actual usage depends on quantization and batch size.
- FNI scores are relative rankings and may change as new models are added.
- License unknown: verify licensing terms before commercial use.
- Source: Unknown
Model Transparency Report
Verified data manifest for traceability and transparency.
Identity & Source
- id: hf-model--qwen--qvq-72b-preview
- author: qwen
- tags: transformers, safetensors, qwen2_vl, image-to-text, chat, image-text-to-text, conversational, en, arxiv:2409.12191, base_model:qwen/qwen2-vl-72b, base_model:finetune:qwen/qwen2-vl-72b, license:other, text-generation-inference, endpoints_compatible, deploy:azure, region:us
Technical Specs
- architecture: Qwen2VLForConditionalGeneration
- params (billions): 73.41
- context length: 4,096
- VRAM (GB): 57.6
- VRAM is estimated: true
- VRAM formula: VRAM ≈ (params * 0.75) + 2 GB (KV) + 0.5 GB (OS); see the worked example below
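As a quick sanity check on that formula (a rough estimate only; the 0.75 factor and the fixed KV/OS allowances come straight from the formula above), plugging in the listed parameter count reproduces the 57.6 GB figure:

# Back-of-the-envelope VRAM estimate from the formula above (illustrative only).
params_billions = 73.41                 # listed parameter count
weights_gb = params_billions * 0.75     # ~0.75 GB per billion parameters
kv_cache_gb = 2.0                       # fixed KV-cache allowance
os_overhead_gb = 0.5                    # fixed OS/runtime overhead

estimated_vram_gb = weights_gb + kv_cache_gb + os_overhead_gb
print(f"Estimated VRAM: {estimated_vram_gb:.1f} GB")  # -> Estimated VRAM: 57.6 GB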
Engagement & Metrics
- likes: 609
- downloads: 383