deepseek-ocr
Quick Commands

```bash
ollama run deepseek-ocr
huggingface-cli download deepseek-ai/deepseek-ocr
pip install -U transformers
```

Engineering Specs
Est. VRAM Benchmark: ~3.8 GB*

* Technical estimate for FP16/Q4 weights. Does not include OS overhead or long-context batching. For technical reference only.
Hardware Compatibility
Multi-Tier Validation Matrix:

- RTX 3060 / 4060 Ti
- RTX 4070 Super
- RTX 4080 / Mac M3
- RTX 3090 / 4090
- RTX 6000 Ada
- A100 / H100

Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) weights or ultra-long context windows will significantly increase VRAM requirements.
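As a rough illustration of why the tiers above assume Q4, here is a back-of-envelope sketch of the weight footprint alone. The 3.34B parameter count comes from the transparency report below; the bytes-per-parameter values are the standard ones for each format, and activations, KV cache, and framework overhead are deliberately excluded.

```python
# Back-of-envelope weight footprint for a 3.34B-parameter model.
# Bytes-per-parameter are standard for each format; activations,
# KV cache, and framework overhead are NOT included.
PARAMS_B = 3.34  # parameter count in billions (from the transparency report)

for fmt, bytes_per_param in [("FP16", 2.0), ("Q4", 0.5)]:
    weight_gb = PARAMS_B * bytes_per_param
    print(f"{fmt}: ~{weight_gb:.1f} GB of weights")

# FP16: ~6.7 GB -> wants a 12 GB-class card once runtime overhead is added
# Q4:   ~1.7 GB -> fits comfortably on 8 GB cards
```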
README
GitHub | Model Download | Paper Link | arXiv Paper Link
DeepSeek-OCR: Contexts Optical Compression
Explore the boundaries of visual-text compression.
Usage
Inference using Hugging Face Transformers on NVIDIA GPUs. Requirements tested on Python 3.12.9 + CUDA 11.8:
```
torch==2.6.0
transformers==4.46.3
tokenizers==0.20.3
einops
addict
easydict
```

```bash
pip install flash-attn==2.7.3 --no-build-isolation
```
```python
from transformers import AutoModel, AutoTokenizer
import torch
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'

model_name = 'deepseek-ai/DeepSeek-OCR'
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, _attn_implementation='flash_attention_2', trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)

# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'your_image.jpg'
output_path = 'your/output/dir'

# infer(self, tokenizer, prompt='', image_file='', output_path=' ', base_size=1024, image_size=640, crop_mode=True, test_compress=False, save_results=False)
# Tiny:   base_size = 512,  image_size = 512,  crop_mode = False
# Small:  base_size = 640,  image_size = 640,  crop_mode = False
# Base:   base_size = 1024, image_size = 1024, crop_mode = False
# Large:  base_size = 1280, image_size = 1280, crop_mode = False
# Gundam: base_size = 1024, image_size = 640,  crop_mode = True
res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path=output_path, base_size=1024, image_size=640, crop_mode=True, save_results=True, test_compress=True)
```
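If full-resolution tiling is not needed, the same call works with any of the presets listed in the comments. A minimal sketch continuing the snippet above, with parameter values taken directly from the mode table:

```python
# 'Small' preset from the mode table above: single 640px view, no tiling.
res_small = model.infer(
    tokenizer,
    prompt=prompt,
    image_file=image_file,
    output_path=output_path,
    base_size=640,
    image_size=640,
    crop_mode=False,
    save_results=True,
)
```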
vLLM
Refer to GitHub for guidance on model inference acceleration, PDF processing, and more.

[2025/10/23] 🚀🚀🚀 DeepSeek-OCR is now officially supported in upstream vLLM.
```bash
uv venv
source .venv/bin/activate
# Until the v0.11.1 release, install vLLM from the nightly build
uv pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
```
```python
from vllm import LLM, SamplingParams
from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor
from PIL import Image

# Create model instance
llm = LLM(
    model="deepseek-ai/DeepSeek-OCR",
    enable_prefix_caching=False,
    mm_processor_cache_gb=0,
    logits_processors=[NGramPerReqLogitsProcessor],
)

# Prepare batched input with your image files
image_1 = Image.open("path/to/your/image_1.png").convert("RGB")
image_2 = Image.open("path/to/your/image_2.png").convert("RGB")

prompt = "<image>\nFree OCR."

model_input = [
    {
        "prompt": prompt,
        "multi_modal_data": {"image": image_1},
    },
    {
        "prompt": prompt,
        "multi_modal_data": {"image": image_2},
    },
]

sampling_param = SamplingParams(
    temperature=0.0,
    max_tokens=8192,
    # ngram logits processor args
    extra_args=dict(
        ngram_size=30,
        window_size=90,
        whitelist_token_ids={128821, 128822},  # whitelist: <td>, </td>
    ),
    skip_special_tokens=False,
)

# Generate output
model_outputs = llm.generate(model_input, sampling_param)

# Print output
for output in model_outputs:
    print(output.outputs[0].text)
```
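The batch above uses the plain "Free OCR." prompt. Assuming the grounding prompt from the Transformers example is also accepted by the vLLM path (the README does not show this combination, so treat it as an untested assumption), a markdown-conversion request continuing the snippet above would look like:

```python
# Assumption: the grounding prompt from the Transformers example also works
# under vLLM; this combination is not shown in the README.
markdown_input = [{
    "prompt": "<image>\n<|grounding|>Convert the document to markdown. ",
    "multi_modal_data": {"image": image_1},
}]
markdown_outputs = llm.generate(markdown_input, sampling_param)
print(markdown_outputs[0].outputs[0].text)
```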
Visualizations
(Example output visualizations appear as images in the original README and are not reproduced here.)
Acknowledgement
We would like to thank Vary, GOT-OCR2.0, MinerU, PaddleOCR, OneChart, and Slow Perception for their valuable models and ideas.
We also appreciate the benchmarks Fox and OmniDocBench.
Citation
```bibtex
@article{wei2025deepseek,
  title={DeepSeek-OCR: Contexts Optical Compression},
  author={Wei, Haoran and Sun, Yaofeng and Li, Yukun},
  journal={arXiv preprint arXiv:2510.18234},
  year={2025}
}
```
Limitations & Considerations

- Benchmark scores may vary based on evaluation methodology and hardware configuration.
- VRAM requirements are estimates; actual usage depends on quantization and batch size.
- FNI scores are relative rankings and may change as new models are added.
- License Unknown: verify licensing terms before commercial use.
- Source: Unknown
Model Transparency Report

Verified data manifest for traceability and transparency.

Identity & Source

- id: hf-model--deepseek-ai--deepseek-ocr
- author: deepseek-ai
- tags: transformers, safetensors, deepseek_vl_v2, feature-extraction, deepseek, vision-language, ocr, custom_code, image-text-to-text, multilingual, arxiv:2510.18234, license:mit, region:us
Technical Specs

- architecture: DeepseekOCRForCausalLM
- params (billions): 3.34
- context length: 4,096
- VRAM (GB): 3.8
- VRAM is estimated: true
- VRAM formula: VRAM ≈ (params × 0.75) + 0.8 GB (KV) + 0.5 GB (OS)
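As a sanity check, plugging the listed 3.34B parameter count into the stated formula reproduces the ~3.8 GB estimate:

```python
# Worked example of the stated formula:
# VRAM ≈ params_in_billions * 0.75 + 0.8 (KV cache) + 0.5 (OS overhead)
params_b = 3.34
vram_gb = params_b * 0.75 + 0.8 + 0.5
print(round(vram_gb, 1))  # 3.8, matching the listed estimate
```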
Engagement & Metrics

- likes: 2,948
- downloads: 5,433,086