🧠

deepseek-ocr

by deepseek-ai Model ID: hf-model--deepseek-ai--deepseek-ocr
FNI 16.1
Top 67%
🔗 View Source
Audited 16.1 FNI Score
3.34B Params
4K Context
Hot 5.4M Downloads
8GB GPU ~4GB Est. VRAM

⚡ Quick Commands

🦙 Ollama Run
ollama run deepseek-ocr
🤗 HF Download
huggingface-cli download deepseek-ai/deepseek-ocr
📦 Install Lib
pip install -U transformers
📊

Engineering Specs

⚡ Hardware

Parameters
3.34B
Architecture
DeepseekOCRForCausalLM
Context Length
4K
Model Size
6.2GB

🧠 Lifecycle

Library
-
Precision
float16
Tokenizer
-

🌐 Identity

Source
HuggingFace
License
Open Access
💾

Est. VRAM Benchmark

~3.8GB

* Technical estimation for FP16/Q4 weights. Does not include OS overhead or long-context batching. For Technical Reference Only.

🕸️ Neural Mesh Hub

Interconnecting Research, Data & Ecosystem

📈 Interest Trend

* Real-time activity index across HuggingFace, GitHub and Research citations.

No similar models found.


🖥️

Hardware Compatibility

Multi-Tier Validation Matrix

🎮 Compatible: RTX 3060 / 4060 Ti (Entry, 8GB VRAM)
🎮 Compatible: RTX 4070 Super (Mid, 12GB VRAM)
💻 Compatible: RTX 4080 / Mac M3 (High, 16GB VRAM)
🚀 Compatible: RTX 3090 / 4090 (Pro, 24GB VRAM)
🏗️ Compatible: RTX 6000 Ada (Workstation, 48GB VRAM)
🏭 Compatible: A100 / H100 (Datacenter, 80GB VRAM)
ℹ️

Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements.
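
A minimal sketch of that 4-bit path, assuming bitsandbytes is installed; whether DeepSeek-OCR's custom modeling code accepts quantized loading is an assumption, not verified here:

# Hedged sketch: 4-bit (Q4) loading via transformers + bitsandbytes.
# Assumption: the model's trust_remote_code path tolerates quantized weights.
import torch
from transformers import AutoModel, BitsAndBytesConfig

quant = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # store weights in 4-bit, compute in bf16
)
model = AutoModel.from_pretrained(
    'deepseek-ai/DeepSeek-OCR',
    quantization_config=quant,
    trust_remote_code=True,
)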

README

6,129 chars • Full Disclosure Protocol Active

DeepSeek AI

🌟 GitHub | 📥 Model Download | 📄 Paper Link | 📄 arXiv Paper Link

DeepSeek-OCR: Contexts Optical Compression

Explore the boundaries of visual-text compression.

Usage

Inference using Hugging Face Transformers on NVIDIA GPUs. Requirements tested with Python 3.12.9 + CUDA 11.8:

torch==2.6.0
transformers==4.46.3
tokenizers==0.20.3
einops
addict
easydict

pip install flash-attn==2.7.3 --no-build-isolation
from transformers import AutoModel, AutoTokenizer
import torch
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'  # pin inference to a single GPU
model_name = 'deepseek-ai/DeepSeek-OCR'

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, _attn_implementation='flash_attention_2', trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)  # eval mode, bf16 on GPU

# prompt = "<image>\nFree OCR. "
prompt = "<image>\n<|grounding|>Convert the document to markdown. "
image_file = 'your_image.jpg'
output_path = 'your/output/dir'

# infer(self, tokenizer, prompt='', image_file='', output_path = ' ', base_size = 1024, image_size = 640, crop_mode = True, test_compress = False, save_results = False):

# Tiny: base_size = 512, image_size = 512, crop_mode = False
# Small: base_size = 640, image_size = 640, crop_mode = False
# Base: base_size = 1024, image_size = 1024, crop_mode = False
# Large: base_size = 1280, image_size = 1280, crop_mode = False

# Gundam: base_size = 1024, image_size = 640, crop_mode = True

res = model.infer(tokenizer, prompt=prompt, image_file=image_file, output_path=output_path, base_size=1024, image_size=640, crop_mode=True, save_results=True, test_compress=True)
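
The preset comments above map one-to-one onto infer() keyword arguments; a small hypothetical helper (PRESETS is not part of the upstream API) makes them selectable by name:

# Hypothetical convenience table for the resolution presets listed above.
PRESETS = {
    'tiny':   dict(base_size=512,  image_size=512,  crop_mode=False),
    'small':  dict(base_size=640,  image_size=640,  crop_mode=False),
    'base':   dict(base_size=1024, image_size=1024, crop_mode=False),
    'large':  dict(base_size=1280, image_size=1280, crop_mode=False),
    'gundam': dict(base_size=1024, image_size=640,  crop_mode=True),
}

res = model.infer(tokenizer, prompt=prompt, image_file=image_file,
                  output_path=output_path, save_results=True, **PRESETS['gundam'])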

vLLM

Refer to 🌟 GitHub for guidance on model inference acceleration, PDF processing, etc.

[2025/10/23] 🚀🚀🚀 DeepSeek-OCR is now officially supported in upstream vLLM.

uv venv
source .venv/bin/activate
# Until v0.11.1 release, you need to install vLLM from nightly build
uv pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
from vllm import LLM, SamplingParams
from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor
from PIL import Image

# Create model instance
llm = LLM(
    model="deepseek-ai/DeepSeek-OCR",
    enable_prefix_caching=False,
    mm_processor_cache_gb=0,
    logits_processors=[NGramPerReqLogitsProcessor]
)

# Prepare batched input with your image file
image_1 = Image.open("path/to/your/image_1.png").convert("RGB")
image_2 = Image.open("path/to/your/image_2.png").convert("RGB")
prompt = "<image>\nFree OCR."

model_input = [
    {
        "prompt": prompt,
        "multi_modal_data": {"image": image_1}
    },
    {
        "prompt": prompt,
        "multi_modal_data": {"image": image_2}
    }
]

sampling_param = SamplingParams(
    temperature=0.0,
    max_tokens=8192,
    # ngram logit processor args
    extra_args=dict(
        ngram_size=30,
        window_size=90,
        whitelist_token_ids={128821, 128822},  # whitelist: <td>, </td>
    ),
    skip_special_tokens=False,
)
# Generate output
model_outputs = llm.generate(model_input, sampling_param)

# Print output
for output in model_outputs:
    print(output.outputs[0].text)
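
To keep the batched results, each generated string can be written to its own file; a minimal sketch (the ocr_outputs directory name is arbitrary, not from the upstream README):

# Persist each OCR result as a separate markdown file.
import pathlib

out_dir = pathlib.Path('ocr_outputs')
out_dir.mkdir(exist_ok=True)
for i, output in enumerate(model_outputs):
    (out_dir / f'image_{i}.md').write_text(output.outputs[0].text, encoding='utf-8')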

Visualizations

Acknowledgement

We would like to thank Vary, GOT-OCR2.0, MinerU, PaddleOCR, OneChart, Slow Perception for their valuable models and ideas.

We also appreciate the benchmarks: Fox, OmniDocBench.

Citation

@article{wei2025deepseek,
  title={DeepSeek-OCR: Contexts Optical Compression},
  author={Wei, Haoran and Sun, Yaofeng and Li, Yukun},
  journal={arXiv preprint arXiv:2510.18234},
  year={2025}
}

๐Ÿ“ Limitations & Considerations

  • โ€ข Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • โ€ข VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • โ€ข FNI scores are relative rankings and may change as new models are added.
  • โš  License Unknown: Verify licensing terms before commercial use.
  • โ€ข Source: Unknown
📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__deepseek_ai__deepseek_ocr,
  author = {deepseek-ai},
  title = {DeepSeek-OCR},
  year = {2025},
  howpublished = {\url{https://huggingface.co/deepseek-ai/deepseek-ocr}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
deepseek-ai. (2025). DeepSeek-OCR [Model]. Free2AITools. https://huggingface.co/deepseek-ai/deepseek-ocr
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology 📚 Knowledge Base ℹ️ Verify with original source

🛡️ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id
hf-model--deepseek-ai--deepseek-ocr
author
deepseek-ai
tags
transformers, safetensors, deepseek_vl_v2, feature-extraction, deepseek, vision-language, ocr, custom_code, image-text-to-text, multilingual, arxiv:2510.18234, license:mit, region:us

⚙️ Technical Specs

architecture
DeepseekOCRForCausalLM
params billions
3.34
context length
4,096
vram gb
3.8
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
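
Plugging the listed parameter count into that formula reproduces the 3.8GB figure:

# Worked example of the estimator above (all constants from the formula as stated).
params_billions = 3.34
vram_gb = params_billions * 0.75 + 0.8 + 0.5  # weights + KV cache + OS overhead
print(f'~{vram_gb:.1f} GB')  # -> ~3.8 GB, matching the listed estimate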

📊 Engagement & Metrics

likes
2,948
downloads
5,433,086

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)