๐Ÿง 

blip-image-captioning-base

by salesforce • Model ID: hf-model--salesforce--blip-image-captioning-base
FNI 16
Top 86%


๐Ÿ”— View Source
Audited 16 FNI Score
Params: - (Tiny) • Context: - • Downloads: 2.3M (Hot)

โšก Quick Commands

๐Ÿค— HF Download
huggingface-cli download salesforce/blip-image-captioning-base
๐Ÿ“ฆ Install Lib
pip install -U transformers
๐Ÿ“Š

Engineering Specs

โšก Hardware

Parameters
-
Architecture
BlipForConditionalGeneration
Context Length
-
Model Size
5.5GB

๐Ÿง  Lifecycle

Library
-
Precision
float16
Tokenizer
-

๐ŸŒ Identity

Source
HuggingFace
License
BSD-3-Clause

๐Ÿ•ธ๏ธ Neural Mesh Hub

Interconnecting Research, Data & Ecosystem

๐Ÿ”ฌ

๐Ÿ”ฌ Research & Data

๐Ÿ“ˆ Interest Trend


* Real-time activity index across HuggingFace, GitHub and Research citations.

No similar models found.

๐Ÿ”ฌTechnical Deep Dive


๐Ÿš€ What's Next?

โšก Quick Commands

๐Ÿค— HF Download
huggingface-cli download salesforce/blip-image-captioning-base
๐Ÿ“ฆ Install Lib
pip install -U transformers
๐Ÿ–ฅ๏ธ

Hardware Compatibility

Multi-Tier Validation Matrix

🎮 RTX 3060 / 4060 Ti - Entry, 8GB VRAM - Compatible
🎮 RTX 4070 Super - Mid, 12GB VRAM - Compatible
💻 RTX 4080 / Mac M3 - High, 16GB VRAM - Compatible
🚀 RTX 3090 / 4090 - Pro, 24GB VRAM - Compatible
🏗️ RTX 6000 Ada - Workstation, 48GB VRAM - Compatible
🏭 A100 / H100 - Datacenter, 80GB VRAM - Compatible
โ„น๏ธ

Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements.
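The estimates above can be sanity-checked with a back-of-envelope weight-memory calculation. This is a rough sketch, not a measurement: the 250M parameter count is an assumption in the right ballpark for BLIP-base, and the 1.2 overhead factor for activations and buffers is a guess.

```python
def estimate_vram_gb(n_params: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Back-of-envelope VRAM estimate for holding inference weights.

    bytes_per_param: 4 for FP32, 2 for FP16/BF16, ~0.5 for 4-bit (Q4).
    overhead: rough fudge factor for activations and runtime buffers.
    """
    return n_params * bytes_per_param * overhead / 1024**3

# Hypothetical 250M-parameter model (BLIP-base is roughly this size):
for label, bpp in [("FP32", 4), ("FP16", 2), ("Q4", 0.5)]:
    print(f"{label}: ~{estimate_vram_gb(250e6, bpp):.2f} GB")
```

By this estimate even FP32 weights fit comfortably in 8GB of VRAM, which is why every tier in the matrix is marked compatible.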

README


BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone).

[Figure: BLIP model overview, pulled from the official repo. Image source: https://github.com/salesforce/BLIP]

TL;DR

Authors from the paper write in the abstract:

Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.
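The caption-bootstrapping idea from the abstract (a captioner proposes synthetic captions, a filter discards noisy pairs) can be sketched in a few lines. Everything below is illustrative only: the captioner and filter are stand-in stubs, not the paper's fine-tuned BLIP modules.

```python
def captioner(image: str) -> str:
    # Stand-in: in the paper this is a fine-tuned BLIP caption decoder.
    return f"a photo of {image}"

def filter_keep(image: str, caption: str, threshold: float = 0.5) -> bool:
    # Stand-in: the real filter scores image-text alignment with a learned head.
    score = 1.0 if image in caption else 0.0
    return score >= threshold

def bootstrap(web_pairs):
    """Return a cleaned dataset: web captions that pass the filter,
    plus synthetic captions that pass the filter."""
    cleaned = []
    for image, web_caption in web_pairs:
        if filter_keep(image, web_caption):
            cleaned.append((image, web_caption))
        synthetic = captioner(image)
        if filter_keep(image, synthetic):
            cleaned.append((image, synthetic))
    return cleaned

pairs = [("dog", "a dog on the beach"), ("cat", "buy cheap watches")]
print(bootstrap(pairs))
# [('dog', 'a dog on the beach'), ('dog', 'a photo of dog'), ('cat', 'a photo of cat')]
```

The noisy web caption for "cat" is dropped while a filtered synthetic caption takes its place, which is the supervision-cleaning effect the paper describes.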

Usage

You can use this model for conditional and unconditional image captioning.

Using the PyTorch model

Running the model on CPU

import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog

Running the model on GPU

In full precision
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog

In half precision (`float16`)

import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
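As an alternative to calling the processor and model directly, transformers also exposes an image-to-text pipeline that wraps both. The sketch below keeps the actual download behind a flag so it can be read offline; it assumes the pipeline's usual output format of one `{"generated_text": ...}` dict per image.

```python
from typing import Callable, Iterable

def caption_all(images: Iterable, captioner: Callable) -> list:
    """Apply any captioning callable to a batch of images (paths, URLs, or PIL images)."""
    return [captioner(img) for img in images]

# Flip to True to actually download the model (~1 GB) and caption the demo image.
RUN_PIPELINE = False
if RUN_PIPELINE:
    from transformers import pipeline

    pipe = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
    urls = ["https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"]
    print(caption_all(urls, lambda u: pipe(u)[0]["generated_text"]))
```

Keeping the captioning callable as a parameter makes the batching helper trivial to reuse with either the pipeline or the explicit `model.generate` path shown above.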

Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact peopleโ€™s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

BibTex and citation info

@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

๐Ÿ“ Limitations & Considerations

  • โ€ข Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • โ€ข VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • โ€ข FNI scores are relative rankings and may change as new models are added.
  • โš  License Unknown: Verify licensing terms before commercial use.
  • โ€ข Source: Unknown
๐Ÿ“œ

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__salesforce__blip_image_captioning_base,
  author = {salesforce},
  title = {blip-image-captioning-base},
  year = {2026},
  howpublished = {\url{https://huggingface.co/salesforce/blip-image-captioning-base}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
salesforce. (2026). blip-image-captioning-base [Model]. Free2AITools. https://huggingface.co/salesforce/blip-image-captioning-base
๐Ÿ”„ Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

๐Ÿ“Š FNI Methodology ๐Ÿ“š Knowledge Baseโ„น๏ธ Verify with original source

๐Ÿ›ก๏ธ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

๐Ÿ†” Identity & Source

id
hf-model--salesforce--blip-image-captioning-base
author
salesforce
tags
transformers, pytorch, tf, blip, image-to-text, image-captioning, arxiv:2201.12086, license:bsd-3-clause, endpoints_compatible, region:us

โš™๏ธ Technical Specs

architecture
BlipForConditionalGeneration
params billions
null
context length
null

๐Ÿ“Š Engagement & Metrics

likes
822
downloads
2,302,902

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)