Model: clip-vit-large-patch14
by openai
---
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
  candidate_labels: playing music, playing s
---
Updated 12/19/2025
About
Disclaimer: This model card is taken and modified from the official CLIP repository (https://github.com/openai/CLIP). The CLIP model was developed by researchers at OpenAI to study what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize...
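CLIP scores an image against candidate text labels by comparing embeddings: both are L2-normalized, cosine similarities are scaled by a learned temperature, and a softmax over the labels yields zero-shot probabilities. A minimal sketch of that scoring step, with random vectors standing in for real embeddings (the function name and the fixed temperature of 100 are illustrative; ViT-L/14 projects to a 768-dimensional shared space):

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """Score one image against candidate labels, CLIP-style:
    normalize, take cosine similarities, scale, softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)          # one logit per label
    exp = np.exp(logits - logits.max())         # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
image_emb = rng.normal(size=768)                # stand-in image embedding
text_embs = rng.normal(size=(3, 768))           # stand-ins for 3 label embeddings
probs = zero_shot_scores(image_emb, text_embs)  # probabilities over the 3 labels
```

In practice the embeddings would come from the model itself, e.g. via `CLIPModel.from_pretrained("openai/clip-vit-large-patch14")` and the matching `CLIPProcessor` in the Transformers library.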
Limitations & Considerations
- Benchmark scores may vary based on evaluation methodology and hardware configuration.
- VRAM requirements are estimates; actual usage depends on quantization and batch size.
- FNI scores are relative rankings and may change as new models are added.
- License Unknown: Verify licensing terms before commercial use.
- Data source: huggingface (https://huggingface.co/openai/clip-vit-large-patch14), fetched 2025-12-19T07:41:01Z, adapter version 3.2.0.
Related Resources
Related Papers
No related papers linked yet. Check the model's official documentation for research papers.
Training Datasets
Training data information not available. Refer to the original model card for details.