🧠 magicprompt-stable-diffusion

by gustavosta · Model ID: hf-model--gustavosta--magicprompt-stable-diffusion
FNI 2.6
Top 71%

"This is a model from the MagicPrompt series of models, which are GPT-2 models intended to generate prompt texts for imaging AIs, in this case: Stable Diffusion. This model was trained with 150,000 steps and a set of about 80,000 data filtered and extracted from the image finder for Stable Diffusion:..."

🔗 View Source
FNI Score: 2.6 (Audited)
Params: 0.14B (Tiny)
Context: 4K
Downloads: 6.3K
Est. VRAM: ~1.4GB (fits an 8GB GPU)

⚡ Quick Commands

đŸĻ™ Ollama Run
ollama run magicprompt-stable-diffusion
🤗 HF Download
huggingface-cli download gustavosta/magicprompt-stable-diffusion
đŸ“Ļ Install Lib
pip install -U transformers
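🐍 Python quick test — a minimal sketch using the transformers text-generation pipeline; the seed text and sampling settings below are illustrative choices, not values from the model card.

# Minimal sketch: generate Stable Diffusion prompt text with the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Gustavosta/MagicPrompt-Stable-Diffusion",
)

seed_text = "a portrait of a cyberpunk samurai"  # illustrative seed, not from the model card
outputs = generator(
    seed_text,
    max_new_tokens=60,       # GPT-2 context is small; keep completions short
    do_sample=True,          # sampling gives more varied prompt suggestions
    temperature=0.9,
    num_return_sequences=3,
)

for out in outputs:
    print(out["generated_text"])

Since the base model is GPT-2, short seed phrases work best; the model extends them into styled prompt text.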
📊 Engineering Specs

⚡ Hardware

Parameters: 0.14B
Architecture: GPT2LMHeadModel
Context Length: 4K
Model Size: 4.3GB

🧠 Lifecycle

Library: -
Precision: float16
Tokenizer: -

🌐 Identity

Source: Hugging Face
License: MIT
💾 Est. VRAM Benchmark

~1.4GB


* Technical estimation for FP16/Q4 weights. Does not include OS overhead or long-context batching. For Technical Reference Only.
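As a rough weight-only cross-check (an illustration, not the listing's own formula), memory can be approximated from bytes per parameter: about 2 bytes for FP16 and roughly 0.5 bytes for Q4.

# Weight-only memory estimate: bytes per parameter times parameter count.
# ~2 bytes/param for FP16, ~0.5 bytes/param for 4-bit (Q4) weights.
params = 0.14e9  # 0.14B parameters

fp16_gb = params * 2.0 / 1e9   # ~0.28 GB of FP16 weights
q4_gb = params * 0.5 / 1e9     # ~0.07 GB of Q4 weights

print(f"FP16 weights: ~{fp16_gb:.2f} GB, Q4 weights: ~{q4_gb:.2f} GB")

Runtime overhead (KV cache, activations, framework buffers) accounts for the gap between these figures and the ~1.4GB estimate above.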

📈 Interest Trend


* Real-time activity index across HuggingFace, GitHub and Research citations.

No similar models found.

đŸ”Ŧ Technical Deep Dive


đŸ–Ĩī¸ Hardware Compatibility

Multi-Tier Validation Matrix

🎮 RTX 3060 / 4060 Ti (Entry, 8GB VRAM): Compatible
🎮 RTX 4070 Super (Mid, 12GB VRAM): Compatible
đŸ’ģ RTX 4080 / Mac M3 (High, 16GB VRAM): Compatible
🚀 RTX 3090 / 4090 (Pro, 24GB VRAM): Compatible
đŸ—ī¸ RTX 6000 Ada (Workstation, 48GB VRAM): Compatible
🏭 A100 / H100 (Datacenter, 80GB VRAM): Compatible
â„šī¸ Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements.
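A minimal sketch of loading the model with 4-bit weights via bitsandbytes, the setting the matrix above assumes; for a 0.14B GPT-2 this is rarely necessary, and the config values are illustrative. Requires the bitsandbytes and accelerate packages and a CUDA GPU.

# Minimal sketch: 4-bit (Q4-style) loading with bitsandbytes via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Gustavosta/MagicPrompt-Stable-Diffusion"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # store weights in 4-bit
    bnb_4bit_compute_dtype=torch.float16,     # compute in FP16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                        # requires accelerate
)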

README

MagicPrompt - Stable Diffusion

This is a model from the MagicPrompt series of models, which are GPT-2 models intended to generate prompt texts for imaging AIs, in this case: Stable Diffusion.

đŸ–ŧī¸ Here's an example:

This model was trained for 150,000 steps on a dataset of about 80,000 prompts filtered and extracted from the Stable Diffusion image search engine "Lexica.art". It was a little difficult to extract the data, since the search engine still doesn't have a public API that isn't protected by Cloudflare, but if you want to take a look at the original dataset, you can find it here: datasets/Gustavosta/Stable-Diffusion-Prompts.
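A minimal sketch for browsing that dataset, assuming the Hugging Face datasets library; split and column names are looked up at runtime rather than assumed.

# Minimal sketch: load and inspect the Lexica.art prompt dataset referenced above.
from datasets import load_dataset

ds = load_dataset("Gustavosta/Stable-Diffusion-Prompts")
print(ds)                        # available splits and row counts

split = next(iter(ds))           # first available split (e.g. "train")
print(ds[split].column_names)    # confirm the prompt column name before indexing
print(ds[split][0])              # one scraped prompt entry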

If you want to test the model with a demo, you can go to: "spaces/Gustavosta/MagicPrompt-Stable-Diffusion".

đŸ’ģ You can see other MagicPrompt models:

âš–ī¸ Licence:

MIT

When using this model, please credit: Gustavosta

Thanks for reading this far! :)


📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License: MIT (per the model card and tags); verify licensing terms before commercial use.
  • Source: Hugging Face
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__gustavosta__magicprompt_stable_diffusion,
  author = {gustavosta},
  title = {MagicPrompt-Stable-Diffusion},
  year = {2026},
  howpublished = {\url{https://huggingface.co/gustavosta/magicprompt-stable-diffusion}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
gustavosta. (2026). MagicPrompt-Stable-Diffusion [Model]. Free2AITools. https://huggingface.co/gustavosta/magicprompt-stable-diffusion
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · â„šī¸ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id: hf-model--gustavosta--magicprompt-stable-diffusion
author: gustavosta
tags: transformers, pytorch, coreml, safetensors, gpt2, text-generation, license:mit, text-generation-inference, endpoints_compatible, deploy:azure, region:us

âš™ī¸ Technical Specs

architecture: GPT2LMHeadModel
params (billions): 0.14
context length: 4,096
vram (GB): 1.4
vram is estimated: true
vram formula: VRAM ≈ (params × 0.75) + 0.8GB (KV) + 0.5GB (OS)
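The estimate above can be reproduced directly from the stated formula; the sketch below simply plugs in the listing's own heuristics (params in billions, fixed KV-cache and OS allowances).

# Sketch reproducing the site's VRAM estimate from the formula above.
def estimate_vram_gb(params_billions: float) -> float:
    weights_gb = params_billions * 0.75  # heuristic weight footprint (GB per billion params)
    kv_cache_gb = 0.8                    # fixed KV-cache allowance
    os_overhead_gb = 0.5                 # fixed OS / runtime allowance
    return weights_gb + kv_cache_gb + os_overhead_gb

print(round(estimate_vram_gb(0.14), 1))  # -> 1.4, matching "vram (GB)" above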

📊 Engagement & Metrics

likes: 731
downloads: 6,267

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)