
flux.1-dev-gguf

by city96
Model ID: hf-model--city96--flux.1-dev-gguf
FNI 3.4 · Top 62%

"This is a direct GGUF conversion of black-forest-labs/FLUX.1-dev As this is a quantized model not a finetune, all the same restrictions/original license terms still apply. The model files can be used with the ComfyUI-GGUF custom node. Place model files in - see the GitHub......"

FNI Score: 3.4 (Audited)
Params: -
Context: -
Downloads: 75.9K (Hot)

⚡ Quick Commands

🤗 HF Download
huggingface-cli download city96/flux.1-dev-gguf
📊

Engineering Specs

⚡ Hardware

Parameters
-
Architecture
-
Context Length
-
Model Size
96.0GB

🧠 Lifecycle

Library
-
Precision
float16
Tokenizer
-

🌐 Identity

Source
HuggingFace
License
Other (original FLUX.1-dev license terms apply)

đŸ•¸ī¸ Neural Mesh Hub

Interconnecting Research, Data & Ecosystem


No similar models found.


🖥️ Hardware Compatibility

Multi-Tier Validation Matrix

🎮 Entry (8GB VRAM): RTX 3060 / 4060 Ti · Compatible
🎮 Mid (12GB VRAM): RTX 4070 Super · Compatible
💻 High (16GB VRAM): RTX 4080 / Mac M3 · Compatible
🚀 Pro (24GB VRAM): RTX 3090 / 4090 · Compatible
🏗️ Workstation (48GB VRAM): RTX 6000 Ada · Compatible
🏭 Datacenter (80GB VRAM): A100 / H100 · Compatible
ℹ️ Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) or ultra-long context windows will significantly increase VRAM requirements.
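The Q4 estimate above can be sanity-checked with napkin math. FLUX.1-dev's transformer has roughly 12 billion parameters, and a Q4-family GGUF quant stores about 4.5 bits per weight (following llama.cpp's Q4_0 block layout); the flat overhead allowance below for activations and auxiliary components is an illustrative assumption, not a measurement:

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: quantized weight size plus a flat overhead allowance."""
    weight_gb = params_billions * bits_per_weight / 8  # billions of bytes ~= GB
    return weight_gb + overhead_gb

# FLUX.1-dev transformer: ~12B parameters (approximate)
for name, bpw in [("Q4_0", 4.5), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{estimate_vram_gb(12, bpw):.1f} GB")
```

By this estimate Q4 lands just under 9 GB, which suggests the 8 GB entry tier is borderline and may require partial CPU offloading.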

README

This is a direct GGUF conversion of black-forest-labs/FLUX.1-dev

As this is a quantized model, not a finetune, all the same restrictions/original license terms still apply.

The model files can be used with the ComfyUI-GGUF custom node.

Place model files in ComfyUI/models/unet - see the GitHub readme for further install instructions.

Please refer to this chart for a basic overview of quantization types.
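The linked chart can be approximated numerically: GGUF quant types differ mainly in bits per weight. The figures below follow llama.cpp's block layouts (e.g. Q4_0 packs 32 weights into 18 bytes, i.e. 4.5 bits each) and assume a ~12B-parameter model; treat them as rough orientation, not the repository's exact file sizes:

```python
# Approximate bits per weight per GGUF quant type (llama.cpp block layouts)
BITS_PER_WEIGHT = {
    "Q4_0": 4.5,     # 18 bytes / 32 weights
    "Q5_0": 5.5,     # 22 bytes / 32 weights
    "Q6_K": 6.5625,  # 210 bytes / 256 weights
    "Q8_0": 8.5,     # 34 bytes / 32 weights
    "F16": 16.0,     # unquantized half precision
}

def gguf_size_gb(params_billions: float, quant: str) -> float:
    """Approximate on-disk size in GB for a given quant type."""
    return params_billions * BITS_PER_WEIGHT[quant] / 8

for quant in BITS_PER_WEIGHT:
    print(f"flux1-dev-{quant}: ~{gguf_size_gb(12, quant):.1f} GB")
```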


📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License: listed as "other"; the original FLUX.1-dev license terms apply. Verify before commercial use.
  • Source: HuggingFace
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__city96__flux.1_dev_gguf,
  author = {city96},
  title = {flux.1-dev-gguf},
  year = {2026},
  howpublished = {\url{https://huggingface.co/city96/flux.1-dev-gguf}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
city96. (2026). flux.1-dev-gguf [Model]. Free2AITools. https://huggingface.co/city96/flux.1-dev-gguf
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id
hf-model--city96--flux.1-dev-gguf
author
city96
tags
gguf, text-to-image, image-generation, flux, base_model:black-forest-labs/flux.1-dev, base_model:quantized:black-forest-labs/flux.1-dev, license:other, region:us

âš™ī¸ Technical Specs

architecture
null
params billions
null
context length
null

📊 Engagement & Metrics

likes
1,247
downloads
75,915

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)