
Darwin 4b David I1 Gguf

by mradermacher
Nexus Index: 38.9 (Top 100%)
Tech Context: 4B params · 4,096-token context
Vital Performance: 1.2K downloads / 30 days
Audited FNI Score: 38.9
Est. VRAM: ~5 GB (8 GB GPU class)
License: Apache-2.0 (commercial use permitted)
Model Information Summary

Entity Passport
Registry ID: hf-model--mradermacher--darwin-4b-david-i1-gguf
License: Apache-2.0
Provider: huggingface
💾 Compute Threshold

~4.3 GB VRAM


* Static estimate for 4-bit quantization.

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__mradermacher__darwin_4b_david_i1_gguf,
  author = {mradermacher},
  title = {Darwin 4b David I1 Gguf Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/mradermacher/darwin-4b-david-i1-gguf}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
mradermacher. (2026). Darwin 4b David I1 Gguf [Model]. Free2AITools. https://huggingface.co/mradermacher/darwin-4b-david-i1-gguf

🔬 Technical Deep Dive


Quick Commands

🦙 Ollama Run
ollama run hf.co/mradermacher/darwin-4b-david-i1-gguf
(Ollama can pull GGUF repositories directly from Hugging Face via the hf.co/ prefix.)
🤗 HF Download
huggingface-cli download mradermacher/darwin-4b-david-i1-gguf
📦 Install Lib
pip install -U transformers
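The `huggingface-cli download` command above fetches the entire repository; for a single quant you can restrict the transfer with the CLI's `--include` glob. A minimal sketch that builds such a command — the `*<quant>.gguf` filename pattern is an assumption about this repo's naming and should be checked against the actual file list:

```python
def single_quant_command(repo_id: str, quant: str) -> str:
    """Build a huggingface-cli command that downloads only one quant file.

    Relies on the --include glob option of `huggingface-cli download`;
    the '*<quant>.gguf' pattern is an assumed naming convention.
    """
    return f'huggingface-cli download {repo_id} --include "*{quant}.gguf"'

cmd = single_quant_command("mradermacher/darwin-4b-david-i1-gguf", "i1-Q4_K_M")
print(cmd)
# → huggingface-cli download mradermacher/darwin-4b-david-i1-gguf --include "*i1-Q4_K_M.gguf"
```

Downloading one quant instead of the full repo saves tens of gigabytes for multi-quant repositories like this one.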

âš–ī¸ Nexus Index V2.0

38.9
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 28
Recency (R) 98
Quality (Q) 30


---


About

Weighted/imatrix quants of https://huggingface.co/FINAL-Bench/Darwin-4B-David.

For a convenient overview and download list, visit our model page for this model.

Static quants are available at https://huggingface.co/mradermacher/Darwin-4B-David-GGUF.

This is a vision model; mmproj files (if any) will be in the static repository.

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
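Multi-part GGUF files must be concatenated in order, byte for byte, before loading. A minimal sketch in Python — the `model.gguf.partNofM` naming below is an assumed convention for illustration; adjust the glob to the actual part names in the repo:

```python
import tempfile
from pathlib import Path

def concat_parts(parts: list[Path], out_path: Path) -> None:
    """Concatenate GGUF part files, in the given order, into one file."""
    with out_path.open("wb") as out:
        for part in parts:
            out.write(part.read_bytes())

# Demo with throwaway files standing in for real GGUF parts:
tmp = Path(tempfile.mkdtemp())
for i, data in enumerate((b"AB", b"CD"), start=1):
    (tmp / f"model.gguf.part{i}of2").write_bytes(data)

# Lexicographic sort puts part1of2 before part2of2 here; verify the
# ordering for the repo's real filenames before relying on it.
parts = sorted(tmp.glob("model.gguf.part*of2"))
concat_parts(parts, tmp / "model.gguf")
print((tmp / "model.gguf").read_bytes())  # b'ABCD'
```

On Unix systems the equivalent is a plain `cat part1 part2 > model.gguf`; the point is only that parts are joined raw, in order, with no headers added.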

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | imatrix | 0.1 | imatrix file (for creating your own quants) |
| GGUF | i1-Q2_K | 4.5 | IQ3_XXS probably better |
| GGUF | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 4.8 | beats Q3_K* |
| GGUF | i1-IQ3_M | 4.8 | |
| GGUF | i1-Q3_K_M | 5.0 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 5.1 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 5.2 | |
| GGUF | i1-IQ4_NL | 5.3 | prefer IQ4_XS |
| GGUF | i1-Q4_0 | 5.3 | fast, low quality |
| GGUF | i1-Q4_K_S | 5.3 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 5.4 | fast, recommended |
| GGUF | i1-Q4_1 | 5.5 | |
| GGUF | i1-Q5_K_S | 5.8 | |
| GGUF | i1-Q5_K_M | 5.9 | |
| GGUF | i1-Q6_K | 6.3 | practically like static Q6_K |
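Given the sizes in the table above, picking the largest quant that fits a VRAM budget can be automated. A sketch using those listed file sizes — note that file size is only a proxy for VRAM use, and the fixed `overhead_gb` (KV cache plus runtime) is an assumption mirroring the 0.8 GB + 0.5 GB terms in the page's VRAM formula:

```python
# (type, size_gb) pairs copied from the quant table above, sorted by size.
QUANTS = [
    ("i1-Q2_K", 4.5), ("i1-Q3_K_S", 4.8), ("i1-IQ3_S", 4.8),
    ("i1-IQ3_M", 4.8), ("i1-Q3_K_M", 5.0), ("i1-Q3_K_L", 5.1),
    ("i1-IQ4_XS", 5.2), ("i1-IQ4_NL", 5.3), ("i1-Q4_0", 5.3),
    ("i1-Q4_K_S", 5.3), ("i1-Q4_K_M", 5.4), ("i1-Q4_1", 5.5),
    ("i1-Q5_K_S", 5.8), ("i1-Q5_K_M", 5.9), ("i1-Q6_K", 6.3),
]

def largest_fit(budget_gb: float, overhead_gb: float = 1.3):
    """Return the largest quant whose file size plus a fixed overhead
    fits the VRAM budget, or None if nothing fits."""
    fitting = [q for q in QUANTS if q[1] + overhead_gb <= budget_gb]
    return max(fitting, key=lambda q: q[1]) if fitting else None

print(largest_fit(8.0))  # ('i1-Q6_K', 6.3) on an 8 GB budget
```

Size alone doesn't rank quality (the table's notes flag several cases where a smaller IQ-quant beats a larger K-quant), so treat this as a starting point, not a recommendation.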

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: quant-type quality comparison graph]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ Verify the Apache-2.0 license terms at the original source before commercial use.

Social Proof

HuggingFace Hub
1.2K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--mradermacher--darwin-4b-david-i1-gguf
slug: mradermacher--darwin-4b-david-i1-gguf
source: huggingface
author: mradermacher
license: Apache-2.0
tags: transformers, gguf, darwin-v6, generation-2, evolutionary-merge, mri-guided, dare-ties, gemma4, reasoning, thinking, proto-agi, vidraft, en, ko, ja, zh, multilingual, base_model:final-bench/darwin-4b-david, base_model:quantized:final-bench/darwin-4b-david, license:apache-2.0, endpoints_compatible, region:us, imatrix, conversational

âš™ī¸ Technical Specs

architecture
null
params billions
4
context length
4,096
pipeline tag
vram gb
4.3
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
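The static formula above can be reproduced in a few lines; this is a sketch of the page's stated heuristic, not a measured value:

```python
def estimate_vram_gb(params_b: float) -> float:
    """Static VRAM estimate for 4-bit quantization, per the page's formula:
    VRAM ≈ params_b * 0.75 (weights) + 0.8 (KV cache) + 0.5 (runtime/OS)."""
    return round(params_b * 0.75 + 0.8 + 0.5, 1)

print(estimate_vram_gb(4))  # 4.3 — matches the 4.3 GB figure for this 4B model
```

Actual usage varies with the chosen quant, context length, and batch size, so treat the result as a lower-bound sanity check rather than a guarantee.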

📊 Engagement & Metrics

downloads: 1,215
stars: 0
forks: 0

Data indexed from public sources. Updated daily.