🧠 Model

Gemma 3n E2B It GGUF

by ggml-org

Nexus Index
35.7 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 40
R: Recency 61
Q: Quality 30
Tech Context
2B Params
4,096 Ctx
Vital Performance
5.6K DL / 30D
Audited 35.7 FNI Score
Tiny 2B Params
4k Context
5.6K Downloads
8GB GPU ~3GB Est. VRAM
Restricted GEMMA License
Model Information Summary
Entity Passport
Registry ID hf-model--ggml-org--gemma-3n-e2b-it-gguf
License Gemma
Provider huggingface
💾

Compute Threshold

~2.8GB VRAM


* Static estimation for 4-Bit Quantization.
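A quick sanity check on that static estimate: under the common rule of thumb that 4-bit quantization stores roughly 0.5 bytes per parameter (an assumption, not a figure from this card), the weights of a 2B model alone come to about 1 GB; KV cache and runtime overhead account for the rest of the ~2.8GB figure.

```python
# Back-of-the-envelope weight-size estimate for a 2B-parameter GGUF model.
# Assumption (not from the card): 4-bit quantization costs roughly
# 0.5 bytes per parameter; KV cache and runtime overhead are extra.
def quantized_weight_gb(params_billions: float, bytes_per_param: float = 0.5) -> float:
    """Approximate weight size in GB at the given bytes-per-parameter rate."""
    return params_billions * bytes_per_param

weights = quantized_weight_gb(2.0)  # ~1.0 GB of weights at 4-bit
print(round(weights, 1))
```

The remaining ~1.8GB of the card's VRAM estimate comes from the KV-cache and OS terms in its formula (see the Technical Specs section).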

📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__ggml_org__gemma_3n_e2b_it_gguf,
  author = {Ggml Org},
  title = {Gemma 3n E2b It Gguf Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/ggml-org/gemma-3n-e2b-it-gguf}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Ggml Org. (2026). Gemma 3n E2b It Gguf [Model]. Free2AITools. https://huggingface.co/ggml-org/gemma-3n-e2b-it-gguf

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run
ollama run gemma-3n-e2b-it-gguf
🤗 HF Download
huggingface-cli download ggml-org/gemma-3n-e2b-it-gguf

âš–ī¸ Nexus Index V2.0

35.7
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 40
Recency (R) 61
Quality (Q) 30

💬 Index Insight

FNI V2.0 for Gemma 3n E2b It Gguf: Semantic (S:50), Authority (A:0), Popularity (P:40), Recency (R:61), Quality (Q:30).
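The published pillar scores invite a quick reconstruction attempt. The FNI weighting scheme is not disclosed, so the sketch below uses purely hypothetical equal weights; note that it yields 36.2 rather than the published 35.7, which suggests the real index weights the pillars unequally.

```python
# Illustrative only: the FNI V2.0 weighting scheme is not published, so
# this uses equal weights as a placeholder. The result lands near, but
# not exactly on, the published 35.7 composite score.
scores = {"S": 50, "A": 0, "P": 40, "R": 61, "Q": 30}
weights = {k: 0.2 for k in scores}  # hypothetical equal weights

fni = sum(scores[k] * weights[k] for k in scores)
print(round(fni, 1))  # 36.2 with equal weights vs. the published 35.7
```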

Free2AITools Nexus Index

Verification Authority

Unbiased Data · Node Refresh: VFS Live
---


Technical Deep Dive

[!Note] This version does not include multimodal support; we are still working on adding it.

Gemma 3n model card

Original model: https://huggingface.co/google/gemma-3n-E2B-it

Model Page: Gemma 3n

Resources and Technical Documentation:

Terms of Use: Terms
Authors: Google DeepMind

Example usage

With llama.cpp

To install llama.cpp on your system, see the installation guide.

```sh
llama-cli -hf ggml-org/gemma-3n-E2B-it-GGUF:Q8_0 -fa -c 0 --jinja
```

With LM Studio

Search for gemma-3n-E2B-it-GGUF and add it to your model library

âš ī¸ Incomplete Data

Some information about this model is not available. Use with caution: verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • â€ĸ Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • â€ĸ VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • â€ĸ FNI scores are relative rankings and may change as new models are added.
  • ⚠ License Unknown: Verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
5.6K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id
hf-model--ggml-org--gemma-3n-e2b-it-gguf
slug
ggml-org--gemma-3n-e2b-it-gguf
source
huggingface
author
ggml-org
license
Gemma
tags
gguf, base_model:google/gemma-3n-e2b-it, base_model:quantized:google/gemma-3n-e2b-it, license:gemma, endpoints_compatible, region:us, conversational

âš™ī¸ Technical Specs

architecture
null
params billions
2
context length
4,096
pipeline tag
vram gb
2.8
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
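The estimator row above translates directly into a small helper; this sketch uses exactly the constants from the card's formula (0.75 GB per billion parameters, 0.8GB KV cache, 0.5GB OS/runtime):

```python
# Reproduces the card's stated estimator:
#   VRAM ≈ (params_in_billions * 0.75) + 0.8GB (KV cache) + 0.5GB (OS)
def estimate_vram_gb(params_billions: float,
                     gb_per_billion: float = 0.75,
                     kv_gb: float = 0.8,
                     os_gb: float = 0.5) -> float:
    """Static VRAM estimate in GB, per the card's formula."""
    return params_billions * gb_per_billion + kv_gb + os_gb

print(round(estimate_vram_gb(2), 2))  # 2.8 for this 2B model, matching "vram gb"
```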

📊 Engagement & Metrics

downloads
5,631
stars
0
forks
0

Data indexed from public sources. Updated daily.