
Lfm2.5 1.2b Terminal Sft 1epoch Liquidcli Templateholdout

by Llm Os Models
Nexus Index: 39.0 (Top 100%)
S: Semantic 50 · A: Authority 0 · P: Popularity 0 · R: Recency 100 · Q: Quality 65
Tech Context: 1.2B Params · 4,096 Ctx
Vital Performance: 0 DL / 30D (0.0%)
Audited: 39 FNI Score · Tiny: 1.2B Params · 4K Context · 0 Downloads · 8GB GPU (~2.2GB Est. VRAM)
Model Information Summary
Entity Passport
Registry ID hf-model--llm-os-models--lfm2.5-1.2b-terminal-sft-1epoch-liquidcli-templateholdout
Provider huggingface
💾

Compute Threshold

~2.2GB VRAM

* Static estimate for 4-bit quantization.

📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__llm_os_models__lfm2.5_1.2b_terminal_sft_1epoch_liquidcli_templateholdout,
  author = {Llm Os Models},
  title = {Lfm2.5 1.2b Terminal Sft 1epoch Liquidcli Templateholdout Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/llm-os-models/lfm2.5-1.2b-terminal-sft-1epoch-liquidcli-templateholdout}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Llm Os Models. (2026). Lfm2.5 1.2b Terminal Sft 1epoch Liquidcli Templateholdout [Model]. Free2AITools. https://huggingface.co/llm-os-models/lfm2.5-1.2b-terminal-sft-1epoch-liquidcli-templateholdout
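Both attribution formats above can be rendered from the same metadata. A minimal sketch (the function name is my own; field values are copied from this page):

```python
def bibtex_misc(key: str, author: str, title: str, year: int,
                url: str, note: str = "") -> str:
    """Render a minimal BibTeX @misc entry from citation metadata."""
    fields = [f"  author = {{{author}}}",
              f"  title = {{{title}}}",
              f"  year = {{{year}}}",
              f"  howpublished = {{\\url{{{url}}}}}"]
    if note:
        fields.append(f"  note = {{{note}}}")
    return "@misc{" + key + ",\n" + ",\n".join(fields) + "\n}"

entry = bibtex_misc(
    key="lfm2_5_terminal_sft",
    author="Llm Os Models",
    title="Lfm2.5 1.2b Terminal Sft 1epoch Liquidcli Templateholdout Model",
    year=2026,
    url="https://huggingface.co/llm-os-models/lfm2.5-1.2b-terminal-sft-1epoch-liquidcli-templateholdout",
)
print(entry)
```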

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run
ollama run lfm2.5-1.2b-terminal-sft-1epoch-liquidcli-templateholdout
🤗 HF Download
huggingface-cli download llm-os-models/lfm2.5-1.2b-terminal-sft-1epoch-liquidcli-templateholdout
📦 Install Lib
pip install -U transformers

âš–ī¸ Nexus Index V2.0

39.0
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 0
Recency (R) 100
Quality (Q) 65

💬 Index Insight

FNI V2.0 for this model: Semantic (S: 50), Authority (A: 0), Popularity (P: 0), Recency (R: 100), Quality (Q: 65).

Free2AITools Nexus Index · Verification Authority · Unbiased Data Node · Refresh: VFS Live
---


âš ī¸ Incomplete Data

Some information about this model is not available. Use with caution and verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠️ License Unknown: verify licensing terms before commercial use.
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id
hf-model--llm-os-models--lfm2.5-1.2b-terminal-sft-1epoch-liquidcli-templateholdout
slug
llm-os-models--lfm2.5-1.2b-terminal-sft-1epoch-liquidcli-templateholdout
source
huggingface
author
Llm Os Models
license
(not specified)
tags
transformers, safetensors, lfm2, text-generation, terminal, sft, vllm, tb2-lite, conversational, en, ko, base_model:liquidai/lfm2.5-1.2b-instruct, base_model:finetune:liquidai/lfm2.5-1.2b-instruct, endpoints_compatible, region:us
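Hugging Face tag lists like the one above mix plain tags with namespaced `key:value` entries (e.g. `base_model:…`, `region:us`). A small illustrative parser (the function name is my own, not a Hub API):

```python
def parse_hf_tags(raw: str):
    """Split a comma-separated Hub tag string into plain tags and
    namespaced key/value tags (everything before the first ':' is the key)."""
    plain, namespaced = [], {}
    for tag in (t.strip() for t in raw.split(",")):
        if ":" in tag:
            key, value = tag.split(":", 1)
            namespaced.setdefault(key, []).append(value)
        elif tag:
            plain.append(tag)
    return plain, namespaced

tags = ("transformers, safetensors, lfm2, text-generation, "
        "base_model:liquidai/lfm2.5-1.2b-instruct, "
        "base_model:finetune:liquidai/lfm2.5-1.2b-instruct, region:us")
plain, ns = parse_hf_tags(tags)
print(ns["base_model"])  # includes both the plain and the finetune: variant
```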

âš™ī¸ Technical Specs

architecture
null
params billions
1.2
context length
4,096
pipeline tag
text-generation
vram gb
2.2
vram is estimated
true
vram formula
VRAM ≈ (params in billions × 0.75 GB) + 0.8 GB (KV cache) + 0.5 GB (OS overhead)
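The estimate is simple arithmetic; a minimal sketch of the same rule (the function name and defaults are my own, taken from the formula on this page):

```python
def estimate_vram_gb(params_billions: float,
                     kv_cache_gb: float = 0.8,
                     os_overhead_gb: float = 0.5) -> float:
    """Static 4-bit VRAM estimate used on this page:
    VRAM ≈ (params in billions × 0.75 GB) + KV cache + OS overhead."""
    return round(params_billions * 0.75 + kv_cache_gb + os_overhead_gb, 2)

# 1.2B parameters → the 2.2 GB figure shown above.
print(estimate_vram_gb(1.2))  # → 2.2
```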

📊 Engagement & Metrics

downloads
0
stars
0
forks
0

Data indexed from public sources. Updated daily.