🧠 Model

Otel Llm 8.3b It

by farbodtavakkoli (hf-model--farbodtavakkoli--otel-llm-8.3b-it)
Nexus Index
49.0 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 79
R: Recency 100
Q: Quality 50
Tech Context
8.3B Params
4,096 Ctx
1.3M Downloads / 30 Days
Audited 49 FNI Score
~8GB Est. VRAM (8GB-class GPU)
Model Information Summary
Entity Passport
Registry ID hf-model--farbodtavakkoli--otel-llm-8.3b-it
Provider huggingface_deepspec
💾 Compute Threshold

~7.5GB VRAM


* Static estimate for 4-bit quantization.

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__farbodtavakkoli__otel_llm_8.3b_it,
  author = {farbodtavakkoli},
  title = {Otel Llm 8.3b It Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/farbodtavakkoli/otel-llm-8.3b-it}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
farbodtavakkoli. (2026). Otel Llm 8.3b It [Model]. Free2AITools. https://huggingface.co/farbodtavakkoli/otel-llm-8.3b-it

🔬 Technical Deep Dive


Quick Commands

🦙 Ollama Run
ollama run otel-llm-8.3b-it
🤗 HF Download
huggingface-cli download farbodtavakkoli/otel-llm-8.3b-it


OTel-LLM-8.3B-IT

OTel-LLM-8.3B-IT is a telecom-specialized language model fine-tuned on telecommunications domain data. It is part of the OTel Family of Models, an open-source initiative to build industry-standard AI models for the global telecommunications sector.

Model Details

Base Model: nvidia/RNJ-1-Instruct
Parameters: 8.3B
Training Method: Full parameter fine-tuning
Language: English
License: Apache 2.0

Training Data

The model was trained on high-quality telecom-focused data curated by 200+ domain experts from organizations including AT&T, RelationalAI, AMD, GSMA, Purdue University, Khalifa University, University of Leeds, Yale University, The University of Texas at Dallas, NetoAI, and MantisNLP.

Data Sources:

  • GSMA Permanent Reference Documents
  • 3GPP Specifications
  • O-RAN Documentation
  • RFC Series
  • eSIM, terminals, security, networks, roaming, APIs
  • Industry whitepapers and telecom academic papers

Intended Use

This model is optimized for:

  • RAG applications in telecommunications
  • Question answering on telecom specifications and standards
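
As a sketch of the RAG use case listed above, the snippet below assembles a prompt from retrieved spec snippets. The toy corpus, the keyword-overlap retriever, and the prompt layout are illustrative assumptions; a production setup would use a vector store and the model's own chat template.

```python
# Minimal RAG-style prompt assembly for telecom QA (illustrative only).
# The corpus entries and scoring are placeholders, not real retrieval.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus snippets by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda s: -len(q_terms & set(s.lower().split())))
    return ranked[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Concatenate retrieved context and the question into one prompt."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "3GPP TS 23.501 defines the 5G system architecture.",
    "GSMA PRD documents cover roaming agreements.",
    "O-RAN splits the RAN into CU, DU, and RU components.",
]
prompt = build_prompt("What defines the 5G system architecture?",
                      retrieve("5G system architecture", corpus))
```

The assembled `prompt` would then be sent to the model via whatever serving stack hosts it (e.g. an Ollama or Transformers endpoint).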

The OTel Family also includes:

  • Language Models
  • Embedding Models
  • Reranker Models

Training Infrastructure

  • Framework: ScalarLM (GPU-agnostic)
  • Compute: TensorWave with AMD GPUs and Azure with NVIDIA GPUs.

Citation

BibTeX
@misc{otel2026,
  title={OTel: Open Telco AI Models},
  author={Tavakkoli, Farbod and Diamos, Gregory and Paulk, Roderic and Terrazas, Jorden},
  year={2026},
  url={https://huggingface.co/farbodtavakkoli}
}

Contact

If you have any technical questions, please feel free to reach out to [email protected] or [email protected].

⚠️ Incomplete Data

Some information about this model is not available. Use with caution: verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠️ License Unknown: verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
1.3M Downloads
📦 Data Source: huggingface_deepspec
🔄 Daily sync (03:00 UTC)

AI Summary: Based on huggingface_deepspec metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

🛡️ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id
hf-model--farbodtavakkoli--otel-llm-8.3b-it
slug
farbodtavakkoli--otel-llm-8.3b-it
source
huggingface_deepspec
author
farbodtavakkoli
license
tags
pytorch, gemma3_text, telecom, telecommunications, gsma, fine-tuned, text-generation, conversational, en, license:apache-2.0, region:us, gemma, 7B, base_model:essentialai/rnj-1-instruct, base_model:finetune:essentialai/rnj-1-instruct

⚙️ Technical Specs

architecture
null
params billions
8.3
context length
4,096
pipeline tag
text-generation
vram gb
7.5
vram is estimated
true
vram formula
VRAM ≈ (params_B × 0.75) GB + 0.8 GB (KV cache) + 0.5 GB (OS overhead)

📊 Engagement & Metrics

downloads
1,274,059
stars
0
forks
0

Data indexed from public sources. Updated daily.