🧠 Model

Parakeet TDT 0.6B V3 ONNX

by istupakov
Nexus Index
44.9 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 61
R: Recency 87
Q: Quality 50
Tech Context
0.6B Params
4.096K Ctx
Vital Performance
197.1K downloads / 30 days (0.0% change)
Audited 44.9 FNI Score
Tiny 0.6B Params
4k Context
Hot 197.1K Downloads
~2GB est. VRAM (fits an 8GB GPU)
CC-BY-4.0 License
Model Information Summary
Entity Passport
Registry ID hf-model--istupakov--parakeet-tdt-0.6b-v3-onnx
License CC-BY-4.0
Provider huggingface
💾 Compute Threshold

~1.8GB VRAM


* Static estimate assuming 4-bit quantization.

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__istupakov__parakeet_tdt_0.6b_v3_onnx,
  author = {istupakov},
  title = {Parakeet TDT 0.6B V3 ONNX Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/istupakov/parakeet-tdt-0.6b-v3-onnx}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
istupakov. (2026). Parakeet TDT 0.6B V3 ONNX [Model]. Free2AITools. https://huggingface.co/istupakov/parakeet-tdt-0.6b-v3-onnx

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🤗 HF Download
huggingface-cli download istupakov/parakeet-tdt-0.6b-v3-onnx

âš–ī¸ Nexus Index V2.0

44.9
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 61
Recency (R) 87
Quality (Q) 50

💬 Index Insight

FNI V2.0 for Parakeet Tdt 0.6b V3 Onnx: Semantic (S:50), Authority (A:0), Popularity (P:61), Recency (R:87), Quality (Q:50).
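The insight line above lists the five component scores that roll up into the single 44.9 index. The actual FNI V2.0 weighting is not published on this page, so the sketch below uses equal weights purely for illustration (the function name and weights are assumptions, which is why the result differs from the reported 44.9):

```python
# Hypothetical reconstruction of a composite index from component scores.
# The real FNI V2.0 weights are not given here; equal weights are assumed.
SCORES = {"S": 50, "A": 0, "P": 61, "R": 87, "Q": 50}
EQUAL_WEIGHTS = {k: 0.2 for k in SCORES}

def composite_index(scores: dict, weights: dict) -> float:
    """Weighted sum of component scores (weights should sum to 1)."""
    return sum(weights[k] * scores[k] for k in scores)

print(round(composite_index(SCORES, EQUAL_WEIGHTS), 1))  # 49.6 under equal weights
```

The gap between 49.6 (equal weights) and the reported 44.9 suggests the real index weights Authority or Quality more heavily.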

Free2AITools Nexus Index

Verification Authority

Data node refresh: VFS Live
---


Technical Deep Dive

NVIDIA Parakeet TDT 0.6B V3 (Multilingual) model converted to ONNX format for onnx-asr.

Install onnx-asr

shell
pip install onnx-asr[cpu,hub]

Load Parakeet TDT model and recognize wav file

py
import onnx_asr
model = onnx_asr.load_model("nemo-parakeet-tdt-0.6b-v3")
print(model.recognize("test.wav"))
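A quick format check before calling `recognize` can save a confusing failure: NeMo-family ASR models typically expect 16 kHz mono 16-bit PCM input. That expectation, and the helper below, are assumptions for illustration, not part of the onnx-asr API:

```python
import wave

def looks_like_asr_input(path: str, expected_rate: int = 16000) -> bool:
    # Sanity-check a wav file: 16 kHz, mono, 16-bit PCM is the usual
    # input format for NeMo-family ASR models (an assumption here).
    with wave.open(path, "rb") as w:
        return (w.getframerate() == expected_rate
                and w.getnchannels() == 1
                and w.getsampwidth() == 2)
```

If the check fails, resample the audio (e.g. with ffmpeg) before recognition.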

Code for models export

py
import nemo.collections.asr as nemo_asr
from pathlib import Path

model = nemo_asr.models.ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v3")

onnx_dir = Path("nemo-onnx")
onnx_dir.mkdir(exist_ok=True)
model.export(str(Path(onnx_dir, "model.onnx")))

with Path(onnx_dir, "vocab.txt").open("wt") as f:
    for i, token in enumerate([*model.tokenizer.vocab, ""]):
        f.write(f"{token} {i}\n")
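The export loop above writes one `token index` pair per line, including a final empty token. Reading the file back is a useful check that the format round-trips; this small parser is a sketch, not part of nemo or onnx-asr:

```python
def load_vocab(path: str) -> dict:
    # Parse the "token index" lines written by the export script.
    # rsplit on the last space so the empty token at the end of the
    # vocab still parses correctly.
    vocab = {}
    with open(path, "rt") as f:
        for line in f:
            token, idx = line.rstrip("\n").rsplit(" ", 1)
            vocab[int(idx)] = token
    return vocab
```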

âš ī¸ Incomplete Data

Some information about this model is unavailable. Verify details against the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary with evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License: CC-BY-4.0 per the entity passport; verify attribution terms before commercial use.

Social Proof

HuggingFace Hub
197.1K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id
hf-model--istupakov--parakeet-tdt-0.6b-v3-onnx
slug
istupakov--parakeet-tdt-0.6b-v3-onnx
source
huggingface
author
istupakov
license
CC-BY-4.0
tags
onnx, nemo-conformer-tdt, automatic-speech-recognition, asr, onnx-asr, en, es, fr, de, bg, hr, cs, da, nl, et, fi, el, hu, it, lv, lt, mt, pl, pt, ro, sk, sl, sv, ru, uk, base_model:nvidia/parakeet-tdt-0.6b-v3, base_model:quantized:nvidia/parakeet-tdt-0.6b-v3, license:cc-by-4.0, region:us

âš™ī¸ Technical Specs

architecture
null
params billions
0.6
context length
4,096
pipeline tag
automatic-speech-recognition
vram gb
1.8
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
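The formula above can be evaluated directly; plugging in 0.6B parameters reproduces the ~1.8 GB figure shown in the passport (the coefficient interpretations in the comment follow the formula's own labels):

```python
def estimate_vram_gb(params_billions: float) -> float:
    # VRAM ≈ (params * 0.75) + 0.8 GB (KV cache) + 0.5 GB (OS/runtime),
    # per the formula in the spec table above.
    return params_billions * 0.75 + 0.8 + 0.5

print(round(estimate_vram_gb(0.6), 2))  # 1.75, shown on the page as ~1.8 GB
```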

📊 Engagement & Metrics

downloads
197,113
stars
0
forks
0

Data indexed from public sources. Updated daily.