🧠 Model

Wayra Perplexity Estimator 55m

by Latam Gpt
Nexus Index
5.0 Top 6%
P / F / C / U Breakdown: Calibration Pending

Pillar scores are computed during the next indexing cycle.

Tech Context
0.06B Params
4,096 Ctx
Vital Performance
  • 209 DL / 30D (0.0%)
  • FNI Score: 5 (Audited)
  • Tiny: 0.06B Params
  • 4K Context
  • 209 Downloads
  • ~2GB Est. VRAM (8GB GPU class)
  • Dense WayraPPL Architecture
Model Information Summary
Entity Passport
Registry ID hf-model--latam-gpt--wayra-perplexity-estimator-55m
Provider huggingface
💾 Compute Threshold

~1.3GB VRAM


* Static estimation for 4-Bit Quantization.

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__latam_gpt__wayra_perplexity_estimator_55m,
  author = {Latam Gpt},
  title = {Wayra Perplexity Estimator 55m Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/latam-gpt/Wayra-Perplexity-Estimator-55M}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Latam Gpt. (2026). Wayra Perplexity Estimator 55m [Model]. Free2AITools. https://huggingface.co/latam-gpt/Wayra-Perplexity-Estimator-55M

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run
ollama run wayra-perplexity-estimator-55m
🤗 HF Download
huggingface-cli download latam-gpt/wayra-perplexity-estimator-55m
📦 Install Lib
pip install -U transformers

⚖️ Nexus Index V16.5

5.0
ESTIMATED IMPACT TIER
Popularity (P) 0
Freshness (F) 0
Completeness (C) 0
Utility (U) 0

💬 Index Insight

The Free2AITools Nexus Index for Wayra Perplexity Estimator 55m aggregates Popularity (P:0), Freshness (F:0), and Completeness (C:0). The Utility score (U:0) represents deployment readiness and ecosystem adoption.
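As a rough illustration, an aggregate like this can be computed as a weighted mean of the pillar scores. The function name and the equal weights below are hypothetical assumptions; the actual Nexus Index V16.5 weighting is not disclosed on this page.

```python
def nexus_index(p: float, f: float, c: float, u: float,
                weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Hypothetical weighted aggregation of the four pillar scores:
    Popularity (P), Freshness (F), Completeness (C), Utility (U).
    Equal weights are an illustrative assumption, not the published method."""
    return sum(w * s for w, s in zip(weights, (p, f, c, u)))

# With all pillars at 0 (calibration pending), the aggregate is 0.0.
print(nexus_index(0, 0, 0, 0))
```

With every pillar still at zero pending calibration, any weighted mean returns zero, which is why the per-pillar breakdown above is uninformative until the next indexing cycle.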

Free2AITools Nexus Index

Verification Authority

Unbiased Data Node · Refresh: VFS Live
---

🚀 What's Next?

Technical Deep Dive

⚠️ Incomplete Data

Some information about this model is not available. Use with caution: verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • Source: Unknown
Top Tier

Social Proof

HuggingFace Hub
209 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

🛡️ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id: hf-model--latam-gpt--wayra-perplexity-estimator-55m
source: huggingface
author: Latam Gpt
tags: transformers, safetensors, wayrappl, perplexity-estimation, tensorrt, data-quality-assessment, dataset-contamination-detection, a100-optimized, curriculum-learning, mlops, text-classification, es, pt, en, license:apache-2.0, endpoints_compatible, region:us

⚙️ Technical Specs

architecture: WayraPPL
params (billions): 0.06
context length: 4,096
pipeline tag: text-classification
vram (gb): 1.3
vram is estimated: true
vram formula: VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
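The static formula above can be sketched as a small helper. The function name and keyword parameters are illustrative; the 0.75 GB-per-billion coefficient and the 0.8GB KV-cache and 0.5GB OS overheads are taken directly from the manifest formula, with `params_billions` as the parameter count in billions.

```python
def estimate_vram_gb(params_billions: float,
                     kv_cache_gb: float = 0.8,
                     os_overhead_gb: float = 0.5) -> float:
    """Static VRAM estimate per the manifest formula:
    VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS).
    Helper name and defaults are illustrative, not an official API."""
    return params_billions * 0.75 + kv_cache_gb + os_overhead_gb

# For this 0.06B-parameter model: 0.06 * 0.75 + 0.8 + 0.5 = 1.345 GB,
# which matches the reported ~1.3GB estimate above.
print(round(estimate_vram_gb(0.06), 1))
```

Note that this is a static, quantization-specific estimate (the page footnotes 4-bit quantization); actual usage varies with batch size and runtime, as the Limitations section states.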

📊 Engagement & Metrics

likes: 0
downloads: 209

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)