🧠 Model

1.7b Mixturevitae Web Curated 100bt Longsft 16k

by ontocord
Nexus Index: 39.2 (Top 100%)
S: Semantic 50
A: Authority 0
P: Popularity 4
R: Recency 97
Q: Quality 65

Tech Context: 1.7B params · 16,384 ctx

Vital Performance: 35 DL / 30D · 0.0%

Audited: 39.2 FNI Score
Tiny: 1.7B Params
16K Context
35 Downloads
8GB GPU: ~4GB Est. VRAM
Restricted: OTHER License
Model Information Summary

Entity Passport
Registry ID: hf-model--ontocord--1.7b-mixturevitae-web_curated-100bt-longsft_16k
License: Other
Provider: huggingface
💾 Compute Threshold

~3.8GB VRAM


* Static estimation for 4-Bit Quantization.
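
The ~3.8GB figure is the page's static estimate for a 4-bit quantized load. Below is a minimal, hedged sketch of loading the checkpoint in 4-bit with transformers and bitsandbytes; the repo id is taken from the download command further down, and 4-bit support for this architecture is an assumption, not something the card confirms.

# Hedged sketch: 4-bit quantized load (assumes bitsandbytes supports this architecture).
# trust_remote_code=True reflects the repo's custom_code tag.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig

repo_id = "ontocord/1.7b-mixturevitae-web_curated-100bt-longsft_16k"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo_id,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)

# Report the measured footprint of the quantized weights.
print(f"{model.get_memory_footprint() / 1e9:.2f} GB (weights only)")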

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__ontocord__1.7b_mixturevitae_web_curated_100bt_longsft_16k,
  author = {ontocord},
  title = {1.7b Mixturevitae Web Curated 100bt Longsft 16k Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/ontocord/1.7b-mixturevitae-web_curated-100bt-longsft_16k}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
ontocord. (2026). 1.7b Mixturevitae Web Curated 100bt Longsft 16k [Model]. Free2AITools. https://huggingface.co/ontocord/1.7b-mixturevitae-web_curated-100bt-longsft_16k

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run
ollama run 1.7b-mixturevitae-web_curated-100bt-longsft_16k
🤗 HF Download
huggingface-cli download ontocord/1.7b-mixturevitae-web_curated-100bt-longsft_16k
📦 Install Lib
pip install -U transformers
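
This entry's pipeline tag is feature-extraction, so a minimal usage sketch follows, assuming the checkpoint loads through the standard transformers Auto classes (trust_remote_code=True is passed because the repo carries the custom_code tag). The example sentence and mean pooling are illustrative choices, not documented usage from the card.

# Minimal sketch: pull sentence-level features from the model's hidden states.
# Assumes the standard transformers Auto classes work for this repo.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "ontocord/1.7b-mixturevitae-web_curated-100bt-longsft_16k"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("A long-context test sentence.", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state into one feature vector per input.
features = outputs.last_hidden_state.mean(dim=1)
print(features.shape)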

âš–ī¸ Nexus Index V2.0

39.2
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 4
Recency (R) 97
Quality (Q) 65

💬 Index Insight

FNI V2.0 for 1.7b Mixturevitae Web Curated 100bt Longsft 16k: Semantic (S:50), Authority (A:0), Popularity (P:4), Recency (R:97), Quality (Q:65).

---

Technical Deep Dive

long-context-mv100bt-web_curated

This model is a fine-tuned version of ontocord/1.7b-MixtureVitae-web_curated-100BT on the long_sft dataset.

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the illustrative sketch after this list):

  • learning_rate: 0.0002
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • total_eval_batch_size: 64
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 1.0
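
The listed values map directly onto a standard transformers TrainingArguments configuration; the sketch below is an illustrative reconstruction under that assumption (the llama-factory tag suggests the actual run used LLaMA-Factory), showing in particular how the per-device batch sizes, 8 GPUs, and gradient accumulation combine into the effective totals of 32 and 64. The output_dir is borrowed from the card's title as a placeholder.

# Illustrative reconstruction of the listed hyperparameters (not the actual
# training script; the llama-factory tag suggests LLaMA-Factory was used).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="long-context-mv100bt-web_curated",
    learning_rate=2e-4,
    per_device_train_batch_size=2,   # 2 x 8 GPUs x 2 accumulation steps = 32 total
    per_device_eval_batch_size=8,    # 8 x 8 GPUs = 64 total
    gradient_accumulation_steps=2,
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)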

Training results

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.7.0a0+7c8ec84dab.nv25.03
  • Datasets 3.6.0
  • Tokenizers 0.21.4

âš ī¸ Incomplete Data

Some information about this model is not available. Use with caution and verify details from the original source before relying on this data.


📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License Unknown: Verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
35 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.


🆔 Identity & Source

id: hf-model--ontocord--1.7b-mixturevitae-web_curated-100bt-longsft_16k
slug: ontocord--1.7b-mixturevitae-web_curated-100bt-longsft_16k
source: huggingface
author: ontocord
license: Other
tags: transformers, safetensors, opensci, feature-extraction, llama-factory, full, generated_from_trainer, custom_code, license:other, region:us

âš™ī¸ Technical Specs

architecture: null
params (billions): 1.7
context length: 16,384
pipeline tag: feature-extraction
vram (gb): 3.8
vram is estimated: true
vram formula: VRAM ≈ (params * 0.75) + 2GB (KV) + 0.5GB (OS)
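
Applying that heuristic to the listed 1.7B parameter count reproduces the 3.8GB figure above (treating params as billions and the result as GB):

# Worked check of the listed heuristic: params*0.75 + 2GB (KV cache) + 0.5GB (OS).
params_b = 1.7
vram_gb = params_b * 0.75 + 2.0 + 0.5
print(round(vram_gb, 1))  # -> 3.8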

📊 Engagement & Metrics

downloads: 35
stars: 0
forks: 0

Data indexed from public sources. Updated daily.