🧠 Model

Home Cook Mistral Small Omni 24b 2507 Gguf

by ngxson
Nexus Index
37.4 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 41
R: Recency 58
Q: Quality 50
Tech Context
24B Params
4,096 Ctx
Vital Performance
6.6K DL / 30D
0.0%
Audited 37.4 FNI Score
24GB-class GPU, ~20GB Est. VRAM
Apache-2.0 License (commercial use permitted)
Model Information Summary
Entity Passport
Registry ID hf-model--ngxson--home-cook-mistral-small-omni-24b-2507-gguf
License Apache-2.0
Provider huggingface
💾 Compute Threshold

~19.3GB VRAM


* Static estimate assuming 4-bit quantization.

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__ngxson__home_cook_mistral_small_omni_24b_2507_gguf,
  author = {ngxson},
  title = {Home Cook Mistral Small Omni 24b 2507 Gguf Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/ngxson/home-cook-mistral-small-omni-24b-2507-gguf}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
ngxson. (2026). Home Cook Mistral Small Omni 24b 2507 Gguf [Model]. Free2AITools. https://huggingface.co/ngxson/home-cook-mistral-small-omni-24b-2507-gguf

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run
ollama run hf.co/ngxson/home-cook-mistral-small-omni-24b-2507-gguf
🤗 HF Download
huggingface-cli download ngxson/home-cook-mistral-small-omni-24b-2507-gguf

âš–ī¸ Nexus Index V2.0

37.4
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 41
Recency (R) 58
Quality (Q) 50

đŸ’Ŧ Index Insight

FNI V2.0 for Home Cook Mistral Small Omni 24b 2507 Gguf: Semantic (S:50), Authority (A:0), Popularity (P:41), Recency (R:58), Quality (Q:50).

Free2AITools Nexus Index

Verification Authority

Unbiased Data Node Refresh: VFS Live
---


Home-cooked Mistral Small Omni

This is a multimodal model created by merging Mistral Small 2506 (with vision capabilities) and Voxtral 2507 (with audio capabilities) using a modified version of the mergekit tool.

For detailed merging instructions, refer to the sections below.

License and Attribution

This model is a merged derivative work combining Mistral Small 2506 and Voxtral 2507, both originally released by Mistral AI under the Apache 2.0 license. The merged model is also distributed under the Apache 2.0 license, and the full license text, along with original copyright notices, is included in this repository. I have no affiliation, sponsorship, or formal relationship with Mistral AI. This project is an independent effort to combine the vision and audio capabilities of the two models.

Steps to reproduce

Merge text model

Install mergekit from this revision: https://github.com/arcee-ai/mergekit/tree/0027c5c51471fa891d438eccda5455ebe55b536e

Modify the mergekit source code: open the file mergekit/merge_methods/generalized_task_arithmetic.py and add the three marked lines to the slerp routine:

py
    # Normalize the vectors to get the directions and angles
    v0 = normalize(v0, eps)
    v1 = normalize(v1, eps)

    if v0.shape != v1.shape:                # ADD THIS
        res = np.array([0.0])               # ADD THIS
        return maybe_torch(res, is_torch)   # ADD THIS

    # Dot product with the normalized vectors (can't use np.dot in W)
    dot = np.sum(v0 * v1)

    # If absolute value of dot product is almost 1, vectors are ~colinear, so use lerp
    if np.abs(dot) > DOT_THRESHOLD:
        res = lerp(t, v0_copy, v1_copy)
        return maybe_torch(res, is_torch)
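
The added guard matters because the two checkpoints do not share every tensor shape: Voxtral carries audio-adapter weights while Mistral Small carries vision weights, so slerp would otherwise fail on mismatched pairs. A minimal sketch of the guard's behavior (a hypothetical stand-alone helper, not mergekit's actual code; plain averaging stands in for the real slerp):

```python
import numpy as np

def guarded_merge(v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    # Mirrors the patch above: tensors whose shapes differ between the
    # two checkpoints cannot be interpolated, so return a zero
    # placeholder instead of raising a broadcasting error.
    if v0.shape != v1.shape:
        return np.array([0.0])
    return 0.5 * (v0 + v1)  # stand-in for the real slerp interpolation
```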

Prepare a YAML merge config:

yaml
name: mistral-omni
merge_method: slerp
models:
  - model: ../models/Voxtral-Small-24B-2507
  - model: ../models/Mistral-Small-3.2-24B-Instruct-2506
base_model: ../models/Mistral-Small-3.2-24B-Instruct-2506
parameters:
  t:
    - filter: self_attn
      value: [0.1, 0.3, 0.5, 0.3, 0.1, 0]
    - filter: mlp
      value: [0.1, 0.3, 0.5, 0.3, 0.1, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
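
For reference, slerp (spherical linear interpolation) blends each pair of same-shaped tensors along the arc between their directions, with the per-layer t ramps above (0.1 → 0.5 → 0) presumably keeping the outer layers close to the base Mistral Small model while blending more aggressively toward Voxtral in the middle. A minimal NumPy sketch of the standard slerp formulation (an illustration, not mergekit's exact implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray,
          dot_threshold: float = 0.9995, eps: float = 1e-8) -> np.ndarray:
    # Normalize to get the direction of each tensor.
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.sum(u0 * u1)
    # Nearly colinear tensors: fall back to plain linear interpolation.
    if np.abs(dot) > dot_threshold:
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)  # angle between the two directions
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

At t = 0 the base tensor is returned unchanged; at t = 1 the other model's tensor wins.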

Merge it:

sh
mergekit-yaml mistral_o.yaml ../models/mistral_o

Go to the mistral_o output directory, then download tekken.json from Voxtral and place it there: https://huggingface.co/mistralai/Voxtral-Small-24B-2507/blob/main/tekken.json

Finally, use llama.cpp's convert_hf_to_gguf.py to convert the merged model back to GGUF as usual.

Merge mmproj models

Download these mmproj files:

Rename them to audio.gguf and vision.gguf respectively.

Then run merge_mmproj_models.py from this repo. The output file will be mmproj-model.gguf

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • License: Apache-2.0 per repository metadata; verify terms before commercial use.

Social Proof

HuggingFace Hub
6.6K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id
hf-model--ngxson--home-cook-mistral-small-omni-24b-2507-gguf
slug
ngxson--home-cook-mistral-small-omni-24b-2507-gguf
source
huggingface
author
ngxson
license
Apache-2.0
tags
gguf, any-to-any, license:apache-2.0, endpoints_compatible, region:us, conversational

âš™ī¸ Technical Specs

architecture
null
params billions
24
context length
4,096
pipeline tag
any-to-any
vram gb
19.3
vram is estimated
true
vram formula
VRAM ≈ (params * 0.75) + 0.8GB (KV) + 0.5GB (OS)
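
Plugging the model's parameter count into that formula reproduces the ~19.3 GB figure shown above (a rough heuristic for 4-bit quantization, per the site's own note):

```python
def estimate_vram_gb(params_b: float, kv_gb: float = 0.8, os_gb: float = 0.5) -> float:
    # Heuristic from the metadata above: ~0.75 GB per billion parameters
    # at 4-bit quantization, plus KV-cache and OS overhead.
    return params_b * 0.75 + kv_gb + os_gb

estimate_vram_gb(24)  # 24 * 0.75 + 0.8 + 0.5 ≈ 19.3 GB
```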

📊 Engagement & Metrics

downloads
6,614
stars
0
forks
0

Data indexed from public sources. Updated daily.