Model

Wanvideo Comfy

by Kijai · ID: hf-model--kijai--wanvideo_comfy
FNI Rank: 41
Percentile: Top 1%
Activity: → 0.0%

Combined and quantized models for WanVideo, originating from here: https://huggingface.co/Wan-AI/ Can be used with: https://github.com/kijai/ComfyUI-WanVideoWrapper and ComfyUI native WanVideo nodes. I've also started to do fp8_scaled versions over here: https://huggingface.co/Kijai/WanVideo_comfy_f...

FNI Score: 41 (Audited)
Params: –
Context: –
Downloads: 6.0M (Hot)
Model Information Summary
Entity Passport
Registry ID: hf-model--kijai--wanvideo_comfy
Provider: huggingface

πŸ•ΈοΈ Neural Mesh Hub

Interconnecting Research, Data & Ecosystem

πŸ•ΈοΈ

Intelligence Hive

Multi-source Relation Matrix

Live Index
πŸ“œ

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__kijai__wanvideo_comfy,
  author = {Kijai},
  title = {Wanvideo Comfy Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/Kijai/WanVideo_comfy}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Kijai. (2026). Wanvideo Comfy [Model]. Free2AITools. https://huggingface.co/Kijai/WanVideo_comfy

🔬 Technical Deep Dive

⚡ Quick Commands

🤗 HF Download
huggingface-cli download Kijai/WanVideo_comfy
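For illustration, a small helper that builds the download command for a chosen precision. The filename pattern and model name below are assumptions for the sketch, not the repo's actual file names; check the file listing on the Hub (note the repo id `Kijai/WanVideo_comfy` is case-sensitive):

```python
# Sketch: build a huggingface-cli download command for a single file.
# NOTE: the "<model>_<precision>.safetensors" pattern and the model name
# used below are illustrative assumptions, not the repo's real naming.

def pick_variant(model: str, precision: str) -> str:
    """Return an assumed filename for a model/precision pair."""
    supported = {"fp8", "fp16", "bf16"}
    if precision not in supported:
        raise ValueError(f"unsupported precision: {precision}")
    return f"{model}_{precision}.safetensors"

def download_command(model: str, precision: str) -> str:
    """Compose the CLI invocation for one file from Kijai/WanVideo_comfy."""
    return ("huggingface-cli download Kijai/WanVideo_comfy "
            + pick_variant(model, precision))

print(download_command("Wan2_1-T2V-14B", "fp8"))
```

Restricting the download to one file avoids pulling the whole multi-model repo at once.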

βš–οΈ Free2AI Nexus Index

Methodology β†’ πŸ“˜ What is FNI?
41.0
Top 1% Overall Impact
πŸ”₯ Popularity (P) 0
πŸš€ Velocity (V) 0
πŸ›‘οΈ Credibility (C) 0
πŸ”§ Utility (U) 0
Nexus Verified Data

💬 Why this score?

This model has a Popularity (P) score of 0 (from downloads and likes), Velocity (V) of 0 (growth velocity), Credibility (C) of 0 (citations), and Utility (U) of 0 (utility and deployment support).

Data Verified · Last Updated: Not calculated
Free2AI Nexus Index | Fair · Transparent · Explainable | Full Methodology
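As a rough sketch of how four sub-scores could roll up into one index (the equal weights below are an assumption for illustration only; the real FNI weighting is on the methodology page, and the headline 41.0 evidently draws on inputs beyond these four zeroed fields):

```python
# Sketch: combine P/V/C/U sub-scores into a composite index.
# ASSUMPTIONS: equal weights and 0-100 sub-score ranges -- this is
# not the published FNI formula, just a weighted-sum illustration.

def composite(p: float, v: float, c: float, u: float,
              weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted sum of the four sub-scores."""
    scores = (p, v, c, u)
    if not all(0.0 <= s <= 100.0 for s in scores):
        raise ValueError("sub-scores must be in [0, 100]")
    return sum(w * s for w, s in zip(weights, scores))

print(composite(0, 0, 0, 0))  # all-zero sub-scores give a 0 composite
```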
---


README


tags:
  - diffusion-single-file
  - comfyui
base_model:
  - Wan-AI/Wan2.1-VACE-14B
  - Wan-AI/Wan2.1-VACE-1.3B

Combined and quantized models for WanVideo, originating from here:

https://huggingface.co/Wan-AI/

Can be used with: https://github.com/kijai/ComfyUI-WanVideoWrapper and ComfyUI native WanVideo nodes.

I've also started to do fp8_scaled versions over here: https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled

Other model sources:

TinyVAE from https://github.com/madebyollin/taehv

SkyReels: https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9

WanVideoFun: https://huggingface.co/collections/alibaba-pai/wan21-fun-v11-680f514c89fe7b4df9d44f17


Lightx2v:

CausVid 14B: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid

CFG and Step distill 14B: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill


CausVid 1.3B: https://huggingface.co/tianweiy/CausVid

AccVideo: https://huggingface.co/aejion/AccVideo-WanX-T2V-14B

Phantom: https://huggingface.co/bytedance-research/Phantom

ATI: https://huggingface.co/bytedance-research/ATI

MiniMaxRemover: https://huggingface.co/zibojia/minimax-remover

MAGREF: https://huggingface.co/MAGREF-Video/MAGREF

FantasyTalking: https://github.com/Fantasy-AMAP/fantasy-talking

MultiTalk: https://github.com/MeiGen-AI/MultiTalk

Anisora: https://huggingface.co/IndexTeam/Index-anisora/tree/main/14B

Pusa: https://huggingface.co/RaphaelLiu/PusaV1/tree/main

FastVideo: https://huggingface.co/FastVideo

EchoShot: https://github.com/D2I-ai/EchoShot

Wan22 5B Turbo: https://huggingface.co/quanhaol/Wan2.2-TI2V-5B-Turbo

Ovi: https://github.com/character-ai/Ovi

FlashVSR: https://huggingface.co/JunhaoZhuang/FlashVSR

rCM: https://huggingface.co/worstcoder/rcm-Wan/tree/main


CausVid LoRAs are experimental extractions from the CausVid finetunes; the aim is to benefit from CausVid's distillation rather than from any actual causal inference.

v1 = direct extraction; adversely affects motion and introduces a flashing artifact at full strength.

v1.5 = same as above, but without the first block, which fixes the flashing at full strength.

v2 = further pruned version with only attention layers and no first block; fixes the flashing and retains motion better, but needs more steps and can also benefit from CFG.
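The v1.5/v2 pruning described above can be sketched as key filtering over a LoRA state dict. The key names used here (`blocks.0.`, `.attn`) are illustrative assumptions, not the actual WanVideo weight layout:

```python
# Sketch: prune a LoRA state dict the way v1.5/v2 are described --
# drop the first transformer block (fixes the flashing artifact) and,
# for v2, keep only attention-layer weights.
# ASSUMPTION: "blocks.0." / ".attn" key naming is made up for this demo.

def prune_lora(state: dict, keep_attn_only: bool = False) -> dict:
    out = {}
    for key, tensor in state.items():
        if key.startswith("blocks.0."):            # v1.5: drop first block
            continue
        if keep_attn_only and ".attn" not in key:  # v2: attention only
            continue
        out[key] = tensor
    return out

toy = {
    "blocks.0.attn.lora_A": 1,
    "blocks.1.attn.lora_A": 2,
    "blocks.1.ffn.lora_A": 3,
}
print(sorted(prune_lora(toy)))                       # v1.5-style result
print(sorted(prune_lora(toy, keep_attn_only=True)))  # v2-style result
```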


πŸ“ Limitations & Considerations

  • β€’ Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • β€’ VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • β€’ FNI scores are relative rankings and may change as new models are added.
  • ⚠ License Unknown: Verify licensing terms before commercial use.
  • β€’ Source: Unknown
Top Tier

Social Proof

HuggingFace Hub
1.8K Likes
6.0M Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

πŸ›‘οΈ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id: hf-model--kijai--wanvideo_comfy
source: huggingface
author: Kijai
tags: diffusion-single-file, comfyui, base_model:wan-ai/wan2.1-vace-1.3b, base_model:finetune:wan-ai/wan2.1-vace-1.3b, region:us

βš™οΈ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag
other

📊 Engagement & Metrics

likes: 1,828
downloads: 6,029,100

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)