
LaVie

by Vchitect (hf-space--vchitect--lavie)

Nexus Index: 29.8 (Top 100%)
Semantic (S): 50
Authority (A): 0
Popularity (P): 48
Recency (R): 0
Quality (Q): 50
Tech Context / Vital Performance
Downloads (30 days): 0 (0.0%)
SDK: gradio
Hardware: CPU
Status: Running
Activity: -
Space Information Summary

Entity Passport
Registry ID: hf-space--vchitect--lavie
Provider: huggingface
Cite this space

Academic & Research Attribution

BibTeX
@misc{hf_space__vchitect__lavie,
  author = {Vchitect},
  title = {LaVie Space},
  year = {2026},
  howpublished = {\url{https://huggingface.co/spaces/vchitect/lavie}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Vchitect. (2026). LaVie [Space]. Free2AITools. https://huggingface.co/spaces/vchitect/lavie


LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models

This repository is the official PyTorch implementation of LaVie.

LaVie is a Text-to-Video (T2V) generation framework and the main component of the video generation system Vchitect.

arXiv | Project Page

Installation

```shell
conda env create -f environment.yml
conda activate lavie
```

Download Pre-Trained Models

Download the pre-trained models, Stable Diffusion 1.4, and stable-diffusion-x4-upscaler to ./pretrained_models. You should then see the following layout:

```text
├── pretrained_models
│   ├── lavie_base.pt
│   ├── lavie_interpolation.pt
│   ├── lavie_vsr.pt
│   ├── stable-diffusion-v1-4
│   │   ├── ...
│   └── stable-diffusion-x4-upscaler
│       ├── ...
```
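Before running inference, the layout above can be sanity-checked with a small helper (hypothetical; `missing_checkpoints` is not part of the repo, and the entry names are taken from the tree shown above):

```python
from pathlib import Path

# Expected entries under ./pretrained_models, per the layout shown above.
EXPECTED = [
    "lavie_base.pt",
    "lavie_interpolation.pt",
    "lavie_vsr.pt",
    "stable-diffusion-v1-4",
    "stable-diffusion-x4-upscaler",
]

def missing_checkpoints(root="pretrained_models"):
    """Return the expected files/folders that are not present under `root`."""
    root_path = Path(root)
    return [name for name in EXPECTED if not (root_path / name).exists()]
```

Calling `missing_checkpoints()` before launching a pipeline and aborting on a non-empty result avoids mid-run failures from absent weights.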

Inference

Inference consists of three steps: Base T2V, Video Interpolation, and Video Super-Resolution. We provide several options to generate videos:

  • Step1: 320 x 512 resolution, 16 frames
  • Step1+Step2: 320 x 512 resolution, 61 frames
  • Step1+Step3: 1280 x 2048 resolution, 16 frames
  • Step1+Step2+Step3: 1280 x 2048 resolution, 61 frames

Feel free to try different options:)
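The options above combine the same three stages; a minimal shape calculator, with values taken directly from the list (the helper name is hypothetical):

```python
def output_shape(interpolation=False, super_resolution=False):
    """Output (height, width, frames) for a Step1 run plus optional stages."""
    h, w = 320, 512       # Step1 base resolution
    frames = 16           # Step1 frame count
    if interpolation:     # Step2 interpolates 16 frames up to 61
        frames = 61
    if super_resolution:  # Step3 upscales 4x: 320x512 -> 1280x2048
        h, w = h * 4, w * 4
    return h, w, frames

# output_shape()           -> (320, 512, 16)
# output_shape(True, True) -> (1280, 2048, 61)
```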

Step1. Base T2V

Run the following command to generate videos with the base T2V model.

```shell
cd base
python pipelines/sample.py --config configs/sample.yaml
```

Edit text_prompt in configs/sample.yaml to change the prompt; results will be saved under ./res/base.

Step2 (optional). Video Interpolation

Run the following command to perform video interpolation.

```shell
cd interpolation
python sample.py --config configs/sample.yaml
```

The default input video path is ./res/base, and results will be saved under ./res/interpolation. In configs/sample.yaml, you can replace the default input_folder with YOUR_INPUT_FOLDER. Input videos should be named prompt1.mp4, prompt2.mp4, ... and placed under YOUR_INPUT_FOLDER. Launching the code will process all input videos in input_folder.
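Since this stage expects inputs named prompt1.mp4, prompt2.mp4, ..., a small staging helper can enforce that convention before launching (hypothetical helper, not part of the repo):

```python
from pathlib import Path
import shutil

def stage_inputs(src_folder, dst_folder):
    """Copy every .mp4 in src_folder into dst_folder as prompt1.mp4, prompt2.mp4, ..."""
    dst = Path(dst_folder)
    dst.mkdir(parents=True, exist_ok=True)
    staged = []
    # Sort for a deterministic prompt1, prompt2, ... ordering.
    for i, video in enumerate(sorted(Path(src_folder).glob("*.mp4")), start=1):
        target = dst / f"prompt{i}.mp4"
        shutil.copyfile(video, target)
        staged.append(target.name)
    return staged
```

Point YOUR_INPUT_FOLDER at `dst_folder` afterwards; the same convention applies to the super-resolution stage below.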

Step3 (optional). Video Super-Resolution

Run the following command to perform video super-resolution.

```shell
cd vsr
python sample.py --config configs/sample.yaml
```

The default input video path is ./res/base, and results will be saved under ./res/vsr. You can replace the default input_path with YOUR_INPUT_FOLDER in configs/sample.yaml. Similar to Step2, input videos should be named prompt1.mp4, prompt2.mp4, ... and placed under YOUR_INPUT_FOLDER. Launching the code will process all input videos in the input folder.

BibTeX

```bibtex
@article{wang2023lavie,
  title={LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models},
  author={Wang, Yaohui and Chen, Xinyuan and Ma, Xin and Zhou, Shangchen and Huang, Ziqi and Wang, Yi and Yang, Ceyuan and He, Yinan and Yu, Jiashuo and Yang, Peiqing and others},
  journal={arXiv preprint arXiv:2309.15103},
  year={2023}
}
```

Acknowledgements

The code is built upon diffusers and Stable Diffusion; we thank all the contributors for open-sourcing their work.

License

The code is licensed under Apache-2.0. The model weights are fully open for academic research, and free commercial use is also allowed. To apply for a commercial license, please fill in the application form.

🛡️ Space Transparency Report

Technical metadata sourced from upstream repositories.


🆔 Identity & Source

id: hf-space--vchitect--lavie
slug: vchitect--lavie
source: huggingface
author: Vchitect
license:
tags: gradio, region:us

⚙️ Technical Specs

architecture: null
params (billions): null
context length: null
pipeline tag: gradio

📊 Engagement & Metrics

downloads: 0
stars: 190
forks: 0

Data indexed from public sources. Updated daily.