🎮 StreamingT2V

by PAIR · gradio Space

Space configuration:

```yaml
---
title: StreamingT2V
emoji: 🔥
colorFrom: purple
colorTo: blue
sdk: gradio
sdk_version: 4.25.0
app_file: app.py
pinned: false
short_description: Consistent, Dynamic, and Extendable Long Video Generation
---
```

🛠️ Technical Profile

- SDK: gradio
- Hardware: V100
- Status: Running
- Source: huggingface
- License: Open Access

🎮 Demo Preview

Interact with caution. Content generated by third-party code.

💻 Usage

```shell
pip install gradio
git clone https://huggingface.co/spaces/PAIR/streamingt2v
```

Space Overview

StreamingT2V

This repository is the official implementation of StreamingT2V.

StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
Roberto Henschel, Levon Khachatryan, Daniil Hayrapetyan, Hayk Poghosyan, Vahram Tadevosyan, Zhangyang Wang, Shant Navasardyan, Humphrey Shi

arXiv preprint | Video | Project page



StreamingT2V is an advanced autoregressive technique that enables the creation of long videos featuring rich motion dynamics without any stagnation. It ensures temporal consistency throughout the video, aligns closely with the descriptive text, and maintains high frame-level image quality. Our demonstrations include successful examples of videos up to 1200 frames, spanning 2 minutes, and can be extended for even longer durations. Importantly, the effectiveness of StreamingT2V is not limited by the specific Text2Video model used, indicating that improvements in base models could yield even higher-quality videos.
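The autoregressive extension idea described above can be sketched in a few lines. This is an illustrative toy, not the actual StreamingT2V implementation: `CHUNK_LEN`, `OVERLAP`, and `generate_chunk` are hypothetical stand-ins for the model's chunk size, conditioning window, and text-to-video call.

```python
# Toy sketch of autoregressive long-video generation: each new chunk is
# conditioned on the tail of the video so far, so the clip can be extended
# indefinitely (e.g. to 1200 frames) while staying temporally connected.

CHUNK_LEN = 16   # frames produced per model call (hypothetical value)
OVERLAP = 8      # conditioning frames carried into the next call (hypothetical)

def generate_chunk(prompt, cond_frames):
    """Stand-in for a text-to-video model; emits consecutive frame indices."""
    start = cond_frames[-1] + 1 if cond_frames else 0
    return list(range(start, start + CHUNK_LEN))

def generate_long_video(prompt, total_frames):
    video = generate_chunk(prompt, [])          # first chunk, no conditioning
    while len(video) < total_frames:
        tail = video[-OVERLAP:]                 # temporal-consistency anchor
        video.extend(generate_chunk(prompt, tail))
    return video[:total_frames]

frames = generate_long_video("a cat surfing a wave", 1200)
print(len(frames))  # 1200
```

The key design point mirrored here is that each call sees only the previous chunk's tail rather than the whole video, which is what makes arbitrary-length extension tractable.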

News

* [03/21/2024] Paper StreamingT2V released!
* [04/03/2024] Code and model released!

Setup

1. Clone this repository and enter it:

```shell
git clone https://github.com/Picsart-AI-Research/StreamingT2V.git
cd StreamingT2V/
```

2. Install requirements using Python 3.10 and CUDA >= 11.6:

```shell
conda create -n st2v python=3.10
conda activate st2v
pip install -r requirements.txt
```

3. (Optional) Install FFmpeg if it's missing on your system:

```shell
conda install conda-forge::ffmpeg
```

4. Download the weights from HF and put them into the `t2v_enhanced/checkpoints` directory.
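Before installing, it can help to confirm the interpreter meets the stated Python 3.10 requirement. A minimal check; `python_ok` is a hypothetical helper name, not part of the repository:

```python
import sys

def python_ok(version_info, required=(3, 10)):
    """Return True if the interpreter meets the Python 3.10 floor above."""
    return tuple(version_info[:2]) >= required

print(python_ok(sys.version_info))
```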