"--- license: apache-2.0 title: CharacterGen sdk: gradio sdk_version: 5.14.0 emoji: 🏃 colorFrom: gray colorTo: red pinned: false short_description: Gradio demo of CharacterGen (SIGGRAPH 2024) --- This is the official codebase of SIGGRAPH'24 (TOG) CharacterGen. !teaser - [x] Rendering Script of VRM m..."
💻 Usage

pip install gradio
git clone https://huggingface.co/spaces/VAST-AI/charactergen

Space Overview
CharacterGen: Efficient 3D Character Generation from Single Images with Multi-View Pose Canonicalization
This is the official codebase of SIGGRAPH'24 (TOG) CharacterGen.
- [x] Rendering scripts for VRM models, including Blender and three-js.
- [x] Inference code for 2D generation stage.
- [x] Inference code for 3D generation stage.
Quick Start
1. Prepare environment
pip install -r requirements.txt
2. Download the weights
Install huggingface-cli first.
huggingface-cli download --resume-download zjpshadow/CharacterGen --include 2D_Stage/* --local-dir .
huggingface-cli download --resume-download zjpshadow/CharacterGen --include 3D_Stage/* --local-dir .
If the download fails, you can download the whole repository and move the files into the correct folders.
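The same weights can also be fetched from Python via `huggingface_hub.snapshot_download`, the library call behind the CLI above. A minimal sketch; the `fetch_stage` helper is our own illustration, not part of the repo:

```python
# Sketch: download one stage of the CharacterGen weights from Python.
# fetch_stage is an illustrative helper, not part of the repo; it mirrors
# the --include filter of the huggingface-cli commands above.
def fetch_stage(stage: str, local_dir: str = ".") -> str:
    if stage not in ("2D_Stage", "3D_Stage"):
        raise ValueError(f"unknown stage: {stage!r}")
    # Imported lazily so the helper can be defined without huggingface_hub.
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id="zjpshadow/CharacterGen",
        allow_patterns=[f"{stage}/*"],  # equivalent to --include 2D_Stage/*
        local_dir=local_dir,
    )
```

For example, `fetch_stage("2D_Stage")` pulls only the 2D-stage weights into the current directory.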
3. Run the script
#### Run the whole pipeline
python webui.py
#### Only Run 2D Stage
cd 2D_Stage
python webui.py
#### Only Run 3D Stage
cd 3D_Stage
python webui.py
Get the Anime3D Dataset
Due to licensing policy, we cannot redistribute the raw VRM-format 3D character data. You can download the Vroid dataset by following the PAniC-3D instructions, and then render it with Blender or three-js using our released rendering scripts.
Blender
First, install Blender and the VRM add-on for Blender.
Then you can render the VRM model and export its OBJ under an FBX animation:
blender -b --python render_script/blender/render.py importVrmPath importFbxPath outputFolder [is_apose]
The optional last argument controls the pose: if is_apose is given, the output uses the A-pose; otherwise, the output uses the pose from a frame of the FBX animation.
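For batch rendering, the Blender invocation above can be wrapped in a small script. A sketch under our own assumptions; the `build_render_cmds` and `render_all` helpers are illustrative and simply mirror the argument order of the command above:

```python
# Sketch: build and run the Blender render command for every VRM in a folder.
# build_render_cmds/render_all are illustrative helpers; they mirror the
# argument order above (importVrmPath importFbxPath outputFolder [is_apose]).
import subprocess
from pathlib import Path

def build_render_cmds(vrm_dir, fbx_path, out_root, apose=True):
    cmds = []
    for vrm in sorted(Path(vrm_dir).glob("*.vrm")):
        cmd = [
            "blender", "-b", "--python", "render_script/blender/render.py",
            str(vrm), str(fbx_path), str(Path(out_root) / vrm.stem),
        ]
        if apose:
            cmd.append("is_apose")  # optional flag: output the A-pose
        cmds.append(cmd)
    return cmds

def render_all(vrm_dir, fbx_path, out_root, apose=True):
    for cmd in build_render_cmds(vrm_dir, fbx_path, out_root, apose):
        subprocess.run(cmd, check=True)  # one headless Blender run per VRM
```

Each VRM gets its own output subfolder named after the file stem, so renders from different characters do not collide.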
three-vrm
This is much quicker than the Blender VRM add-on.
Install Node.js first to use the npm environment.
cd render_script/three-js
npm install three @pixiv/three-vrm
If you want to render depth-map images of the VRM, you should replace three-vrm with our modified version.
First, run the backend to receive the data from the frontend (the default port is 17070); remember to change the folder path.
pip install fastapi uvicorn aiofiles pillow numpy
python up_backend.py
Second, run the frontend to render the images.
npm run dev
Then open http://localhost:5173/. The frontend uses 2 threads to render the images, which takes about one day.
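To make the frontend-to-backend data flow concrete, here is a minimal backend sketch in the spirit of `up_backend.py`: it accepts a base64-encoded PNG from the three-js frontend and writes it to disk. The `/upload` route, the JSON field names, and the `save_frame` helper are illustrative assumptions; the real protocol lives in `up_backend.py`.

```python
# Sketch of a backend in the spirit of up_backend.py: the three-js frontend
# POSTs a base64-encoded PNG and the backend writes it to disk. The /upload
# route, JSON field names, and save_frame are illustrative assumptions; see
# up_backend.py for the real protocol.
import base64
from pathlib import Path

OUTPUT_DIR = Path("rendered")  # remember to change the folder path

def save_frame(name: str, image_b64: str, out_dir: Path = OUTPUT_DIR) -> Path:
    """Decode one base64 PNG payload and write it under out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / Path(name).name  # drop any directory components
    path.write_bytes(base64.b64decode(image_b64))
    return path

try:  # FastAPI wiring; the helper above works even without fastapi installed
    from fastapi import FastAPI
    from pydantic import BaseModel

    class Frame(BaseModel):
        name: str       # e.g. "char_0001_view03.png"
        image_b64: str  # base64-encoded PNG bytes

    app = FastAPI()

    @app.post("/upload")
    async def upload(frame: Frame) -> dict:
        return {"saved": save_frame(frame.name, frame.image_b64).name}
except ImportError:
    pass
```

Served with `uvicorn <module>:app --port 17070`, this matches the default port mentioned above.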
Our Result
| Single Input Image | 2D Multi-View Images | 3D Character |
|-------|-------|-------|

*(Result images omitted.)*
Acknowledgements
This project is built upon Tune-A-Video and TripoSR, and the rendering scripts are built upon three-vrm and VRM-Addon-for-Blender. Many thanks to the friends who generously helped with this work. We are especially grateful to Yuanchen, Yangguang, and Yuan Liang for their guidance on code details and ideas, and we thank all the authors for their great repos and help.
Citation
If you find our code or paper helpful, please consider citing:
@article{peng2024charactergen,
  title   = {CharacterGen: Efficient 3D Character Generation from Single Images with Multi-View Pose Canonicalization},
  author  = {Hao-Yang Peng and Jia-Peng Zhang and Meng-Hao Guo and Yan-Pei Cao and Shi-Min Hu},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2024},
  volume  = {43},
  number  = {4},
  doi     = {10.1145/3658217}
}