---
title: Hallo
emoji: 👋
sdk: gradio
sdk_version: 4.36.1
app_file: app.py
pinned: false
suggested_hardware: l4x1
short_description: Generate realistic talking heads from image+audio
---

# Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation

Mingwang Xu, Hui Li, Qingkun Su, Hanlin Shang, Liwei Zhang, Ce Liu, Jingdong Wang, Yao Yao, Siyu Zhu
## Usage

To run this Space locally:

```shell
pip install gradio
git clone https://huggingface.co/spaces/fudan-generative-ai/hallo
```
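Since the frontmatter declares `sdk: gradio` with `app_file: app.py`, the Space wraps inference behind a simple image-plus-audio interface. Below is a minimal sketch of what such an `app.py` could look like, not the Space's actual implementation; the `animate` helper is hypothetical and simply shells out to the CLI documented under "Run inference" below.

```python
import subprocess

import gradio as gr


def animate(source_image: str, driving_audio: str) -> str:
    """Hypothetical helper: run the documented CLI (see "Run inference"
    below) and return the path of the generated video."""
    output = ".cache/output.mp4"
    subprocess.run(
        [
            "python", "scripts/inference.py",
            "--source_image", source_image,
            "--driving_audio", driving_audio,
            "--output", output,
        ],
        check=True,
    )
    return output


demo = gr.Interface(
    fn=animate,
    inputs=[
        gr.Image(type="filepath", label="Source portrait"),
        gr.Audio(type="filepath", label="Driving audio"),
    ],
    outputs=gr.Video(label="Animated result"),
    title="Hallo: Audio-Driven Portrait Animation",
)

if __name__ == "__main__":
    demo.launch()
```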
## Showcase
https://github.com/fudan-generative-vision/hallo/assets/17402682/294e78ef-c60d-4c32-8e3c-7f8d6934c6bd
## Framework

- insightface: 2D and 3D face analysis, placed into `pretrained_models/face_analysis/models/`. (_Thanks to deepinsight_)
- face landmarker: face detection & mesh model from MediaPipe, placed into `pretrained_models/face_analysis/models/`.
- motion module: motion module from AnimateDiff. (_Thanks to guoyww_)
- sd-vae-ft-mse: weights intended to be used with the diffusers library. (_Thanks to stabilityai_)
- StableDiffusion V1.5: initialized and fine-tuned from Stable-Diffusion-v1-2. (_Thanks to runwayml_)
- wav2vec: wav-audio-to-vector model from Facebook.

These checkpoints should be organized under `./pretrained_models/` as follows:
```
./pretrained_models/
|-- audio_separator/
|   `-- Kim_Vocal_2.onnx
|-- face_analysis/
|   `-- models/
|       |-- face_landmarker_v2_with_blendshapes.task  # face landmarker model from mediapipe
|       |-- 1k3d68.onnx
|       |-- 2d106det.onnx
|       |-- genderage.onnx
|       |-- glintr100.onnx
|       `-- scrfd_10g_bnkps.onnx
|-- motion_module/
|   `-- mm_sd_v15_v2.ckpt
|-- sd-vae-ft-mse/
|   |-- config.json
|   `-- diffusion_pytorch_model.safetensors
|-- stable-diffusion-v1-5/
|   |-- feature_extractor/
|   |   `-- preprocessor_config.json
|   |-- model_index.json
|   |-- unet/
|   |   |-- config.json
|   |   `-- diffusion_pytorch_model.safetensors
|   `-- v1-inference.yaml
`-- wav2vec/
    `-- wav2vec2-base-960h/
        |-- config.json
        |-- feature_extractor_config.json
        |-- model.safetensors
        |-- preprocessor_config.json
        |-- special_tokens_map.json
        |-- tokenizer_config.json
        `-- vocab.json
```
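Before running inference, it is worth confirming this layout is complete. The following is a minimal sketch (not part of the repository); the `check_pretrained` helper and the subset of files it checks are illustrative, taken from the tree above.

```python
from pathlib import Path

# Illustrative subset of checkpoint files from the tree above; extend as needed.
REQUIRED = [
    "audio_separator/Kim_Vocal_2.onnx",
    "face_analysis/models/face_landmarker_v2_with_blendshapes.task",
    "face_analysis/models/scrfd_10g_bnkps.onnx",
    "motion_module/mm_sd_v15_v2.ckpt",
    "sd-vae-ft-mse/diffusion_pytorch_model.safetensors",
    "stable-diffusion-v1-5/unet/diffusion_pytorch_model.safetensors",
    "wav2vec/wav2vec2-base-960h/model.safetensors",
]


def check_pretrained(root: str = "./pretrained_models") -> list[str]:
    """Return the relative paths of any missing checkpoint files."""
    base = Path(root)
    return [rel for rel in REQUIRED if not (base / rel).exists()]


if __name__ == "__main__":
    missing = check_pretrained()
    if missing:
        print("Missing checkpoints:")
        for rel in missing:
            print(f"  {rel}")
    else:
        print("All required checkpoints found.")
```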
## Run inference
Simply run `scripts/inference.py` and pass `source_image` and `driving_audio` as input:
```shell
python scripts/inference.py --source_image your_image.png --driving_audio your_audio.wav
```
Animation results will be saved as `${PROJECT_ROOT}/.cache/output.mp4` by default. You can pass `--output` to specify the output file name.
For more options:
```
usage: inference.py [-h] [-c CONFIG] [--source_image SOURCE_IMAGE] [--driving_audio DRIVING_AUDIO] [--output OUTPUT] [--pose_weight POSE_WEIGHT]
                    [--face_weight FACE_WEIGHT] [--lip_weight LIP_WEIGHT] [--face_expand_ratio FACE_EXPAND_RATIO]

options:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
  --source_image SOURCE_IMAGE
                        source image
  --driving_audio DRIVING_AUDIO
                        driving audio
  --output OUTPUT       output video file name
  --pose_weight POSE_WEIGHT
                        weight of pose
  --face_weight FACE_WEIGHT
                        weight of face
  --lip_weight LIP_WEIGHT
                        weight of lip
  --face_expand_ratio FACE_EXPAND_RATIO
                        face region
```
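Per the help text above, `--pose_weight`, `--face_weight`, and `--lip_weight` weight the pose, face, and lip conditions. A quick way to feel the effect of one of them is to sweep it while keeping the others at their defaults; the loop below is an illustrative sketch using only the flags documented above (the file names are placeholders).

```python
import subprocess

# Hypothetical sweep: vary --lip_weight and compare the resulting videos.
for lip_weight in (0.8, 1.0, 1.2):
    subprocess.run(
        [
            "python", "scripts/inference.py",
            "--source_image", "your_image.png",
            "--driving_audio", "your_audio.wav",
            "--output", f".cache/output_lip{lip_weight}.mp4",
            "--lip_weight", str(lip_weight),
        ],
        check=True,
    )
```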
## Roadmap
| Status | Milestone | ETA |
| :----: | :-------- | :--------: |
| ✅ | Inference source code released on GitHub | 2024-06-15 |
| ✅ | Pretrained models on Hugging Face | 2024-06-15 |
| 🚀🚀🚀 | [Training: data preparation and training scripts]() | 2024-06-25 |
## Citation
If you find our work useful for your research, please consider citing the paper:
```
@misc{xu2024hallo,
  title={Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation},
  author={Mingwang Xu and Hui Li and Qingkun Su and Hanlin Shang and Liwei Zhang and Ce Liu and Jingdong Wang and Yao Yao and Siyu Zhu},
  year={2024},
  eprint={2406.08801},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
## Opportunities available
Multiple research positions are open at the Generative Vision Lab, Fudan University, including:
- Research assistant
- Postdoctoral researcher
- PhD candidate
- Master's students
## Social Risks and Mitigations
The development of portrait image animation technologies driven by audio inputs poses social risks, such as the ethical implications of creating realistic portraits that could be misused for deepfakes. To mitigate these risks, it is crucial to establish ethical guidelines and responsible use practices. Privacy and consent concerns also arise from using individuals' images and voices. Addressing these involves transparent data usage policies, informed consent, and safeguarding privacy rights. By addressing these risks and implementing mitigations, the research aims to ensure the responsible and ethical development of this technology.