# manimator

## What is _manimator_?

manimator is a tool that transforms research papers and mathematical concepts into stunning visual explanations, powered by AI and the Manim engine.

Building on the incredible work by 3Blue1Brown and the manim community, _manimator_ turns complex research papers and user prompts into clear, animated explainer videos.
🔗 Try it out:

- Gradio Demo: https://huggingface.co/spaces/HyperCluster/manimator
- Or replace `arxiv.org` with `manimator.hypercluster.tech` in any arXiv PDF URL for instant visualizations!
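For example, applying that rule to the project's own paper, the arXiv PDF URL `https://arxiv.org/pdf/2507.14306` would become `https://manimator.hypercluster.tech/pdf/2507.14306`.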
🌟 Highlights so far:
- Over 1,000 uses within 24 hours of launch and over 5,000 uses within a week
- Featured as Hugging Face's Space of the Week!
- 16th in Hugging Face's Top Trending Spaces
- Take a look at the paper on arXiv: https://www.arxiv.org/abs/2507.14306
🎥 Demo Videos:
- ArXiv usage walkthrough
- Gradio walkthrough
## Installation

> [!IMPORTANT]
> This project is built using the Poetry tool to manage Python packages and dependencies. Download it from here to run this project, or use the Docker image instead.
>
> This project depends on the Manim engine and therefore has certain system dependencies for running the engine properly, which can be found here.
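As a rough illustration only (defer to the dependency list the note above points to), the engine's system-level requirements on a Debian/Ubuntu machine typically look something like this:

```bash
# Illustrative sketch of Manim's typical system dependencies on Debian/Ubuntu;
# consult the Manim documentation for the authoritative, current list.
sudo apt update
sudo apt install build-essential python3-dev libcairo2-dev libpango1.0-dev ffmpeg

# Optional: a LaTeX distribution, used by Manim for rendering mathematical text.
sudo apt install texlive texlive-latex-extra
```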
Clone the repository:

```bash
git clone https://github.com/HyperCluster-Tech/manimator
cd manimator
```
Install dependencies:

```bash
poetry install
```

Activate the environment:

```bash
poetry env activate
```

(If you are using a version of Poetry before 2.0, use `poetry shell` instead.)
## Usage

After installing the project dependencies and the Manim dependencies, set the environment variables in a `.env` file according to `.env.example`.

Run the FastAPI server:

```bash
poetry run app
```

and visit `localhost:8000/docs` to open the Swagger UI.

Run the Gradio interface:

```bash
poetry run gradio-app
```

and open `localhost:7860`.
## Notes

- To change the models being used, set the model environment variables according to LiteLLM syntax, along with the corresponding API keys (see the sketch below).
- To tailor the prompt engineering to your use case, modify the system prompts in `utils/system_prompts.py` and the few-shot examples in `few_shot/few_shot_prompts.py`.
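As a minimal sketch, assuming hypothetical variable names (the actual ones are defined in `.env.example`), such a configuration could look like the following; the model identifiers follow LiteLLM's `provider/model` syntax and the API-key variables are the ones LiteLLM reads for those providers:

```bash
# Hypothetical sketch only -- the real variable names are defined in .env.example.
export MANIMATOR_MODEL="groq/llama-3.3-70b-versatile"  # hypothetical variable name; LiteLLM "provider/model" syntax
export GROQ_API_KEY="your-groq-api-key"                # API key LiteLLM uses for Groq models
export GEMINI_API_KEY="your-gemini-api-key"            # API key LiteLLM uses for Gemini models
```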
## 🛳️ Docker

To use manimator with Docker, first build the image locally:

```bash
docker build -t manimator .
```

Then run the container. For the FastAPI server:

```bash
docker run -p 8000:8000 manimator
```

For the Gradio interface:

```bash
docker run -p 7860:7860 manimator
```
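Since the application reads its configuration from environment variables (see Usage above), you will typically need to supply them to the container as well; one way to do that, shown here only as a sketch, is Docker's `--env-file` flag:

```bash
# Sketch: reuse the local .env file inside the container.
docker run --env-file .env -p 8000:8000 manimator
```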
## API Endpoints

### Health Check

#### Check API Health Status

Endpoint: `/health-check`

Method: `GET`

Returns the health status of the API.

Response:

```json
{
  "status": "ok"
}
```

Curl command:

```bash
curl http://localhost:8000/health-check
```
### PDF Processing

#### Generate PDF Scene

Endpoint: `/generate-pdf-scene`

Method: `POST`

Processes a PDF file and generates a scene description for animation.

Request:

- Content-Type: `multipart/form-data`
- Body: PDF file

Response:

```json
{
  "scene_description": "Generated scene description based on PDF content"
}
```

Curl command:

```bash
curl -X POST -F "file=@/path/to/file.pdf" http://localhost:8000/generate-pdf-scene
```
#### Process ArXiv PDF

Endpoint: `/pdf/{arxiv_id}`

Method: `GET`

Downloads and processes an arXiv paper by ID to generate a scene description.

Parameters:

- `arxiv_id`: The arXiv paper identifier

Response:

```json
{
  "scene_description": "Generated scene description based on arXiv paper"
}
```

Curl command:

```bash
curl http://localhost:8000/pdf/2312.12345
```
### Scene Generation

#### Generate Prompt Scene

Endpoint: `/generate-prompt-scene`

Method: `POST`

Generates a scene description from a text prompt.

Request:

- Content-Type: `application/json`
- Body:

```json
{
  "prompt": "Your scene description prompt"
}
```

Response:

```json
{
  "scene_description": "Generated scene description based on prompt"
}
```

Curl command:

```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Explain how neural networks work"}' \
  http://localhost:8000/generate-prompt-scene
```
### Animation Generation

#### Generate Animation

Endpoint: `/generate-animation`

Method: `POST`

Generates a Manim animation based on a text prompt.

Request:

- Content-Type: `application/json`
- Body:

```json
{
  "prompt": "Your animation prompt"
}
```

Response:

- Content-Type: `video/mp4`
- Body: Generated MP4 animation file

Curl command:

```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Create an animation explaining quantum computing"}' \
  --output animation.mp4 \
  http://localhost:8000/generate-animation
```
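As an illustrative sketch (not an officially documented workflow), the two prompt endpoints can be combined to first preview the generated scene description and then render the animation for the same prompt; `jq` is assumed to be available for pretty-printing:

```bash
# Illustrative sketch combining the two documented prompt endpoints.
PROMPT="Explain the intuition behind the Fourier transform"

# 1. Preview the generated scene description (uses jq for formatting).
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": \"$PROMPT\"}" \
  http://localhost:8000/generate-prompt-scene | jq -r .scene_description

# 2. Render the animation for the same prompt and save it locally.
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": \"$PROMPT\"}" \
  --output fourier.mp4 \
  http://localhost:8000/generate-animation
```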
### Error Handling

All endpoints follow consistent error handling:

- `400`: Bad Request - Invalid input or missing required fields
- `500`: Internal Server Error - Processing or generation failure

Error response format:

```json
{
  "detail": "Error description"
}
```
## Notes

### Coming Soon
- Improved Generation Quality
- Video Transcription
- Adding Audio
- Chrome Extension
### Limitations
- LLM Limitations
- Video Generation Limitations
## License
manimator is licensed under the MIT License. See LICENSE for more information.
The project uses the Manim engine under the hood, which is double-licensed under the MIT license, with copyright by 3blue1brown LLC and copyright by Manim Community Developers.
## Acknowledgements

We thank the Manim Community and 3Blue1Brown for developing and maintaining the Manim library, which serves as the foundation for this project. manimator was developed by Samarth P, Vyoman Jain, Shiva Golugula, and M Sai Sathvik.
Models and Providers being used:
- DeepSeek-V3
- Llama 3.3 70B via Groq
- Gemini 1.5 Flash / 2.0 Flash-experimental
## Contact

For any inquiries, please contact us at [email protected] or visit our website, hypercluster.tech.