The FreeTacMan dataset enables diverse research directions in visuo-tactile learning and manipulation:
- **System Reproduction**: For researchers interested in hardware implementation, you can reproduce FreeTacMan from scratch using our 🛠️ Hardware Guide and 💻 Code.
- **Multimodal Imitation Learning**: Transfer to other LED-based tactile sensors (such as GelSight) for developing robust multimodal imitation learning frameworks.
- **Tactile-aware Grasping**: Use the dataset to pre-train tactile representation models and to develop tactile-aware reasoning systems.
- **Simulation-to-Real Transfer**: Leverage the dynamic tactile interaction sequences to improve tactile simulation fidelity and narrow the sim2real gap.
## 📁 Dataset Structure
The dataset is organized into 50 task categories, each containing:
- **Video files**: Synchronized video recordings from the wrist-mounted and visuo-tactile cameras for each demonstration
- **Trajectory files**: Tracking data for the tool center point (TCP) pose and the gripper opening distance
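As a rough orientation, the sketch below walks one plausible on-disk layout. The directory structure and file names (`wrist.mp4`, `tactile.mp4`, `trajectory.csv`) are assumptions for illustration, not the official naming; adapt them to match the actual release.

```python
from pathlib import Path

# Assumed layout (not official): <root>/<task_name>/<demo_id>/
#   containing wrist.mp4, tactile.mp4, trajectory.csv.
def iter_demonstrations(root: str):
    """Yield (task_name, demo_dir) for every demonstration folder."""
    for task_dir in sorted(Path(root).iterdir()):
        if task_dir.is_dir():
            for demo_dir in sorted(task_dir.iterdir()):
                if demo_dir.is_dir():
                    yield task_dir.name, demo_dir

for task, demo in iter_demonstrations("FreeTacMan-dataset"):
    print(task, demo / "trajectory.csv")
```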
## 🧾 Data Format
### Video Files
- **Format**: MP4
- **Views**: Wrist-mounted camera and visuo-tactile camera perspectives per demonstration
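As a small usage sketch, the snippet below reads the two views of one demonstration frame by frame with OpenCV. The file paths are placeholders, not the release's actual naming.

```python
import cv2

# Placeholder paths: substitute the actual wrist-camera and tactile-camera
# files for a demonstration. The two recordings are synchronized per the
# dataset description.
wrist = cv2.VideoCapture("demo_0001/wrist.mp4")
tactile = cv2.VideoCapture("demo_0001/tactile.mp4")

while True:
    ok_w, wrist_frame = wrist.read()
    ok_t, tactile_frame = tactile.read()
    if not (ok_w and ok_t):
        break  # stop when either stream ends
    # Each frame is an HxWx3 BGR uint8 array; process the pair here.

wrist.release()
tactile.release()
```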
### Trajectory Files
Each trajectory file contains time-series columns for the tool center point (TCP) pose and the gripper opening distance.
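A minimal loading sketch, assuming the trajectories are stored as CSV; the file name and the column names (`x`, `y`, `z`, `gripper_distance`) are hypothetical and should be checked against an actual file header.

```python
import pandas as pd

# Assumed format (not official): one CSV per demonstration, one row per timestep.
df = pd.read_csv("demo_0001/trajectory.csv")

tcp_position = df[["x", "y", "z"]].to_numpy()  # TCP position (assumed column names)
gripper = df["gripper_distance"].to_numpy()    # gripper opening (assumed column name)
print(tcp_position.shape, gripper.shape)
```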
## 📝 Citation
If you use this dataset in your research, please cite:
```bibtex
@inproceedings{wu2025freetacman,
  title={FreeTacMan: Robot-free visuo-tactile data collection system for contact-rich manipulation},
  author={Wu, Longyan and Yu, Checheng and Ren, Jieji and Chen, Li and Jiang, Yufei and Huang, Ran and Gu, Guoying and Li, Hongyang},
  booktitle={IEEE International Conference on Robotics and Automation},
  year={2026}
}
```
## 💼 License
This dataset is released under the MIT License. See LICENSE file for details.
## 📧 Contact
For questions or issues regarding the dataset, please contact: Longyan Wu ([email protected]).