2025 Challenge Demos
Dataset by Behavior 1k (hf-dataset--behavior-1k--2025-challenge-demos)
Nexus Index: 36.9 (Top 100%)
S: Semantic 50
A: Authority 0
P: Popularity 60
R: Recency 48
Q: Quality 30
Tech Context
Vital Performance: 0.0% (0 downloads in last 30 days)
Data Integrity: 36.9 (FNI Score)
Format: Parquet (size, row, and token counts not indexed)
| Entity Passport | |
| --- | --- |
| Registry ID | hf-dataset--behavior-1k--2025-challenge-demos |
| License | MIT |
| Provider | huggingface |
Cite this dataset
Academic & Research Attribution
BibTeX
@misc{hf_dataset__behavior_1k__2025_challenge_demos,
author = {Behavior 1k},
title = {2025 Challenge Demos Dataset},
year = {2026},
howpublished = {\url{https://huggingface.co/datasets/behavior-1k/2025-challenge-demos}},
note = {Accessed via Free2AITools Knowledge Fortress}
}

APA Style
Behavior 1k. (2026). 2025 Challenge Demos [Dataset]. Free2AITools. https://huggingface.co/datasets/behavior-1k/2025-challenge-demos
Technical Deep Dive
Full Specifications
Nexus Index V2.0: 36.9 (Top 100% system impact)
Downloads: 149,574
Data Preview
Row-level preview is not available for this dataset, and its schema has not yet been indexed.
Dataset Specification
This dataset was created using LeRobot.
Dataset Description
- Homepage: [More Information Needed]
- Paper: [More Information Needed]
- License: mit
Dataset Structure
json
{
"codebase_version": "v2.1",
"robot_type": "R1Pro",
"total_episodes": 10000,
"total_frames": 119094660,
"total_tasks": 50,
"total_videos": 90000,
"chunks_size": 10000,
"fps": 30,
"splits": {
"train": "0:10000"
},
"data_path": "data/task-{episode_chunk:04d}/episode_{episode_index:08d}.parquet",
"video_path": "videos/task-{episode_chunk:04d}/{video_key}/episode_{episode_index:08d}.mp4",
"metainfo_path": "meta/episodes/task-{episode_chunk:04d}/episode_{episode_index:08d}.json",
"annotation_path": "annotations/task-{episode_chunk:04d}/episode_{episode_index:08d}.json",
"features": {
"observation.images.rgb.left_wrist": {
"dtype": "video",
"shape": [
480,
480,
3
],
"names": [
"height",
"width",
"rgb"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 480,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.rgb.right_wrist": {
"dtype": "video",
"shape": [
480,
480,
3
],
"names": [
"height",
"width",
"rgb"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 480,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.rgb.head": {
"dtype": "video",
"shape": [
720,
720,
3
],
"names": [
"height",
"width",
"rgb"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 720,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.depth.left_wrist": {
"dtype": "video",
"shape": [
480,
480,
3
],
"names": [
"height",
"width",
"depth"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 480,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p16le",
"video.is_depth_map": true,
"has_audio": false
}
},
"observation.images.depth.right_wrist": {
"dtype": "video",
"shape": [
480,
480,
3
],
"names": [
"height",
"width",
"depth"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 480,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p16le",
"video.is_depth_map": true,
"has_audio": false
}
},
"observation.images.depth.head": {
"dtype": "video",
"shape": [
720,
720,
3
],
"names": [
"height",
"width",
"depth"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 720,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p16le",
"video.is_depth_map": true,
"has_audio": false
}
},
"observation.images.seg_instance_id.left_wrist": {
"dtype": "video",
"shape": [
480,
480,
3
],
"names": [
"height",
"width",
"rgb"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 480,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.seg_instance_id.right_wrist": {
"dtype": "video",
"shape": [
480,
480,
3
],
"names": [
"height",
"width",
"rgb"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 480,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.seg_instance_id.head": {
"dtype": "video",
"shape": [
720,
720,
3
],
"names": [
"height",
"width",
"rgb"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 720,
"video.channels": 3,
"video.codec": "libx265",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
23
],
"names": null
},
"timestamp": {
"dtype": "float64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"observation.cam_rel_poses": {
"dtype": "float32",
"shape": [
21
],
"names": null
},
"observation.state": {
"dtype": "float32",
"shape": [
256
],
"names": null
},
"observation.task_info": {
"dtype": "float32",
"shape": [
null
],
"names": null
}
}
}
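The info block above fixes the on-disk layout through Python-style format templates and gives enough totals to derive basic statistics. The sketch below is a minimal illustration using only values quoted from that block; the helper name `episode_paths` and the chunking rule `episode_index // chunks_size` are my own assumptions inferred from the `{episode_chunk:04d}` placeholder, not something the card states.

```python
# Values copied verbatim from the dataset's info block.
CHUNKS_SIZE = 10_000
TOTAL_EPISODES = 10_000
TOTAL_FRAMES = 119_094_660
FPS = 30

DATA_PATH = "data/task-{episode_chunk:04d}/episode_{episode_index:08d}.parquet"
VIDEO_PATH = "videos/task-{episode_chunk:04d}/{video_key}/episode_{episode_index:08d}.mp4"


def episode_paths(episode_index: int,
                  video_key: str = "observation.images.rgb.head"):
    """Format the relative parquet and video paths for one episode.

    Assumes episodes are grouped into chunks of CHUNKS_SIZE, which is
    an inference from the template, not documented behavior.
    """
    chunk = episode_index // CHUNKS_SIZE
    data_file = DATA_PATH.format(episode_chunk=chunk,
                                 episode_index=episode_index)
    video_file = VIDEO_PATH.format(episode_chunk=chunk,
                                   episode_index=episode_index,
                                   video_key=video_key)
    return data_file, video_file


data_file, video_file = episode_paths(42)
print(data_file)   # data/task-0000/episode_00000042.parquet
print(video_file)  # videos/task-0000/observation.images.rgb.head/episode_00000042.mp4

# The totals also imply an average episode length:
# 119,094,660 frames / 10,000 episodes ~= 11,909 frames,
# which at 30 fps is roughly 397 s (about 6.6 minutes) per episode.
avg_frames = TOTAL_FRAMES / TOTAL_EPISODES
avg_seconds = avg_frames / FPS
print(f"{avg_frames:.0f} frames, {avg_seconds:.0f} s per episode")
```

With all 10,000 episodes in chunk 0, the `{episode_chunk:04d}` field only matters if the dataset grows past `chunks_size`; the zero-padded `{episode_index:08d}` keeps file listings lexicographically sorted.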
Citation
BibTeX:
bibtex
@article{li2024behavior,
title={Behavior-1k: A human-centered, embodied ai benchmark with 1,000 everyday activities and realistic simulation},
author={Li, Chengshu and Zhang, Ruohan and Wong, Josiah and Gokmen, Cem and Srivastava, Sanjana and Mart{'i}n-Mart{'i}n, Roberto and Wang, Chen and Levine, Gabrael and Ai, Wensi and Martinez, Benjamin and Yin, Hang and Lingelbach, Michael and Hwang, Minjune and Hiranaka, Ayano and Garlanka, Sujay and Aydin, Arman and Lee, Sharon and Sun, Jiankai and Anvari, Mona and Sharma, Manasi and Bansal, Dhruva and Hunter, Samuel and Kim, Kyu-Young and Lou, Alan and Matthews, Caleb R. and Villa-Renteria, Ivan and Tang, Jerry Huayang and Tang, Claire and Xia, Fei and Li, Yunzhu and Savarese, Silvio and Gweon, Hyowon and Liu, C. Karen and Wu, Jiajun and Fei-Fei, Li},
journal={arXiv preprint arXiv:2403.09227},
year={2024}
}
Social Proof
HuggingFace Hub: 149.6K downloads
Daily sync (03:00 UTC)
AI Summary: Based on Hugging Face metadata. Not a recommendation.
Dataset Transparency Report
Technical metadata sourced from upstream repositories.
Identity & Source
- id: hf-dataset--behavior-1k--2025-challenge-demos
- slug: behavior-1k--2025-challenge-demos
- source: huggingface
- author: Behavior 1k
- license: MIT
- tags: task_categories:robotics, license:mit, modality:video, arxiv:2403.09227, doi:10.57967/hf/6394, region:us, lerobot, v2.1
Technical Specs
- architecture: null
- params (billions): null
- context length: null
- pipeline tag:
Engagement & Metrics
- downloads: 149,574
- stars: 33
- forks: 0

Data indexed from public sources. Updated daily.