πŸ“Š Dataset: Egocentric 10k Evaluation

by Voxel51 (huggingface: voxel51/egocentric_10k_evaluation)
πŸ“œ Cite this dataset

Academic & Research Attribution

BibTeX

```bibtex
@misc{hf_dataset__voxel51__egocentric_10k_evaluation,
  author = {Voxel51},
  title = {Egocentric 10k Evaluation Dataset},
  year = {2026},
  howpublished = {\url{https://huggingface.co/datasets/voxel51/egocentric_10k_evaluation}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
```
APA Style
Voxel51. (2026). Egocentric 10k Evaluation [Dataset]. Free2AITools. https://huggingface.co/datasets/voxel51/egocentric_10k_evaluation


πŸ‘οΈ Data Preview

πŸ“Š

Row-level preview not available for this dataset.

Schema structure is shown in the Field Logic panel when available.

πŸ”— Explore Full Dataset β†—


Dataset Specification

Dataset Card for Egocentric_10K_Evaluation


This is a FiftyOne dataset with 30,000 samples.

Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

Usage

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset from the Hugging Face Hub
# Note: other available arguments include 'max_samples', etc.
dataset = load_from_hub("Voxel51/Egocentric_10K_Evaluation")

# Launch the FiftyOne App to browse the samples
session = fo.launch_app(dataset)
```

Dataset Details

Dataset Description

Egocentric-10K-Evaluation is a benchmark evaluation set and analysis protocol for large-scale egocentric (first-person) video datasets, focused on measuring hand visibility and active manipulation in real-world, in-the-wild scenarios, especially relevant for robotics, computer vision, and AI agent training on manipulation tasks.[1][2][3]

  • Curated by: builddotai
  • Shared by : builddotai
  • License: Apache 2.0


Uses

Direct Use

This dataset is intended for benchmarking egocentric video data with respect to hand presence and active object manipulation, enabling standardized analysis, dataset comparison, and the development/evaluation of perception and robotics models centered on real-world human skill tasks.
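As a sketch of the standardized-comparison use case described above, the snippet below computes the fraction of frames labeled as active manipulation per source dataset. The record field names (`dataset`, `active_manipulation`) are illustrative assumptions, not the released schema.

```python
from collections import defaultdict

def manipulation_rate(samples):
    """Fraction of frames labeled as active manipulation, per source dataset.

    Assumes each record carries a 'dataset' reference and a boolean
    'active_manipulation' label; the real field names may differ.
    """
    counts = defaultdict(lambda: [0, 0])  # dataset -> [positives, total]
    for s in samples:
        bucket = counts[s["dataset"]]
        bucket[1] += 1
        if s["active_manipulation"]:
            bucket[0] += 1
    return {name: pos / total for name, (pos, total) in counts.items()}

samples = [
    {"dataset": "egocentric_10k", "active_manipulation": True},
    {"dataset": "egocentric_10k", "active_manipulation": True},
    {"dataset": "ego4d", "active_manipulation": True},
    {"dataset": "ego4d", "active_manipulation": False},
]
rates = manipulation_rate(samples)
# rates == {"egocentric_10k": 1.0, "ego4d": 0.5}
```

A per-dataset rate like this is the kind of single scalar that makes cross-dataset comparison (e.g. factory footage vs. Ego4D) straightforward.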

Dataset Structure

Egocentric-10K-Evaluation consists of 10,000 sampled frames from factory egocentric video and comparable samples from other major datasets (Ego4D, EPIC-KITCHENS); each sample includes JSON metadata, hand label annotations (count 0, 1, or 2), and a binary label for presence/absence of active manipulation. The splits are standardized; additional metadata includes dataset, worker, and video index references.
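The per-sample structure described above might look like the following record; every key name here is hypothetical, inferred from the card's prose rather than taken from the released JSON.

```python
# Hypothetical shape of one evaluation record, inferred from the card's
# description; the actual key names in the released JSON may differ.
sample = {
    "dataset": "egocentric_10k",   # source dataset reference
    "worker_index": 12,            # worker reference
    "video_index": 345,            # video reference
    "hand_count": 2,               # visible hands: 0, 1, or 2
    "active_manipulation": "yes",  # binary manipulation label: "yes"/"no"
}

def is_valid(record):
    """Check a record against the label sets described in the card."""
    return (record["hand_count"] in (0, 1, 2)
            and record["active_manipulation"] in ("yes", "no"))

assert is_valid(sample)
```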

Dataset Creation

Curation Rationale

To create a standardized benchmark for hand visibility and manipulation, facilitating research on manipulation-heavy tasks in robotics and AI using real industrial and skill-focused footage.

Source Data

Data Collection and Processing

The evaluation set comprises frames drawn from the primary Egocentric-10K dataset (real-world factory footage collected via head-mounted cameras), as well as standardized samples from open egocentric datasets Ego4D and EPIC-KITCHENS for comparison. Data is provided in 1080p, 30 FPS H.265 MP4 format, with structured JSON metadata and hand/manipulation annotations.

Who are the source data producers?

Egocentric-10K’s original video data was produced by real factory workers wearing head-mounted cameras, performing natural work-line activities. Annotation was performed following strict guidelines as described in the evaluation schema.

Annotations

Annotation process

Each sampled frame is annotated for number of visible hands (0/1/2; detailed rules provided) and whether the hands are engaged in active manipulation (β€œyes”/β€œno” per explicit definition). The annotation schema and rules are detailed in the benchmark documentation.
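As a minimal sketch of how the 0/1/2 hand labels could be aggregated downstream (the label values come from the card; the function and sample data are illustrative):

```python
from collections import Counter

def hand_count_distribution(labels):
    """Turn a list of 0/1/2 visible-hand labels into a normalized distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: counts[k] / total for k in (0, 1, 2)}

dist = hand_count_distribution([2, 2, 1, 0, 2, 1, 2, 2])
# dist == {0: 0.125, 1: 0.25, 2: 0.625}
```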



πŸ›‘οΈ Dataset Transparency Report

Verified data manifest for traceability and transparency.


πŸ†” Identity & Source

id: hf-dataset--voxel51--egocentric_10k_evaluation
slug: voxel51--egocentric_10k_evaluation
source: huggingface
author: Voxel51
license: Apache 2.0
tags: task_categories:image-classification, language:en, size_categories:10k<n<100k, modality:image, library:fiftyone, region:us, fiftyone, image, image-classification

βš™οΈ Technical Specs

architecture
null
params billions
null
context length
10,240
pipeline tag

πŸ“Š Engagement & Metrics

downloads: 38,548
stars: 1
forks: 0
