Dataset Card for Egocentric_10K_Evaluation

This is a FiftyOne dataset with 30000 samples.

Installation

If you haven't already, install FiftyOne:

pip install -U fiftyone

Usage

import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc.
dataset = load_from_hub("Voxel51/Egocentric_10K_Evaluation")

# Launch the App
session = fo.launch_app(dataset)
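
If you only want a quick look at the data, the 'max_samples' argument mentioned in the comment above limits how much is downloaded; a minimal sketch:

import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load only the first 100 samples instead of the full dataset
subset = load_from_hub(
    "Voxel51/Egocentric_10K_Evaluation",
    max_samples=100,
)

session = fo.launch_app(subset)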

Dataset Details

Dataset Description

Egocentric-10K-Evaluation is a benchmark evaluation set and analysis protocol for large-scale egocentric (first-person) video datasets. It focuses on measuring hand visibility and active manipulation in real-world, in-the-wild scenarios, which is especially relevant for robotics, computer vision, and training AI agents on manipulation tasks.[1][2][3]

  • Curated by: builddotai
  • Shared by: builddotai
  • License: Apache 2.0

Dataset Sources

Uses

Direct Use

This dataset is intended for benchmarking egocentric video data with respect to hand presence and active object manipulation, enabling standardized analysis, dataset comparison, and the development/evaluation of perception and robotics models centered on real-world human skill tasks.
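
As a rough sketch of the kind of standardized analysis this enables, the snippet below tallies the two benchmark labels with FiftyOne aggregations. The field names 'hands' and 'active_manipulation' are assumptions for illustration, not the confirmed schema:

import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("Voxel51/Egocentric_10K_Evaluation")

# 'hands' and 'active_manipulation' are hypothetical field names; list
# the real ones with dataset.get_field_schema(). If the labels are
# stored as Classification fields, append ".label" to the paths.
print(dataset.count_values("hands"))
print(dataset.count_values("active_manipulation"))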

Dataset Structure

Egocentric-10K-Evaluation consists of 10,000 sampled frames from factory egocentric video, plus comparable samples from other major datasets (Ego4D, EPIC-KITCHENS). Each sample includes JSON metadata, a hand-count annotation (0, 1, or 2 visible hands), and a binary label for the presence or absence of active manipulation. The splits are standardized, and additional metadata includes dataset, worker, and video index references.
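
Continuing from the Usage snippet above, here is a sketch of how these fields might be inspected; the filter uses hypothetical field names and values, so adapt them to the actual schema:

from fiftyone import ViewField as F

# Print the actual per-sample fields (metadata, labels, index references)
print(dataset.get_field_schema())

# Hypothetical filter: frames with two visible hands engaged in active
# manipulation; 'hands' and 'active_manipulation' are assumed names
view = dataset.match((F("hands") == 2) & (F("active_manipulation") == "yes"))
print(len(view))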

Dataset Creation

Curation Rationale

The dataset was created to provide a standardized benchmark for hand visibility and manipulation, facilitating research on manipulation-heavy tasks in robotics and AI using real industrial and skill-focused footage.

Source Data

Data Collection and Processing

The evaluation set comprises frames drawn from the primary Egocentric-10K dataset (real-world factory footage collected via head-mounted cameras), as well as standardized samples from the open egocentric datasets Ego4D and EPIC-KITCHENS for comparison. Data is provided as 1080p, 30 FPS H.265 MP4 video, with structured JSON metadata and hand/manipulation annotations.

Who are the source data producers?

Egocentric-10K’s original video data was produced by real factory workers wearing head-mounted cameras while performing natural work-line activities. Annotation was performed following strict guidelines, as described in the evaluation schema.

Annotations

Annotation process

Each sampled frame is annotated for the number of visible hands (0/1/2, with detailed rules provided) and for whether the hands are engaged in active manipulation (“yes”/“no” per an explicit definition). The annotation schema and rules are detailed in the benchmark documentation.
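
To make the schema concrete, here is a hypothetical sketch of what a single frame's annotation record could look like; the key names are illustrative assumptions, and the benchmark documentation remains the authoritative reference:

import json

# Hypothetical per-frame annotation record; key names are illustrative
record = {
    "dataset": "Egocentric-10K",    # source dataset reference
    "worker": 17,                   # worker index reference
    "video": 4,                     # video index reference
    "hands": 2,                     # visible hands: 0, 1, or 2
    "active_manipulation": "yes",   # "yes"/"no" per the explicit definition
}
print(json.dumps(record, indent=2))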

Citation
