Annotation and QA

The internal workspace that turns raw robotics video into buyer-ready data.

This MVP view makes the second side of the platform visible: annotators label robotics data, reviewers check quality, and HumanoidLayer packages verified outputs for buyers.
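
As a way to pin down what the view is tracking, here is a minimal TypeScript sketch of the data shapes it implies; every type and field name is hypothetical rather than the platform's actual schema.

```typescript
// Hypothetical core shapes behind this view. None of these names are
// HumanoidLayer's real schema; they only mirror what the mock UI shows.
type Role = "annotator" | "reviewer";

type TaskStatus = "in progress" | "submitted" | "needs fix" | "approved";

interface AnnotationTask {
  id: string;             // e.g. "ANN-710"
  dataset: string;        // e.g. "Warehouse exception handling video set"
  file: string;           // e.g. "pick-scan-exception-042.mp4"
  taskType: string;       // e.g. "temporal segmentation"
  assignee: string;
  status: TaskStatus;
  qualityScore?: number;  // 0-100, set once a reviewer scores the work
}
```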

Active files: 3
QA reviews: 2
Avg score: 84

Active video task

Warehouse exception handling video set (in progress)

File: pick-scan-exception-042.mp4
Task: temporal segmentation
Assignee: J. Okafor
Due: 2026-05-08

Annotation progress: 62%

Annotation fields: object labels, action phases, success/failure, hand-object state, scene metadata, license/source notes, QA flags, format fields
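
One plausible reading of the progress figure is the share of completed annotation fields. In the sketch below, the `LayerState` shape, the even weighting, and which five fields are done are all assumptions.

```typescript
// Hypothetical per-file annotation fields; "complete" would flip as the
// annotator finishes each one. Which five are done is assumed for the demo.
interface LayerState {
  name: string;
  complete: boolean;
}

// Progress as the floor of the completed share, in whole percent.
// The real product may weight fields differently.
function annotationProgress(layers: LayerState[]): number {
  if (layers.length === 0) return 0;
  const done = layers.filter((l) => l.complete).length;
  return Math.floor((done / layers.length) * 100);
}

const fields: LayerState[] = [
  { name: "object labels", complete: true },
  { name: "action phases", complete: true },
  { name: "success/failure", complete: true },
  { name: "hand-object state", complete: true },
  { name: "scene metadata", complete: true },
  { name: "license/source notes", complete: false },
  { name: "QA flags", complete: false },
  { name: "format fields", complete: false },
];

annotationProgress(fields); // => 62, consistent with the 62% shown
```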

QA checklist

Reviewers check consistency before data becomes buyer-ready.

Label taxonomy matches buyer schema

Segments have start/end boundaries

Sensitive content flags reviewed

License/source notes attached

Export format validated
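
The checklist maps naturally to data plus a validator. The sketch below assumes a simple pass/fail per item; only the check names come from the list above.

```typescript
// The reviewer checklist as data plus a validator. Pass/fail per item is
// assumed; check names are the ones listed above.
interface QACheck {
  name: string;
  passed: boolean;
}

// Names of failed checks; an empty result means the file can move toward
// buyer-ready packaging.
function failedChecks(checks: QACheck[]): string[] {
  return checks.filter((c) => !c.passed).map((c) => c.name);
}

const review: QACheck[] = [
  { name: "Label taxonomy matches buyer schema", passed: true },
  { name: "Segments have start/end boundaries", passed: true },
  { name: "Sensitive content flags reviewed", passed: true },
  { name: "License/source notes attached", passed: false },
  { name: "Export format validated", passed: true },
];

failedChecks(review); // => ["License/source notes attached"]
```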

QA decision model

Tasks are approved, rejected, or sent back for fixes. Quality scores become marketplace trust signals.
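
The decision model can be sketched as a three-way union plus a naive trust signal. The mean-of-scores rule below is an assumption, though it does reproduce the 84 average shown in this view's stats.

```typescript
// The three-way QA decision from the text above, plus a naive trust signal.
// The mean-of-scores rule is an assumption, not the platform's formula.
type QADecision = "approved" | "rejected" | "needs fix";

interface ReviewResult {
  taskId: string;
  decision: QADecision;
  score: number; // 0-100, as in the queue cards
}

// One plausible marketplace trust signal: the rounded mean quality score.
function trustSignal(results: ReviewResult[]): number {
  if (results.length === 0) return 0;
  const total = results.reduce((sum, r) => sum + r.score, 0);
  return Math.round(total / results.length);
}

// Decisions here are illustrative; the scores are the ones shown in the queue.
trustSignal([
  { taskId: "ANN-710", decision: "approved", score: 91 },
  { taskId: "ANN-708", decision: "approved", score: 88 },
  { taskId: "ANN-701", decision: "needs fix", score: 73 },
]); // => 84, which matches the "Avg score" stat at the top of this view
```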

Annotation queue

Mock tasks show how contributor work moves through labeling, submission, and review; a state-machine sketch of those transitions follows the queue.

ANN-710 (in progress)
Warehouse exception handling video set
File: pick-scan-exception-042.mp4
Task type: temporal segmentation
Assigned to: J. Okafor
Quality score: 91/100

ANN-708 (submitted)
ALOHA bimanual subset
File: tool-use-demo-118.hdf5
Task type: object labeling
Assigned to: Maya Chen
Quality score: 88/100

ANN-701 (needs fix)
Kitchen manipulation pilot
File: drawer-open-close-021.mp4
Task type: QA review
Assigned to: R. Alvarez
Quality score: 73/100
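
The transitions implied by these statuses can be sketched as a small state machine. Which moves are allowed is inferred, not documented, so treat the table as an assumption.

```typescript
// Allowed moves, inferred from the statuses visible in the queue plus the
// approve/reject outcomes named under "QA decision model". All assumptions.
type QueueStatus = "in progress" | "submitted" | "needs fix" | "approved" | "rejected";

const transitions: Record<QueueStatus, QueueStatus[]> = {
  "in progress": ["submitted"],
  "submitted": ["approved", "rejected", "needs fix"],
  "needs fix": ["in progress"],
  "approved": [],   // terminal: buyer-ready
  "rejected": [],   // terminal
};

function canMove(from: QueueStatus, to: QueueStatus): boolean {
  return transitions[from].includes(to);
}

canMove("submitted", "needs fix");  // => true
canMove("approved", "in progress"); // => false
```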

Workflow inside the refinery

The annotation surface is where the metaphor becomes operational: raw data becomes structured, reviewed, versioned, and useful. A pipeline sketch follows the steps below.

1. Raw video or robot episode enters a private workspace
2. Metadata and license context are normalized
3. Annotation tasks are assigned to qualified contributors
4. QA reviewers approve, reject, or request fixes
5. A buyer-ready dataset version is packaged for delivery
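
The five steps compose naturally as a pipeline. The sketch below stubs each stage; the `Episode` shape and every signature are hypothetical.

```typescript
// The five refinery steps as a typed stage pipeline. Stage names follow the
// list above; the Episode shape and all stage bodies are hypothetical stubs.
interface Episode {
  file: string;
  metadataNormalized: boolean;
  assignee?: string;
  qaPassed: boolean;
  packagedVersion?: string;
}

type Stage = (e: Episode) => Episode;

const ingest: Stage = (e) => e;                                       // 1. private workspace
const normalize: Stage = (e) => ({ ...e, metadataNormalized: true }); // 2. metadata + license
const assign: Stage = (e) => ({ ...e, assignee: "contributor-1" });   // 3. qualified contributor (placeholder id)
const review: Stage = (e) => ({ ...e, qaPassed: true });              // 4. QA decision (stubbed as approve)
const packageUp: Stage = (e) => ({ ...e, packagedVersion: "v1" });    // 5. buyer-ready version

const refinery: Stage[] = [ingest, normalize, assign, review, packageUp];

function run(episode: Episode): Episode {
  return refinery.reduce((acc, stage) => stage(acc), episode);
}

run({ file: "pick-scan-exception-042.mp4", metadataNormalized: false, qaPassed: false });
```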