Invariant action recognition dataset

Title: Invariant action recognition dataset
Publication Type: Dataset
Year of Publication: 2017
Authors: Tacchetti, A., Isik, L., Poggio, T.
Date Published: 11/2017
Abstract

To study the effect of changes in view and actor on action recognition, we filmed a dataset of five actors performing five different actions (drink, eat, jump, run and walk) on a treadmill from five different views (0, 45, 90, 135, and 180 degrees from the front of the actor/treadmill; the treadmill rather than the camera was rotated in place to acquire the different viewpoints). The dataset was filmed against a fixed, constant background. To avoid low-level object/action confounds (e.g. the action “drink” being classified as the only videos with a water bottle in the scene) and to guarantee that the main sources of variation in visual appearance are due to actions, actors and viewpoint, the actors held the same objects (an apple and a water bottle) in every video, regardless of the action they performed. This controlled design allows us to test hypotheses on the computational mechanisms underlying invariant recognition in the human visual system without having to settle for a synthetic dataset.

More information and the dataset files can be found at https://doi.org/10.7910/DVN/DMT0PG
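The dataset has a fully crossed factorial design (5 actors × 5 actions × 5 viewpoints, i.e. 125 condition combinations). The sketch below simply enumerates that design in Python to illustrate its structure; the file naming pattern and directory layout shown are assumptions for illustration, not the published layout of the Dataverse files.

    # Minimal sketch of the dataset's factorial design (actors x actions x views).
    # NOTE: the filename pattern and root directory are hypothetical assumptions.
    from itertools import product
    from pathlib import Path

    ACTORS = [f"actor{i}" for i in range(1, 6)]        # five actors
    ACTIONS = ["drink", "eat", "jump", "run", "walk"]   # five actions
    VIEWS = [0, 45, 90, 135, 180]                       # degrees from the front

    def expected_clips(root="invariant_actions"):
        """Enumerate every actor/action/view combination as a candidate clip path."""
        for actor, action, view in product(ACTORS, ACTIONS, VIEWS):
            yield Path(root) / f"{actor}_{action}_{view:03d}.mp4"

    if __name__ == "__main__":
        clips = list(expected_clips())
        print(f"{len(clips)} combinations")  # 5 * 5 * 5 = 125
        print(clips[0])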

URL: https://doi.org/10.7910/DVN/DMT0PG
Citation Key: 3162

Research Area: 

CBMM Relationship: 

  • CBMM Funded