PHAV

Procedural generation of videos to train deep action recognition networks

Released in: Procedural Generation of Videos to Train Deep Action Recognition Networks

Summary

Deep learning for human action recognition in videos is making significant progress, but is slowed down by its dependency on expensive manual labeling of large video collections. In this work, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for “Procedural Human Action Videos”. It contains a total of 39,982 videos, with more than 1,000 examples for each of its 35 action categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We introduce a deep multi-task representation learning architecture to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF101 and HMDB51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, significantly outperforming fine-tuning state-of-the-art unsupervised generative models of videos.
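The multi-task idea above pairs a shared video representation with separate classification heads for the real and synthetic label spaces, so clips from both domains can be mixed in one training run even when their category sets differ. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the linear placeholder backbone, feature sizes, and the 101/35 class counts (standing in for UCF101 and PHAV) are illustrative assumptions, and in the paper the shared representation comes from a spatio-temporal CNN.

import torch
import torch.nn as nn


class MultiTaskActionNet(nn.Module):
    def __init__(self, feat_dim=512, n_real=101, n_synth=35):
        super().__init__()
        # Placeholder backbone: in practice this would be a spatio-temporal
        # (e.g. two-stream or 3D-conv) network producing per-clip features.
        self.backbone = nn.Sequential(
            nn.Linear(2048, feat_dim),
            nn.ReLU(inplace=True),
        )
        self.head_real = nn.Linear(feat_dim, n_real)    # real-world action classes
        self.head_synth = nn.Linear(feat_dim, n_synth)  # procedural (PHAV) classes

    def forward(self, clip_features):
        shared = self.backbone(clip_features)
        return self.head_real(shared), self.head_synth(shared)


def mixed_batch_loss(model, feats, labels, is_synth):
    # Each clip contributes only to the head matching its domain, so the
    # label spaces of the real and synthetic datasets never need to align.
    logits_real, logits_synth = model(feats)
    ce = nn.CrossEntropyLoss()
    loss = feats.new_zeros(())
    if (~is_synth).any():
        loss = loss + ce(logits_real[~is_synth], labels[~is_synth])
    if is_synth.any():
        loss = loss + ce(logits_synth[is_synth], labels[is_synth])
    return loss


# Toy usage with random tensors standing in for extracted clip features.
model = MultiTaskActionNet()
feats = torch.randn(8, 2048)
labels = torch.randint(0, 35, (8,))
is_synth = torch.tensor([True, False] * 4)
print(mixed_batch_loss(model, feats, labels, is_synth).item())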

39,982

Videos in dataset

2017

Year Released

Key Links & Stats

@inproceedings{DeSouza:Procedural:CVPR2017,
  author    = {De Souza, C. R. and Gaidon, A. and Cabon, Y. and Lopez Pena, A. M.},
  title     = {Procedural Generation of Videos to Train Deep Action Recognition Networks},
  booktitle = {CVPR},
  year      = {2017}
}

Modalities

  1. Still Image
  2. Video
  3. RGB-D
  4. Stereo

Verticals

  1. Digital Human

ML Task

  1. Object Detection
  2. Semantic Segmentation
  3. Instance Segmentation
  4. Human Pose Estimation
  5. Activity Recognition
  6. Depth Estimation
  7. Object Recognition
  8. Object Tracking

Related organizations

NAVER LABS Europe

Universitat Autònoma de Barcelona

Toyota Research Institute