VEIS

Virtual environment for instance segmentation

Released in: Effective Use of Synthetic Data for Urban Scene Semantic Segmentation

Contributor:

Summary

Training a deep network to perform semantic segmentation requires large amounts of labeled data. To alleviate the manual effort of annotating real images, researchers have investigated the use of synthetic data, which can be labeled automatically. Unfortunately, a network trained on synthetic data performs relatively poorly on real images. While this can be addressed by domain adaptation, existing methods all require access to real images during training. In this paper, the authors introduce a drastically different way to handle synthetic images that does not require seeing any real images at training time. This approach builds on the observation that foreground and background classes are not affected in the same manner by the domain shift, and thus should be treated differently. In particular, foreground classes should be handled in a detection-based manner to better account for the fact that, while their texture in synthetic images is not photo-realistic, their shape looks natural. The experiments demonstrate the effectiveness of this approach on Cityscapes and CamVid with models trained on synthetic data only.
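The core idea above can be illustrated with a minimal sketch: take a per-pixel background semantic prediction and overlay detection-style foreground instance masks on top of it. All names, shapes, and class IDs below are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

# Hypothetical illustration of treating foreground and background differently:
# background classes come from a standard semantic-segmentation prediction,
# while foreground objects are pasted in from detection-based instance masks.

H, W = 4, 6

# Background semantic prediction: per-pixel class IDs
# (e.g., 0 = road, 1 = building; assumed label space).
background_pred = np.zeros((H, W), dtype=np.int64)
background_pred[:2, :] = 1

# Detection-based foreground: (binary mask, class ID) per detected instance
# (e.g., 13 = car in a Cityscapes-like label space; assumed here).
car_mask = np.pad(np.ones((2, 2), dtype=bool), ((1, 1), (1, 3)))
foreground_instances = [(car_mask, 13)]

# Fuse: start from the background prediction, then paste each foreground
# instance mask on top, so detected foreground shapes override background labels.
fused = background_pred.copy()
for mask, class_id in foreground_instances:
    fused[mask] = class_id

print(fused)
```

This only shows the fusion step; how the detection-based foreground model and the background segmentation network are trained on synthetic data is described in the paper itself.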

61305 images

Images in dataset

2018

Year Released

Key Links & Stats

VEIS

@inproceedings{sadat2018effective,
  title={Effective Use of Synthetic Data for Urban Scene Semantic Segmentation},
  author={Sadat Saleh, Fatemeh and Sadegh Aliakbarian, Mohammad and Salzmann, Mathieu and Petersson, Lars and Alvarez, Jose M},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={84--100},
  year={2018}
}

scenebox

Modalities

  1. Still Image

Verticals

  1. A/V

ML Task

  1. Object Detection
  2. Semantic Segmentation
  3. Instance Segmentation
  4. Autonomous Vehicles
  5. Scene Understanding

Related organizations