NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Released in: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Source: arXiv - NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Summary

We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x,y,z) and viewing direction (θ,ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
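
The abstract describes the core architecture: a plain fully connected network that maps a 5D coordinate to a volume density and a view-dependent color. Below is a minimal sketch of that idea, assuming PyTorch; the class name TinyNeRF, the layer widths, and the omission of the paper's positional encoding and skip connections are all illustrative simplifications, not the authors' exact network.

import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    # Hypothetical simplification of the NeRF MLP: a 5D input
    # (3D position + 2D viewing direction) maps to density + RGB radiance.
    def __init__(self, hidden: int = 256):
        super().__init__()
        # Trunk sees position (x, y, z) only, so density is view-independent.
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        # Color head also receives the viewing direction (theta, phi),
        # producing the view-dependent emitted radiance from the abstract.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + 2, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(xyz)                        # (N, hidden) position features
        sigma = torch.relu(self.density_head(h))   # (N, 1) non-negative density
        rgb = torch.sigmoid(self.color_head(torch.cat([h, view_dir], dim=-1)))
        return sigma, rgb

# Query the field at a batch of random 5D coordinates.
model = TinyNeRF()
sigma, rgb = model(torch.rand(1024, 3), torch.rand(1024, 2))

Splitting the heads this way mirrors the abstract: density depends on spatial location alone, while emitted radiance also depends on the viewing direction.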

Images in dataset: Unknown

Year Released: 2020

Key Links & Stats

GitHub: bmild/nerf

@inproceedings{mildenhall2020nerf,
  title={NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis},
  author={Ben Mildenhall and Pratul P. Srinivasan and Matthew Tancik and Jonathan T. Barron and Ravi Ramamoorthi and Ren Ng},
  booktitle={ECCV},
  year={2020},
}

Modalities

  1. Video

Verticals

  1. Satellite
  2. A/V

ML Task

  1. NeRF
