GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields
Alessandro Ruzzi (ETH Zürich)
Xiangwei Shi (TU Delft - Pattern Recognition and Bioinformatics)
Xi Wang (ETH Zürich)
Gengyan Li (ETH Zürich)
Shalini De Mello (NVIDIA)
Hyung Jin Chang (University of Birmingham)
X. Zhang (TU Delft - Pattern Recognition and Bioinformatics)
Otmar Hilliges (ETH Zürich)
Abstract
We propose GazeNeRF, a 3D-aware method for the task of gaze redirection. Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results. Instead, we build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion. Our method leverages recent advancements in conditional image-based neural radiance fields and proposes a two-stream architecture that predicts volumetric features for the face and eye regions separately. Rigidly transforming the eye features via a 3D rotation matrix provides fine-grained control over the desired gaze angle. The final, redirected image is then attained via differentiable volume compositing. Our experiments show that this architecture outperforms naively conditioned NeRF baselines as well as previous state-of-the-art 2D gaze redirection methods in terms of redirection accuracy and identity preservation. Code and models will be released for research purposes.
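The core idea of rigidly transforming the eye-region features by a 3D rotation matrix can be illustrated with a small sketch. The code below is a hedged, minimal illustration (not the authors' implementation): `gaze_rotation_matrix` assumes a hypothetical pitch/yaw convention (rotation about the x-axis, then the y-axis), and `redirect_eye_points` shows how 3D sample points in the eye region could be mapped from a source gaze to a target gaze by composing the two rotations.

```python
import numpy as np

def gaze_rotation_matrix(pitch: float, yaw: float) -> np.ndarray:
    """Build a 3x3 rotation matrix from gaze pitch/yaw in radians.
    Hypothetical convention: rotate about x by pitch, then about y by yaw."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  cp, -sp],
                   [0.0,  sp,  cp]])
    Ry = np.array([[ cy, 0.0,  sy],
                   [0.0, 1.0, 0.0],
                   [-sy, 0.0,  cy]])
    return Ry @ Rx

def redirect_eye_points(points: np.ndarray,
                        src_gaze: tuple,
                        tgt_gaze: tuple) -> np.ndarray:
    """Rigidly rotate eye-region sample points (N, 3) from a source
    gaze direction to a target gaze direction."""
    R_src = gaze_rotation_matrix(*src_gaze)
    R_tgt = gaze_rotation_matrix(*tgt_gaze)
    # Undo the source gaze rotation, then apply the target gaze rotation.
    R = R_tgt @ R_src.T
    return points @ R.T
```

Because the transform is a pure rotation, it preserves distances within the eye region; only the orientation of the eyeball features changes, which is what gives fine-grained control over the output gaze angle.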