In recent years, commodity 3D sensors have become widely available, spawning significant interest in both offline and real-time 3D reconstruction. While state-of-the-art reconstruction results from commodity RGB-D sensors are visually appealing, they remain far from usable in practical computer graphics applications, since they do not match the high quality of artist-modeled 3D content. One of the biggest challenges in this context is that captured 3D scans suffer from occlusions, resulting in incomplete 3D models. In this talk, I will present a data-driven approach to generating high-quality 3D models from commodity scan data, and show how these geometrically complete 3D models enable semantic and texture understanding of real-world environments.
Biography: Angela Dai is a junior research group leader at the Technical University of Munich. Her research focuses on creating high-quality 3D models of real-world environments, with the aim of enabling human-level scene understanding and democratizing 3D scanning for content creation and mixed reality scenarios. She completed her Ph.D. in Computer Science at Stanford University, advised by Pat Hanrahan. During her Ph.D., she advanced real-time 3D reconstruction and built on this work to develop machine learning approaches that improve reconstruction quality as well as semantic and instance understanding of 3D scans. Angela received her Bachelor's degree in Computer Science from Princeton University. Her work has been recognized with a Professor Michael J. Flynn Stanford Graduate Fellowship and a €1.25 million ZD.B junior research group award.