

Learning Non-rigid Optimization

  • 10 July 2020 • 15:00—16:00
  • Matthias Nießner
  • Remote talk on Zoom

Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we believe can be attributed to the lack of a large-scale training corpus. One recent approach proposes self-supervision based on non-rigid reconstruction. Unfortunately, this method fails for important cases such as highly non-rigid deformations. We address this lack of data by introducing a novel semi-supervised strategy to obtain dense interframe correspondences from a sparse set of annotations. This way, we obtain a large dataset of 400 scenes, over 390,000 RGB-D frames, and 2,537 densely aligned frame pairs; in addition, we provide a test set along with several metrics for evaluation. Based on this corpus, we introduce a data-driven non-rigid feature matching approach, which we integrate into an optimization-based reconstruction pipeline. Here, we propose a new neural network that operates on RGB-D frames, while maintaining robustness under large non-rigid deformations and producing accurate predictions. Our approach significantly outperforms both existing non-rigid reconstruction methods that do not use learned data terms and learning-based approaches that rely only on self-supervision.
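The idea of integrating learned correspondences into an optimization pipeline can be sketched as a data term that penalizes the distance between deformed source points and their network-predicted matches. This is a minimal, hypothetical illustration; the actual pipeline optimizes a full deformation model with additional regularizers.

```python
# Illustrative data term for a correspondence-driven non-rigid optimization.
# "predicted_matches" stands in for the output of a learned feature-matching
# network; the function name and setup are assumptions, not the paper's API.

def data_term(deformed_points, predicted_matches):
    """Sum of squared distances between deformed source points and their
    network-predicted target correspondences."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, q))
        for p, q in zip(deformed_points, predicted_matches)
    )
```

In a full solver this term would be minimized jointly with a deformation-smoothness regularizer over the warp field.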

Organizers: Vassilis Choutas

Towards Commodity 3D Scanning for Content Creation

  • 16 July 2020 • 16:00—17:30
  • Angela Dai

In recent years, commodity 3D sensors have become widely available, spawning significant interest in both offline and real-time 3D reconstruction. While state-of-the-art reconstruction results from commodity RGB-D sensors are visually appealing, they are far from usable in practical computer graphics applications since they do not match the high quality of artist-modeled 3D graphics content. One of the biggest challenges in this context is that obtained 3D scans suffer from occlusions, thus resulting in incomplete 3D models. In this talk, I will present a data-driven approach towards generating high quality 3D models from commodity scan data, and the use of these geometrically complete 3D models towards semantic and texture understanding of real-world environments.

Organizers: Yinghao Huang

Intelligent Systems Summer Colloquium 2020 (Virtual Event)

IS Colloquium
  • 24 July 2020 • 14:00—17:00
  • Virtual Event

The MPI-IS cordially invites you to the 2020 Summer Colloquium

  • Dushyant Mehta

In our recent work, XNect, we propose a real-time solution for the challenging task of multi-person 3D human pose estimation from a single RGB camera. To achieve real-time performance without compromising on accuracy, our approach relies on a new efficient Convolutional Neural Network architecture and a multi-staged pose formulation. The CNN architecture is approx. 1.3x faster than ResNet-50, while achieving the same accuracy on various tasks, and the benefits extend beyond inference speed to a much smaller training memory footprint and a much higher training throughput. The proposed pose formulation jointly reasons about all the subjects in the scene, ensuring that pose inference can be done in real time even with a large number of subjects in the scene. The key insight behind the accuracy of the formulation is to split the reasoning about human pose into two distinct stages. The first stage, which is fully convolutional, infers the 2D and 3D pose of body parts supported by image evidence, and reasons jointly about all subjects. The second stage, which is a small fully connected network, operates on each individual subject, and uses the context of the visible body parts and learned pose priors to infer the 3D pose of the missing body parts. A third stage on top reconciles the 2D and 3D poses per frame and across time, to produce a temporally stable kinematic skeleton. In this talk, we will briefly discuss the proposed Convolutional Neural Network architecture and the possible benefits it might bring to your workflow. The second part of the talk will cover how the pose formulation proposed in this work came to be, what its advantages are, and how it can be extended to other related problems.
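The three-stage formulation described above can be sketched in miniature: a first stage that yields only the joints supported by image evidence, a second per-subject stage that completes missing joints from a pose prior, and a third stage that smooths poses over time. Everything here (the mean-pose "prior", the joint names, the smoothing rule) is an illustrative stand-in, not XNect's actual model.

```python
# Hypothetical sketch of a staged pose pipeline in the spirit of XNect.
# The mean pose used as a "learned prior" is made up for illustration.

MEAN_POSE = {"head": (0.0, 1.7, 0.0), "hand": (0.3, 1.0, 0.0), "foot": (0.1, 0.0, 0.0)}

def stage2_complete_pose(partial, prior=MEAN_POSE):
    """Per-subject stage: fill in joints missing from image evidence
    using the visible context and a pose prior (here, a mean pose)."""
    return {j: (p if p is not None else prior[j]) for j, p in partial.items()}

def stage3_temporal_smooth(prev, cur, alpha=0.8):
    """Reconcile poses across time: exponential smoothing toward the
    current frame's estimate for a temporally stable skeleton."""
    if prev is None:
        return cur
    return {j: tuple(alpha * c + (1 - alpha) * q for c, q in zip(cur[j], prev[j]))
            for j in cur}
```

A stage-1 network would produce the `partial` dictionaries (one per subject), after which stages 2 and 3 run cheaply per subject and per frame.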

Organizers: Yinghao Huang

Machine Learning for Covid-19 Risk Awareness from Contact Tracing

Max Planck Lecture
  • 23 June 2020 • 17:30
  • Yoshua Bengio
  • Virtual Event

The Covid-19 pandemic has spread rapidly worldwide, overwhelming manual contact tracing in many countries, resulting in widespread lockdowns for emergency containment. Large-scale digital contact tracing (DCT) has emerged as a potential solution to resume economic and social activity without triggering a second outbreak. Various DCT methods have been proposed, each making trade-offs between privacy, mobility restriction, and public health. Many approaches model infection and encounters as binary events. With such approaches, called binary contact tracing, once a case is confirmed by a positive lab test result, it is propagated to people who were contacts of the infected person, typically recommending that these individuals should self-quarantine. This approach ignores the inherent uncertainty in contacts and the infection process, which could be used to tailor messaging to high-risk individuals, and prompt proactive testing or earlier self-quarantine. It also does not make use of observations such as symptoms or pre-existing medical conditions, which could be used to make more accurate risk predictions. Methods which may use such information have been proposed, but these typically require access to the graph of social interactions and/or centralization of sensitive personal data, which is incompatible with reasonable privacy and security constraints. We use an agent-based epidemiological simulation to develop and test ML methods that can be deployed to a smartphone to locally predict an individual's risk of infection from their contact history and other information, while respecting strong privacy and security constraints. We use this risk score to provide personalized recommendations to the user via an app, an approach we call probabilistic risk awareness (PRA). We show that PRA can significantly reduce spread of the disease compared to other methods, for equivalent average mobility and realistic assumptions about app adoption, and thereby save lives.
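The contrast drawn above, between binary contact tracing and a probabilistic risk score, can be illustrated with a toy sketch. The combination rule and all numbers below are illustrative assumptions, not the actual ML method deployed in the app.

```python
# Toy contrast: binary contact tracing vs. a probabilistic risk score.
# Both functions are illustrative sketches of the ideas described above.

def binary_tracing(contacts_of, positive_cases):
    """Binary CT: every contact of a lab-confirmed case is told to quarantine,
    regardless of how risky each encounter actually was."""
    quarantine = set()
    for case in positive_cases:
        quarantine.update(contacts_of.get(case, ()))
    return quarantine

def risk_score(contact_risks, symptom_factor=1.0):
    """Probabilistic risk sketch: combine per-contact infection probabilities
    (assuming independence) and modulate by observations such as symptoms."""
    p_no_infection = 1.0
    for p in contact_risks:
        p_no_infection *= (1.0 - p)
    return min(1.0, (1.0 - p_no_infection) * symptom_factor)
```

The binary scheme returns the same recommendation for a fleeting encounter and a prolonged one; the graded score lets messaging be tailored to high-risk individuals.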

Organizers: Michael Black, Bernhard Schölkopf, Julia Braun, Oliwia Gust

The sound of fermions

Physics Colloquium
  • 16 June 2020 • 16:15—18:15
  • Martin Zwierlein
  • WebEx (https://mpi-is.webex.com/mpi-is/onstage/g.php?MTID=e2189612fea810cac733067ed5b121127)

Fermions, particles with half-integer spin like the electron, proton and neutron, obey the Pauli principle: They cannot share one and the same quantum state. This “anti-social” behavior is directly observed in experiments with ultracold gases of fermionic atoms: Pauli blocking in momentum space for a free Fermi gas, and in real space in gases confined to an optical lattice. When fermions interact, new, rather “social” behavior emerges, i.e. hydrodynamic flow, superfluidity and magnetism. The interplay of Pauli’s principle and strong interactions poses great difficulties to our understanding of complex Fermi systems, from nuclei to high-temperature superconducting materials and neutron stars. I will describe experiments on atomic Fermi gases where interactions become as strong as allowed by quantum mechanics – the unitary Fermi gas, fermions immersed in a Bose gas and the Fermi-Hubbard lattice gas. Sound and heat transport distinguish collisionally hydrodynamic from superfluid flow, while spin transport reveals the underlying mechanism responsible for quantum magnetism.

  • Scott Eaton

In this visual feast, Scott recounts results and revelations from four years of experimentation using machine learning as a ‘creative collaborator’ in his artistic process. He makes the case that AI, rather than rendering artists obsolete, will empower us and expand our creative horizons. Scott shares an eclectic range of successes and failures encountered in his efforts to create powerful but artistically controllable neural networks to use as tools to represent and abstract the human figure. He also gives a behind-the-scenes look at creating the work for his recent Artist+AI exhibition in London.

Organizers: Ahmed Osman

Canonicalization for 3D Perception

  • 10 June 2020 • 16:00—17:00
  • Srinath Sridhar
  • Remote talk on Zoom

In this talk, I will introduce the notion of 'canonicalization' and how it can be used to solve 3D computer vision tasks. I will describe Normalized Object Coordinate Space (NOCS), a 3D canonical container that we have developed for 3D estimation, aggregation, and synthesis tasks. I will demonstrate how NOCS allows us to address previously difficult tasks like category-level 6DoF object pose estimation, and correspondence-free multiview 3D shape aggregation. Finally, I will discuss future directions including opportunities to extend NOCS for tasks like articulated and non-rigid shape and pose estimation.
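The core idea of canonicalization can be sketched very simply: map an object's points into a normalized, zero-centered container so that downstream tasks operate in a shared frame. The function below is a minimal illustration only; NOCS additionally fixes a canonical, category-consistent orientation, which this sketch omits.

```python
# Minimal sketch of canonicalizing a 3D point cloud into a zero-centered
# container with unit maximum extent, in the spirit of NOCS (illustrative;
# the real NOCS also canonicalizes orientation per object category).

def canonicalize(points):
    """Map 3D points into a zero-centered cube whose longest side is 1."""
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    center = [(lo + hi) / 2.0 for lo, hi in zip(mins, maxs)]
    scale = max(hi - lo for lo, hi in zip(mins, maxs)) or 1.0
    return [tuple((p[i] - center[i]) / scale for i in range(3)) for p in points]
```

Once observations live in such a canonical space, pose estimation reduces to recovering the transform between the camera frame and the canonical frame, and multiview aggregation needs no explicit correspondences.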

Organizers: Timo Bolkart

  • Manuel Gomez Rodriguez
  • Zoom & YouTube

Motivated by the current COVID-19 outbreak, we introduce a novel epidemic model based on marked temporal point processes that is specifically designed to make fine-grained spatiotemporal predictions about the course of the disease in a population. Our model can make use and benefit from data gathered by a variety of contact tracing technologies and it can quantify the effects that different testing and tracing strategies, social distancing measures, and business restrictions may have on the course of the disease. Building on our model, we use Bayesian optimization to estimate the risk of exposure of each individual at the sites they visit from historical longitudinal testing data. Experiments using real COVID-19 data and mobility patterns from several towns and regions in Germany and Switzerland demonstrate that our model can be used to quantify the effects of tracing, testing, and containment strategies at an unprecedented spatiotemporal resolution. To facilitate research and informed policy-making, particularly in the context of the current COVID-19 outbreak, we are releasing an open-source implementation of our framework at https://github.com/covid19-model.
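One ingredient of such a point-process model can be sketched as a site-exposure intensity that decays with the time elapsed since an infectious individual's visit. The functional form and rates below are made-up illustrations, not the parameters of the released framework.

```python
import math

# Illustrative exposure intensity for a marked temporal point process:
# lambda(t) = beta * sum over past infectious visits of exp(-decay * (t - t_visit)).
# beta and decay are hypothetical parameters chosen for illustration.

def exposure_intensity(t, infectious_visits, beta=1.0, decay=0.5):
    """Risk intensity at a site at time t, given times of past visits
    by infectious individuals (future visits contribute nothing)."""
    return beta * sum(math.exp(-decay * (t - tv))
                      for tv in infectious_visits if tv <= t)
```

Sampling exposure events from such an intensity (e.g. by thinning) is what lets the model make fine-grained spatiotemporal predictions and compare containment strategies.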

Organizers: Bernhard Schölkopf

Towards spectro-microscopy at extreme limits

Physics Colloquium
  • 09 June 2020 • 16:15—18:15
  • Hanieh Fattahi
  • WebEx (https://mpi-is.webex.com/mpi-is/onstage/g.php?MTID=e5f2396ebed9860a681f9f98b288d9ec7)

This talk is devoted to modern methods for attosecond and femtosecond laser spectro-microscopy, with a special focus on applications that require extreme spatial resolution. In the first part, I discuss how high-harmonic generation by high-energy, high-power light transients holds promise to deliver the required photon flux and photon energy for attosecond pump-probe spectroscopy at high spatiotemporal resolution, in order to capture electron dynamics in matter. I demonstrate the first prototype high-energy field synthesizer based on Yb:YAG thin-disk laser technology for generating high-energy light transients. In the second part of my talk, I show resolving the complex electric field of light at PHz frequency by means of electro-optic sampling in ambient air, and discuss the potential of the technique in molecular spectroscopy and high-resolution, label-free imaging.

  1. A. Alismail et al., "Multi-octave, CEP-stable source for high-energy field synthesis," Science Advances 6, eaax3408 (2020)
  2. H. Wang et al., "High Energy, Sub-Cycle, Field Synthesizers," IEEE Journal of Selected Topics in Quantum Electronics (2019)
  3. A. Sommer et al., "Attosecond nonlinear polarization and energy transfer in dielectrics," Nature 534, 86 (2016)
  4. H. Fattahi, "Sub-cycle light transients for attosecond, X-ray, four-dimensional imaging," Contemporary Physics 57, 1 (2016)
  5. H. Fattahi et al., "Third-generation femtosecond technology," Optica 1, 45 (2014)

Robotic Manipulation: a Focus on Object Handovers

  • 09 June 2020 • 10:00—11:00
  • Valerio Ortenzi
  • Remote talk on Zoom

Humans perform object manipulation in order to execute a specific task. Seldom is such action started with no goal in mind. In contrast, traditional robotic grasping (first stage for object manipulation) seems to focus purely on getting hold of the object—neglecting the goal of the manipulation. In this light, most metrics used in robotic grasping do not account for the final task in their judgement of quality and success. Since the overall goal of a manipulation task shapes the actions of humans and their grasps, the task itself should shape the metric of success. To this end, I will present a new metric centred on the task. The task is also very important in another action of object manipulation: the object handover. In the context of object handovers, humans display a high degree of flexibility and adaptation. These characteristics are key for robots to be able to interact with the same fluency and efficiency with humans. I will present my work on human-human and robot-human handovers and explain why an understanding of the task is of importance for robotic grasping.

Organizers: Katherine J. Kuchenbecker

AirCap – Aerial Outdoor Motion Capture

  • 18 May 2020 • 14:00—15:00
  • Aamir Ahmad
  • Remote talk on Zoom

In this talk I will present an overview and the latest results of the project Aerial Outdoor Motion Capture (AirCap), running at the Perceiving Systems department. AirCap's goal is to achieve markerless and unconstrained human motion capture (MoCap) in unknown and unstructured outdoor environments. To this end, we have developed a flying MoCap system using a team of autonomous aerial robots with on-board, monocular RGB cameras. Our system is endowed with a range of novel functionalities developed by our group over the last 3 years. These include i) cooperative detection and tracking that enables the use of DNN-based detectors on board flying robots, ii) active cooperative perception in aerial robot teams to minimize joint tracking uncertainty, and iii) markerless human pose and shape estimation using images acquired from multiple views and approximately calibrated cameras. We have conducted several real experiments along with ground truth comparisons to validate our system. Overall, for outdoor scenarios we have demonstrated the first fully autonomous flying MoCap system involving multiple aerial robots.

Organizers: Katherine J. Kuchenbecker

Deep inverse rendering in the wild

  • 15 May 2020 • 11:00—12:00
  • Will Smith
  • Remote talk on Zoom

In this talk I will consider the problem of scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled, outdoor image. This task is highly ill-posed, but we show that multiview self-supervision, a natural lighting prior and implicit lighting estimation allow an image-to-image CNN to solve the task, seemingly learning some general principles of shape-from-shading along the way. Adding a neural renderer and a sky generator GAN, our approach allows us to synthesise photorealistic relit images under widely varying illumination. I will finish by briefly describing recent work in which some of these ideas have been combined with deep face model fitting, replacing parameter regression with correspondence prediction and enabling fully unsupervised training.

Organizers: Timo Bolkart