Institute Talks

Neural networks discovering quantum error correction strategies

IS Colloquium
  • 28 January 2019 • 11:15–12:15
  • Florian Marquardt
  • MPI IS building, Max Planck Ring 4, N0.002 (main seminar room)

Machine learning with artificial neural networks is revolutionizing science. The most advanced challenges require discovering answers autonomously. In the domain of reinforcement learning, control strategies are improved according to a reward function. The power of neural-network-based reinforcement learning has been highlighted by spectacular recent successes such as playing Go, but its benefits for physics are yet to be demonstrated. Here, we show how a network-based "agent" can discover complete quantum-error-correction strategies, protecting a collection of qubits against noise. These strategies require feedback adapted to measurement outcomes. Finding them from scratch without human guidance and tailored to different hardware resources is a formidable challenge due to the combinatorially large search space. To solve this challenge, we develop two ideas: two-stage learning with teacher and student networks and a reward quantifying the capability to recover the quantum information stored in a multiqubit system. Beyond its immediate impact on quantum computation, our work more generally demonstrates the promise of neural-network-based reinforcement learning in physics.
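As a rough illustration of the two-stage scheme described above, the sketch below trains a toy "teacher" policy by policy-gradient reinforcement learning and then distils it into a "student" policy by supervised imitation. The environment, reward, network sizes, and learning rates are all stand-in assumptions; in the actual work the reward quantifies the recoverable quantum information of a simulated multi-qubit system, which is not reproduced here.

```python
# A minimal sketch (not the authors' code) of the two-stage idea: a "teacher"
# policy is trained by REINFORCE on a toy environment whose reward stands in
# for the recoverable quantum information, and a "student" policy is then fit
# by supervised imitation of the teacher.
import numpy as np

rng = np.random.default_rng(0)
N_OBS, N_ACT = 4, 3          # toy observation / action sizes (assumed)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class ToyEnv:
    """Stand-in for a simulated qubit register; reward mimics 'recoverable information'."""
    def reset(self):
        self.obs = rng.normal(size=N_OBS)
        return self.obs
    def step(self, action):
        # Reward is higher when the action matches a hidden 'correct' correction.
        target = int(np.argmax(self.obs[:N_ACT]))
        reward = 1.0 if action == target else 0.0
        self.obs = rng.normal(size=N_OBS)
        return self.obs, reward

# ---- Stage 1: train the teacher with REINFORCE ----
W_teacher = np.zeros((N_ACT, N_OBS))
env, lr = ToyEnv(), 0.1
for episode in range(2000):
    obs = env.reset()
    grads, rewards = [], []
    for t in range(10):
        p = softmax(W_teacher @ obs)
        a = rng.choice(N_ACT, p=p)
        one_hot = np.eye(N_ACT)[a]
        grads.append(np.outer(one_hot - p, obs))   # d log pi / d W
        obs, r = env.step(a)
        rewards.append(r)
    ret = np.sum(rewards)
    for g in grads:
        W_teacher += lr * ret * g / len(grads)

# ---- Stage 2: distil the teacher into a student by supervised imitation ----
W_student = np.zeros((N_ACT, N_OBS))
for step in range(2000):
    obs = env.reset()
    target = softmax(W_teacher @ obs)              # teacher's action distribution
    p = softmax(W_student @ obs)
    W_student += lr * np.outer(target - p, obs)    # cross-entropy gradient step
```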


References: Thomas Fösel, Petru Tighineanu, Talitha Weiss, and Florian Marquardt, "Reinforcement Learning with Neural Networks for Quantum Feedback", Physical Review X 8(3) (2018)

Organizers: Matthias Bauer

  • Umar Iqbal
  • PS Aquarium

In this talk, I will present an overview of my Ph.D. research towards articulated human pose estimation from unconstrained images and videos. In the first part of the talk, I will present an approach that jointly models multi-person pose estimation and tracking in a single formulation. The approach represents body joint detections in a video by a spatiotemporal graph and solves an integer linear program to partition the graph into sub-graphs that correspond to plausible body pose trajectories for each person. I will also introduce the PoseTrack dataset and benchmark, which is now the de facto standard for multi-person pose estimation and tracking. In the second half of the talk, I will present a new method for 3D pose estimation from a monocular image through a novel 2.5D pose representation. The 2.5D representation can be reliably estimated from an RGB image, and it allows exact reconstruction of the absolute 3D body pose up to a scaling factor, which can be estimated additionally if a prior on body size is given. I will also describe a novel CNN architecture that implicitly learns the heatmaps and depth maps for human body keypoints from a single RGB image.
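As a hedged illustration of the 2.5D idea mentioned above (not the speaker's exact formulation), the sketch below lifts 2D keypoints plus root-relative depths to an absolute 3D pose by perspective back-projection once the absolute depth of the root joint is fixed, e.g. from a prior on body size. The camera intrinsics, keypoints, and depths are made-up values.

```python
# A minimal sketch of lifting a 2.5D pose (2D keypoints + root-relative depths)
# to absolute 3D by back-projection. All names and values are illustrative
# assumptions, not the method presented in the talk.
import numpy as np

def lift_2p5d_to_3d(kpts_2d, rel_depths, K, root_depth):
    """kpts_2d: (J, 2) pixel coords; rel_depths: (J,) depths relative to the root;
    K: 3x3 camera intrinsics; root_depth: absolute depth of the root joint in metres."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    Z = root_depth + rel_depths                    # absolute depth per joint
    X = (kpts_2d[:, 0] - cx) * Z / fx
    Y = (kpts_2d[:, 1] - cy) * Z / fy
    return np.stack([X, Y, Z], axis=1)             # (J, 3) camera-frame pose

# Toy usage with made-up numbers:
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
kpts = np.array([[320.0, 240.0], [350.0, 300.0]])  # root joint + one more joint
rel_z = np.array([0.0, 0.1])
print(lift_2p5d_to_3d(kpts, rel_z, K, root_depth=3.0))
```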

Organizers: Dimitris Tzionas


  • Prof. Dr. Rahmi Oklu
  • 3P02

Minimally invasive approaches to vascular disease and cancer have revolutionized medicine. I will discuss novel approaches to vascular bleeding, aneurysm treatment and tumor ablation.

Organizers: Metin Sitti


  • Prof. Eric Tytell
  • MPI-IS Stuttgart, Werner-Köster lecture hall

Many fishes swim efficiently over long distances to find food or during migrations, but they also have to accelerate rapidly to escape predators. These two behaviors require different body mechanics: for efficient swimming, fish should be very flexible, but for rapid acceleration, they should be stiffer. Here, I will discuss recent experiments showing that fish can use their muscles to tune their effective body mechanics. Control strategies inspired by muscle activity in fishes may help in designing better soft robotic devices.

Organizers: Ardian Jusufi


  • Prof. Dr. Stefan Roth
  • N0.002

Supervised learning with deep convolutional networks is the workhorse of the majority of computer vision research today. While much progress has already been made by exploiting deep architectures with standard components, enormous datasets, and massive computational power, I will argue that it pays to scrutinize some of the components of modern deep networks. I will begin by looking at the common pooling operation and show how standard pooling layers can be replaced with a perceptually motivated alternative, with consistent gains in accuracy. Next, I will show how we can leverage self-similarity, a well-known concept from the study of natural images, to derive non-local layers for various vision tasks that boost discriminative power. Finally, I will present a lightweight approach to obtaining predictive probabilities in deep networks, allowing one to judge the reliability of the predictions.
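To make the self-similarity idea concrete, here is a minimal sketch of a generic non-local block in PyTorch; it is not the specific layer presented in the talk, and the channel sizes are assumptions. Each spatial position is updated with a similarity-weighted sum over all other positions.

```python
# A generic non-local (self-similarity) layer: pairwise feature similarities
# weight a sum over all spatial positions, added back residually.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv2d(channels, reduced, kernel_size=1)  # query
        self.phi = nn.Conv2d(channels, reduced, kernel_size=1)    # key
        self.g = nn.Conv2d(channels, reduced, kernel_size=1)      # value
        self.out = nn.Conv2d(reduced, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise similarities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

# Toy usage:
feat = torch.randn(1, 32, 16, 16)
print(NonLocalBlock(32)(feat).shape)   # torch.Size([1, 32, 16, 16])
```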

Organizers: Michael Black


A fine-grained perspective onto object interactions

Talk
  • 30 October 2018 • 10:30–11:30
  • Dima Damen
  • N0.002

This talk aims to argue for a fine-grained perspective onto human-object interactions from video sequences. I will present approaches for understanding ‘what’ objects one interacts with during daily activities, ‘when’ we should label the temporal boundaries of interactions, ‘which’ semantic labels one can use to describe such interactions, and ‘who’ is better when contrasting people performing the same interaction. I will detail my group’s latest work on sub-topics related to: (1) assessing action ‘completion’ – when an interaction is attempted but not completed [BMVC 2018], (2) determining skill or expertise from video sequences [CVPR 2018], and (3) finding unequivocal semantic representations for object interactions [ongoing work]. I will also introduce EPIC-KITCHENS 2018, the recently released largest dataset of object interactions in people’s homes, recorded using wearable cameras. The dataset includes 11.5M frames fully annotated with objects and actions, based on unique annotations from the participants narrating their own videos, thus reflecting true intention. Three open challenges are now available on object detection, action recognition, and action anticipation [http://epic-kitchens.github.io].

Organizers: Mohamed Hassan


Artificial haptic intelligence for human-machine systems

IS Colloquium
  • 25 October 2018 • 11:00–12:00
  • Veronica J. Santos
  • N2.025 at MPI-IS in Tübingen

The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.
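As a rough sketch of the contextual-bandit idea named above (and only that; the speaker's features, actions, and reward are not reproduced here), the code below implements a standard LinUCB-style learner in which a context vector stands in for tactile/proprioceptive features and the reward stands in for task progress, e.g. staying on the contour of a ziplock bag.

```python
# A minimal LinUCB-style contextual multi-armed bandit, as a generic
# illustration; all dimensions, rewards, and contexts are toy assumptions.
import numpy as np

class LinUCB:
    def __init__(self, n_actions, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_actions)]       # per-arm covariance
        self.b = [np.zeros(dim) for _ in range(n_actions)]     # per-arm reward sums

    def select(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            mean = theta @ context
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(mean + bonus)                        # optimism bonus
        return int(np.argmax(scores))

    def update(self, action, context, reward):
        self.A[action] += np.outer(context, context)
        self.b[action] += reward * context

# Toy usage: 3 candidate motions, 5-dimensional "tactile" context (made up).
rng = np.random.default_rng(0)
bandit = LinUCB(n_actions=3, dim=5)
for t in range(100):
    ctx = rng.normal(size=5)
    a = bandit.select(ctx)
    reward = float(ctx[a % 5] > 0)         # stand-in reward signal
    bandit.update(a, ctx, reward)
```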

Organizers: Katherine J. Kuchenbecker Adam Spiers


Artificial haptic intelligence for human-machine systems

IS Colloquium
  • 24 October 2018 • 11:00–12:00
  • Veronica J. Santos
  • 5H7 at MPI-IS in Stuttgart

The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.

Organizers: Katherine J. Kuchenbecker


Learning to Act with Confidence

Talk
  • 23 October 2018 • 12:00–13:00
  • Andreas Krause
  • MPI-IS Tübingen, N0.002

Actively acquiring decision-relevant information is a key capability of intelligent systems, and plays a central role in the scientific process. In this talk I will present research from my group on this topic at the intersection of statistical learning, optimization and decision making. In particular, I will discuss how statistical confidence bounds can guide data acquisition in a principled way to make effective and reliable decisions in a variety of complex domains. I will also discuss several applications, ranging from autonomously guiding wetlab experiments in protein function optimization to safe exploration in robotics.
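To illustrate the general principle of confidence-bound-guided data acquisition, here is a minimal Bayesian-optimization sketch using a Gaussian-process upper confidence bound to choose the next "experiment"; the objective function and all parameters are toy assumptions, not the speaker's setup.

```python
# Confidence-bound-guided acquisition (GP-UCB-style) on a toy 1-D objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                       # unknown function to be optimized
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)
X = rng.uniform(-2, 2, size=(3, 1))     # a few initial "experiments"
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
beta = 2.0                              # exploration weight (assumed)
for t in range(15):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + beta * std             # upper confidence bound
    x_next = candidates[np.argmax(ucb)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x found:", X[np.argmax(y)], "value:", y.max())
```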


Control Systems for a Surgical Robot on the Space Station

IS Colloquium
  • 23 October 2018 • 16:30–17:30
  • Chris Macnab
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

As part of a proposed design for a surgical robot on the space station, my research group has been asked to look at controls that can provide literally surgical precision. Due to excessive time delay, we envision a system with a local model being controlled by a surgeon while the remote system on the space station follows along in a safe manner. Two of the major design considerations that come into play for the low-level feedback loops on the remote side are 1) the harmonic drives in a robot will cause excessive vibrations in a micro-gravity environment unless active damping strategies are employed, and 2) when interacting with a human tissue environment, the robot must apply smooth control signals that result in precise positions and forces. Thus, we envision intelligent strategies that utilize nonlinear, adaptive, neural-network, and/or fuzzy control theory as the most suitable. However, space agencies, or their engineering sub-contractors, typically provide gain and phase margin characteristics as requirements to the engineers involved in a control system design, which are normally associated with PID or other traditional linear control schemes. We are currently endeavouring to create intelligent controls that have guaranteed gain and phase margins using the Cerebellar Model Articulation Controller.
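For readers unfamiliar with the Cerebellar Model Articulation Controller mentioned above, the sketch below shows a minimal CMAC function approximator with overlapping tilings and LMS weight updates; it is a generic illustration, not the research group's controller, and all resolutions and gains are assumptions.

```python
# A minimal 1-D CMAC: several staggered tilings map an input to a small set of
# active weights, whose sum is the output; weights are adapted by LMS.
import numpy as np

class CMAC1D:
    def __init__(self, n_tilings=8, n_tiles=20, x_min=-np.pi, x_max=np.pi, lr=0.2):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.x_min, self.width = x_min, (x_max - x_min) / n_tiles
        self.w = np.zeros((n_tilings, n_tiles + 1))            # one weight table per tiling

    def _active_tiles(self, x):
        idx = []
        for t in range(self.n_tilings):
            offset = t * self.width / self.n_tilings           # staggered tilings
            i = int((x - self.x_min + offset) // self.width)
            idx.append(min(max(i, 0), self.n_tiles))
        return idx

    def predict(self, x):
        return sum(self.w[t, i] for t, i in enumerate(self._active_tiles(x)))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, i in enumerate(self._active_tiles(x)):
            self.w[t, i] += self.lr * err / self.n_tilings     # LMS update

# Toy usage: learn a sine, as a stand-in for an unknown robot nonlinearity.
rng = np.random.default_rng(0)
cmac = CMAC1D()
for _ in range(5000):
    x = rng.uniform(-np.pi, np.pi)
    cmac.train(x, np.sin(x))
print(round(cmac.predict(1.0), 3), round(float(np.sin(1.0)), 3))
```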

Organizers: Katherine J. Kuchenbecker


  • Ravi Haksar
  • MPI-IS Stuttgart, seminar room 2P4

What do forest fires, disease outbreaks, robot swarms, and social networks have in common? How can we develop a common set of tools for these applications? In this talk, I will first introduce a modeling framework that describes large-scale phenomena and which is based on the idea of "local interactions." I will then describe my work on creating estimation and control methods for a single agent and for a cooperative team of autonomous agents. In particular, these algorithms are scalable as the solution does not change if the number of agents or environment size changes. Forest fires and the 2013 Ebola outbreak in West Africa are presented as examples.
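As a toy illustration of a "local interaction" model of the kind mentioned above, the sketch below simulates a spreading process on a lattice in which each cell changes state based only on its immediate neighbours; the states, probabilities, and grid size are illustrative assumptions, not the speaker's model.

```python
# A minimal lattice spreading process (forest-fire flavoured): each cell's
# update depends only on its 4-neighbourhood.
import numpy as np

HEALTHY, BURNING, BURNT = 0, 1, 2
P_SPREAD, P_BURNOUT = 0.3, 0.2          # assumed probabilities

def step(grid, rng):
    new = grid.copy()
    for r, c in np.argwhere(grid == BURNING):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):      # 4-neighbourhood
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                if grid[rr, cc] == HEALTHY and rng.random() < P_SPREAD:
                    new[rr, cc] = BURNING
        if rng.random() < P_BURNOUT:
            new[r, c] = BURNT
    return new

rng = np.random.default_rng(0)
grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = BURNING                                         # single ignition
for t in range(100):
    grid = step(grid, rng)
print("burnt cells after 100 steps:", int((grid == BURNT).sum()))
```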

Organizers: Sebastian Trimpe