Institute Talks

"Exploring" Haptics: Human-Machine Interactive Applications from Mid-Air Laser Haptics to Sensorimotor Skill Learning

Talk
  • 25 February 2019 • 10:30–11:15
  • Hojin Lee
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Haptic technologies, in both their kinesthetic and tactile aspects, bring brand-new opportunities to human-machine interactive applications. In this talk, I, who believe that one of the essential roles of a researcher is pioneering new insights and knowledge, will present my previous research on haptic technologies and human-machine interactive applications in two branches: laser-based mid-air haptics and sensorimotor skill learning. For the former branch, I will introduce our approach, named indirect laser radiation, and its applications. Indirect laser radiation utilizes a laser and a light-absorbing elastic medium to evoke a tapping-like tactile sensation. For the latter, I will introduce our data-driven approach to both modeling and learning of sensorimotor skills (especially driving) with kinesthetic assistance and artificial neural networks, which I call human-like haptic assistance. To unify these two branches of my earlier studies and explore the feasibility of the sensory channel of touch, I will present a general research paradigm for human-machine interactive applications toward which current haptic technologies can aim in the future.

Organizers: Katherine J. Kuchenbecker


Virtual Reality Based Needle Insertion Simulation With Haptic Feedback: A Psychophysical Study

Talk
  • 25 February 2019 • 11:15–12:00
  • Ravali Gourishetti
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Needle insertion is one of the most essential skills in medical care; training has to be imparted not only to physicians but also to nurses and paramedics. In most needle insertion procedures, haptic feedback from the needle is the main stimulus on which novices must be trained. For better patient safety, the classical methods of training these haptic skills have to be replaced with simulators based on new robotic and graphics technologies. The main objective of this work is to develop analytical models of needle insertion (a special case of epidural anesthesia), including the biomechanical and psychophysical concepts, that simulate the needle-tissue interaction forces in linear heterogeneous tissues, and to validate the models with a series of experiments. The biomechanical and perception models were validated with experiments in two stages: with and without human intervention. The second stage is validation using a Turing test with two different experiments: 1) to observe the perceptual difference between the simulated model and the physical phantom model, and 2) to verify the effectiveness of the perceptual filter between the unfiltered and filtered model responses. The results showed that the model could replicate the physical phantom tissues with good accuracy. This can be further extended to a non-linear heterogeneous model. The proposed needle-tissue interaction force models can be used to improve realism and performance and to enable future applications of needle simulators in heterogeneous tissue. A needle insertion training simulator was developed from the simulated models using the Phantom Omni, and clinical trials were conducted for face validity and construct validity. The face validity results showed that the degree of realism of the virtual environments and instruments received the lowest overall mean score, while ease of usage and training in hand-eye coordination received the highest mean score.
The construct validity results showed that the simulator could successfully differentiate the force and psychomotor signatures of anesthesiologists with less than 5 years and more than 5 years of experience. As a performance index for trainees, a novel measure, the Just Controllable Difference (JCD), was proposed, and a preliminary study of the JCD measure was conducted using two experiments with novices. A preliminary study on the use of clinical training simulations in virtual environments, especially for the needle insertion procedure, emphasized two objectives: first, measuring the force JND with three fingers, and second, comparing these measures in Non-Immersive Virtual Reality (NIVR) and Immersive Virtual Reality (IVR) in a psychophysical study using a force-matching task, the method of constant stimuli, and isometric force-probing stimuli. The results showed a better force JND in the IVR than in the NIVR. Also, a simple state-observer model was proposed to explain the improvement of the force JND in the IVR. This study quantitatively reinforces the use of the IVR for the design of various medical simulators.
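In a method-of-constant-stimuli experiment like the one described above, the JND is typically read off a psychometric function fitted to the fraction of "stronger" judgments at each comparison level. The sketch below illustrates that computation; the force levels and response proportions are invented for the example and are not data from this study.

```python
import numpy as np

# Hypothetical constant-stimuli data: comparison force levels (N) and the
# fraction of trials in which each was judged "stronger" than the reference.
levels = np.array([0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2])
p_stronger = np.array([0.05, 0.12, 0.30, 0.52, 0.71, 0.88, 0.95])

def fit_psychometric(x, p):
    """Fit a logistic psychometric function p = 1/(1+exp(-(x-mu)/s))
    by least squares on the logit scale."""
    eps = 1e-6
    logit = np.log((p + eps) / (1 - p + eps))
    slope, intercept = np.polyfit(x, logit, 1)
    mu = -intercept / slope   # point of subjective equality (PSE)
    s = 1.0 / slope           # spread of the logistic
    return mu, s

def jnd(mu, s):
    """Half the interquartile range of the fitted curve: (x75 - x25) / 2."""
    x25 = mu + s * np.log(0.25 / 0.75)
    x75 = mu + s * np.log(0.75 / 0.25)
    return (x75 - x25) / 2.0

mu, s = fit_psychometric(levels, p_stronger)
print(f"PSE = {mu:.3f} N, JND = {jnd(mu, s):.3f} N")
```

A smaller JND in the IVR condition than in the NIVR condition, in these terms, means a steeper fitted psychometric function.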

Organizers: Katherine J. Kuchenbecker


Design of functional polymers for biomedical applications

Talk
  • 27 February 2019
  • Dr. Salvador Borrós Gómez

Functional polymers can be easily tailored for their interaction with living organisms. In our group, we have worked for the last 15 years on the development of this kind of polymeric material with different functionalities, high biocompatibility, and in different forms. In this talk, we will describe the synthesis of thermosensitive thin films that can be used to prevent biofilm formation on medical devices, the preparation of biodegradable polymers specially designed as vectors for gene transfection, and a new family of zwitterionic polymers that are able to cross the intestinal mucosa for oral delivery applications. The relationship between structure, functionality, and applications will be discussed for every example.


A new path to understanding biological/human vision: theory and experiments

IS Colloquium
  • 11 March 2019 • 14:00–15:00
  • Zhaoping Li
  • MPI-IS lecture hall (N0.002)

Since Hubel and Wiesel's seminal findings in the primary visual cortex (V1) more than 50 years ago, progress in vision science has been very limited within previous frameworks and schools of thought on understanding vision. Have we been asking the right questions? I will show observations motivating a new path. First, a drastic information bottleneck forces the brain to process only a tiny fraction of the massive visual input; this selection is called attentional selection, and how to select this tiny fraction is critical. Second, a large body of evidence has been accumulating to suggest that the primary visual cortex (V1) is where this selection starts, implying that the visual cortical areas along the visual pathway beyond V1 must be investigated in light of this selection in V1. Placing attentional selection at center stage, a new path to understanding vision is proposed (articulated in my book "Understanding Vision: Theory, Models, and Data", Oxford University Press, 2014). I will show a first example of using this new path, which aims to ask new questions and make fresh progress. I will relate our insights to artificial vision systems, discussing issues like top-down feedback in hierarchical processing, analysis-by-synthesis, and image understanding.

Organizers: Timo Bolkart, Aamir Ahmad

  • Ingmar H. Riedel-Kruse
  • Max-Planck-Institute for Intelligent Systems, Heisenbergstraße 3, Stuttgart, Room 2P4

I will share my vision that microbiological systems should be as programmable, interactive, accessible, constructible, and useful as our personal electronic devices. Natural multi-cellular organisms and symbiotic systems achieve complex tasks through division of labor among cells. Such systems transcend current electronics and robotics in many ways, e.g., they synthesize chemicals, generate active physical forms, and self-replicate. Harnessing these features promises significant impact for manufacturing (bioelectronics / smart materials / swarm robotics), health (tissue engineering), chemistry (pathway modularization), ecology (bioremediation), biodesign (art), and more. My lab takes a synergistic bottom-up / top-down approach to achieve such transformative applications: (1) We utilize synthetic biology and biophysics approaches to engineer and understand multi-cell bacterial assemblies. We developed the first synthetic cell-cell adhesion toolbox [1] and an optogenetic cell-surface adhesion toolbox ('Biofilm Lithography') [2]. Integration with standard synthetic biology components (e.g., for signaling, differentiation, logic) now enables a new intelligent-materials paradigm that rests on versatile, modular, and composable smart particles (i.e., cells). (2) We pioneered 'Interactive Biotechnology', which enables humans to directly interact with living multi-cell assemblies in real time. I will provide the rationale for this interactivity, demonstrate multiple applications using phototactic Euglena cells (e.g., tangible museum exhibits [3], biology cloud experimentation labs [4], biotic video games [5]), and show how this technology aided the discovery of new microswimmer phototaxis control strategies [6]. Finally, I will discuss architecture and swarm programming languages for future bio-electronic devices (i.e., 'Biotic Processing Units' – BPUs) [7,8].
References: [1] Glass, Cell '18; [2] Jin, PNAS '18; [3] Lee, CHI ACM '15; [4] Hossain, Nature Biotech '16; [5] Cira, PLoS Biology '15; [6] Tsang, Nature Physics '18; [7] Lam, LOC '17; [8] Washington, PNAS '19.


Perceptual and Affective Characteristics of Tactile Stimuli

Talk
  • 14 February 2019 • 15:00–16:00
  • Yongjae Yoo
  • 2P4 in Heisenbergstr. 3

With the advent of technology, tactile stimuli are being widely adopted in many human-computer interactions. However, their perceptual and emotional characteristics have not yet been studied much. In this talk, to aid understanding of these characteristics, I will introduce my perception and emotion studies, as well as my future research plans. Regarding perceptual characteristics, I will introduce an estimation method for the perceived intensity of superimposed vibrations, verbal expressions for vibrotactile stimuli, and adjectival magnitude functions. Then, I will present a vibrotactile authoring tool that utilizes the adjectival magnitude functions as an application. Regarding affective characteristics, I will introduce my emotion studies, which investigate the effects of the physical parameters of vibrotactile and thermal stimuli on emotional responses using the valence-arousal space (V-A space). Then, as an application, I will present an emotion-augmenting method that changes the emotion of visual stimuli on mobile devices using tactile stimuli.

Organizers: Katherine J. Kuchenbecker


Neural networks discovering quantum error correction strategies

IS Colloquium
  • 28 January 2019 • 11:15–12:15
  • Florian Marquardt
  • MPI IS building, Max Planck Ring 4, N0.002 (main seminar room)

Machine learning with artificial neural networks is revolutionizing science. The most advanced challenges require discovering answers autonomously. In the domain of reinforcement learning, control strategies are improved according to a reward function. The power of neural-network-based reinforcement learning has been highlighted by spectacular recent successes such as playing Go, but its benefits for physics are yet to be demonstrated. Here, we show how a network-based "agent" can discover complete quantum-error-correction strategies, protecting a collection of qubits against noise. These strategies require feedback adapted to measurement outcomes. Finding them from scratch without human guidance and tailored to different hardware resources is a formidable challenge due to the combinatorially large search space. To solve this challenge, we develop two ideas: two-stage learning with teacher and student networks and a reward quantifying the capability to recover the quantum information stored in a multiqubit system. Beyond its immediate impact on quantum computation, our work more generally demonstrates the promise of neural-network-based reinforcement learning in physics.
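The core loop described above, improving a control strategy according to a scalar reward, can be illustrated far more modestly with a minimal policy-gradient sketch. The three-action toy problem and its reward values below are invented for the example and only stand in, in spirit, for the vastly harder quantum feedback setting of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy problem: three actions with hidden mean rewards; the agent
# must discover the best action purely from the noisy scalar reward.
true_means = np.array([0.2, 0.5, 0.9])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

prefs = np.zeros(3)   # policy parameters (action preferences)
baseline = 0.0        # running reward baseline to reduce gradient variance
lr = 0.1

for step in range(2000):
    probs = softmax(prefs)
    a = rng.choice(3, p=probs)
    r = true_means[a] + 0.1 * rng.standard_normal()
    baseline += 0.01 * (r - baseline)
    # REINFORCE update: grad of log pi(a) is one_hot(a) - probs
    grad = -probs
    grad[a] += 1.0
    prefs += lr * (r - baseline) * grad

print("learned policy:", softmax(prefs))
```

The work presented in the talk replaces this bandit with sequences of quantum measurements and feedback operations, and the reward with a measure of how much quantum information remains recoverable.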


References: Thomas Fösel, Petru Tighineanu, Talitha Weiss, and Florian Marquardt, "Reinforcement Learning with Neural Networks for Quantum Feedback," Physical Review X 8(3) (2018).

Organizers: Matthias Bauer


  • Yuliang Xiu
  • PS Aquarium

Multi-person articulated pose tracking is an important yet challenging problem in human behavior understanding. In this talk, following the road of top-down approaches, I will introduce an accurate and efficient pose tracker based on pose flows. This approach achieves real-time pose tracking without loss of accuracy. In addition, to better understand human activities in visual content, clothing texture and geometric details also play indispensable roles. However, extrapolating them from a single image is much more difficult than for rigid objects due to the large variations in pose, shape, and clothing. I will present a two-stage pipeline to predict human bodies and synthesize novel human views from a single-view image.

Organizers: Siyu Tang


Mind Games

IS Colloquium
  • 21 December 2018 • 11:00–12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, such that agents need imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and issues that concern inference, such as recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially observable Markov decision process, and I will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and antisocial personality disorders.
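For readers unfamiliar with the "paradigmatic game of trust": in the standard investor-trustee game, the investor sends part of an endowment, the amount is multiplied on the way to the trustee, and the trustee chooses how much to return. The sketch below uses the common convention of a tripling multiplier; the specific endowment and parameters are illustrative and may differ from the task discussed in the talk.

```python
def trust_game_payoffs(endowment, invested, returned_frac, multiplier=3):
    """Payoffs in a one-shot investor-trustee (trust) game.

    The investor sends `invested` (<= endowment); it is multiplied by
    `multiplier` on the way to the trustee, who returns a fraction
    `returned_frac` of what was received."""
    assert 0 <= invested <= endowment
    assert 0.0 <= returned_frac <= 1.0
    received = multiplier * invested
    repaid = returned_frac * received
    investor = endowment - invested + repaid
    trustee = received - repaid
    return investor, trustee

# A fully trusting investor and a fair trustee split the surplus:
print(trust_game_payoffs(20, 20, 0.5))   # -> (30.0, 30.0)
```

The modeling challenge in the talk comes from iterating such exchanges while each player recursively infers the other's utility and depth of reasoning, which is what makes the interactive POMDP formulation necessary.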


  • Yao Feng
  • PS Aquarium

In this talk, I will present my understanding of 3D face reconstruction, modelling, and applications from a deep learning perspective. In the first part of my talk, I will discuss the relationship between representations (point clouds, meshes, etc.) and network layers (CNN, GCN, etc.) in the face reconstruction task, then present my ECCV work PRN, which proposed a new representation that helps achieve state-of-the-art performance on face reconstruction and dense alignment tasks. I will also introduce my open-source project face3d, which provides examples for generating different 3D face representations. In the second part of the talk, I will discuss some publications on integrating 3D techniques into deep networks, then introduce my upcoming work that implements this. In the third part, I will present how related tasks can promote each other in deep learning, including face recognition for the face reconstruction task and face reconstruction for the face anti-spoofing task. Finally, with this understanding of all three parts, I will present my plans for 3D face modelling and applications.

Organizers: Timo Bolkart


Generating Faces & Heads: Texture, Shape and Beyond.

Talk
  • 17 December 2018 • 11:00–12:00
  • Stefanos Zafeiriou
  • PS Aquarium

In the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of visual data, it has been shown that it is possible to produce excellent results on very challenging tasks, such as visual object recognition, detection, tracking, etc. Nevertheless, for certain tasks, such as fine-grained object recognition (e.g., face recognition), it is very difficult to collect the amount of data that is needed. In this talk, I will show how, using DCNNs, we can generate highly realistic faces and heads and use them for training algorithms such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a very powerful trained face recognition network can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. Finally, I will demonstrate how to create very lightweight networks for representing 3D face texture and shape structure by capitalising on intrinsic mesh convolutions.

Organizers: Dimitris Tzionas


  • Prof. Dr. Björn Ommer
  • PS Aquarium

Understanding objects and their behavior from images and videos is a difficult inverse problem. It requires learning a metric in image space that reflects object relations in the real world. This metric learning problem calls for large volumes of training data. While images and videos are easily available, labels are not, thus motivating self-supervised metric and representation learning. Furthermore, I will present a widely applicable strategy based on deep reinforcement learning to improve the surrogate tasks underlying self-supervision. Thereafter, the talk will cover the learning of disentangled representations that explicitly separate different object characteristics. Our approach is based on an analysis-by-synthesis paradigm and can generate novel object instances with flexible changes to individual characteristics such as their appearance and pose. It nicely addresses diverse applications in human and animal behavior analysis, a topic on which we collaborate intensively with neuroscientists. Time permitting, I will discuss the disentangling of representations from a wider perspective, including novel strategies for image stylization and new strategies for regularization of the latent space of generator networks.

Organizers: Joel Janai


  • Yanxi Liu
  • Aquarium (N3.022)

Human pose stability analysis is the key to understanding locomotion and control of body equilibrium, with numerous applications in the fields of kinesiology, medicine, and robotics. We propose and validate a novel approach to learn the dynamics from the kinematics of a human body to aid stability analysis. More specifically, we propose an end-to-end deep learning architecture to regress foot pressure from a human pose derived from video. We have collected and utilized a set of long (5+ min) choreographed Taiji (Tai Chi) sequences from multiple subjects with synchronized motion capture, foot pressure, and video data. The derived human pose data and corresponding foot pressure maps are used jointly to train a convolutional neural network with a residual architecture, named "PressNET". Cross-validation results show promising performance of PressNET, significantly outperforming the baseline method under reasonable sensor noise ranges.
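The core idea above, regressing a foot-pressure map from pose keypoints through a network with skip connections, can be sketched schematically as a single residual block followed by a non-negative output layer. All sizes, weights, and the insole-grid dimension below are invented for illustration and bear no relation to the actual PressNET architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, b1, W2, b2):
    """y = x + F(x): the skip connection lets the block learn a correction."""
    return x + W2 @ relu(W1 @ x + b1) + b2

# Invented sizes: 25 body keypoints (x, y) flattened to 50 inputs, mapped to
# pressure on a hypothetical 198-sensor insole grid, flattened.
pose = rng.standard_normal(50)
W1, b1 = 0.1 * rng.standard_normal((50, 50)), np.zeros(50)
W2, b2 = 0.1 * rng.standard_normal((50, 50)), np.zeros(50)
W_out = 0.1 * rng.standard_normal((198, 50))

features = residual_block(pose, W1, b1, W2, b2)
pressure = relu(W_out @ features)   # pressures are non-negative
print(pressure.shape)               # -> (198,)
```

In the actual system, the weights are of course learned from the synchronized motion-capture, video, and foot-pressure recordings rather than drawn at random.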

Organizers: Nadine Rueegg


Physical Reasoning and Robot Manipulation

Talk
  • 11 December 2018 • 15:00–16:00
  • Marc Toussaint
  • 2R4 Werner Köster lecture hall

Animals and humans are excellent at conceiving of solutions to physical and geometric problems, for instance in using tools, coming up with creative constructions, or eventually inventing novel mechanisms and machines. Cognitive scientists coined the term "intuitive physics" in this context. It is a shame we do not yet have good computational models of such capabilities. A main stream of current robotics research focuses on training robots for narrow manipulation skills, often using massive data from physical simulators. Complementary to that, we should also try to understand how the basic principles underlying physics can directly be used to enable general-purpose physical reasoning in robots, rather than sampling data from physical simulations. In this talk, I will discuss an approach called Logic-Geometric Programming, which builds a bridge between control theory, AI planning, and robot manipulation. It demonstrates strong performance on sequential manipulation problems, but it also raises a number of highly interesting fundamental problems, including its probabilistic formulation, reactive execution, and learning.

Organizers: Katherine J. Kuchenbecker, Ildikó Papp-Wiedmann, Barbara Kettemann, Matthias Tröndle