Max Planck Intelligent Systems Colloquium

Haptic Texture Compression and Perceptual Quality Evaluation / Understanding Friction Based Haptic Feedback

  • Date: May 5, 2017
  • Time: 14:00 - 14:50
  • Speaker: Rahul Chaudhari, David Gueorguiev
  • Room: Max Planck Institute Tübingen, room N2.025; broadcast to Max Planck Institute Stuttgart, room 2P4
  • Host: Dr. Katherine J. Kuchenbecker

Rahul Chaudhari:

Abstract: Haptic Texture Compression and Perceptual Quality Evaluation

By giving us the ability to touch and feel virtual or remote environments, haptics introduces physical interactivity into multimodal communication systems. The inclusion of haptic media has been shown to improve task performance, immersiveness, and the overall experience of task execution. While several decades of research have been dedicated to the acquisition, processing, coding, and display of audio-visual streams in multimodal systems, the corresponding questions for haptic data have begun to be addressed only recently. This talk presents a novel data compression algorithm for haptic signals generated by stroking a textured surface with a mechanical tool. Mechanisms of texture signal production and haptic perception are described, and mathematical models of these mechanisms then serve as the basis for the presented data compression approach. The performance of the compression algorithm is evaluated both objectively and through subjective user studies. These evaluations show that, at comparable perceptual quality, the algorithm achieves a compression ratio twice as good as the state of the art.

David Gueorguiev:

Abstract: Understanding Friction Based Haptic Feedback

The sense of touch is less well understood than vision and audition, even though we use it continuously to interact with the world around us through texture perception, shape detection, and assessment of the position of our body and limbs. The frictional forces we experience provide essential sensory cues for adapting our behavior, for example when, half asleep, we reach for a cup of coffee and bring it to our lips without letting it slip through our fingers, or when we slide a finger across the screen of a smartphone. Recent studies have highlighted how sensitive humans are to frictional patterns and have begun to suggest mechanical and psychological principles underlying their perception.


Rahul Chaudhari has spent the past two years working as a software engineer at a pair of TU Munich startups in the indoor and outdoor navigation industry. From 2010 to 2015, he was a member of the research and teaching staff at the Chair of Media Technology at TUM. His research focused on perceptually transparent compression of haptic (vibrotactile) texture signals and on objective evaluation of the perceptual quality of compressed signals. He graduated with a PhD (summa cum laude) in May 2015. In 2009, he received a master's degree in Communication Systems from TUM; his master's thesis addressed haptic signal processing, in particular data compression/reduction for kinesthetic haptic communication. Before that, he received an undergraduate degree (Bachelor of Engineering) in Electronics and Telecommunications from the University of Pune, India, graduating in 2006 at the top of his class.

David Gueorguiev obtained his bachelor's degree in Physics at the Free University of Brussels. He then completed a master's degree in computational neuroscience in Paris and an internship at the Cognition and Brain Sciences Unit in Cambridge, where he investigated attention and conscious awareness of auditory and visual cues. In 2013, he began a PhD at the Université catholique de Louvain, during which he studied the tactile cognition of both natural textures and ultrasonic frictional haptic feedback. After completing his PhD in 2016, he became a post-doctoral researcher in the MINT team at Inria Lille, where he investigates new tactile strategies for human-machine interaction.

Those who are not present at one of the Institutes on this day are welcome to join remotely via the following link:
