Institute Talks

Dexterous and non-contact micromanipulation for micro-nano-assembly and biomedical applications

Talk
  • 24 September 2018 • 09:30 - 10:30
  • Dr. Aude Bolopion and Dr. Mich
  • 2P4

This talk presents an overview of recent activities of the FEMTO-ST institute in the field of micro-nanomanipulation for both micro-nano-assembly and biomedical applications. Microrobotic systems are currently limited in the number of degrees of freedom they address and also in their throughput. Two approaches can improve both the velocity and the degrees of freedom: non-contact manipulation and dexterous micromanipulation. In both approaches, movements including rotation and translation are generated locally and are limited only by the inertia of the micro-nano-objects, which is very low. This makes it possible to generate 6-DOF motion and to reach high dynamics. The talk presents recent work showing that controlled trajectories in non-contact manipulation enable micro-objects to be manipulated at high speed. Dexterous manipulation with a four-fingered microtweezer has also been demonstrated, showing that in-hand micromanipulation is possible at the micro-nanoscale based on original finger-trajectory planning. These two approaches have been applied to perform micro-nano-assembly and biomedical operations.

Learning to align images and surfaces

Talk
  • 24 September 2018 • 11:00 - 12:00
  • Iasonas Kokkinos
  • Ground Floor Seminar Room (N0.002)

In this talk I will be presenting recent work on combining ideas from deformable models with deep learning. I will start by describing DenseReg and DensePose, two recently introduced systems for establishing dense correspondences between 2D images and 3D surface models "in the wild", namely in the presence of background, occlusions, and multiple objects. For DensePose in particular we introduce DensePose-COCO, a large-scale dataset for dense pose estimation, and DensePose-RCNN, a system which operates at multiple frames per second on a single GPU while handling multiple humans simultaneously. I will then present Deforming AutoEncoders, a method for unsupervised dense correspondence estimation. We show that we can disentangle deformations from appearance variation in an entirely unsupervised manner, and also provide promising results for a more thorough disentanglement of images into deformations, albedo and shading. Time permitting, we will discuss a parallel line of work aiming at combining grouping with deep learning, and see how both grouping and correspondence can be understood as establishing associations between neurons.

Organizers: Vassilis Choutas

Soft Feel by Soft Robotic Hand: New way of robotic sensing

IS Colloquium
  • 04 October 2018 • 13:30 - 14:30
  • Prof. Koh Hosoda
  • MPI-IS Stuttgart, Werner-Köster lecture hall

This lecture will show some interesting examples of how a soft body and skin can change your idea of robotic sensing. Soft robotics is not only about compliance and safety; a soft structure changes the way a robot categorizes objects through dynamic exploration and enables it to learn a sense of slip. Soft robotics will entirely change how you think about designing sensing and open up a new way to understand human sensing.

Organizers: Ardian Jusufi

Medical Robots with a Haptic Touch – First Experiences with the FLEXMIN System

IS Colloquium
  • 04 October 2018 • 10:00 - 11:00
  • Prof. Peter Pott
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

The FLEXMIN haptic robotic system is a single-port tele-manipulator for robotic surgery in the small pelvis. Using a transanal approach, it allows bi-manual tasks such as grasping, monopolar cutting, and suturing within a footprint of Ø 160 x 240 mm. Forces of up to 5 N can be applied easily in all directions. In addition to providing low-latency, highly dynamic control over its movements, the system realises high-fidelity haptic feedback by means of built-in force sensors, lightweight and friction-optimised kinematics, and dedicated parallel-kinematic input devices. After a brief description of the system and some of its key aspects, first evaluation results will be presented. The second half of the talk introduces the Institute of Medical Device Technology. The institute was founded in July 2017 and has since started a number of projects in the fields of biomedical actuation, medical systems and robotics, and advanced light microscopy. To illustrate this, a few snapshots of ongoing work will be presented that serve as condensation nuclei for the future.

Organizers: Katherine Kuchenbecker

Interactive and Effective Representation of Digital Content through Touch using Local Tactile Feedback

Talk
  • 05 October 2018 • 11:00 - 12:00
  • Mariacarla Memeo
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

The increasing availability of on-line resources and the widespread practice of storing data over the internet raise the problem of their accessibility for visually impaired people. A translation from the visual domain to the available modalities is therefore necessary to study whether this access is possible at all. However, the translation of information from vision to touch is necessarily lossy, due to the superiority of vision during the acquisition process. Yet compromises exist, as visual information can be simplified and sketched: a picture can become a map, an object a geometrical shape. Under some circumstances, and with a reasonable loss of generality, touch can substitute for vision. In particular, when touch substitutes vision, data can be differentiated by adding a further dimension to the tactile feedback, i.e. extending it to three dimensions instead of two. This mode was chosen because it mimics our natural way of following object profiles with our fingers: regardless of whether a hand lying on an object is moving or not, our tactile and proprioceptive systems are both stimulated and tell us something about the object we are manipulating, such as its shape and size.

The goal of this talk is to describe how tactile stimulation can be exploited to render digital information non-visually, so that cognitive maps associated with this information can be efficiently elicited from visually impaired persons. In particular, the focus is on delivering geometrical information in a learning scenario. Completely blind interaction with a virtual environment in a learning scenario has been little investigated, because visually impaired subjects are often passive agents of exercises with fixed environmental constraints. For this reason, during the talk I will provide my personal answer to the question: can visually impaired people manipulate dynamic virtual content through touch? This process is much more challenging than merely exploring and learning virtual content, but at the same time it leads to a more conscious and dynamic spatial understanding of an environment during tactile exploration.

Organizers: Katherine Kuchenbecker

Autonomous Robots that Walk and Fly

Talk
  • 22 October 2018 • 11:00 - 12:00
  • Roland Siegwart
  • MPI, Lecture Hall 2D5, Heisenbergstraße 1, Stuttgart

While robots are already doing a wonderful job as factory workhorses, they are now gradually appearing in our daily environments, offering their services as autonomous cars, delivery drones, helpers in search and rescue, and much more. This talk will present some recent highlights in the field of autonomous mobile robotics research and touch on some of the great challenges and opportunities. Legged robots are able to overcome the limitations of wheeled or tracked ground vehicles. ETH's electrically powered quadruped robots are designed for high agility, efficiency and robustness in rough terrain, realized through optimal exploitation of the natural dynamics and series-elastic actuation. For fast inspection of complex environments, flying robots are probably the most efficient and versatile devices. However, the limited payload and computing power of drones renders autonomous navigation quite challenging. Thanks to our custom-designed visual-inertial sensor, real-time on-board localization, mapping and planning has become feasible, enabling our multi-copters and solar-powered fixed-wing drones to perform advanced rescue and inspection tasks or to support precision farming, even in GPS-denied environments.

Organizers: Katherine Kuchenbecker, Matthias Tröndle, Ildikó Papp-Wiedmann

  • Peter Gehler

In this talk I will present recent work on two different topics from low- and high-level computer vision: intrinsic image recovery and efficient object detection. By intrinsic image decomposition we refer to the challenging task of decoupling material properties from lighting properties given a single image. We propose a probabilistic model that incorporates previous attempts to exploit edge information and combines it with a novel prior on material reflectances in the image. This results in a random field model with global, latent variables and pixel-accurate output reflectance values. I will present experiments on a recently proposed ground-truth database.

The proposed model is found to outperform previously proposed models. I will also discuss some possible future developments in this field. In the second part of the talk I will present an efficient object detection scheme that breaks the computational complexity of commonly used detection algorithms, e.g., sliding windows. We pose the detection problem naturally as a structured prediction problem, for which we decompose the inference procedure into an adaptive best-first search.

This results in test-time inference that scales sub-linearly in the size of the search space; detection usually requires fewer than 100 classifier evaluations. This paves the way for using strong (but costly) classifiers such as non-linear SVMs. The algorithmic properties are demonstrated using the VOC'07 dataset. This work is part of the Visipedia project, in collaboration with Steve Branson, Catherine Wah, Florian Schroff, Boris Babenko, Peter Welinder and Pietro Perona.
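
To make the search strategy concrete, here is a minimal sketch of a generic best-first search over sets of candidate windows, in the spirit of branch-and-bound detection. It is an illustration under my own assumptions, not the speaker's implementation: `upper_bound`, `is_singleton`, and `split` are hypothetical callables the caller supplies, with `upper_bound` required to be exact on singleton sets.

```python
import heapq

def best_first_detect(root_set, upper_bound, is_singleton, split):
    """Best-first search for the highest-scoring single window.

    root_set     -- the set of all candidate windows
    upper_bound  -- upper bound on the classifier score over a window set;
                    must equal the exact score on singleton sets
    is_singleton -- True when a window set contains exactly one window
    split        -- partitions a window set into two non-empty subsets
    """
    # heapq is a min-heap, so negate the bound to pop the best set first;
    # the tiebreak counter avoids comparing window-set objects on ties.
    heap = [(-upper_bound(root_set), 0, root_set)]
    tiebreak, bound_evals = 1, 1
    while heap:
        _, _, ws = heapq.heappop(heap)
        if is_singleton(ws):
            # A singleton's bound is its exact score, so nothing left on
            # the heap can contain a better window: search is done.
            return ws, bound_evals
        for child in split(ws):
            heapq.heappush(heap, (-upper_bound(child), tiebreak, child))
            tiebreak += 1
            bound_evals += 1
    return None, bound_evals
```

Because the search expands only window sets whose bound still exceeds everything else on the heap, the number of bound evaluations typically grows much more slowly than the size of the search space.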


  • Yusuf Sahillioglu

3D shape correspondence methods seek pairs of semantically equivalent surface points on two given shapes. We present three automatic algorithms that address three different aspects of this problem: 1) coarse, 2) dense, and 3) partial correspondence. In 1), after sampling evenly-spaced base vertices on the shapes, we formulate shape correspondence as combinatorial optimization over the domain of all possible mappings of bases, which then reduces within a probabilistic framework to a log-likelihood maximization problem that we solve via the Expectation Maximization (EM) algorithm.

Due to computational limitations, we change this algorithm to a coarse-to-fine one (2) to achieve dense correspondence between all vertices. Our scale-invariant isometric distortion measure makes partial matching (3) possible as well.
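
As a concrete, heavily simplified illustration of this kind of EM formulation, the toy sketch below soft-assigns base vertices by minimising an isometric-distortion measure between geodesic distance matrices. It is my own illustration for small base sets, not the authors' algorithm; the annealing of `sigma` stands in for a full M-step.

```python
import numpy as np

def em_correspondence(D1, D2, iters=30, sigma=0.2, anneal=0.95):
    """Soft correspondence between base vertices of two shapes.

    D1 (n x n), D2 (m x m): geodesic distance matrices between the
    evenly-spaced base vertices of each shape, normalised to
    comparable scales.
    """
    n, m = D1.shape[0], D2.shape[0]
    r = np.full((n, m), 1.0 / m)          # uniform soft assignment
    # diff[i, j, k, l] = |D1[i, k] - D2[j, l]|  (fine for small n, m)
    diff = np.abs(D1[:, None, :, None] - D2[None, :, None, :])
    for _ in range(iters):
        # E-step: expected isometric distortion of matching i -> j,
        # averaged over the soft assignments of all other bases.
        distortion = np.einsum('ijkl,kl->ij', diff, r) / n
        logits = -distortion / sigma
        r = np.exp(logits - logits.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        sigma *= anneal                   # sharpen assignments over time
    return r.argmax(axis=1)               # hard map: shape-1 base -> shape-2 base
```

Using a scale-normalised distortion measure, as in the talk's third contribution, is what lets the same machinery tolerate partially matching shapes.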


  • Serge Belongie

We present an interactive, hybrid human-computer method for object classification. The method applies to classes of problems that are difficult for most people, but are recognizable by people with the appropriate expertise (e.g., animal species or airplane model recognition). The classification method can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively.

The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. Incorporating user input drives up recognition accuracy to levels that are good enough for practical applications; at the same time, computer vision reduces the amount of human interaction required. The resulting hybrid system is able to handle difficult, large multi-class problems with tightly-related categories.

We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate the accuracy and computational properties of different computer vision algorithms and the effects of noisy user responses on a dataset of 200 bird species and on the Animals With Attributes dataset.
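
As an illustration of how such a framework can choose questions, here is a minimal sketch (my own, not the Visipedia code): maintain a posterior over classes, seeded for example by a vision classifier's class scores, ask the attribute question with the highest expected information gain, and fold the possibly noisy user answer back in. The `questions` tables and `get_answer` callback are hypothetical stand-ins.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(posterior, answer_lik):
    """answer_lik[a, c] = P(answer a to this question | class c)."""
    p_answer = answer_lik @ posterior          # marginal over answers
    gain = entropy(posterior)
    for a, pa in enumerate(p_answer):
        if pa > 0:
            post_a = answer_lik[a] * posterior / pa
            gain -= pa * entropy(post_a)       # expected posterior entropy
    return gain

def ask_questions(prior, questions, get_answer, n_rounds=20):
    """prior: class scores (e.g. from a vision model), normalised to sum to 1.
    questions: list of answer-likelihood tables; get_answer(q) -> answer index."""
    posterior = prior.copy()
    for _ in range(n_rounds):
        q = max(range(len(questions)),
                key=lambda i: expected_information_gain(posterior, questions[i]))
        a = get_answer(q)                      # user's (possibly noisy) answer
        posterior *= questions[q][a]           # Bayesian update
        posterior /= posterior.sum()
    return posterior
```

Noisy user responses are absorbed by the answer-likelihood tables themselves: a table that spreads probability mass across answers encodes an unreliable question.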

Our results demonstrate the effectiveness and practicality of the hybrid human-computer classification paradigm. This work is part of the Visipedia project, in collaboration with Steve Branson, Catherine Wah, Florian Schroff, Boris Babenko, Peter Welinder and Pietro Perona.


  • Javier Romero

Object grasping and manipulation is a crucial part of daily human activities. The study of these actions represents a central component in the development of systems that attempt to understand human activities and robots that are able to act in human environments.

Three essential parts of this problem are tackled in this talk: the perception of the human hand in interaction with objects, the modeling of human grasping actions, and the refinement of the execution of a robotic grasp. The estimation of the human hand pose is carried out with a markerless visual system that performs in real time under object occlusions. Low-dimensional models of various grasping actions are created by exploiting the correlations between different hand joints in a non-linear manner with Gaussian Process Latent Variable Models (GPLVM). Finally, robot grasping actions are perfected by exploiting the appearance of the robot during action execution.
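
As a sketch of the dimensionality-reduction step, the snippet below fits a GPLVM to a matrix of hand-joint angles using the GPy library. The data here is random stand-in data, and this is an illustration of the technique rather than the talk's actual pipeline.

```python
import numpy as np
import GPy  # assumes the GPy library is installed

# Y: one row per grasp frame, one column per hand-joint angle.
# Placeholder data; real experiments would use captured hand poses.
Y = np.random.randn(200, 20)

# Learn a 2-D nonlinear latent space of grasp postures.
model = GPy.models.GPLVM(Y, input_dim=2)
model.optimize(messages=False, max_iters=1000)

latent = model.X  # low-dimensional embedding, one point per grasp frame
```

The appeal of a GPLVM over linear PCA here is that correlated joint motions (fingers flexing together during a grasp) lie on a curved manifold that a nonlinear latent model captures with far fewer dimensions.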


  • Fernando De La Torre

Enabling computers to understand human behavior has the potential to revolutionize many areas that benefit society, such as clinical diagnosis, human-computer interaction, and social robotics. A critical element in the design of any behavioral sensing system is finding a good representation of the data for encoding, segmenting, classifying and predicting subtle human behavior.

In this talk I will propose several extensions of Component Analysis (CA) techniques (e.g., kernel principal component analysis, support vector machines, spectral clustering) that are able to learn spatio-temporal representations or components useful in many human sensing tasks. In the first part of the talk I will give an overview of several ongoing projects in the CMU Human Sensing Laboratory, including our current work on depression assessment from video, as well as hot-flash detection from wearable sensors.

In the second part of the talk I will show how several extensions of the CA methods outperform state-of-the-art algorithms in problems such as temporal alignment of human motion, temporal segmentation/clustering of human activities, joint segmentation and classification of human behavior, facial expression analysis, and facial feature detection in images. The talk will be adaptive, and I will discuss the topics of major interest to the audience.
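
As a small illustration of two of the building blocks mentioned, the sketch below combines kernel PCA with spectral clustering to segment frames of human-motion features into activity clusters using scikit-learn. It is my own example with stand-in data, not the CMU code.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import SpectralClustering

# X: one row per video frame, one column per motion feature.
# Placeholder data; real use would extract features from video or mocap.
X = np.random.randn(500, 30)

# Nonlinear low-dimensional components of the frame features.
components = KernelPCA(n_components=5, kernel="rbf", gamma=0.1).fit_transform(X)

# Group frames into putative activity segments.
labels = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                            n_neighbors=10).fit_predict(components)
# labels[t] is the activity cluster assigned to frame t.
```

The talk's contributions extend such components with temporal structure, e.g. encouraging contiguous frames to share a cluster, which plain spectral clustering does not enforce.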


  • Arto Nurmikko

Semiconductor light emitters based on single-crystal epitaxial inorganic semiconductor heterostructures are ubiquitous. In spite of their extraordinary versatility and technological maturity, covering the full visible spectrum of red, green, and blue (RGB) with a single material system in a seamless way nonetheless remains an elusive challenge.

Semiconductor nanocrystals, or quantum dots (QDs), synthesized by solution-based methods of colloidal chemistry represent a strongly contrasting basis for active optical materials. While possessing the ability to absorb and efficiently luminesce across the RGB range through simple control of nanocrystal particle size within a single material system, these preparations have yet to make a significant impact as viable light-emitting devices, mainly due to the difficulties in casting such materials from their natural habitat, that is "from the chemist's bottle", to a useful solid thin-film form for device use. In this presentation we show how II-VI compound nanocrystals can be transitioned to solid templates with targeted spatial control and placement.


  • Arto Nurmikko

Invasive access by microprobe arrays inserted safely into the brain now enables us to "listen" to local neural circuits at a level of spatial and temporal detail which, in addition to enriching fundamental brain science, has opened the possibility of a new generation of neurotechnologies to overcome disabilities caused by a range of neurological injuries in which pathways from the brain to the rest of the central and peripheral nervous systems have been damaged or severed. In this presentation we discuss the biomedical engineering challenges and opportunities of these incipient technologies, with emphasis on implantable wireless neural interfaces for communicating with the brain.

A second topic, related to the possibility of sending direct inputs of information back to the brain via implanted devices, is also explored, focusing on recently discovered means of rendering selected neural cell types and microcircuits light-sensitive following local microbiologically induced conditioning.


  • Henning Hamer

The scope of this work is hand-object interaction. As a starting point, we observe hands manipulating objects and derive information based on computer vision methods. After considering hands and objects in isolation, we focus on the inherent interdependencies. One application of the gained knowledge is the synthesis of interactive hand motion for animated sequences.


  • Vittorio Ferrari

Vision is a crucial sense for computational systems to interact with their environments as biological systems do. A major task is interpreting images of complex scenes, by recognizing and localizing objects, persons and actions. This involves learning a large number of visual models, ideally autonomously.

In this talk I will present two ways of reducing the amount of human supervision required by this learning process. The first way is labeling images only by the object class they contain. Learning from cluttered images is very challenging in this weakly supervised setting. In the traditional paradigm, each class is learned starting from scratch. In our work instead, knowledge generic over classes is first learned during a meta-training stage from images of diverse classes with given object locations, and is then used to support learning any new class without location annotation. Generic knowledge helps because during meta-training the system can learn about localizing objects in general. As demonstrated experimentally, this approach enables learning from more challenging images than possible before, such as the PASCAL VOC 2007, containing extensive clutter and large scale and appearance variations between object instances.

The second way is the analysis of news items consisting of images and text captions. We associate names and action verbs in the captions to the face and body pose of the persons in the images. We introduce a joint probabilistic model for simultaneously recovering image-caption correspondences and learning appearance models for the face and pose classes occurring in the corpus. As demonstrated experimentally, this joint 'face and pose' model solves the correspondence problem better than earlier models covering only the face.

I will conclude with an outlook on the idea of visual culture, where new visual concepts are learned incrementally on top of all visual knowledge acquired so far. Besides generic knowledge, visual culture also includes knowledge specific to a class, knowledge of scene structures, and other forms of visual knowledge. Potentially, this approach could considerably extend current visual recognition capabilities and produce an integrated body of visual knowledge.


  • Jim Little

I will survey our work on tracking and measurement in sports video, waypoints on the path to activity recognition and understanding. I will highlight some of our recent work on rectification and player tracking, not just in hockey but more recently in basketball, where we have addressed player identification in both fully supervised and semi-supervised settings.