

2018


A Value-Driven Eldercare Robot: Virtual and Physical Instantiations of a Case-Supported Principle-Based Behavior Paradigm

Anderson, M., Anderson, S., Berenz, V.

Proceedings of the IEEE, pages: 1-15, October 2018 (article)

Abstract
In this paper, a case-supported principle-based behavior paradigm is proposed to help ensure ethical behavior of autonomous machines. We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake. We believe that this is the case since we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles that balance these features when they are in conflict. Such principles not only help ensure ethical behavior of complex and dynamic systems but also can serve as a basis for justification of this behavior. The requirements, methods, implementation, and evaluation components of the paradigm are detailed as well as its instantiation in both a simulated and real robot functioning in the domain of eldercare.

link (url) DOI [BibTex]




Softness, Warmth, and Responsiveness Improve Robot Hugs

Block, A. E., Kuchenbecker, K. J.

International Journal of Social Robotics, 11(1):49-64, October 2018 (article)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, roboticists are naturally interested in having robots one day hug humans as seamlessly as humans hug other humans. This project's purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a soft, warm, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty relatively young and rather technical participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.

link (url) DOI Project Page [BibTex]


Playful: Reactive Programming for Orchestrating Robotic Behavior

Berenz, V., Schaal, S.

IEEE Robotics Automation Magazine, 25(3):49-60, September 2018 (article) In press

Abstract
For many service robots, reactivity to changes in their surroundings is a must. However, developing software suitable for dynamic environments is difficult. Existing robotic middleware allows engineers to design behavior graphs by organizing communication between components. But because these graphs are structurally inflexible, they hardly support the development of complex reactive behavior. To address this limitation, we propose Playful, a software platform that applies reactive programming to the specification of robotic behavior.
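As a toy illustration of that reactive style (plain Python with invented behavior names and conditions; this is not Playful's actual syntax or API), behaviors can declare activation conditions over shared sensor state that are re-evaluated on every cycle:

```python
# Hypothetical mini-example of condition-driven behavior activation.
# Each behavior names a predicate over the robot's sensor state; the
# active set is recomputed whenever the state changes, so behavior
# switches reactively instead of following a fixed graph.
behaviors = [
    ("search", lambda s: not s["ball_visible"]),
    ("track",  lambda s: s["ball_visible"] and s["distance"] > 0.3),
    ("grasp",  lambda s: s["ball_visible"] and s["distance"] <= 0.3),
]

def active(state):
    """Return the names of all behaviors whose condition currently holds."""
    return [name for name, cond in behaviors if cond(state)]

print(active({"ball_visible": True, "distance": 1.2}))   # ['track']
print(active({"ball_visible": True, "distance": 0.1}))   # ['grasp']
```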

playful website playful_IEEE_RAM link (url) DOI [BibTex]


ClusterNet: Instance Segmentation in RGB-D Images

Shao, L., Tian, Y., Bohg, J.

arXiv, September 2018 (article), submitted to ICRA'19

Abstract
We propose a method for instance-level segmentation that uses RGB-D data as input and provides detailed information about the location, geometry and number of individual objects in the scene. This level of understanding is fundamental for autonomous robots. It enables safe and robust decision-making under the large uncertainty of the real world. In our model, we propose to use the first and second order moments of the object occupancy function to represent an object instance. We train an hourglass Deep Neural Network (DNN) where each pixel in the output votes for the 3D position of the corresponding object center and for the object's size and pose. The final instance segmentation is achieved through clustering in the space of moments. The object-centric training loss is defined on the output of the clustering. Our method outperforms the state-of-the-art instance segmentation method on our synthesized dataset. We show that our method generalizes well on real-world data, achieving visually better segmentation results.
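The final clustering-in-moment-space step can be sketched on synthetic center votes; the greedy clusterer below is an illustrative stand-in chosen for brevity, not the authors' network or their actual clustering method:

```python
import numpy as np

def cluster_votes(votes, radius=0.5):
    """Greedy clustering stand-in: assign each vote to the first cluster
    whose running-mean center lies within `radius`, else open a new one."""
    centers, counts, labels = [], [], []
    for v in votes:
        for k, c in enumerate(centers):
            if np.linalg.norm(v - c) < radius:
                counts[k] += 1
                centers[k] = c + (v - c) / counts[k]  # update running mean
                labels.append(k)
                break
        else:
            centers.append(v.astype(float).copy())
            counts.append(1)
            labels.append(len(centers) - 1)
    return np.array(labels), np.array(centers)

rng = np.random.default_rng(0)
# Fake votes: pixels of two objects voting for centers near (0,0,1) and (2,0,1).
votes = np.vstack([
    rng.normal([0.0, 0.0, 1.0], 0.05, size=(100, 3)),
    rng.normal([2.0, 0.0, 1.0], 0.05, size=(100, 3)),
])
labels, centers = cluster_votes(votes)
print("instances found:", len(centers))  # two vote clusters -> two instances
```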

link (url) [BibTex]


Complexity, Rate, and Scale in Sliding Friction Dynamics Between a Finger and Textured Surface

Khojasteh, B., Janko, M., Visell, Y.

Scientific Reports, 8(13710), September 2018 (article)

Abstract
Sliding friction between the skin and a touched surface is highly complex, but lies at the heart of our ability to discriminate surface texture through touch. Prior research has elucidated neural mechanisms of tactile texture perception, but our understanding of the nonlinear dynamics of frictional sliding between the finger and textured surfaces, from which the neural signals that encode texture originate, is incomplete. To address this, we compared measurements from human fingertips sliding against textured counter surfaces with predictions of numerical simulations of a model finger that resembled a real finger, with similar geometry, tissue heterogeneity, hyperelasticity, and interfacial adhesion. Modeled and measured forces exhibited similar complex, nonlinear sliding friction dynamics, force fluctuations, and prominent regularities related to the surface geometry. We comparatively analysed measured and simulated force patterns in matched conditions using linear and nonlinear methods, including recurrence analysis. The model had greatest predictive power for faster sliding and for surface textures with length scales greater than about one millimeter. This could be attributed to the tendency of sliding at slower speeds, or on finer surfaces, to complexly engage fine features of skin or surface, such as fingerprints or surface asperities. The results elucidate the dynamical forces felt during tactile exploration and highlight the challenges involved in the biological perception of surface texture via touch.
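The recurrence analysis mentioned above can be sketched on a toy force trace (all signal and threshold parameters here are assumptions for illustration, not the study's settings):

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 400)
# Toy stand-in for a measured friction-force signal: a periodic component
# (surface geometry) plus noise.
force = np.sin(5 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

# Delay embedding (dimension 2, lag 5 samples) of the scalar signal.
lag, dim = 5, 2
n = force.size - (dim - 1) * lag
emb = np.stack([force[i * lag : i * lag + n] for i in range(dim)], axis=1)

# Recurrence matrix: time pairs whose embedded states lie within eps.
dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
eps = 0.2 * dists.max()
R = dists < eps

print(f"recurrence rate: {R.mean():.2f}")  # fraction of recurrent pairs
```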

DOI [BibTex]


Design of curved composite panels for optimal dynamic response using lamination parameters

Serhat, G., Basdogan, I.

Composites Part B: Engineering, 147, pages: 135–146, August 2018 (article)

Abstract
In this paper, dynamic response of composite panels is investigated using lamination parameters as design variables. Finite element analyses are performed to observe the individual and combined effects of different panel aspect ratios, curvatures and boundary conditions on the dynamic responses. Fundamental frequency contours for curved panels are obtained in lamination parameters domain and optimal points yielding maximum values are found. Subsequently, forced dynamic analyses are carried out to calculate equivalent radiated power (ERP) for the panels under harmonic pressure excitation. ERP contours at the maximum fundamental frequency are presented. Optimal lamination parameters providing minimum ERP are determined for different excitation frequencies and their effective frequency bands are shown. The relationship between the designs optimized for maximum fundamental frequency and minimum ERP responses is investigated to study the effectiveness of the frequency maximization technique. The results demonstrate the potential of using lamination parameters technique in the design of curved composite panels for optimal dynamic response and provide valuable insight on the effect of various design parameters.

DOI [BibTex]


A Robust Soft Lens for Tunable Camera Application Using Dielectric Elastomer Actuators

Nam, S., Yun, S., Yoon, J. W., Park, S., Park, S. K., Mun, S., Park, B., Kyung, K.

Soft Robotics, Mary Ann Liebert, Inc., August 2018 (article)

Abstract
In developing tunable lenses, an expansion-based mechanism for dynamic focus adjustment can provide a larger focal length tuning range than a contraction-based mechanism. Here, we develop an expansion-tunable soft lens module using a disk-type dielectric elastomer actuator (DEA) that creates axially symmetric pulling forces on a soft lens. Adopted from a biological accommodation mechanism in human eyes, a soft lens at the annular center of a disk-type DEA pair is efficiently stretched to change the focal length in a highly reliable manner. A soft lens with a diameter of 3 mm shows a 65.7% change in the focal length (14.3–23.7 mm) under a dynamic driving voltage signal control. We confirm a quadratic relation between lens expansion and focal length that leads to the large focal length tunability obtainable in the proposed approach. The fabricated tunable lens module can be used for soft, lightweight, and compact vision components in robots, drones, vehicles, and so on.

link (url) DOI [BibTex]


Task-Driven PCA-Based Design Optimization of Wearable Cutaneous Devices

Pacchierotti, C., Young, E. M., Kuchenbecker, K. J.

IEEE Robotics and Automation Letters, 3(3):2214-2221, July 2018, Presented at ICRA 2018 (article)

Abstract
Small size and low weight are critical requirements for wearable and portable haptic interfaces, making it essential to work toward the optimization of their sensing and actuation systems. This paper presents a new approach for task-driven design optimization of fingertip cutaneous haptic devices. Given one (or more) target tactile interactions to render and a cutaneous device to optimize, we evaluate the minimum number and best configuration of the device’s actuators to minimize the estimated haptic rendering error. First, we calculate the motion needed for the original cutaneous device to render the considered target interaction. Then, we run a principal component analysis (PCA) to search for possible couplings between the original motor inputs, looking also for the best way to reconfigure them. If some couplings exist, we can re-design our cutaneous device with fewer motors, optimally configured to render the target tactile sensation. The proposed approach is quite general and can be applied to different tactile sensors and cutaneous devices. We validated it using a BioTac tactile sensor and custom plate-based 3-DoF and 6-DoF fingertip cutaneous devices, considering six representative target tactile interactions. The algorithm was able to find couplings between each device’s motor inputs, proving it to be a viable approach to optimize the design of wearable and portable cutaneous devices. Finally, we present two examples of optimized designs for our 3-DoF fingertip cutaneous device.
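The PCA step can be sketched in a few lines on synthetic motor commands (a hypothetical 6-motor device driven by 3 latent inputs, not the paper's recorded data):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
# Hypothetical 6-motor device whose commands actually live on 3 latent inputs,
# i.e., some motor inputs are coupled.
latent = rng.normal(size=(T, 3))
commands = latent @ rng.normal(size=(3, 6))      # (T, 6) motor-command matrix

X = commands - commands.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
var_ratio = s**2 / np.sum(s**2)
n_needed = int(np.sum(s > 1e-8 * s[0]))          # numerical rank = independent inputs

print("explained variance ratios:", np.round(var_ratio, 3))
print("independent motor inputs needed:", n_needed)
```

A real device would replace the synthetic `commands` with recorded motor inputs for the target tactile interaction; the leading components then indicate how the remaining motors can be coupled or removed.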

link (url) DOI [BibTex]


Teaching a Robot Bimanual Hand-Clapping Games via Wrist-Worn IMUs

Fitter, N. T., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 5(85), July 2018 (article)

Abstract
Colleagues often shake hands in greeting, friends connect through high fives, and children around the world rejoice in hand-clapping games. As robots become more common in everyday human life, they will have the opportunity to join in these social-physical interactions, but few current robots are intended to touch people in friendly ways. This article describes how we enabled a Baxter Research Robot to both teach and learn bimanual hand-clapping games with a human partner. Our system monitors the user's motions via a pair of inertial measurement units (IMUs) worn on the wrists. We recorded a labeled library of 10 common hand-clapping movements from 10 participants; this dataset was used to train an SVM classifier to automatically identify hand-clapping motions from previously unseen participants with a test-set classification accuracy of 97.0%. Baxter uses these sensors and this classifier to quickly identify the motions of its human gameplay partner, so that it can join in hand-clapping games. This system was evaluated by N = 24 naïve users in an experiment that involved learning sequences of eight motions from Baxter, teaching Baxter eight-motion game patterns, and completing a free interaction period. The motion classification accuracy in this less structured setting was 85.9%, primarily due to unexpected variations in motion timing. The quantitative task performance results and qualitative participant survey responses showed that learning games from Baxter was significantly easier than teaching games to Baxter, and that the teaching role caused users to consider more teamwork aspects of the gameplay. Over the course of the experiment, people felt more understood by Baxter and became more willing to follow the example of the robot. Users felt uniformly safe interacting with Baxter, and they expressed positive opinions of Baxter and reported fun interacting with the robot. 
Taken together, the results indicate that this robot achieved credible social-physical interaction with humans and that its ability to both lead and follow systematically changed the human partner's experience.
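A minimal sketch of the classification idea, with synthetic IMU-style features standing in for the study's recorded dataset (the feature layout and class geometry are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two toy motion classes, each represented by a 6-D feature vector
# (e.g., per-axis acceleration means and variances over a window).
X0 = rng.normal(loc=0.0, scale=0.3, size=(60, 6))
X1 = rng.normal(loc=1.5, scale=0.3, size=(60, 6))
X = np.vstack([X0, X1])
y = np.array([0] * 60 + [1] * 60)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])   # train on every other sample
accuracy = clf.score(X[1::2], y[1::2])        # test on the held-out half
print(f"test accuracy: {accuracy:.2f}")
```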

DOI [BibTex]


Real-time Perception meets Reactive Motion Generation

(Best Systems Paper Finalists - Amazon Robotics Best Paper Awards in Manipulation)

Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.

IEEE Robotics and Automation Letters, 3(3):1864-1871, July 2018 (article)

Abstract
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system where real-time object and robot tracking as well as ambient world modeling provides the necessary input to feedback controllers and continuous motion optimizers. Specifically, they provide attractive and repulsive potentials based on which the controllers and motion optimizer can online compute movement policies at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. In 333 experiments, we show the robustness and accuracy of the proposed system.
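The attractive and repulsive potentials can be sketched as follows; the gains, potential shapes, and influence radius are assumptions for illustration, not the paper's controllers or motion optimizer:

```python
import numpy as np

def velocity_command(x, goal, obstacle, k_att=1.0, k_rep=0.5, rho0=1.0):
    """Negative gradient of an attractive plus a repulsive potential at x."""
    v = -k_att * (x - goal)                  # attractive: quadratic well at the goal
    d = np.linalg.norm(x - obstacle)
    if d < rho0:                             # repulsive: active inside radius rho0
        v += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (x - obstacle) / d
    return v

x = np.array([0.0, 0.5, 0.0])
v = velocity_command(x, goal=np.array([1.0, 0.0, 0.0]),
                     obstacle=np.array([0.0, 0.6, 0.0]))
print(v)  # points toward the goal while pushing away from the nearby obstacle
```

In the paper's system, the analogous potentials are fed by real-time object and robot tracking, so the command field changes as the perceived scene changes.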


arxiv video video link (url) DOI Project Page [BibTex]


Oncilla robot: a versatile open-source quadruped research robot with compliant pantograph legs

Sproewitz, A., Tuleu, A., Ajallooeian, M., Vespignani, M., Moeckel, R., Eckert, P., D’Haene, M., Degrave, J., Nordmann, A., Schrauwen, B., Steil, J., Ijspeert, A. J.

Frontiers in Robotics and AI, 5(67), June 2018, arXiv: 1803.06259 (article)

Abstract
We present Oncilla robot, a novel mobile, quadruped legged locomotion machine. This large-cat-sized, 5.1 kg robot is one of a recent, bioinspired class of legged robots designed with the capability of model-free locomotion control. Animal legged locomotion in rough terrain is clearly shaped by sensor feedback systems. Results with Oncilla robot show that agile and versatile locomotion is possible without sensory signals to some extent, and tracking becomes robust when feedback control is added (Ajallooeian 2015). By incorporating mechanical and control blueprints inspired from animals, and by observing the resulting robot locomotion characteristics, we aim to understand the contribution of individual components. Legged robots have a wide mechanical and control design parameter space, and a unique potential as research tools to investigate principles of biomechanics and legged locomotion control. But the hardware and controller design can be a steep initial hurdle for academic research. To facilitate the easy start and development of legged robots, Oncilla robot's blueprints are available through open-source. [...]

link (url) DOI Project Page [BibTex]


Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

arXiv, May 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with fully supervised baselines and outperforms data-driven approaches, while requiring less supervision and being significantly faster.

PDF Project Page [BibTex]


Automatically Rating Trainee Skill at a Pediatric Laparoscopic Suturing Task

Oquendo, Y. A., Riddle, E. W., Hiller, D., Blinman, T. A., Kuchenbecker, K. J.

Surgical Endoscopy, 32(4):1840-1857, April 2018 (article)

DOI [BibTex]


Electroelastic modeling of thin-laminated composite plates with surface-bonded piezo-patches using Rayleigh–Ritz method

Gozum, M. M., Aghakhani, A., Serhat, G., Basdogan, I.

Journal of Intelligent Material Systems and Structures, 29(10):2192–2205, March 2018 (article)

Abstract
Laminated composite panels are extensively used in various engineering applications. Piezoelectric transducers can be integrated into such composite structures for a variety of vibration control and energy harvesting applications. Analyzing the structural dynamics of such electromechanical systems requires precise modeling tools which properly consider the coupling between the piezoelectric elements and the laminates. Although previous analytical models in the literature cover vibration analysis of laminated composite plates with fully covered piezoelectric layers, they do not provide a formulation for modeling the piezoelectric patches that partially cover the plate surface. In this study, a methodology for vibration analysis of laminated composite plates with surface-bonded piezo-patches is developed. The Rayleigh–Ritz method is used to solve the modal analysis and obtain the frequency response functions. The developed model includes the mass and stiffness contributions of the piezo-patches as well as the two-way electromechanical coupling effect. Moreover, an accelerated method is developed for reducing the computation time of the modal analysis solution. For validation, system-level finite element simulations are performed in ANSYS software. The results show that the developed analytical model can be utilized for accurate and efficient analysis and design of laminated composite plates with surface-bonded piezo-patches.
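After Rayleigh–Ritz discretization, the modal analysis reduces to a generalized eigenvalue problem K q = ω² M q; a toy two-degree-of-freedom example (made-up stiffness and mass values, not the paper's piezo-laminate matrices) shows the computation:

```python
import numpy as np
from scipy.linalg import eigh

K = np.array([[4.0, -1.0],
              [-1.0, 2.0]])   # toy stiffness matrix
M = np.array([[1.0, 0.0],
              [0.0, 0.5]])    # toy mass matrix

# Generalized symmetric-definite eigenproblem K q = w2 * M q;
# eigenvalues are the squared natural frequencies, in ascending order.
w2, modes = eigh(K, M)
freqs = np.sqrt(w2)
print("natural frequencies (rad/s):", np.round(freqs, 3))
```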

DOI [BibTex]


Electro-Active Polymer Based Soft Tactile Interface for Wearable Devices

Mun, S., Yun, S., Nam, S., Park, S. K., Park, S., Park, B. J., Lim, J. M., Kyung, K. U.

IEEE Transactions on Haptics, 11(1):15-21, February 2018 (article)

Abstract
This paper reports soft-actuator-based tactile stimulation interfaces applicable to wearable devices. The soft actuator is prepared by multi-layered accumulation of thin electro-active polymer (EAP) films. The multi-layered actuator is designed to produce electrically-induced convex protrusive deformation, which can be dynamically programmed for a wide range of tactile stimuli. The maximum vertical protrusion is 650 μm and the output force is up to 255 mN. The soft actuators are embedded into the fingertip part of a glove and the front part of a forearm band, respectively. We have conducted two kinds of experiments with 15 subjects. Perceived magnitudes of the actuator's protrusion and vibrotactile intensity were measured at frequencies of 1 Hz and 191 Hz, respectively. Analysis of the user tests shows that participants perceive variation of protrusion height at the finger pad and modulation of vibration intensity through the proposed soft-actuator-based tactile interface.

link (url) DOI [BibTex]


Robotic Motion Learning Framework to Promote Social Engagement

Burns, R., Jeon, M., Park, C. H.

Applied Sciences, 8(2):241, February 2018, Special Issue "Social Robotics" (article)

Abstract
Imitation is a powerful component of communication between people, and it poses an important implication in improving the quality of interaction in the field of human–robot interaction (HRI). This paper discusses a novel framework designed to improve human–robot interaction through robotic imitation of a participant’s gestures. In our experiment, a humanoid robotic agent socializes with and plays games with a participant. For the experimental group, the robot additionally imitates one of the participant’s novel gestures during a play session. We hypothesize that the robot’s use of imitation will increase the participant’s openness towards engaging with the robot. Experimental results from a user study of 12 subjects show that post-imitation, experimental subjects displayed a more positive emotional state, had higher instances of mood contagion towards the robot, and interpreted the robot to have a higher level of autonomy than their control group counterparts did. These results point to an increased participant interest in engagement fueled by personalized imitation during interaction.

link (url) DOI [BibTex]


Distributed Event-Based State Estimation for Networked Systems: An LMI Approach

Muehlebach, M., Trimpe, S.

IEEE Transactions on Automatic Control, 63(1):269-276, January 2018 (article)

arXiv (extended version) DOI Project Page [BibTex]


Memristor-enhanced humanoid robot control system–Part I: theory behind the novel memcomputing paradigm

Ascoli, A., Baumann, D., Tetzlaff, R., Chua, L. O., Hild, M.

International Journal of Circuit Theory and Applications, 46(1):155-183, 2018 (article)

DOI [BibTex]


Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes

Alhaija, H., Mustikovela, S., Mescheder, L., Geiger, A., Rother, C.

International Journal of Computer Vision (IJCV), 2018 (article)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. Through an extensive set of experiments, we determine the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios.
We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.

pdf Project Page [BibTex]


Immersive Low-Cost Virtual Reality Treatment for Phantom Limb Pain: Evidence from Two Cases

Ambron, E., Miller, A., Kuchenbecker, K. J., Buxbaum, L. J., Coslett, H. B.

Frontiers in Neurology, 9(67):1-7, 2018 (article)

DOI Project Page [BibTex]


Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

International Journal of Computer Vision (IJCV), 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and outperforms the data-driven approach of Engelmann et al., while requiring less supervision and being significantly faster.

pdf Project Page [BibTex]


Memristor-enhanced humanoid robot control system–Part II: circuit theoretic model and performance analysis

Baumann, D., Ascoli, A., Tetzlaff, R., Chua, L. O., Hild, M.

International Journal of Circuit Theory and Applications, 46(1):184-220, 2018 (article)

DOI [BibTex]


Tactile Masking by Electrovibration

Vardar, Y., Güçlü, B., Basdogan, C.

IEEE Transactions on Haptics, 11(4):623-635, 2018 (article)

Abstract
Future touch screen applications will include multiple tactile stimuli displayed simultaneously or consecutively to a single finger or multiple fingers. These applications should be designed by considering the human tactile masking mechanism, since it is known that presenting one stimulus may interfere with the perception of the other. In this study, we investigate the effect of masking on tactile perception of electrovibration displayed on touch screens. Through conducting psychophysical experiments with nine subjects, we measured the masked thresholds of sinusoidal electrovibration bursts (125 Hz) under two masking conditions: simultaneous and pedestal. The masking stimuli were noise bursts, applied at five different sensation levels varying from 2 to 22 dB SL, also presented by electrovibration. For each subject, the detection thresholds were elevated as linear functions of masking levels for both masking types. We observed that the masking effectiveness was larger with pedestal masking than simultaneous masking. Moreover, in order to investigate the effect of tactile masking on our haptic perception of edge sharpness, we compared the perceived sharpness of edges separating two textured regions displayed with and without various masking stimuli. Our results suggest that sharpness perception depends on the local contrast between background and foreground stimuli, which varies as a function of masking amplitude and activation levels of frequency-dependent psychophysical channels.
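The reported linear dependence of masked threshold on masking level can be illustrated with a least-squares line through made-up threshold-shift data (the numbers below are invented, not the study's measurements):

```python
import numpy as np

masker_level = np.array([2.0, 7.0, 12.0, 17.0, 22.0])    # masker sensation level, dB SL
threshold_shift = np.array([1.1, 4.8, 9.2, 13.0, 17.4])  # threshold elevation, dB (made up)

# Degree-1 least-squares fit: threshold_shift ≈ slope * masker_level + intercept.
slope, intercept = np.polyfit(masker_level, threshold_shift, 1)
print(f"threshold elevation ≈ {slope:.2f} dB per dB of masking")
```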

vardar_toh2018 DOI [BibTex]


Object Scene Flow

Menze, M., Heipke, C., Geiger, A.

ISPRS Journal of Photogrammetry and Remote Sensing, 2018 (article)

Abstract
This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.

avg

Project Page [BibTex]


2014


Haptic Robotization of Human Body via Data-Driven Vibrotactile Feedback

Kurihara, Y., Takei, S., Nakai, Y., Hachisu, T., Kuchenbecker, K. J., Kajimoto, H.

Entertainment Computing, 5(4):485-494, December 2014 (article)

hi

[BibTex]

Wenn es was zu sagen gibt (When There Is Something to Say)

(Klaus Tschira Award 2014 in Computer Science)

Trimpe, S.

Bild der Wissenschaft, pages: 20-23, November 2014, (popular science article in German) (article)

am ics

PDF Project Page [BibTex]

Robotics and Neuroscience

Floreano, D., Ijspeert, A. J., Schaal, S.

Current Biology, 24(18):R910-R920, September 2014 (article)

am

[BibTex]

Modeling and Rendering Realistic Textures from Unconstrained Tool-Surface Interactions

Culbertson, H., Unwin, J., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 7(3):381-393, July 2014 (article)

hi

[BibTex]

Nonmyopic View Planning for Active Object Classification and Pose Estimation

Atanasov, N., Sankaran, B., Le Ny, J., Pappas, G., Daniilidis, K.

IEEE Transactions on Robotics, May 2014, clmc (article)

Abstract
One of the central problems in computer vision is the detection of semantically important objects and the estimation of their pose. Most of the work in object detection has been based on single image processing and its performance is limited by occlusions and ambiguity in appearance and geometry. This paper proposes an active approach to object detection by controlling the point of view of a mobile depth camera. When an initial static detection phase identifies an object of interest, several hypotheses are made about its class and orientation. The sensor then plans a sequence of viewpoints, which balances the amount of energy used to move with the chance of identifying the correct hypothesis. We formulate an active M-ary hypothesis testing problem, which includes sensor mobility, and solve it using a point-based approximate POMDP algorithm. The validity of our approach is verified through simulation and real-world experiments with the PR2 robot. The results suggest a significant improvement over static object detection.
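The trade-off this abstract describes, balancing movement cost against the chance of identifying the correct hypothesis, can be sketched with a greedy one-step-lookahead view planner. This is a toy illustration, not the paper's POMDP solution; all names and numbers are hypothetical:

```python
import math

# Toy sketch: pick the next viewpoint that best trades off motion cost
# against the expected reduction in entropy over the hypotheses.

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_posterior_entropy(prior, likelihoods):
    """likelihoods[h][z] = P(observation z | hypothesis h) at this viewpoint."""
    n_obs = len(likelihoods[0])
    h_exp = 0.0
    for z in range(n_obs):
        joint = [prior[h] * likelihoods[h][z] for h in range(len(prior))]
        pz = sum(joint)
        if pz > 0:
            h_exp += pz * entropy([j / pz for j in joint])
    return h_exp

def pick_viewpoint(prior, viewpoints, cost_weight=0.1):
    """Greedy one-step lookahead: maximize info gain minus weighted motion cost."""
    h0 = entropy(prior)
    best = max(
        viewpoints,
        key=lambda v: (h0 - expected_posterior_entropy(prior, v["likelihoods"]))
        - cost_weight * v["cost"],
    )
    return best["name"]

# Two hypotheses; one far but discriminative viewpoint, one near but useless.
prior = [0.5, 0.5]
viewpoints = [
    {"name": "far_side", "cost": 2.0,
     "likelihoods": [[0.9, 0.1], [0.1, 0.9]]},   # separates the hypotheses
    {"name": "near_front", "cost": 0.5,
     "likelihoods": [[0.5, 0.5], [0.5, 0.5]]},   # tells us nothing
]
print(pick_viewpoint(prior, viewpoints))  # → far_side
```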

am

Web pdf link (url) [BibTex]

3D Traffic Scene Understanding from Movable Platforms

Geiger, A., Lauer, M., Wojek, C., Stiller, C., Urtasun, R.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(5):1012-1025, IEEE, Los Alamitos, CA, May 2014 (article)

Abstract
In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow and occupancy grids. For each of these cues we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.

avg ps

pdf link (url) [BibTex]

Data-Driven Grasp Synthesis - A Survey

Bohg, J., Morales, A., Asfour, T., Kragic, D.

IEEE Transactions on Robotics, 30, pages: 289-309, IEEE, April 2014 (article)

Abstract
We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.

am

PDF link (url) DOI Project Page [BibTex]

Targets-Drives-Means: A declarative approach to dynamic behavior specification with higher usability

Berenz, V., Suzuki, K.

Robotics and Autonomous Systems, 62(4):545-555, 2014 (article)

am

link (url) DOI [BibTex]


Roombots: A hardware perspective on 3D self-reconfiguration and locomotion with a homogeneous modular robot

Spröwitz, A., Moeckel, R., Vespignani, M., Bonardi, S., Ijspeert, A. J.

Robotics and Autonomous Systems, 62(7):1016-1033, Elsevier, Amsterdam, 2014 (article)

Abstract
In this work we provide hands-on experience on designing and testing a self-reconfiguring modular robotic system, Roombots (RB), to be used among others for adaptive furniture. In the long term, we envision that RB can be used to create sets of furniture, such as stools, chairs and tables that can move in their environment and that change shape and functionality during the day. In this article, we present the first, incremental results towards that long term vision. We demonstrate locomotion and reconfiguration of single and metamodule RB over 3D surfaces, in a structured environment equipped with embedded connection ports. RB assemblies can move around in non-structured environments, by using rotational or wheel-like locomotion. We show a proof of concept for transferring a Roombots metamodule (two in-series coupled RB modules) from the non-structured environment back into the structured grid, by aligning the RB metamodule in an entrapment mechanism. Finally, we analyze the remaining challenges to master the full Roombots scenario, and discuss the impact on future Roombots hardware.

dlg

DOI [BibTex]

A Limiting Property of the Matrix Exponential

Trimpe, S., D’Andrea, R.

IEEE Transactions on Automatic Control, 59(4):1105-1110, 2014 (article)

am ics

PDF DOI [BibTex]

Event-Based State Estimation With Variance-Based Triggering

Trimpe, S., D’Andrea, R.

IEEE Transactions on Automatic Control, 59(12):3266-3281, 2014 (article)

am ics

PDF Supplementary material DOI Project Page [BibTex]

Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs

Spröwitz, A. T., Ajallooeian, M., Tuleu, A., Ijspeert, A. J.

Frontiers in Computational Neuroscience, 8(27):1-13, 2014 (article)

Abstract
In this work we research the role of body dynamics in the complexity of kinematic patterns in a quadruped robot with compliant legs. Two gait patterns, lateral sequence walk and trot, along with leg length control patterns of different complexity were implemented in a modular, feed-forward locomotion controller. The controller was tested on a small, quadruped robot with compliant, segmented leg design, and led to self-stable and self-stabilizing robot locomotion. In-air stepping and on-ground locomotion leg kinematics were recorded, and the number and shapes of motion primitives accounting for 95% of the variance of kinematic leg data were extracted. This revealed that kinematic patterns resulting from feed-forward control had a lower complexity (in-air stepping, 2-3 primitives) than kinematic patterns from on-ground locomotion (~4 primitives), although both experiments applied identical motor patterns. The complexity of on-ground kinematic patterns increased through ground contact and mechanical entrainment. The complexity of observed kinematic on-ground data matches those reported from level-ground locomotion data of legged animals. Results indicate that a very low complexity of modular, rhythmic, feed-forward motor control is sufficient for level-ground locomotion in combination with passive compliant legged hardware.
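The primitive-counting step described in this abstract, finding how many components account for 95% of the variance of recorded leg kinematics, is essentially principal component analysis. A sketch on synthetic data (the paper's recordings are not reproduced here):

```python
import numpy as np

# Sketch: count how many principal components are needed to explain 95% of
# the variance of multi-joint trajectory data. Data below are synthetic.

def n_primitives(data, var_frac=0.95):
    """data: (samples, joints). Returns #components covering var_frac variance."""
    centered = data - data.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    var = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var), var_frac) + 1)

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
# Eight "joints" driven by two underlying oscillations plus small noise:
basis = np.stack([np.sin(t), np.cos(2 * t)], axis=1)          # (200, 2)
mixing = np.array([[1.0, 0.5, -0.3, 0.8, 0.2, -0.6, 0.4, 0.7],
                   [0.3, -0.9, 0.6, 0.1, -0.5, 0.7, 0.8, -0.2]])
data = basis @ mixing + 0.01 * rng.normal(size=(200, 8))

print(n_primitives(data))  # → 2 underlying primitives recovered
```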

dlg

link (url) DOI [BibTex]

Perspective: Intelligent Systems: Bits and Bots

Spatz, J. P., Schaal, S.

Nature, (509), 2014, clmc (article)

Abstract
What is intelligence, and can we create it? Animals can perceive, reason, react and learn, but they are just one example of an intelligent system. Intelligent systems could be robots as large as humans, helping with search-and-rescue operations in dangerous places, or smart devices as tiny as a cell, delivering drugs to a target within the body. Even computing systems can be intelligent, by perceiving the world, crawling the web and processing "big data" to extract and learn from complex information. Understanding not only how intelligence can be reproduced, but also how to build systems that put these ideas into practice, will be a challenge. Small intelligent systems will require new materials and fabrication methods, as well as compact information processors and power sources. And for nano-sized systems, the rules change altogether. The laws of physics operate very differently at tiny scales: for a nanorobot, swimming through water is like struggling through treacle. Researchers at the Max Planck Institute for Intelligent Systems have begun to solve these problems by developing new computational methods, experimenting with unique robotic systems and fabricating tiny, artificial propellers, like bacterial flagella, to propel nanocreations through their environment.

am

PDF link (url) [BibTex]

An autonomous manipulation system based on force control and optimization

Righetti, L., Kalakrishnan, M., Pastor, P., Binney, J., Kelly, J., Voorhies, R. C., Sukhatme, G. S., Schaal, S.

Autonomous Robots, 36(1-2):11-30, January 2014 (article)

Abstract
In this paper we present an architecture for autonomous manipulation. Our approach is based on the belief that contact interactions during manipulation should be exploited to improve dexterity and that optimizing motion plans is useful to create more robust and repeatable manipulation behaviors. We therefore propose an architecture where state of the art force/torque control and optimization-based motion planning are the core components of the system. We give a detailed description of the modules that constitute the complete system and discuss the challenges inherent to creating such a system. We present experimental results for several grasping and manipulation tasks to demonstrate the performance and robustness of our approach.

am mg

link (url) DOI [BibTex]

Learning of grasp selection based on shape-templates

Herzog, A., Pastor, P., Kalakrishnan, M., Righetti, L., Bohg, J., Asfour, T., Schaal, S.

Autonomous Robots, 36(1-2):51-65, January 2014 (article)

Abstract
The ability to grasp unknown objects still remains an unsolved problem in the robotics community. One of the challenges is to choose an appropriate grasp configuration, i.e., the 6D pose of the hand relative to the object and its finger configuration. In this paper, we introduce an algorithm that is based on the assumption that similarly shaped objects can be grasped in a similar way. It is able to synthesize good grasp poses for unknown objects by finding the best matching object shape templates associated with previously demonstrated grasps. The grasp selection algorithm is able to improve over time by using the information of previous grasp attempts to adapt the ranking of the templates to new situations. We tested our approach on two different platforms, the Willow Garage PR2 and the Barrett WAM robot, which have very different hand kinematics. Furthermore, we compared our algorithm with other grasp planners and demonstrated its superior performance. The results presented in this paper show that the algorithm is able to find good grasp configurations for a large set of unknown objects from a relatively small set of demonstrations, and does improve its performance over time.

am mg

link (url) DOI [BibTex]

2013


A Practical System For Recording Instrument Interactions During Live Robotic Surgery

McMahan, W., Gomez, E. D., Chen, L., Bark, K., Nappo, J. C., Koch, E. I., Lee, D. I., Dumon, K., Williams, N., Kuchenbecker, K. J.

Journal of Robotic Surgery, 7(4):351-358, 2013 (article)

hi

[BibTex]

3-D Object Reconstruction of Symmetric Objects by Fusing Visual and Tactile Sensing

Illonen, J., Bohg, J., Kyrki, V.

The International Journal of Robotics Research, 33(2):321-341, Sage, October 2013 (article)

Abstract
In this work, we propose to reconstruct a complete 3-D model of an unknown object by fusion of visual and tactile information while the object is grasped. Assuming the object is symmetric, a first hypothesis of its complete 3-D shape is generated. A grasp is executed on the object with a robotic manipulator equipped with tactile sensors. Given the detected contacts between the fingers and the object, the initial full object model including the symmetry parameters can be refined. This refined model will then allow the planning of more complex manipulation tasks. The main contribution of this work is an optimal estimation approach for the fusion of visual and tactile data applying the constraint of object symmetry. The fusion is formulated as a state estimation problem and solved with an iterative extended Kalman filter. The approach is validated experimentally using both artificial and real data from two different robotic platforms.
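The core fusion idea in this abstract, combining visual and tactile measurements weighted by their uncertainty, can be illustrated with a scalar Kalman-style update. This is a toy sketch, not the paper's full iterative extended Kalman filter; the sensor values and variances are hypothetical:

```python
# Toy sketch: two sensors with different noise levels measure the same
# quantity (say, an object radius under a symmetry assumption), and a
# Kalman-style update combines them weighted by inverse variance.

def kalman_update(mean, var, z, z_var):
    """Scalar Kalman/Bayes update of N(mean, var) with measurement z ~ N(., z_var)."""
    k = var / (var + z_var)          # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

# Vague prior, then a noisy visual estimate and a precise tactile contact:
mean, var = 0.0, 1e6
mean, var = kalman_update(mean, var, 5.2, 0.5**2)   # vision: radius ≈ 5.2 cm
mean, var = kalman_update(mean, var, 5.0, 0.1**2)   # touch: radius ≈ 5.0 cm
print(round(mean, 2), round(var, 4))  # → 5.01 0.0096
```

The fused estimate sits close to the more precise tactile reading, and the posterior variance is smaller than either sensor's alone.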

am

Web DOI Project Page [BibTex]

Vision meets Robotics: The KITTI Dataset

Geiger, A., Lenz, P., Stiller, C., Urtasun, R.

International Journal of Robotics Research, 32(11):1231-1237, Sage Publishing, September 2013 (article)

Abstract
We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.

avg ps

pdf DOI [BibTex]

Vibrotactile Display: Perception, Technology, and Applications

Choi, S., Kuchenbecker, K. J.

Proceedings of the IEEE, 101(9):2093-2104, September 2013 (article)

hi

[BibTex]

ROS Open-source Audio Recognizer: ROAR Environmental Sound Detection Tools for Robot Programming

Romano, J. M., Brindza, J. P., Kuchenbecker, K. J.

Autonomous Robots, 34(3):207-215, April 2013 (article)

hi

[BibTex]

In Vivo Validation of a System for Haptic Feedback of Tool Vibrations in Robotic Surgery

Bark, K., McMahan, W., Remington, A., Gewirtz, J., Wedmid, A., Lee, D. I., Kuchenbecker, K. J.

Surgical Endoscopy, 27(2):656-664, February 2013, dynamic article (paper plus video), available at http://www.springerlink.com/content/417j532708417342/ (article)

hi

[BibTex]

Perception of Springs with Visual and Proprioceptive Motion Cues: Implications for Prosthetics

Gurari, N., Kuchenbecker, K. J., Okamura, A. M.

IEEE Transactions on Human-Machine Systems, 43, pages: 102-114, January 2013, Video: http://www.youtube.com/watch?v=DBRw87Wk29E (article)

hi

[BibTex]

Towards Dynamic Trot Gait Locomotion: Design, Control, and Experiments with Cheetah-cub, a Compliant Quadruped Robot

Spröwitz, A., Tuleu, A., Vespignani, M., Ajallooeian, M., Badri, E., Ijspeert, A. J.

The International Journal of Robotics Research, 32(8):932-950, Sage Publications, Inc., Cambridge, MA, 2013 (article)

Abstract
We present the design of a novel compliant quadruped robot, called Cheetah-cub, and a series of locomotion experiments with fast trotting gaits. The robot's leg configuration is based on a spring-loaded, pantograph mechanism with multiple segments. A dedicated open-loop locomotion controller was derived and implemented. Experiments were run in simulation and in hardware on flat terrain and with a step down, demonstrating the robot's self-stabilizing properties. The robot reached a running trot with short flight phases with a maximum Froude number of FR = 1.30, or 6.9 body lengths per second. Morphological parameters such as the leg design also played a role. By adding distal in-series elasticity, self-stability and maximum robot speed improved. Our robot has several advantages, especially when compared with larger and stiffer quadruped robot designs. (1) It is, to the best of the authors' knowledge, the fastest of all quadruped robots below 30 kg (in terms of Froude number and body lengths per second). (2) It shows self-stabilizing behavior over a large range of speeds with open-loop control. (3) It is lightweight, compact, and electrically powered. (4) It is cheap, easy to reproduce, robust, and safe to handle. This makes it an excellent tool for research of multi-segment legs in quadruped robots.
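The dimensionless speed quoted in this abstract uses the Froude number commonly applied to legged locomotion, Fr = v² / (g·h), with h a characteristic leg length. A quick sanity check with assumed values (the speed and leg length below are back-of-the-envelope guesses, not numbers from the paper):

```python
# Froude number sketch for legged locomotion: Fr = v^2 / (g * h).
# The speed and leg length below are hypothetical, for illustration only.

G = 9.81  # gravitational acceleration, m/s^2

def froude(speed_m_s, leg_length_m):
    return speed_m_s**2 / (G * leg_length_m)

# A ~1.4 m/s trot with ~0.16 m effective leg length:
fr = froude(1.4, 0.16)
print(round(fr, 2))  # → 1.25, the same order as the reported FR = 1.30
```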

dlg

Youtube1 Youtube2 Youtube3 Youtube4 Youtube5 DOI Project Page [BibTex]

Optimal control of reaching includes kinematic constraints

Mistry, M., Theodorou, E., Schaal, S., Kawato, M.

Journal of Neurophysiology, 2013, clmc (article)

Abstract
We investigate adaptation under a reaching task with an acceleration-based force field perturbation designed to alter the nominal straight hand trajectory in a potentially benign manner: pushing the hand off course in one direction before subsequently restoring it towards the target. In this particular task, an explicit strategy to reduce motor effort requires a distinct deviation from the nominal rectilinear hand trajectory. Rather, our results display a clear directional preference during learning, as subjects adapted perturbed curved trajectories towards their initial baselines. We model this behavior using the framework of stochastic optimal control theory and an objective function that trades off the discordant requirements of 1) target accuracy, 2) motor effort, and 3) desired trajectory. Our work addresses the underlying objective of a reaching movement, and we suggest that robustness, particularly against internal model uncertainty, is as essential to the reaching task as terminal accuracy and energy efficiency.

am

PDF [BibTex]

Expectation and Attention in Hierarchical Auditory Prediction

Chennu, S., Noreika, V., Gueorguiev, D., Blenkmann, A., Kochen, S., Ibáñez, A., Owen, A. M., Bekinschtein, T. A.

Journal of Neuroscience, 33(27):11194-11205, Society for Neuroscience, 2013 (article)

Abstract
Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.

hi

link (url) DOI [BibTex]

Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors

Ijspeert, A., Nakanishi, J., Pastor, P., Hoffmann, H., Schaal, S.

Neural Computation, 25(2):328-373, 2013, clmc (article)

Abstract
Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics.
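The point-attractor case this abstract describes can be sketched in a few lines: a damped spring system pulled towards a goal, with a phase-gated forcing term that vanishes over time (set to zero here, leaving the plain attractor). The gains and step sizes are illustrative choices, not the paper's:

```python
# Minimal point-attractor sketch of a dynamical movement primitive:
# a critically damped spring (beta = alpha/4) plus a phase-gated forcing
# term f(x). With f = 0 the system simply converges to the goal g.

def dmp_rollout(y0, g, alpha=25.0, dt=0.001, steps=2000, forcing=lambda x: 0.0):
    beta = alpha / 4.0
    y, yd, x = y0, 0.0, 1.0          # position, velocity, phase
    for _ in range(steps):
        ydd = alpha * (beta * (g - y) - yd) + forcing(x)
        yd += ydd * dt
        y += yd * dt
        x += -2.0 * x * dt           # canonical system: phase decays to 0
    return y

print(round(dmp_rollout(y0=0.0, g=1.0), 3))  # → 1.0 (converges to the goal)
```

A learned forcing term, weighted by the decaying phase x, would shape the transient of this rollout without changing the guaranteed convergence to g.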

am

link (url) [BibTex]
