

2018


Softness, Warmth, and Responsiveness Improve Robot Hugs

Block, A. E., Kuchenbecker, K. J.

International Journal of Social Robotics, 11(1):49-64, October 2018 (article)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, roboticists are naturally interested in having robots one day hug humans as seamlessly as humans hug other humans. This project's purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a soft, warm, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty relatively young and rather technical participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.
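As a purely illustrative aside, the preferred behavior described above (squeeze, then release as soon as the person starts to withdraw) can be sketched as a simple pressure-threshold policy. This is not the authors' PR2 controller; the sensor model and thresholds below are hypothetical.

```python
# Illustrative sketch only (not the authors' PR2 controller): release the hug as
# soon as the sensed contact pressure indicates the person is pulling away.
# The sensor model and thresholds are hypothetical placeholders.
import random

RELEASE_FRACTION = 0.5   # release when pressure falls below this fraction of baseline
MAX_STEPS = 1000         # 10 ms control steps; safety limit on hug duration

def read_chest_pressure(step):
    """Hypothetical torso sensor: the user hugs firmly, then withdraws after ~3 s."""
    return (1.0 if step < 300 else 0.2) + random.uniform(-0.05, 0.05)

def hug():
    baseline = read_chest_pressure(0)
    for step in range(MAX_STEPS):
        if read_chest_pressure(step) < RELEASE_FRACTION * baseline:
            print(f"releasing arms at t = {step * 0.01:.2f} s")
            return
    print("timeout reached, releasing arms")

if __name__ == "__main__":
    hug()
```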

link (url) DOI Project Page [BibTex]

Complexity, Rate, and Scale in Sliding Friction Dynamics Between a Finger and Textured Surface

Khojasteh, B., Janko, M., Visell, Y.

Scientific Reports, 8(13710), September 2018 (article)

Abstract
Sliding friction between the skin and a touched surface is highly complex, but lies at the heart of our ability to discriminate surface texture through touch. Prior research has elucidated neural mechanisms of tactile texture perception, but our understanding of the nonlinear dynamics of frictional sliding between the finger and textured surfaces, from which the neural signals that encode texture originate, is incomplete. To address this, we compared measurements from human fingertips sliding against textured counter surfaces with predictions of numerical simulations of a model finger that resembled a real finger, with similar geometry, tissue heterogeneity, hyperelasticity, and interfacial adhesion. Modeled and measured forces exhibited similar complex, nonlinear sliding friction dynamics, force fluctuations, and prominent regularities related to the surface geometry. We comparatively analysed measured and simulated force patterns in matched conditions using linear and nonlinear methods, including recurrence analysis. The model had greatest predictive power for faster sliding and for surface textures with length scales greater than about one millimeter. This could be attributed to the tendency of sliding at slower speeds, or on finer surfaces, to complexly engage fine features of skin or surface, such as fingerprints or surface asperities. The results elucidate the dynamical forces felt during tactile exploration and highlight the challenges involved in the biological perception of surface texture via touch.
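For readers unfamiliar with recurrence analysis, the sketch below shows the basic computation on a synthetic 1-D friction-force signal: time-delay embedding followed by a thresholded distance matrix. It is a generic illustration, not the authors' analysis pipeline, and the signal, embedding parameters, and threshold are arbitrary choices.

```python
# Generic recurrence-analysis sketch (not the authors' pipeline): time-delay
# embedding of a 1-D force signal, then a thresholded recurrence matrix.
import numpy as np

def embed(x, dim=3, tau=5):
    """Time-delay embedding of a 1-D signal into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def recurrence_matrix(x, dim=3, tau=5, eps=0.05):
    """Binary recurrence matrix: 1 where embedded states lie within eps of each other."""
    X = embed(x, dim, tau)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (dists < eps).astype(int)

# Synthetic "friction force": a stick-slip-like ripple plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
force = 0.5 + 0.1 * np.sin(2 * np.pi * 50 * t) + 0.02 * rng.standard_normal(t.size)

R = recurrence_matrix(force)
print(f"recurrence rate: {R.mean():.3f}")
```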

DOI [BibTex]

Design of curved composite panels for optimal dynamic response using lamination parameters

Serhat, G., Basdogan, I.

Composites Part B: Engineering, 147, pages: 135–146, August 2018 (article)

Abstract
In this paper, the dynamic response of composite panels is investigated using lamination parameters as design variables. Finite element analyses are performed to observe the individual and combined effects of different panel aspect ratios, curvatures and boundary conditions on the dynamic responses. Fundamental frequency contours for curved panels are obtained in the lamination parameters domain and optimal points yielding maximum values are found. Subsequently, forced dynamic analyses are carried out to calculate equivalent radiated power (ERP) for the panels under harmonic pressure excitation. ERP contours at the maximum fundamental frequency are presented. Optimal lamination parameters providing minimum ERP are determined for different excitation frequencies and their effective frequency bands are shown. The relationship between the designs optimized for maximum fundamental frequency and minimum ERP responses is investigated to study the effectiveness of the frequency maximization technique. The results demonstrate the potential of using the lamination parameters technique in the design of curved composite panels for optimal dynamic response and provide valuable insight into the effect of various design parameters.

DOI [BibTex]

A Robust Soft Lens for Tunable Camera Application Using Dielectric Elastomer Actuators

Nam, S., Yun, S., Yoon, J. W., Park, S., Park, S. K., Mun, S., Park, B., Kyung, K.

Soft Robotics, Mary Ann Liebert, Inc., August 2018 (article)

Abstract
In tunable lenses, an expansion-based mechanism for dynamic focus adjustment can provide a larger focal length tuning range than a contraction-based mechanism. Here, we develop an expansion-tunable soft lens module using a disk-type dielectric elastomer actuator (DEA) that creates axially symmetric pulling forces on a soft lens. Adopted from the biological accommodation mechanism in human eyes, a soft lens at the annular center of a disk-type DEA pair is efficiently stretched to change the focal length in a highly reliable manner. A soft lens with a diameter of 3 mm shows a 65.7% change in the focal length (14.3–23.7 mm) under dynamic driving voltage signal control. We confirm a quadratic relation between lens expansion and focal length that leads to the large focal length tunability obtainable in the proposed approach. The fabricated tunable lens module can be used for soft, lightweight, and compact vision components in robots, drones, vehicles, and so on.

link (url) DOI [BibTex]

Task-Driven PCA-Based Design Optimization of Wearable Cutaneous Devices

Pacchierotti, C., Young, E. M., Kuchenbecker, K. J.

IEEE Robotics and Automation Letters, 3(3):2214-2221, July 2018, Presented at ICRA 2018 (article)

Abstract
Small size and low weight are critical requirements for wearable and portable haptic interfaces, making it essential to work toward the optimization of their sensing and actuation systems. This paper presents a new approach for task-driven design optimization of fingertip cutaneous haptic devices. Given one (or more) target tactile interactions to render and a cutaneous device to optimize, we evaluate the minimum number and best configuration of the device’s actuators to minimize the estimated haptic rendering error. First, we calculate the motion needed for the original cutaneous device to render the considered target interaction. Then, we run a principal component analysis (PCA) to search for possible couplings between the original motor inputs, looking also for the best way to reconfigure them. If some couplings exist, we can re-design our cutaneous device with fewer motors, optimally configured to render the target tactile sensation. The proposed approach is quite general and can be applied to different tactile sensors and cutaneous devices. We validated it using a BioTac tactile sensor and custom plate-based 3-DoF and 6-DoF fingertip cutaneous devices, considering six representative target tactile interactions. The algorithm was able to find couplings between each device’s motor inputs, proving it to be a viable approach to optimize the design of wearable and portable cutaneous devices. Finally, we present two examples of optimized designs for our 3-DoF fingertip cutaneous device.
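The central step, finding couplings among the original motor inputs with PCA, can be illustrated in a few lines. The sketch below is not the authors' code or data: it applies PCA (via the SVD) to a synthetic log of six actuator commands that secretly span a three-dimensional subspace and reports how many components are needed.

```python
# Illustrative PCA sketch (synthetic data, not the authors' code): check how few
# coupled inputs reproduce a recorded log of 6-DoF actuator commands.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic command log: 500 time steps of 6 motor commands that actually live
# on a 3-dimensional subspace (three couplings exist), plus small noise.
latent = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 6))
commands = latent @ mixing + 0.01 * rng.standard_normal((500, 6))

# PCA via SVD of the mean-centered command matrix.
centered = commands - commands.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

print("explained variance per component:", np.round(explained, 3))
n_needed = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
print(f"components needed for 99% of the command variance: {n_needed}")
```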

link (url) DOI [BibTex]

Robust Physics-based Motion Retargeting with Realistic Body Shapes

Borno, M. A., Righetti, L., Black, M. J., Delp, S. L., Fiume, E., Romero, J.

Computer Graphics Forum, 37, pages: 6:1-12, July 2018 (article)

Abstract
Motion capture is often retargeted to new, and sometimes drastically different, characters. When the characters take on realistic human shapes, however, we become more sensitive to the motion looking right. This means adapting it to be consistent with the physical constraints imposed by different body shapes. We show how to take realistic 3D human shapes, approximate them using a simplified representation, and animate them so that they move realistically using physically-based retargeting. We develop a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The approach automatically adapts the motion of the mocap subject to the body shape of a target subject. This motion respects the physical properties of the new body and every body shape results in a different and appropriate movement. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters. In an interactive environment, successful retargeting requires adapting the motion to unexpected external forces. We achieve robustness to such forces using a novel LQR-tree formulation. We show that the simulated motions look appropriate to each character’s anatomy and their actions are robust to perturbations.

pdf video Project Page Project Page [BibTex]

Teaching a Robot Bimanual Hand-Clapping Games via Wrist-Worn IMUs

Fitter, N. T., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 5(85), July 2018 (article)

Abstract
Colleagues often shake hands in greeting, friends connect through high fives, and children around the world rejoice in hand-clapping games. As robots become more common in everyday human life, they will have the opportunity to join in these social-physical interactions, but few current robots are intended to touch people in friendly ways. This article describes how we enabled a Baxter Research Robot to both teach and learn bimanual hand-clapping games with a human partner. Our system monitors the user's motions via a pair of inertial measurement units (IMUs) worn on the wrists. We recorded a labeled library of 10 common hand-clapping movements from 10 participants; this dataset was used to train an SVM classifier to automatically identify hand-clapping motions from previously unseen participants with a test-set classification accuracy of 97.0%. Baxter uses these sensors and this classifier to quickly identify the motions of its human gameplay partner, so that it can join in hand-clapping games. This system was evaluated by N = 24 naïve users in an experiment that involved learning sequences of eight motions from Baxter, teaching Baxter eight-motion game patterns, and completing a free interaction period. The motion classification accuracy in this less structured setting was 85.9%, primarily due to unexpected variations in motion timing. The quantitative task performance results and qualitative participant survey responses showed that learning games from Baxter was significantly easier than teaching games to Baxter, and that the teaching role caused users to consider more teamwork aspects of the gameplay. Over the course of the experiment, people felt more understood by Baxter and became more willing to follow the example of the robot. Users felt uniformly safe interacting with Baxter, and they expressed positive opinions of Baxter and reported fun interacting with the robot. Taken together, the results indicate that this robot achieved credible social-physical interaction with humans and that its ability to both lead and follow systematically changed the human partner's experience.
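To make the classification step concrete, the sketch below trains an SVM on hand-crafted features of synthetic wrist-IMU windows. It is only an illustration of the approach described in the abstract; the features, data, and classes are invented, not the authors' 10-motion library.

```python
# Illustrative sketch (synthetic data, not the authors' dataset or features):
# classify windowed wrist-IMU motions with an SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_per_class, n_classes, window = 60, 4, 50
t = np.linspace(0.0, 1.0, window)

X, y = [], []
for c in range(n_classes):
    for _ in range(n_per_class):
        # Fake 3-axis accelerometer window: each class moves with a different amplitude.
        sig = 0.5 * (c + 1) * np.sin(2 * np.pi * 2 * t)[:, None] + 0.3 * rng.standard_normal((window, 3))
        # Simple hand-crafted features: per-axis mean, standard deviation, and peak.
        X.append(np.concatenate([sig.mean(0), sig.std(0), np.abs(sig).max(0)]))
        y.append(c)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print(f"held-out accuracy on synthetic motions: {clf.score(X_test, y_test):.2f}")
```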

DOI [BibTex]

Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

arXiv, May 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with fully supervised baselines and outperforms data-driven approaches, while requiring less supervision and being significantly faster.
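A toy version of the amortized maximum-likelihood idea is sketched below: a frozen "shape prior" decoder maps a latent code to a dense 1-D shape, and an encoder network is trained to predict that code from sparse, noisy observations by minimizing the masked reconstruction error. This is a hedged illustration with synthetic data, not the paper's architecture or benchmarks.

```python
# Toy sketch of amortized maximum-likelihood fitting (synthetic 1-D "shapes",
# not the paper's model): learn to predict the latent code instead of optimizing
# it per observation.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, shape_dim, n_obs = 4, 64, 12

# Frozen "shape prior": here just a fixed random linear decoder.
decoder = nn.Linear(latent_dim, shape_dim)
for p in decoder.parameters():
    p.requires_grad_(False)

encoder = nn.Sequential(nn.Linear(shape_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def sample_batch(batch=32):
    z = torch.randn(batch, latent_dim)
    full = decoder(z)                                     # complete ground-truth shapes
    mask = (torch.rand(batch, shape_dim) < n_obs / shape_dim).float()
    obs = mask * (full + 0.05 * torch.randn_like(full))   # sparse, noisy observations
    return obs, mask

for step in range(1000):
    obs, mask = sample_batch()
    recon = decoder(encoder(obs))
    # Maximum likelihood under Gaussian noise = squared error on observed entries only.
    loss = ((mask * (recon - obs)) ** 2).sum() / mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final masked reconstruction error: {loss.item():.4f}")
```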

PDF Project Page Project Page [BibTex]

Automatically Rating Trainee Skill at a Pediatric Laparoscopic Suturing Task

Oquendo, Y. A., Riddle, E. W., Hiller, D., Blinman, T. A., Kuchenbecker, K. J.

Surgical Endoscopy, 32(4):1840-1857, April 2018 (article)

DOI [BibTex]

Electroelastic modeling of thin-laminated composite plates with surface-bonded piezo-patches using Rayleigh–Ritz method

Gozum, M. M., Aghakhani, A., Serhat, G., Basdogan, I.

Journal of Intelligent Material Systems and Structures, 29(10):2192–2205, March 2018 (article)

Abstract
Laminated composite panels are extensively used in various engineering applications. Piezoelectric transducers can be integrated into such composite structures for a variety of vibration control and energy harvesting applications. Analyzing the structural dynamics of such electromechanical systems requires precise modeling tools which properly consider the coupling between the piezoelectric elements and the laminates. Although previous analytical models in the literature cover vibration analysis of laminated composite plates with fully covered piezoelectric layers, they do not provide a formulation for modeling the piezoelectric patches that partially cover the plate surface. In this study, a methodology for vibration analysis of laminated composite plates with surface-bonded piezo-patches is developed. Rayleigh–Ritz method is used for solving the modal analysis and obtaining the frequency response functions. The developed model includes mass and stiffness contribution of the piezo-patches as well as the two-way electromechanical coupling effect. Moreover, an accelerated method is developed for reducing the computation time of the modal analysis solution. For validations, system-level finite element simulations are performed in ANSYS software. The results show that the developed analytical model can be utilized for accurate and efficient analysis and design of laminated composite plates with surface-bonded piezo-patches.
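The Rayleigh–Ritz idea at the core of the method can be shown on a much simpler structure. The sketch below estimates the fundamental frequency of a uniform, simply supported beam from a single admissible trial shape and compares it with the exact value; the beam properties are arbitrary example numbers and have nothing to do with the laminated plates studied in the paper.

```python
# Simplified Rayleigh-quotient sketch (uniform simply supported beam, example
# values only; not the paper's laminated-plate formulation with piezo-patches).
import numpy as np

E, I, rho, A, L = 70e9, 1e-8, 2700.0, 1e-4, 1.0   # aluminum-like beam properties
x = np.linspace(0.0, L, 2001)

def integrate(f, x):
    """Composite trapezoidal rule."""
    return float(np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2.0)

# Admissible trial shape satisfying the geometric BCs w(0) = w(L) = 0; its
# curvature is the constant phi'' = -2.
phi = x * (L - x)
phi_dd = -2.0 * np.ones_like(x)

omega_sq = integrate(E * I * phi_dd**2, x) / integrate(rho * A * phi**2, x)
omega_exact = (np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))
print(f"Rayleigh estimate: {np.sqrt(omega_sq):.1f} rad/s (exact: {omega_exact:.1f} rad/s)")
```

As expected for a single-mode Rayleigh-type estimate, the result slightly overestimates the exact fundamental frequency.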

DOI [BibTex]

Electro-Active Polymer Based Soft Tactile Interface for Wearable Devices

Mun, S., Yun, S., Nam, S., Park, S. K., Park, S., Park, B. J., Lim, J. M., Kyung, K. U.

IEEE Transactions on Haptics, 11(1):15-21, February 2018 (article)

Abstract
This paper reports soft-actuator-based tactile stimulation interfaces applicable to wearable devices. The soft actuator is prepared by multi-layered accumulation of thin electro-active polymer (EAP) films. The multi-layered actuator is designed to produce electrically induced convex protrusive deformation, which can be dynamically programmed for a wide range of tactile stimuli. The maximum vertical protrusion is 650 μm and the output force is up to 255 mN. The soft actuators are embedded into the fingertip part of a glove and the front part of a forearm band, respectively. We conducted two kinds of experiments with 15 subjects. Perceived magnitudes of the actuator's protrusion and vibrotactile intensity were measured at frequencies of 1 Hz and 191 Hz, respectively. Analysis of the user tests shows that participants perceive variation of protrusion height at the finger pad and modulation of vibration intensity through the proposed soft-actuator-based tactile interface.

link (url) DOI [BibTex]

Robotic Motion Learning Framework to Promote Social Engagement

Burns, R., Jeon, M., Park, C. H.

Applied Sciences, 8(2):241, February 2018, Special Issue "Social Robotics" (article)

Abstract
Imitation is a powerful component of communication between people, and it poses an important implication in improving the quality of interaction in the field of human–robot interaction (HRI). This paper discusses a novel framework designed to improve human–robot interaction through robotic imitation of a participant’s gestures. In our experiment, a humanoid robotic agent socializes with and plays games with a participant. For the experimental group, the robot additionally imitates one of the participant’s novel gestures during a play session. We hypothesize that the robot’s use of imitation will increase the participant’s openness towards engaging with the robot. Experimental results from a user study of 12 subjects show that post-imitation, experimental subjects displayed a more positive emotional state, had higher instances of mood contagion towards the robot, and interpreted the robot to have a higher level of autonomy than their control group counterparts did. These results point to an increased participant interest in engagement fueled by personalized imitation during interaction.

link (url) DOI [BibTex]

Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes

Alhaija, H., Mustikovela, S., Mescheder, L., Geiger, A., Rother, C.

International Journal of Computer Vision (IJCV), 2018 (article)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we determine the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.
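The core augmentation step, compositing a rendered object and its mask onto a real photograph, is easy to illustrate. The sketch below uses synthetic arrays in place of actual renders and photographs and is not the authors' rendering pipeline.

```python
# Minimal compositing sketch (synthetic stand-in arrays, not the authors' pipeline):
# alpha-blend a rendered object onto a real background and reuse the mask as a label.
import numpy as np

rng = np.random.default_rng(0)
H, W = 120, 160

background = rng.integers(0, 256, (H, W, 3), dtype=np.uint8)   # stands in for a real photo
render = np.zeros((H, W, 3), dtype=np.uint8)
alpha = np.zeros((H, W), dtype=np.float32)

# Fake rendered "car": a colored rectangle with a hard alpha mask.
render[40:80, 50:110] = (200, 30, 30)
alpha[40:80, 50:110] = 1.0

composite = alpha[..., None] * render + (1.0 - alpha[..., None]) * background
composite = composite.astype(np.uint8)

# The same mask directly yields a pixel-accurate instance label for training.
instance_mask = (alpha > 0.5).astype(np.uint8)
print("augmented image:", composite.shape, "| labeled object pixels:", int(instance_mask.sum()))
```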

pdf Project Page [BibTex]

Immersive Low-Cost Virtual Reality Treatment for Phantom Limb Pain: Evidence from Two Cases

Ambron, E., Miller, A., Kuchenbecker, K. J., Buxbaum, L. J., Coslett, H. B.

Frontiers in Neurology, 9(67):1-7, 2018 (article)

DOI Project Page [BibTex]

Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

International Journal of Computer Vision (IJCV), 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and outperforms the data-driven approach of Engelmann et al., while requiring less supervision and being significantly faster.

pdf Project Page [BibTex]

Tactile Masking by Electrovibration

Vardar, Y., Güçlü, B., Basdogan, C.

IEEE Transactions on Haptics, 11(4):623-635, 2018 (article)

Abstract
Future touch screen applications will include multiple tactile stimuli displayed simultaneously or consecutively to a single finger or multiple fingers. These applications should be designed by considering the human tactile masking mechanism, since it is known that presenting one stimulus may interfere with the perception of the other. In this study, we investigate the effect of masking on tactile perception of electrovibration displayed on touch screens. Through conducting psychophysical experiments with nine subjects, we measured the masked thresholds of sinusoidal electrovibration bursts (125 Hz) under two masking conditions: simultaneous and pedestal. The masking stimuli were noise bursts, applied at five different sensation levels varying from 2 to 22 dB SL, also presented by electrovibration. For each subject, the detection thresholds were elevated as linear functions of masking levels for both masking types. We observed that the masking effectiveness was larger with pedestal masking than simultaneous masking. Moreover, in order to investigate the effect of tactile masking on our haptic perception of edge sharpness, we compared the perceived sharpness of edges separating two textured regions displayed with and without various masking stimuli. Our results suggest that sharpness perception depends on the local contrast between background and foreground stimuli, which varies as a function of masking amplitude and activation levels of frequency-dependent psychophysical channels.
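The reported linear growth of the masked threshold with masker level can be summarized by a simple line fit; the numbers below are made up for illustration and are not the study's data.

```python
# Illustrative line fit (made-up numbers, not the study's data): threshold
# elevation as a linear function of masker sensation level.
import numpy as np

masker_level_db = np.array([2.0, 7.0, 12.0, 17.0, 22.0])      # dB SL
threshold_shift_db = np.array([1.0, 3.2, 5.9, 8.1, 10.8])     # hypothetical elevations

slope, intercept = np.polyfit(masker_level_db, threshold_shift_db, deg=1)
print(f"threshold elevation ~ {slope:.2f} dB per dB of masker + {intercept:.2f} dB")
```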

vardar_toh2018 DOI [BibTex]

Object Scene Flow

Menze, M., Heipke, C., Geiger, A.

ISPRS Journal of Photogrammetry and Remote Sensing, 2018 (article)

Abstract
This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.

Project Page [BibTex]

Learning a Structured Neural Network Policy for a Hopping Task

Viereck, J., Kozolinsky, J., Herzog, A., Righetti, L.

IEEE Robotics and Automation Letters, 3(4):4092-4099, October 2018 (article)

link (url) DOI [BibTex]

The Impact of Robotics and Automation on Working Conditions and Employment [Ethical, Legal, and Societal Issues]

Pham, Q., Madhavan, R., Righetti, L., Smart, W., Chatila, R.

IEEE Robotics and Automation Magazine, 25(2):126-128, June 2018 (article)

link (url) DOI [BibTex]

Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues]

Righetti, L., Pham, Q., Madhavan, R., Chatila, R.

IEEE Robotics & Automation Magazine, 25(1):123-126, March 2018 (article)

Abstract
The topic of lethal autonomous weapon systems has recently caught public attention due to extensive news coverage and apocalyptic declarations from famous scientists and technologists. Weapon systems with increasing autonomy are being developed due to fast improvements in machine learning, robotics, and automation in general. These developments raise important and complex security, legal, ethical, societal, and technological issues that are being extensively discussed by scholars, nongovernmental organizations (NGOs), militaries, governments, and the international community. Unfortunately, the robotics community has stayed out of the debate, for the most part, despite being the main provider of autonomous technologies. In this column, we review the main issues raised by the increase of autonomy in weapon systems and the state of the international discussion. We argue that the robotics community has a fundamental role to play in these discussions, for its own sake, to provide the often-missing technical expertise necessary to frame the debate and promote technological development in line with the IEEE Robotics and Automation Society (RAS) objective of advancing technology to benefit humanity.

link (url) DOI [BibTex]

2015


Reducing Student Anonymity and Increasing Engagement

Kuchenbecker, K. J.

University of Pennsylvania Almanac, 62(18):8, November 2015 (article)

[BibTex]

Surgeons and Non-Surgeons Prefer Haptic Feedback of Instrument Vibrations During Robotic Surgery

Koehn, J. K., Kuchenbecker, K. J.

Surgical Endoscopy, 29(10):2970-2983, October 2015 (article)

[BibTex]

Displaying Sensed Tactile Cues with a Fingertip Haptic Device

Pacchierotti, C., Prattichizzo, D., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 8(4):384-396, October 2015 (article)

[BibTex]

A thin film active-lens with translational control for dynamically programmable optical zoom

Yun, S., Park, S., Park, B., Nam, S., Park, S. K., Kyung, K.

Applied Physics Letters, 107(8):081907, AIP Publishing, August 2015 (article)

Abstract
We demonstrate a thin film active-lens for rapidly and dynamically controllable optical zoom. The active-lens is composed of a convex hemispherical polydimethylsiloxane (PDMS) lens structure working as an aperture and a dielectric elastomer (DE) membrane actuator, which is a combination of a thin DE layer made with PDMS and a compliant electrode pattern using silver-nanowires. The active-lens is capable of dynamically changing the focal point of the soft aperture by as much as 18.4% through its translational movement in the vertical direction in response to electrically induced bulged-up deformation of the DE membrane actuator. Under operation with various sinusoidal voltage signals, the movement responses are fairly consistent with those estimated from numerical simulation. The responses are not only fast, fairly reversible, and highly durable during continuous cyclic operations, but also large enough to impart dynamic focus tunability for optical zoom in microscopic imaging devices with a light-weight and ultra-slim configuration.

link (url) DOI [BibTex]

Data-Driven Motion Mappings Improve Transparency in Teleoperation

Khurshid, R. P., Kuchenbecker, K. J.

Presence: Teleoperators and Virtual Environments, 24(2):132-154, May 2015 (article)

[BibTex]

Robotic Learning of Haptic Adjectives Through Physical Interaction

Chu, V., McMahon, I., Riano, L., McDonald, C. G., He, Q., Perez-Tejada, J. M., Arrigo, M., Darrell, T., Kuchenbecker, K. J.

Robotics and Autonomous Systems, 63(3):279-292, 2015, Vivian Chu, Ian McMahon, and Lorenzo Riano contributed equally to this publication. Corrigendum published in June 2016 (article)

[BibTex]

Effects of Vibrotactile Feedback on Human Motor Learning of Arbitrary Arm Motions

Bark, K., Hyman, E., Tan, F., Cha, E., Jax, S. A., Buxbaum, L. J., Kuchenbecker, K. J.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, 23(1):51-63, January 2015 (article)

[BibTex]

Optimizing Average Precision using Weakly Supervised Data

Behl, A., Mohapatra, P., Jawahar, C. V., Kumar, M. P.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2015 (article)

[BibTex]

Kinematic and gait similarities between crawling human infants and other quadruped mammals

Righetti, L., Nylen, A., Rosander, K., Ijspeert, A.

Frontiers in Neurology, 6(17), February 2015 (article)

Abstract
Crawling on hands and knees is an early pattern of human infant locomotion, which offers an interesting way of studying quadrupedalism in one of its simplest forms. We investigate how crawling human infants compare to other quadruped mammals, especially primates. We present quantitative data on both the gait and kinematics of seven 10-month-old crawling infants. Body movements were measured with an optoelectronic system giving precise data on 3-dimensional limb movements. Crawling on hands and knees is very similar to the locomotion of non-human primates in terms of the quite protracted arm at touch-down, the coordination between the spine movements in the lateral plane and the limbs, the relatively extended limbs during locomotion and the strong correlation between stance duration and speed of locomotion. However, there are important differences compared to primates, such as the choice of a lateral-sequence walking gait, which is similar to most non-primate mammals and the relatively stiff elbows during stance as opposed to the quite compliant gaits of primates. These findings raise the question of the role of both the mechanical structure of the body and neural control in the determination of these characteristics.

link (url) DOI [BibTex]

2013


A Practical System For Recording Instrument Interactions During Live Robotic Surgery

McMahan, W., Gomez, E. D., Chen, L., Bark, K., Nappo, J. C., Koch, E. I., Lee, D. I., Dumon, K., Williams, N., Kuchenbecker, K. J.

Journal of Robotic Surgery, 7(4):351-358, 2013 (article)

[BibTex]

Vision meets Robotics: The KITTI Dataset

Geiger, A., Lenz, P., Stiller, C., Urtasun, R.

International Journal of Robotics Research, 32(11):1231-1237, Sage Publishing, September 2013 (article)

Abstract
We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
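For readers who want to load the raw data, one commonly used way to read a Velodyne scan from the dataset's published binary format (float32 quadruples of x, y, z, reflectance) is sketched below; the file path is a placeholder and should point at a locally downloaded scan.

```python
# Reading one raw Velodyne scan in the dataset's published binary format
# (float32 quadruples of x, y, z, reflectance). The path is a placeholder.
import numpy as np

scan_path = "velodyne_points/data/0000000000.bin"  # placeholder path to a downloaded scan
points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)
xyz, reflectance = points[:, :3], points[:, 3]
print(f"{points.shape[0]} points, x range [{xyz[:, 0].min():.1f}, {xyz[:, 0].max():.1f}] m")
```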

pdf DOI [BibTex]

Vibrotactile Display: Perception, Technology, and Applications

Choi, S., Kuchenbecker, K. J.

Proceedings of the IEEE, 101(9):2093-2104, September 2013 (article)

[BibTex]

ROS Open-source Audio Recognizer: ROAR Environmental Sound Detection Tools for Robot Programming

Romano, J. M., Brindza, J. P., Kuchenbecker, K. J.

Autonomous Robots, 34(3):207-215, April 2013 (article)

[BibTex]

In Vivo Validation of a System for Haptic Feedback of Tool Vibrations in Robotic Surgery

Bark, K., McMahan, W., Remington, A., Gewirtz, J., Wedmid, A., Lee, D. I., Kuchenbecker, K. J.

Surgical Endoscopy, 27(2):656-664, February 2013, dynamic article (paper plus video), available at http://www.springerlink.com/content/417j532708417342/ (article)

[BibTex]

Perception of Springs with Visual and Proprioceptive Motion Cues: Implications for Prosthetics

Gurari, N., Kuchenbecker, K. J., Okamura, A. M.

IEEE Transactions on Human-Machine Systems, 43, pages: 102-114, January 2013, video: http://www.youtube.com/watch?v=DBRw87Wk29E&feature=youtu.be (article)

[BibTex]

Expectation and Attention in Hierarchical Auditory Prediction

Chennu, S., Noreika, V., Gueorguiev, D., Blenkmann, A., Kochen, S., Ibáñez, A., Owen, A. M., Bekinschtein, T. A.

Journal of Neuroscience, 33(27):11194-11205, Society for Neuroscience, 2013 (article)

Abstract
Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.

link (url) DOI [BibTex]

Optimal distribution of contact forces with inverse-dynamics control

Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., Schaal, S.

The International Journal of Robotics Research, 32(3):280-298, March 2013 (article)

Abstract
The development of legged robots for complex environments requires controllers that guarantee both high tracking performance and compliance with the environment. More specifically the control of the contact interaction with the environment is of crucial importance to ensure stable, robust and safe motions. In this contribution we develop an inverse-dynamics controller for floating-base robots under contact constraints that can minimize any combination of linear and quadratic costs in the contact constraints and the commands. Our main result is the exact analytical derivation of the controller. Such a result is particularly relevant for legged robots as it allows us to use torque redundancy to directly optimize contact interactions. For example, given a desired locomotion behavior, we can guarantee the minimization of contact forces to reduce slipping on difficult terrains while ensuring high tracking performance of the desired motion. The main advantages of the controller are its simplicity, computational efficiency and robustness to model inaccuracies. We present detailed experimental results on simulated humanoid and quadruped robots as well as a real quadruped robot. The experiments demonstrate that the controller can greatly improve the robustness of locomotion of the robots.
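The flavor of the optimization can be reproduced numerically on a toy system. The sketch below solves the KKT system of an equality-constrained quadratic program that realizes a desired acceleration while penalizing contact forces; it uses random stand-in matrices and a fully actuated toy model, not the paper's closed-form solution for underactuated floating-base dynamics.

```python
# Toy numerical sketch (random stand-in matrices, fully actuated system; not the
# paper's closed-form floating-base controller): choose torques and contact
# forces that realize a desired acceleration while minimizing the contact forces.
import numpy as np

rng = np.random.default_rng(2)
n, nc = 6, 6                              # generalized coordinates, contact-force dimensions

M = rng.standard_normal((n, n))
M = M @ M.T + n * np.eye(n)               # symmetric positive-definite inertia matrix
h = rng.standard_normal(n)                # Coriolis, centrifugal, and gravity terms
J = rng.standard_normal((nc, n))          # stacked contact Jacobian (two 3-D point contacts)
qdd_des = rng.standard_normal(n)          # desired generalized acceleration

# Unknowns z = [tau, f]; dynamics constraint  tau + J^T f = M qdd_des + h  ->  A z = b
A = np.hstack([np.eye(n), J.T])
b = M @ qdd_des + h

# Quadratic cost: heavily penalize contact forces, lightly regularize torques.
W = np.diag(np.r_[1e-3 * np.ones(n), np.ones(nc)])

# KKT system of: minimize 0.5 z^T W z  subject to  A z = b
K = np.block([[W, A.T], [A, np.zeros((n, n))]])
sol = np.linalg.solve(K, np.r_[np.zeros(n + nc), b])
tau, f = sol[:n], sol[n:n + nc]

print("dynamics residual:", np.linalg.norm(A @ sol[:n + nc] - b))
print("contact force norm:", np.linalg.norm(f))
```

Because torque is cheap in this toy cost, the solver drives the contact forces toward zero while still satisfying the dynamics constraint exactly.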

link (url) DOI [BibTex]

Controlled Reduction with Unactuated Cyclic Variables: Application to 3D Bipedal Walking with Passive Yaw Rotation

Gregg, R., Righetti, L.

IEEE Transactions on Automatic Control, 58(10):2679-2685, October 2013 (article)

Abstract
This technical note shows that viscous damping can shape momentum conservation laws in a manner that stabilizes yaw rotation and enables steering for underactuated 3D walking. We first show that unactuated cyclic variables can be controlled by passively shaped conservation laws given a stabilizing controller in the actuated coordinates. We then exploit this result to realize controlled geometric reduction with multiple unactuated cyclic variables. We apply this underactuated control strategy to a five-link 3D biped to produce exponentially stable straight-ahead walking and steering in the presence of passive yawing.

link (url) DOI [BibTex]

2012


Evaluation of Tactile Feedback Methods for Wrist Rotation Guidance

Stanley, A. A., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 5(3):240-251, July 2012 (article)

[BibTex]

Creating realistic virtual textures from contact acceleration data

Romano, J. M., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 5(2):109-119, April 2012, Cover article (article)

[BibTex]

Construct Validity of Instrument Vibrations as a Measure of Robotic Surgical Skill

Gomez, E. D., Bark, K., Rivera, C., McMahan, W., Remington, A., Lee, D. I., Williams, N., Murayama, K., Dumon, K., Kuchenbecker, K. J.

Journal of the American College of Surgeons, 215(3):S119-120, Chicago, Illinois, USA, 2012, Oral presentation given by Gomez at the American College of Surgeons (ACS) Clinical Congress (article)

[BibTex]

2010


Lack of Discriminatory Function for Endoscopy Skills on a Computer-based Simulator

Kim, S., Spencer, G., Makar, G., Ahmad, N., Jaffe, D., Ginsberg, G., Kuchenbecker, K. J., Kochman, M.

Surgical Endoscopy, 24(12):3008-3015, December 2010 (article)

[BibTex]

Identifying the Role of Proprioception in Upper-Limb Prosthesis Control: Studies on Targeted Motion

Blank, A., Okamura, A. M., Kuchenbecker, K. J.

ACM Transactions on Applied Perception, 7(3):1-23, June 2010 (article)

[BibTex]