

2018


Softness, Warmth, and Responsiveness Improve Robot Hugs

Block, A. E., Kuchenbecker, K. J.

International Journal of Social Robotics, 11(1):49-64, October 2018 (article)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, roboticists are naturally interested in having robots one day hug humans as seamlessly as humans hug other humans. This project's purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a soft, warm, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty relatively young and rather technical participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.


link (url) DOI Project Page [BibTex]



Complexity, Rate, and Scale in Sliding Friction Dynamics Between a Finger and Textured Surface

Khojasteh, B., Janko, M., Visell, Y.

Scientific Reports, 8:13710, September 2018 (article)

Abstract
Sliding friction between the skin and a touched surface is highly complex, but it lies at the heart of our ability to discriminate surface texture through touch. Prior research has elucidated neural mechanisms of tactile texture perception, but our understanding of the nonlinear dynamics of frictional sliding between the finger and textured surfaces, from which the neural signals that encode texture originate, is incomplete. To address this, we compared measurements from human fingertips sliding against textured counter surfaces with predictions of numerical simulations of a model finger that resembled a real finger, with similar geometry, tissue heterogeneity, hyperelasticity, and interfacial adhesion. Modeled and measured forces exhibited similar complex, nonlinear sliding friction dynamics, force fluctuations, and prominent regularities related to the surface geometry. We comparatively analysed measured and simulated force patterns in matched conditions using linear and nonlinear methods, including recurrence analysis. The model had the greatest predictive power for faster sliding and for surface textures with length scales greater than about one millimeter. This could be attributed to the tendency of sliding at slower speeds, or on finer surfaces, to engage fine features of the skin or surface, such as fingerprints or surface asperities, in complex ways. The results elucidate the dynamical forces felt during tactile exploration and highlight the challenges involved in the biological perception of surface texture via touch.
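The recurrence analysis mentioned in this abstract can be illustrated with a minimal, self-contained sketch; the signal and distance threshold below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

def recurrence_matrix(signal, eps):
    """Binary recurrence matrix: R[i, j] = 1 when samples i and j are
    closer than the threshold eps (here for a scalar time series)."""
    x = np.asarray(signal, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent point pairs, a basic recurrence measure."""
    return R.mean()

# Toy friction-force-like signal: a periodic component plus noise,
# loosely mimicking the regularities a textured surface imposes.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
force = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
R = recurrence_matrix(force, eps=0.2)
print(recurrence_rate(R))
```

A periodic signal produces diagonal stripes in R; comparing such structure between measured and simulated forces is the idea behind the paper's recurrence-based comparison.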


DOI [BibTex]



Instrumentation, Data, and Algorithms for Visually Understanding Haptic Surface Properties

Burka, A. L.

University of Pennsylvania, Philadelphia, USA, August 2018, Department of Electrical and Systems Engineering (phdthesis)

Abstract
Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute per end-effector of tapping and dragging at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. 
Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. Then we trained an algorithm to perform the same task as well as to infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved correlations between predicted and actual regression outputs of approximately ρ = 0.3 to ρ = 0.5 on previously unseen surfaces.
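As a rough illustration of the reported correlation score, the following sketch computes Pearson's ρ between hypothetical predicted and ground-truth property ratings; all numbers are invented for illustration and do not come from the thesis:

```python
import numpy as np

# Hypothetical predicted vs. ground-truth hardness ratings for a
# held-out set of eight surfaces (values made up for illustration).
actual = np.array([0.2, 0.5, 0.9, 0.4, 0.7, 0.3, 0.8, 0.6])
predicted = np.array([0.3, 0.4, 0.7, 0.5, 0.6, 0.4, 0.9, 0.5])

# Pearson correlation coefficient between the two rating vectors.
rho = np.corrcoef(actual, predicted)[0, 1]
print(round(rho, 3))
```

A ρ near 0.3–0.5, as reported, indicates a weak-to-moderate linear relationship between vision-based predictions and haptic ground truth.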


Project Page [BibTex]



A Robust Soft Lens for Tunable Camera Application Using Dielectric Elastomer Actuators

Nam, S., Yun, S., Yoon, J. W., Park, S., Park, S. K., Mun, S., Park, B., Kyung, K.

Soft Robotics, Mary Ann Liebert, Inc., August 2018 (article)

Abstract
In developing tunable lenses, an expansion-based mechanism for dynamic focus adjustment can provide a larger focal-length tuning range than a contraction-based mechanism. Here, we develop an expansion-tunable soft lens module using a disk-type dielectric elastomer actuator (DEA) that creates axially symmetric pulling forces on a soft lens. Adapted from the biological accommodation mechanism of the human eye, a soft lens at the annular center of a disk-type DEA pair is efficiently stretched to change the focal length in a highly reliable manner. A soft lens with a diameter of 3 mm shows a 65.7% change in the focal length (14.3–23.7 mm) under dynamic driving-voltage signal control. We confirm a quadratic relation between lens expansion and focal length that leads to the large focal-length tunability obtainable in the proposed approach. The fabricated tunable lens module can be used for soft, lightweight, and compact vision components in robots, drones, vehicles, and so on.


link (url) DOI [BibTex]



Robust Visual Augmented Reality in Robot-Assisted Surgery

Forte, M. P.

Politecnico di Milano, Milan, Italy, July 2018, Department of Electronic, Information, and Biomedical Engineering (mastersthesis)

Abstract
The broader research objective of this line of research is to test the hypothesis that real-time stereo video analysis and augmented reality can increase safety and task efficiency in robot-assisted surgery. This master’s thesis aims to solve the first step needed to achieve this goal: the creation of a robust system that delivers the envisioned feedback to a surgeon while he or she controls a surgical robot that is identical to those used on human patients. Several approaches for applying augmented reality to da Vinci Surgical Systems have been proposed, but none of them entirely rely on a clinical robot; specifically, they require additional sensors, depend on access to the da Vinci API, are designed for a very specific task, or were tested on systems that are starkly different from those in clinical use. There has also been prior work that presents the real-world camera view and the computer graphics on separate screens, or not in real time. In other scenarios, the digital information is overlaid manually by the surgeons themselves or by computer scientists, rather than being generated automatically in response to the surgeon’s actions. We attempted to overcome the aforementioned constraints by acquiring input signals from the da Vinci stereo endoscope and providing augmented reality to the console in real time (less than 150 ms delay, including the 62 ms of inherent latency of the da Vinci). The potential benefits of the resulting system are broad because it was built to be general, rather than customized for any specific task. The entire platform is compatible with any generation of the da Vinci System and does not require a dVRK (da Vinci Research Kit) or access to the API. Thus, it can be applied to existing da Vinci Systems in operating rooms around the world.


Project Page [BibTex]



Task-Driven PCA-Based Design Optimization of Wearable Cutaneous Devices

Pacchierotti, C., Young, E. M., Kuchenbecker, K. J.

IEEE Robotics and Automation Letters, 3(3):2214-2221, July 2018, Presented at ICRA 2018 (article)

Abstract
Small size and low weight are critical requirements for wearable and portable haptic interfaces, making it essential to work toward the optimization of their sensing and actuation systems. This paper presents a new approach for task-driven design optimization of fingertip cutaneous haptic devices. Given one (or more) target tactile interactions to render and a cutaneous device to optimize, we evaluate the minimum number and best configuration of the device’s actuators to minimize the estimated haptic rendering error. First, we calculate the motion needed for the original cutaneous device to render the considered target interaction. Then, we run a principal component analysis (PCA) to search for possible couplings between the original motor inputs, looking also for the best way to reconfigure them. If some couplings exist, we can re-design our cutaneous device with fewer motors, optimally configured to render the target tactile sensation. The proposed approach is quite general and can be applied to different tactile sensors and cutaneous devices. We validated it using a BioTac tactile sensor and custom plate-based 3-DoF and 6-DoF fingertip cutaneous devices, considering six representative target tactile interactions. The algorithm was able to find couplings between each device’s motor inputs, proving it to be a viable approach to optimize the design of wearable and portable cutaneous devices. Finally, we present two examples of optimized designs for our 3-DoF fingertip cutaneous device.
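The core PCA step described in this abstract — searching for couplings among a device's motor inputs — can be sketched as follows. The command matrix here is synthetic (two latent inputs driving six motors), and the 99% variance cutoff is an assumed design choice, not taken from the paper:

```python
import numpy as np

# Hypothetical recorded motor commands for a 6-motor device while
# rendering a target tactile interaction: T time steps x 6 motors.
# The six commands are constructed to lie on a 2-D subspace, standing
# in for the couplings the method searches for.
rng = np.random.default_rng(1)
T = 500
latent = rng.standard_normal((T, 2))           # two underlying inputs
mixing = rng.standard_normal((2, 6))           # fixed motor couplings
commands = latent @ mixing + 0.01 * rng.standard_normal((T, 6))

# PCA via SVD of the mean-centered command matrix.
centered = commands - commands.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Keep the fewest components explaining 99% of the variance: these
# suggest how many independent actuators the device actually needs.
n_needed = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)
print(n_needed)
```

When n_needed is smaller than the number of motors, the principal directions indicate how the remaining actuators could be reconfigured, which mirrors the redesign step in the abstract.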


link (url) DOI [BibTex]



Teaching a Robot Bimanual Hand-Clapping Games via Wrist-Worn IMUs

Fitter, N. T., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 5(85), July 2018 (article)

Abstract
Colleagues often shake hands in greeting, friends connect through high fives, and children around the world rejoice in hand-clapping games. As robots become more common in everyday human life, they will have the opportunity to join in these social-physical interactions, but few current robots are intended to touch people in friendly ways. This article describes how we enabled a Baxter Research Robot to both teach and learn bimanual hand-clapping games with a human partner. Our system monitors the user's motions via a pair of inertial measurement units (IMUs) worn on the wrists. We recorded a labeled library of 10 common hand-clapping movements from 10 participants; this dataset was used to train an SVM classifier to automatically identify hand-clapping motions from previously unseen participants with a test-set classification accuracy of 97.0%. Baxter uses these sensors and this classifier to quickly identify the motions of its human gameplay partner, so that it can join in hand-clapping games. This system was evaluated by N = 24 naïve users in an experiment that involved learning sequences of eight motions from Baxter, teaching Baxter eight-motion game patterns, and completing a free interaction period. The motion classification accuracy in this less structured setting was 85.9%, primarily due to unexpected variations in motion timing. The quantitative task performance results and qualitative participant survey responses showed that learning games from Baxter was significantly easier than teaching games to Baxter, and that the teaching role caused users to consider more teamwork aspects of the gameplay. Over the course of the experiment, people felt more understood by Baxter and became more willing to follow the example of the robot. Users felt uniformly safe interacting with Baxter, and they expressed positive opinions of Baxter and reported having fun interacting with the robot.
Taken together, the results indicate that this robot achieved credible social-physical interaction with humans and that its ability to both lead and follow systematically changed the human partner's experience.
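The windowed feature extraction behind IMU-based motion classification can be sketched as below. The paper trains an SVM on ten motion classes; this toy example instead uses synthetic two-class data and a nearest-centroid rule to stay dependency-free, so the classifier itself is a simplified stand-in:

```python
import numpy as np

def window_features(accel):
    """Per-axis mean, standard deviation, and mean absolute value for
    one window of 3-axis accelerometer samples (N x 3) -> 9 features."""
    a = np.asarray(accel, dtype=float)
    return np.concatenate([a.mean(axis=0), a.std(axis=0), np.abs(a).mean(axis=0)])

# Synthetic stand-in data for two wrist motions: an energetic clap-like
# motion (high variance) and a resting hand (low variance).
rng = np.random.default_rng(2)
clap = [rng.normal(0.0, 1.0, (50, 3)) for _ in range(20)]
rest = [rng.normal(0.0, 0.1, (50, 3)) for _ in range(20)]
X = np.array([window_features(w) for w in clap + rest])
y = np.array([0] * 20 + [1] * 20)

# One centroid per class in feature space; an SVM would learn a
# maximum-margin boundary here instead.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(window):
    """Label a new IMU window by its nearest class centroid."""
    f = window_features(window)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

print(classify(clap[0]), classify(rest[0]))
```

In the real system, classifying short windows like this is what lets the robot react to its partner's motion quickly enough for live gameplay.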


DOI [BibTex]



Automatically Rating Trainee Skill at a Pediatric Laparoscopic Suturing Task

Oquendo, Y. A., Riddle, E. W., Hiller, D., Blinman, T. A., Kuchenbecker, K. J.

Surgical Endoscopy, 32(4):1840-1857, April 2018 (article)


DOI [BibTex]



Electro-Active Polymer Based Soft Tactile Interface for Wearable Devices

Mun, S., Yun, S., Nam, S., Park, S. K., Park, S., Park, B. J., Lim, J. M., Kyung, K. U.

IEEE Transactions on Haptics, 11(1):15-21, February 2018 (article)

Abstract
This paper reports soft-actuator-based tactile stimulation interfaces applicable to wearable devices. The soft actuator is prepared by multi-layered accumulation of thin electro-active polymer (EAP) films. The multi-layered actuator is designed to produce electrically-induced convex protrusive deformation, which can be dynamically programmed for a wide range of tactile stimuli. The maximum vertical protrusion is 650 μm, and the output force is up to 255 mN. The soft actuators are embedded into the fingertip part of a glove and the front part of a forearm band, respectively. We have conducted two kinds of experiments with 15 subjects. Perceived magnitudes of the actuator's protrusion and vibrotactile intensity were measured at frequencies of 1 Hz and 191 Hz, respectively. Analysis of the user tests shows that participants can perceive variation of protrusion height at the finger pad and modulation of vibration intensity through the proposed soft-actuator-based tactile interface.


link (url) DOI [BibTex]



Robotic Motion Learning Framework to Promote Social Engagement

Burns, R., Jeon, M., Park, C. H.

Applied Sciences, 8(2):241, February 2018, Special Issue "Social Robotics" (article)

Abstract
Imitation is a powerful component of communication between people, and it poses an important implication in improving the quality of interaction in the field of human–robot interaction (HRI). This paper discusses a novel framework designed to improve human–robot interaction through robotic imitation of a participant’s gestures. In our experiment, a humanoid robotic agent socializes with and plays games with a participant. For the experimental group, the robot additionally imitates one of the participant’s novel gestures during a play session. We hypothesize that the robot’s use of imitation will increase the participant’s openness towards engaging with the robot. Experimental results from a user study of 12 subjects show that post-imitation, experimental subjects displayed a more positive emotional state, had higher instances of mood contagion towards the robot, and interpreted the robot to have a higher level of autonomy than their control group counterparts did. These results point to an increased participant interest in engagement fueled by personalized imitation during interaction.


link (url) DOI [BibTex]



Immersive Low-Cost Virtual Reality Treatment for Phantom Limb Pain: Evidence from Two Cases

Ambron, E., Miller, A., Kuchenbecker, K. J., Buxbaum, L. J., Coslett, H. B.

Frontiers in Neurology, 9(67):1-7, 2018 (article)


DOI Project Page [BibTex]



Tactile Perception by Electrovibration

Vardar, Y.

Koc University, 2018 (phdthesis)

Abstract
One approach to generating realistic haptic feedback on touch screens is electrovibration. In this technique, the friction force is altered via electrostatic forces, which are generated by applying an alternating voltage signal to the conductive layer of a capacitive touchscreen. Although the technology for rendering haptic effects on touch surfaces using electrovibration is already in place, our knowledge of the perception mechanisms behind these effects is limited. This thesis aims to explore the mechanisms underlying haptic perception of electrovibration in two parts. In the first part, the effect of input signal properties on electrovibration perception is investigated. Our findings indicate that the perception of electrovibration stimuli depends on the frequency-dependent electrical properties of human skin and on human tactile sensitivity. When a voltage signal is applied to a touchscreen, it is filtered electrically by the human finger and generates electrostatic forces on the skin and mechanoreceptors. Depending on the spectral energy content of this electrostatic force signal, different psychophysical channels may be activated. The channel that mediates detection is determined by the frequency component whose energy exceeds the sensory threshold at that frequency. In the second part, the effect of masking on electrovibration perception is investigated. We show that the detection thresholds are elevated as linear functions of the masking level for both simultaneous and pedestal masking. The masking effectiveness is larger for pedestal masking than for simultaneous masking. Moreover, our results suggest that sharpness perception depends on the local contrast between background and foreground stimuli, which varies as a function of masking amplitude and the activation levels of frequency-dependent psychophysical channels.


Tactile perception by electrovibration [BibTex]


Tactile Masking by Electrovibration

Vardar, Y., Güçlü, B., Basdogan, C.

IEEE Transactions on Haptics, 11(4):623-635, 2018 (article)

Abstract
Future touch screen applications will include multiple tactile stimuli displayed simultaneously or consecutively to a single finger or multiple fingers. These applications should be designed with the human tactile masking mechanism in mind, since it is known that presenting one stimulus may interfere with the perception of another. In this study, we investigate the effect of masking on the tactile perception of electrovibration displayed on touch screens. Through psychophysical experiments with nine subjects, we measured the masked thresholds of sinusoidal electrovibration bursts (125 Hz) under two masking conditions: simultaneous and pedestal. The masking stimuli were noise bursts, applied at five different sensation levels varying from 2 to 22 dB SL, also presented by electrovibration. For each subject, the detection thresholds were elevated as linear functions of the masking level for both masking types. We observed that the masking effectiveness was larger with pedestal masking than with simultaneous masking. Moreover, in order to investigate the effect of tactile masking on our haptic perception of edge sharpness, we compared the perceived sharpness of edges separating two textured regions displayed with and without various masking stimuli. Our results suggest that sharpness perception depends on the local contrast between background and foreground stimuli, which varies as a function of masking amplitude and the activation levels of frequency-dependent psychophysical channels.
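The reported linear growth of detection thresholds with masking level corresponds to a simple least-squares line fit per subject and masking type. The sketch below uses invented threshold values purely to illustrate the fit; the slope and intercept are not the paper's results:

```python
import numpy as np

# Hypothetical masked-threshold data for one subject: masking levels
# (dB SL) and the detection thresholds measured at each level (dB).
masking_db = np.array([2.0, 7.0, 12.0, 17.0, 22.0])
threshold_db = np.array([10.1, 12.0, 13.8, 16.2, 17.9])

# Least-squares line: threshold ≈ slope * masking_level + intercept.
slope, intercept = np.polyfit(masking_db, threshold_db, 1)
print(round(slope, 3), round(intercept, 3))
```

Comparing such fitted slopes between the simultaneous and pedestal conditions is one way to quantify the difference in masking effectiveness described in the abstract.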


vardar_toh2018 DOI [BibTex]
