2024


IMU-Based Kinematics Estimation Accuracy Affects Gait Retraining Using Vibrotactile Cues

Rokhmanova, N., Pearl, O., Kuchenbecker, K. J., Halilaj, E.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, 32, pages: 1005-1012, February 2024 (article)

Abstract
Wearable sensing using inertial measurement units (IMUs) is enabling portable and customized gait retraining for knee osteoarthritis. However, the vibrotactile feedback that users receive directly depends on the accuracy of IMU-based kinematics. This study investigated how kinematic errors impact an individual's ability to learn a therapeutic gait using vibrotactile cues. Sensor accuracy was computed by comparing the IMU-based foot progression angle to marker-based motion capture, which was used as ground truth. Thirty subjects were randomized into three groups to learn a toe-in gait: one group received vibrotactile feedback during gait retraining in the laboratory, another received feedback outdoors, and the control group received only verbal instruction and proceeded directly to the evaluation condition. All subjects were evaluated on their ability to maintain the learned gait in a new outdoor environment. We found that subjects with high tracking errors exhibited more incorrect responses to vibrotactile cues and slower learning rates than subjects with low tracking errors. Subjects with low tracking errors outperformed the control group in the evaluation condition, whereas those with higher error did not. Errors were correlated with foot size and angle magnitude, which may indicate a non-random algorithmic bias. The accuracy of IMU-based kinematics has a cascading effect on feedback; ignoring this effect could lead researchers or clinicians to erroneously classify a patient as a non-responder if they did not improve after retraining. To use patient and clinician time effectively, future implementation of portable gait retraining will require assessment across a diverse range of patients.
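
The study's core accuracy metric is straightforward to compute. Below is a minimal, hypothetical Python sketch of comparing per-step IMU foot progression angles (FPA) against a marker-based ground truth; the function names, sign convention, and synthetic data are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the tracking-error metric: marker-based FPA serves
# as ground truth, and per-step IMU estimates are scored against it via RMSE.
import numpy as np

def fpa_from_markers(heel_xy: np.ndarray, toe_xy: np.ndarray) -> np.ndarray:
    """Per-step foot progression angle (deg) from horizontal-plane markers.

    Assumed convention: positive = toe-out, negative = toe-in; the walking
    direction is taken as the +x axis of the lab frame.
    """
    foot_vec = toe_xy - heel_xy                      # (n_steps, 2)
    return np.degrees(np.arctan2(foot_vec[:, 1], foot_vec[:, 0]))

def tracking_error(fpa_imu: np.ndarray, fpa_mocap: np.ndarray) -> float:
    """Root-mean-square FPA error (deg) across steps."""
    return float(np.sqrt(np.mean((fpa_imu - fpa_mocap) ** 2)))

# Example with synthetic steps: a subject asked to toe-in by about 10 degrees.
rng = np.random.default_rng(0)
heel = rng.normal(size=(50, 2))
toe = heel + np.array([1.0, -0.18])                  # ~ -10 deg toe-in
fpa_true = fpa_from_markers(heel, toe)
fpa_imu = fpa_true + rng.normal(0.0, 2.5, size=50)   # simulated IMU error
print(f"RMSE: {tracking_error(fpa_imu, fpa_true):.2f} deg")
```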

DOI Project Page [BibTex]


Adapting a High-Fidelity Simulation of Human Skin for Comparative Touch Sensing in the Elephant Trunk

Schulz, A., Serhat, G., Kuchenbecker, K. J.

Abstract presented at the Society for Integrative and Comparative Biology Annual Meeting (SICB), Seattle, USA, January 2024 (misc)

Abstract
Skin is a complex biological composite consisting of layers with distinct mechanical properties, morphologies, and mechanosensory capabilities. This work seeks to expand the comparative biomechanics field to comparative haptics, analyzing elephant trunk touch by redesigning a previously published human finger-pad model with morphological parameters measured from an elephant trunk. The dorsal surface of the elephant trunk has a thick, wrinkled epidermis covered with whiskers at the distal tip and deep folds at the proximal base. We hypothesize that this thick dorsal skin protects the trunk from mechanical damage but significantly dulls its tactile sensing ability. To facilitate safe and dexterous motion, the distributed dorsal whiskers might serve as pre-touch antennae, transmitting an amplified version of impending contact to the mechanoreceptors beneath the elephant's armor. We tested these hypotheses by simulating soft tissue deformation through high-fidelity finite element analyses involving representative skin layers and whiskers, modeled based on frozen African elephant trunk (Loxodonta africana) morphology. For a typical contact force, quintupling the stratum corneum thickness to match dorsal trunk skin reduces the von Mises stress communicated to the dermis by 18%. However, adding a whisker offsets this dulled sensing, as hypothesized, amplifying the stress by a factor of more than 15 at the same location. We hope this work will motivate further investigations of mammalian touch using approaches and models from the ample literature on human touch.

[BibTex]


MPI-10: Haptic-Auditory Measurements from Tool-Surface Interactions

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Dataset published as a companion to the journal article "Robust Surface Recognition with the Maximum Mean Discrepancy: Degrading Haptic-Auditory Signals through Bandwidth and Noise" in IEEE Transactions on Haptics, January 2024 (misc)

DOI Project Page [BibTex]


How Should Robots Exercise with People? Robot-Mediated Exergames Win with Music, Social Analogues, and Gameplay Clarity

Fitter, N. T., Mohan, M., Preston, R. C., Johnson, M. J., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 10(1155837):1-18, January 2024 (article)

Abstract
The modern worldwide trend toward sedentary behavior comes with significant health risks. An accompanying wave of health technologies has tried to encourage physical activity, but these approaches often yield limited use and retention. Due to their unique ability to serve as both a health-promoting technology and a social peer, we propose robots as a game-changing solution for encouraging physical activity. This article analyzes the eight exergames we previously created for the Rethink Baxter Research Robot in terms of four key components that are grounded in the video-game literature: repetition, pattern matching, music, and social design. We use these four game facets to assess gameplay data from 40 adult users who each experienced the games in balanced random order. In agreement with prior research, our results show that relevant musical cultural references, recognizable social analogues, and gameplay clarity are good strategies for taking an otherwise highly repetitive physical activity and making it engaging and popular among users. Others who study socially assistive robots and rehabilitation robotics can benefit from this work by considering the presented design attributes to generate future hypotheses and by using our eight open-source games to pursue follow-up work on social-physical exercise with robots.

DOI Project Page [BibTex]


Whiskers That Don’t Whisk: Unique Structure From the Absence of Actuation in Elephant Whiskers

Schulz, A., Kaufmann, L., Brecht, M., Richter, G., Kuchenbecker, K. J.

Abstract presented at the Society for Integrative and Comparative Biology Annual Meeting (SICB), Seattle, USA, January 2024 (misc)

Abstract
Whiskers are so named because these hairs often actuate circularly, whisking, via collagen wrapping at the root of the hair follicle to increase their sensing volumes. Elephant trunks are a unique case study for whiskers, as the dorsal and lateral sections of the elephant proboscis have scattered sensory hairs that lack individual actuation. We hypothesize that the actuation limitations of these non-whisking whiskers led to anisotropic morphology and non-homogeneous composition to meet the animal's sensory needs. To test these hypotheses, we examined trunk whiskers from a 35-year-old female African savannah elephant (Loxodonta africana). Whisker morphology was evaluated through micro-CT and polarized light microscopy. The whiskers from the distal tip of the trunk were found to be axially asymmetric, with an ovular cross-section at the root, shifting to a near-square cross-section at the point. Nanoindentation and additional microscopy revealed that elephant whiskers have a composition unlike any other mammalian hair ever studied: we recorded an elastic modulus of 3 GPa at the root and 0.05 GPa at the point of a single 4-cm-long whisker. This work challenges the assumption that hairs have circular cross-sections and isotropic mechanical properties. With such striking differences compared to other mammals, including the mouse (Mus musculus), rat (Rattus norvegicus), and cat (Felis catus), we conclude that whisker morphology and composition play distinct and complementary roles in elephant trunk mechanosensing.

[BibTex]


Robust Surface Recognition with the Maximum Mean Discrepancy: Degrading Haptic-Auditory Signals through Bandwidth and Noise

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

IEEE Transactions on Haptics, pages: 1-8, January 2024 (article)

Abstract
Sliding a tool across a surface generates rich sensations that can be analyzed to recognize what is being touched. However, the optimal configuration for capturing these signals remains unclear. To bridge this gap, we consider haptic-auditory data as a human explores surfaces with different steel tools, including accelerations of the tool and finger, force and torque applied to the surface, and contact sounds. Our classification pipeline uses the maximum mean discrepancy (MMD) to quantify differences in data distributions in a high-dimensional space for inference. With recordings from three hemispherical tool diameters and ten diverse surfaces, we conducted two degradation studies by decreasing sensing bandwidth and increasing added noise. We evaluate the haptic-auditory recognition performance achieved with the MMD to compare newly gathered data to each surface in our known library. The results indicate that acceleration signals alone have great potential for high-accuracy surface recognition and are robust against noise contamination. The optimal accelerometer bandwidth exceeds 1000 Hz, suggesting that useful vibrotactile information extends beyond the range of human perception. Finally, smaller tool tips generate contact vibrations with better noise robustness. The provided sensing guidelines may enable superhuman performance in portable surface recognition, which could benefit quality control, material documentation, and robotics.
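
For readers unfamiliar with the statistic named above, here is a minimal sketch of the standard unbiased MMD estimator with a Gaussian kernel; it assumes feature vectors have already been extracted from the haptic-auditory recordings and is not the authors' released pipeline.

```python
# Standard unbiased estimator of the squared maximum mean discrepancy (MMD)
# between two sample sets, using a Gaussian (RBF) kernel.
import numpy as np

def gaussian_kernel(a: np.ndarray, b: np.ndarray, sigma: float) -> np.ndarray:
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_unbiased(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Unbiased estimate of squared MMD between samples x and y."""
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    # Drop diagonal terms for the unbiased within-sample averages.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(1)
same = mmd2_unbiased(rng.normal(size=(200, 8)), rng.normal(size=(200, 8)))
diff = mmd2_unbiased(rng.normal(size=(200, 8)),
                     rng.normal(0.5, 1.0, size=(200, 8)))
print(f"same distribution: {same:.4f}, shifted distribution: {diff:.4f}")
```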

DOI Project Page [BibTex]


InterCap: Joint Markerless 3D Tracking of Humans and Objects in Interaction from Multi-view RGB-D Images

Huang, Y., Taheri, O., Black, M. J., Tzionas, D.

International Journal of Computer Vision (IJCV), 2024 (article)

Abstract
Humans constantly interact with objects to accomplish tasks. To understand such interactions, computers need to reconstruct these in 3D from images of whole bodies manipulating objects, e.g., grasping, moving, and using them. This involves key challenges, such as occlusion between the body and objects, motion blur, depth ambiguities, and the low image resolution of hands and graspable object parts. To make the problem tractable, the community has followed a divide-and-conquer approach, focusing either only on interacting hands, ignoring the body, or on interacting bodies, ignoring the hands. However, these are only parts of the problem. On the contrary, recent work focuses on the whole problem. The GRAB dataset addresses whole-body interaction with dexterous hands but captures motion via markers and lacks video, while the BEHAVE dataset captures video of body-object interaction but lacks hand detail. We address the limitations of prior work with InterCap, a novel method that reconstructs interacting whole bodies and objects from multi-view RGB-D data, using the parametric whole-body SMPL-X model and known object meshes. To tackle the above challenges, InterCap uses two key observations: (i) Contact between the body and object can be used to improve the pose estimation of both. (ii) Consumer-level Azure Kinect cameras let us set up a simple and flexible multi-view RGB-D system for reducing occlusions, with spatially calibrated and temporally synchronized cameras. With our InterCap method we capture the InterCap dataset, which contains 10 subjects (5 males and 5 females) interacting with 10 daily objects of various sizes and affordances, including contact with the hands or feet. To this end, we introduce a new data-driven hand motion prior, as well as explore simple ways for automatic contact detection based on 2D and 3D cues. In total, InterCap has 223 RGB-D videos, resulting in 67,357 multi-view frames, each containing 6 RGB-D images, paired with pseudo ground-truth 3D body and object meshes. Our InterCap method and dataset fill an important gap in the literature and support many research directions. Data and code are available at https://intercap.is.tue.mpg.de.
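
A hedged sketch of observation (i): one simple way to express a body-object contact term that could be added to a fitting objective. The function name and distance threshold are illustrative assumptions; InterCap's released code defines its own formulation.

```python
# Illustrative contact term: pull near-contact body vertices toward the
# object surface during fitting. Threshold and names are assumptions.
import numpy as np

def contact_loss(body_verts: np.ndarray, obj_points: np.ndarray,
                 thresh: float = 0.02) -> float:
    """Sum of distances from near-contact body vertices to the object.

    body_verts: (V, 3) posed body vertices; obj_points: (P, 3) object surface
    samples; vertices closer than `thresh` meters count as in contact.
    """
    d2 = (np.sum(body_verts**2, 1)[:, None]
          + np.sum(obj_points**2, 1)[None, :]
          - 2 * body_verts @ obj_points.T)
    nearest = np.sqrt(np.maximum(d2.min(axis=1), 0.0))
    return float(nearest[nearest < thresh].sum())

# e.g., contact_loss(hand_vertices, mug_surface_samples) -> small positive
# value that an optimizer can drive toward zero for touching regions.
```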


Paper link (url) DOI [BibTex]


Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models
2024 (misc)

Abstract
Humans excel at robust bipedal walking in complex natural environments. In each step, they adequately tune the interaction of biomechanical muscle dynamics and neuronal signals to be robust against uncertainties in ground conditions. However, it is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem considering stability, robustness, and energy efficiency. In computer simulations, energy minimization has been shown to be a successful optimization target, reproducing natural walking with trajectory optimization or reflex-based control methods. However, these methods focus on particular motions at a time, and the resulting controllers are limited when compensating for perturbations. In robotics, reinforcement learning (RL) methods recently achieved highly stable (and efficient) locomotion on quadruped systems, but the generation of human-like walking with bipedal biomechanical models has required extensive use of expert data sets. This strong reliance on demonstrations often results in brittle policies and limits the application to new behaviors, especially considering the potential variety of movements for high-dimensional musculoskeletal models in 3D. Achieving natural locomotion with RL without sacrificing its incredible robustness might pave the way for a novel approach to studying human walking in complex natural environments. Videos: https://sites.google.com/view/naturalwalkingrl
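
As a sketch of the multi-objective trade-off described above, a per-step reward might combine speed tracking with an energy-like effort penalty; the weights and quadratic effort model below are illustrative assumptions, not the paper's reward.

```python
# Hedged sketch: reward forward progress near a target speed while penalizing
# muscle effort, so efficient, natural gaits can emerge without demonstrations.
import numpy as np

def walking_reward(com_velocity: float, muscle_activations: np.ndarray,
                   target_velocity: float = 1.3,
                   w_speed: float = 1.0, w_effort: float = 0.1) -> float:
    speed_term = np.exp(-(com_velocity - target_velocity) ** 2)  # in (0, 1]
    effort_term = float(np.mean(muscle_activations ** 2))        # ~ energy use
    return w_speed * speed_term - w_effort * effort_term

print(walking_reward(1.25, np.full(80, 0.2)))  # 80 muscles, light activation
```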


link (url) [BibTex]


Machine learning of a density functional for anisotropic patchy particles

Simon, A., Weimar, J., Martius, G., Oettel, M.

Journal of Chemical Theory and Computation, 2024 (article)

link (url) DOI [BibTex]


Event-based Non-Rigid Reconstruction of Low-Rank Parametrized Deformations from Contours

Xue, Y., Li, H., Leutenegger, S., Stueckler, J.

International Journal of Computer Vision (IJCV), 2024 (article)

Abstract
Visual reconstruction of fast non-rigid object deformations over time is a challenge for conventional frame-based cameras. In recent years, event cameras have gained significant attention due to their bio-inspired properties, such as high temporal resolution and high dynamic range. In this paper, we propose a novel approach for reconstructing such deformations using event measurements. Under the assumption of a static background, where all events are generated by the motion, our approach estimates the deformation of objects from events generated at the object contour in a probabilistic optimization framework. It associates events to mesh faces on the contour and maximizes the alignment of the line of sight through the event pixel with the associated face. In experiments on synthetic and real data of human body motion, we demonstrate the advantages of our method over state-of-the-art optimization and learning-based approaches for reconstructing the motion of human arms and hands. In addition, we propose an efficient event stream simulator to synthesize realistic event data for human motion.
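
The alignment objective named above can be illustrated with a worked residual: the distance from an associated mesh face to the event pixel's line of sight. This point-to-ray cost is a simplified proxy for the paper's probabilistic formulation.

```python
# Simplified proxy cost: how far the associated face centroid lies from the
# viewing ray through an event pixel; the optimizer would shrink this.
import numpy as np

def ray_face_cost(cam_center: np.ndarray, ray_dir: np.ndarray,
                  face_verts: np.ndarray) -> float:
    """Perpendicular distance from the face centroid to the event's ray."""
    d = ray_dir / np.linalg.norm(ray_dir)
    centroid = face_verts.mean(axis=0)           # face_verts: (3, 3)
    v = centroid - cam_center
    return float(np.linalg.norm(v - (v @ d) * d))  # reject parallel component

print(ray_face_cost(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                    np.array([[0.1, 0, 2], [0, 0.1, 2], [0.1, 0.1, 2]])))
```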

DOI [BibTex]


Being Neurodivergent in Academia: Autistic and abroad

Schulz, A.

eLife, 13, 2024 (article)

Abstract
An AuDHD researcher recounts the highs and lows of relocating from the United States to Germany for his postdoc.


DOI [BibTex]


HMP: Hand Motion Priors for Pose and Shape Estimation from Video

Duran, E., Kocabas, M., Choutas, V., Fan, Z., Black, M. J.

Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024 (article)

webpage pdf code [BibTex]


Discrete Fourier transform three-to-one (DFT321): Code

Landin, N., Romano, J. M., McMahan, W., Kuchenbecker, K. J.

MATLAB code of discrete Fourier transform three-to-one (DFT321), 2024 (misc)
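
For context, a minimal Python port of the DFT321 idea: combine three acceleration axes into one signal whose spectral magnitude preserves the total energy, with the phase of the summed spectrum as one published choice for retaining temporal structure. Consult the released MATLAB code for the authors' exact implementation.

```python
# Illustrative DFT321 reduction of a 3-axis vibration recording to one axis.
import numpy as np

def dft321(ax: np.ndarray, ay: np.ndarray, az: np.ndarray) -> np.ndarray:
    """Collapse three acceleration axes into a single energy-preserving signal."""
    X, Y, Z = np.fft.rfft(ax), np.fft.rfft(ay), np.fft.rfft(az)
    magnitude = np.sqrt(np.abs(X)**2 + np.abs(Y)**2 + np.abs(Z)**2)
    phase = np.angle(X + Y + Z)            # phase of the summed spectrum
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=len(ax))

rng = np.random.default_rng(5)
combined = dft321(*rng.normal(size=(3, 1024)))   # (1024,) single-axis output
```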

Code Project Page [BibTex]


FLARE: Fast Learning of Animatable and Relightable Mesh Avatars

Bharadwaj, S., Zheng, Y., Hilliges, O., Black, M. J., Abrevaya, V. F.

ACM Transactions on Graphics, 42(6):204:1-204:15, December 2023 (article) Accepted

Abstract
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems. While 3D meshes enable efficient processing and are highly portable, they lack realism in terms of shape and appearance. Neural representations, on the other hand, are realistic but lack compatibility and are slow to train and render. Our key insight is that it is possible to efficiently learn high-fidelity 3D mesh representations via differentiable rendering by exploiting highly-optimized methods from traditional computer graphics and approximating some of the components with neural networks. To that end, we introduce FLARE, a technique that enables the creation of animatable and relightable mesh avatars from a single monocular video. First, we learn a canonical geometry using a mesh representation, enabling efficient differentiable rasterization and straightforward animation via learned blendshapes and linear blend skinning weights. Second, we follow physically-based rendering and factor observed colors into intrinsic albedo, roughness, and a neural representation of the illumination, allowing the learned avatars to be relit in novel scenes. Since our input videos are captured on a single device with a narrow field of view, modeling the surrounding environment light is non-trivial. Based on the split-sum approximation for modeling specular reflections, we address this by approximating the pre-filtered environment map with a multi-layer perceptron (MLP) modulated by the surface roughness, eliminating the need to explicitly model the light. We demonstrate that our mesh-based avatar formulation, combined with learned deformation, material, and lighting MLPs, produces avatars with high-quality geometry and appearance, while also being efficient to train and render compared to existing approaches.
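
A minimal PyTorch sketch of the lighting idea described above: an MLP stands in for the pre-filtered environment map of the split-sum approximation, queried with the reflected direction and modulated by roughness. Layer sizes and encodings are placeholders; FLARE's released code defines the real network.

```python
# Placeholder network: reflected view direction + roughness -> RGB radiance,
# standing in for a pre-filtered environment map in the split-sum model.
import torch
import torch.nn as nn

class EnvironmentMLP(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),     # 3D direction + roughness
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus()  # nonnegative RGB radiance
        )

    def forward(self, refl_dir: torch.Tensor, roughness: torch.Tensor):
        return self.net(torch.cat([refl_dir, roughness], dim=-1))

env = EnvironmentMLP()
dirs = nn.functional.normalize(torch.randn(8, 3), dim=-1)
radiance = env(dirs, torch.rand(8, 1))           # (8, 3) specular lookup
```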

Paper Project Page Code DOI [BibTex]


From Skin to Skeleton: Towards Biomechanically Accurate 3D Digital Humans

(Honorable Mention for Best Paper)

Keller, M., Werling, K., Shin, S., Delp, S., Pujades, S., Liu, C. K., Black, M. J.

ACM Transactions on Graphics (TOG), 42(6):253:1-253:15, December 2023 (article)

Abstract
Great progress has been made in estimating 3D human pose and shape from images and video by training neural networks to directly regress the parameters of parametric human models like SMPL. However, existing body models have simplified kinematic structures that do not correspond to the true joint locations and articulations in the human skeletal system, limiting their potential use in biomechanics. On the other hand, methods for estimating biomechanically accurate skeletal motion typically rely on complex motion capture systems and expensive optimization methods. What is needed is a parametric 3D human model with a biomechanically accurate skeletal structure that can be easily posed. To that end, we develop SKEL, which re-rigs the SMPL body model with a biomechanics skeleton. To enable this, we need training data of skeletons inside SMPL meshes in diverse poses. We build such a dataset by optimizing biomechanically accurate skeletons inside SMPL meshes from AMASS sequences. We then learn a regressor from SMPL mesh vertices to the optimized joint locations and bone rotations. Finally, we re-parametrize the SMPL mesh with the new kinematic parameters. The resulting SKEL model is animatable like SMPL but with fewer, and biomechanically-realistic, degrees of freedom. We show that SKEL has more biomechanically accurate joint locations than SMPL, and the bones fit inside the body surface better than previous methods. By fitting SKEL to SMPL meshes we are able to “upgrade” existing human pose and shape datasets to include biomechanical parameters. SKEL provides a new tool to enable biomechanics in the wild, while also providing vision and graphics researchers with a better constrained version of SMPL.
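
The regression step described above can be sketched in its simplest form: a linear map from flattened SMPL vertices to optimized joint locations. The tiny synthetic arrays below are stand-ins; the real SKEL regressor and its training data come from the authors' release.

```python
# Simplest instance of the vertex-to-joint regression: least-squares linear
# fit on synthetic stand-in data (real SMPL meshes have 6890 vertices).
import numpy as np

rng = np.random.default_rng(2)
n_meshes, n_verts, n_joints = 200, 300, 24            # tiny synthetic sizes
V = rng.normal(size=(n_meshes, n_verts * 3))          # flattened vertices
J = rng.normal(size=(n_meshes, n_joints * 3))         # optimized joints

W, *_ = np.linalg.lstsq(V, J, rcond=None)             # fit J ~ V @ W
pred = V @ W
print("training residual:",
      float(np.linalg.norm(pred - J) / np.linalg.norm(J)))
# A practical regressor would add regularization and far more training meshes.
```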

Project Page Paper DOI [BibTex]


Towards Semi-Automated Pleural Cavity Access for Pneumothorax in Austere Environments

L’Orsa, R., Lama, S., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

Acta Astronautica, 212, pages: 48-53, November 2023 (article)

Abstract
Astronauts are at risk for pneumothorax, a condition where injury or disease introduces air between the chest wall and the lungs (i.e., the pleural cavity). In a worst-case scenario, it can rapidly lead to a fatality if left unmanaged and will require prompt treatment in situ if developed during spaceflight. Chest tube insertion is the definitive treatment for pneumothorax, but it requires a high level of skill and frequent practice for safe use. Physician astronauts may struggle to maintain this skill on medium- and long-duration exploration-class missions, and it is inappropriate for pure just-in-time learning or skill refreshment paradigms. This paper proposes semi-automating tool insertion to reduce the risk of complications in austere environments and describes preliminary experiments providing initial validation of an intelligent prototype system. Specifically, we showcase and analyse motion and force recordings from a sensorized percutaneous access needle inserted repeatedly into an ex vivo tissue phantom, along with relevant physiological data simultaneously recorded from the operator. When coupled with minimal just-in-time training and/or augmented reality guidance, the proposed system may enable non-expert operators to safely perform emergency chest tube insertion without the use of ground resources.

DOI Project Page [BibTex]


Electrochemically Controlled Hydrogels with Electrotunable Permeability and Uniaxial Actuation

Benselfelt, T., Shakya, J., Rothemund, P., Lindström, S. B., Piper, A., Winkler, T. E., Hajian, A., Wågberg, L., Keplinger, C., Hamedi, M. M.

Advanced Materials, 35(45):2303255, Wiley-VCH GmbH, November 2023 (article)

Abstract
The unique properties of hydrogels enable the design of life-like soft intelligent systems. However, stimuli-responsive hydrogels still suffer from limited actuation control. Direct electronic control of electronically conductive hydrogels can solve this challenge and allow direct integration with modern electronic systems. An electrochemically controlled nanowire composite hydrogel with high in-plane conductivity that stimulates a uniaxial electrochemical osmotic expansion is demonstrated. This materials system allows precisely controlled shape-morphing at only −1 V, where capacitive charging of the hydrogel bulk leads to a large uniaxial expansion of up to 300%, caused by the ingress of ≈700 water molecules per electron–ion pair. The material retains its state when turned off, which is ideal for electrotunable membranes as the inherent coupling between the expansion and mesoporosity enables electronic control of permeability for adaptive separation, fractionation, and distribution. Used as electrochemical osmotic hydrogel actuators, they achieve an electroactive pressure of up to 0.7 MPa (1.4 MPa vs dry) and a work density of ≈150 kJ m−3 (2 MJ m−3 vs dry). This new materials system paves the way to integrate actuation, sensing, and controlled permeation into advanced soft intelligent systems.

link (url) DOI [BibTex]


Seeking Causal, Invariant Structures with Kernel Mean Embeddings in Haptic-Auditory Data from Tool-Surface Interaction

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the IROS Workshop on Causality for Robotics: Answering the Question of Why, Detroit, USA, October 2023 (misc)

Abstract
Causal inference could give future learning robots strong generalization and scalability capabilities, which are crucial for safety, fault diagnosis and error prevention. One application area of interest consists of the haptic recognition of surfaces. We seek to understand cause and effect during physical surface interaction by examining surface and tool identity, their interplay, and other contact-irrelevant factors. To work toward elucidating the mechanism of surface encoding, we attempt to recognize surfaces from haptic-auditory data captured by previously unseen hemispherical steel tools that differ from the recording tool in diameter and mass. In this context, we leverage ideas from kernel methods to quantify surface similarity through descriptive differences in signal distributions. We find that the effect of the tool is significantly present in higher-order statistical moments of contact data: aligning the means of the distributions being compared somewhat improves recognition but does not fully separate tool identity from surface identity. Our findings shed light on salient aspects of haptic-auditory data from tool-surface interaction and highlight the challenges involved in generalizing artificial surface discrimination capabilities.
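
A sketch of the mean-alignment idea tested above: center each sample set before computing a kernel two-sample statistic, so only differences beyond the first moment remain. The estimator (biased, for brevity) and synthetic stand-in features are illustrative assumptions.

```python
# Compare distributions with and without aligning their means; residual MMD
# after centering reflects higher-order differences such as tool identity.
import numpy as np

def rbf(a, b, s=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def mmd2(x, y):
    """Biased V-statistic estimate of squared MMD (compact for a sketch)."""
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

def centered(x):
    return x - x.mean(axis=0, keepdims=True)

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, size=(150, 6))   # stand-in: features from tool A
b = rng.normal(0.4, 1.3, size=(150, 6))   # stand-in: features from tool B
print("raw MMD^2:    ", round(mmd2(a, b), 4))
print("mean-aligned: ", round(mmd2(centered(a), centered(b)), 4))
```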

Manuscript Project Page [BibTex]


A taxonomy and review of generalization research in NLP

Hupkes, D., Giulianelli, M., Dankers, V., Artetxe, M., Elazar, Y., Pimentel, T., Christodoulopoulos, C., Lasri, K., Saphra, N., Sinclair, A., Ulmer, D., Schottmann, F., Batsuren, K., Sun, K., Sinha, K., Khalatbari, L., Ryskina, M., Frieske, R., Cotterell, R., Jin, Z.

Nature Machine Intelligence, 5(10):1161-1174, October 2023 (article)

DOI [BibTex]


Enhancing Surgical Team Collaboration and Situation Awareness through Multimodal Sensing

Allemang–Trivalle, A.

Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), pages: 716-720, Extended abstract (5 pages) presented at the ACM International Conference on Multimodal Interaction (ICMI) Doctoral Consortium, Paris, France, October 2023 (misc)

Abstract
Surgery, typically seen as the surgeon's sole responsibility, requires a broader perspective acknowledging the vital roles of other operating room (OR) personnel. The interactions among team members are crucial for delivering quality care and depend on shared situation awareness. I propose a two-phase approach to design and evaluate a multimodal platform that monitors OR members, offering insights into surgical procedures. The first phase focuses on designing a data-collection platform, tailored to surgical constraints, to generate novel collaboration and situation-awareness metrics using synchronous recordings of the participants' voices, positions, orientations, electrocardiograms, and respiration signals. The second phase concerns the creation of intuitive dashboards and visualizations, aiding surgeons in reviewing recorded surgery, identifying adverse events and contributing to proactive measures. This work aims to demonstrate an innovative approach to data collection and analysis, augmenting the surgical team's capabilities. The multimodal platform has the potential to enhance collaboration, foster situation awareness, and ultimately mitigate surgical adverse events. This research sets the stage for a transformative shift in the OR, enabling a more holistic and inclusive perspective that recognizes that surgery is a team effort.

DOI [BibTex]


NearContact: Accurate Human Detection using Tomographic Proximity and Contact Sensing with Cross-Modal Attention

Garrofé, G., Schoeffmann, C., Zangl, H., Kuchenbecker, K. J., Lee, H.

Extended abstract (4 pages) presented at the International Workshop on Human-Friendly Robotics (HFR), Munich, Germany, September 2023 (misc)

Project Page [BibTex]


Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test

Khojasteh, B., Solowjow, F., Trimpe, S., Kuchenbecker, K. J.

IEEE Transactions on Automation Science and Engineering, pages: 1-16, August 2023 (article)

Abstract
Machine learning and deep learning have been used extensively to classify physical surfaces through images and time-series contact data. However, these methods rely on human expertise and entail the time-consuming processes of data and parameter tuning. To overcome these challenges, we propose an easily implemented framework that can directly handle heterogeneous data sources for classification tasks. Our data-versus-data approach automatically quantifies distinctive differences in distributions in a high-dimensional space via kernel two-sample testing between two sets extracted from multimodal data (e.g., images, sounds, haptic signals). We demonstrate the effectiveness of our technique by benchmarking against expertly engineered classifiers for visual-audio-haptic surface recognition due to the industrial relevance, difficulty, and competitive baselines of this application; ablation studies confirm the utility of key components of our pipeline. As shown in our open-source code, we achieve 97.2% accuracy on a standard multi-user dataset with 108 surface classes, outperforming the state-of-the-art machine-learning algorithm by 6% on a more difficult version of the task. The fact that our classifier obtains this performance with minimal data processing in the standard algorithm setting reinforces the powerful nature of kernel methods for learning to recognize complex patterns. Note to Practitioners—We demonstrate how to apply the kernel two-sample test to a surface-recognition task, discuss opportunities for improvement, and explain how to use this framework for other classification problems with similar properties. Automating surface recognition could benefit both surface inspection and robot manipulation. Our algorithm quantifies class similarity and therefore outputs an ordered list of similar surfaces. This technique is well suited for quality assurance and documentation of newly received materials or newly manufactured parts. More generally, our automated classification pipeline can handle heterogeneous data sources including images and high-frequency time-series measurements of vibrations, forces and other physical signals. As our approach circumvents the time-consuming process of feature engineering, both experts and non-experts can use it to achieve high-accuracy classification. It is particularly appealing for new problems without existing models and heuristics. In addition to strong theoretical properties, the algorithm is straightforward to use in practice since it requires only kernel evaluations. Its transparent architecture can provide fast insights into the given use case under different sensing combinations without costly optimization. Practitioners can also use our procedure to obtain the minimum data-acquisition time for independent time-series data from new sensor recordings.
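
The data-versus-data decision rule described above reduces to a nearest-distribution search: assign a new recording to the library class it is least distinguishable from under the kernel two-sample statistic. This sketch uses a compact biased MMD estimator and synthetic stand-ins rather than the 108-surface dataset.

```python
# Nearest-distribution classification: pick the library class whose stored
# samples minimize MMD^2 against the query recording's samples.
import numpy as np

def rbf(a, b, s=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def mmd2(x, y):
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

def classify(query: np.ndarray, library: dict) -> str:
    """Return the class label minimizing MMD^2 between query and class data."""
    return min(library, key=lambda name: mmd2(query, library[name]))

rng = np.random.default_rng(4)
library = {f"surface_{i}": rng.normal(i, 1.0, size=(100, 5)) for i in range(4)}
query = rng.normal(2.0, 1.0, size=(40, 5))
print(classify(query, library))   # expected: surface_2
```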

DOI Project Page [BibTex]


Getting personal with epigenetics: towards individual-specific epigenomic imputation with machine learning

Hawkins-Hooker, A., Visonà, G., Narendra, T., Rojas-Carulla, M., Schölkopf, B., Schweikert, G.

Nature Communications, 14(1), August 2023 (article)

DOI [BibTex]


The Role of Kinematics Estimation Accuracy in Learning with Wearable Haptics

Rokhmanova, N., Pearl, O., Kuchenbecker, K. J., Halilaj, E.

Abstract presented at the American Society of Biomechanics (ASB), Knoxville, USA, August 2023 (misc)

Project Page [BibTex]


A model for efficient dynamical ranking in networks

Della Vecchia, A., Neocosmos, K., Larremore, D. B., Moore, C., De Bacco, C.

August 2023 (article) Submitted

Preprint Code link (url) [BibTex]


Magnetically assisted soft milli-tools for occluded lumen morphology detection

Yan, Y., Wang, T., Zhang, R., Liu, Y., Hu, W., Sitti, M.

Science Advances, 9(33):eadi3979, August 2023 (article)


DOI [BibTex]


Minsight: A Fingertip-Sized Vision-Based Tactile Sensor for Robotic Manipulation

Andrussow, I., Sun, H., Kuchenbecker, K. J., Martius, G.

Advanced Intelligent Systems, 5(8):2300042, August 2023, Inside back cover (article)

Abstract
Intelligent interaction with the physical world requires perceptual abilities beyond vision and hearing; vibrant tactile sensing is essential for autonomous robots to dexterously manipulate unfamiliar objects or safely contact humans. Therefore, robotic manipulators need high-resolution touch sensors that are compact, robust, inexpensive, and efficient. The soft vision-based haptic sensor presented herein is a miniaturized and optimized version of the previously published sensor Insight. Minsight has the size and shape of a human fingertip and uses machine learning methods to output high-resolution maps of 3D contact force vectors at 60 Hz. Experiments confirm its excellent sensing performance, with a mean absolute force error of 0.07 N and contact location error of 0.6 mm across its surface area. Minsight's utility is shown in two robotic tasks on a 3-DoF manipulator. First, closed-loop force control enables the robot to track the movements of a human finger based only on tactile data. Second, the informative value of the sensor output is shown by detecting whether a hard lump is embedded within a soft elastomer with an accuracy of 98%. These findings indicate that Minsight can give robots the detailed fingertip touch sensing needed for dexterous manipulation and physical human–robot interaction.


DOI Project Page [BibTex]


Learning to Estimate Palpation Forces in Robotic Surgery From Visual-Inertial Data

Lee, Y., Husin, H. M., Forte, M., Lee, S., Kuchenbecker, K. J.

IEEE Transactions on Medical Robotics and Bionics, 5(3):496-506, August 2023 (article)

Abstract
Surgeons cannot directly touch the patient's tissue in robot-assisted minimally invasive procedures. Instead, they must palpate using instruments inserted into the body through trocars. This way of operating largely prevents surgeons from using haptic cues to localize visually undetectable structures such as tumors and blood vessels, motivating research on direct and indirect force sensing. We propose an indirect force-sensing method that combines monocular images of the operating field with measurements from IMUs attached externally to the instrument shafts. Our method is thus suitable for various robotic surgery systems as well as laparoscopic surgery. We collected a new dataset using a da Vinci Si robot, a force sensor, and four different phantom tissue samples. The dataset includes 230 one-minute-long recordings of repeated bimanual palpation tasks performed by four lay operators. We evaluated several network architectures and investigated the role of the network inputs. Using the DenseNet vision model and including inertial data best predicted palpation forces (lowest average root-mean-square error and highest average coefficient of determination). Ablation studies revealed that video frames carry significantly more information than inertial signals. Finally, we demonstrated the model's ability to generalize to unseen tissue and predict shear contact forces.
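
A hedged PyTorch sketch of the architecture family evaluated above: a DenseNet backbone encodes each video frame, a small branch encodes the inertial signals, and the fused features regress a 3D force. The specific layers and dimensions are placeholders, not the paper's exact configuration.

```python
# Two-branch visual-inertial fusion regressor (illustrative placeholder).
import torch
import torch.nn as nn
from torchvision.models import densenet121

class ForceEstimator(nn.Module):
    def __init__(self, imu_dim: int = 12):
        super().__init__()
        backbone = densenet121(weights=None)
        backbone.classifier = nn.Identity()          # 1024-d image features
        self.vision = backbone
        self.inertial = nn.Sequential(nn.Linear(imu_dim, 64), nn.ReLU())
        self.head = nn.Linear(1024 + 64, 3)          # (Fx, Fy, Fz)

    def forward(self, image: torch.Tensor, imu: torch.Tensor):
        fused = torch.cat([self.vision(image), self.inertial(imu)], dim=-1)
        return self.head(fused)

model = ForceEstimator()
force = model(torch.randn(2, 3, 224, 224), torch.randn(2, 12))  # (2, 3)
```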

DOI [BibTex]


BARC: Breed-Augmented Regression Using Classification for 3D Dog Reconstruction from Images

Rueegg, N., Zuffi, S., Schindler, K., Black, M. J.

International Journal of Computer Vision (IJCV), 131(8):1964-1979, August 2023 (article)

Abstract
The goal of this work is to reconstruct 3D dogs from monocular images. We take a model-based approach, where we estimate the shape and pose parameters of a 3D articulated shape model for dogs. We consider dogs as they constitute a challenging problem, given they are highly articulated and come in a variety of shapes and appearances. Recent work has considered a similar task using the multi-animal SMAL model, with additional limb scale parameters, obtaining reconstructions that are limited in terms of realism. Like previous work, we observe that the original SMAL model is not expressive enough to represent dogs of many different breeds. Moreover, we make the hypothesis that the supervision signal used to train the network, that is 2D keypoints and silhouettes, is not sufficient to learn a regressor that can distinguish between the large variety of dog breeds. We therefore go beyond previous work in two important ways. First, we modify the SMAL shape space to be more appropriate for representing dog shape. Second, we formulate novel losses that exploit information about dog breeds. In particular, we exploit the fact that dogs of the same breed have similar body shapes. We formulate a novel breed similarity loss, consisting of two parts: One term is a triplet loss, that encourages the shape of dogs from the same breed to be more similar than dogs of different breeds. The second one is a breed classification loss. With our approach we obtain 3D dogs that, compared to previous work, are quantitatively better in terms of 2D reconstruction, and significantly better according to subjective and quantitative 3D evaluations. Our work shows that a-priori side information about similarity of shape and appearance, as provided by breed labels, can help to compensate for the lack of 3D training data. This concept may be applicable to other animal species or groups of species. We call our method BARC (Breed-Augmented Regression using Classification). Our code is publicly available for research purposes at https://barc.is.tue.mpg.de/.
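
The first part of the breed similarity loss maps directly onto a standard triplet loss over shape codes, as sketched below; the margin and code dimensionality are illustrative assumptions.

```python
# Breed triplet loss sketch: shapes of two same-breed dogs (anchor, positive)
# should be closer than the anchor is to a different-breed dog (negative).
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=0.2)           # margin is an assumption
anchor = torch.randn(16, 64, requires_grad=True)     # shape codes, breed A
positive = anchor.detach() + 0.05 * torch.randn(16, 64)  # same breed, noisy
negative = torch.randn(16, 64)                       # shape codes, breed B
loss = triplet(anchor, positive, negative)
loss.backward()   # gradients pull same-breed shape codes together
```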

On-line pdf DOI [BibTex]


Strap Tightness and Tissue Composition Both Affect the Vibration Created by a Wearable Device

Rokhmanova, N., Faulkner, R., Martus, J., Fiene, J., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Abstract
Wearable haptic devices can provide salient real-time feedback (typically vibration) for rehabilitation, sports training, and skill acquisition. Although the body provides many sites for such cues, the influence of the mounting location on vibrotactile mechanics is commonly ignored. This study builds on previous research by quantifying how changes in strap tightness and local tissue composition affect the physical acceleration generated by a typical vibrotactile device.

Project Page [BibTex]


Toward a Device for Reliable Evaluation of Vibrotactile Perception

Ballardini, G., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

[BibTex]


Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test: Code

Khojasteh, B., Solowjow, F., Trimpe, S., Kuchenbecker, K. J.

Code published as a companion to the journal article "Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test" in IEEE Transactions on Automation Science and Engineering, July 2023 (misc)

DOI Project Page [BibTex]


Improving Haptic Rendering Quality by Measuring and Compensating for Undesired Forces

Fazlollahi, F., Taghizadeh, Z., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]


Capturing Rich Auditory-Haptic Contact Data for Surface Recognition

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Abstract
The sophistication of biological sensing and transduction processes during finger-surface and tool-surface interaction is remarkable, enabling humans to perform ubiquitous tasks such as discriminating and manipulating surfaces. Capturing and processing these rich contact-elicited signals during surface exploration with similar success is an important challenge for artificial systems. Prior research introduced sophisticated mobile surface-sensing systems, but it remains less clear what quality, resolution and acuity of sensor data are necessary to perform human tasks with the same efficiency and accuracy. In order to address this gap in our understanding about artificial surface perception, we have designed a novel auditory-haptic test bed. This study aims to inspire new designs for artificial sensing tools in human-machine and robotic applications.

Project Page [BibTex]


AiroTouch: Naturalistic Vibrotactile Feedback for Telerobotic Construction

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference, Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]


Catastrophic overfitting can be induced with discriminative non-robust features

Ortiz-Jimenez*, G., de Jorge*, P., Sanyal, A., Bibi, A., Dokania, P. K., Frossard, P., Rogez, G., Torr, P.

Transactions on Machine Learning Research, July 2023, *equal contribution (article)

PDF Code link (url) [BibTex]


A Multifunctional Soft Robotic Shape Display with High-speed Actuation, Sensing, and Control

Johnson, B. K., Naris, M., Sundaram, V., Volchko, A., Ly, K., Mitchell, S. K., Acome, E., Kellaris, N., Keplinger, C., Correll, N., Humbert, J. S., Rentschler, M. E.

Nature Communications, 14(1), July 2023 (article)

Abstract
Shape displays which actively manipulate surface geometry are an expanding robotics domain with applications to haptics, manufacturing, aerodynamics, and more. However, existing displays often lack high-fidelity shape morphing, high-speed deformation, and embedded state sensing, limiting their potential uses. Here, we demonstrate a multifunctional soft shape display driven by a 10 × 10 array of scalable cellular units which combine high-speed electrohydraulic soft actuation, magnetic-based sensing, and control circuitry. We report high-performance reversible shape morphing up to 50 Hz, sensing of surface deformations with 0.1 mm sensitivity and external forces with 50 mN sensitivity in each cell, which we demonstrate across a multitude of applications including user interaction, image display, sensing of object mass, and dynamic manipulation of solids and liquids. This work showcases the rich multifunctionality and high-performance capabilities that arise from tightly-integrating large numbers of electrohydraulic actuators, soft sensors, and controllers at a previously undemonstrated scale in soft robotics.

YouTube video link (url) DOI [BibTex]


CAPT Motor: A Strong Direct-Drive Haptic Interface

Javot, B., Nguyen, V. H., Ballardini, G., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference, Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]


Can Recording Expert Demonstrations with Tool Vibrations Facilitate Teaching of Manual Skills?

Gourishetti, R., Javot, B., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]


Creating a Haptic Empathetic Robot Animal for Children with Autism

Burns, R. B.

Workshop paper (4 pages) presented at the RSS Pioneers Workshop, Daegu, South Korea, July 2023 (misc)

link (url) Project Page [BibTex]


The Influence of Amplitude and Sharpness on the Perceived Intensity of Isoenergetic Ultrasonic Signals

Gueorguiev, D., Rohou–Claquin, B., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]


Vibrotactile Playback for Teaching Manual Skills from Expert Recordings

Gourishetti, R., Hughes, A. G., Javot, B., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference, Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]