2017

Evaluation of High-Fidelity Simulation as a Training Tool in Transoral Robotic Surgery

Bur, A. M., Gomez, E. D., Newman, J. G., Weinstein, G. S., O’Malley Jr., B. W., Rassekh, C. H., Kuchenbecker, K. J.

Laryngoscope, 127(12):2790-2795, December 2017 (article)

hi

DOI [BibTex]

Learning a model of facial shape and expression from 4D scans

Li, T., Bolkart, T., Black, M. J., Li, H., Romero, J.

ACM Transactions on Graphics, 36(6):194:1-194:17, November 2017, Two first authors contributed equally (article)

Abstract
The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression. We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes; the pose- and expression-dependent articulations are learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33,000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).
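
As an illustrative aside, the linear part of such a model can be sketched in a few lines of NumPy: identity-shape and expression offsets are added to a template mesh before the articulated jaw, neck, and eyes are posed. All array names, sizes, and values below are placeholders, not the released FLAME data.

```python
import numpy as np

# Minimal sketch of a FLAME-style linear face model (illustration only; the
# released model ships its own data format and skinning code).
def compose_face(v_template, shape_dirs, expr_dirs, betas, psi):
    """v_template: (N, 3) mean mesh; shape_dirs: (N, 3, S); expr_dirs: (N, 3, E)."""
    v = v_template.copy()
    v += np.einsum("nks,s->nk", shape_dirs, betas)  # identity-shape offsets
    v += np.einsum("nke,e->nk", expr_dirs, psi)     # expression offsets
    return v  # jaw/neck/eye articulation and pose correctives would follow

# toy usage with random placeholder data
rng = np.random.default_rng(0)
verts = compose_face(rng.normal(size=(5023, 3)),
                     rng.normal(scale=1e-3, size=(5023, 3, 300)),
                     rng.normal(scale=1e-3, size=(5023, 3, 100)),
                     betas=np.zeros(300), psi=np.zeros(100))
```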

ps

data/model video code chumpy code tensorflow paper supplemental Project Page [BibTex]

Investigating Body Image Disturbance in Anorexia Nervosa Using Novel Biometric Figure Rating Scales: A Pilot Study

Mölbert, S. C., Thaler, A., Streuber, S., Black, M. J., Karnath, H., Zipfel, S., Mohler, B., Giel, K. E.

European Eating Disorders Review, 25(6):607-612, November 2017 (article)

Abstract
This study uses novel biometric figure rating scales (FRS) spanning body mass index (BMI) 13.8 to 32.2 kg/m2 and BMI 18 to 42 kg/m2. The aims of the study were (i) to compare FRS body weight dissatisfaction and perceptual distortion of women with anorexia nervosa (AN) to a community sample; (ii) to examine how FRS parameters are associated with questionnaire body dissatisfaction, eating disorder symptoms, and appearance comparison habits; and (iii) to test whether the weight spectrum of the FRS matters. Women with AN (n = 24) and a community sample of women (n = 104) selected their current and ideal body on the FRS and completed additional questionnaires. Women with AN accurately picked the body that aligned best with their actual weight in both FRS. Controls underestimated their BMI in the FRS 14–32 and were accurate in the FRS 18–42. In both FRS, women with AN desired a body close to their actual BMI and controls desired a thinner body. Our observations suggest that body image disturbance in AN is unlikely to be characterized by a visual perceptual disturbance, but rather by an idealization of underweight in conjunction with high body dissatisfaction. The weight spectrum of FRS can influence the accuracy of BMI estimation.

ps

publisher DOI Project Page [BibTex]


Embodied Hands: Modeling and Capturing Hands and Bodies Together

Romero, J., Tzionas, D., Black, M. J.

ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 36(6):245:1-245:17, ACM, November 2017 (article)

Abstract
Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.
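
The pose-dependent corrective blendshapes mentioned in the abstract can be sketched as a linear map from joint-rotation features to vertex offsets, in the spirit of MANO/SMPL; the arrays and dimensions below are placeholders rather than the released model.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Illustrative sketch: vertex offsets are a linear function of the elements of
# the joint rotation matrices minus the identity (so they vanish at the rest pose).
def pose_correctives(pose_axis_angle, posedirs):
    """pose_axis_angle: (J, 3) joint rotations; posedirs: (N, 3, 9*J)."""
    J = pose_axis_angle.shape[0]
    rotmats = Rotation.from_rotvec(pose_axis_angle).as_matrix()  # (J, 3, 3)
    features = (rotmats - np.eye(3)).reshape(9 * J)
    return np.einsum("nkf,f->nk", posedirs, features)            # (N, 3) offsets

rng = np.random.default_rng(1)
offsets = pose_correctives(rng.normal(scale=0.1, size=(15, 3)),
                           rng.normal(scale=1e-3, size=(778, 3, 9 * 15)))
```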

ps

website youtube paper suppl video link (url) DOI Project Page [BibTex]

An Online Scalable Approach to Unified Multirobot Cooperative Localization and Object Tracking

Ahmad, A., Lawless, G., Lima, P.

IEEE Transactions on Robotics (T-RO), 33, pages: 1184 - 1199, October 2017 (article)

Abstract
In this article we present a unified approach for multi-robot cooperative simultaneous localization and object tracking based on particle filters. Our approach is scalable with respect to the number of robots in the team. We introduce a method that reduces, from an exponential to a linear growth, the space and computation time requirements with respect to the number of robots in order to maintain a given level of accuracy in the full state estimation. Our method requires no increase in the number of particles with respect to the number of robots. However, in our method each particle represents a full state hypothesis, leading to the linear dependency on the number of robots of both space and time complexity. The derivation of the algorithm implementing our approach from a standard particle filter algorithm and its complexity analysis are presented. Through an extensive set of simulation experiments on a large number of randomized datasets, we demonstrate the correctness and efficacy of our approach. Through real robot experiments on a standardized open dataset of a team of four soccer playing robots tracking a ball, we evaluate our method's estimation accuracy with respect to the ground truth values. Through comparisons with other methods based on i) nonlinear least squares minimization and ii) joint extended Kalman filter, we further highlight our method's advantages. Finally, we also present a robustness test for our approach by evaluating it under scenarios of communication and vision failure in teammate robots.
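
To make the scalability argument concrete, here is a toy sketch with invented 2-D motion and range-measurement models: every particle stores one hypothesis of the full joint state (all robot poses plus the tracked object), so memory and per-update cost grow linearly with the number of robots while the particle count stays fixed.

```python
import numpy as np

def step(particles, weights, controls, measurements, rng):
    """particles: (P, R+1, 2) -> R robot positions plus one object position (2-D)."""
    # propagate every full-state hypothesis with a noisy toy motion model
    particles = particles + controls[None] + rng.normal(0.0, 0.05, particles.shape)
    # weight by how well each hypothesis explains every robot's range measurement
    pred = np.linalg.norm(particles[:, :-1] - particles[:, -1:], axis=-1)   # (P, R)
    weights = weights * np.exp(-0.5 * ((pred - measurements[None]) / 0.1) ** 2).prod(axis=1)
    weights /= weights.sum()
    idx = rng.choice(len(weights), size=len(weights), p=weights)            # resample
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

rng = np.random.default_rng(0)
P, R = 500, 4
particles = rng.normal(size=(P, R + 1, 2))
weights = np.full(P, 1.0 / P)
particles, weights = step(particles, weights,
                          controls=np.zeros((R + 1, 2)),
                          measurements=np.ones(R), rng=rng)
```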

ps

Published Version link (url) DOI [BibTex]

Generalized exploration in policy search

van Hoof, H., Tanneberg, D., Peters, J.

Machine Learning, 106(9-10):1705-1724 , (Editors: Kurt Driessens, Dragi Kocev, Marko Robnik‐Sikonja, and Myra Spiliopoulou), October 2017, Special Issue of the ECML PKDD 2017 Journal Track (article)

ei

DOI Project Page [BibTex]

Probabilistic Prioritization of Movement Primitives

Paraschos, A., Lioutikov, R., Peters, J., Neumann, G.

Proceedings of the International Conference on Intelligent Robot Systems, and IEEE Robotics and Automation Letters (RA-L), 2(4):2294-2301, October 2017 (article)

ei

link (url) DOI [BibTex]

Using Contact Forces and Robot Arm Accelerations to Automatically Rate Surgeon Skill at Peg Transfer

Brown, J. D., O’Brien, C. E., Leung, S. C., Dumon, K. R., Lee, D. I., Kuchenbecker, K. J.

IEEE Transactions on Biomedical Engineering, 64(9):2263-2275, September 2017 (article)

hi

link (url) DOI [BibTex]

Physical and Behavioral Factors Improve Robot Hug Quality

Block, A. E., Kuchenbecker, K. J.

Workshop Paper (2 pages) presented at the RO-MAN Workshop on Social Interaction and Multimodal Expression for Socially Intelligent Robots, Lisbon, Portugal, August 2017 (misc)

Abstract
A hug is one of the most basic ways humans can express affection. As hugs are so common, a natural progression of robot development is to have robots one day hug humans as seamlessly as these intimate human-human interactions occur. This project’s purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a warm, soft, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics and nine randomly ordered trials with varied hug pressure and duration. We found that people prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end.

hi

Project Page [BibTex]

Crowdshaping Realistic 3D Avatars with Words

Streuber, S., Ramirez, M. Q., Black, M., Zuffi, S., O’Toole, A., Hill, M. Q., Hahn, C. A.

August 2017, Application PCT/EP2017/051954 (misc)

Abstract
A method for generating a body shape, comprising the steps: - receiving one or more linguistic descriptors related to the body shape; - retrieving an association between the one or more linguistic descriptors and a body shape; and - generating the body shape, based on the association.
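
As a rough illustration of the claimed association step, a plain ridge regression from descriptor ratings to body-shape coefficients could stand in for the learned mapping; everything below is synthetic and hypothetical.

```python
import numpy as np

# Hypothetical stand-in for the learned association between linguistic
# descriptors and body shape: ridge regression on synthetic training data.
rng = np.random.default_rng(0)
W = rng.normal(size=(200, 30))   # word ratings for 200 training bodies, 30 descriptors
S = rng.normal(size=(200, 10))   # corresponding body-shape coefficients
lam = 1e-2
A = np.linalg.solve(W.T @ W + lam * np.eye(30), W.T @ S)   # ratings -> shape mapping

new_ratings = rng.normal(size=30)    # descriptor ratings for the body to generate
shape_coeffs = new_ratings @ A       # generated body-shape coefficients
```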

ps

Google Patents [BibTex]

Ungrounded Haptic Augmented Reality System for Displaying Texture and Friction

Culbertson, H., Kuchenbecker, K. J.

IEEE/ASME Transactions on Mechatronics, 22(4):1839-1849, August 2017 (article)

hi

link (url) DOI [BibTex]

Learning Movement Primitive Libraries through Probabilistic Segmentation

Lioutikov, R., Neumann, G., Maeda, G., Peters, J.

International Journal of Robotics Research, 36(8):879-894, July 2017 (article)

ei

DOI Project Page [BibTex]

Physically Interactive Exercise Games with a Baxter Robot

Fitter, N. T., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

hi

Project Page [BibTex]

Perception of Force and Stiffness in the Presence of Low-Frequency Haptic Noise

Gurari, N., Okamura, A. M., Kuchenbecker, K. J.

PLoS ONE, 12(6):e0178605, June 2017 (article)

hi

link (url) DOI [BibTex]

Proton Pack: Visuo-Haptic Surface Data Recording

Burka, A., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

hi

Project Page [BibTex]

Teaching a Robot to Collaborate with a Human Via Haptic Teleoperation

Hu, S., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

hi

Project Page [BibTex]

How Should Robots Hug?

Block, A. E., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

hi

Project Page [BibTex]

Evaluation of a Vibrotactile Simulator for Dental Caries Detection

Kuchenbecker, K. J., Parajon, R., Maggio, M. P.

Simulation in Healthcare, 12(3):148-156, June 2017 (article)

hi

DOI [BibTex]

An Interactive Augmented-Reality Video Training Platform for the da Vinci Surgical System

Carlson, J., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the ICRA Workshop on C4 Surgical Robots, Singapore, May 2017 (misc)

Abstract
Teleoperated surgical robots such as the Intuitive da Vinci Surgical System facilitate minimally invasive surgeries, which decrease risk to patients. However, these systems can be difficult to learn, and existing training curricula on surgical simulators do not offer students the realistic experience of a full operation. This paper presents an augmented-reality video training platform for the da Vinci that will allow trainees to rehearse any surgery recorded by an expert. While the trainee operates a da Vinci in free space, they see their own instruments overlaid on the expert video. Tools are identified in the source videos via color segmentation and kernelized correlation filter tracking, and their depth is calculated from the da Vinci’s stereoscopic video feed. The user tries to follow the expert’s movements, and if any of their tools venture too far away, the system provides instantaneous visual feedback and pauses to allow the user to correct their motion. The trainee can also rewind the expert video by bringing either da Vinci tool very close to the camera. This combined and augmented video provides the user with an immersive and interactive training experience.
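
A rough OpenCV sketch of the color-segmentation step described above (not the authors' implementation): threshold a frame in HSV around a tool's marker color and take the largest blob as the tool region. The HSV bounds are placeholder values, and the returned box could then seed a tracker such as KCF.

```python
import cv2
import numpy as np

def segment_tool(frame_bgr, lower_hsv=(35, 60, 60), upper_hsv=(85, 255, 255)):
    """Return a bounding box (x, y, w, h) for the largest color-matched blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

# usage on a synthetic (all-black) frame
box = segment_tool(np.zeros((480, 640, 3), dtype=np.uint8))
```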

hi

[BibTex]

Guiding Trajectory Optimization by Demonstrated Distributions

Osa, T., Ghalamzan E., A. M., Stolkin, R., Lioutikov, R., Peters, J., Neumann, G.

IEEE Robotics and Automation Letters, 2(2):819-826, April 2017 (article)

ei

DOI [BibTex]

Whole-body multi-contact motion in humans and humanoids: Advances of the CoDyCo European project

Padois, V., Ivaldi, S., Babic, J., Mistry, M., Peters, J., Nori, F.

Robotics and Autonomous Systems, 90, pages: 97-117, April 2017, Special Issue on New Research Frontiers for Intelligent Autonomous Systems (article)

ei

DOI Project Page [BibTex]

Probabilistic Movement Primitives for Coordination of Multiple Human-Robot Collaborative Tasks

Maeda, G., Neumann, G., Ewerton, M., Lioutikov, R., Kroemer, O., Peters, J.

Autonomous Robots, 41(3):593-612, March 2017 (article)

ei

DOI Project Page [BibTex]

Hand-Clapping Games with a Baxter Robot

Fitter, N. T., Kuchenbecker, K. J.

Hands-on demonstration presented at ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vienna, Austria, March 2017 (misc)

Abstract
Robots that work alongside humans might be more effective if they could forge a strong social bond with their human partners. Hand-clapping games and other forms of rhythmic social-physical interaction may foster human-robot teamwork, but the design of such interactions has scarcely been explored. At the HRI 2017 conference, we will showcase several such interactions taken from our recent work with the Rethink Robotics Baxter Research Robot, including tempo-matching, Simon says, and Pat-a-cake-like games. We believe conference attendees will be both entertained and intrigued by this novel demonstration of social-physical HRI.

hi

Project Page [BibTex]

Automatic OSATS Rating of Trainee Skill at a Pediatric Laparoscopic Suturing Task

Oquendo, Y. A., Riddle, E. W., Hiller, D., Blinman, T. A., Kuchenbecker, K. J.

Surgical Endoscopy, 31(Supplement 1):S28, Extended abstract presented as a podium presentation at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Springer, Houston, USA, March 2017 (misc)

Abstract
Introduction: Minimally invasive surgery has revolutionized surgical practice, but challenges remain. Trainees must acquire complex technical skills while minimizing patient risk, and surgeons must maintain their skills for rare procedures. These challenges are magnified in pediatric surgery due to the smaller spaces, finer tissue, and relative dearth of both inanimate and virtual simulators. To build technical expertise, trainees need opportunities for deliberate practice with specific performance feedback, which is typically provided via tedious human grading. This study aimed to validate a novel motion-tracking system and machine learning algorithm for automatically evaluating trainee performance on a pediatric laparoscopic suturing task using a 1–5 OSATS Overall Skill rating. Methods: Subjects (n=14) ranging from medical students to fellows performed one or two trials of an intracorporeal suturing task in a custom pediatric laparoscopy training box (Fig. 1) after watching a video of ideal performance by an expert. The position and orientation of the tools and endoscope were recorded over time using Ascension trakSTAR magnetic motion-tracking sensors, and both instrument grasp angles were recorded over time using flex sensors on the handles. The 27 trials were video-recorded and scored on the OSATS scale by a senior fellow; ratings ranged from 1 to 4. The raw motion data from each trial was processed to calculate over 200 preliminary motion parameters. Regularized least-squares regression (LASSO) was used to identify the most predictive parameters for inclusion in a regression tree. Model performance was evaluated by leave-one-subject-out cross validation, wherein the automatic scores given to each subject’s trials (by a model trained on all other data) are compared to the corresponding human rater scores. Results: The best-performing LASSO algorithm identified 14 predictive parameters for inclusion in the regression tree, including completion time, linear path length, angular path length, angular acceleration, grasp velocity, and grasp acceleration. The final model’s raw output showed a strong positive correlation of 0.87 with the reviewer-generated scores, and rounding the output to the nearest integer yielded a leave-one-subject-out cross-validation accuracy of 77.8%. Results are summarized in the confusion matrix (Table 1). Conclusions: Our novel motion-tracking system and regression model automatically gave previously unseen trials overall skill scores that closely match scores from an expert human rater. With additional data and further development, this system may enable creation of a motion-based training platform for pediatric laparoscopic surgery and could yield insights into the fundamental components of surgical skill.
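
A hedged scikit-learn sketch of the pipeline summarized above, with synthetic data standing in for the real motion parameters: LASSO selects predictive features, a regression tree predicts the rating, and evaluation is leave-one-subject-out.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(27, 200))                   # 27 trials x 200 candidate motion parameters
y = rng.integers(1, 5, size=27).astype(float)    # OSATS overall skill ratings (1-4)
subjects = rng.integers(0, 14, size=27)          # subject ID for each trial

accuracies = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    selector = Lasso(alpha=0.1).fit(X[train], y[train])
    keep = np.flatnonzero(selector.coef_)                      # parameters LASSO retained
    if keep.size == 0:
        keep = np.arange(X.shape[1])
    tree = DecisionTreeRegressor(max_depth=3).fit(X[train][:, keep], y[train])
    pred = np.rint(tree.predict(X[test][:, keep]))             # round to nearest integer score
    accuracies.append(np.mean(pred == y[test]))

print("leave-one-subject-out accuracy:", np.mean(accuracies))
```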

hi

[BibTex]

How Much Haptic Surface Data is Enough?

Burka, A., Kuchenbecker, K. J.

Workshop paper (5 pages) presented at the AAAI Spring Symposium on Interactive Multi-Sensory Object Perception for Embodied Agents, Stanford, USA, March 2017 (misc)

Abstract
The Proton Pack is a portable visuo-haptic surface interaction recording device that will be used to collect a vast multimodal dataset, intended for robots to use as part of an approach to understanding the world around them. In order to collect a useful dataset, we want to pick a suitable interaction duration for each surface, noting the tradeoff between data collection resources and completeness of data. One interesting approach frames the data collection process as an online learning problem, building an incremental surface model and using that model to decide when there is enough data. Here we examine how to do such online surface modeling and when to stop collecting data, using kinetic friction as a first domain in which to apply online modeling.
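
A toy sketch of the stopping idea for the kinetic-friction case: accumulate force samples, update a running estimate of the friction coefficient, and stop once its confidence interval is narrow enough. The thresholds and the data stream below are invented for illustration.

```python
import numpy as np

def collect_until_confident(sample_stream, ci_width=0.01, min_samples=20):
    """Consume (normal force, friction force) pairs until the mu estimate stabilizes."""
    ratios = []
    for normal_force, friction_force in sample_stream:
        ratios.append(friction_force / normal_force)      # instantaneous mu estimate
        n = len(ratios)
        if n >= min_samples:
            sem = np.std(ratios, ddof=1) / np.sqrt(n)      # standard error of the mean
            if 2 * 1.96 * sem < ci_width:                  # ~95% CI narrow enough -> stop
                break
    return np.mean(ratios), n

rng = np.random.default_rng(0)
stream = ((1.0 + rng.normal(0, 0.05), 0.3 + rng.normal(0, 0.02)) for _ in range(10000))
mu_hat, samples_used = collect_until_confident(stream)
```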

hi

link (url) Project Page [BibTex]

Bioinspired tactile sensor for surface roughness discrimination

Yi, Z., Zhang, Y., Peters, J.

Sensors and Actuators A: Physical, 255, pages: 46-53, March 2017 (article)

ei

DOI Project Page [BibTex]

Spinal joint compliance and actuation in a simulated bounding quadruped robot

Pouya, S., Khodabakhsh, M., Sproewitz, A., Ijspeert, A.

Autonomous Robots, pages: 437-452, Kluwer Academic Publishers, Springer, Dordrecht, New York, NY, February 2017 (article)

dlg

link (url) DOI Project Page [BibTex]

Importance of Matching Physical Friction, Hardness, and Texture in Creating Realistic Haptic Virtual Surfaces

Culbertson, H., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 10(1):63-74, January 2017 (article)

hi

[BibTex]


Effects of Grip-Force, Contact, and Acceleration Feedback on a Teleoperated Pick-and-Place Task

Khurshid, R. P., Fitter, N. T., Fedalei, E. A., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 10(1):40-53, January 2017 (article)

hi

[BibTex]

Model-based Contextual Policy Search for Data-Efficient Generalization of Robot Skills

Kupcsik, A., Deisenroth, M., Peters, J., Ai Poh, L., Vadakkepat, V., Neumann, G.

Artificial Intelligence, 247, pages: 415-439, 2017, Special Issue on AI and Robotics (article)

ei

link (url) DOI Project Page [BibTex]

Anticipatory Action Selection for Human-Robot Table Tennis

Wang, Z., Boularias, A., Mülling, K., Schölkopf, B., Peters, J.

Artificial Intelligence, 247, pages: 399-414, 2017, Special Issue on AI and Robotics (article)

Abstract
Anticipation can enhance the capability of a robot in its interaction with humans, where the robot predicts the humans' intention for selecting its own action. We present a novel framework of anticipatory action selection for human-robot interaction, which is capable of handling nonlinear and stochastic human behaviors such as table tennis strokes and allows the robot to choose the optimal action based on prediction of the human partner's intention with uncertainty. The presented framework is generic and can be used in many human-robot interaction scenarios, for example, in navigation and human-robot co-manipulation. In this article, we conduct a case study on human-robot table tennis. Due to the limited amount of time for executing hitting movements, a robot usually needs to initiate its hitting movement before the opponent hits the ball, which requires the robot to be anticipatory based on visual observation of the opponent's movement. Previous work on Intention-Driven Dynamics Models (IDDM) allowed the robot to predict the intended target of the opponent. In this article, we address the problem of action selection and optimal timing for initiating a chosen action by formulating the anticipatory action selection as a Partially Observable Markov Decision Process (POMDP), where the transition and observation are modeled by the IDDM framework. We present two approaches to anticipatory action selection based on the POMDP formulation, i.e., a model-free policy learning method based on Least-Squares Policy Iteration (LSPI) that employs the IDDM for belief updates, and a model-based Monte-Carlo Planning (MCP) method, which benefits from the transition and observation model by the IDDM. Experimental results using real data in a simulated environment show the importance of anticipatory action selection, and that POMDPs are suitable to formulate the anticipatory action selection problem by taking into account the uncertainties in prediction. We also show that existing algorithms for POMDPs, such as LSPI and MCP, can be applied to substantially improve the robot's performance in its interaction with humans.

am ei

DOI Project Page [BibTex]

easyGWAS: A Cloud-based Platform for Comparing the Results of Genome-wide Association Studies

Grimm, D., Roqueiro, D., Salome, P., Kleeberger, S., Greshake, B., Zhu, W., Liu, C., Lippert, C., Stegle, O., Schölkopf, B., Weigel, D., Borgwardt, K.

The Plant Cell, 29(1):5-19, 2017 (article)

ei

link (url) DOI [BibTex]

A Novel Unsupervised Segmentation Approach Quantifies Tumor Tissue Populations Using Multiparametric MRI: First Results with Histological Validation

Katiyar, P., Divine, M. R., Kohlhofer, U., Quintanilla-Martinez, L., Schölkopf, B., Pichler, B. J., Disselhorst, J. A.

Molecular Imaging and Biology, 19(3):391-397, 2017 (article)

ei

DOI [BibTex]

Early Stopping Without a Validation Set

Mahsereci, M., Balles, L., Lassner, C., Hennig, P.

arXiv preprint arXiv:1703.09580, 2017 (article)

Abstract
Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization. To find a good point to halt the optimizer, a common practice is to split the dataset into a training and a smaller validation set to obtain an ongoing estimate of the generalization performance. In this paper we propose a novel early stopping criterion which is based on fast-to-compute, local statistics of the computed gradients and entirely removes the need for a held-out validation set. Our experiments show that this is a viable approach in the setting of least-squares and logistic regression as well as neural networks.
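
A simplified sketch of this flavor of criterion (not the paper's exact statistic): stop when the mini-batch gradient is no longer distinguishable from zero relative to its estimated sampling noise, using only per-example gradients from the current batch.

```python
import numpy as np

def should_stop(per_example_grads):
    """per_example_grads: (B, D) array of per-example gradients at the current step."""
    B, D = per_example_grads.shape
    g = per_example_grads.mean(axis=0)                   # mini-batch gradient
    var = per_example_grads.var(axis=0, ddof=1) / B      # variance of that mean
    snr = (g ** 2) / np.maximum(var, 1e-12)              # per-dimension signal-to-noise
    return snr.mean() < 1.0                              # noise dominates -> stop

rng = np.random.default_rng(0)
print(should_stop(rng.normal(loc=0.0, scale=1.0, size=(128, 10))))   # gradient is pure noise
print(should_stop(rng.normal(loc=0.5, scale=1.0, size=(128, 10))))   # clear descent direction remains
```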

ps pn

link (url) Project Page Project Page [BibTex]


Minimax Estimation of Kernel Mean Embeddings

Tolstikhin, I., Sriperumbudur, B., Muandet, K.

Journal of Machine Learning Research, 18(86):1-47, 2017 (article)

ei

link (url) Project Page [BibTex]


Kernel Mean Embedding of Distributions: A Review and Beyond

Muandet, K., Fukumizu, K., Sriperumbudur, B., Schölkopf, B.

Foundations and Trends in Machine Learning, 10(1-2):1-141, 2017 (article)

ei

DOI Project Page [BibTex]

Prediction of intention during interaction with iCub with Probabilistic Movement Primitives

Dermy, O., Paraschos, A., Ewerton, M., Charpillet, F., Peters, J., Ivaldi, S.

Frontiers in Robotics and AI, 4, pages: 45, 2017 (article)

ei

DOI Project Page [BibTex]

Manifold-based multi-objective policy search with sample reuse

Parisi, S., Pirotta, M., Peters, J.

Neurocomputing, 263, pages: 3-14, (Editors: Madalina Drugan, Marco Wiering, Peter Vamplew, and Madhu Chetty), 2017, Special Issue on Multi-Objective Reinforcement Learning (article)

ei

DOI Project Page [BibTex]

Spectral Clustering predicts tumor tissue heterogeneity using dynamic 18F-FDG PET: a complement to the standard compartmental modeling approach

Katiyar, P., Divine, M. R., Kohlhofer, U., Quintanilla-Martinez, L., Schölkopf, B., Pichler, B. J., Disselhorst, J. A.

Journal of Nuclear Medicine, 58(4):651-657, 2017 (article)

ei

link (url) DOI [BibTex]

Electroencephalographic identifiers of motor adaptation learning

Ozdenizci, O., Yalcin, M., Erdogan, A., Patoglu, V., Grosse-Wentrup, M., Cetin, M.

Journal of Neural Engineering, 14(4):046027, 2017 (article)

ei

link (url) [BibTex]

Detecting distortions of peripherally presented letter stimuli under crowded conditions

Wallis, T. S. A., Tobias, S., Bethge, M., Wichmann, F. A.

Attention, Perception, & Psychophysics, 79(3):850-862, 2017 (article)

ei

DOI Project Page [BibTex]

Temporal evolution of the central fixation bias in scene viewing

Rothkegel, L. O. M., Trukenbrod, H. A., Schütt, H. H., Wichmann, F. A., Engbert, R.

Journal of Vision, 17(13):3, 2017 (article)

ei

DOI Project Page [BibTex]

BundleMAP: Anatomically Localized Classification, Regression, and Hypothesis Testing in Diffusion MRI

Khatami, M., Schmidt-Wilcke, T., Sundgren, P. C., Abbasloo, A., Schölkopf, B., Schultz, T.

Pattern Recognition, 63, pages: 593-600, 2017 (article)

ei

DOI [BibTex]

Data-Driven Physics for Human Soft Tissue Animation

Kim, M., Pons-Moll, G., Pujades, S., Bang, S., Kim, J., Black, M. J., Lee, S.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4):54:1-54:12, 2017 (article)

Abstract
Data-driven models of human poses and soft-tissue deformations can produce very realistic results, but they only model the visible surface of the human body and cannot create skin deformation due to interactions with the environment. Physical simulations can generalize to external forces, but their parameters are difficult to control. In this paper, we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based external layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft tissue layer consists of a tetrahedral mesh that is driven using the finite element method (FEM). Model parameters, namely the segmentation of the body into layers and the soft tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft tissue deformations. The learned two layer model is a realistic full-body avatar that generalizes to novel motions and external forces. Experiments show that the resulting avatars produce realistic results on held out sequences and react to external forces. Moreover, the model supports the retargeting of physical properties from one avatar to another when they share the same topology.

ps

video paper link (url) Project Page [BibTex]

A parametric texture model based on deep convolutional features closely matches texture appearance for humans

Wallis, T. S. A., Funke, C. M., Ecker, A. S., Gatys, L. A., Wichmann, F. A., Bethge, M.

Journal of Vision, 17(12), 2017 (article)

ei

DOI Project Page [BibTex]

Model Selection for Gaussian Mixture Models

Huang, T., Peng, H., Zhang, K.

Statistica Sinica, 27(1):147-169, 2017 (article)

ei

link (url) [BibTex]
