

2019


Series Elastic Behavior of Biarticular Muscle-Tendon Structure in a Robotic Leg

Ruppert, F., Badri-Spröwitz, A.

Frontiers in Neurorobotics, 13, article 64, August 2019 (article)

Frontiers YouTube link (url) DOI [BibTex]


The positive side of damping

Heim, S., Millard, M., Le Mouel, C., Spröwitz, A.

Proceedings of AMAM, The 9th International Symposium on Adaptive Motion of Animals and Machines, August 2019 (conference) Accepted

[BibTex]


Beyond Basins of Attraction: Quantifying Robustness of Natural Dynamics

Heim, S., Spröwitz, A.

IEEE Transactions on Robotics (T-RO), 35(4), pages: 939-952, August 2019 (article)

Abstract
Properly designing a system to exhibit favorable natural dynamics can greatly simplify designing or learning the control policy. However, it is still unclear what constitutes favorable natural dynamics and how to quantify its effect. Most studies of simple walking and running models have focused on the basins of attraction of passive limit cycles and the notion of self-stability. We instead emphasize the importance of stepping beyond basins of attraction. In this paper, we show an approach based on viability theory to quantify robust sets in state-action space. These sets are valid for the family of all robust control policies, which allows us to quantify the robustness inherent to the natural dynamics before designing the control policy or specifying a control objective. We illustrate our formulation using spring-mass models, simple low-dimensional models of running systems. We then show an example application by optimizing robustness of a simulated planar monoped, using a gradient-free optimization scheme. Both case studies result in a nonlinear effective stiffness providing more robustness.

arXiv preprint arXiv:1806.08081 T-RO link (url) DOI Project Page [BibTex]


Motion Planning for Multi-Mobile-Manipulator Payload Transport Systems

Tallamraju, R., Salunkhe, D., Rajappa, S., Ahmad, A., Karlapalem, K., Shah, S. V.

In 15th IEEE International Conference on Automation Science and Engineering, IEEE, August 2019 (inproceedings) Accepted

[BibTex]


Soft Continuous Surface for Micromanipulation driven by Light-controlled Hydrogels

Choi, E., Jeong, H., Qiu, T., Fischer, P., Palagi, S.

4th IEEE International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), July 2019 (conference)

Abstract
Remotely controlled, automated actuation and manipulation at the microscale is essential for a number of micro-manufacturing, biology, and lab-on-a-chip applications. To transport and manipulate micro-objects, arrays of remotely controlled micro-actuators are required, which, in turn, typically require complex and expensive solid-state chips. Here, we show that a continuous surface can function as a highly parallel, many-degree of freedom, wirelessly-controlled microactuator with seamless deformation. The soft continuous surface is based on a hydrogel that undergoes a volume change in response to applied light. The fabrication of the hydrogels and the characterization of their optical and thermomechanical behaviors are reported. The temperature-dependent localized deformation of the hydrogel is also investigated by numerical simulations. Static and dynamic deformations are obtained in the soft material by projecting light fields at high spatial resolution onto the surface. By controlling such deformations in open loop and especially closed loop, automated photoactuation is achieved. The surface deformations are then exploited to examine how inert microbeads can be manipulated autonomously on the surface. We believe that the proposed approach suggests ways to implement universal 2D micromanipulation schemes that can be useful for automation in microfabrication and lab-on-a-chip applications.

[BibTex]


Soft Phantom for the Training of Renal Calculi Diagnostics and Lithotripsy

Li, D., Suarez-Ibarrola, R., Choi, E., Jeong, M., Gratzke, C., Miernik, A., Fischer, P., Qiu, T.

41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), July 2019 (conference)

Abstract
Organ models are important for medical training and surgical planning. With the fast development of additive fabrication technologies, including 3D printing, the fabrication of 3D organ phantoms with precise anatomical features becomes possible. Here, we develop the first high-resolution kidney phantom based on soft material assembly, by combining 3D printing and polymer molding techniques. The phantom exhibits both the detailed anatomy of a human kidney and the elasticity of soft tissues. The phantom assembly can be separated into two parts on the coronal plane, thus large renal calculi are readily placed at any desired location of the calyx. With our sealing method, the assembled phantom withstands a hydraulic pressure that is four times the normal intrarenal pressure, thus it allows the simulation of medical procedures under realistic pressure conditions. The medical diagnostics of the renal calculi is performed by multiple imaging modalities, including X-ray, ultrasound imaging and endoscopy. The endoscopic lithotripsy is also successfully performed on the phantom. The use of a multifunctional soft phantom assembly thus shows great promise for the simulation of minimally invasive medical procedures under realistic conditions.

[BibTex]


A Magnetic Actuation System for the Active Microrheology in Soft Biomaterials

Jeong, M., Choi, E., Li, D., Palagi, S., Fischer, P., Qiu, T.

4th IEEE International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), July 2019 (conference)

Abstract
Microrheology is a key technique to characterize soft materials at small scales. The microprobe is wirelessly actuated and therefore typically only low forces or torques can be applied, which limits the range of the applied strain. Here, we report a new magnetic actuation system for microrheology consisting of an array of rotating permanent magnets, which achieves a rotating magnetic field with a spatially homogeneous high field strength of ~100 mT in a working volume of ~20×20×20 mm³. Compared to a traditional electromagnetic coil system, the permanent magnet assembly is portable and does not require cooling, and it exerts a large magnetic torque on the microprobe that is an order of magnitude higher than previous setups. Experimental results demonstrate that the measurement range of the soft gels’ elasticity covers at least five orders of magnitude. With the large actuation torque, it is also possible to study the fracture mechanics of soft biomaterials at small scales.

[BibTex]


Beta Power May Mediate the Effect of Gamma-TACS on Motor Performance

Mastakouri, A., Schölkopf, B., Grosse-Wentrup, M.

Engineering in Medicine and Biology Conference (EMBC), July 2019 (conference) Accepted

arXiv PDF [BibTex]


Coordinating Users of Shared Facilities via Data-driven Predictive Assistants and Game Theory

Geiger, P., Besserve, M., Winkelmann, J., Proissl, C., Schölkopf, B.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 49, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

link (url) [BibTex]


Learning Variable Impedance Control for Contact Sensitive Tasks

Bogdanovic, M., Khadiv, M., Righetti, L.

arXiv preprint, arXiv:1907.07500, July 2019 (article)

Abstract
Reinforcement learning algorithms have shown great success in solving different problems ranging from playing video games to robotics. However, they struggle to solve delicate robotic problems, especially those involving contact interactions. Though in principle a policy outputting joint torques should be able to learn these tasks, in practice we see that such policies have difficulty robustly solving the problem without any structure in the action space. In this paper, we investigate how the choice of action space can give robust performance in the presence of contact uncertainties. We propose to learn a policy that outputs impedance and desired position in joint space as a function of system states without imposing any other structure on the problem. We compare the performance of this approach to torque and position control policies under different contact uncertainties. Extensive simulation results on two different systems, a hopper (floating-base) with intermittent contacts and a manipulator (fixed-base) wiping a table, show that our proposed approach outperforms policies outputting torque or position in terms of both learning rate and robustness to environment uncertainty.


[BibTex]


What’s in the Adaptive Toolbox and How Do People Choose From It? Rational Models of Strategy Selection in Risky Choice

Mohnert, F., Pachur, T., Lieder, F.

41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)


[BibTex]


Measuring how people learn how to plan

Jain, Y. R., Callaway, F., Lieder, F.

RLDM 2019, July 2019 (conference)

[BibTex]


Event-triggered Pulse Control with Model Learning (if Necessary)

Baumann, D., Solowjow, F., Johansson, K. H., Trimpe, S.

In Proceedings of the American Control Conference, pages: 792-797, American Control Conference (ACC), July 2019 (inproceedings)

arXiv PDF [BibTex]


Measuring how people learn how to plan

Jain, Y. R., Callaway, F., Lieder, F.

41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

[BibTex]


The Sensitivity of Counterfactual Fairness to Unmeasured Confounding

Kilbertus, N., Ball, P. J., Kusner, M. J., Weller, A., Silva, R.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 213, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

link (url) [BibTex]


Objective and Subjective Assessment of Algorithms for Reducing Three-Axis Vibrations to One-Axis Vibrations

Park, G., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference, pages: 467-472, July 2019 (inproceedings)

Abstract
A typical approach to creating realistic vibrotactile feedback is reducing 3D vibrations recorded by an accelerometer to 1D signals that can be played back on a haptic actuator, but some of the information is often lost in this dimensional reduction process. This paper describes seven representative algorithms and proposes four metrics based on the spectral match, the temporal match, and their average value and variability across 3D rotations. These four performance metrics were applied to four texture recordings, and the method utilizing the discrete Fourier transform (DFT) was found to be the best regardless of the sensing axis. We also recruited 16 participants to assess the perceptual similarity achieved by each algorithm in real time. We found the four metrics correlated well with the subjectively rated similarities for six of the seven dimensional reduction algorithms, with the exception of taking the 3D vector magnitude, which was perceived to be good despite its low spectral and temporal match metrics.

DOI [BibTex]


A cognitive tutor for helping people overcome present bias

Lieder, F., Callaway, F., Jain, Y., Krueger, P., Das, P., Gul, S., Griffiths, T.

RLDM 2019, July 2019 (conference)

[BibTex]


The Incomplete Rosetta Stone problem: Identifiability results for Multi-view Nonlinear ICA

Gresele*, L., Rubenstein*, P. K., Mehrjou, A., Locatello, F., Schölkopf, B.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 53, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019, *equal contribution (conference)

link (url) [BibTex]


Fingertip Interaction Metrics Correlate with Visual and Haptic Perception of Real Surfaces

Vardar, Y., Wallraven, C., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 395-400, Tokyo, Japan, July 2019 (inproceedings)

Abstract
Both vision and touch contribute to the perception of real surfaces. Although there have been many studies on the individual contributions of each sense, it is still unclear how each modality’s information is processed and integrated. To fill this gap, we investigated the similarity of visual and haptic perceptual spaces, as well as how well they each correlate with fingertip interaction metrics. Twenty participants interacted with ten different surfaces from the Penn Haptic Texture Toolkit by either looking at or touching them and judged their similarity in pairs. By analyzing the resulting similarity ratings using multi-dimensional scaling (MDS), we found that surfaces are similarly organized within the three-dimensional perceptual spaces of both modalities. Also, between-participant correlations were significantly higher in the haptic condition. In a separate experiment, we obtained the contact forces and accelerations acting on one finger interacting with each surface in a controlled way. We analyzed the collected fingertip interaction data in both the time and frequency domains. Our results suggest that the three perceptual dimensions for each modality can be represented by roughness/smoothness, hardness/softness, and friction, and that these dimensions can be estimated by surface vibration power, tap spectral centroid, and kinetic friction coefficient, respectively.

DOI Project Page [BibTex]


Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning

Peharz, R., Vergari, A., Stelzner, K., Molina, A., Shao, X., Trapp, M., Kersting, K., Ghahramani, Z.

Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), pages: 124, (Editors: Amir Globerson and Ricardo Silva), AUAI Press, July 2019 (conference)

link (url) [BibTex]


Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

Ranjan, A., Jampani, V., Balles, L., Kim, K., Sun, D., Wulff, J., Black, M. J.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems.

Paper link (url) Project Page Project Page [BibTex]


Taking a Deeper Look at the Inverse Compositional Algorithm

Lv, Z., Dellaert, F., Rehg, J. M., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
In this paper, we provide a modern synthesis of the classic inverse compositional algorithm for dense image alignment. We first discuss the assumptions made by this well-established technique, and subsequently propose to relax these assumptions by incorporating data-driven priors into this model. More specifically, we unroll a robust version of the inverse compositional algorithm and replace multiple components of this algorithm using more expressive models whose parameters we train in an end-to-end fashion from data. Our experiments on several challenging 3D rigid motion estimation tasks demonstrate the advantages of combining optimization with learning-based techniques, outperforming the classic inverse compositional algorithm as well as data-driven image-to-pose regression approaches.

pdf suppmat Video Project Page Poster [BibTex]


Kernel Mean Matching for Content Addressability of GANs

Jitkrittum*, W., Sangkloy*, P., Gondal, M. W., Raj, A., Hays, J., Schölkopf, B.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 3140-3151, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019, *equal contribution (conference)

PDF link (url) [BibTex]


Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., Bachem, O.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 4114-4124, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

PDF link (url) [BibTex]


Data-driven inference of passivity properties via Gaussian process optimization

Romer, A., Trimpe, S., Allgöwer, F.

In Proceedings of the European Control Conference (ECC), June 2019 (inproceedings)

PDF [BibTex]


Local Temporal Bilinear Pooling for Fine-grained Action Parsing

Zhang, Y., Tang, S., Muandet, K., Jarvers, C., Neumann, H.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Fine-grained temporal action parsing is important in many applications, such as daily activity understanding, human motion analysis, surgical robotics and others requiring subtle and precise operations over a long period. In this paper we propose a novel bilinear pooling operation, which is used in intermediate layers of a temporal convolutional encoder-decoder net. In contrast to other work, our proposed bilinear pooling is learnable and hence can capture more complex local statistics than the conventional counterpart. In addition, we introduce exact lower-dimensional representations of our bilinear forms, so that the dimensionality is reduced with neither information loss nor extra computation. We perform intensive experiments to quantitatively analyze our model and show superior performance to other state-of-the-art work on various datasets.

Code video demo pdf link (url) [BibTex]


Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision

Sanyal, S., Bolkart, T., Feng, H., Black, M. J.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
The estimation of 3D face shape from a single image must be robust to variations in lighting, head pose, expression, facial hair, makeup, and occlusions. Robustness requires a large training set of in-the-wild images, which by construction, lack ground truth 3D shape. To train a network without any 2D-to-3D supervision, we present RingNet, which learns to compute 3D face shape from a single image. Our key observation is that an individual’s face shape is constant across images, regardless of expression, pose, lighting, etc. RingNet leverages multiple images of a person and automatically detected 2D face features. It uses a novel loss that encourages the face shape to be similar when the identity is the same and different for different people. We achieve invariance to expression by representing the face using the FLAME model. Once trained, our method takes a single image and outputs the parameters of FLAME, which can be readily animated. Additionally we create a new database of faces “not quite in-the-wild” (NoW) with 3D head scans and high-resolution images of the subjects in a wide variety of conditions. We evaluate publicly available methods and find that RingNet is more accurate than methods that use 3D supervision. The dataset, model, and results are available for research purposes.

code pdf preprint link (url) Project Page [BibTex]


MOTS: Multi-Object Tracking and Segmentation

Voigtlaender, P., Krause, M., Osep, A., Luiten, J., Sekar, B. B. G., Geiger, A., Leibe, B.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
This paper extends the popular task of multi-object tracking to multi-object tracking and segmentation (MOTS). Towards this goal, we create dense pixel-level annotations for two existing tracking datasets using a semi-automatic annotation procedure. Our new annotations comprise 65,213 pixel masks for 977 distinct objects (cars and pedestrians) in 10,870 video frames. For evaluation, we extend existing multi-object tracking metrics to this new task. Moreover, we propose a new baseline method which jointly addresses detection, tracking, and segmentation with a single convolutional network. We demonstrate the value of our datasets by achieving improvements in performance when training on MOTS annotations. We believe that our datasets, metrics and baseline will become a valuable resource towards developing multi-object tracking approaches that go beyond 2D bounding boxes.

pdf suppmat Project Page Poster Video Project Page [BibTex]


PointFlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds

Behl, A., Paschalidou, D., Donne, S., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D motion from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.

pdf suppmat Project Page Poster Video [BibTex]


Generate Semantically Similar Images with Kernel Mean Matching

Jitkrittum*, W., Sangkloy*, P., Gondal, M. W., Raj, A., Hays, J., Schölkopf, B.

6th Workshop Women in Computer Vision (WiCV) (oral presentation), June 2019, *equal contribution (conference) Accepted

[BibTex]


Introducing the Decision Advisor: A simple online tool that helps people overcome cognitive biases and experience less regret in real-life decisions

Iwama, G., Greenberg, S., Moore, D., Lieder, F.

40th Annual Meeting of the Society for Judgement and Decision Making, June 2019 (conference)

[BibTex]


Projections for Approximate Policy Iteration Algorithms

Akrour, R., Pajarinen, J., Peters, J., Neumann, G.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 181-190, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

link (url) [BibTex]


Learning Joint Reconstruction of Hands and Manipulated Objects

Hasson, Y., Varol, G., Tzionas, D., Kalevatykh, I., Black, M. J., Laptev, I., Schmid, C.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 11807-11816, June 2019 (inproceedings)

Abstract
Estimating hand-object manipulations is essential for interpreting and imitating human actions. Previous work has made significant progress towards reconstruction of hand poses and object shapes in isolation. Yet, reconstructing hands and objects during manipulation is a more challenging task due to significant occlusions of both the hand and object. While presenting challenges, manipulations may also simplify the problem since the physics of contact restricts the space of valid hand-object configurations. For example, during manipulation, the hand and object should be in contact but not interpenetrate. In this work, we regularize the joint reconstruction of hands and objects with manipulation constraints. We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and evaluate the model, we also propose a new large-scale synthetic dataset, ObMan, with hand-object manipulations. We demonstrate the transferability of ObMan-trained models to real data.

pdf suppl poster link (url) Project Page Project Page [BibTex]


Connecting the Dots: Learning Representations for Active Monocular Depth Estimation

Riegler, G., Liao, Y., Donne, S., Koltun, V., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
We propose a technique for depth estimation with a monocular structured-light camera, i.e., a calibrated stereo set-up with one camera and one laser projector. Instead of formulating the depth estimation via a correspondence search problem, we show that a simple convolutional architecture is sufficient for high-quality disparity estimates in this setting. As accurate ground-truth is hard to obtain, we train our model in a self-supervised fashion with a combination of photometric and geometric losses. Further, we demonstrate that the projected pattern of the structured light sensor can be reliably separated from the ambient information. This can then be used to improve depth boundaries in a weakly supervised fashion by modeling the joint statistics of image and depth edges. The model trained in this fashion compares favorably to the state-of-the-art on challenging synthetic and real-world datasets. In addition, we contribute a novel simulator, which allows benchmarking active depth prediction algorithms in controlled conditions.

pdf suppmat Poster Project Page [BibTex]


Implementation of a 6-DOF Parallel Continuum Manipulator for Delivering Fingertip Tactile Cues

Young, E. M., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 12(3):295-306, June 2019 (article)

Abstract
Existing fingertip haptic devices can deliver different subsets of tactile cues in a compact package, but we have not yet seen a wearable six-degree-of-freedom (6-DOF) display. This paper presents the Fuppeteer (short for Fingertip Puppeteer), a device that is capable of controlling the position and orientation of a flat platform, such that any combination of normal and shear force can be delivered at any location on any human fingertip. We build on our previous work of designing a parallel continuum manipulator for fingertip haptics by presenting a motorized version in which six flexible Nitinol wires are actuated via independent roller mechanisms and proportional-derivative controllers. We evaluate the settling time and end-effector vibrations observed during system responses to step inputs. After creating a six-dimensional lookup table and adjusting simulated inputs using measured Jacobians, we show that the device can make contact with all parts of the fingertip with a mean error of 1.42 mm. Finally, we present results from a human-subject study. A total of 24 users discerned 9 evenly distributed contact locations with an average accuracy of 80.5%. Translational and rotational shear cues were identified reasonably well near the center of the fingertip and more poorly around the edges.

hi

DOI [BibTex]


Trajectory-Based Off-Policy Deep Reinforcement Learning

Doerr, A., Volpp, M., Toussaint, M., Trimpe, S., Daniel, C.

In Proceedings of the International Conference on Machine Learning (ICML), International Conference on Machine Learning (ICML), June 2019 (inproceedings)

Abstract
Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies like stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporation of previous rollouts via importance sampling greatly improves data-efficiency, whilst stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm is able to successfully and reliably learn solutions using fewer system interactions than standard policy gradient methods.
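The trajectory-level importance weighting that enables rollout reuse can be sketched as follows (a minimal illustration; the paper's estimator adds further variance-control machinery):

```python
import numpy as np

def trajectory_importance_weight(logp_new, logp_old):
    """Importance weight of one off-policy rollout: the probability ratio
    of its actions under the current policy vs. the behavior policy that
    collected it (illustrative sketch, not the paper's full estimator)."""
    # Sum per-step action log-probabilities over the trajectory,
    # then exponentiate the difference.
    return np.exp(np.sum(logp_new) - np.sum(logp_old))

# A rollout whose actions are equally likely under both policies
# receives weight 1 and contributes as if it were on-policy.
logp = np.log(np.full(10, 0.25))
w = trajectory_importance_weight(logp, logp)
```

Old rollouts whose actions the current policy would rarely take receive weights near zero and are effectively discounted in the gradient estimate.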

ics

arXiv PDF [BibTex]



Switching Linear Dynamics for Variational Bayes Filtering

Becker-Ehmck, P., Peters, J., van der Smagt, P.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 553-562, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

ei

link (url) [BibTex]



Learning Non-volumetric Depth Fusion using Successive Reprojections

Donne, S., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Given a set of input views, multi-view stereopsis techniques estimate depth maps to represent the 3D reconstruction of the scene; these are fused into a single, consistent, reconstruction -- most often a point cloud. In this work we propose to learn an auto-regressive depth refinement directly from data. While deep learning has improved the accuracy and speed of depth estimation significantly, learned MVS techniques remain limited to the plane-sweeping paradigm. We refine a set of input depth maps by successively reprojecting information from neighbouring views to leverage multi-view constraints. Compared to learning-based volumetric fusion techniques, an image-based representation allows significantly more detailed reconstructions; compared to traditional point-based techniques, our method learns noise suppression and surface completion in a data-driven fashion. Due to the limited availability of high-quality reconstruction datasets with ground truth, we introduce two novel synthetic datasets to (pre-)train our network. Our approach is able to improve both the output depth maps and the reconstructed point cloud, for both learned and traditional depth estimation front-ends, on both synthetic and real data.

avg

pdf suppmat Project Page Video Poster blog [BibTex]



Expressive Body Capture: 3D Hands, Face, and Body from a Single Image

Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A. A. A., Tzionas, D., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 10975-10985, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
To facilitate the analysis of human actions, interactions and emotions, we compute a 3D model of human body pose, hand pose, and facial expression from a single monocular image. To achieve this, we use thousands of 3D scans to train a new, unified, 3D model of the human body, SMPL-X, that extends SMPL with fully articulated hands and an expressive face. Learning to regress the parameters of SMPL-X directly from images is challenging without paired images and 3D ground truth. Consequently, we follow the approach of SMPLify, which estimates 2D features and then optimizes model parameters to fit the features. We improve on SMPLify in several significant ways: (1) we detect 2D features corresponding to the face, hands, and feet and fit the full SMPL-X model to these; (2) we train a new neural network pose prior using a large MoCap dataset; (3) we define a new interpenetration penalty that is both fast and accurate; (4) we automatically detect gender and the appropriate body models (male, female, or neutral); (5) our PyTorch implementation achieves a speedup of more than 8x over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to both controlled images and images in the wild. We evaluate 3D accuracy on a new curated dataset comprising 100 images with pseudo ground-truth. This is a step towards automatic expressive human capture from monocular RGB data. The models, code, and data are available for research purposes at https://smpl-x.is.tue.mpg.de.

ps

video code pdf suppl poster link (url) Project Page [BibTex]



Capture, Learning, and Synthesis of 3D Speaking Styles

Cudeiro, D., Bolkart, T., Laidlaw, C., Ranjan, A., Black, M. J.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation) takes any speech signal as input—even speech in languages other than English—and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de.

ps

code Project Page video paper [BibTex]



Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness

Suter, R., Miladinovic, D., Schölkopf, B., Bauer, S.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 6056-6065, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

ei

PDF link (url) [BibTex]



Superquadrics Revisited: Learning 3D Shape Parsing beyond Cuboids

Paschalidou, D., Ulusoy, A. O., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Abstracting complex 3D shapes with parsimonious part-based representations has been a long-standing goal in computer vision. This paper presents a learning-based solution to this problem which goes beyond the traditional 3D cuboid representation by exploiting superquadrics as atomic elements. We demonstrate that superquadrics lead to more expressive 3D scene parses while being easier to learn than 3D cuboid representations. Moreover, we provide an analytical solution to the Chamfer loss which avoids the need for computationally expensive reinforcement learning or iterative prediction. Our model learns to parse 3D objects into consistent superquadric representations without supervision. Results on various ShapeNet categories as well as the SURREAL human body dataset demonstrate the flexibility of our model in capturing fine details and complex poses that could not have been modelled using cuboids.
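A superquadric atom is commonly defined through Barr's inside-outside function; the following is a minimal sketch of that standard formulation (not the paper's code):

```python
def superquadric_f(x, y, z, a1, a2, a3, e1, e2):
    """Inside-outside function of a superquadric (Barr's formulation):
    f < 1 inside, f = 1 on the surface, f > 1 outside. a1..a3 are size
    parameters, e1 and e2 shape exponents; e1 = e2 = 1 gives an ellipsoid."""
    return ((abs(x / a1) ** (2 / e2) + abs(y / a2) ** (2 / e2)) ** (e2 / e1)
            + abs(z / a3) ** (2 / e1))

# With e1 = e2 = 1 and unit size parameters, the surface is the unit sphere.
on_surface = superquadric_f(1.0, 0.0, 0.0, 1, 1, 1, 1, 1)
inside = superquadric_f(0.2, 0.2, 0.2, 1, 1, 1, 1, 1)
```

Varying the two exponents continuously deforms the primitive between box-like, ellipsoidal, and pinched shapes, which is what makes superquadrics more expressive than cuboids.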

avg

Project Page Poster suppmat pdf Video blog handout [BibTex]



Variational Autoencoders Recover PCA Directions (by Accident)

Rolinek, M., Zietlow, D., Martius, G.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
The Variational Autoencoder (VAE) is a powerful architecture capable of representation learning and generative modeling. When it comes to learning interpretable (disentangled) representations, VAE and its variants show unparalleled performance. However, the reasons for this are unclear, since a very particular alignment of the latent embedding is needed but the design of the VAE does not encourage it in any explicit way. We address this matter and offer the following explanation: the diagonal approximation in the encoder together with the inherent stochasticity force local orthogonality of the decoder. The local behavior of promoting both reconstruction and orthogonality matches closely how the PCA embedding is chosen. Alongside providing an intuitive understanding, we justify the statement with full theoretical analysis as well as with experiments.
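The PCA directions the title refers to can be computed from data via SVD; a minimal sketch of that comparison baseline (assuming NumPy; not the authors' analysis code):

```python
import numpy as np

def pca_directions(X):
    """Principal directions of centered data via SVD; the paper argues that
    the VAE's diagonal posterior pushes a locally linear decoder toward
    exactly these orthogonal directions (sketch, not their proof)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt  # rows: principal directions, sorted by explained variance

rng = np.random.default_rng(0)
# Anisotropic Gaussian: the dominant direction should align with the x-axis.
X = rng.normal(size=(500, 2)) * np.array([5.0, 0.5])
V = pca_directions(X)
```

For this toy data the first row of `V` is (up to sign) nearly the unit x-axis, mirroring the axis-aligned latent structure the paper observes in trained VAEs.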

al

arXiv link (url) Project Page [BibTex]



Resource-aware IoT Control: Saving Communication through Predictive Triggering

Trimpe, S., Baumann, D.

IEEE Internet of Things Journal, 6(3):5013-5028, June 2019 (article)

Abstract
The Internet of Things (IoT) interconnects multiple physical devices in large-scale networks. When the 'things' coordinate decisions and act collectively on shared information, feedback is introduced between them. Multiple feedback loops are thus closed over a shared, general-purpose network. Traditional feedback control is unsuitable for IoT control design because it relies on high-rate periodic communication and is ignorant of the shared network resource. Therefore, recent event-based estimation methods are applied herein for resource-aware IoT control, allowing agents to decide online whether communication with other agents is needed or not. While this can reduce network traffic significantly, a severe limitation of typical event-based approaches is the need for instantaneous triggering decisions that leave no time to reallocate freed resources (e.g., communication slots), which hence remain unused. To address this problem, novel predictive and self-triggering protocols are proposed herein. From a unified Bayesian decision framework, two schemes are developed: self-triggers that predict, at the current triggering instant, the next one; and predictive triggers that check at every time step whether communication will be needed at a given prediction horizon. The suitability of these triggers for feedback control is demonstrated in hardware experiments on a cart-pole system, and scalability is discussed with a multi-vehicle simulation.
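The idea of a predictive trigger can be sketched for a scalar linear system (hypothetical parameters; the paper derives its triggers from a Bayesian decision framework):

```python
def predictive_trigger(P, A, Q, horizon, threshold):
    """Predictive-trigger sketch: propagate the estimation-error variance P
    of a scalar linear system x_{k+1} = A x_k + noise (variance Q) over the
    prediction horizon without measurements, and request communication if
    the predicted variance would exceed the threshold. Illustrative only."""
    for _ in range(horizon):
        P = A * P * A + Q  # open-loop variance propagation
    return P > threshold

# Stable dynamics with small noise: no communication needed soon.
need_comm = predictive_trigger(P=0.1, A=0.9, Q=0.01, horizon=3, threshold=1.0)
# Marginally stable dynamics accumulate uncertainty and eventually trigger.
need_comm_long = predictive_trigger(P=0.1, A=1.0, Q=0.05, horizon=30, threshold=1.0)
```

Checking the horizon ahead of time is what lets freed communication slots be reallocated instead of going unused.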

ics

PDF arXiv DOI [BibTex]



First-Order Adversarial Vulnerability of Neural Networks and Input Dimension

Simon-Gabriel, C., Ollivier, Y., Bottou, L., Schölkopf, B., Lopez-Paz, D.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 5809-5817, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)

ei

PDF link (url) [BibTex]



Learning and Tracking the 3D Body Shape of Freely Moving Infants from RGB-D sequences

Hesse, N., Pujades, S., Black, M., Arens, M., Hofmann, U., Schroeder, S.

Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019 (article)

Abstract
Statistical models of the human body surface are generally learned from thousands of high-quality 3D scans in predefined poses to cover the wide variety of human body shapes and articulations. Acquisition of such data requires expensive equipment, calibration procedures, and is limited to cooperative subjects who can understand and follow instructions, such as adults. We present a method for learning a statistical 3D Skinned Multi-Infant Linear body model (SMIL) from incomplete, low-quality RGB-D sequences of freely moving infants. Quantitative experiments show that SMIL faithfully represents the RGB-D data and properly factorizes the shape and pose of the infants. To demonstrate the applicability of SMIL, we fit the model to RGB-D sequences of freely moving infants and show, with a case study, that our method captures enough motion detail for General Movements Assessment (GMA), a method used in clinical practice for early detection of neurodevelopmental disorders in infants. SMIL provides a new tool for analyzing infant shape and movement and is a step towards an automated system for GMA.

ps

pdf Journal DOI [BibTex]



Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models

Ialongo, A. D., Van Der Wilk, M., Hensman, J., Rasmussen, C. E.

In Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 2931-2940, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (inproceedings)

ei

PDF link (url) [BibTex]



A Magnetically-Actuated Untethered Jellyfish-Inspired Soft Milliswimmer

(Best Paper Award)

Ren, Z., Wang, T., Hu, W.

RSS 2019: Robotics: Science and Systems Conference, June 2019 (conference)

pi

[BibTex]
