2019


Soft-magnetic coatings as possible sensors for magnetic imaging of superconductors

Ionescu, A., Simmendinger, J., Bihler, M., Miksch, C., Fischer, P., Soltan, S., Schütz, G., Albrecht, J.

Supercond. Sci. and Tech., 33, pages: 015002, IOP, December 2019 (article)

Abstract
Magnetic imaging of superconductors typically requires a soft-magnetic material placed on top of the superconductor to probe local magnetic fields. For reasonable results the influence of the magnet onto the superconductor has to be small. Thin YBCO films with soft-magnetic coatings are investigated using SQUID magnetometry. Detailed measurements of the magnetic moment as a function of temperature, magnetic field and time have been performed for different heterostructures. It is found that the modification of the superconducting transport in these heterostructures strongly depends on the magnetic and structural properties of the soft-magnetic material. This effect is especially pronounced for an inhomogeneous coating consisting of ferromagnetic nanoparticles.

pf mms

link (url) DOI [BibTex]

Selecting causal brain features with a single conditional independence test per feature

Mastakouri, A., Schölkopf, B., Janzing, D.

Advances in Neural Information Processing Systems 32, pages: 12532-12543, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

Practical and Consistent Estimation of f-Divergences

Rubenstein, P. K., Bousquet, O., Djolonga, J., Riquelme, C., Tolstikhin, I.

Advances in Neural Information Processing Systems 32, pages: 4072-4082, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

Controlling Heterogeneous Stochastic Growth Processes on Lattices with Limited Resources

Haksar, R., Solowjow, F., Trimpe, S., Schwager, M.

In Proceedings of the 58th IEEE International Conference on Decision and Control (CDC) , pages: 1315-1322, 58th IEEE International Conference on Decision and Control (CDC), December 2019 (conference)

ics

PDF [BibTex]

Invert to Learn to Invert

Putzky, P., Welling, M.

Advances in Neural Information Processing Systems 32, pages: 444-454, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

On the Fairness of Disentangled Representations

Locatello, F., Abbati, G., Rainforth, T., Bauer, S., Schölkopf, B., Bachem, O.

Advances in Neural Information Processing Systems 32, pages: 14584-14597, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

Limitations of the empirical Fisher approximation for natural gradient descent

Kunstner, F., Hennig, P., Balles, L.

Advances in Neural Information Processing Systems 32, pages: 4158-4169, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei pn

link (url) [BibTex]

A Model to Search for Synthesizable Molecules

Bradshaw, J., Paige, B., Kusner, M. J., Segler, M., Hernández-Lobato, J. M.

Advances in Neural Information Processing Systems 32, pages: 7935-7947, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

Hierarchical Task-Parameterized Learning from Demonstration for Collaborative Object Movement

Hu, S., Kuchenbecker, K. J.

Applied Bionics and Biomechanics, (9765383), December 2019 (article)

Abstract
Learning from demonstration (LfD) enables a robot to emulate natural human movement instead of merely executing preprogrammed behaviors. This article presents a hierarchical LfD structure of task-parameterized models for object movement tasks, which are ubiquitous in everyday life and could benefit from robotic support. Our approach uses the task-parameterized Gaussian mixture model (TP-GMM) algorithm to encode sets of demonstrations in separate models that each correspond to a different task situation. The robot then maximizes its expected performance in a new situation by either selecting a good existing model or requesting new demonstrations. Compared to a standard implementation that encodes all demonstrations together for all test situations, the proposed approach offers four advantages. First, a simply defined distance function can be used to estimate test performance by calculating the similarity between a test situation and the existing models. Second, the proposed approach can improve generalization, e.g., better satisfying the demonstrated task constraints and speeding up task execution. Third, because the hierarchical structure encodes each demonstrated situation individually, a wider range of task situations can be modeled in the same framework without deteriorating performance. Last, adding or removing demonstrations incurs low computational load, and thus, the robot’s skill library can be built incrementally. We first instantiate the proposed approach in a simulated task to validate these advantages. We then show that the advantages transfer to real hardware for a task where naive participants collaborated with a Willow Garage PR2 robot to move a handheld object. For most tested scenarios, our hierarchical method achieved significantly better task performance and subjective ratings than both a passive model with only gravity compensation and a single TP-GMM encoding all demonstrations.
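To illustrate the model-selection idea sketched in the abstract, the following minimal Python example chooses between reusing a stored task-parameterized model and requesting new demonstrations, based on a simple distance between task parameters. All names (situation_distance, select_model, the "params" field) are hypothetical and not taken from the paper.

    import numpy as np

    def situation_distance(test_params, model_params):
        """Euclidean distance between flattened task-parameter vectors."""
        return np.linalg.norm(np.asarray(test_params) - np.asarray(model_params))

    def select_model(test_params, model_library, threshold=1.0):
        """Return the closest stored model, or None to request new demonstrations."""
        distances = [situation_distance(test_params, m["params"]) for m in model_library]
        best = int(np.argmin(distances))
        if distances[best] > threshold:
            return None  # expected performance too low: ask the human for a new demonstration
        return model_library[best]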

hi

DOI [BibTex]


Kernel Stein Tests for Multiple Model Comparison

Lim, J. N., Yamada, M., Schölkopf, B., Jitkrittum, W.

Advances in Neural Information Processing Systems 32, pages: 2240-2250, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset

Gondal, M. W., Wuthrich, M., Miladinovic, D., Locatello, F., Breidt, M., Volchkov, V., Akpo, J., Bachem, O., Schölkopf, B., Bauer, S.

Advances in Neural Information Processing Systems 32, pages: 15714-15725, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

am ei sf

link (url) [BibTex]

Convergence Guarantees for Adaptive Bayesian Quadrature Methods

Kanagawa, M., Hennig, P.

Advances in Neural Information Processing Systems 32, pages: 6234-6245, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei pn

link (url) [BibTex]

Are Disentangled Representations Helpful for Abstract Visual Reasoning?

van Steenkiste, S., Locatello, F., Schmidhuber, J., Bachem, O.

Advances in Neural Information Processing Systems 32, pages: 14222-14235, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

Perceiving the arrow of time in autoregressive motion

Meding, K., Janzing, D., Schölkopf, B., Wichmann, F. A.

Advances in Neural Information Processing Systems 32, pages: 2303-2314, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

Stochastic Frank-Wolfe for Composite Convex Minimization

Locatello, F., Yurtsever, A., Fercoq, O., Cevher, V.

Advances in Neural Information Processing Systems 32, pages: 14246-14256, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]

Fisher Efficient Inference of Intractable Models

Liu, S., Kanamori, T., Jitkrittum, W., Chen, Y.

Advances in Neural Information Processing Systems 32, pages: 8790-8800, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

arXiv link (url) [BibTex]

Flex-Convolution

Groh*, F., Wieschollek*, P., Lensch, H. P. A.

Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, 11361, pages: 105-122, Lecture Notes in Computer Science, (Editors: Jawahar, C. V. and Li, Hongdong and Mori, Greg and Schindler, Konrad), Springer International Publishing, December 2019, *equal contribution (conference)

ei

DOI [BibTex]

Attacking Optical Flow

Ranjan, A., Janai, J., Geiger, A., Black, M. J.

In International Conference on Computer Vision, November 2019 (inproceedings)

Abstract
Deep neural nets achieve state-of-the-art performance on the problem of optical flow estimation. Since optical flow is used in several safety-critical applications like self-driving cars, it is important to gain insights into the robustness of those techniques. Recently, it has been shown that adversarial attacks easily fool deep neural networks to misclassify objects. The robustness of optical flow networks to adversarial attacks, however, has not been studied so far. In this paper, we extend adversarial patch attacks to optical flow networks and show that such attacks can compromise their performance. We show that corrupting a small patch of less than 1% of the image size can significantly affect optical flow estimates. Our attacks lead to noisy flow estimates that extend significantly beyond the region of the attack, in many cases even completely erasing the motion of objects in the scene. While networks using an encoder-decoder architecture are very sensitive to these attacks, we found that networks using a spatial pyramid architecture are less affected. We analyse the success and failure of attacking both architectures by visualizing their feature maps and comparing them to classical optical flow techniques which are robust to these attacks. We also demonstrate that such attacks are practical by placing a printed pattern into real scenes.
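As a rough illustration of the patch-attack idea, the sketch below optimizes a small image patch to disturb the output of a differentiable flow network. Here flow_net and the simple disruption objective are stand-in assumptions, not the networks or loss used in the paper.

    import torch

    def apply_patch(img, patch, y, x):
        """Paste the patch into a (B, 3, H, W) image at position (y, x)."""
        img = img.clone()
        img[..., y:y + patch.shape[-2], x:x + patch.shape[-1]] = patch
        return img

    def attack(flow_net, img1, img2, patch_size=25, steps=200, lr=1e-2):
        patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            y, x = 50, 50  # fixed location for simplicity
            flow = flow_net(apply_patch(img1, patch, y, x),
                            apply_patch(img2, patch, y, x))
            loss = -flow.abs().mean()   # maximize disruption of the estimated flow
            opt.zero_grad()
            loss.backward()
            opt.step()
            patch.data.clamp_(0, 1)     # keep the patch a valid image
        return patch.detach()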

avg ps

Video Project Page Paper Supplementary Material link (url) [BibTex]

Acoustic hologram enhanced phased arrays for ultrasonic particle manipulation

Cox, L., Melde, K., Croxford, A., Fischer, P., Drinkwater, B.

Phys. Rev. Applied, 12, pages: 064055, November 2019 (article)

Abstract
The ability to shape ultrasound fields is important for particle manipulation, medical therapeutics and imaging applications. If the amplitude and/or phase is spatially varied across the wavefront then it is possible to project ‘acoustic images’. When attempting to form an arbitrary desired static sound field, acoustic holograms are superior to phased arrays due to their significantly higher phase fidelity. However, they lack the dynamic flexibility of phased arrays. Here, we demonstrate how to combine the high-fidelity advantages of acoustic holograms with the dynamic control of phased arrays in the ultrasonic frequency range. Holograms are used with a 64-element phased array, driven with continuous excitation. Moving the position of the projected hologram via phase delays which steer the output beam is demonstrated experimentally. This allows the creation of a much more tightly focused point than with the phased array alone, whilst still being reconfigurable. It also allows the complex movement at a water-air interface of a “phase surfer” along a phase track or the manipulation of a more arbitrarily shaped particle via amplitude traps. Furthermore, a particle manipulation device with two emitters and a single split hologram is demonstrated that allows the positioning of a “phase surfer” along a 1D axis. This paper opens the door for new applications with complex manipulation of ultrasound whilst minimising the complexity and cost of the apparatus.
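The beam steering used here can be illustrated with the textbook phase-delay rule for a focused continuous-wave array: each element is driven with a phase that compensates its path length to the target point. The element grid, frequency, and focus position below are illustrative assumptions, not the 64-element setup of the paper.

    import numpy as np

    c = 1480.0             # speed of sound in water [m/s]
    f = 2.0e6              # drive frequency [Hz] (illustrative)
    k = 2 * np.pi * f / c  # wavenumber

    # 8 x 8 element grid in the z = 0 plane, 40 mm aperture (illustrative)
    xy = np.stack(np.meshgrid(np.linspace(-0.02, 0.02, 8),
                              np.linspace(-0.02, 0.02, 8)), axis=-1).reshape(-1, 2)
    elements = np.hstack([xy, np.zeros((len(xy), 1))])

    def focus_phases(target):
        """Drive phase per element so all contributions arrive in phase at `target`."""
        distances = np.linalg.norm(elements - np.asarray(target), axis=1)
        return (-k * distances) % (2 * np.pi)

    phases = focus_phases([0.0, 0.005, 0.05])  # focus 5 mm off-axis, 50 mm away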

pf

link (url) DOI [BibTex]

Decoding subcategories of human bodies from both body- and face-responsive cortical regions

Foster, C., Zhao, M., Romero, J., Black, M. J., Mohler, B. J., Bartels, A., Bülthoff, I.

NeuroImage, 202(15):116085, November 2019 (article)

Abstract
Our visual system can easily categorize objects (e.g. faces vs. bodies) and further differentiate them into subcategories (e.g. male vs. female). This ability is particularly important for objects of social significance, such as human faces and bodies. While many studies have demonstrated category selectivity to faces and bodies in the brain, how subcategories of faces and bodies are represented remains unclear. Here, we investigated how the brain encodes two prominent subcategories shared by both faces and bodies, sex and weight, and whether neural responses to these subcategories rely on low-level visual, high-level visual or semantic similarity. We recorded brain activity with fMRI while participants viewed faces and bodies that varied in sex, weight, and image size. The results showed that the sex of bodies can be decoded from both body- and face-responsive brain areas, with the former exhibiting more consistent size-invariant decoding than the latter. Body weight could also be decoded in face-responsive areas and in distributed body-responsive areas, and this decoding was also invariant to image size. The weight of faces could be decoded from the fusiform body area (FBA), and weight could be decoded across face and body stimuli in the extrastriate body area (EBA) and a distributed body-responsive area. The sex of well-controlled faces (e.g. excluding hairstyles) could not be decoded from face- or body-responsive regions. These results demonstrate that both face- and body-responsive brain regions encode information that can distinguish the sex and weight of bodies. Moreover, the neural patterns corresponding to sex and weight were invariant to image size and could sometimes generalize across face and body stimuli, suggesting that such subcategorical information is encoded with a high-level visual or semantic code.

ps

paper pdf DOI [BibTex]

Learning to Explore in Motion and Interaction Tasks

Bogdanovic, M., Righetti, L.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, November 2019 (conference)

Abstract
Model-free reinforcement learning suffers from the high sampling complexity inherent to robotic manipulation or locomotion tasks. Most successful approaches typically use random sampling strategies, which leads to slow policy convergence. In this paper we present a novel approach for efficient exploration that leverages previously learned tasks. We exploit the fact that the same system is used across many tasks and build a generative model for exploration based on data from previously solved tasks to improve the learning of new tasks. The approach also enables continuous learning of improved exploration strategies as novel tasks are learned. Extensive simulations on a robot manipulator performing a variety of motion and contact interaction tasks demonstrate the capabilities of the approach. In particular, our experiments suggest that the exploration strategy can more than double learning speed, especially when rewards are sparse. Moreover, the algorithm is robust to task variations and parameter tuning, making it beneficial for complex robotic problems.

mg

arXiv [BibTex]

A Learnable Safety Measure

Heim, S., Rohr, A. V., Trimpe, S., Badri-Spröwitz, A.

Conference on Robot Learning, November 2019 (conference) Accepted

dlg ics

arXiv [BibTex]

Deep Neural Network Approach in Electrical Impedance Tomography-Based Real-Time Soft Tactile Sensor

Park, H., Lee, H., Park, K., Mo, S., Kim, J.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 7447-7452, Macau, China, November 2019 (inproceedings)

Abstract
Recently, whole-body tactile sensing has emerged in robotics for safe human-robot interaction. A key issue in whole-body tactile sensing is ensuring large-area manufacturability and high durability. To fulfill these requirements, a reconstruction method called electrical impedance tomography (EIT) has been adopted in large-area tactile sensing. This method maps voltage measurements to a conductivity distribution using only a small number of measurement electrodes. A common approach for this mapping is a linearized model derived from Maxwell's equations. The linearized model offers fast computation and moderate robustness against measurement noise, but its reconstruction accuracy is limited. In this paper, we propose a novel nonlinear EIT algorithm based on a deep neural network (DNN) to improve the reconstruction accuracy of EIT-based tactile sensors. The network architecture with rectified linear unit (ReLU) activations ensures extremely low computation time (0.002 seconds), and its nonlinear structure provides superior measurement accuracy. The DNN model was trained on a dataset synthesized in a simulation environment. To achieve robustness against measurement noise, training proceeded with additive Gaussian noise estimated from actual measurement noise. For real sensor application, the trained DNN model was transferred to a conductive fabric-based soft tactile sensor. For validation, reconstruction error and noise robustness were compared between the conventional linearized model and the proposed approach in simulation. As a demonstration, the tactile sensor equipped with the trained DNN model is used for contact force estimation.
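A minimal PyTorch sketch of the kind of fully connected ReLU network described above, mapping boundary-voltage measurements to a conductivity image and trained on simulated data with additive Gaussian noise, is given below; the layer sizes, measurement count, and image resolution are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    n_meas, n_pixels = 208, 1024  # hypothetical measurement / image sizes

    model = nn.Sequential(
        nn.Linear(n_meas, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, n_pixels),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(voltages, conductivity, noise_std=0.01):
        """One supervised step on simulated data, with added noise for robustness."""
        noisy = voltages + noise_std * torch.randn_like(voltages)
        opt.zero_grad()
        loss = loss_fn(model(noisy), conductivity)
        loss.backward()
        opt.step()
        return loss.item()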

hi

DOI [BibTex]

Fast Feedback Control over Multi-hop Wireless Networks with Mode Changes and Stability Guarantees

Baumann, D., Mager, F., Jacob, R., Thiele, L., Zimmerling, M., Trimpe, S.

ACM Transactions on Cyber-Physical Systems, 4(2):18, November 2019 (article)

ics

arXiv PDF DOI [BibTex]

Ultracold atoms in disordered potentials: elastic scattering time in the strong scattering regime

Signoles, A., Lecoutre, B., Richard, J., Lim, L., Denechaud, V., Volchkov, V., Angelopoulou, V., Jendrzejewski, F., Aspect, A., Sanchez-Palencia, L., Josse, V.

New Journal of Physics, 21, pages: 105002, IOP Publishing and Deutsche Physikalische Gesellschaft, October 2019 (article)

sf

DOI [BibTex]

A Helical Microrobot with an Optimized Propeller-Shape for Propulsion in Viscoelastic Biological Media

Li, D., Jeong, M., Oren, E., Yu, T., Qiu, T.

Robotics, 8, pages: 87, MDPI, October 2019 (article)

Abstract
One major challenge for microrobots is to penetrate and effectively move through viscoelastic biological tissues. Most existing microrobots can only propel in viscous liquids. Recent advances demonstrate that sub-micron robots can actively penetrate nanoporous biological tissue, such as the vitreous of the eye. However, it is still difficult to propel a micron-sized device through dense biological tissue. Here, we report that a special twisted helical shape together with a high aspect ratio in cross-section permit a microrobot with a diameter of hundreds-of-micrometers to move through mouse liver tissue. The helical microrobot is driven by a rotating magnetic field and localized by ultrasound imaging inside the tissue. The twisted ribbon is made of molybdenum and a sharp tip is chemically etched to generate a higher pressure at the edge of the propeller to break the biopolymeric network of the dense tissue.

pf

link (url) DOI [BibTex]


Acoustic Holographic Cell Patterning in a Biocompatible Hydrogel

Ma, Z., Holle, A., Melde, K., Qiu, T., Poeppel, K., Kadiri, V., Fischer, P.

Adv. Mat., October 2019 (article)

Abstract
Acoustophoresis is promising as a rapid, biocompatible, non-contact cell manipulation method, where cells are arranged along the nodes or antinodes of the acoustic field. Typically, the acoustic field is formed in a resonator, which results in highly symmetric regular patterns. However, arbitrary, non-symmetrically shaped cell assemblies are necessary to obtain the irregular cellular arrangements found in biological tissues. We show that arbitrarily shaped cell patterns can be obtained from the complex acoustic field distribution defined by an acoustic hologram. Attenuation of the sound field induces localized acoustic streaming and the resultant convection flow gently delivers the suspended cells to the image plane where they form the designed pattern. We show that the process can be implemented in a biocompatible collagen solution, which can then undergo gelation to immobilize the cell pattern inside the viscoelastic matrix. The patterned cells exhibit F-actin-based protrusions, which indicates that the cells grow and thrive within the matrix. Cell viability assays and brightfield imaging after one week confirm cell survival and that the patterns persist. Acoustophoretic cell manipulation by holographic fields thus holds promise for non-contact, long-range, long-term cellular pattern formation, with a wide variety of potential applications in tissue engineering and mechanobiology.

pf

link (url) DOI [BibTex]


Resolving 3D Human Pose Ambiguities with 3D Scene Constraints

Hassan, M., Choutas, V., Tzionas, D., Black, M. J.

In International Conference on Computer Vision, pages: 2282-2292, October 2019 (inproceedings)

Abstract
To understand and analyze human behavior, we need to capture humans moving in, and interacting with, the world. Most existing methods perform 3D human pose estimation without explicitly considering the scene. We observe however that the world constrains the body and vice-versa. To motivate this, we show that current 3D human pose estimation methods produce results that are not consistent with the 3D scene. Our key contribution is to exploit static 3D scene structure to better estimate human pose from monocular images. The method enforces Proximal Relationships with Object eXclusion and is called PROX. To test this, we collect a new dataset composed of 12 different 3D scenes and RGB sequences of 20 subjects moving in and interacting with the scenes. We represent human pose using the 3D human body model SMPL-X and extend SMPLify-X to estimate body pose using scene constraints. We make use of the 3D scene information by formulating two main constraints. The interpenetration constraint penalizes intersection between the body model and the surrounding 3D scene. The contact constraint encourages specific parts of the body to be in contact with scene surfaces if they are close enough in distance and orientation. For quantitative evaluation we capture a separate dataset with 180 RGB frames in which the ground-truth body pose is estimated using a motion-capture system. We show quantitatively that introducing scene constraints significantly reduces 3D joint error and vertex error. Our code and data are available for research at https://prox.is.tue.mpg.de.
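A schematic version of the two scene terms described above, added to a standard body-fitting objective, might look as follows; signed_distance, the vertex selections, and the weights are placeholders, not the PROX implementation.

    import torch

    def penetration_loss(body_vertices, signed_distance):
        d = signed_distance(body_vertices)  # negative inside scene geometry
        return torch.relu(-d).pow(2).sum()  # penalize vertices inside the scene

    def contact_loss(contact_vertices, signed_distance, max_dist=0.02):
        d = signed_distance(contact_vertices).abs()
        return torch.clamp(d, max=max_dist).sum()  # pull contact parts onto nearby surfaces

    def scene_aware_objective(data_term, body_vertices, contact_vertices,
                              signed_distance, w_pen=10.0, w_con=1.0):
        return (data_term
                + w_pen * penetration_loss(body_vertices, signed_distance)
                + w_con * contact_loss(contact_vertices, signed_distance))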

ps

pdf poster link (url) [BibTex]

Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop

Kolotouros, N., Pavlakos, G., Black, M. J., Daniilidis, K.

In International Conference on Computer Vision, October 2019 (inproceedings)

Abstract
Model-based human pose estimation is currently approached through two different paradigms. Optimization-based methods fit a parametric body model to 2D observations in an iterative manner, leading to accurate image-model alignments, but are often slow and sensitive to the initialization. In contrast, regression-based methods, that use a deep network to directly estimate the model parameters from pixels, tend to provide reasonable, but not pixel accurate, results while requiring huge amounts of supervision. In this work, instead of investigating which approach is better, our key insight is that the two paradigms can form a strong collaboration. A reasonable, directly regressed estimate from the network can initialize the iterative optimization making the fitting faster and more accurate. Similarly, a pixel accurate fit from iterative optimization can act as strong supervision for the network. This is the core of our proposed approach SPIN (SMPL oPtimization IN the loop). The deep network initializes an iterative optimization routine that fits the body model to 2D joints within the training loop, and the fitted estimate is subsequently used to supervise the network. Our approach is self-improving by nature, since better network estimates can lead the optimization to better solutions, while more accurate optimization fits provide better supervision for the network. We demonstrate the effectiveness of our approach in different settings, where 3D ground truth is scarce, or not available, and we consistently outperform the state-of-the-art model-based pose estimation approaches by significant margins.
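The collaboration between regression and in-the-loop fitting can be summarized by the training-step sketch below; regressor, fit_body_model, and parameter_loss are placeholders for the components named in the abstract, not the released SPIN code.

    import torch

    def spin_training_step(regressor, optimizer, images, joints_2d,
                           fit_body_model, parameter_loss):
        pred_params = regressor(images)  # direct regression from pixels
        with torch.no_grad():
            # iterative model fit to 2D joints, initialized by the network's estimate
            fitted_params = fit_body_model(init=pred_params, joints_2d=joints_2d)
        loss = parameter_loss(pred_params, fitted_params)  # the fit supervises the network
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()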

ps

pdf code project [BibTex]

EM-Fusion: Dynamic Object-Level SLAM With Probabilistic Data Association

Strecke, M., Stückler, J.

In International Conference on Computer Vision, October 2019, arXiv:1904.11781 (inproceedings)

ev

preprint Project page Poster DOI [BibTex]

Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images "In the Wild"

Zuffi, S., Kanazawa, A., Berger-Wolf, T., Black, M. J.

In International Conference on Computer Vision, October 2019 (inproceedings)

Abstract
We present the first method to perform automatic 3D pose, shape and texture capture of animals from images acquired in-the-wild. In particular, we focus on the problem of capturing 3D information about Grevy's zebras from a collection of images. The Grevy's zebra is one of the most endangered species in Africa, with only a few thousand individuals left. Capturing the shape and pose of these animals can provide biologists and conservationists with information about animal health and behavior. In contrast to research on human pose, shape and texture estimation, training data for endangered species is limited, the animals are in complex natural scenes with occlusion, they are naturally camouflaged, travel in herds, and look similar to each other. To overcome these challenges, we integrate the recent SMAL animal model into a network-based regression pipeline, which we train end-to-end on synthetically generated images with pose, shape, and background variation. Going beyond state-of-the-art methods for human shape and pose estimation, our method learns a shape space for zebras during training. Learning such a shape space from images using only a photometric loss is novel, and the approach can be used to learn shape in other settings with limited 3D supervision. Moreover, we couple 3D pose and shape prediction with the task of texture synthesis, obtaining a full texture map of the animal from a single image. We show that the predicted texture map allows a novel per-instance unsupervised optimization over the network features. This method, SMALST (SMAL with learned Shape and Texture) goes beyond previous work, which assumed manual keypoints and/or segmentation, to regress directly from pixels to 3D animal shape, pose and texture. Code and data are available at https://github.com/silviazuffi/smalst

ps

code pdf supmat iccv19 presentation Project Page [BibTex]


End-to-end Learning for Graph Decomposition

Song, J., Andres, B., Black, M., Hilliges, O., Tang, S.

In International Conference on Computer Vision, October 2019 (inproceedings)

Abstract
Deep neural networks provide powerful tools for pattern recognition, while classical graph algorithms are widely used to solve combinatorial problems. In computer vision, many tasks combine elements of both pattern recognition and graph reasoning. In this paper, we study how to connect deep networks with graph decomposition into an end-to-end trainable framework. More specifically, the minimum cost multicut problem is first converted to an unconstrained binary cubic formulation where cycle consistency constraints are incorporated into the objective function. The new optimization problem can be viewed as a Conditional Random Field (CRF) in which the random variables are associated with the binary edge labels. Cycle constraints are introduced into the CRF as high-order potentials. A standard Convolutional Neural Network (CNN) provides the front-end features for the fully differentiable CRF. The parameters of both parts are optimized in an end-to-end manner. The efficacy of the proposed learning algorithm is demonstrated via experiments on clustering MNIST images and on the challenging task of real-world multi-people pose estimation.

ps

PDF [BibTex]

Ultracold atoms in disordered potentials: elastic scattering time in the strong scattering regime

Signoles, A., Lecoutre, B., Richard, J., Lim, L., Denechaud, V., Volchkov, V., Angelopoulou, V., Jendrzejewski, F., Aspect, A., Sanchez-Palencia, L., Josse, V.

New Journal of Physics, 21, pages: 105002, IOP Publishing, October 2019 (article)

sf

link (url) DOI [BibTex]

Markerless Outdoor Human Motion Capture Using Multiple Autonomous Micro Aerial Vehicles

Saini, N., Price, E., Tallamraju, R., Enficiaud, R., Ludwig, R., Martinović, I., Ahmad, A., Black, M.

In International Conference on Computer Vision, October 2019 (inproceedings) Accepted

Abstract
Capturing human motion in natural scenarios means moving motion capture out of the lab and into the wild. Typical approaches rely on fixed, calibrated cameras and reflective markers on the body, significantly limiting the motions that can be captured. To make motion capture truly unconstrained, we describe the first fully autonomous outdoor capture system based on flying vehicles. We use multiple micro-aerial vehicles (MAVs), each equipped with a monocular RGB camera, an IMU, and a GPS receiver module. These detect the person, optimize their position, and localize themselves approximately. We then develop a markerless motion capture method that is suitable for this challenging scenario with a distant subject, viewed from above, with approximately calibrated and moving cameras. We combine multiple state-of-the-art 2D joint detectors with a 3D human body model and a powerful prior on human pose. We jointly optimize for 3D body pose and camera pose to robustly fit the 2D measurements. To our knowledge, this is the first successful demonstration of outdoor, full-body, markerless motion capture from autonomous flying vehicles.

ps

Code Data Video Paper Manuscript Project Page [BibTex]


Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

International Conference on Computer Vision, October 2019 (conference)

Abstract
Deep learning based 3D reconstruction techniques have recently achieved impressive results. However, while state-of-the-art methods are able to output complex 3D geometry, it is not clear how to extend these results to time-varying topologies. Approaches treating each time step individually lack continuity and exhibit slow inference, while traditional 4D reconstruction methods often utilize a template model or discretize the 4D space at fixed resolution. In this work, we present Occupancy Flow, a novel spatio-temporal representation of time-varying 3D geometry with implicit correspondences. Towards this goal, we learn a temporally and spatially continuous vector field which assigns a motion vector to every point in space and time. In order to perform dense 4D reconstruction from images or sparse point clouds, we combine our method with a continuous 3D representation. Implicitly, our model yields correspondences over time, thus enabling fast inference while providing a sound physical description of the temporal dynamics. We show that our method can be used for interpolation and reconstruction tasks, and demonstrate the accuracy of the learned correspondences. We believe that Occupancy Flow is a promising new 4D representation which will be useful for a variety of spatio-temporal reconstruction tasks.
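The core use of the learned vector field can be sketched as integrating points forward in time (the paper solves an ODE; a simple explicit Euler scheme stands in here). velocity_field(points, t) is a placeholder for the learned network.

    import torch

    def advect(points, velocity_field, t0=0.0, t1=1.0, steps=32):
        """Move (B, N, 3) points from time t0 to t1 through the learned flow field."""
        dt = (t1 - t0) / steps
        t = t0
        for _ in range(steps):
            points = points + dt * velocity_field(points, t)
            t += dt
        return points  # output[i] tracks input point i: implicit correspondences over time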

avg

pdf poster suppmat code Project page video blog [BibTex]


Neural Signatures of Motor Skill in the Resting Brain

Ozdenizci, O., Meyer, T., Wichmann, F., Peters, J., Schölkopf, B., Cetin, M., Grosse-Wentrup, M.

Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC 2019), pages: 4387-4394, IEEE, October 2019 (conference)

ei

DOI [BibTex]

Active Perception based Formation Control for Multiple Aerial Vehicles

Tallamraju, R., Price, E., Ludwig, R., Karlapalem, K., Bülthoff, H. H., Black, M. J., Ahmad, A.

IEEE Robotics and Automation Letters, 4(4):4491-4498, IEEE, October 2019 (article)

Abstract
We present a novel robotic front-end for autonomous aerial motion-capture (mocap) in outdoor environments. In previous work, we presented an approach for cooperative detection and tracking (CDT) of a subject using multiple micro-aerial vehicles (MAVs). However, it did not ensure optimal view-point configurations of the MAVs to minimize the uncertainty in the person's cooperatively tracked 3D position estimate. In this article, we introduce an active approach for CDT. In contrast to cooperatively tracking only the 3D positions of the person, the MAVs can actively compute optimal local motion plans, resulting in optimal view-point configurations, which minimize the uncertainty in the tracked estimate. We achieve this by decoupling the goal of active tracking into a quadratic objective and non-convex constraints corresponding to angular configurations of the MAVs w.r.t. the person. We derive this decoupling using Gaussian observation model assumptions within the CDT algorithm. We preserve convexity in optimization by embedding all the non-convex constraints, including those for dynamic obstacle avoidance, as external control inputs in the MPC dynamics. Multiple real robot experiments and comparisons involving 3 MAVs in several challenging scenarios are presented.

ps

pdf DOI Project Page [BibTex]

AMASS: Archive of Motion Capture as Surface Shapes

Mahmood, N., Ghorbani, N., Troje, N. F., Pons-Moll, G., Black, M. J.

International Conference on Computer Vision, pages: 5442-5451, October 2019 (conference)

Abstract
Large datasets are the cornerstone of recent advances in computer vision using deep learning. In contrast, existing human motion capture (mocap) datasets are small and the motions limited, hampering progress on learning models of human motion. While there are many different datasets available, they each use a different parameterization of the body, making it difficult to integrate them into a single meta dataset. To address this, we introduce AMASS, a large and varied database of human motion that unifies 15 different optical marker-based mocap datasets by representing them within a common framework and parameterization. We achieve this using a new method, MoSh++, that converts mocap data into realistic 3D human meshes represented by a rigged body model. Here we use SMPL [26], which is widely used and provides a standard skeletal representation as well as a fully rigged surface mesh. The method works for arbitrary marker-sets, while recovering soft-tissue dynamics and realistic hand motion. We evaluate MoSh++ and tune its hyper-parameters using a new dataset of 4D body scans that are jointly recorded with marker-based mocap. The consistent representation of AMASS makes it readily useful for animation, visualization, and generating training data for deep learning. Our dataset is significantly richer than previous human motion collections, having more than 40 hours of motion data, spanning over 300 subjects, more than 11000 motions, and is available for research at https://amass.is.tue.mpg.de/.

ps

code pdf suppl arxiv project website video poster AMASS_Poster [BibTex]


Texture Fields: Learning Texture Representations in Function Space

Oechsle, M., Mescheder, L., Niemeyer, M., Strauss, T., Geiger, A.

International Conference on Computer Vision, October 2019 (conference)

Abstract
In recent years, substantial progress has been achieved in learning-based reconstruction of 3D objects. At the same time, generative models were proposed that can generate highly realistic images. However, despite this success in these closely related tasks, texture reconstruction of 3D objects has received little attention from the research community and state-of-the-art methods are either limited to comparably low resolution or constrained experimental setups. A major reason for these limitations is that common representations of texture are inefficient or hard to interface for modern deep learning techniques. In this paper, we propose Texture Fields, a novel texture representation which is based on regressing a continuous 3D function parameterized with a neural network. Our approach circumvents limiting factors like shape discretization and parameterization, as the proposed texture representation is independent of the shape representation of the 3D object. We show that Texture Fields are able to represent high frequency texture and naturally blend with modern deep learning techniques. Experimentally, we find that Texture Fields compare favorably to state-of-the-art methods for conditional texture reconstruction of 3D objects and enable learning of probabilistic generative models for texturing unseen 3D models. We believe that Texture Fields will become an important building block for the next generation of generative 3D models.
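A texture field of the kind described above can be sketched as a small conditioned MLP that maps a 3D surface point to an RGB colour; the layer sizes and conditioning scheme below are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class TextureField(nn.Module):
        def __init__(self, cond_dim=256, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
            )

        def forward(self, points, cond):
            """points: (B, N, 3) surface samples; cond: (B, cond_dim) shape/image code."""
            cond = cond.unsqueeze(1).expand(-1, points.shape[1], -1)
            return self.net(torch.cat([points, cond], dim=-1))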

avg

pdf suppmat video poster blog Project Page [BibTex]


Robust Humanoid Locomotion Using Trajectory Optimization and Sample-Efficient Learning

Yeganegi, M. H., Khadiv, M., Moosavian, S. A. A., Zhu, J., Prete, A. D., Righetti, L.

Proceedings International Conference on Humanoid Robots, IEEE, 2019 IEEE-RAS International Conference on Humanoid Robots, October 2019 (conference)

Abstract
Trajectory optimization (TO) is one of the most powerful tools for generating feasible motions for humanoid robots. However, including uncertainties and stochasticity in the TO problem to generate robust motions can easily lead to intractable problems. Furthermore, since the models used in TO always have some level of abstraction, it can be hard to find a realistic set of uncertainties in the model space. In this paper we leverage a sample-efficient learning technique (Bayesian optimization) to robustify TO for humanoid locomotion. The main idea is to use data from full-body simulations to make the TO stage robust by tuning the cost weights. To this end, we split the TO problem into two phases. The first phase solves a convex optimization problem for generating center of mass (CoM) trajectories based on simplified linear dynamics. The second stage employs iterative Linear-Quadratic Gaussian (iLQG) as a whole-body controller to generate full-body control inputs. Then we use Bayesian optimization to find the cost weights to use in the first stage that yield robust performance in the simulation/experiment, in the presence of different disturbances and uncertainties. The results show that the proposed approach is able to generate robust motions for different sets of disturbances and uncertainties.
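The outer loop described above can be sketched as Bayesian optimization over the cost weights of the first-stage trajectory optimization, scored by the whole-body rollout. Here scikit-optimize's gp_minimize serves as a generic BO backend, and solve_com_trajectory / rollout_whole_body are placeholders for the two stages, so this is a sketch of the idea rather than the authors' implementation.

    from skopt import gp_minimize  # generic Gaussian-process Bayesian optimization

    def tune_cost_weights(solve_com_trajectory, rollout_whole_body, n_calls=50):
        """Search for TO cost weights that yield robust full-body behaviour."""
        def objective(weights):
            com_traj = solve_com_trajectory(weights)  # stage 1: convex CoM optimization
            return rollout_whole_body(com_traj)       # stage 2: iLQG rollout cost under disturbances
        result = gp_minimize(objective,
                             dimensions=[(0.1, 100.0)] * 3,  # bounds for three hypothetical weights
                             n_calls=n_calls)
        return result.x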

mg

https://arxiv.org/abs/1907.04616 link (url) [BibTex]

Arrays of plasmonic nanoparticle dimers with defined nanogap spacers

Jeong, H., Adams, M. C., Guenther, J., Alarcon-Correa, M., Kim, I., Choi, E., Miksch, C., Mark, A. F. M., Mark, A. G., Fischer, P.

ACS Nano, September 2019 (article)

Abstract
Plasmonic molecules are building blocks of metallic nanostructures that give rise to intriguing optical phenomena with similarities to those seen in molecular systems. The ability to design plasmonic hybrid structures and molecules with nanometric resolution would enable applications in optical metamaterials and sensing that presently cannot be demonstrated, because of a lack of suitable fabrication methods allowing the structural control of the plasmonic atoms on a large scale. Here we demonstrate a wafer-scale “lithography-free” parallel fabrication scheme to realize nanogap plasmonic meta-molecules with precise control over their size, shape, material, and orientation. We demonstrate how we can tune the corresponding coupled resonances through the entire visible spectrum. Our fabrication method, based on glancing angle physical vapor deposition with gradient shadowing, permits critical parameters to be varied across the wafer and thus is ideally suited to screen potential structures. We obtain billions of aligned dimer structures with controlled variation of the spectral properties across the wafer. We spectroscopically map the plasmonic resonances of gold dimer structures and show that they not only are in good agreement with numerically modeled spectra, but also remain functional, at least for a year, in ambient conditions.

pf

link (url) DOI [BibTex]


Learning to Train with Synthetic Humans

Hoffmann, D. T., Tzionas, D., Black, M. J., Tang, S.

In German Conference on Pattern Recognition (GCPR), September 2019 (inproceedings)

Abstract
Neural networks need big annotated datasets for training. However, manual annotation can be too expensive or even unfeasible for certain tasks, like multi-person 2D pose estimation with severe occlusions. A remedy for this is synthetic data with perfect ground truth. Here we explore two variations of synthetic data for this challenging problem; a dataset with purely synthetic humans, as well as a real dataset augmented with synthetic humans. We then study which approach better generalizes to real data, as well as the influence of virtual humans in the training loss. We observe that not all synthetic samples are equally informative for training, while the informative samples are different for each training stage. To exploit this observation, we employ an adversarial student-teacher framework; the teacher improves the student by providing the hardest samples for its current state as a challenge. Experiments show that this student-teacher framework outperforms all our baselines.

ps

pdf suppl poster link (url) [BibTex]

The Influence of Visual Perspective on Body Size Estimation in Immersive Virtual Reality

Thaler, A., Pujades, S., Stefanucci, J. K., Creem-Regehr, S. H., Tesch, J., Black, M. J., Mohler, B. J.

In ACM Symposium on Applied Perception, September 2019 (inproceedings)

Abstract
The creation of realistic self-avatars that users identify with is important for many virtual reality applications. However, current approaches for creating biometrically plausible avatars that represent a particular individual require expertise and are time-consuming. We investigated the visual perception of an avatar’s body dimensions by asking males and females to estimate their own body weight and shape on a virtual body using a virtual reality avatar creation tool. In a method of adjustment task, the virtual body was presented in an HTC Vive head-mounted display either co-located with (first-person perspective) or facing (third-person perspective) the participants. Participants adjusted the body weight and dimensions of various body parts to match their own body shape and size. Both males and females underestimated their weight by 10-20% in the virtual body, but the estimates of the other body dimensions were relatively accurate and within a range of ±6%. There was a stronger influence of visual perspective on the estimates for males, but this effect was dependent on the amount of control over the shape of the virtual body, indicating that the results might be caused by where in the body the weight changes expressed themselves. These results suggest that this avatar creation tool could be used to allow participants to make a relatively accurate self-avatar in terms of adjusting body part dimensions, but not weight, and that the influence of visual perspective and amount of control needed over the body shape are likely gender-specific.

ps

pdf [BibTex]

Convolutional neural networks: A magic bullet for gravitational-wave detection?

Gebhard, T., Kilbertus, N., Harry, I., Schölkopf, B.

Physical Review D, 100(6):063015, American Physical Society, September 2019 (article)

ei

link (url) DOI [BibTex]

Trunk Pitch Oscillations for Joint Load Redistribution in Humans and Humanoid Robots

Drama, Ö., Badri-Spröwitz, A.

Proceedings International Conference on Humanoid Robots, Humanoids, September 2019 (conference) Accepted

dlg

link (url) [BibTex]

How do people learn how to plan?

Jain, Y. R., Gupta, S., Rakesh, V., Dayan, P., Callaway, F., Lieder, F.

Conference on Cognitive Computational Neuroscience, September 2019 (conference)

re

[BibTex]

Predictive Triggering for Distributed Control of Resource Constrained Multi-agent Systems

Mastrangelo, J. M., Baumann, D., Trimpe, S.

In Proceedings of the 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems, pages: 79-84, 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys), September 2019 (inproceedings)

ics

arXiv PDF DOI [BibTex]

Data scarcity, robustness and extreme multi-label classification

Babbar, R., Schölkopf, B.

Machine Learning, 108(8):1329-1351, September 2019, Special Issue of the ECML PKDD 2019 Journal Track (article)

ei

DOI [BibTex]

Genetically modified M13 bacteriophage nanonets for enzyme catalysis and recovery

Kadiri, V. M., Alarcon-Correa, M., Guenther, J. P., Ruppert, J., Bill, J., Rothenstein, D., Fischer, P.

Catalysts, 9, pages: 723, August 2019 (article)

Abstract
Enzyme-based biocatalysis exhibits multiple advantages over inorganic catalysts, including the biocompatibility and the unchallenged specificity of enzymes towards their substrate. The recovery and repeated use of enzymes is essential for any realistic application in biotechnology, but is not easily achieved with current strategies. For this purpose, enzymes are often immobilized on inorganic scaffolds, which could entail a reduction of the enzymes’ activity. Here, we show that immobilization to a nano-scaled biological scaffold, a nanonetwork of end-to-end cross-linked M13 bacteriophages, ensures high enzymatic activity and at the same time allows for the simple recovery of the enzymes. The bacteriophages have been genetically engineered to express AviTags at their ends, which permit biotinylation and their specific end-to-end self-assembly while allowing space on the major coat protein for enzyme coupling. We demonstrate that the phages form nanonetwork structures and that these so-called nanonets remain highly active even after re-using the nanonets multiple times in a flow-through reactor.

pf

link (url) DOI [BibTex]

Light-controlled micromotors and soft microrobots

Palagi, S., Singh, D. P., Fischer, P.

Adv. Opt. Mat., 7, pages: 1900370, August 2019 (article)

Abstract
Mobile microscale devices and microrobots can be powered by catalytic reactions (chemical micromotors) or by external fields. This report is focused on the role of light as a versatile means for wirelessly powering and controlling such microdevices. Recent advances in the development of autonomous micromotors are discussed, where light permits their actuation with unprecedented control and thereby enables advances in the field of active matter. In addition, structuring the light field is a new means to drive soft microrobots that are based on (photo‐) responsive polymers. The behavior of the two main classes of thermo‐ and photoresponsive polymers adopted in microrobotics (poly(N‐isopropylacrylamide) and liquid‐crystal elastomers) is analyzed, and recent applications are reported. The advantages and limitations of controlling micromotors and microrobots by light are reviewed, and some of the remaining challenges in the development of novel photo‐active materials for micromotors and microrobots are discussed.

pf

link (url) DOI [BibTex]