2019

Event-triggered Pulse Control with Adaptation through Learning

Baumann, D., Solowjow, F., Johansson, K. H., Trimpe, S.

In Proceedings of the American Control Conference (ACC), July 2019 (inproceedings) Accepted

[BibTex]

Haptipedia: Accelerating Haptic Device Discovery to Support Interaction & Engineering Design

Seifi, H., Fazlollahi, F., Oppermann, M., Sastrillo, J. A., Ip, J., Agrawal, A., Park, G., Kuchenbecker, K. J., MacLean, K. E.

In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), Glasgow, Scotland, May 2019 (inproceedings) Accepted

Abstract
Creating haptic experiences often entails inventing, modifying, or selecting specialized hardware. However, experience designers are rarely engineers, and 30 years of haptic inventions are buried in a fragmented literature that describes devices mechanically rather than by potential purpose. We conceived of Haptipedia to unlock this trove of examples: Haptipedia presents a device corpus for exploration through metadata that matter to both device and experience designers. It is a taxonomy of device attributes that go beyond physical description to capture potential utility, applied to a growing database of 105 grounded force-feedback devices, and accessed through a public visualization that links utility to morphology. Haptipedia's design was driven by both systematic review of the haptic device literature and rich input from diverse haptic designers. We describe Haptipedia's reception (including hopes it will redefine device reporting standards) and our plans for its sustainability through community participation.

Project Page [BibTex]

Internal Array Electrodes Improve the Spatial Resolution of Soft Tactile Sensors Based on Electrical Resistance Tomography

Lee, H., Park, K., Kim, J., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019 (inproceedings) Accepted

[BibTex]

Improving Haptic Adjective Recognition with Unsupervised Feature Learning

Richardson, B. A., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019 (inproceedings) Accepted

Abstract
Humans can form an impression of how a new object feels simply by touching its surfaces with the densely innervated skin of the fingertips. Many haptics researchers have recently been working to endow robots with similar levels of haptic intelligence, but these efforts almost always employ hand-crafted features, which are brittle, and concrete tasks, such as object recognition. We applied unsupervised feature learning methods, specifically K-SVD and Spatio-Temporal Hierarchical Matching Pursuit (ST-HMP), to rich multi-modal haptic data from a diverse dataset. We then tested the learned features on 19 more abstract binary classification tasks that center on haptic adjectives such as smooth and squishy. The learned features proved superior to traditional hand-crafted features by a large margin, almost doubling the average F1 score across all adjectives. Additionally, particular exploratory procedures (EPs) and sensor channels were found to support perception of certain haptic adjectives, underlining the need for diverse interactions and multi-modal haptic data.
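The headline comparison rests on the F1 score, the harmonic mean of precision and recall. A minimal, self-contained sketch of the metric for one adjective (the labels below are invented for illustration, not drawn from the paper's dataset):

```python
def f1_score(y_true, y_pred):
    """Binary F1 score: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical predictions for the adjective "squishy" on six objects:
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
score = f1_score(y_true, y_pred)  # tp=2, fp=1, fn=1 -> F1 = 2/3
```

Averaging this score over all 19 adjective classifiers gives the aggregate figure the abstract refers to.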

Project Page [BibTex]

Decision-Theoretic Meta-Learning: Versatile and Efficient Amortization of Few-Shot Learning

Gordon, J., Bronskill, J., Bauer, M., Nowozin, S., Turner, R. E.

7th International Conference on Learning Representations (ICLR), May 2019 (conference) Accepted

arXiv link (url) [BibTex]

Resampled Priors for Variational Autoencoders

Bauer, M., Mnih, A.

22nd International Conference on Artificial Intelligence and Statistics (AISTATS), April 2019 (conference) Accepted

arXiv [BibTex]

Feedback Control Goes Wireless: Guaranteed Stability over Low-power Multi-hop Networks

Mager, F., Baumann, D., Jacob, R., Thiele, L., Trimpe, S., Zimmerling, M.

In Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS), April 2019 (inproceedings) Accepted

Abstract
Closing feedback loops fast and over long distances is key to emerging applications; for example, robot motion control and swarm coordination require update intervals below 100 ms. Low-power wireless is preferred for its flexibility, low cost, and small form factor, especially if the devices support multi-hop communication. Thus far, however, closed-loop control over multi-hop low-power wireless has only been demonstrated for update intervals on the order of multiple seconds. This paper presents a wireless embedded system that tames imperfections impairing control performance such as jitter or packet loss, and a control design that exploits the essential properties of this system to provably guarantee closed-loop stability for linear dynamic systems. Using experiments on a testbed with multiple cart-pole systems, we are the first to demonstrate the feasibility and to assess the performance of closed-loop control and coordination over multi-hop low-power wireless for update intervals from 20 ms to 50 ms.

arXiv PDF Project Page [BibTex]

A Novel Texture Rendering Approach for Electrostatic Displays

Fiedler, T., Vardar, Y.

In Proceedings of International Workshop on Haptic and Audio Interaction Design (HAID), Lille, France, March 2019 (inproceedings)

Abstract
Generating realistic texture feelings on tactile displays using data-driven methods has attracted a lot of interest in the last decade. However, the need for large data storage and transmission rates complicates the use of these methods in future commercial displays. In this paper, we propose a new texture rendering approach which can compress the texture data significantly for electrostatic displays. Using three sample surfaces, we first explain how to record, analyze, and compress the texture data, and render them on a touchscreen. Then, through psychophysical experiments conducted with nineteen participants, we show that the textures can be reproduced with significantly fewer frequency components than the original signal contains without inducing perceptual degradation. Moreover, our results indicate that the achievable degree of compression is affected by the surface properties.
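The core idea of keeping only the dominant frequency components of a recorded texture signal can be sketched as follows. This is an illustrative reimplementation on a synthetic signal, not the authors' rendering pipeline:

```python
import cmath
import math
import random

def dft(x):
    """Naive discrete Fourier transform (fine for short texture snippets)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def compress(x, k):
    """Keep only the k largest-magnitude frequency bins; zero out the rest."""
    X = dft(x)
    keep = set(sorted(range(len(X)), key=lambda i: abs(X[i]), reverse=True)[:k])
    return idft([X[i] if i in keep else 0j for i in range(len(X))])

# Synthetic "texture" signal: two dominant frequencies plus weak noise.
random.seed(0)
N = 64
x = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.sin(2 * math.pi * 7 * n / N)
     + 0.01 * random.gauss(0, 1) for n in range(N)]
x_hat = compress(x, 4)  # 4 of 64 bins (two conjugate pairs) suffice here
err = max(abs(a - b) for a, b in zip(x, x_hat))
```

Here 4 of 64 bins reconstruct the signal almost perfectly; the paper's psychophysical question is how far such truncation can go before users notice.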

Fiedler19-HAID-Electrostatic [BibTex]

Explorations of Shape-Changing Haptic Interfaces for Blind and Sighted Pedestrian Navigation

Spiers, A., Kuchenbecker, K.

In CHI 2019 Workshop on Hacking Blind Navigation, 2019 (inproceedings)

Abstract
Since the 1960s, technologists have worked to develop systems that facilitate independent navigation by vision-impaired (VI) pedestrians. These devices vary in terms of conveyed information and feedback modality. Unfortunately, many such prototypes never progress beyond laboratory testing. Conversely, smartphone-based navigation systems for sighted pedestrians have grown in robustness and capabilities, to the point of now being ubiquitous. How can we leverage the success of sighted navigation technology, which is driven by a larger global market, as a way to progress VI navigation systems? We believe one possibility is to make common devices that benefit both VI and sighted individuals, by providing information in a way that does not distract either user from their tasks or environment. To this end we have developed physical interfaces that eschew visual, audio or vibratory feedback, instead relying on the natural human ability to perceive the shape of a handheld object.

[BibTex]

MYND: A Platform for Large-scale Neuroscientific Studies

Hohmann, M. R., Hackl, M., Wirth, B., Zaman, T., Enficiaud, R., Grosse-Wentrup, M., Schölkopf, B.

Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI), 2019 (conference) Accepted

[BibTex]

Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders

Ghosh, P., Losalka, A., Black, M. J.

In Proc. AAAI, 2019 (inproceedings)

Abstract
Susceptibility of deep neural networks to adversarial attacks poses a major theoretical and practical challenge. All efforts to harden classifiers against such attacks have seen limited success until now. Two distinct categories of samples against which deep neural networks are vulnerable, "adversarial samples" and "fooling samples", have been tackled separately so far due to the difficulty posed when considered together. In this work, we show how one can defend against them both under a unified framework. Our model has the form of a variational autoencoder with a Gaussian mixture prior on the latent variable, such that each mixture component corresponds to a single class. We show how selective classification can be performed using this model, thereby causing the adversarial objective to entail a conflict. The proposed method leads to the rejection of adversarial samples instead of misclassification, while maintaining high precision and recall on test data. It also inherently provides a way of learning a selective classifier in a semi-supervised scenario, which can similarly resist adversarial attacks. We further show how one can reclassify the detected adversarial samples by iterative optimization.
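The rejection rule behind selective classification can be illustrated in one dimension, with a Gaussian per class standing in for the model's per-class mixture components in latent space (all names and numbers below are hypothetical):

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def selective_classify(x, class_params, threshold):
    """Pick the class whose density explains x best, but reject the input
    when even the best class-conditional log-likelihood falls below threshold."""
    scores = {c: gaussian_logpdf(x, mu, sigma)
              for c, (mu, sigma) in class_params.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "reject"

classes = {"cat": (0.0, 1.0), "dog": (4.0, 1.0)}
# Reject anything less likely than a point two standard deviations out.
thr = gaussian_logpdf(2.0, 0.0, 1.0)

label_in = selective_classify(0.3, classes, thr)    # near the "cat" mean
label_out = selective_classify(10.0, classes, thr)  # far from every class
```

An adversarial perturbation that moves an input away from every class-conditional density then triggers rejection rather than a confident wrong label, which is the conflict the abstract describes.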

link (url) Project Page [BibTex]

Fisher Efficient Inference of Intractable Models

Liu, S., Kanamori, T., Jitkrittum, W., Chen, Y.

2019 (conference) Submitted

arXiv [BibTex]

Occupancy Networks: Learning 3D Reconstruction in Function Space

Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 (inproceedings)

Abstract
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
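The representation can be made concrete with a toy stand-in: an occupancy function whose 0.5 level set is a sphere, queried on a coarse grid to locate surface-crossing cells. In the paper, a trained neural network classifier plays the role of the analytic function below:

```python
import math

def occupancy(p, radius=1.0, sharpness=8.0):
    """Toy occupancy function: probability that 3D point p lies inside a sphere.
    An occupancy network would replace this with a learned classifier."""
    d = math.sqrt(sum(c * c for c in p))
    return 1.0 / (1.0 + math.exp(sharpness * (d - radius)))

def surface_cells(n=16, lo=-1.5, hi=1.5, tau=0.5):
    """Return grid cells straddling the tau level set, i.e. the implicit surface."""
    h = (hi - lo) / n
    cells = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                corners = [(lo + (i + a) * h, lo + (j + b) * h, lo + (k + c) * h)
                           for a in (0, 1) for b in (0, 1) for c in (0, 1)]
                inside = [occupancy(p) >= tau for p in corners]
                if any(inside) and not all(inside):  # surface crosses this cell
                    cells.append((i, j, k))
    return cells

boundary = surface_cells()  # cells hugging the sphere of radius 1
```

Because the function can be evaluated at any point, the grid resolution is a choice made at extraction time rather than baked into the representation, which is what "infinite resolution" refers to.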

paper supplement [BibTex]

Fast Gaussian Process Based Gradient Matching for Parameter Identification in Systems of Nonlinear ODEs

Wenk, P., Gotovos, A., Bauer, S., Gorbach, N., Krause, A., Buhmann, J. M.

22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019 (conference) Accepted

PDF [BibTex]

2018

Enhancing the Accuracy and Fairness of Human Decision Making

Valera, I., Singla, A., Gomez Rodriguez, M.

Advances in Neural Information Processing Systems 31, pages: 1774-1783, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

arXiv link (url) [BibTex]

Deep Reinforcement Learning for Event-Triggered Control

Baumann, D., Zhu, J., Martius, G., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), December 2018 (inproceedings)

arXiv PDF Project Page Project Page [BibTex]

Boosting Black Box Variational Inference

Locatello*, F., Dresdner*, G., Khanna, R., Valera, I., Rätsch, G.

Advances in Neural Information Processing Systems 31, pages: 3405-3415, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

arXiv link (url) [BibTex]

When do random forests fail?

Tang, C., Garreau, D., von Luxburg, U.

In Proceedings of Neural Information Processing Systems (NIPS 2018), December 2018 (inproceedings)

Project Page [BibTex]

Consolidating the Meta-Learning Zoo: A Unifying Perspective as Posterior Predictive Inference

Gordon*, J., Bronskill*, J., Bauer*, M., Nowozin, S., Turner, R. E.

Workshop on Meta-Learning (MetaLearn 2018) at the 32nd Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

link (url) [BibTex]

Versa: Versatile and Efficient Few-shot Learning

Gordon*, J., Bronskill*, J., Bauer*, M., Nowozin, S., Turner, R. E.

Third Workshop on Bayesian Deep Learning at the 32nd Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

link (url) [BibTex]

DP-MAC: The Differentially Private Method of Auxiliary Coordinates for Deep Learning

Harder, F., Köhler, J., Welling, M., Park, M.

Workshop on Privacy Preserving Machine Learning at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

link (url) [BibTex]

Learning Invariances using the Marginal Likelihood

van der Wilk, M., Bauer, M., John, S. T., Hensman, J.

Advances in Neural Information Processing Systems 31, pages: 9960-9970, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

link (url) [BibTex]

Deep Nonlinear Non-Gaussian Filtering for Dynamical Systems

Mehrjou, A., Schölkopf, B.

Workshop: Infer to Control: Probabilistic Reinforcement Learning and Structured Control at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

PDF link (url) [BibTex]

Resampled Priors for Variational Autoencoders

Bauer, M., Mnih, A.

Third Workshop on Bayesian Deep Learning at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

link (url) [BibTex]

Generalisation in humans and deep neural networks

Geirhos, R., Temme, C. R. M., Rauber, J., Schütt, H., Bethge, M., Wichmann, F. A.

Advances in Neural Information Processing Systems 31, pages: 7549-7561, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

link (url) [BibTex]

Data-Efficient Hierarchical Reinforcement Learning

Nachum, O., Gu, S., Lee, H., Levine, S.

Advances in Neural Information Processing Systems 31, pages: 3307-3317, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

link (url) [BibTex]

Assessing Generative Models via Precision and Recall

Sajjadi, M. S. M., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.

Advances in Neural Information Processing Systems 31, pages: 5234-5243, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

arXiv link (url) [BibTex]

Adaptive Skip Intervals: Temporal Abstraction for Recurrent Dynamical Models

Neitz, A., Parascandolo, G., Bauer, S., Schölkopf, B.

Advances in Neural Information Processing Systems 31, pages: 9838-9848, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

arXiv link (url) [BibTex]

A Computational Camera with Programmable Optics for Snapshot High Resolution Multispectral Imaging

Chen, J., Hirsch, M., Eberhardt, B., Lensch, H. P. A.

Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, December 2018 (conference) Accepted

[BibTex]

Efficient Encoding of Dynamical Systems through Local Approximations

Solowjow, F., Mehrjou, A., Schölkopf, B., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), pages: 6073-6079, Miami, FL, USA, December 2018 (inproceedings)

arXiv PDF DOI Project Page [BibTex]

Informative Features for Model Comparison

Jitkrittum, W., Kanagawa, H., Sangkloy, P., Hays, J., Schölkopf, B., Gretton, A.

Advances in Neural Information Processing Systems 31, pages: 816-827, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

link (url) [BibTex]

Flex-Convolution (Million-Scale Point-Cloud Learning Beyond Grid-Worlds)

Groh*, F., Wieschollek*, P., Lensch, H. P. A.

Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, December 2018, *equal contribution (conference) Accepted

[BibTex]

Bayesian Nonparametric Hawkes Processes

Kapoor, J., Vergari, A., Gomez Rodriguez, M., Valera, I.

Bayesian Nonparametrics workshop at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

PDF link (url) [BibTex]

Customized Multi-Person Tracker

Ma, L., Tang, S., Black, M. J., Gool, L. V.

In Computer Vision – ACCV 2018, Springer International Publishing, Asian Conference on Computer Vision, December 2018 (inproceedings)

PDF Project Page [BibTex]

Depth Control of Underwater Robots using Sliding Modes and Gaussian Process Regression

Lima, G. S., Bessa, W. M., Trimpe, S.

In Proceedings of the 15th Latin American Robotics Symposium, João Pessoa, Brazil, November 2018 (inproceedings) Accepted

Abstract
The development of accurate control systems for underwater robotic vehicles relies on the adequate compensation for hydrodynamic effects. In this work, a new robust control scheme is presented for remotely operated underwater vehicles. In order to meet both robustness and tracking requirements, sliding mode control is combined with Gaussian process regression. The convergence properties of the closed-loop signals are analytically proven. Numerical results confirm the improved performance of the proposed control scheme.

[BibTex]

Gait learning for soft microrobots controlled by light fields

Rohr, A. V., Trimpe, S., Marco, A., Fischer, P., Palagi, S.

In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), pages: 6199-6206, October 2018 (inproceedings)

Abstract
Soft microrobots based on photoresponsive materials and controlled by light fields can generate a variety of different gaits. This inherent flexibility can be exploited to maximize their locomotion performance in a given environment and used to adapt them to changing environments. However, because of the lack of accurate locomotion models, and given the intrinsic variability among microrobots, analytical control design is not possible. Common data-driven approaches, on the other hand, require running prohibitive numbers of experiments and lead to very sample-specific results. Here we propose a probabilistic learning approach for light-controlled soft microrobots based on Bayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach results in a learning scheme that is highly data-efficient, enabling gait optimization with a limited experimental budget, and robust against differences among microrobot samples. These features are obtained by designing the learning scheme through the comparison of different GP priors and BO settings on a semi-synthetic data set. The developed learning scheme is validated in microrobot experiments, resulting in a 115% improvement in a microrobot’s locomotion performance with an experimental budget of only 20 tests. These encouraging results lead the way toward self-adaptive microrobotic systems based on light-controlled soft microrobots and probabilistic learning control.

arXiv IEEE Xplore DOI Project Page [BibTex]

Regularizing Reinforcement Learning with State Abstraction

Akrour, R., Veiga, F., Peters, J., Neumann, G.

Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2018 (conference) Accepted

link (url) [BibTex]

On the Integration of Optical Flow and Action Recognition

Sevilla-Lara, L., Liao, Y., Güney, F., Jampani, V., Geiger, A., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 281-297, Springer, Cham, October 2018 (inproceedings)

Abstract
Most of the top performing action recognition methods use optical flow as a "black box" input. Here we take a deeper look at the combination of flow and action recognition, and investigate why optical flow is helpful, what makes a flow method good for action recognition, and how we can make it better. In particular, we investigate the impact of different flow algorithms and input transformations to better understand how these affect a state-of-the-art action recognition method. Furthermore, we fine tune two neural-network flow methods end-to-end on the most widely used action recognition dataset (UCF101). Based on these experiments, we make the following five observations: 1) optical flow is useful for action recognition because it is invariant to appearance, 2) optical flow methods are optimized to minimize end-point-error (EPE), but the EPE of current methods is not well correlated with action recognition performance, 3) for the flow methods tested, accuracy at boundaries and at small displacements is most correlated with action recognition performance, 4) training optical flow to minimize classification error instead of minimizing EPE improves recognition performance, and 5) optical flow learned for the task of action recognition differs from traditional optical flow especially inside the human body and at the boundary of the body. These observations may encourage optical flow researchers to look beyond EPE as a goal and guide action recognition researchers to seek better motion cues, leading to a tighter integration of the optical flow and action recognition communities.
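The end-point error (EPE) discussed in observations 2 through 4 is simply the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal sketch with made-up flow values:

```python
import math

def endpoint_error(flow_pred, flow_gt):
    """Average end-point error: mean Euclidean distance between predicted
    and ground-truth flow vectors (u, v), taken over all pixels."""
    errs = [math.hypot(up - ug, vp - vg)
            for (up, vp), (ug, vg) in zip(flow_pred, flow_gt)]
    return sum(errs) / len(errs)

# Three pixels of hypothetical (u, v) flow:
gt   = [(1.0, 0.0), (0.0, 2.0), (1.0, 1.0)]
pred = [(1.0, 0.0), (0.0, 1.0), (0.0, 1.0)]
epe = endpoint_error(pred, gt)  # per-pixel errors 0, 1, 1 -> EPE = 2/3
```

As the abstract observes, two flow estimates with identical EPE can still differ sharply in how useful their motion cues are for action recognition.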

arXiv DOI [BibTex]

Towards Robust Visual Odometry with a Multi-Camera System

Liu, P., Geppert, M., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), October 2018 (inproceedings)

Abstract
We present a visual odometry (VO) algorithm for a multi-camera system that operates robustly in challenging environments. Our algorithm consists of a pose tracker and a local mapper. The tracker estimates the current pose by minimizing photometric errors between the most recent keyframe and the current frame. The mapper initializes the depths of all sampled feature points using plane-sweeping stereo. To reduce pose drift, a sliding window optimizer is used to refine poses and structure jointly. Our formulation is flexible enough to support an arbitrary number of stereo cameras. We evaluate our algorithm thoroughly on five datasets. The datasets were captured in different conditions: daytime, night-time with near-infrared (NIR) illumination and night-time without NIR illumination. Experimental results show that a multi-camera setup makes the VO more robust to challenging environments, especially night-time conditions, in which a single stereo configuration fails easily due to the lack of features.

pdf Project Page [BibTex]

Learning to Categorize Bug Reports with LSTM Networks

Gondaliya, K., Peters, J., Rueckert, E.

Proceedings of the 10th International Conference on Advances in System Testing and Validation Lifecycle (VALID), pages: 7-12, October 2018 (conference)

link (url) [BibTex]

Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment

Muratore, F., Treede, F., Gienger, M., Peters, J.

2nd Annual Conference on Robot Learning (CoRL), 87, pages: 700-713, Proceedings of Machine Learning Research, PMLR, October 2018 (conference)

link (url) [BibTex]

Reinforcement Learning of Phase Oscillators for Fast Adaptation to Moving Targets

Maeda, G., Koc, O., Morimoto, J.

Proceedings of The 2nd Conference on Robot Learning (CoRL), 87, pages: 630-640, (Editors: Aude Billard, Anca Dragan, Jan Peters, Jun Morimoto), PMLR, October 2018 (conference)

link (url) [BibTex]

Constraint-Space Projection Direct Policy Search

Akrour, R., Peters, J., Neumann, G.

14th European Workshop on Reinforcement Learning (EWRL), October 2018 (conference)

link (url) [BibTex]

Temporal Interpolation as an Unsupervised Pretraining Task for Optical Flow Estimation

Wulff, J., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 567-582, Springer, Cham, October 2018 (inproceedings)

Abstract
The difficulty of annotating training data is a major obstacle to using CNNs for low-level tasks in video. Synthetic data often does not generalize to real videos, while unsupervised methods require heuristic losses. Proxy tasks can overcome these issues, and start by training a network for a task for which annotation is easier or which can be trained unsupervised. The trained network is then fine-tuned for the original task using small amounts of ground truth data. Here, we investigate frame interpolation as a proxy task for optical flow. Using real movies, we train a CNN unsupervised for temporal interpolation. Such a network implicitly estimates motion, but cannot handle untextured regions. By fine-tuning on small amounts of ground truth flow, the network can learn to fill in homogeneous regions and compute full optical flow fields. Using this unsupervised pre-training, our network outperforms similar architectures that were trained supervised using synthetic optical flow.

pdf arXiv DOI Project Page [BibTex]

Human Motion Parsing by Hierarchical Dynamic Clustering

Zhang, Y., Tang, S., Sun, H., Neumann, H.

In Proceedings of the British Machine Vision Conference (BMVC), pages: 269, BMVA Press, 29th British Machine Vision Conference, September 2018 (inproceedings)

Abstract
Parsing continuous human motion into meaningful segments plays an essential role in various applications. In this work, we propose a hierarchical dynamic clustering framework to derive action clusters from a sequence of local features in an unsupervised bottom-up manner. We systematically investigate the modules in this framework and particularly propose diverse temporal pooling schemes, in order to realize accurate temporal action localization. We demonstrate our method on two motion parsing tasks: temporal action segmentation and abnormal behavior detection. The experimental results indicate that the proposed framework is significantly more effective than the other related state-of-the-art methods on several datasets.

pdf Project Page [BibTex]

Spatio-temporal Transformer Network for Video Restoration

Kim, T. H., Sajjadi, M. S. M., Hirsch, M., Schölkopf, B.

15th European Conference on Computer Vision (ECCV), Part III, 11207, pages: 111-127, Lecture Notes in Computer Science, (Editors: Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu and Yair Weiss), Springer, September 2018 (conference)

DOI [BibTex]

Generating 3D Faces using Convolutional Mesh Autoencoders

Ranjan, A., Bolkart, T., Sanyal, S., Black, M. J.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol 11207, pages: 725-741, Springer, Cham, September 2018 (inproceedings)

Abstract
Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they can not capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that, replacing the expression space of an existing state-of-the-art face model with our autoencoder, achieves a lower reconstruction error. Our data, model and code are available at http://coma.is.tue.mpg.de/.

code paper supplementary link (url) DOI Project Page Project Page [BibTex]

Learning Priors for Semantic 3D Reconstruction

Cherabier, I., Schönberger, J., Oswald, M., Pollefeys, M., Geiger, A.

In Computer Vision – ECCV 2018, Springer International Publishing, Cham, September 2018 (inproceedings)

Abstract
We present a novel semantic 3D reconstruction framework which embeds variational regularization into a neural network. Our network performs a fixed number of unrolled multi-scale optimization iterations with shared interaction weights. In contrast to existing variational methods for semantic 3D reconstruction, our model is end-to-end trainable and captures more complex dependencies between the semantic labels and the 3D geometry. Compared to previous learning-based approaches to 3D reconstruction, we integrate powerful long-range dependencies using variational coarse-to-fine optimization. As a result, our network architecture requires only a moderate number of parameters while keeping a high level of expressiveness which enables learning from very little data. Experiments on real and synthetic datasets demonstrate that our network achieves higher accuracy compared to a purely variational approach while at the same time requiring two orders of magnitude less iterations to converge. Moreover, our approach handles ten times more semantic class labels using the same computational resources.

pdf suppmat Project Page Video DOI Project Page [BibTex]

Separating Reflection and Transmission Images in the Wild

Wieschollek, P., Gallo, O., Gu, J., Kautz, J.

European Conference on Computer Vision (ECCV), September 2018 (conference)

link (url) [BibTex]