2017


Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning

Gu, S., Lillicrap, T., Turner, R. E., Ghahramani, Z., Schölkopf, B., Levine, S.

Advances in Neural Information Processing Systems 30, pages: 3849-3858, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) Project Page [BibTex]


Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Workshop: Advances in Approximate Bayesian Inference at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]


Learning Independent Causal Mechanisms

Parascandolo, G., Rojas-Carulla, M., Kilbertus, N., Schölkopf, B.

Workshop: Learning Disentangled Representations: from Perception to Control at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]


Avoiding Discrimination through Causal Reasoning

Kilbertus, N., Rojas-Carulla, M., Parascandolo, G., Hardt, M., Janzing, D., Schölkopf, B.

Advances in Neural Information Processing Systems 30, pages: 656-666, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) Project Page [BibTex]


Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

Locatello, F., Tschannen, M., Rätsch, G., Jaggi, M.

Advances in Neural Information Processing Systems 30, pages: 773-784, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) Project Page [BibTex]


AdaGAN: Boosting Generative Models

Tolstikhin, I., Gelly, S., Bousquet, O., Simon-Gabriel, C. J., Schölkopf, B.

Advances in Neural Information Processing Systems 30, pages: 5424-5433, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

arXiv link (url) Project Page [BibTex]


The Numerics of GANs

Mescheder, L., Nowozin, S., Geiger, A.

In Advances in Neural Information Processing Systems 30 (NIPS), (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (inproceedings)

Abstract
In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games, we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: (i) the presence of eigenvalues of the Jacobian of the gradient vector field with zero real part, and (ii) eigenvalues with a large imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.
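
The failure mode described above can be reproduced on a toy problem. Below is a minimal numpy sketch, assuming the bilinear zero-sum game f(x, y) = x*y as our own illustrative example (not an experiment from the paper): at the equilibrium, the Jacobian of the gradient vector field has purely imaginary eigenvalues, plain simultaneous gradient descent spirals outward, and adding a gradient penalty on ||v||^2, in the spirit of the regularizer the paper proposes, restores convergence.

```python
import numpy as np

# Toy zero-sum game f(x, y) = x * y (our own illustrative choice).
# Player 1 minimizes f over x, player 2 maximizes f over y.
def v(w):
    """Gradient vector field of the simultaneous update."""
    x, y = w
    return np.array([-y, x])                    # (-df/dx, +df/dy)

# Jacobian of v at the equilibrium (0, 0): eigenvalues +/- 1j, i.e.
# zero real part -- the failure case (i) identified in the abstract.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
print("eigenvalues of J:", np.linalg.eigvals(J))

def grad_L(w):
    """Gradient of L = 0.5 * ||v(w)||^2, worked out by hand for this game."""
    x, y = w
    return np.array([x, y])

h, gamma = 0.1, 1.0
w_plain = np.array([1.0, 1.0])
w_reg = np.array([1.0, 1.0])
for _ in range(200):
    w_plain = w_plain + h * v(w_plain)                        # spirals outward
    w_reg = w_reg + h * (v(w_reg) - gamma * grad_L(w_reg))    # contracts to 0

print("simultaneous GD:", np.linalg.norm(w_plain))   # grows
print("regularized:    ", np.linalg.norm(w_reg))     # shrinks toward 0
```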

avg

pdf Project Page [BibTex]


Safe Adaptive Importance Sampling

Stich, S. U., Raj, A., Jaggi, M.

Advances in Neural Information Processing Systems 30, pages: 4384-4394, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) Project Page [BibTex]


ConvWave: Searching for Gravitational Waves with Fully Convolutional Neural Nets

Gebhard, T., Kilbertus, N., Parascandolo, G., Harry, I., Schölkopf, B.

Workshop on Deep Learning for Physical Sciences (DLPS) at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) Project Page [BibTex]


From Parity to Preference-based Notions of Fairness in Classification

Zafar, M. B., Valera, I., Gomez Rodriguez, M., Gummadi, K., Weller, A.

Advances in Neural Information Processing Systems 30, pages: 229-239, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) Project Page [BibTex]


Discriminative k-shot learning using probabilistic models

Bauer*, M., Rojas-Carulla*, M., Świątkowski, J. B., Schölkopf, B., Turner, R. E.

Second Workshop on Bayesian Deep Learning at the 31st Conference on Neural Information Processing Systems, December 2017, *equal contribution (conference)

ei

link (url) [BibTex]


Closed-form Inference and Prediction in Gaussian Process State-Space Models

Ialongo, A. D., Van Der Wilk, M., Rasmussen, C. E.

Time Series Workshop at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

PDF [BibTex]


Learning Robust Video Synchronization without Annotations

Wieschollek, P., Freeman, I., Lensch, H. P. A.

16th IEEE International Conference on Machine Learning and Applications (ICMLA), pages: 92-100, (Editors: X. Chen, B. Luo, F. Luo, V. Palade, and M. A. Wani), IEEE, December 2017 (conference)

ei

DOI [BibTex]


Optimizing human learning

Tabibian, B., Upadhyay, U., De, A., Zarezade, A., Schölkopf, B., Gomez Rodriguez, M.

Workshop on Teaching Machines, Robots, and Humans at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]


Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

Kim, J., Tabibian, B., Oh, A., Schölkopf, B., Gomez Rodriguez, M.

Workshop on Prioritising Online Content at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]


Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals

Tanneberg, D., Peters, J., Rueckert, E.

Proceedings of the 1st Annual Conference on Robot Learning (CoRL), 78, pages: 167-174, Proceedings of Machine Learning Research, (Editors: Sergey Levine, Vincent Vanhoucke and Ken Goldberg), PMLR, November 2017 (conference)

ei

link (url) Project Page [BibTex]


Behind Distribution Shift: Mining Driving Forces of Changes and Causal Arrows

Huang, B., Zhang, K., Zhang, J., Sanchez-Romero, R., Glymour, C., Schölkopf, B.

IEEE 17th International Conference on Data Mining (ICDM), pages: 913-918, (Editors: Vijay Raghavan, Srinivas Aluru, George Karypis, Lucio Miele, and Xindong Wu), November 2017 (conference)

ei

DOI [BibTex]


Efficient Online Adaptation with Stochastic Recurrent Neural Networks

Tanneberg, D., Peters, J., Rueckert, E.

IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages: 198-204, IEEE, November 2017 (conference)

ei

DOI Project Page [BibTex]


Learning a model of facial shape and expression from 4D scans

Li, T., Bolkart, T., Black, M. J., Li, H., Romero, J.

ACM Transactions on Graphics, 36(6):194:1-194:17, November 2017, the first two authors contributed equally (article)

Abstract
The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression. We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. The pose and expression dependent articulations are learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33,000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).
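
To make the "linear shape space" concrete, here is a minimal numpy sketch of the blendshape part of a FLAME-style model. The dimensions follow the released model as we understand it (5023 vertices, 300 shape and 100 expression components), but the zero template, random bases, and function names are placeholders, and skinning is omitted.

```python
import numpy as np

# Illustrative stand-ins; the released model defines the real template,
# blendshape bases, skinning weights, and joint regressor.
N = 5023                                     # vertices in the FLAME mesh
template = np.zeros((N, 3))                  # mean head shape (placeholder)
shape_dirs = np.random.randn(N, 3, 300)      # identity blendshape basis
expr_dirs = np.random.randn(N, 3, 100)       # expression blendshape basis

def flame_like_vertices(betas, psis):
    """Add identity and expression offsets to the template.

    The full model additionally poses the result with linear blend
    skinning over an articulated jaw, neck, and eyeballs, plus
    pose-dependent corrective blendshapes (all omitted here).
    """
    v = template.copy()
    v += shape_dirs @ betas          # (N, 3, 300) @ (300,) -> (N, 3)
    v += expr_dirs @ psis            # (N, 3, 100) @ (100,) -> (N, 3)
    return v

verts = flame_like_vertices(np.zeros(300), np.zeros(100))
assert verts.shape == (N, 3)
```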

ps

data/model video code chumpy code tensorflow paper supplemental Project Page [BibTex]


Investigating Body Image Disturbance in Anorexia Nervosa Using Novel Biometric Figure Rating Scales: A Pilot Study

Mölbert, S. C., Thaler, A., Streuber, S., Black, M. J., Karnath, H., Zipfel, S., Mohler, B., Giel, K. E.

European Eating Disorders Review, 25(6):607-612, November 2017 (article)

Abstract
This study uses novel biometric figure rating scales (FRS) spanning body mass index (BMI) 13.8 to 32.2 kg/m2 and BMI 18 to 42 kg/m2. The aims of the study were (i) to compare FRS body weight dissatisfaction and perceptual distortion of women with anorexia nervosa (AN) to a community sample; (ii) to assess how FRS parameters are associated with questionnaire body dissatisfaction, eating disorder symptoms, and appearance comparison habits; and (iii) to test whether the weight spectrum of the FRS matters. Women with AN (n = 24) and a community sample of women (n = 104) selected their current and ideal body on the FRS and completed additional questionnaires. Women with AN accurately picked the body that aligned best with their actual weight in both FRS. Controls underestimated their BMI in the FRS 14–32 and were accurate in the FRS 18–42. In both FRS, women with AN desired a body close to their actual BMI and controls desired a thinner body. Our observations suggest that body image disturbance in AN is unlikely to be characterized by a visual perceptual disturbance, but rather by an idealization of underweight in conjunction with high body dissatisfaction. The weight spectrum of FRS can influence the accuracy of BMI estimation.

ps

publisher DOI Project Page [BibTex]


Learning inverse dynamics models in O(n) time with LSTM networks

Rueckert, E., Nakatenus, M., Tosatto, S., Peters, J.

IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages: 811-816, IEEE, November 2017 (conference)

ei

DOI Project Page [BibTex]


Embodied Hands: Modeling and Capturing Hands and Bodies Together

Romero, J., Tzionas, D., Black, M. J.

ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 36(6):245:1-245:17, ACM, November 2017 (article)

Abstract
Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.
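
As a companion to the FLAME sketch above, the following numpy/scipy fragment illustrates the pose-corrective blendshape idea: corrective vertex offsets are a linear function of the joint rotation matrices, expressed relative to the rest pose so that the correction vanishes there. The dimensions follow our reading of the paper (778 vertices, 15 articulated hand joints); the random basis and all names are ours, not the released code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

N, J = 778, 15                              # MANO-scale mesh and hand joints
pose_dirs = np.random.randn(N, 3, 9 * J)    # placeholder corrective basis

def pose_correctives(axis_angles):
    """axis_angles: (J, 3) per-joint rotations in axis-angle form."""
    R = Rotation.from_rotvec(axis_angles).as_matrix()     # (J, 3, 3)
    feat = (R - np.eye(3)).reshape(-1)                    # zero at rest pose
    return pose_dirs @ feat                               # (N, 3) offsets

offsets = pose_correctives(np.zeros((J, 3)))
assert np.allclose(offsets, 0.0)    # the rest pose gets no correction
```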

ps

website youtube paper suppl video link (url) DOI Project Page [BibTex]


A Comparison of Distance Measures for Learning Nonparametric Motor Skill Libraries

Stark, S., Peters, J., Rueckert, E.

IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages: 624-630, IEEE, November 2017 (conference)

ei

DOI Project Page [BibTex]


Simulation of the underactuated Sake Robotics Gripper in V-REP

Thiem, S., Stark, S., Tanneberg, D., Peters, J., Rueckert, E.

Workshop at the International Conference on Humanoid Robots (HUMANOIDS), November 2017 (conference)

ei

link (url) [BibTex]


End-to-End Learning for Image Burst Deblurring

Wieschollek, P., Schölkopf, B., Lensch, H. P. A., Hirsch, M.

Computer Vision - ACCV 2016 - 13th Asian Conference on Computer Vision, 10114, pages: 35-51, Image Processing, Computer Vision, Pattern Recognition, and Graphics, (Editors: Lai, S.-H., Lepetit, V., Nishino, K., and Sato, Y.), Springer, November 2017 (conference)

ei

[BibTex]


Active Incremental Learning of Robot Movement Primitives

Maeda, G., Ewerton, M., Osa, T., Busch, B., Peters, J.

Proceedings of the 1st Annual Conference on Robot Learning (CoRL), 78, pages: 37-46, Proceedings of Machine Learning Research, (Editors: Sergey Levine, Vincent Vanhoucke and Ken Goldberg), PMLR, November 2017 (conference)

ei

link (url) Project Page [BibTex]


A Generative Model of People in Clothing

Lassner, C., Pons-Moll, G., Gehler, P. V.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
We present the first image-based generative model of people in clothing in a full-body setting. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.

ps

link (url) Project Page [BibTex]


Semantic Video CNNs through Representation Warping

Gadde, R., Jampani, V., Gehler, P. V.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the standard CamVid and Cityscapes benchmark datasets and show reliable improvements over different baseline networks. Our code and models are available at http://segmentation.is.tue.mpg.de
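
The key operation, warping a feature map along optical flow so it aligns with the current frame, reduces to bilinear sampling at flow-displaced locations. A self-contained numpy sketch (function and variable names are ours, not from the released code):

```python
import numpy as np

def warp_features(feat_prev, flow):
    """Bilinearly sample (C, H, W) features at flow-displaced positions.

    feat_prev: previous frame's feature map; flow: (2, H, W) pixel
    displacements from the current frame back into the previous one.
    """
    C, H, W = feat_prev.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    x_src = np.clip(xs + flow[0], 0, W - 1)
    y_src = np.clip(ys + flow[1], 0, H - 1)
    x0, y0 = np.floor(x_src).astype(int), np.floor(y_src).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x_src - x0, y_src - y0
    return (feat_prev[:, y0, x0] * (1 - wx) * (1 - wy)
            + feat_prev[:, y0, x1] * wx * (1 - wy)
            + feat_prev[:, y1, x0] * (1 - wx) * wy
            + feat_prev[:, y1, x1] * wx * wy)

feat = np.random.randn(8, 16, 16)
zero_flow = np.zeros((2, 16, 16))
assert np.allclose(warp_features(feat, zero_flow), feat)  # identity warp
```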

ps

pdf Supplementary Project Page [BibTex]


Online Video Deblurring via Dynamic Temporal Blending Network

Kim, T. H., Lee, K. M., Schölkopf, B., Hirsch, M.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 4038-4047, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

link (url) [BibTex]


Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Behl, A., Jafari, O. H., Mustikovela, S. K., Alhaija, H. A., Rother, C., Geiger, A.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far the strongest in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.

avg

pdf suppmat Poster Project Page [BibTex]


EnhanceNet: Single Image Super-Resolution through Automated Texture Synthesis

Sajjadi, M. S. M., Schölkopf, B., Hirsch, M.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 4501-4510, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

Arxiv Project link (url) DOI [BibTex]


Learning Blind Motion Deblurring

Wieschollek, P., Hirsch, M., Schölkopf, B., Lensch, H.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 231-240, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

link (url) [BibTex]


A simple yet effective baseline for 3d human pose estimation

Martinez, J., Hossain, R., Romero, J., Little, J. J.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3-dimensional positions. With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, "lifting" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feed-forward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results -- this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggest directions to further advance the state of the art in 3d human pose estimation.
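
The "relatively simple deep feed-forward network" is concrete enough to sketch. The PyTorch model below follows our reading of the paper's design (1024-unit linear layers with batch normalization, ReLU, dropout, and residual blocks, mapping 2d joint coordinates to 3d); exact sizes and training details should be taken from the paper and released code.

```python
import torch
import torch.nn as nn

J = 16  # joints in a Human3.6M-style skeleton (assumption for this sketch)

def block():
    return nn.Sequential(
        nn.Linear(1024, 1024), nn.BatchNorm1d(1024), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(1024, 1024), nn.BatchNorm1d(1024), nn.ReLU(), nn.Dropout(0.5),
    )

class Lifter(nn.Module):
    """Lift 2d joint detections to 3d joint positions."""
    def __init__(self):
        super().__init__()
        self.inp = nn.Linear(2 * J, 1024)
        self.res1, self.res2 = block(), block()
        self.out = nn.Linear(1024, 3 * J)

    def forward(self, x):            # x: (batch, 2*J) 2d coordinates
        h = self.inp(x)
        h = h + self.res1(h)         # residual connections ease optimization
        h = h + self.res2(h)
        return self.out(h)           # (batch, 3*J) 3d coordinates

pose3d = Lifter()(torch.randn(4, 2 * J))
assert pose3d.shape == (4, 3 * J)
```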

ps

video code arxiv pdf preprint Project Page [BibTex]


Sparsity Invariant CNNs

Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
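
The proposed layer admits a compact description: convolve only the observed values, renormalize by the number of valid inputs under the kernel, and propagate the validity mask. A single-channel numpy/scipy sketch under those assumptions (names and shapes are illustrative, not the paper's code):

```python
import numpy as np
from scipy.signal import correlate2d

def sparse_conv(x, mask, kernel, eps=1e-8):
    """x, mask: (H, W) with mask = 1 where x is observed; kernel: (k, k)."""
    num = correlate2d(x * mask, kernel, mode="same")            # weighted sum
    den = correlate2d(mask, np.ones_like(kernel), mode="same")  # valid count
    out = num / np.maximum(den, eps)          # average over observed inputs
    new_mask = (den > 0).astype(x.dtype)      # valid if any input was valid
    return out, new_mask

x = np.random.randn(8, 8)
mask = (np.random.rand(8, 8) > 0.9).astype(float)  # ~10% observed, LiDAR-like
y, m = sparse_conv(x, mask, np.ones((3, 3)) / 9.0)
```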

avg

pdf suppmat Project Page Project Page [BibTex]


Personalized Brain-Computer Interface Models for Motor Rehabilitation

Mastakouri, A., Weichwald, S., Ozdenizci, O., Meyer, T., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages: 3024-3029, October 2017 (conference)

ei

ArXiv PDF DOI Project Page [BibTex]


OctNetFusion: Learning Depth Fusion from Data

Riegler, G., Ulusoy, A. O., Bischof, H., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
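
For reference, the "vanilla TSDF fusion" baseline that the learned 3D CNN is compared against is just a per-voxel running weighted average of truncated signed distances. A minimal numpy sketch with placeholder observations (all names are ours):

```python
import numpy as np

def fuse(tsdf, weight, new_tsdf, new_weight):
    """Curless-and-Levoy-style running weighted average per voxel."""
    w = weight + new_weight
    fused = (tsdf * weight + new_tsdf * new_weight) / np.maximum(w, 1e-8)
    return fused, w

grid = np.zeros((64, 64, 64))     # TSDF volume
w = np.zeros_like(grid)           # accumulated per-voxel weight
for _ in range(10):               # integrate ten fake depth observations
    obs = np.clip(np.random.randn(64, 64, 64), -1.0, 1.0)  # truncated SDF
    obs_w = (np.abs(obs) < 1.0).astype(float)              # observed voxels
    grid, w = fuse(grid, w, obs, obs_w)
```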

avg

pdf Video 1 Video 2 Project Page Project Page [BibTex]


An Online Scalable Approach to Unified Multirobot Cooperative Localization and Object Tracking

Ahmad, A., Lawless, G., Lima, P.

IEEE Transactions on Robotics (T-RO), 33, pages: 1184-1199, October 2017 (article)

Abstract
In this article we present a unified approach for multi-robot cooperative simultaneous localization and object tracking based on particle filters. Our approach is scalable with respect to the number of robots in the team. We introduce a method that reduces, from an exponential to a linear growth, the space and computation time requirements with respect to the number of robots in order to maintain a given level of accuracy in the full state estimation. Our method requires no increase in the number of particles with respect to the number of robots. However, in our method each particle represents a full state hypothesis, leading to the linear dependency on the number of robots of both space and time complexity. The derivation of the algorithm implementing our approach from a standard particle filter algorithm and its complexity analysis are presented. Through an extensive set of simulation experiments on a large number of randomized datasets, we demonstrate the correctness and efficacy of our approach. Through real robot experiments on a standardized open dataset of a team of four soccer playing robots tracking a ball, we evaluate our method's estimation accuracy with respect to the ground truth values. Through comparisons with other methods based on (i) nonlinear least squares minimization and (ii) a joint extended Kalman filter, we further highlight our method's advantages. Finally, we also present a robustness test for our approach by evaluating it under scenarios of communication and vision failure in teammate robots.
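
The central representational choice is easy to state in code: each particle stores one joint hypothesis over all robot poses plus the tracked object, so storage grows linearly with the team size rather than exponentially. A skeletal numpy particle filter under that layout (the motion and observation models below are placeholders, not the paper's):

```python
import numpy as np

n_particles, n_robots = 500, 4
state_dim = 3 * n_robots + 2     # (x, y, theta) per robot + object (x, y)
particles = np.random.randn(n_particles, state_dim)   # linear in n_robots
weights = np.full(n_particles, 1.0 / n_particles)

def pf_step(particles, weights, likelihood):
    particles = particles + 0.05 * np.random.randn(*particles.shape)  # motion
    weights = weights * likelihood(particles)     # joint observation model
    weights = weights / weights.sum()
    idx = np.random.choice(len(weights), size=len(weights), p=weights)
    return particles[idx], np.full_like(weights, 1.0 / len(weights))

# Placeholder likelihood: favor hypotheses with the object near the origin.
obs = lambda p: np.exp(-0.5 * np.sum(p[:, -2:] ** 2, axis=1))
particles, weights = pf_step(particles, weights, obs)
```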

ps

Published Version link (url) DOI [BibTex]


Generalized exploration in policy search

van Hoof, H., Tanneberg, D., Peters, J.

Machine Learning, 106(9-10):1705-1724, (Editors: Kurt Driessens, Dragi Kocev, Marko Robnik-Sikonja, and Myra Spiliopoulou), October 2017, Special Issue of the ECML PKDD 2017 Journal Track (article)

ei

DOI Project Page [BibTex]


Multi-frame blind image deconvolution through split frequency-phase recovery

Gauci, A., Abela, J., Cachia, E., Hirsch, M., ZarbAdami, K.

Proc. SPIE 10225, Eighth International Conference on Graphic and Image Processing (ICGIP 2016), pages: 1022511, (Editors: Yulin Wang, Tuan D. Pham, Vit Vozenilek, David Zhang, Yi Xie), October 2017 (conference)

ei

DOI [BibTex]


Probabilistic Prioritization of Movement Primitives

Paraschos, A., Lioutikov, R., Peters, J., Neumann, G.

IEEE Robotics and Automation Letters (RA-L), 2(4):2294-2301, also presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2017 (article)

ei

link (url) DOI [BibTex]


Direct Visual Odometry for a Fisheye-Stereo Camera

Liu, P., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
We present a direct visual odometry algorithm for a fisheye-stereo camera. Our algorithm performs simultaneous camera motion estimation and semi-dense reconstruction. The pipeline consists of two threads: a tracking thread and a mapping thread. In the tracking thread, we estimate the camera pose via semi-dense direct image alignment. To have a wider field of view (FoV), which is important for robotic perception, we use fisheye images directly without converting them to conventional pinhole images, which come with a limited FoV. To address the epipolar curve problem, plane-sweeping stereo is used for stereo matching and depth initialization. Multiple depth hypotheses are tracked for selected pixels to better capture the uncertainty characteristics of stereo matching. Temporal motion stereo is then used to refine the depth and remove false positive depth hypotheses. Our implementation runs at an average of 20 Hz on a low-end PC. We run experiments in outdoor environments to validate our algorithm, and discuss the experimental results. We experimentally show that we are able to estimate 6D poses with low drift, and at the same time, do semi-dense 3D reconstruction with high accuracy.

avg

pdf Project Page [BibTex]


Closing One’s Eyes Affects Amplitude Modulation but Not Frequency Modulation in a Cognitive BCI

Görner, M., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 165-170, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]


A Guided Task for Cognitive Brain-Computer Interfaces

Moser, J., Hohmann, M. R., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 326-331, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]


Bayesian Regression for Artifact Correction in Electroencephalography

Fiebig, K., Jayaram, V., Hesse, T., Blank, A., Peters, J., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 131-136, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

am ei

DOI [BibTex]


Investigating Music Imagery as a Cognitive Paradigm for Low-Cost Brain-Computer Interfaces

Grossberger, L., Hohmann, M. R., Peters, J., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 160-164, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

am ei

DOI [BibTex]


Correlations of Motor Adaptation Learning and Modulation of Resting-State Sensorimotor EEG Activity

Ozdenizci, O., Yalcin, M., Erdogan, A., Patoglu, V., Grosse-Wentrup, M., Cetin, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 384-388, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]


Weakly-Supervised Localization of Diabetic Retinopathy Lesions in Retinal Fundus Images

Gondal, M. W., Köhler, J. M., Grzeszick, R., Fink, G., Hirsch, M.

IEEE International Conference on Image Processing (ICIP), pages: 2069-2073, September 2017 (conference)

ei

arXiv DOI [BibTex]


Assisting the practice of motor skills by humans with a probability distribution over trajectories

Ewerton, M., Maeda, G., Rother, D., Weimar, J., Lotter, L., Kollegger, G., Wiemeyer, J., Peters, J.

In Workshop Human-in-the-loop robotic manipulation: on the influence of the human role at IROS, September 2017 (inproceedings)

ei

link (url) [BibTex]


BIMROB – Bidirectional Interaction Between Human and Robot for the Learning of Movements

Kollegger, G., Ewerton, M., Wiemeyer, J., Peters, J.

Proceedings of the 11th International Symposium on Computer Science in Sport (IACSS), 663, pages: 151-163, Advances in Intelligent Systems and Computing, (Editors: Lames M., Saupe D. and Wiemeyer J.), Springer International Publishing, September 2017 (conference)

ei

DOI [BibTex]


Goal-driven dimensionality reduction for reinforcement learning

Parisi, S., Ramstedt, S., Peters, J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 4634-4639, IEEE, September 2017 (conference)

ei

DOI Project Page [BibTex]