

2018


Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences

Pinsler, R., Akrour, R., Osa, T., Peters, J., Neumann, G.

IEEE International Conference on Robotics and Automation, (ICRA), pages: 596-601, IEEE, May 2018 (conference)

ei

DOI Project Page [BibTex]

Group invariance principles for causal generative models

Besserve, M., Shajarisales, N., Schölkopf, B., Janzing, D.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 557-565, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]

Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 464-472, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]

Mixture of Attractors: A Novel Movement Primitive Representation for Learning Motor Skills From Demonstrations

Manschitz, S., Gienger, M., Kober, J., Peters, J.

IEEE Robotics and Automation Letters, 3(2):926-933, April 2018 (article)

ei

DOI [BibTex]

Probabilistic movement primitives under unknown system dynamics

Paraschos, A., Rueckert, E., Peters, J., Neumann, G.

Advanced Robotics, 32(6):297-310, April 2018 (article)

ei

DOI [BibTex]

Cause-Effect Inference by Comparing Regression Errors

Blöbaum, P., Janzing, D., Washio, T., Shimizu, S., Schölkopf, B.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 900-909, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]

Will People Like Your Image? Learning the Aesthetic Space

Schwarz, K., Wieschollek, P., Lensch, H. P. A.

2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages: 2048-2057, March 2018 (conference)

ei

DOI [BibTex]

An Algorithmic Perspective on Imitation Learning

Osa, T., Pajarinen, J., Neumann, G., Bagnell, J., Abbeel, P., Peters, J.

Foundations and Trends in Robotics, 7(1-2):1-179, March 2018 (article)

ei

DOI [BibTex]

Using Probabilistic Movement Primitives in Robotics

Paraschos, A., Daniel, C., Peters, J., Neumann, G.

Autonomous Robots, 42(3):529-551, March 2018 (article)

ei

DOI [BibTex]

Representation of sensory uncertainty in macaque visual cortex

Goris, R., Henaff, O., Meding, K.

Computational and Systems Neuroscience (COSYNE) 2018, March 2018 (poster)

ei

[BibTex]

A kernel-based approach to learning contact distributions for robot manipulation tasks

Kroemer, O., Leischnig, S., Luettgen, S., Peters, J.

Autonomous Robots, 42(3):581-600, March 2018 (article)

ei

DOI [BibTex]

Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

Kim, J., Tabibian, B., Oh, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 11th ACM International Conference on Web Search and Data Mining (WSDM), pages: 324-332, (Editors: Yi Chang, Chengxiang Zhai, Yan Liu, and Yoelle Maarek), ACM, February 2018 (conference)

ei

DOI Project Page [BibTex]

Approximate Value Iteration Based on Numerical Quadrature

Vinogradska, J., Bischoff, B., Peters, J.

IEEE Robotics and Automation Letters, 3(2):1330-1337, January 2018 (article)

ei

DOI Project Page [BibTex]

Biomimetic Tactile Sensors and Signal Processing with Spike Trains: A Review

Yi, Z., Zhang, Y., Peters, J.

Sensors and Actuators A: Physical, 269, pages: 41-52, January 2018 (article)

ei

DOI [BibTex]

Die kybernetische Revolution [The Cybernetic Revolution]

Schölkopf, B.

Süddeutsche Zeitung, 15 March 2018 (misc)

ei

link (url) [BibTex]

Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences

Kanagawa, M., Hennig, P., Sejdinovic, D., Sriperumbudur, B. K.

arXiv e-prints, arXiv:1807.02582 [stat.ML], 2018 (article)

Abstract
This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other. It is widely known in machine learning that these two formalisms are closely related; for instance, the estimator of kernel ridge regression is identical to the posterior mean of Gaussian process regression. However, they have been studied and developed almost independently by two essentially separate communities, and this makes it difficult to seamlessly transfer results between them. Our aim is to overcome this potential difficulty. To this end, we review several old and new results and concepts from either side, and juxtapose algorithmic quantities from each framework to highlight close similarities. We also provide discussions on subtle philosophical and theoretical differences between the two approaches.

pn

arXiv [BibTex]
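
The review above notes that the kernel ridge regression estimator is identical to the posterior mean of Gaussian process regression once the regularizer is matched to the noise variance. Below is a minimal numpy sketch of that identity on synthetic data; the RBF kernel, the toy data, and all variable names are illustrative choices of mine, not material from the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2)).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * lengthscale**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))                   # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)    # noisy targets
Xs = np.linspace(-3, 3, 100)[:, None]                  # test inputs

n, sigma2 = X.shape[0], 0.1**2                         # noise variance sigma^2
lam = sigma2 / n                                       # KRR regularizer lambda = sigma^2 / n

K = rbf_kernel(X, X)
Ks = rbf_kernel(Xs, X)

# GP regression posterior mean: k(x*, X) (K + sigma^2 I)^{-1} y
gp_mean = Ks @ np.linalg.solve(K + sigma2 * np.eye(n), y)

# Kernel ridge regression estimate: k(x*, X) (K + n*lambda I)^{-1} y
krr = Ks @ np.linalg.solve(K + n * lam * np.eye(n), y)

assert np.allclose(gp_mean, krr)                       # identical, since n*lambda = sigma^2
```

The same weight vector (K + sigma^2 I)^{-1} y appears in both predictors, which is the algebraic core of the correspondence the paper surveys.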

Functional Programming for Modular Bayesian Inference

Ścibior, A., Kammar, O., Ghahramani, Z.

Proceedings of the ACM on Programming Languages (ICFP), 2(Article No. 83):1-29, ACM, 2018 (conference)

ei

DOI [BibTex]

Design and Analysis of the NIPS 2016 Review Process

Shah*, N., Tabibian*, B., Muandet, K., Guyon, I., von Luxburg, U.

Journal of Machine Learning Research, 19(49):1-34, 2018, *equal contribution (article)

ei slt

arXiv link (url) Project Page [BibTex]

A Flexible Approach for Fair Classification

Zafar, M. B., Valera, I., Gomez Rodriguez, M., Gummadi, K.

Journal of Machine Learning Research, 2018 (article) Accepted

ei

Project Page [BibTex]

Adaptation and Robust Learning of Probabilistic Movement Primitives

Gomez-Gonzalez, S., Neumann, G., Schölkopf, B., Peters, J.

IEEE Transactions on Robotics, 2018 (article) In revision

ei

arXiv [BibTex]

Automatic Bayesian Density Analysis

Vergari, A., Molina, A., Peharz, R., Ghahramani, Z., Kersting, K., Valera, I.

2018 (conference) Submitted

ei

arXiv [BibTex]

A virtual reality environment for experiments in assistive robotics and neural interfaces

Bustamante, S.

Graduate School of Neural Information Processing, Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

ei

PDF [BibTex]

Does universal controllability of physical systems prohibit thermodynamic cycles?

Janzing, D., Wocjan, P.

Open Systems and Information Dynamics, 25(3):1850016, 2018 (article)

ei

PDF DOI [BibTex]

Optimal Trajectory Generation and Learning Control for Robot Table Tennis

Koc, O.

Technical University Darmstadt, Germany, 2018 (phdthesis)

ei

[BibTex]

Learning Causality and Causality-Related Learning: Some Recent Progress

Zhang, K., Schölkopf, B., Spirtes, P., Glymour, C.

National Science Review, 5(1):26-29, 2018 (article)

ei

DOI [BibTex]

Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients

Balles, L., Hennig, P.

In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (inproceedings) Accepted

Abstract
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.

pn

link (url) Project Page [BibTex]
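
The abstract above reads an ADAM step as a sign times a per-weight, variance-adapted magnitude. The numpy sketch below illustrates that factorization on a batch of stochastic gradient samples; it is a deliberately simplified caricature (no running averages, no bias correction, no epsilon) with made-up names, not the authors' code.

```python
import numpy as np

def adam_like_step(grads, lr=1e-3):
    """One bias-correction-free step from a batch of per-weight gradient samples.

    grads: array of shape (num_samples, num_weights) of stochastic gradients.
    Returns the update, factored into a sign and a variance-based magnitude.
    """
    m = grads.mean(axis=0)                            # stand-in for E[g]
    v = (grads**2).mean(axis=0)                       # stand-in for E[g^2]
    rel_var = (v - m**2) / np.maximum(m**2, 1e-16)    # relative variance Var[g] / E[g]^2

    sign = np.sign(m)                                 # direction: sign of the averaged gradient
    magnitude = 1.0 / np.sqrt(1.0 + rel_var)          # = |m| / sqrt(v): damps noisy coordinates
    return -lr * sign * magnitude                     # algebraically equal to -lr * m / sqrt(v)

rng = np.random.default_rng(1)
g = rng.normal(loc=[0.5, -0.1], scale=[0.1, 1.0], size=(64, 2))
print(adam_like_step(g))   # low-noise weight moves by ~lr, high-noise weight is strongly damped
```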

Online optimal trajectory generation for robot table tennis

Koc, O., Maeda, G., Peters, J.

Robotics and Autonomous Systems, 105, pages: 121-137, 2018 (article)

ei

PDF link (url) DOI [BibTex]

Counterfactual Mean Embedding: A Kernel Method for Nonparametric Causal Inference

Muandet, K., Kanagawa, M., Saengkyongam, S., Marukatat, S.

arXiv e-prints, arXiv:1805.08845v1 [stat.ML], 2018 (article)

Abstract
This paper introduces a novel Hilbert space representation of a counterfactual distribution---called counterfactual mean embedding (CME)---with applications in nonparametric causal inference. Counterfactual prediction has become an ubiquitous tool in machine learning applications, such as online advertisement, recommendation systems, and medical diagnosis, whose performance relies on certain interventions. To infer the outcomes of such interventions, we propose to embed the associated counterfactual distribution into a reproducing kernel Hilbert space (RKHS) endowed with a positive definite kernel. Under appropriate assumptions, the CME allows us to perform causal inference over the entire landscape of the counterfactual distribution. The CME can be estimated consistently from observational data without requiring any parametric assumption about the underlying distributions. We also derive a rate of convergence which depends on the smoothness of the conditional mean and the Radon-Nikodym derivative of the underlying marginal distributions. Our framework can deal with not only real-valued outcome, but potentially also more complex and structured outcomes such as images, sequences, and graphs. Lastly, our experimental results on off-policy evaluation tasks demonstrate the advantages of the proposed estimator.

ei pn

arXiv [BibTex]
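
The counterfactual mean embedding described above represents outcome distributions by their kernel means in an RKHS. As a toy illustration of that underlying idea only, the sketch below computes empirical mean embeddings of two outcome samples and the squared MMD (the RKHS distance) between them; it omits the conditional-embedding weighting used by the actual CME estimator, and the data and names are hypothetical.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * ls**2))

def squared_mmd(Y0, Y1, ls=1.0):
    """Squared RKHS distance between the empirical mean embeddings of Y0 and Y1."""
    return rbf(Y0, Y0, ls).mean() - 2 * rbf(Y0, Y1, ls).mean() + rbf(Y1, Y1, ls).mean()

rng = np.random.default_rng(2)
Y_control = rng.normal(0.0, 1.0, size=(200, 1))   # outcomes under one policy
Y_treated = rng.normal(0.5, 1.0, size=(200, 1))   # outcomes under another policy
print(squared_mmd(Y_control, Y_treated))          # clearly positive when the distributions differ
```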

Model-based Kernel Sum Rule: Kernel Bayesian Inference with Probabilistic Models

Nishiyama, Y., Kanagawa, M., Gretton, A., Fukumizu, K.

arXiv e-prints, arXiv:1409.5178v2 [stat.ML], 2018 (article)

Abstract
Kernel Bayesian inference is a powerful nonparametric approach to performing Bayesian inference in reproducing kernel Hilbert spaces or feature spaces. In this approach, kernel means are estimated instead of probability distributions, and these estimates can be used for subsequent probabilistic operations (as for inference in graphical models) or in computing the expectations of smooth functions, for instance. Various algorithms for kernel Bayesian inference have been obtained by combining basic rules such as the kernel sum rule (KSR), kernel chain rule, kernel product rule and kernel Bayes' rule. However, the current framework only deals with fully nonparametric inference (i.e., all conditional relations are learned nonparametrically), and it does not allow for flexible combinations of nonparametric and parametric inference, which are practically important. Our contribution is in providing a novel technique to realize such combinations. We introduce a new KSR referred to as the model-based KSR (Mb-KSR), which employs the sum rule in feature spaces under a parametric setting. Incorporating the Mb-KSR into existing kernel Bayesian framework provides a richer framework for hybrid (nonparametric and parametric) kernel Bayesian inference. As a practical application, we propose a novel filtering algorithm for state space models based on the Mb-KSR, which combines the nonparametric learning of an observation process using kernel mean embedding and the additive Gaussian noise model for a state transition process. While we focus on additive Gaussian noise models in this study, the idea can be extended to other noise models, such as the Cauchy and alpha-stable noise models.

pn

arXiv [BibTex]
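
For orientation, the sketch below implements the standard, fully nonparametric kernel sum rule that the abstract builds on: given training pairs from p(y|x) and a prior embedding over x, it computes weights for an embedding of the marginal over y. It does not reproduce the model-based variant (Mb-KSR) proposed in the paper; the data, kernel, and regularizer are placeholder choices of mine.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * ls**2))

# Training pairs (x_i, y_i) sampled from the conditional p(y | x).
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(100, 1))
Y = np.sin(X) + 0.1 * rng.standard_normal((100, 1))

# Prior pi(x) represented by its empirical mean embedding on points Xp with weights alpha.
Xp = rng.normal(0.0, 0.5, size=(50, 1))
alpha = np.full(50, 1.0 / 50)

# Kernel sum rule: weights beta such that m_Y = sum_i beta_i k(., y_i)
# approximates the embedding of q(y) = integral p(y | x) pi(x) dx.
n, lam = X.shape[0], 1e-3
G_X = rbf(X, X)
K_Xp = rbf(X, Xp)
beta = np.linalg.solve(G_X + n * lam * np.eye(n), K_Xp @ alpha)

# Expectations of smooth functions under q(y) can then be read off from the embedding:
# E_q[f(Y)] is approximated by sum_i beta_i f(y_i).
print(float(beta @ (Y[:, 0] ** 2)))
```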

Hierarchical Reinforcement Learning of Multiple Grasping Strategies with Human Instructions

Osa, T., Peters, J., Neumann, G.

Advanced Robotics, 32(18):955-968, 2018 (article)

ei

DOI [BibTex]

Probabilistic Deep Learning using Random Sum-Product Networks

Peharz, R., Vergari, A., Stelzner, K., Molina, A., Trapp, M., Kersting, K., Ghahramani, Z.

2018 (conference) Submitted

ei

arXiv [BibTex]

k–SVRG: Variance Reduction for Large Scale Optimization

Raj, A., Stich, S.

2018 (inproceedings) Submitted

ei

[BibTex]

Distribution-Dissimilarities in Machine Learning

Simon-Gabriel, C. J.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

ei

[BibTex]

A probabilistic model for the numerical solution of initial value problems

Schober, M., Särkkä, S., Hennig, P.

Statistics and Computing, Springer US, 2018 (article)

Abstract
We study connections between ordinary differential equation (ODE) solvers and probabilistic regression methods in statistics. We provide a new view of probabilistic ODE solvers as active inference agents operating on stochastic differential equation models that estimate the unknown initial value problem (IVP) solution from approximate observations of the solution derivative, as provided by the ODE dynamics. Adding to this picture, we show that several multistep methods of Nordsieck form can be recast as Kalman filtering on q-times integrated Wiener processes. Doing so provides a family of IVP solvers that return a Gaussian posterior measure, rather than a point estimate. We show that some such methods have low computational overhead, nontrivial convergence order, and that the posterior has a calibrated concentration rate. Additionally, we suggest a step size adaptation algorithm which completes the proposed method to a practically useful implementation, which we experimentally evaluate using a representative set of standard codes in the DETEST benchmark set.

pn

PDF Code DOI Project Page [BibTex]
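
The abstract above casts IVP solving as Kalman filtering on an integrated Wiener process, with the ODE dynamics supplying derivative observations. The sketch below is a bare-bones version of that viewpoint for a once-integrated Wiener process prior and a fixed step size; the function and parameter names are mine, and it omits the step-size adaptation and calibration the paper develops.

```python
import numpy as np

def probabilistic_ivp_solve(f, x0, t0, t1, h=0.01, q_scale=1.0):
    """Filter an IVP x'(t) = f(t, x) under a once-integrated Wiener process prior.

    Returns times, posterior means, and posterior variances of x(t).
    """
    A = np.array([[1.0, h], [0.0, 1.0]])               # state transition for [x, x']
    Q = q_scale * np.array([[h**3 / 3, h**2 / 2],
                            [h**2 / 2, h      ]])       # process noise of the prior
    H = np.array([[0.0, 1.0]])                          # we "observe" the derivative component
    R = np.array([[1e-10]])                             # near-noise-free derivative observation

    m = np.array([x0, f(t0, x0)])                       # state mean [x, x']
    P = np.zeros((2, 2))
    ts, means, variances = [t0], [m[0]], [P[0, 0]]

    t = t0
    while t < t1 - 1e-12:
        # Predict under the integrated Wiener process prior.
        m, P = A @ m, A @ P @ A.T + Q
        t += h
        # Evaluate the ODE at the predicted mean and treat it as a derivative observation.
        z = f(t, m[0])
        S = H @ P @ H.T + R
        K = P @ H.T / S
        m = m + (K * (z - H @ m)).ravel()
        P = P - K @ H @ P
        ts.append(t); means.append(m[0]); variances.append(P[0, 0])
    return np.array(ts), np.array(means), np.array(variances)

# Example: x' = -x with x(0) = 1, whose solution is exp(-t).
ts, mean, var = probabilistic_ivp_solve(lambda t, x: -x, x0=1.0, t0=0.0, t1=2.0)
print(mean[-1], np.exp(-2.0))   # posterior mean should be close to the true solution
```

The point of the filtering formulation is visible in the return value: alongside the point estimate, the posterior variance quantifies the solver's remaining uncertainty about x(t).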


Autofocusing-based phase correction

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

Magnetic Resonance in Medicine, 80(3):958-968, 2018 (article)

ei

DOI [BibTex]

Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

7th AREADNE Conference on Research in Encoding and Decoding of Neural Ensembles, 2018 (poster)

ei

link (url) [BibTex]

Case series: Slowing alpha rhythm in late-stage ALS patients

Hohmann, M. R., Fomina, T., Jayaram, V., Emde, T., Just, J., Synofzik, M., Schölkopf, B., Schöls, L., Grosse-Wentrup, M.

Clinical Neurophysiology, 129(2):406-408, 2018 (article)

ei

DOI [BibTex]

Inverse Reinforcement Learning via Nonparametric Spatio-Temporal Subgoal Modeling

Šošić, A., Rueckert, E., Peters, J., Zoubir, A., Koeppl, H.

Journal of Machine Learning Research, 19(69):1-45, 2018 (article)

ei

link (url) Project Page [BibTex]

Grip Stabilization of Novel Objects using Slip Prediction

Veiga, F., Peters, J., Hermans, T.

IEEE Transactions on Haptics, 2018 (article) In press

ei

DOI [BibTex]

A Differentially Private Kernel Two-Sample Test

Raj*, A., Law*, L., Sejdinovic*, D., Park, M.

2018, *equal contribution (conference) Submitted

ei

[BibTex]

Electrophysiological correlates of neurodegeneration in motor and non-motor brain regions in amyotrophic lateral sclerosis—implications for brain–computer interfacing

Kellmeyer, P., Grosse-Wentrup, M., Schulze-Bonhage, A., Ziemann, U., Ball, T.

Journal of Neural Engineering, 15(4):041003, IOP Publishing, 2018 (article)

ei

link (url) [BibTex]

A Causal Perspective on Deep Representation Learning

Suter, R.

ETH Zurich, 2018 (mastersthesis)

ei

[BibTex]


Domain Adaptation Under Causal Assumptions

Lechner, T.

Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

ei

[BibTex]

Quantum machine learning: a classical perspective

Ciliberto, C., Herbster, M., Ialongo, A. D., Pontil, M., Rocchetto, A., Severini, S., Wossnig, L.

Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 474(2209):20170551, 2018 (article)

ei

link (url) DOI [BibTex]

Kernel-based tests for joint independence

Pfister, N., Bühlmann, P., Schölkopf, B., Peters, J.

Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1):5-31, 2018 (article)

ei

DOI [BibTex]
