2020


Automatic Discovery of Interpretable Planning Strategies

Skirzyński, J., Becker, F., Lieder, F.

May 2020 (article) Submitted

Abstract
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that the provision of decision rules as flowcharts significantly improved people’s planning strategies and decisions across three different classes of sequential decision problems. Furthermore, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making.
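To make the flavor of this approach concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: a depth-limited decision tree stands in for the interpretable decision rule that program induction would produce, and simple filtering stands in for the clustering step that identifies a large subset of demonstrations the rule can describe accurately. All names and numbers below are made up for illustration.

# Hypothetical sketch of the "large describable subset" idea behind AI-Interpret.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_interpretable_rule(states, actions, min_coverage=0.7, max_depth=2):
    """Fit a simple rule to the largest subset of demonstrations it can explain."""
    keep = np.ones(len(actions), dtype=bool)
    rule = DecisionTreeClassifier(max_depth=max_depth)
    while keep.mean() >= min_coverage:
        rule.fit(states[keep], actions[keep])
        explained = rule.predict(states) == actions
        if explained[keep].all():        # the rule reproduces every kept demonstration
            break
        keep &= explained                # drop demonstrations the rule cannot explain
    return rule, keep

# Toy usage: 200 demonstrations over two binary state features, a few of them noisy.
rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=(200, 2)).astype(float)
actions = (states[:, 0] == 1).astype(int)
actions[:10] = 1 - actions[:10]          # a few idiosyncratic demonstrations
rule, kept = fit_interpretable_rule(states, actions)
print(f"simple rule covers {kept.mean():.0%} of the demonstrations")

In the method described in the abstract, program induction yields logical decision rules rather than trees, and those rules are rendered as flowcharts for the participants in the behavioral experiments.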

Automatic Discovery of Interpretable Planning Strategies The code for our algorithm and the experiments is available [BibTex]


Adaptation and Robust Learning of Probabilistic Movement Primitives

Gomez-Gonzalez, S., Neumann, G., Schölkopf, B., Peters, J.

IEEE Transactions on Robotics, 36(2):366-379, IEEE, March 2020 (article)

arXiv DOI Project Page [BibTex]


Advancing Rational Analysis to the Algorithmic Level

Lieder, F., Griffiths, T. L.

Behavioral and Brain Sciences, 43, E27, March 2020 (article)

Abstract
The commentaries raised questions about normativity, human rationality, cognitive architectures, cognitive constraints, and the scope of resource-rational analysis (RRA). We respond to these questions and clarify that RRA is a methodological advance that extends the scope of rational modeling to understanding cognitive processes, why they differ between people, why they change over time, and how they could be improved.

Advancing rational analysis to the algorithmic level DOI [BibTex]


Learning to Overexert Cognitive Control in a Stroop Task

Bustamante, L., Lieder, F., Musslick, S., Shenhav, A., Cohen, J.

February 2020, Laura Bustamante and Falk Lieder contributed equally to this publication. (article) In revision

Abstract
How do people learn when to allocate how much cognitive control to which task? According to the Learned Value of Control (LVOC) model, people learn to predict the value of alternative control allocations from features of a given situation. This suggests that people may generalize the value of control learned in one situation to other situations with shared features, even when the demands for cognitive control are different. This makes the intriguing prediction that what a person learned in one setting could, under some circumstances, cause them to misestimate the need for, and potentially over-exert, control in another setting, even if this harms their performance. To test this prediction, we had participants perform a novel variant of the Stroop task in which, on each trial, they could choose to either name the color (more control-demanding) or read the word (more automatic). However, only one of these tasks was rewarded on a given trial; the rewarded task changed from trial to trial and could be predicted by one or more of the stimulus features (the color and/or the word). Participants first learned colors that predicted the rewarded task. Then they learned words that predicted the rewarded task. In the third part of the experiment, we tested how these learned feature associations transferred to novel stimuli with some overlapping features. The stimulus-task-reward associations were designed so that for certain combinations of stimuli the transfer of learned feature associations would incorrectly predict that the more highly rewarded task would be color naming, which would require the exertion of control, even though the actually rewarded task was word reading and therefore did not require the engagement of control. Our results demonstrated that participants over-exerted control for these stimuli, providing support for the feature-based learning mechanism described by the LVOC model.
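For readers unfamiliar with the LVOC model, the following is a minimal, hypothetical sketch (not the authors' code) of the feature-based learning mechanism the abstract appeals to: the predicted value of exerting control is a weighted sum of stimulus features, so weights learned for a color in one phase generalize to novel color-word combinations and can produce over-exertion of control. All feature encodings and parameter values are invented for the example.

# Hypothetical illustration of feature-based value-of-control learning (LVOC-style).
import numpy as np

w = np.zeros(4)          # weights for [color A, color B, word A, word B]
alpha = 0.2              # learning rate

def predicted_value_of_control(features):
    return float(w @ features)

def learn(features, reward):
    global w
    w += alpha * (reward - predicted_value_of_control(features)) * features

# Phase 1: color A reliably predicts that color naming (the control-demanding task) is rewarded.
for _ in range(50):
    learn(np.array([1.0, 0.0, 0.0, 0.0]), reward=1.0)

# Transfer: a novel stimulus pairing color A with word A. The learned color weight
# generalizes, predicting a high value of control even if word reading is actually
# the rewarded task for this combination.
print(predicted_value_of_control(np.array([1.0, 0.0, 1.0, 0.0])))   # ~1.0: over-estimated value of control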

Learning to Overexert Cognitive Control in a Stroop Task DOI [BibTex]


Real Time Trajectory Prediction Using Deep Conditional Generative Models

Gomez-Gonzalez, S., Prokudin, S., Schölkopf, B., Peters, J.

IEEE Robotics and Automation Letters, 5(2):970-976, IEEE, January 2020 (article)

arXiv DOI [BibTex]


Toward a Formal Theory of Proactivity

Lieder, F., Iwama, G.

January 2020 (article) Submitted

Abstract
Beyond merely reacting to their environment and impulses, people have the remarkable capacity to proactively set and pursue their own goals. But the extent to which they leverage this capacity varies widely across people and situations. The goal of this article is to make the mechanisms and variability of proactivity more amenable to rigorous experiments and computational modeling. We proceed in three steps. First, we develop and validate a mathematically precise behavioral measure of proactivity and reactivity that can be applied across a wide range of experimental paradigms. Second, we propose a formal definition of proactivity and reactivity, and develop a computational model of proactivity in the AX Continuous Performance Task (AX-CPT). Third, we develop and test a computational-level theory of meta-control over proactivity in the AX-CPT that identifies three distinct meta-decision-making problems: intention setting, resolving response conflict between intentions and automaticity, and deciding whether to recall context and intentions into working memory. People's response frequencies in the AX-CPT were remarkably well captured by a mixture between the predictions of our models of proactive and reactive control. Empirical data from an experiment varying the incentives and contextual load of an AX-CPT confirmed the predictions of our meta-control model of individual differences in proactivity. Our results suggest that proactivity can be understood in terms of computational models of meta-control. Our model makes additional empirically testable predictions. Future work will extend our models from proactive control in the AX-CPT to proactive goal creation and goal pursuit in the real world.
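The mixture result reported above can be written compactly; in a hedged reading (the symbols below are ours, not necessarily the paper's notation), the predicted response probabilities are
\[
P(r \mid s) \;=\; \lambda\, P_{\text{proactive}}(r \mid s) \;+\; (1 - \lambda)\, P_{\text{reactive}}(r \mid s), \qquad 0 \le \lambda \le 1,
\]
where \(P_{\text{proactive}}\) and \(P_{\text{reactive}}\) are the predictions of the two control models for stimulus sequence \(s\) and response \(r\), and the weight \(\lambda\) captures how proactive a given participant is in a given condition.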

Toward a formal theory of proactivity DOI [BibTex]


Analytical classical density functionals from an equation learning network

Lin, S., Martius, G., Oettel, M.

The Journal of Chemical Physics, 152(2):021102, 2020, arXiv preprint https://arxiv.org/abs/1910.12752 (article)

Preprint_PDF DOI [BibTex]


An Adaptive Optimizer for Measurement-Frugal Variational Algorithms

Kübler, J. M., Arrasmith, A., Cincio, L., Coles, P. J.

Quantum, 4, pages: 263, 2020 (article)

link (url) DOI [BibTex]


Counterfactual Mean Embedding

Muandet, K., Kanagawa, M., Saengkyongam, S., Marukatat, S.

Journal of Machine Learning Research, 2020 (article) Accepted

[BibTex]

2016


Contextual Policy Search for Linear and Nonlinear Generalization of a Humanoid Walking Controller

Abdolmaleki, A., Lau, N., Reis, L., Peters, J., Neumann, G.

Journal of Intelligent & Robotic Systems, 83(3-4):393-408, (Editors: Luis Almeida, Lino Marques), September 2016, Special Issue: Autonomous Robot Systems (article)

DOI [BibTex]


Acquiring and Generalizing the Embodiment Mapping from Human Observations to Robot Skills

Maeda, G., Ewerton, M., Koert, D., Peters, J.

IEEE Robotics and Automation Letters, 1(2):784-791, July 2016 (article)

DOI [BibTex]


On estimation of functional causal models: General results and application to post-nonlinear causal model

Zhang, K., Wang, Z., Zhang, J., Schölkopf, B.

ACM Transactions on Intelligent Systems and Technology, 7(2):article no. 13, January 2016 (article)

PDF DOI [BibTex]


Gaussian Process-Based Predictive Control for Periodic Error Correction

Klenske, E. D., Zeilinger, M., Schölkopf, B., Hennig, P.

IEEE Transactions on Control Systems Technology, 24(1):110-121, 2016 (article)

PDF DOI [BibTex]


Pymanopt: A Python Toolbox for Optimization on Manifolds using Automatic Differentiation

Townsend, J., Koep, N., Weichwald, S.

Journal of Machine Learning Research, 17(137):1-5, 2016 (article)

PDF Arxiv Code Project page link (url) [BibTex]


A Causal, Data-driven Approach to Modeling the Kepler Data

Wang, D., Hogg, D. W., Foreman-Mackey, D., Schölkopf, B.

Publications of the Astronomical Society of the Pacific, 128(967):094503, 2016 (article)

link (url) DOI Project Page [BibTex]


Probabilistic Inference for Determining Options in Reinforcement Learning

Daniel, C., van Hoof, H., Peters, J., Neumann, G.

Machine Learning, Special Issue, 104(2):337-357, (Editors: Gärtner, T., Nanni, M., Passerini, A. and Robardet, C.), European Conference on Machine Learning (ECML), Journal Track, 2016, Best Student Paper Award of ECML-PKDD 2016 (article)

DOI Project Page [BibTex]


Influence of initial fixation position in scene viewing

Rothkegel, L. O. M., Trukenbrod, H. A., Schütt, H. H., Wichmann, F. A., Engbert, R.

Vision Research, 129, pages: 33-49, 2016 (article)

link (url) DOI Project Page [BibTex]


Testing models of peripheral encoding using metamerism in an oddity paradigm

Wallis, T. S. A., Bethge, M., Wichmann, F. A.

Journal of Vision, 16(2), 2016 (article)

DOI Project Page [BibTex]


Modeling Confounding by Half-Sibling Regression

Schölkopf, B., Hogg, D., Wang, D., Foreman-Mackey, D., Janzing, D., Simon-Gabriel, C. J., Peters, J.

Proceedings of the National Academy of Science, 113(27):7391-7398, 2016 (article)

Code link (url) DOI Project Page [BibTex]


Dual Control for Approximate Bayesian Reinforcement Learning

Klenske, E. D., Hennig, P.

Journal of Machine Learning Research, 17(127):1-30, 2016 (article)

PDF link (url) [BibTex]


A Population Based Gaussian Mixture Model Incorporating 18F-FDG-PET and DW-MRI Quantifies Tumor Tissue Classes

Divine, M. R., Katiyar, P., Kohlhofer, U., Quintanilla-Martinez, L., Disselhorst, J. A., Pichler, B. J.

Journal of Nuclear Medicine, 57(3):473-479, 2016 (article)

DOI [BibTex]


Hierarchical Relative Entropy Policy Search

Daniel, C., Neumann, G., Kroemer, O., Peters, J.

Journal of Machine Learning Research, 17(93):1-50, 2016 (article)

link (url) Project Page [BibTex]


Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data

Schütt, H. H., Harmeling, S., Macke, J. H., Wichmann, F. A.

Vision Research, 122, pages: 105-123, 2016 (article)

link (url) DOI Project Page [BibTex]


Causal discovery and inference: concepts and recent methodological advances

Spirtes, P., Zhang, K.

Applied Informatics, 3(3):1-28, 2016 (article)

DOI [BibTex]


Self-regulation of brain rhythms in the precuneus: a novel BCI paradigm for patients with ALS

Fomina, T., Lohmann, G., Erb, M., Ethofer, T., Schölkopf, B., Grosse-Wentrup, M.

Journal of Neural Engineering, 13(6):066021, 2016 (article)

link (url) Project Page [BibTex]


Influence Estimation and Maximization in Continuous-Time Diffusion Networks

Gomez-Rodriguez, M., Song, L., Du, N., Zha, H., Schölkopf, B.

ACM Transactions on Information Systems, 34(2):9:1-9:33, 2016 (article)

DOI Project Page Project Page [BibTex]


Kernel Mean Shrinkage Estimators

Muandet, K., Sriperumbudur, B., Fukumizu, K., Gretton, A., Schölkopf, B.

Journal of Machine Learning Research, 17(48):1-41, 2016 (article)

link (url) [BibTex]


Learning to Deblur

Schuler, C. J., Hirsch, M., Harmeling, S., Schölkopf, B.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7):1439-1451, IEEE, 2016 (article)

DOI [BibTex]


Transfer Learning in Brain-Computer Interfaces

Jayaram, V., Alamgir, M., Altun, Y., Schölkopf, B., Grosse-Wentrup, M.

IEEE Computational Intelligence Magazine, 11(1):20-31, 2016 (article)

PDF DOI [BibTex]


MERLiN: Mixture Effect Recovery in Linear Networks

Weichwald, S., Grosse-Wentrup, M., Gretton, A.

IEEE Journal of Selected Topics in Signal Processing, 10(7):1254-1266, 2016 (article)

Arxiv Code PDF DOI Project Page [BibTex]


Causal inference using invariant prediction: identification and confidence intervals

Peters, J., Bühlmann, P., Meinshausen, N.

Journal of the Royal Statistical Society, Series B (Statistical Methodology), 78(5):947-1012, 2016, (with discussion) (article)

link (url) DOI [BibTex]


The population of long-period transiting exoplanets

Foreman-Mackey, D., Morton, T. D., Hogg, D. W., Agol, E., Schölkopf, B.

The Astronomical Journal, 152(6):206, 2016 (article)

link (url) Project Page [BibTex]


An overview of quantitative approaches in Gestalt perception

Jäkel, F., Singh, M., Wichmann, F. A., Herzog, M. H.

Vision Research, 126, pages: 3-8, 2016 (article)

link (url) DOI Project Page [BibTex]


Bootstrat: Population Informed Bootstrapping for Rare Variant Tests

Huang, H., Peloso, G. M., Howrigan, D., Rakitsch, B., Simon-Gabriel, C. J., Goldstein, J. I., Daly, M. J., Borgwardt, K., Neale, B. M.

bioRxiv, 2016, preprint (article)

link (url) DOI [BibTex]


Probabilistic Movement Models Show that Postural Control Precedes and Predicts Volitional Motor Control

Rueckert, E., Camernik, J., Peters, J., Babic, J.

Nature PG: Scientific Reports, 6(Article number: 28455), 2016 (article)

DOI Project Page [BibTex]


BOiS—Berlin Object in Scene Database: Controlled Photographic Images for Visual Search Experiments with Quantified Contextual Priors

Mohr, J., Seyfarth, J., Lueschow, A., Weber, J. E., Wichmann, F. A., Obermayer, K.

Frontiers in Psychology, 2016 (article)

DOI [BibTex]


Preface to the ACM TIST Special Issue on Causal Discovery and Inference

Zhang, K., Li, J., Bareinboim, E., Schölkopf, B., Pearl, J.

ACM Transactions on Intelligent Systems and Technology, 7(2):article no. 17, 2016 (article)

DOI [BibTex]


Recurrent Spiking Networks Solve Planning Tasks

Rueckert, E., Kappel, D., Tanneberg, D., Pecevski, D., Peters, J.

Nature PG: Scientific Reports, 6(Article number: 21142), 2016 (article)

DOI Project Page [BibTex]


Learning Taxonomy Adaptation in Large-scale Classification

Babbar, R., Partalas, I., Gaussier, E., Amini, M., Amblard, C.

Journal of Machine Learning Research, 17(98):1-37, 2016 (article)

link (url) Project Page [BibTex]


Bio-inspired feedback-circuit implementation of discrete, free energy optimizing, winner-take-all computations

Genewein, T., Braun, D. A.

Biological Cybernetics, 110(2):135–150, June 2016 (article)

Abstract
Bayesian inference and bounded rational decision-making require the accumulation of evidence or utility, respectively, to transform a prior belief or strategy into a posterior probability distribution over hypotheses or actions. Crucially, this process cannot be simply realized by independent integrators, since the different hypotheses and actions also compete with each other. In continuous time, this competitive integration process can be described by a special case of the replicator equation. Here we investigate simple analog electric circuits that implement the underlying differential equation under the constraint that we only permit a limited set of building blocks that we regard as biologically interpretable, such as capacitors, resistors, voltage-dependent conductances and voltage- or current-controlled current and voltage sources. The appeal of these circuits is that they intrinsically perform normalization without requiring an explicit divisive normalization. However, even in idealized simulations, we find that these circuits are very sensitive to internal noise as they accumulate error over time. We discuss to what extent neural circuits could implement these operations, which might provide a generic competitive principle underlying both perception and action.
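For reference, the competitive integration dynamics mentioned above follow the standard form of the replicator equation (the paper considers a special case of it):
\[
\dot{x}_i \;=\; x_i \bigl( f_i(x) - \bar{f}(x) \bigr), \qquad \bar{f}(x) = \sum_j x_j f_j(x),
\]
where \(x_i\) is the probability weight assigned to hypothesis or action \(i\) and \(f_i\) is its accumulated evidence or utility. Because \(\sum_i \dot{x}_i = 0\) whenever \(\sum_i x_i = 1\), the weights remain normalized without an explicit divisive-normalization step, which is the property the analog circuits exploit.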

DOI [BibTex]