Doing more with less: Meta-reasoning and meta-learning in humans and machines

Griffiths, T., Callaway, F., Chang, M., Grant, E., Krueger, P. M., Lieder, F.

Current Opinion in Behavioral Sciences, 2019 (article)

DOI [BibTex]


Cognitive Prostheses for Goal Achievement

Lieder, F., Chen, O. X., Krueger, P. M., Griffiths, T.

Nature Human Behaviour, 2019 (article)

DOI [BibTex]


Effects of system response delays on elderly humans’ cognitive performance in a virtual training scenario

Wirzberger, M., Schmidt, R., Georgi, M., Hardt, W., Brunnett, G., Rey, G. D.

Scientific Reports, 9:8291, 2019 (article)

Abstract
Observed influences of system response delay in spoken human-machine dialogues are rather ambiguous and mainly focus on perceived system quality. Studies that systematically inspect effects on cognitive performance are still lacking, and effects of individual characteristics are also often neglected. Building on benefits of cognitive training for decelerating cognitive decline, this Wizard-of-Oz study addresses both issues by testing 62 elderly participants in a dialogue-based memory training with a virtual agent. Participants acquired the method of loci with fading instructional guidance and applied it afterward to memorizing and recalling lists of German nouns. System response delays were randomly assigned, and training performance was included as potential mediator. Participants’ age, gender, and subscales of affinity for technology (enthusiasm, competence, positive and negative perception of technology) were inspected as potential moderators. The results indicated positive effects on recall performance with higher training performance, female gender, and less negative perception of technology. Additionally, memory retention and facets of affinity for technology moderated increasing system response delays. Participants also provided higher ratings in perceived system quality with higher enthusiasm for technology but reported increasing frustration with a more positive perception of technology. Potential explanations and implications for the design of spoken dialogue systems are discussed.

link (url) DOI [BibTex]


A meta-analysis of the segmenting effect

Rey, G. D., Beege, M., Nebel, S., Wirzberger, M., Schmitt, T., Schneider, S.

Educational Psychology Review, 2019 (article)

Abstract
The segmenting effect states that people learn better when multimedia instructions are presented in (meaningful and coherent) learner-paced segments, rather than as continuous units. This meta-analysis contains 56 investigations including 88 pairwise comparisons and reveals a significant segmenting effect with small to medium effects for retention and transfer performance. Segmentation also reduces the overall cognitive load and increases learning time. These four effects are confirmed for a system-paced segmentation. The meta-analysis tests different explanations for the segmenting effect that concern facilitating chunking and structuring due to segmenting the multimedia instruction by the instructional designer, providing more time for processing the instruction and allowing the learners to adapt the presentation pace to their individual needs. Moderation analyses indicate that learners with high prior knowledge benefitted more from segmenting instructional material than learners with no or low prior knowledge in terms of retention performance.

DOI [BibTex]


A rational reinterpretation of dual process theories

Milli, S., Lieder, F., Griffiths, T.

2019 (article)

DOI [BibTex]


Probabilistic Linear Solvers: A Unifying View

Bartels, S., Cockayne, J., Ipsen, I. C. F., Hennig, P.

Statistics and Computing, 2019 (article) Accepted

link (url) [BibTex]

2018


Probabilistic Solutions To Ordinary Differential Equations As Non-Linear Bayesian Filtering: A New Perspective

Tronarp, F., Kersting, H., Särkkä, S., Hennig, P.

arXiv preprint arXiv:1810.03440 [stat.ME], October 2018 (article)

Abstract
We formulate probabilistic numerical approximations to solutions of ordinary differential equations (ODEs) as problems in Gaussian process (GP) regression with non-linear measurement functions. This is achieved by defining the measurement sequence to consist of the observations of the difference between the derivative of the GP and the vector field evaluated at the GP---which are all identically zero at the solution of the ODE. When the GP has a state-space representation, the problem can be reduced to a Bayesian state estimation problem and all widely-used approximations to the Bayesian filtering and smoothing problems become applicable. Furthermore, all previous GP-based ODE solvers, which were formulated in terms of generating synthetic measurements of the vector field, come out as specific approximations. We derive novel solvers, both Gaussian and non-Gaussian, from the Bayesian state estimation problem posed in this paper and compare them with other probabilistic solvers in illustrative experiments.
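The measurement construction described in this abstract can be sketched for the linear test problem x'(t) = -x(t), x(0) = 1: a once-integrated Wiener prior over (x, x') is conditioned, via an extended Kalman filter, on the "observation" that the derivative component minus the vector field is zero. Step size, prior scale, and the zero measurement noise below are illustrative assumptions, not a configuration taken from the paper.

```python
import numpy as np

# Linear test problem x'(t) = f(x(t)) with f(x) = -x and x(0) = 1;
# the exact solution is exp(-t).
f = lambda x: -x
df = lambda x: -1.0            # Jacobian of f, used for the EKF linearisation

h, n_steps, sigma2 = 0.01, 200, 1.0
A = np.array([[1.0, h], [0.0, 1.0]])                          # IWP(1) transition
Q = sigma2 * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])  # process noise

m = np.array([1.0, f(1.0)])    # state (x, x'), initialised consistently
P = np.zeros((2, 2))

for _ in range(n_steps):
    # predict under the Gauss-Markov prior
    m, P = A @ m, A @ P @ A.T + Q
    # condition on the 'measurement' that the derivative minus the vector
    # field is zero: residual r(X) = X_1 - f(X_0), Jacobian H = (-f'(X_0), 1),
    # with the measurement noise taken to be zero here
    H = np.array([-df(m[0]), 1.0])
    S = H @ P @ H
    K = P @ H / S
    m = m - K * (m[1] - f(m[0]))
    P = P - np.outer(K, H @ P)

print(m[0], np.exp(-2.0))      # filter mean vs. exact solution at t = 2
```

For this linear vector field the extended Kalman filter linearisation is exact, and the filter mean lands close to the true solution exp(-2) at t = 2.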

link (url) Project Page [BibTex]



Leveraging Contact Forces for Learning to Grasp

Merzic, H., Bogdanovic, M., Kappler, D., Righetti, L., Bohg, J.

arXiv, September 2018, submitted to ICRA'19 (article)

Abstract
Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used two-fingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.

video arXiv [BibTex]


Robust Physics-based Motion Retargeting with Realistic Body Shapes

Borno, M. A., Righetti, L., Black, M. J., Delp, S. L., Fiume, E., Romero, J.

Computer Graphics Forum, 37, pages: 6:1-12, July 2018 (article)

Abstract
Motion capture is often retargeted to new, and sometimes drastically different, characters. When the characters take on realistic human shapes, however, we become more sensitive to the motion looking right. This means adapting it to be consistent with the physical constraints imposed by different body shapes. We show how to take realistic 3D human shapes, approximate them using a simplified representation, and animate them so that they move realistically using physically-based retargeting. We develop a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The approach automatically adapts the motion of the mocap subject to the body shape of a target subject. This motion respects the physical properties of the new body and every body shape results in a different and appropriate movement. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters. In an interactive environment, successful retargeting requires adapting the motion to unexpected external forces. We achieve robustness to such forces using a novel LQR-tree formulation. We show that the simulated motions look appropriate to each character’s anatomy and their actions are robust to perturbations.

pdf video Project Page Project Page [BibTex]


Convergence Rates of Gaussian ODE Filters

Kersting, H., Sullivan, T. J., Hennig, P.

arXiv preprint arXiv:1807.09737 [math.NA], July 2018 (article)

Abstract
A recently-introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution $x$ and its first $q$ derivatives a priori as a Gauss--Markov process $\boldsymbol{X}$, which is then iteratively conditioned on information about $\dot{x}$. We prove worst-case local convergence rates of order $h^{q+1}$ for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order $h^q$ in the case of $q=1$ and an integrated Brownian motion prior, and analyse how inaccurate information on $\dot{x}$ coming from approximate evaluations of $f$ affects these rates. Moreover, we present explicit formulas for the steady states and show that the posterior confidence intervals are well calibrated in all considered cases that exhibit global convergence---in the sense that they globally contract at the same rate as the truncation error.

link (url) Project Page [BibTex]


Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences

Kanagawa, M., Hennig, P., Sejdinovic, D., Sriperumbudur, B. K.

arXiv e-prints, arXiv:1805.08845v1 [stat.ML], 2018 (article)

Abstract
This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other. It is widely known in machine learning that these two formalisms are closely related; for instance, the estimator of kernel ridge regression is identical to the posterior mean of Gaussian process regression. However, they have been studied and developed almost independently by two essentially separate communities, and this makes it difficult to seamlessly transfer results between them. Our aim is to overcome this potential difficulty. To this end, we review several old and new results and concepts from either side, and juxtapose algorithmic quantities from each framework to highlight close similarities. We also provide discussions on subtle philosophical and theoretical differences between the two approaches.
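The equivalence noted in this abstract (the kernel ridge regression estimator coincides with the GP posterior mean) can be checked numerically. The polynomial kernel, data, and test points below are arbitrary illustrative choices; the ridge parameter is set equal to the GP noise variance, which is the condition under which the two estimators agree.

```python
import numpy as np

# The homogeneous polynomial kernel k(x, z) = (x.z)^2 on R^2 admits an
# explicit 3-dimensional feature map, so the frequentist side can be
# computed as plain ridge regression in feature space.
phi = lambda X: np.column_stack(
    [X[:, 0] ** 2, X[:, 1] ** 2, np.sqrt(2) * X[:, 0] * X[:, 1]]
)
k = lambda A, B: (A @ B.T) ** 2     # k(x, z) = phi(x) . phi(z)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(30, 2)), rng.normal(size=30)
Xs = rng.normal(size=(5, 2))        # test inputs
lam = 0.5                           # ridge parameter == GP noise variance

# frequentist view: ridge regression with explicit features
F = phi(X)
w = np.linalg.solve(F.T @ F + lam * np.eye(3), F.T @ y)
f_ridge = phi(Xs) @ w

# Bayesian view: GP posterior mean with the same kernel
f_gp = k(Xs, X) @ np.linalg.solve(k(X, X) + lam * np.eye(len(X)), y)

print(np.max(np.abs(f_ridge - f_gp)))   # agreement up to round-off
```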

arXiv [BibTex]


Schema-related cognitive load influences performance, speech, and physiology in a dual-task setting: A continuous multi-measure approach

Wirzberger, M., Herms, R., Esmaeili Bijarsari, S., Eibl, M., Rey, G. D.

Cognitive Research: Principles and Implications, 3:46, Springer Nature, 2018 (article)

Abstract
Schema acquisition processes comprise an essential source of cognitive demands in learning situations. To shed light on related mechanisms and influencing factors, this study applied a continuous multi-measure approach for cognitive load assessment. In a dual-task setting, a sample of 123 student participants learned visually presented symbol combinations with one of two levels of complexity while memorizing auditorily presented number sequences. Learners’ cognitive load during the learning task was addressed by secondary task performance, prosodic speech parameters (pauses, articulation rate), and physiological markers (heart rate, skin conductance response). While results revealed increasing primary and secondary task performance over the trials, decreases in speech and physiological parameters indicated a reduction in the overall level of cognitive load with task progression. In addition, the robustness of the acquired schemata was confirmed by a transfer task that required participants to apply the obtained symbol combinations. Taken together, the observed pattern of evidence supports the idea of a logarithmically decreasing progression of cognitive load with increasing schema acquisition, and further hints at robust and stable transfer performance, even under enhanced transfer demands. Finally, theoretical and practical consequences consider evidence on desirable difficulties in learning as well as the potential of multimodal cognitive load detection in learning applications.

DOI [BibTex]


Counterfactual Mean Embedding: A Kernel Method for Nonparametric Causal Inference

Muandet, K., Kanagawa, M., Saengkyongam, S., Marukata, S.

arXiv e-prints, arXiv:1805.08845v1 [stat.ML], 2018 (article)

Abstract
This paper introduces a novel Hilbert space representation of a counterfactual distribution---called counterfactual mean embedding (CME)---with applications in nonparametric causal inference. Counterfactual prediction has become a ubiquitous tool in machine learning applications, such as online advertisement, recommendation systems, and medical diagnosis, whose performance relies on certain interventions. To infer the outcomes of such interventions, we propose to embed the associated counterfactual distribution into a reproducing kernel Hilbert space (RKHS) endowed with a positive definite kernel. Under appropriate assumptions, the CME allows us to perform causal inference over the entire landscape of the counterfactual distribution. The CME can be estimated consistently from observational data without requiring any parametric assumption about the underlying distributions. We also derive a rate of convergence which depends on the smoothness of the conditional mean and the Radon-Nikodym derivative of the underlying marginal distributions. Our framework can deal with not only real-valued outcomes, but potentially also more complex and structured outcomes such as images, sequences, and graphs. Lastly, our experimental results on off-policy evaluation tasks demonstrate the advantages of the proposed estimator.
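As a small illustration of the machinery this abstract builds on (the empirical kernel mean embedding and the MMD between two embedded distributions), not of the paper's CME estimator itself; kernel and distributions are assumed for the example:

```python
import numpy as np

# Embedding of a distribution P in an RKHS with kernel k is
# mu_P = E[k(X, .)], estimated by the sample average of feature maps.
k = lambda x, z: np.exp(-0.5 * (x - z) ** 2)   # Gaussian kernel, illustrative

rng = np.random.default_rng(0)
X = rng.normal(loc=0.0, size=1000)   # samples from P
Y = rng.normal(loc=1.0, size=1000)   # samples from Q

# \hat{mu}_P(z) = (1/n) sum_i k(x_i, z): the sample-average embedding
mu_P = lambda z: np.mean(k(X, z))
mu_Q = lambda z: np.mean(k(Y, z))

# squared MMD between P and Q via the embeddings:
# ||mu_P - mu_Q||^2 = E k(X,X') - 2 E k(X,Y) + E k(Y,Y')
mmd2 = (np.mean(k(X[:, None], X[None, :]))
        - 2 * np.mean(k(X[:, None], Y[None, :]))
        + np.mean(k(Y[:, None], Y[None, :])))
print(mmd2)    # clearly positive, since P != Q
```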

arXiv [BibTex]


Model-based Kernel Sum Rule: Kernel Bayesian Inference with Probabilistic Models

Nishiyama, Y., Kanagawa, M., Gretton, A., Fukumizu, K.

arXiv e-prints, arXiv:1409.5178v2 [stat.ML], 2018 (article)

Abstract
Kernel Bayesian inference is a powerful nonparametric approach to performing Bayesian inference in reproducing kernel Hilbert spaces or feature spaces. In this approach, kernel means are estimated instead of probability distributions, and these estimates can be used for subsequent probabilistic operations (as for inference in graphical models) or in computing the expectations of smooth functions, for instance. Various algorithms for kernel Bayesian inference have been obtained by combining basic rules such as the kernel sum rule (KSR), kernel chain rule, kernel product rule and kernel Bayes' rule. However, the current framework only deals with fully nonparametric inference (i.e., all conditional relations are learned nonparametrically), and it does not allow for flexible combinations of nonparametric and parametric inference, which are practically important. Our contribution is in providing a novel technique to realize such combinations. We introduce a new KSR referred to as the model-based KSR (Mb-KSR), which employs the sum rule in feature spaces under a parametric setting. Incorporating the Mb-KSR into existing kernel Bayesian framework provides a richer framework for hybrid (nonparametric and parametric) kernel Bayesian inference. As a practical application, we propose a novel filtering algorithm for state space models based on the Mb-KSR, which combines the nonparametric learning of an observation process using kernel mean embedding and the additive Gaussian noise model for a state transition process. While we focus on additive Gaussian noise models in this study, the idea can be extended to other noise models, such as the Cauchy and alpha-stable noise models.

arXiv [BibTex]


Attention please! Enhanced attention control abilities compensate for instructional impairments in multimedia learning

Wirzberger, M., Rey, G. D.

Journal of Computers in Education, 5(2):243-257, Springer Nature, 2018 (article)

Abstract
Learners exposed to multimedia learning contexts have to deal with a variety of visual stimuli, demanding a conducive design of learning material to maintain limitations in attentional resources. Within the current study, effects and constraints arising from two selected impairing features are investigated in more detail within a computer-based learning task on factor analysis. A sample of 53 students received a combination of textual and pictorial elements that explained the topic, while impaired attention was systematically induced in a 2 × 2 factorial between-subjects design by interrupting system-notifications (with vs. without) and seductive text passages (with vs. without). Learners’ ability for controlled attention was assessed with a standardized psychological attention inventory. Approaching the results, learners receiving seductive text passages spent significantly more time on the learning material. In addition, a moderation effect of attention control abilities on the relationship between interruptions and retention performance resulted. Explanations for the obtained findings are discussed referring to mechanisms of compensation, load, and activation.

DOI Project Page [BibTex]


A probabilistic model for the numerical solution of initial value problems

Schober, M., Särkkä, S., Hennig, P.

Statistics and Computing, Springer US, 2018 (article)

Abstract
We study connections between ordinary differential equation (ODE) solvers and probabilistic regression methods in statistics. We provide a new view of probabilistic ODE solvers as active inference agents operating on stochastic differential equation models that estimate the unknown initial value problem (IVP) solution from approximate observations of the solution derivative, as provided by the ODE dynamics. Adding to this picture, we show that several multistep methods of Nordsieck form can be recast as Kalman filtering on q-times integrated Wiener processes. Doing so provides a family of IVP solvers that return a Gaussian posterior measure, rather than a point estimate. We show that some such methods have low computational overhead, nontrivial convergence order, and that the posterior has a calibrated concentration rate. Additionally, we suggest a step size adaptation algorithm which completes the proposed method to a practically useful implementation, which we experimentally evaluate using a representative set of standard codes in the DETEST benchmark set.

PDF Code DOI Project Page [BibTex]


The Computational Challenges of Pursuing Multiple Goals: Network Structure of Goal Systems Predicts Human Performance

Reichman, D., Lieder, F., Bourgin, D. D., Talmon, N., Griffiths, T. L.

PsyArXiv, 2018 (article)

DOI [BibTex]


The moderating role of arousal on the seductive detail effect in a multimedia learning setting

Schneider, S., Wirzberger, M., Rey, G. D.

Applied Cognitive Psychology, Wiley, 2018 (article)

Abstract
Arousal has been found to increase learners' attentional resources. In contrast, seductive details (interesting but learning‐irrelevant information) are considered to distract attention away from relevant information and, thus, hinder learning. However, a possibly moderating role of arousal on the seductive detail effect has not been examined yet. In this study, arousal variations were induced via audio files of false heartbeats. In consequence, 100 participants were randomly assigned to a 2 (with or without seductive details) × 2 (lower vs. higher false heart rates) between‐subjects design. Data on learning performance, cognitive load, motivation, heartbeat frequency, and electro‐dermal activity were collected. Results show learning‐inhibiting effects for seductive details and learning‐enhancing effects for higher false heart rates. Cognitive processes mediate both effects. However, the detrimental effect of seductive details was not present when heart rate was higher. Results indicate that the seductive detail effect is moderated by a learner's state of arousal.

DOI [BibTex]


Learning a Structured Neural Network Policy for a Hopping Task

Viereck, J., Kozolinsky, J., Herzog, A., Righetti, L.

IEEE Robotics and Automation Letters, 3(4):4092-4099, October 2018 (article)

link (url) DOI [BibTex]


Rational metareasoning and the plasticity of cognitive control

Lieder, F., Shenhav, A., Musslick, S., Griffiths, T. L.

PLoS Computational Biology, 14(4):e1006043, Public Library of Science, 2018 (article)

Project Page Project Page [BibTex]


Over-representation of extreme events in decision making reflects rational use of cognitive resources

Lieder, F., Griffiths, T. L., Hsu, M.

Psychological Review, 125(1):1-32, 2018 (article)

[BibTex]


The Impact of Robotics and Automation on Working Conditions and Employment [Ethical, Legal, and Societal Issues]

Pham, Q., Madhavan, R., Righetti, L., Smart, W., Chatila, R.

IEEE Robotics and Automation Magazine, 25(2):126-128, June 2018 (article)

link (url) DOI [BibTex]


Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues]

Righetti, L., Pham, Q., Madhavan, R., Chatila, R.

IEEE Robotics & Automation Magazine, 25(1):123-126, March 2018 (article)

Abstract
The topic of lethal autonomous weapon systems has recently caught public attention due to extensive news coverage and apocalyptic declarations from famous scientists and technologists. Weapon systems with increasing autonomy are being developed due to fast improvements in machine learning, robotics, and automation in general. These developments raise important and complex security, legal, ethical, societal, and technological issues that are being extensively discussed by scholars, nongovernmental organizations (NGOs), militaries, governments, and the international community. Unfortunately, the robotics community has stayed out of the debate, for the most part, despite being the main provider of autonomous technologies. In this column, we review the main issues raised by the increase of autonomy in weapon systems and the state of the international discussion. We argue that the robotics community has a fundamental role to play in these discussions, for its own sake, to provide the often-missing technical expertise necessary to frame the debate and promote technological development in line with the IEEE Robotics and Automation Society (RAS) objective of advancing technology to benefit humanity.

link (url) DOI [BibTex]

2017


Probabilistic Line Searches for Stochastic Optimization

Mahsereci, M., Hennig, P.

Journal of Machine Learning Research, 18(119):1-59, November 2017 (article)

link (url) Project Page [BibTex]


Convergence Analysis of Deterministic Kernel-Based Quadrature Rules in Misspecified Settings

Kanagawa, M., Sriperumbudur, B. K., Fukumizu, K.

arXiv e-prints, arXiv:1709.00147v1 [math.NA], 2017 (article)

Abstract
This paper presents convergence analysis of kernel-based quadrature rules in misspecified settings, focusing on deterministic quadrature in Sobolev spaces. In particular, we deal with misspecified settings where a test integrand is less smooth than a Sobolev RKHS based on which a quadrature rule is constructed. We provide convergence guarantees based on two different assumptions on a quadrature rule: one on quadrature weights, and the other on design points. More precisely, we show that convergence rates can be derived (i) if the sum of absolute weights remains constant (or does not increase quickly), or (ii) if the minimum distance between design points does not decrease very quickly. As a consequence of the latter result, we derive a rate of convergence for Bayesian quadrature in misspecified settings. We reveal a condition on design points to make Bayesian quadrature robust to misspecification, and show that, under this condition, it may adaptively achieve the optimal rate of convergence in the Sobolev space of a lesser order (i.e., of the unknown smoothness of a test integrand), under a slightly stronger regularity condition on the integrand.
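A minimal kernel (Bayesian) quadrature sketch of the kind analysed in this abstract, in a well-specified setting: with the Brownian-motion kernel k(x, y) = min(x, y) on [0, 1], the kernel embedding of the design is available in closed form, and the quadrature weights are w = K^{-1} z. Kernel, design points, and integrand here are illustrative assumptions, not the configuration studied in the paper.

```python
import numpy as np

# Brownian-motion kernel k(x, y) = min(x, y) on [0, 1], for which
# z_i = \int_0^1 k(x_i, t) dt = x_i - x_i^2 / 2 in closed form.
n = 20
x = (np.arange(n) + 1) / n                    # equispaced design points
K = np.minimum(x[:, None], x[None, :])        # Gram matrix
z = x - x ** 2 / 2                            # kernel embedding of the design
w = np.linalg.solve(K, z)                     # quadrature weights w = K^{-1} z

f = lambda t: np.sin(np.pi * t)
estimate = w @ f(x)                           # quadrature rule sum_i w_i f(x_i)
print(estimate, 2 / np.pi)                    # true integral of f is 2/pi
```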

arXiv [BibTex]


Early Stopping Without a Validation Set

Mahsereci, M., Balles, L., Lassner, C., Hennig, P.

arXiv preprint arXiv:1703.09580, 2017 (article)

Abstract
Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization. To find a good point to halt the optimizer, a common practice is to split the dataset into a training and a smaller validation set to obtain an ongoing estimate of the generalization performance. In this paper we propose a novel early stopping criterion which is based on fast-to-compute, local statistics of the computed gradients and entirely removes the need for a held-out validation set. Our experiments show that this is a viable approach in the setting of least-squares and logistic regression as well as neural networks.

link (url) Project Page Project Page [BibTex]


Krylov Subspace Recycling for Fast Iterative Least-Squares in Machine Learning

Roos, F. D., Hennig, P.

arXiv preprint arXiv:1706.00241, 2017 (article)

Abstract
Solving symmetric positive definite linear problems is a fundamental computational task in machine learning. The exact solution, famously, is cubically expensive in the size of the matrix. To alleviate this problem, several linear-time approximations, such as spectral and inducing-point methods, have been suggested and are now in wide use. These are low-rank approximations that choose the low-rank space a priori and do not refine it over time. While this allows linear cost in the data-set size, it also causes a finite, uncorrected approximation error. Authors from numerical linear algebra have explored ways to iteratively refine such low-rank approximations, at a cost of a small number of matrix-vector multiplications. This idea is particularly interesting in the many situations in machine learning where one has to solve a sequence of related symmetric positive definite linear problems. From the machine learning perspective, such deflation methods can be interpreted as transfer learning of a low-rank approximation across a time-series of numerical tasks. We study the use of such methods for our field. Our empirical results show that, on regression and classification problems of intermediate size, this approach can interpolate between low computational cost and numerical precision.
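For context on the abstract's starting point, a textbook conjugate-gradient solver for symmetric positive definite systems (not the deflation/recycling method studied in the paper): each iteration needs only one matrix-vector product, avoiding the cubic cost of a direct solve. The test matrix below is an arbitrary well-conditioned example.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
    """Textbook CG for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(maxiter or len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate update of the direction
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)      # well-conditioned SPD test matrix
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm, near machine precision
```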

link (url) Project Page [BibTex]


Fast Bayesian hyperparameter optimization on large datasets

Klein, A., Falkner, S., Bartels, S., Hennig, P., Hutter, F.

Electronic Journal of Statistics, 11, 2017 (article)

[BibTex]


Embedded interruptions and task complexity influence schema-related cognitive load progression in an abstract learning task

Wirzberger, M., Bijarsari, S. E., Rey, G. D.

Acta Psychologica, 179, pages: 30-41, Elsevier, 2017 (article)

Abstract
Cognitive processes related to schema acquisition comprise an essential source of demands in learning situations. Since the related amount of cognitive load is supposed to change over time, plausible temporal models of load progression based on different theoretical backgrounds are inspected in this study. A total of 116 student participants completed a basal symbol sequence learning task, which provided insights into underlying cognitive dynamics. Two levels of task complexity were determined by the amount of elements within the symbol sequence. In addition, interruptions due to an embedded secondary task occurred at five predefined stages over the task. Within the resulting 2×5-factorial mixed between-within design, the continuous monitoring of efficiency in learning performance enabled assumptions on relevant resource investment. From the obtained results, a nonlinear change of learning efficiency over time seems most plausible in terms of cognitive load progression. Moreover, different effects of the induced interruptions show up in conditions of task complexity, which indicate the activation of distinct cognitive mechanisms related to structural aspects of the task. Findings are discussed in the light of evidence from research on memory and information processing.

DOI [BibTex]


Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

Wahl, N., Hennig, P., Wieser, H. P., Bangert, M.

Physics in Medicine & Biology, 62(14):5790-5807, 2017 (article)

Abstract
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU $\leqslant 5$ min). The resulting standard deviation (expectation value) of dose shows average global $\gamma_{3\%/3\,\mathrm{mm}}$ pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.

pn

link (url) [BibTex]

Analytical probabilistic modeling of RBE-weighted dose for ion therapy

Wieser, H., Hennig, P., Wahl, N., Bangert, M.

Physics in Medicine and Biology (PMB), 62(23):8959-8982, 2017 (article)

pn

link (url) [BibTex]

Empirical Evidence for Resource-Rational Anchoring and Adjustment

Lieder, F., Griffiths, T. L., Huys, Q. J. M., Goodman, N. D.

Psychonomic Bulletin & Review, 25, pages: 775-784, Springer, 2017 (article)

re

[BibTex]

Strategy selection as rational metareasoning

Lieder, F., Griffiths, T.

Psychological Review, 124, pages: 762-794, American Psychological Association, 2017 (article)

re

Project Page [BibTex]

A computerized training program for teaching people how to plan better

Lieder, F., Krueger, P. M., Callaway, F., Griffiths, T. L.

PsyArXiv, 2017 (article)

re

Project Page [BibTex]

Toward a rational and mechanistic account of mental effort

Shenhav, A., Musslick, S., Lieder, F., Kool, W., Griffiths, T., Cohen, J., Botvinick, M.

Annual Review of Neuroscience, 40, pages: 99-124, Annual Reviews, 2017 (article)

re

Project Page [BibTex]

The anchoring bias reflects rational use of cognitive resources

Lieder, F., Griffiths, T. L., Huys, Q. J. M., Goodman, N. D.

Psychonomic Bulletin & Review, 25, pages: 762-794, Springer, 2017 (article)

re

[BibTex]


2016


Gaussian Process-Based Predictive Control for Periodic Error Correction

Klenske, E. D., Zeilinger, M., Schölkopf, B., Hennig, P.

IEEE Transactions on Control Systems Technology, 24(1):110-121, 2016 (article)

ei pn

PDF DOI Project Page [BibTex]

Dual Control for Approximate Bayesian Reinforcement Learning

Klenske, E. D., Hennig, P.

Journal of Machine Learning Research, 17(127):1-30, 2016 (article)

ei pn

PDF link (url) Project Page [BibTex]


One for all?! Simultaneous examination of load-inducing factors for advancing media-related instructional research

Wirzberger, M., Beege, M., Schneider, S., Nebel, S., Rey, G. D.

Computers & Education, 100, pages: 18-31, Elsevier BV, 2016 (article)

Abstract
In multimedia learning settings, limitations in learners' mental resource capacities need to be considered to avoid impairing effects on learning performance. Based on the prominent and often quoted Cognitive Load Theory, this study investigates the potential of a single experimental approach to provide simultaneous and separate measures for the postulated load-inducing factors. Applying a basal letter-learning task related to the process of working memory updating, intrinsic cognitive load (by varying task complexity), extraneous cognitive load (via inducing split-attention demands) and germane cognitive load (by varying the presence of schemata) were manipulated within a 3 × 2 × 2-factorial full repeated-measures design. The performance of a student sample (N = 96) was inspected regarding reaction times and errors in updating and recall steps. Approaching the results with linear mixed models, the effect of complexity gained substantial strength, whereas the other factors received at least partially significant support. Additionally, interactions between two or all load-inducing factors occurred. Despite various open questions, the study comprises a promising step for the empirical investigation of existing construction yards in cognitive load research.

re

DOI [BibTex]

Momentum Control with Hierarchical Inverse Dynamics on a Torque-Controlled Humanoid

Herzog, A., Rotella, N., Mason, S., Grimminger, F., Schaal, S., Righetti, L.

Autonomous Robots, 40(3):473-491, 2016 (article)

Abstract
Hierarchical inverse dynamics based on cascades of quadratic programs have been proposed for the control of legged robots. They have important benefits but, to the best of our knowledge, have never been implemented on a torque-controlled humanoid, where model inaccuracies, sensor noise and real-time computation requirements can be problematic. Using a reformulation of existing algorithms, we propose a simplification of the problem that allows us to achieve real-time control. Momentum-based control is integrated in the task hierarchy and an LQR design approach is used to compute the desired associated closed-loop behavior and improve performance. Extensive experiments on various balancing and tracking tasks show very robust performance in the face of unknown disturbances, even when the humanoid is standing on one foot. Our results demonstrate that hierarchical inverse dynamics together with momentum control can be efficiently used for feedback control under real robot conditions.

am mg

link (url) DOI [BibTex]

2015


Probabilistic Interpretation of Linear Solvers

Hennig, P.

SIAM Journal on Optimization, 25(1):234-260, 2015 (article)

ei pn

Web PDF link (url) DOI [BibTex]

Modeling interruption and resumption in a smartphone task: An ACT-R approach

Wirzberger, M., Russwinkel, N.

i-com, 14(2), Walter de Gruyter GmbH, 2015 (article)

Abstract
This research aims to inspect human cognition when being interrupted while performing a smartphone task with varying levels of mental demand. Due to its benefits especially in the early stages of interface development, a cognitive modeling approach is used. It applies the cognitive architecture ACT-R to shed light on task-related cognitive processing. The inspected task setting involves a shopping scenario, manipulating interruption via product advertisements and mental demand via the respective number of people the shopping is done for. Model predictions are validated through a corresponding experimental setting with 62 human participants. Comparing model and human data in a defined set of performance-related parameters displays mixed results that indicate an acceptable fit – at least in some cases. Potential explanations for the observed differences are discussed at the end.

re

DOI [BibTex]

Probabilistic numerics and uncertainty in computations

Hennig, P., Osborne, M. A., Girolami, M.

Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 471(2179), 2015 (article)

Abstract
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

ei pn

PDF DOI [BibTex]

The optimism bias may support rational action

Lieder, F., Goel, S., Kwan, R., Griffiths, T. L.

NIPS 2015 Workshop on Bounded Optimality and Rational Metareasoning, 2015 (article)

re

[BibTex]

Kinematic and gait similarities between crawling human infants and other quadruped mammals

Righetti, L., Nylen, A., Rosander, K., Ijspeert, A.

Frontiers in Neurology, 6(17), February 2015 (article)

Abstract
Crawling on hands and knees is an early pattern of human infant locomotion, which offers an interesting way of studying quadrupedalism in one of its simplest forms. We investigate how crawling human infants compare to other quadruped mammals, especially primates. We present quantitative data on both the gait and kinematics of seven 10-month-old crawling infants. Body movements were measured with an optoelectronic system giving precise data on 3-dimensional limb movements. Crawling on hands and knees is very similar to the locomotion of non-human primates in terms of the quite protracted arm at touch-down, the coordination between the spine movements in the lateral plane and the limbs, the relatively extended limbs during locomotion and the strong correlation between stance duration and speed of locomotion. However, there are important differences compared to primates, such as the choice of a lateral-sequence walking gait, which is similar to most non-primate mammals, and the relatively stiff elbows during stance as opposed to the quite compliant gaits of primates. These findings raise the question of the role of both the mechanical structure of the body and neural control in determining these characteristics.

mg

link (url) DOI [BibTex]

Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic

Griffiths, T. L., Lieder, F., Goodman, N. D.

Topics in Cognitive Science, 7(2):217-229, Wiley, 2015 (article)

re

[BibTex]

Model-based strategy selection learning

Lieder, F., Griffiths, T. L.

The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2015 (article)

re

Project Page [BibTex]


2014


An autonomous manipulation system based on force control and optimization

Righetti, L., Kalakrishnan, M., Pastor, P., Binney, J., Kelly, J., Voorhies, R. C., Sukhatme, G. S., Schaal, S.

Autonomous Robots, 36(1-2):11-30, January 2014 (article)

Abstract
In this paper we present an architecture for autonomous manipulation. Our approach is based on the belief that contact interactions during manipulation should be exploited to improve dexterity and that optimizing motion plans is useful to create more robust and repeatable manipulation behaviors. We therefore propose an architecture where state of the art force/torque control and optimization-based motion planning are the core components of the system. We give a detailed description of the modules that constitute the complete system and discuss the challenges inherent to creating such a system. We present experimental results for several grasping and manipulation tasks to demonstrate the performance and robustness of our approach.

am mg

link (url) DOI [BibTex]
