2017


Strategy selection as rational metareasoning

Lieder, F., Griffiths, T. L.

Psychological Review, 124, pages: 762-794, American Psychological Association, November 2017 (article)

Abstract
Many contemporary accounts of human reasoning assume that the mind is equipped with multiple heuristics that could be deployed to perform a given task. This raises the question of how the mind determines when to use which heuristic. To answer this question, we developed a rational model of strategy selection, based on the theory of rational metareasoning developed in the artificial intelligence literature. According to our model people learn to efficiently choose the strategy with the best cost–benefit tradeoff by learning a predictive model of each strategy’s performance. We found that our model can provide a unifying explanation for classic findings from domains ranging from decision-making to arithmetic by capturing the variability of people’s strategy choices, their dependence on task and context, and their development over time. Systematic model comparisons supported our theory, and 4 new experiments confirmed its distinctive predictions. Our findings suggest that people gradually learn to make increasingly more rational use of fallible heuristics. This perspective reconciles the 2 poles of the debate about human rationality by integrating heuristics and biases with learning and rationality.


DOI Project Page [BibTex]
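The full model is specified in the paper above; purely as a hedged illustration of the core idea (learning to predict each strategy's reward and execution time, then choosing the strategy with the best predicted cost-benefit tradeoff), the Python sketch below uses simple delta-rule running averages in place of the paper's feature-based predictive model. All names (StrategySelector, time_cost, explore) are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): choose among heuristics by
# learning a running prediction of each strategy's reward and runtime, then pick
# the strategy with the best predicted reward-minus-time-cost score.
import random

class StrategySelector:
    def __init__(self, strategies, time_cost=1.0, lr=0.1, explore=0.1):
        self.strategies = strategies          # dict: name -> callable returning (reward, seconds)
        self.time_cost = time_cost            # opportunity cost per second of deliberation
        self.lr = lr                          # learning rate for the predictive models
        self.explore = explore                # probability of trying a random strategy
        self.pred_reward = {name: 0.0 for name in strategies}
        self.pred_time = {name: 0.0 for name in strategies}

    def score(self, name):
        # Predicted value of using this strategy: reward minus time cost.
        return self.pred_reward[name] - self.time_cost * self.pred_time[name]

    def choose(self):
        if random.random() < self.explore:
            return random.choice(list(self.strategies))
        return max(self.strategies, key=self.score)

    def run_trial(self, problem):
        name = self.choose()
        reward, seconds = self.strategies[name](problem)
        # Update the predictive model of this strategy's performance (delta rule).
        self.pred_reward[name] += self.lr * (reward - self.pred_reward[name])
        self.pred_time[name] += self.lr * (seconds - self.pred_time[name])
        return name, reward
```

On each trial, run_trial executes one strategy on the given problem and updates the predictions from the observed reward and runtime, so strategies with a better learned cost-benefit profile are chosen increasingly often.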

Empirical Evidence for Resource-Rational Anchoring and Adjustment

Lieder, F., Griffiths, T. L., Huys, Q. J. M., Goodman, N. D.

Psychonomic Bulletin & Review, 25, pages: 775-784, Springer, May 2017 (article)

Abstract
People’s estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as a sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people’s rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people’s knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.


link (url) DOI [BibTex]
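As a hedged formalization of the hypothesis these experiments test (a reading of the abstract, not the paper's exact model): an estimator that starts at the anchor should pick the number of adjustments n that minimizes the combined cost of deliberation time and estimation error,

```latex
n^{\star} \;=\; \arg\min_{n \ge 0} \Bigl( c_{\mathrm{time}}\, n \;+\; c_{\mathrm{error}}\, \mathbb{E}\bigl[\, \lvert \hat{x}_n - x \rvert \,\bigr] \Bigr)
```

where \hat{x}_n is the estimate after n adjustments away from the anchor, x is the true quantity, c_time is the opportunity cost per adjustment, and c_error is the cost per unit of error. Because the expected error shrinks as n grows, raising c_time lowers n* while raising c_error raises it, which is the pattern the two experiments report.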

Self-Organized Behavior Generation for Musculoskeletal Robots

Der, R., Martius, G.

Frontiers in Neurorobotics, 11, pages: 8, 2017 (article)


link (url) DOI [BibTex]

A computerized training program for teaching people how to plan better

Lieder, F., Krueger, P. M., Callaway, F., Griffiths, T. L.

PsyArXiv, 2017 (article)


Project Page [BibTex]

Toward a rational and mechanistic account of mental effort

Shenhav, A., Musslick, S., Lieder, F., Kool, W., Griffiths, T., Cohen, J., Botvinick, M.

Annual Review of Neuroscience, 40, pages: 99-124, Annual Reviews, 2017 (article)


Project Page [BibTex]

The anchoring bias reflects rational use of cognitive resources

Lieder, F., Griffiths, T. L., Huys, Q. J. M., Goodman, N. D.

Psychonomic Bulletin & Review, 25, pages: 762-794, Springer, 2017 (article)


[BibTex]


2013


Efficient 3D Object Perception and Grasp Planning for Mobile Manipulation in Domestic Environments

Stueckler, J., Steffens, R., Holz, D., Behnke, S.

Robotics and Autonomous Systems (RAS), 61(10):1106-1115, 2013 (article)


link (url) DOI [BibTex]

Information Driven Self-Organization of Complex Robotic Behaviors

Martius, G., Der, R., Ay, N.

PLoS ONE, 8(5):e63400, Public Library of Science, 2013 (article)


link (url) DOI [BibTex]

Modelling trial-by-trial changes in the mismatch negativity

Lieder, F., Daunizeau, J., Garrido, M. I., Friston, K. J., Stephan, K. E.

PLoS Computational Biology, 9(2):e1002911, Public Library of Science, 2013 (article)


[BibTex]

A neurocomputational model of the mismatch negativity

Lieder, F., Stephan, K. E., Daunizeau, J., Garrido, M. I., Friston, K. J.

PLoS Computational Biology, 9(11):e1003288, Public Library of Science, 2013 (article)


[BibTex]

Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis

Zahedi, K., Martius, G., Ay, N.

Frontiers in Psychology, 4(801), 2013 (article)

Abstract
One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviours. Our approach is to use task-independent, information-driven intrinsic motivation(s) to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information of the past and future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviours, because a maximisation of the PI corresponds to an exploration of morphology- and environment-dependent behavioural regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting. Only for hard tasks can a great speed-up be achieved, at the cost of a loss in asymptotic performance.


link (url) DOI [BibTex]
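As a hedged illustration of the quantities the abstract refers to, the Python sketch below computes a plug-in estimate of the one-step predictive information I(S_t; S_{t+1}) of a discretized one-dimensional sensor stream and combines it linearly with an external return. The histogram estimator, the number of bins, and the weighting constant beta are illustrative choices, not the exact quantities or settings used in the paper.

```python
# Hedged sketch: plug-in estimate of one-step predictive information I(S_t; S_{t+1})
# from empirical pair counts, combined linearly with an external episode return.
import numpy as np

def one_step_predictive_information(sensor_stream, n_bins=10):
    """Histogram estimate of I(S_t; S_{t+1}) for a 1-D sensor stream (in bits)."""
    s = np.asarray(sensor_stream, dtype=float)
    bins = np.linspace(s.min(), s.max(), n_bins + 1)
    d = np.clip(np.digitize(s, bins) - 1, 0, n_bins - 1)   # discretize the stream
    joint = np.zeros((n_bins, n_bins))
    for a, b in zip(d[:-1], d[1:]):                        # count (S_t, S_{t+1}) pairs
        joint[a, b] += 1
    joint /= joint.sum()
    p_t = joint.sum(axis=1, keepdims=True)                 # marginal of S_t
    p_t1 = joint.sum(axis=0, keepdims=True)                # marginal of S_{t+1}
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_t @ p_t1)[nz])))

def combined_episode_return(external_return, sensor_stream, beta=0.1):
    """Linear combination of the task reward and the intrinsic PI drive."""
    return external_return + beta * one_step_predictive_information(sensor_stream)
```

Setting beta to zero recovers the purely external objective, which is the baseline such a linear combination would be compared against.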


Robustness of guided self-organization against sensorimotor disruptions

Martius, G.

Advances in Complex Systems, 16(02n03):1350001, 2013 (article)

Abstract
Self-organizing processes are crucial for the development of living beings. Practical applications in robots may benefit from the self-organization of behavior, e.g., to increase fault tolerance and enhance flexibility, provided that external goals can also be achieved. We present results on the guidance of self-organizing control by visual target stimuli and show a remarkable robustness to sensorimotor disruptions. In a proof of concept study an autonomous wheeled robot is learning an object finding and ball-pushing task from scratch within a few minutes in continuous domains. The robustness is demonstrated by the rapid recovery of the performance after severe changes of the sensor configuration.


DOI [BibTex]


2012


Burn-in, bias, and the rationality of anchoring

Lieder, F., Griffiths, T. L., Goodman, N. D.

Advances in Neural Information Processing Systems 25, pages: 2699-2707, 2012 (article)

Abstract
Bayesian inference provides a unifying framework for addressing problems in machine learning, artificial intelligence, and robotics, as well as the problems facing the human mind. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind's inference algorithm(s). We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e. during the period known as burn-in. Therefore the strategy that is optimal subject to the mind's bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model's quantitative predictions are tested against published data on anchoring in numerical estimation tasks. Our theoretical and empirical results suggest that the anchoring bias is consistent with approximate Bayesian inference.


link (url) [BibTex]
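The abstract's central point can be reproduced in a few lines: a Metropolis-Hastings chain stopped long before convergence yields estimates biased towards its starting point, i.e. the anchor. The target density, proposal width, and iteration counts in this Python sketch are illustrative and are not the settings used in the paper.

```python
# Hedged illustration: posterior-mean estimates from short Metropolis-Hastings chains
# remain biased towards the initial value ("anchor") during burn-in.
import math
import random

def metropolis_hastings_estimate(log_density, anchor, n_iterations, proposal_sd=1.0):
    """Posterior-mean estimate from a Metropolis-Hastings chain started at `anchor`."""
    x = anchor
    samples = []
    for _ in range(n_iterations):
        proposal = random.gauss(x, proposal_sd)
        # Accept with probability min(1, p(proposal) / p(current)).
        accept_prob = math.exp(min(0.0, log_density(proposal) - log_density(x)))
        if random.random() < accept_prob:
            x = proposal
        samples.append(x)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    log_density = lambda z: -0.5 * z * z          # standard normal target, true mean 0
    for n in (5, 50, 5000):
        estimate = metropolis_hastings_estimate(log_density, anchor=10.0, n_iterations=n)
        print(f"{n:5d} iterations -> estimate {estimate:+.2f}")
```

With only a handful of iterations the printed estimate stays close to the anchor of 10; with thousands of iterations it approaches the target mean of 0.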

RoboCup@Home: Demonstrating Everyday Manipulation Skills in RoboCup@Home

Stueckler, J., Holz, D., Behnke, S.

IEEE Robotics and Automation Magazine (RAM), 19(2):34-42, 2012 (article)


link (url) DOI [BibTex]

Variants of guided self-organization for robot control

Martius, G., Herrmann, J.

Theory in Biosciences, 131(3):129-137, Springer Berlin / Heidelberg, 2012 (article)


link (url) DOI [BibTex]

The Playful Machine - Theoretical Foundation and Practical Realization of Self-Organizing Robots

Der, R., Martius, G.

Springer, Berlin Heidelberg, 2012 (book)

Abstract
Autonomous robots may become our closest companions in the near future. While the technology for physically building such machines is already available today, a problem lies in the generation of the behavior for such complex machines. Nature proposes a solution: young children and higher animals learn to master their complex brain-body systems by playing. Can this be an option for robots? How can a machine be playful? The book provides answers by developing a general principle (homeokinesis, the dynamical symbiosis between brain, body, and environment) that is shown to drive robots to self-determined, individual development in a playful and obviously embodiment-related way: a dog-like robot starts playing with a barrier, eventually jumping or climbing over it; a snakebot develops coiling and jumping modes; humanoids develop climbing behaviors when fallen into a pit, or engage in wrestling-like scenarios when encountering an opponent. The book also develops guided self-organization, a new method that helps to make the playful machines fit for fulfilling tasks in the real world.


link (url) [BibTex]


Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers

Rolinek, M., Swoboda, P., Zietlow, D., Paulus, A., Musil, V., Martius, G.

arXiv (article)

Abstract
Building on recent progress at the intersection of combinatorial optimization and deep learning, we propose an end-to-end trainable architecture for deep graph matching that contains unmodified combinatorial solvers. Using the presence of heavily optimized combinatorial solvers together with some improvements in architecture design, we advance state-of-the-art on deep graph matching benchmarks for keypoint correspondence. In addition, we highlight the conceptual advantages of incorporating solvers into deep learning architectures, such as the possibility of post-processing with a strong multi-graph matching solver or the indifference to changes in the training setting. Finally, we propose two new challenging experimental setups.


arXiv [BibTex]
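The architecture described in the abstract relies on the blackbox-differentiation scheme named in the title; the PyTorch-style sketch below illustrates that scheme for a generic solver callable that maps a cost array to a discrete solution of the same shape. It is a sketch under those assumptions, not the authors' released implementation.

```python
# Hedged sketch of blackbox differentiation of a combinatorial solver as a PyTorch
# autograd function. `solver` is assumed to map a numpy cost array to a discrete
# (e.g., 0/1) solution of the same shape; `lam` controls the gradient interpolation.
import torch

class BlackboxSolverLayer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, weights, solver, lam):
        # Forward pass: call the (non-differentiable) solver on the predicted costs.
        y = torch.as_tensor(solver(weights.detach().cpu().numpy()),
                            dtype=weights.dtype, device=weights.device)
        ctx.save_for_backward(weights, y)
        ctx.solver, ctx.lam = solver, lam
        return y

    @staticmethod
    def backward(ctx, grad_output):
        weights, y = ctx.saved_tensors
        # Backward pass: re-solve on costs perturbed in the direction of the incoming
        # gradient, and return a finite-difference gradient of the piecewise-constant map.
        perturbed = weights + ctx.lam * grad_output
        y_lam = torch.as_tensor(ctx.solver(perturbed.detach().cpu().numpy()),
                                dtype=weights.dtype, device=weights.device)
        return -(y - y_lam) / ctx.lam, None, None
```

A layer like this would be used as `y = BlackboxSolverLayer.apply(costs, solver, lam)`, letting gradients of a downstream matching loss flow into the network that produces the costs.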