

2018


Rational metareasoning and the plasticity of cognitive control

Lieder, F., Shenhav, A., Musslick, S., Griffiths, T. L.

PLOS Computational Biology, 14(4):e1006043, Public Library of Science, April 2018 (article)

Abstract
The human brain has the impressive capacity to adapt how it processes information to high-level goals. While it is known that these cognitive control skills are malleable and can be improved through training, the underlying plasticity mechanisms are not well understood. Here, we develop and evaluate a model of how people learn when to exert cognitive control, which controlled process to use, and how much effort to exert. We derive this model from a general theory according to which the function of cognitive control is to select and configure neural pathways so as to make optimal use of finite time and limited computational resources. The central idea of our Learned Value of Control model is that people use reinforcement learning to predict the value of candidate control signals of different types and intensities based on stimulus features. This model correctly predicts the learning and transfer effects underlying the adaptive control-demanding behavior observed in an experiment on visual attention and four experiments on interference control in Stroop and Flanker paradigms. Moreover, our model explained these findings significantly better than an associative learning model and a Win-Stay Lose-Shift model. Our findings elucidate how learning and experience might shape people’s ability and propensity to adaptively control their minds and behavior. We conclude by predicting under which circumstances these learning mechanisms might lead to self-control failure.
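
How such a learning mechanism could work is easy to sketch in code. The following Python sketch is a minimal illustration, not the authors' exact specification: the feature encoding, learning rate, epsilon-greedy exploration, and quadratic effort cost are all illustrative assumptions. It captures the central idea of the Learned Value of Control model: learn a prediction of each candidate control signal's value from stimulus features and select the signal whose predicted payoff, net of effort cost, is highest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate control-signal intensities (e.g., how strongly to bias the
# task-relevant pathway). The discretization is an illustrative choice.
intensities = np.linspace(0.0, 2.0, 9)

n_features = 4                # stimulus feature dimension (assumed)
W = np.zeros((len(intensities), n_features))  # one reward predictor per signal
alpha = 0.1                   # learning rate (assumed)
effort_cost = 0.3             # cost per unit of squared intensity (assumed)

def choose_signal(features, epsilon=0.1):
    """Pick the intensity with the highest predicted value of control
    (predicted reward minus effort cost), with epsilon-greedy exploration."""
    predicted_reward = W @ features
    value_of_control = predicted_reward - effort_cost * intensities**2
    if rng.random() < epsilon:
        return int(rng.integers(len(intensities)))
    return int(np.argmax(value_of_control))

def update(choice, features, reward):
    """Delta-rule (reinforcement learning) update of the reward prediction
    for the chosen control signal."""
    prediction = W[choice] @ features
    W[choice] += alpha * (reward - prediction) * features

# Toy environment: control pays off only on incongruent trials
# (features[0] == 1), as in a Stroop-like task. Purely illustrative.
for trial in range(2000):
    features = np.array([rng.integers(2), 1.0, rng.random(), rng.random()])
    c = choose_signal(features)
    reward = features[0] * intensities[c] + rng.normal(0.0, 0.1)
    update(c, features, reward)
```

After training, the predictor assigns high value to strong control signals only when the stimulus features indicate an incongruent trial, which is the kind of feature-based learning and transfer the experiments test.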


DOI Project Page [BibTex]


Over-Representation of Extreme Events in Decision Making Reflects Rational Use of Cognitive Resources

Lieder, F., Griffiths, T. L., Hsu, M.

Psychological Review, 125(1):1-32, January 2018 (article)

Abstract
People’s decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision-makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time, decision-makers should over-represent the most important potential consequences relative to less important, but potentially more probable, outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people’s availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall.
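
The core estimator can be sketched compactly. Below is a minimal Python sketch under simplifying assumptions (a small known outcome distribution, utility equal to the monetary outcome, and the expected utility as the anchor for extremity; the paper develops learned, sample-based versions): outcomes are simulated in proportion to their probability times the extremity of their utility, and the expected utility is then estimated with self-normalized importance weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gamble: outcomes, their probabilities, and their utilities.
outcomes = np.array([-100.0, -1.0, 0.0, 1.0, 100.0])
p = np.array([0.005, 0.2475, 0.495, 0.2475, 0.005])
u = outcomes                                  # utility = outcome (assumed)

def uws_estimate(n_samples=10):
    """Utility-weighted sampling (sketch): simulate outcomes with probability
    proportional to p(o) * |u(o) - E[u]|, then correct for the distortion
    with self-normalized importance weights."""
    extremity = np.abs(u - p @ u) + 1e-12     # how extreme each utility is
    q = p * extremity
    q /= q.sum()                              # utility-weighted simulation dist.
    idx = rng.choice(len(outcomes), size=n_samples, p=q)
    w = p[idx] / q[idx]                       # importance weights
    return np.sum(w * u[idx]) / np.sum(w)

# With few simulations, the rare extreme outcomes (+/-100) are heavily
# over-represented, the proposed source of the availability bias.
print(uws_estimate(n_samples=5), "vs. true expected utility:", p @ u)
```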


DOI [BibTex]



Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence

(Glushko Prize 2020)

Lieder, F.

University of California, Berkeley, 2018 (phdthesis)

Abstract
Bad decisions can have devastating consequences: There is a vast body of literature claiming that human judgment and decision-making are riddled with numerous systematic violations of the rules of logic, probability theory, and expected utility theory. The discovery of these cognitive biases in the 1970s (Tversky & Kahneman, 1974) made people question the concept of Homo sapiens as the rational animal, profoundly shaking the foundations of economics and rational models in the cognitive, neural, and social sciences. Four decades later, these disciplines still lack a rigorous theoretical foundation for explaining and remedying people’s cognitive biases. To solve this problem, my dissertation offers a mathematically precise theory of bounded rationality and demonstrates how it can be leveraged to elucidate the cognitive mechanisms of judgment and decision-making (Part 1) and to help people make better decisions (Part 2).


Précis of Beyond Bounded Rationality: Reverse-Engineering and Enhancing Human Intelligence DOI [BibTex]


The Computational Challenges of Pursuing Multiple Goals: Network Structure of Goal Systems Predicts Human Performance

Reichman, D., Lieder, F., Bourgin, D. D., Talmon, N., Griffiths, T. L.

PsyArXiv, 2018 (article)

Abstract
Extant psychological theories attribute people’s failure to achieve their goals primarily to failures of self-control, insufficient motivation, or lacking skills. We develop a complementary theory specifying conditions under which the computational complexity of making the right decisions becomes prohibitive of goal achievement regardless of skill or motivation. We support our theory by predicting human performance from factors determining the computational complexity of selecting the optimal set of means for goal achievement. Following previous theories of goal pursuit, we express the relationship between goals and means as a bipartite graph where edges between means and goals indicate which means can be used to achieve which goals. This allows us to map two computational challenges that arise in goal achievement onto two classic combinatorial optimization problems: Set Cover and Maximum Coverage. While these problems are believed to be computationally intractable on general networks, their solution can nevertheless be efficiently approximated when the structure of the network resembles a tree. Thus, our initial prediction was that people should perform better with goal systems that are more tree-like. In addition, our theory predicted that people’s performance at selecting means should be a U-shaped function of the average number of goals each means is relevant to and the average number of means through which each goal could be accomplished. Here we report on six behavioral experiments which confirmed these predictions. Our results suggest that combinatorial parameters that are instrumental to algorithm design can also be useful for understanding when and why people struggle to pursue their goals effectively.
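
The computational mapping can be made concrete with a short sketch. Assuming the goal system is encoded as a dict from each means to the set of goals it can achieve (an illustrative encoding, not the authors' experimental materials), the classic greedy approximation to Set Cover finds a small set of means covering all goals; it is the tractability of such approximations on tree-like networks that motivates the predictions above.

```python
def greedy_set_cover(goals, means_to_goals):
    """Greedy approximation to Set Cover on a bipartite goal-means graph:
    repeatedly pick the means that achieves the most still-uncovered goals.
    Returns a cover within a logarithmic factor of the optimal size."""
    uncovered = set(goals)
    chosen = []
    while uncovered:
        best = max(means_to_goals,
                   key=lambda m: len(means_to_goals[m] & uncovered))
        if not means_to_goals[best] & uncovered:
            raise ValueError("some goals cannot be achieved by any means")
        chosen.append(best)
        uncovered -= means_to_goals[best]
    return chosen

# Toy goal system: edges say which means can achieve which goals.
means_to_goals = {
    "exercise":  {"health", "mood"},
    "meal prep": {"health", "savings"},
    "budgeting": {"savings"},
    "reading":   {"mood", "skills"},
}
print(greedy_set_cover({"health", "mood", "savings", "skills"},
                       means_to_goals))
```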


DOI [BibTex]


2012


Burn-in, bias, and the rationality of anchoring

Lieder, F., Griffiths, T. L., Goodman, N. D.

Advances in Neural Information Processing Systems 25, pages: 2699-2707, 2012 (article)

Abstract
Bayesian inference provides a unifying framework for addressing problems in machine learning, artificial intelligence, and robotics, as well as the problems facing the human mind. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind's inference algorithm(s). We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e. during the period known as burn-in. Therefore the strategy that is optimal subject to the mind's bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model's quantitative predictions are tested against published data on anchoring in numerical estimation tasks. Our theoretical and empirical results suggest that the anchoring bias is consistent with approximate Bayesian inference.
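
The model's mechanism can be sketched in a few lines. The sketch below assumes a Gaussian posterior and a symmetric Gaussian proposal (both illustrative; the paper fits the framework to published anchoring data): a Metropolis-Hastings chain is started at the anchor and stopped after only a few iterations, so its final state remains biased toward the anchor, the signature of anchoring-and-adjustment.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_posterior(x, mu=50.0, sigma=5.0):
    """Toy Gaussian posterior over the quantity being estimated (assumed)."""
    return -0.5 * ((x - mu) / sigma) ** 2

def mh_estimate(anchor, n_iter, step=2.0):
    """Metropolis-Hastings started at the anchor and stopped after n_iter
    steps, possibly well before burn-in ends; the final state is reported
    as the estimate."""
    x = anchor
    for _ in range(n_iter):
        proposal = x + rng.normal(0.0, step)  # symmetric random-walk proposal
        if np.log(rng.random()) < log_posterior(proposal) - log_posterior(x):
            x = proposal
    return x

# Few adjustments leave the estimate biased toward the anchor (100, far
# above the posterior mean of 50); more iterations wash the bias out.
for n_iter in (5, 25, 500):
    estimates = [mh_estimate(anchor=100.0, n_iter=n_iter) for _ in range(2000)]
    print(n_iter, round(float(np.mean(estimates)), 1))
```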


link (url) [BibTex]
