

2019


Life Improvement Science: A Manifesto

Lieder, F.

December 2019 (article), in revision

Abstract
Rapid technological advances present unprecedented opportunities for helping people thrive. This manifesto presents a road map for establishing a solid scientific foundation upon which those opportunities can be realized. It highlights fundamental open questions about the cognitive underpinnings of effective living and how they can be improved, supported, and augmented. These questions are at the core of my proposal for a new transdisciplinary research area called life improvement science. Recent advances have made these questions amenable to scientific rigor, and emerging approaches are paving the way towards practical strategies, clever interventions, and (intelligent) apps for empowering people to reach unprecedented levels of personal effectiveness and wellbeing.


Life improvement science: a manifesto DOI [BibTex]


Learning to Explore in Motion and Interaction Tasks

Bogdanovic, M., Righetti, L.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, November 2019 (conference)

Abstract
Model-free reinforcement learning suffers from the high sampling complexity inherent to robotic manipulation and locomotion tasks. Most successful approaches typically use random sampling strategies, which lead to slow policy convergence. In this paper we present a novel approach for efficient exploration that leverages previously learned tasks. We exploit the fact that the same system is used across many tasks and build a generative model for exploration from data on previously solved tasks to accelerate the learning of new ones. The approach also enables continuous learning of improved exploration strategies as novel tasks are learned. Extensive simulations on a robot manipulator performing a variety of motion and contact interaction tasks demonstrate the capabilities of the approach. In particular, our experiments suggest that the exploration strategy can more than double learning speed, especially when rewards are sparse. Moreover, the algorithm is robust to task variations and parameter tuning, making it well suited to complex robotic problems.
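The paper's idea of fitting a generative exploration model to data from solved tasks, then sampling exploration from it instead of uniform noise, can be sketched minimally. The diagonal Gaussian over actions, and all names and numbers here, are illustrative assumptions rather than the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_exploration_model(past_actions):
    """Fit a diagonal Gaussian to actions from previously solved tasks
    (a simplified stand-in for the paper's generative exploration model)."""
    mu = past_actions.mean(axis=0)
    sigma = past_actions.std(axis=0) + 1e-6  # small floor avoids zero variance
    return mu, sigma

def sample_exploration(mu, sigma, n=1):
    """Draw exploration actions biased toward previously successful regions."""
    return rng.normal(mu, sigma, size=(n, len(mu)))

# hypothetical actions collected while solving earlier tasks on the same robot
past = np.array([[0.9, -0.1], [1.1, 0.1], [1.0, 0.0]])
mu, sigma = fit_exploration_model(past)
samples = sample_exploration(mu, sigma, n=100)
```

Because samples concentrate where earlier tasks found reward, a new task on the same system starts its search in a promising region rather than from uniform noise.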

arXiv [BibTex]


Robust Humanoid Locomotion Using Trajectory Optimization and Sample-Efficient Learning

Yeganegi, M. H., Khadiv, M., Moosavian, S. A. A., Zhu, J., Prete, A. D., Righetti, L.

Proceedings of the 2019 IEEE-RAS International Conference on Humanoid Robots, IEEE, October 2019 (conference)

Abstract
Trajectory optimization (TO) is one of the most powerful tools for generating feasible motions for humanoid robots. However, including uncertainties and stochasticity in the TO problem to generate robust motions can easily lead to intractable problems. Furthermore, since the models used in TO always have some level of abstraction, it can be hard to find a realistic set of uncertainties in the model space. In this paper we leverage a sample-efficient learning technique (Bayesian optimization) to robustify TO for humanoid locomotion. The main idea is to use data from full-body simulations to make the TO stage robust by tuning the cost weights. To this end, we split the TO problem into two phases. The first phase solves a convex optimization problem for generating center of mass (CoM) trajectories based on simplified linear dynamics. The second phase employs iterative Linear-Quadratic Gaussian (iLQG) as a whole-body controller to generate full-body control inputs. We then use Bayesian optimization to find the cost weights for the first phase that yield robust performance in simulation/experiment in the presence of different disturbances and uncertainties. The results show that the proposed approach is able to generate robust motions for different sets of disturbances and uncertainties.
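The weight-tuning loop described above can be sketched with a minimal Gaussian-process Bayesian optimization over a single cost weight. The quadratic `cost_weight_score` is a stand-in for running the TO pipeline plus a full-body simulation and measuring robustness; it, and all parameter values, are assumptions for illustration only.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ell=0.25):
    """Squared-exponential kernel on scalar inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xq, jitter=1e-6):
    """GP posterior mean and standard deviation at query points Xq."""
    Kinv = np.linalg.inv(rbf(X, X) + jitter * np.eye(len(X)))
    Ks = rbf(X, Xq)
    mu = Ks.T @ Kinv @ y
    var = np.clip(1.0 - np.einsum('ij,ik,kj->j', Ks, Kinv, Ks), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    """EI for minimization: how much we expect to improve on the best score."""
    z = (best - mu) / sd
    Phi = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return sd * (z * Phi + phi)

def cost_weight_score(w):
    # hypothetical stand-in for "solve TO with weight w, simulate, score
    # failures"; lower is better, best robustness assumed near w = 0.65
    return (w - 0.65) ** 2

Xq = np.linspace(0.0, 1.0, 101)              # candidate cost weights
X = np.array([0.0, 0.5, 1.0])                # initial evaluations
y = np.array([cost_weight_score(w) for w in X])
for _ in range(10):                          # sample-efficient search loop
    mu, sd = gp_posterior(X, y, Xq)
    w_next = Xq[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, w_next)
    y = np.append(y, cost_weight_score(w_next))
best_w = X[np.argmin(y)]
```

Each loop iteration spends one expensive "simulation" where the surrogate model expects the most improvement, which is the reason Bayesian optimization suits settings where every evaluation is a full-body rollout.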

https://arxiv.org/abs/1907.04616 link (url) [BibTex]


How do people learn how to plan?

Jain, Y. R., Gupta, S., Rakesh, V., Dayan, P., Callaway, F., Lieder, F.

Conference on Cognitive Computational Neuroscience, September 2019 (conference)

Abstract
How does the brain learn how to plan? We reverse-engineer people's underlying learning mechanisms by combining rational process models of cognitive plasticity with recently developed empirical methods that allow us to trace the temporal evolution of people's planning strategies. We find that our Learned Value of Computation (LVOC) model accurately captures people's average learning curve. However, there were also substantial individual differences in metacognitive learning that are best understood in terms of multiple different learning mechanisms, including strategy selection learning. Furthermore, we observed that LVOC could not fully capture people's ability to adaptively decide when to stop planning. We successfully extended the LVOC model to address these discrepancies. Our models broadly capture people's ability to improve their decision mechanisms and represent a significant step towards reverse-engineering how the brain learns increasingly effective cognitive strategies through its interaction with the environment.

How do people learn to plan? [BibTex]


Cognitive Prostheses for Goal Achievement

Lieder, F., Chen, O. X., Krueger, P. M., Griffiths, T. L.

Nature Human Behaviour, 3, August 2019 (article)

Abstract
Procrastination and impulsivity take a significant toll on people’s lives and the economy at large. Both can result from the misalignment of an action's proximal rewards with its long-term value. Therefore, aligning immediate reward with long-term value could be a way to help people overcome motivational barriers and make better decisions. Previous research has shown that game elements, such as points, levels, and badges, can be used to motivate people and nudge their decisions on serious matters. Here, we develop a new approach to decision support that leverages artificial intelligence and game elements to restructure challenging sequential decision problems in such a way that it becomes easier for people to take the right course of action. A series of four increasingly realistic experiments suggests that this approach can enable people to make better decisions faster, procrastinate less, complete their work on time, and waste less time on unimportant tasks. These findings suggest that our method is a promising step towards developing cognitive prostheses that help people achieve their goals by enhancing their motivation and decision-making in everyday life.
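One standard way to align immediate reward with long-term value is potential-based reward shaping, which provably leaves optimal policies unchanged. The toy procrastination example below, with made-up states and values, illustrates the spirit of the approach; it is not the authors' exact pseudo-reward computation.

```python
def shaped_reward(r, s, s_next, V, gamma=1.0):
    """Potential-based pseudo-reward: r' = r + gamma * V(s') - V(s).
    Adds the change in long-term value to the immediate reward."""
    return r + gamma * V[s_next] - V[s]

# hypothetical values: finishing the report is worth 10, being distracted 0,
# and the start state's value (10 minus one unit of effort) is 9
V = {"start": 9.0, "report_done": 10.0, "distracted": 0.0}

# writing the report costs effort now (-1); social media pays +1 now
work_now = shaped_reward(-1.0, "start", "report_done", V)      # -1 + 10 - 9 = 0
procrastinate = shaped_reward(+1.0, "start", "distracted", V)  # +1 + 0 - 9 = -8
```

Under the raw rewards, procrastinating looks better (+1 vs. -1); under the shaped rewards, working now wins (0 vs. -8), so a myopic decision-maker is nudged toward the far-sighted choice.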

DOI [BibTex]


Learning Variable Impedance Control for Contact Sensitive Tasks

Bogdanovic, M., Khadiv, M., Righetti, L.

arXiv preprint, arXiv:1907.07500, July 2019 (article)

Abstract
Reinforcement learning algorithms have shown great success in solving problems ranging from playing video games to robotics. However, they struggle to solve delicate robotic problems, especially those involving contact interactions. Though in principle a policy outputting joint torques should be able to learn these tasks, in practice we see that it has difficulty robustly solving the problem without any structure in the action space. In this paper, we investigate how the choice of action space can give robust performance in the presence of contact uncertainties. We propose to learn a policy that outputs impedance and desired position in joint space as a function of system states, without imposing any other structure on the problem. We compare the performance of this approach to torque and position control policies under different contact uncertainties. Extensive simulation results on two different systems, a hopper (floating-base) with intermittent contacts and a manipulator (fixed-base) wiping a table, show that our proposed approach outperforms policies outputting torque or position in terms of both learning rate and robustness to environment uncertainty.
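A minimal sketch of the proposed action space: the policy emits a desired joint position and a stiffness per joint, and a fixed impedance law converts them to torques. The damping heuristic and all numbers are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def impedance_torque(action, q, dq, kd_ratio=0.1):
    """Map a policy action [q_des, kp] to joint torques via an impedance law:
    tau = kp * (q_des - q) - kd * dq.
    Damping is tied to stiffness here (a common heuristic, assumed)."""
    n = len(q)
    q_des, kp = action[:n], action[n:]
    kd = kd_ratio * np.sqrt(kp)          # damping grows with stiffness
    return kp * (q_des - q) - kd * dq

# hypothetical 2-joint arm: targets [0.5, -0.2] rad, stiffnesses [100, 50]
tau = impedance_torque(np.array([0.5, -0.2, 100.0, 50.0]),
                       q=np.array([0.4, -0.1]),
                       dq=np.zeros(2))
```

Because the policy can lower `kp` near expected contact, the same position error produces a gentler torque, which is the mechanism behind the robustness to contact uncertainty reported above.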

[BibTex]


Extending Rationality

Pothos, E. M., Busemeyer, J. R., Pleskac, T., Yearsley, J. M., Tenenbaum, J. B., Goodman, N. D., Tessler, M. H., Griffiths, T. L., Lieder, F., Hertwig, R., Pachur, T., Leuker, C., Shiffrin, R. M.

Proceedings of the 41st Annual Conference of the Cognitive Science Society, pages: 39-40, CogSci 2019, July 2019 (conference)

Proceedings of the 41st Annual Conference of the Cognitive Science Society [BibTex]


What’s in the Adaptive Toolbox and How Do People Choose From It? Rational Models of Strategy Selection in Risky Choice

Mohnert, F., Pachur, T., Lieder, F.

41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

[BibTex]


Measuring how people learn how to plan

Jain, Y. R., Callaway, F., Lieder, F.

RLDM 2019, July 2019 (conference)

[BibTex]


Measuring how people learn how to plan

Jain, Y. R., Callaway, F., Lieder, F.

41st Annual Meeting of the Cognitive Science Society, July 2019 (conference)

[BibTex]


A cognitive tutor for helping people overcome present bias

Lieder, F., Callaway, F., Jain, Y., Krueger, P., Das, P., Gul, S., Griffiths, T.

RLDM 2019, July 2019 (conference)

[BibTex]


Introducing the Decision Advisor: A simple online tool that helps people overcome cognitive biases and experience less regret in real-life decisions

Iwama, G., Greenberg, S., Moore, D., Lieder, F.

40th Annual Meeting of the Society for Judgment and Decision Making, June 2019 (conference)

[BibTex]


Efficient Humanoid Contact Planning using Learned Centroidal Dynamics Prediction

Lin, Y., Ponton, B., Righetti, L., Berenson, D.

International Conference on Robotics and Automation (ICRA), pages: 5280-5286, IEEE, May 2019 (conference)

DOI [BibTex]


Leveraging Contact Forces for Learning to Grasp

Merzic, H., Bogdanovic, M., Kappler, D., Righetti, L., Bohg, J.

In Proceedings of the 2019 IEEE International Conference on Robotics and Automation (ICRA), IEEE, May 2019 (inproceedings)

Abstract
Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used two-fingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.

video arXiv [BibTex]


Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources

Lieder, F., Griffiths, T. L.

Behavioral and Brain Sciences, 43, E1, February 2019 (article)

Abstract
Modeling human cognition is challenging because there are infinitely many mechanisms that can generate any given observation. Some researchers address this by constraining the hypothesis space through assumptions about what the human mind can and cannot do, while others constrain it through principles of rationality and adaptation. Recent work in economics, psychology, neuroscience, and linguistics has begun to integrate both approaches by augmenting rational models with cognitive constraints, incorporating rational principles into cognitive architectures, and applying optimality principles to understanding neural representations. We identify the rational use of limited resources as a unifying principle underlying these diverse approaches, expressing it in a new cognitive modeling paradigm called resource-rational analysis. The integration of rational principles with realistic cognitive constraints makes resource-rational analysis a promising framework for reverse-engineering cognitive mechanisms and representations. It has already shed new light on the debate about human rationality and can be leveraged to revisit classic questions of cognitive psychology within a principled computational framework. We demonstrate that resource-rational models can reconcile the mind's most impressive cognitive skills with people's ostensible irrationality. Resource-rational analysis also provides a new way to connect psychological theory more deeply with artificial intelligence, economics, neuroscience, and linguistics.

DOI [BibTex]


A Robustness Analysis of Inverse Optimal Control of Bipedal Walking

Rebula, J. R., Schaal, S., Finley, J., Righetti, L.

IEEE Robotics and Automation Letters, 4(4):4531-4538, 2019 (article)

DOI [BibTex]


Rigid vs compliant contact: an experimental study on biped walking

Khadiv, M., Moosavian, S. A. A., Yousefi-Koma, A., Sadedel, M., Ehsani-Seresht, A., Mansouri, S.

Multibody System Dynamics, 45(4):379-401, 2019 (article)

DOI [BibTex]


Doing more with less: Meta-reasoning and meta-learning in humans and machines

Griffiths, T., Callaway, F., Chang, M., Grant, E., Krueger, P. M., Lieder, F.

Current Opinion in Behavioral Sciences, 2019 (article)

DOI [BibTex]


Remediating cognitive decline with cognitive tutors

Das, P., Callaway, F., Griffiths, T., Lieder, F.

RLDM 2019, 2019 (conference)

[BibTex]


Birch tar production does not prove Neanderthal behavioral complexity

Schmidt, P., Blessing, M., Rageot, M., Iovita, R., Pfleging, J., Nickel, K. G., Righetti, L., Tennie, C.

Proceedings of the National Academy of Sciences (PNAS), 116(36):17707-17711, 2019 (article)

DOI [BibTex]


A rational reinterpretation of dual process theories

Milli, S., Lieder, F., Griffiths, T.

2019 (article)

DOI [BibTex]

2013


AGILITY – Dynamic Full Body Locomotion and Manipulation with Autonomous Legged Robots

Hutter, M., Bloesch, M., Buchli, J., Semini, C., Bazeille, S., Righetti, L., Bohg, J.

In 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pages: 1-4, IEEE, Linköping, Sweden, 2013 (inproceedings)

link (url) DOI [BibTex]


Learning Objective Functions for Manipulation

Kalakrishnan, M., Pastor, P., Righetti, L., Schaal, S.

In 2013 IEEE International Conference on Robotics and Automation, IEEE, Karlsruhe, Germany, 2013 (inproceedings)

Abstract
We present an approach to learning objective functions for robotic manipulation based on inverse reinforcement learning. Our path integral inverse reinforcement learning algorithm can deal with high-dimensional continuous state-action spaces, and only requires local optimality of demonstrated trajectories. We use L1 regularization in order to achieve feature selection, and propose an efficient algorithm to minimize the resulting convex objective function. We demonstrate our approach by applying it to two core problems in robotic manipulation. First, we learn a cost function for redundancy resolution in inverse kinematics. Second, we use our method to learn a cost function over trajectories, which is then used in optimization-based motion planning for grasping and manipulation tasks. Experimental results show that our method outperforms previous algorithms in high-dimensional settings.
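The L1-regularized feature selection can be illustrated with a generic proximal-gradient (ISTA) solver for a lasso-type objective. This is a textbook sketch, with invented toy data, not the paper's own efficient minimization algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.1, steps=500):
    """Minimize 0.5 * ||A w - b||^2 + lam * ||w||_1 by proximal gradient.
    The L1 term drives small, uninformative feature weights to exactly zero."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ w - b)
        w = soft_threshold(w - grad / L, lam / L)
    return w

# toy problem: three features, the middle one barely informative
A = np.eye(3)
b = np.array([1.0, 0.05, -0.5])
w = ista(A, b, lam=0.1)
```

With `lam = 0.1`, the weakly supported middle weight is shrunk exactly to zero while the informative weights survive (slightly shrunk), which is the feature-selection effect the abstract refers to.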

link (url) DOI [BibTex]


Optimal distribution of contact forces with inverse-dynamics control

Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., Schaal, S.

The International Journal of Robotics Research, 32(3):280-298, March 2013 (article)

Abstract
The development of legged robots for complex environments requires controllers that guarantee both high tracking performance and compliance with the environment. More specifically the control of the contact interaction with the environment is of crucial importance to ensure stable, robust and safe motions. In this contribution we develop an inverse-dynamics controller for floating-base robots under contact constraints that can minimize any combination of linear and quadratic costs in the contact constraints and the commands. Our main result is the exact analytical derivation of the controller. Such a result is particularly relevant for legged robots as it allows us to use torque redundancy to directly optimize contact interactions. For example, given a desired locomotion behavior, we can guarantee the minimization of contact forces to reduce slipping on difficult terrains while ensuring high tracking performance of the desired motion. The main advantages of the controller are its simplicity, computational efficiency and robustness to model inaccuracies. We present detailed experimental results on simulated humanoid and quadruped robots as well as a real quadruped robot. The experiments demonstrate that the controller can greatly improve the robustness of locomotion of the robots.
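The use of torque redundancy to minimize contact forces can be conveyed with a toy weighted least-norm resolution of the constrained dynamics. The paper derives an exact analytical controller for full floating-base dynamics; the numpy sketch below, with invented toy matrices, only illustrates the optimization idea.

```python
import numpy as np

def min_force_inverse_dynamics(M, h, qdd_des, S, Jc, w_tau=1e-3):
    """Toy resolution of M qdd + h = S^T tau + Jc^T lam:
    among all (tau, lam) realizing qdd_des, pick the one minimizing
    ||lam||^2 + w_tau * ||tau||^2, i.e. cheap torques, expensive contact
    forces (a simplified stand-in for the paper's analytical controller)."""
    b = M @ qdd_des + h                  # required generalized force
    A = np.hstack([S.T, Jc.T])           # maps [tau; lam] to generalized force
    n_tau = S.shape[0]
    W = np.diag([w_tau] * n_tau + [1.0] * Jc.shape[0])
    Winv = np.linalg.inv(W)
    # weighted minimum-norm solution of A x = b: x = W^-1 A^T (A W^-1 A^T)^+ b
    x = Winv @ A.T @ np.linalg.pinv(A @ Winv @ A.T) @ b
    return x[:n_tau], x[n_tau:]

# hypothetical 2-DoF system with one contact constraint direction
M = np.eye(2)
h = np.zeros(2)
qdd_des = np.array([1.0, 0.0])
S = np.eye(2)                            # fully actuated for this toy case
Jc = np.array([[1.0, 0.0]])              # contact Jacobian (one direction)
tau, lam = min_force_inverse_dynamics(M, h, qdd_des, S, Jc)
```

Because contact force is weighted heavily relative to torque, the solver realizes the desired acceleration almost entirely with actuator torque, leaving the contact force near zero, the same trade-off the controller exploits to reduce slipping.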

link (url) DOI [BibTex]


Controlled Reduction with Unactuated Cyclic Variables: Application to 3D Bipedal Walking with Passive Yaw Rotation

Gregg, R., Righetti, L.

IEEE Transactions on Automatic Control, 58(10):2679-2685, October 2013 (article)

Abstract
This technical note shows that viscous damping can shape momentum conservation laws in a manner that stabilizes yaw rotation and enables steering for underactuated 3D walking. We first show that unactuated cyclic variables can be controlled by passively shaped conservation laws given a stabilizing controller in the actuated coordinates. We then exploit this result to realize controlled geometric reduction with multiple unactuated cyclic variables. We apply this underactuated control strategy to a five-link 3D biped to produce exponentially stable straight-ahead walking and steering in the presence of passive yawing.

link (url) DOI [BibTex]


Learning Task Error Models for Manipulation

Pastor, P., Kalakrishnan, M., Binney, J., Kelly, J., Righetti, L., Sukhatme, G. S., Schaal, S.

In 2013 IEEE Conference on Robotics and Automation, IEEE, Karlsruhe, Germany, 2013 (inproceedings)

Abstract
Precise kinematic forward models are important for robots to successfully perform dexterous grasping and manipulation tasks, especially when visual servoing is rendered infeasible due to occlusions. A lot of research has been conducted to estimate geometric and non-geometric parameters of kinematic chains to minimize reconstruction errors. However, kinematic chains can include non-linearities, e.g. due to cable stretch and motor-side encoders, that result in significantly different errors for different parts of the state space. Previous work either does not consider such non-linearities or proposes to estimate non-geometric parameters of carefully engineered models that are robot specific. We propose a data-driven approach that learns task error models that account for such unmodeled non-linearities. We argue that in the context of grasping and manipulation, it is sufficient to achieve high accuracy in the task-relevant state space. We identify this relevant state space using previously executed joint configurations and learn error corrections for those. Therefore, our system is developed to generate subsequent executions that are similar to previous ones. The experiments show that our method successfully captures the non-linearities in the head kinematic chain (due to a counterbalancing spring) and the arm kinematic chains (due to cable stretch) of the considered experimental platform, see Fig. 1. The feasibility of the presented error learning approach has also been evaluated in independent DARPA ARM-S testing, contributing to the successful completion of 67 out of 72 grasping and manipulation tasks.

link (url) DOI [BibTex]

2009


Modelling the interplay of central pattern generation and sensory feedback in the neuromuscular control of running

Daley, M., Righetti, L., Ijspeert, A.

In Comparative Biochemistry and Physiology, Part A: Molecular & Integrative Physiology, Annual Main Meeting of the Society for Experimental Biology, 153, Glasgow, Scotland, 2009 (inproceedings)

link (url) DOI [BibTex]


Adaptive Frequency Oscillators and Applications

Righetti, L., Buchli, J., Ijspeert, A.

The Open Cybernetics & Systemics Journal, 3, pages: 64-69, 2009 (article)

Abstract
In this contribution we present a generic mechanism to transform an oscillator into an adaptive frequency oscillator, which can then dynamically adapt its parameters to learn the frequency of any periodic driving signal. Adaptation is done in a dynamic way: it is part of the dynamical system and not an offline process. This mechanism goes beyond entrainment since it works for any initial frequencies and the learned frequency stays encoded in the system even if the driving signal disappears. Interestingly, this mechanism can easily be applied to a large class of oscillators from harmonic oscillators to relaxation types and strange attractors. Several practical applications of this mechanism are then presented, ranging from adaptive control of compliant robots to frequency analysis of signals and construction of limit cycles of arbitrary shape.
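The adaptation mechanism can be reproduced in a few lines with a phase oscillator whose intrinsic frequency integrates the perturbation; the parameter values below are arbitrary choices for illustration.

```python
import math

def adapt_frequency(omega0, omega_drive, eps=2.0, dt=1e-3, steps=100_000):
    """Adaptive frequency oscillator in phase form:
        phi'   = omega - eps * F(t) * sin(phi)
        omega' = -eps * F(t) * sin(phi)
    Driven by F(t) = sin(omega_drive * t), the intrinsic frequency omega
    adapts toward the drive frequency and stays there (forward-Euler)."""
    phi, omega = 0.0, omega0
    for i in range(steps):
        F = math.sin(omega_drive * i * dt)      # periodic teaching signal
        coupling = eps * F * math.sin(phi)
        phi += (omega - coupling) * dt
        omega += -coupling * dt
    return omega

# start at 7.5 rad/s, drive at 8 rad/s; omega should settle near 8
learned = adapt_frequency(omega0=7.5, omega_drive=8.0)
```

As the abstract notes, the learned frequency is stored in the state variable `omega` itself, so it persists even after the driving signal is removed; no offline estimation step is involved.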

link (url) [BibTex]