2015


Automatic LQR Tuning Based on Gaussian Process Optimization: Early Experimental Results

Marco, A., Hennig, P., Bohg, J., Schaal, S., Trimpe, S.

Machine Learning in Planning and Control of Robot Motion Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2015 (conference)

Abstract
This paper proposes an automatic controller tuning framework based on linear optimal control combined with Bayesian optimization. With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data. The underlying Bayesian optimization algorithm is Entropy Search, which represents the latent objective as a Gaussian process and constructs an explicit belief over the location of the objective minimum. This is used to maximize the information gain from each experimental evaluation. Thus, this framework is expected to yield improved controllers with fewer experimental evaluations than alternative approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is used as the experimental demonstrator. Preliminary results of a low-dimensional tuning problem highlight the method’s potential for automatic controller tuning on robotic platforms.
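As a rough illustration of the underlying idea (not the paper's Entropy Search implementation), the sketch below tunes a single normalized controller parameter with a Gaussian-process surrogate and a simple lower-confidence-bound acquisition. The synthetic cost function, kernel length scale, and parameter range are hypothetical stand-ins for real experimental evaluations.

```python
import numpy as np

# Hypothetical 1-D tuning parameter; cost(theta) stands in for an
# experimental performance evaluation of a controller gain.
def cost(theta):
    return (theta - 0.3) ** 2 + 0.05 * np.sin(8 * theta)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # GP posterior mean and variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

# Bayesian-optimization loop: a lower-confidence-bound acquisition
# stands in here for the paper's Entropy Search criterion.
grid = np.linspace(0.0, 1.0, 200)
X = np.array([0.1, 0.9])                 # initial (normalized) gains
y = np.array([cost(x) for x in X])
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    nxt = grid[np.argmin(mu - 2.0 * np.sqrt(var))]   # next "experiment"
    X = np.append(X, nxt)
    y = np.append(y, cost(nxt))

best = X[np.argmin(y)]                   # best gain found so far
```

In the real framework each call to `cost` is a balancing experiment on the robot, which is why an acquisition that maximizes information gain per evaluation matters.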

PDF DOI Project Page [BibTex]


Direct Loss Minimization Inverse Optimal Control

Doerr, A., Ratliff, N., Bohg, J., Toussaint, M., Schaal, S.

In Proceedings of Robotics: Science and Systems, Rome, Italy, Robotics: Science and Systems XI, July 2015 (inproceedings)

Abstract
Inverse Optimal Control (IOC) has strongly impacted the systems engineering process, enabling automated planner tuning through straightforward and intuitive demonstration. The most successful and established applications, though, have been in lower dimensional problems such as navigation planning where exact optimal planning or control is feasible. In higher dimensional systems, such as humanoid robots, research has made substantial progress toward generalizing the ideas to model free or locally optimal settings, but these systems are complicated to the point where demonstration itself can be difficult. Typically, real-world applications are restricted to at best noisy or even partial or incomplete demonstrations that prove cumbersome in existing frameworks. This work derives a very flexible method of IOC based on a form of Structured Prediction known as Direct Loss Minimization. The resulting algorithm is essentially Policy Search on a reward function that rewards similarity to demonstrated behavior (using Covariance Matrix Adaptation (CMA) in our experiments). Our framework blurs the distinction between IOC, other forms of Imitation Learning, and Reinforcement Learning, enabling us to derive simple, versatile, and practical algorithms that blend imitation and reinforcement signals into a unified framework. Our experiments analyze various aspects of its performance and demonstrate its efficacy on conveying preferences for motion shaping and combined reach and grasp quality optimization.
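A minimal sketch of the core idea: policy search on a loss that rewards similarity to demonstrated behavior. A hand-rolled evolution strategy stands in for CMA-ES, and the 1-D linear "policy" and demonstration below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D policy: a gain w whose rollout is the trajectory w * t.
t = np.linspace(0.0, 1.0, 50)
demo = 0.7 * t + 0.01 * rng.standard_normal(50)   # noisy demonstration

def direct_loss(w):
    # Distance between the policy's rollout and the demonstrated behavior.
    return np.mean((w * t - demo) ** 2)

# Simple evolution strategy standing in for CMA-ES: sample candidate
# policies around the incumbent, keep the best, shrink the search step.
w, sigma = 0.0, 0.5
for _ in range(30):
    cands = w + sigma * rng.standard_normal(20)
    best = min(cands, key=direct_loss)
    if direct_loss(best) < direct_loss(w):
        w = best
    sigma *= 0.9
```

The recovered gain approaches the demonstrator's 0.7; the same loop applies unchanged when the loss only partially scores a noisy or incomplete demonstration, which is the setting the paper targets.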

PDF Video Project Page [BibTex]


LMI-Based Synthesis for Distributed Event-Based State Estimation

Muehlebach, M., Trimpe, S.

In Proceedings of the American Control Conference, July 2015 (inproceedings)

Abstract
This paper presents an LMI-based synthesis procedure for distributed event-based state estimation. Multiple agents observe and control a dynamic process by sporadically exchanging data over a broadcast network according to an event-based protocol. In previous work [1], the synthesis of event-based state estimators is based on a centralized design. In that case three different types of communication are required: event-based communication of measurements, periodic reset of all estimates to their joint average, and communication of inputs. The proposed synthesis problem eliminates the communication of inputs as well as the periodic resets (under favorable circumstances) by accounting explicitly for the distributed structure of the control system.

PDF DOI Project Page [BibTex]


Guaranteed H2 Performance in Distributed Event-Based State Estimation

Muehlebach, M., Trimpe, S.

In Proceeding of the First International Conference on Event-based Control, Communication, and Signal Processing, June 2015 (inproceedings)

PDF DOI Project Page [BibTex]


On the Choice of the Event Trigger in Event-based Estimation

Trimpe, S., Campi, M.

In Proceeding of the First International Conference on Event-based Control, Communication, and Signal Processing, June 2015 (inproceedings)

PDF DOI Project Page [BibTex]


Event-based Estimation and Control for Remote Robot Operation with Reduced Communication

Trimpe, S., Buchli, J.

In Proceedings of the IEEE International Conference on Robotics and Automation, May 2015 (inproceedings)

Abstract
An event-based communication framework for remote operation of a robot via a bandwidth-limited network is proposed. The robot sends state and environment estimation data to the operator, and the operator transmits updated control commands or policies to the robot. Event-based communication protocols are designed to ensure that data is transmitted only when required: the robot sends new estimation data only if this yields a significant information gain at the operator, and the operator transmits an updated control policy only if this comes with a significant improvement in control performance. The developed framework is modular and can be used with any standard estimation and control algorithms. Simulation results of a robotic arm highlight its potential for an efficient use of limited communication resources, for example, in disaster response scenarios such as the DARPA Robotics Challenge.
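The transmit-only-when-needed principle can be sketched on a hypothetical scalar system. The paper's protocols trigger on information gain; the simpler send-on-delta rule below stands in for that trigger, and the system constants and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar process x_{k+1} = a*x_k + w_k. The robot transmits its
# estimate only when the operator's open-loop prediction would be off
# by more than delta (a send-on-delta stand-in for the paper's
# information-gain trigger).
a, delta = 0.95, 0.3
x = 1.0
operator_est = x
sent = 0
for k in range(200):
    x = a * x + 0.05 * rng.standard_normal()   # true process step
    operator_pred = a * operator_est           # operator predicts open-loop
    if abs(x - operator_pred) > delta:         # event: prediction too far off
        operator_est = x                       # transmit over the network
        sent += 1
    else:
        operator_est = operator_pred           # no communication needed
rate = sent / 200
```

Because the operator's prediction error only occasionally exceeds the threshold, the communication rate stays well below one message per time step, which is the resource saving the framework is after.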

PDF DOI Project Page [BibTex]


When to use which heuristic: A rational solution to the strategy selection problem

Lieder, F., Griffiths, T. L.

In Proceedings of the 37th Annual Conference of the Cognitive Science Society, 2015 (inproceedings)

Abstract
The human mind appears to be equipped with a toolbox full of cognitive strategies, but how do people decide when to use which strategy? We leverage rational metareasoning to derive a rational solution to this problem and apply it to decision making under uncertainty. The resulting theory reconciles the two poles of the debate about human rationality by proposing that people gradually learn to make rational use of fallible heuristics. We evaluate this theory against empirical data and existing accounts of strategy selection (i.e. SSL and RELACS). Our results suggest that while SSL and RELACS can explain people's ability to adapt to homogeneous environments in which all decision problems are of the same type, rational metareasoning can additionally explain people's ability to adapt to heterogeneous environments and flexibly switch strategies from one decision to the next.

link (url) Project Page [BibTex]


Children and Adults Differ in their Strategies for Social Learning

Lieder, F., Sim, Z. L., Hu, J. C., Griffiths, T. L., Xu, F.

In Proceedings of the 37th Annual Conference of the Cognitive Science Society, 2015 (inproceedings)

Abstract
Adults and children rely heavily on other people’s testimony. However, domains of knowledge where there is no consensus on the truth are likely to result in conflicting testimonies. Previous research has demonstrated that in these cases, learners look towards the majority opinion to make decisions. However, it remains unclear how learners evaluate social information, given that considering either the overall valence, or the number of testimonies, or both may lead to different conclusions. We therefore formalized several social learning strategies and compared them to the performance of adults and children. We find that children use different strategies than adults. This suggests that the development of social learning may involve the acquisition of cognitive strategies.

link (url) [BibTex]


A New Perspective and Extension of the Gaussian Filter

Wüthrich, M., Trimpe, S., Kappler, D., Schaal, S.

In Robotics: Science and Systems, 2015 (inproceedings)

Abstract
The Gaussian Filter (GF) is one of the most widely used filtering algorithms; instances are the Extended Kalman Filter, the Unscented Kalman Filter and the Divided Difference Filter. GFs represent the belief of the current state by a Gaussian with the mean being an affine function of the measurement. We show that this representation can be too restrictive to accurately capture the dependencies in systems with nonlinear observation models, and we investigate how the GF can be generalized to alleviate this problem. To this end we view the GF from a variational-inference perspective, and analyze how restrictions on the form of the belief can be relaxed while maintaining simplicity and efficiency. This analysis provides a basis for generalizations of the GF. We propose one such generalization which coincides with a GF using a virtual measurement, obtained by applying a nonlinear function to the actual measurement. Numerical experiments show that the proposed Feature Gaussian Filter (FGF) can have a substantial performance advantage over the standard GF for systems with nonlinear observation models.
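The GF measurement update can be sketched by Monte Carlo moment matching: the posterior mean is affine in the measurement, and the FGF applies the very same update to a virtual measurement phi(y). The scalar model, prior, noise level, and the sqrt feature below are illustrative choices, not the paper's; the paper's GF instances (EKF, UKF, DDF) use deterministic approximations instead of sampling.

```python
import numpy as np

rng = np.random.default_rng(2)

def gf_update(m, P, y_obs, h, phi=lambda y: y, n=200000):
    # Gaussian-filter measurement update via Monte Carlo moment matching:
    # the posterior mean is affine in the (virtual) measurement phi(y).
    xs = m + np.sqrt(P) * rng.standard_normal(n)       # prior samples
    ys = phi(h(xs) + 0.1 * rng.standard_normal(n))     # simulated measurements
    my, vy = ys.mean(), ys.var()
    cxy = np.mean((xs - xs.mean()) * (ys - my))        # cross-covariance
    K = cxy / vy                                       # affine "Kalman gain"
    return m + K * (phi(y_obs) - my), P - K * cxy

h = lambda x: x ** 2                 # nonlinear observation model
# Standard GF: phi is the identity.
m_post, P_post = gf_update(1.0, 0.25, 1.44, h)
# Feature GF (sketch): same update on a virtual measurement, here the
# hypothetical feature phi(y) = sqrt(|y|).
m_fgf, P_fgf = gf_update(1.0, 0.25, 1.44, h,
                         phi=lambda y: np.sqrt(np.abs(y)))
```

Both updates shrink the prior variance; the point of the FGF is that a well-chosen feature can capture dependencies between state and measurement that an update affine in y alone cannot.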


Web PDF Project Page [BibTex]


Learning from others: Adult and child strategies in assessing conflicting ratings

Hu, J., Lieder, F., Griffiths, T. L., Xu, F.

In Biennial Meeting of the Society for Research in Child Development, Philadelphia, Pennsylvania, USA, 2015 (inproceedings)

[BibTex]


Utility-weighted sampling in decisions from experience

Lieder, F., Griffiths, T. L., Hsu, M.

In The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2015 (inproceedings)

[BibTex]

2013


Controllability and Resource-Rational Planning

Lieder, F., Goodman, N. D., Huys, Q. J.

In Computational and Systems Neuroscience (Cosyne), pages: 112, 2013 (inproceedings)

Abstract
Learned helplessness experiments involving controllable vs. uncontrollable stressors have shown that the perceived ability to control events has profound consequences for decision making. Normative models of decision making, however, do not naturally incorporate knowledge about controllability, and previous approaches to incorporating it have led to solutions with biologically implausible computational demands [1,2]. Intuitively, controllability bounds the differential rewards for choosing one strategy over another, and therefore believing that the environment is uncontrollable should reduce one’s willingness to invest time and effort into choosing between options. Here, we offer a normative, resource-rational account of the role of controllability in trading mental effort for expected gain. In this view, the brain not only faces the task of solving Markov decision problems (MDPs), but it also has to optimally allocate its finite computational resources to solve them efficiently. This joint problem can itself be cast as a MDP [3], and its optimal solution respects computational constraints by design. We start with an analytic characterisation of the influence of controllability on the use of computational resources. We then replicate previous results on the effects of controllability on the differential value of exploration vs. exploitation, showing that these are also seen in a cognitively plausible regime of computational complexity. Third, we find that controllability makes computation valuable, so that it is worth investing more mental effort the higher the subjective controllability. Fourth, we show that in this model the perceived lack of control (helplessness) replicates empirical findings [4] whereby patients with major depressive disorder are less likely to repeat a choice that led to a reward, or to avoid a choice that led to a loss. Finally, the model makes empirically testable predictions about the relationship between reaction time and helplessness.

[BibTex]


Learned helplessness and generalization

Lieder, F., Goodman, N. D., Huys, Q. J. M.

In 35th Annual Conference of the Cognitive Science Society, 2013 (inproceedings)

[BibTex]


Reverse-Engineering Resource-Efficient Algorithms

Lieder, F., Goodman, N. D., Griffiths, T. L.

In NIPS Workshop Resource-Efficient Machine Learning, 2013 (inproceedings)

[BibTex]

2012


Event-based State Estimation with Switching Static-gain Observers

Trimpe, S.

In Proceedings of the 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems, 2012 (inproceedings)

PDF DOI [BibTex]


Event-based State Estimation with Variance-Based Triggering

Trimpe, S., D’Andrea, R.

In Proceedings of the 51st IEEE Conference on Decision and Control, 2012 (inproceedings)

PDF Supplementary material DOI [BibTex]

2007


Less Conservative Polytopic LPV Models for Charge Control by Combining Parameter Set Mapping and Set Intersection

Kwiatkowski, A., Trimpe, S., Werner, H.

In Proceedings of the 46th IEEE Conference on Decision and Control, 2007 (inproceedings)

DOI [BibTex]