

2020


Bayesian Optimization in Robot Learning - Automatic Controller Tuning and Sample-Efficient Methods

Marco-Valle, A.

University of Tübingen, June 2020 (phdthesis)

Abstract
The problem of designing controllers to regulate dynamical systems has been studied by engineers for millennia. Ever since, suboptimal performance has lingered in many closed loops as an unavoidable side effect of manually tuning controller parameters. Nowadays, industrial settings remain skeptical about data-driven methods that automatically learn controller parameters. In robotics, machine learning (ML) continues to gain influence in increasing autonomy and adaptability, for example by helping to automate controller tuning. However, data-hungry ML methods, such as standard reinforcement learning, require a large number of experimental samples, which is prohibitive in robotics, as hardware can deteriorate and break. This raises the following question: Can manual controller tuning in robotics be automated using data-efficient machine learning techniques? In this thesis, we tackle this question by exploring Bayesian optimization (BO), a data-efficient ML framework, to reduce the human effort and side effects of manual controller tuning while keeping the number of experimental samples low. We focus this work on robotic systems, providing thorough theoretical results that aim to increase data efficiency, as well as demonstrations on real robots. Specifically, we present four main contributions. We first consider using BO to replace manual tuning on robotic platforms. To this end, we parametrize the design weights of a linear quadratic regulator (LQR) and learn them using an information-efficient BO algorithm. This algorithm uses Gaussian processes (GPs) to model the unknown performance objective. The GP model is used by BO to suggest controller parameters that are expected to increase the information about the optimal parameters, measured as a gain in entropy. The resulting “automatic LQR tuning” framework is demonstrated on two robotic platforms: a robot arm balancing an inverted pole and a humanoid robot performing a squatting task. In both cases, an existing controller is automatically improved in a handful of experiments without human intervention. BO compensates for data scarcity by means of the GP, a probabilistic model that encodes prior assumptions about the unknown performance objective. Usually, incorrect or non-informed assumptions have negative consequences, such as a higher number of robot experiments, poor tuning performance, or reduced sample efficiency. The second to fourth contributions presented herein attempt to alleviate this issue. The second contribution proposes including the robot simulator in the learning loop as an additional information source for automatic controller tuning. While a real robot experiment generally entails high costs (e.g., it requires preparation and takes time), simulations are cheaper to obtain (e.g., they can be computed faster). However, because the simulator is an imperfect model of the robot, its information is biased and can harm learning performance. To address this problem, we propose “simu-vs-real”, a principled multi-fidelity BO algorithm that trades off cheap but inaccurate information from simulations against expensive but accurate physical experiments in a cost-effective manner. The resulting algorithm is demonstrated on a cart-pole system, where simulations and real experiments are alternated, thus sparing many real evaluations.
The third contribution explores how to adapt the expressiveness of the probabilistic prior to the control problem at hand. To this end, the mathematical structure of LQR controllers is leveraged and embedded into the GP by means of the kernel function. Specifically, we propose two different “LQR kernel” designs that retain the flexibility of Bayesian nonparametric learning. Simulation results indicate that the LQR kernel yields better performance than non-informed kernel choices when used for controller learning with BO. Finally, the fourth contribution addresses the problem of handling controller failures, which are typically unavoidable in practice when learning from data, especially if non-conservative solutions are expected. Although controller failures are generally problematic (e.g., the robot has to be emergency-stopped), they are also a rich information source about what should be avoided. We propose “failures-aware excursion search”, a novel algorithm for Bayesian optimization under black-box constraints, where failures are limited in number. Our results on numerical benchmarks indicate that, by allowing a limited number of failures, better optima are revealed compared with state-of-the-art methods. The first contribution of this thesis, “automatic LQR tuning”, is among the first applications of BO to real robots. While it demonstrated automatic controller learning from few experimental samples, it also revealed several important challenges, such as the need for higher sample efficiency, which opened relevant research directions that we addressed through several methodological contributions. In summary, we proposed “simu-vs-real”, a novel BO algorithm that includes the simulator as an additional information source; an “LQR kernel” design that learns faster than standard choices; and “failures-aware excursion search”, a new BO algorithm for constrained black-box optimization problems in which the number of failures is limited.
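To make the idea concrete, the following is a minimal, self-contained Python sketch of BO-based LQR weight tuning under simplifying assumptions; it is not the thesis' method. A lower-confidence-bound acquisition stands in for the entropy-based criterion, the plant is a toy double integrator with an artificial model mismatch, and all names (A_real, experiment_cost, ...) are invented for the example.

```python
# Minimal sketch: Bayesian optimization of an LQR design weight with a GP surrogate.
# Illustrative only; a lower-confidence-bound acquisition replaces the entropy-based
# criterion described in the abstract.
import numpy as np
from scipy.linalg import solve_continuous_are
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

A = np.array([[0.0, 1.0], [0.0, 0.0]])        # nominal double-integrator model
B = np.array([[0.0], [1.0]])
A_real = np.array([[0.0, 1.0], [0.2, -0.1]])  # hypothetical "real" plant with model mismatch

def lqr_gain(q_log):
    """LQR gain for the nominal model with state weight Q = 10**q_log * I."""
    Q, R = 10.0 ** q_log * np.eye(2), np.eye(1)
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def experiment_cost(q_log, T=600, dt=0.01):
    """Quadratic cost of the closed loop simulated on the mismatched 'real' plant."""
    K = lqr_gain(q_log)
    x, cost = np.array([1.0, 0.0]), 0.0
    for _ in range(T):
        u = -K @ x
        cost += (x @ x + float(u @ u)) * dt
        x = x + (A_real @ x + (B @ u).ravel()) * dt   # forward-Euler step
    return cost

# Bayesian optimization loop over the (log10) design weight.
candidates = np.linspace(-2, 3, 200).reshape(-1, 1)
X_obs = np.array([[-2.0], [3.0]])                     # two initial "experiments"
y_obs = np.array([experiment_cost(float(q)) for q in X_obs.ravel()])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(8):                                    # a handful of further experiments
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    q_next = float(candidates[np.argmin(mu - 2.0 * sigma), 0])  # lower confidence bound
    X_obs = np.vstack([X_obs, [[q_next]]])
    y_obs = np.append(y_obs, experiment_cost(q_next))

print("best log10 weight found:", float(X_obs[np.argmin(y_obs), 0]))
```

The GP surrogate lets each new controller evaluation be chosen where the model is both promising and uncertain, which is what keeps the number of (simulated) experiments small in this toy setting.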

ics

Repository (Universitätsbibliothek) - University of Tübingen PDF DOI [BibTex]


Deep learning for the parameter estimation of tight-binding Hamiltonians

Cacioppo, A.

University of Rome, La Sapienza, Italy, May 2020 (mastersthesis)

ei

[BibTex]

Learning Algorithms, Invariances, and the Real World

Zecevic, M.

Technical University of Darmstadt, Germany, April 2020 (mastersthesis)

ei

[BibTex]

Advances in Latent Variable and Causal Models

Rubenstein, P.

University of Cambridge, UK, 2020 (Cambridge-Tuebingen-Fellowship) (phdthesis)

ei

[BibTex]


2012


Scalable graph kernels

Shervashidze, N.

Eberhard Karls Universität Tübingen, Germany, October 2012 (phdthesis)

ei

Web [BibTex]

Learning Motor Skills: From Algorithms to Robot Experiments

Kober, J.

Technische Universität Darmstadt, Germany, March 2012 (phdthesis)

ei

PDF [BibTex]

Structure and Dynamics of Diffusion Networks

Gomez Rodriguez, M.

Department of Electrical Engineering, Stanford University, 2012 (phdthesis)

ei

Web [BibTex]

Blind Deconvolution in Scientific Imaging & Computational Photography

Hirsch, M.

Eberhard Karls Universität Tübingen, Germany, 2012 (phdthesis)

ei

Web [BibTex]

Nanodroplets at topographic steps

Bartsch, H.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]

Janus particles in critical liquids

Labbe-Laurent, M.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]

Phase equilibria of binary liquid crystals

Klöss, Hans-Christian

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]

Pinning of drops at superhydrophobic surfaces

Daschke, Lena

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]

Impedance spectroscopy of ions at interfaces

Reindl, A.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]

Surface of an evaporating liquid

Arnold, Daniel

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]

Statics and dynamics of critical Casimir forces

Tröndle, M.

Universität Stuttgart, Stuttgart, 2012 (phdthesis)

icm

link (url) [BibTex]

Critical Casimir forces beyond the Derjaguin approximation

Brunner, Niklas

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]

Crystallization of flexible molecules

Held, Felix

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]


2011


Crowdsourcing for optimisation of deconvolution methods via an iPhone application

Lang, A.

Hochschule Reutlingen, Germany, April 2011 (mastersthesis)

ei

[BibTex]

Model Learning in Robot Control

Nguyen-Tuong, D.

Albert-Ludwigs-Universität Freiburg, Germany, 2011 (phdthesis)

ei

[BibTex]

Simulation einer fast kritischen binären Flüssigkeit in einem Temperaturgradienten

Single, F.

Universität Stuttgart, Stuttgart, 2011 (mastersthesis)

icm

[BibTex]

Struktur dichter ionischer Flüssigkeiten

Dannenmann, O.

Universität Stuttgart, Stuttgart, 2011 (mastersthesis)

icm

[BibTex]

Parallelisierung Stokesscher Dynamik für Graphikprozessoren zur Simulation kolloidaler Suspensionen

Kopp, M.

Universität Stuttgart, Stuttgart, 2011 (mastersthesis)

icm

[BibTex]

Diffusion in Wandnähe

Müller, J.

Universität Stuttgart, Stuttgart, 2011 (mastersthesis)

icm

[BibTex]


2005


Extension to Kernel Dependency Estimation with Applications to Robotics

BakIr, G.

Biologische Kybernetik, Technische Universität Berlin, Berlin, November 2005 (phdthesis)

Abstract
Kernel Dependency Estimation (KDE) is a novel technique designed to learn mappings between sets without making assumptions about the type of the input and output data involved. It learns the mapping in two stages. First, it estimates the coordinates of a feature-space representation of the output elements by solving a high-dimensional multivariate regression problem in feature space. It then reconstructs the original representation from the estimated coordinates. This thesis introduces various algorithmic extensions to both stages of KDE. One contribution is a novel linear regression algorithm that explores low-dimensional subspaces during learning. Furthermore, various existing strategies for reconstructing patterns from the feature maps involved in KDE are discussed, and novel pre-image techniques are introduced. In particular, pre-image techniques for discrete data types such as graphs and strings are investigated. KDE is then explored in the context of robot pose imitation, where the input is an image of a human operator and the output is the robot's articulation variables. Thus, using KDE, robot pose imitation is formulated as a regression problem.
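As an illustration of the two-stage procedure described in the abstract, the following is a minimal NumPy sketch of the KDE idea under simplifying assumptions: an RBF kernel on synthetic data, kernel ridge regression for the feature-space regression step, and a simple nearest-candidate pre-image in place of the more refined pre-image techniques the thesis develops. All names and parameter values are illustrative.

```python
# Minimal two-stage KDE-style sketch on synthetic data (illustrative assumptions only).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 3))                      # inputs (e.g., image features)
Y = np.column_stack([np.sin(X[:, 0]), X[:, 1] * X[:, 2]])  # outputs (e.g., pose variables)

# Stage 1: kernel ridge regression from the input kernel to a feature-space
# representation of the outputs (here: each training output's row of the output kernel).
Kx = rbf_kernel(X, X)
Ky = rbf_kernel(Y, Y)
lam = 1e-3
alpha = np.linalg.solve(Kx + lam * np.eye(len(X)), Ky)     # multivariate regression weights

# Stage 2: pre-image step. For a new input, predict its output-kernel coordinates and
# return the training output whose kernel representation is closest (nearest candidate).
x_new = rng.uniform(-1, 1, size=(1, 3))
ky_pred = rbf_kernel(x_new, X) @ alpha                      # predicted feature-space coordinates
y_hat = Y[np.argmin(((Ky - ky_pred) ** 2).sum(axis=1))]
print("predicted output:", y_hat)
```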

ei

PDF PDF [BibTex]

Geometrical aspects of statistical learning theory

Hein, M.

Biologische Kybernetik, Technische Universität Darmstadt, Darmstadt, November 2005 (phdthesis)

ei

PDF [BibTex]

Implicit Surfaces For Modelling Human Heads

Steinke, F.

Biologische Kybernetik, Eberhard-Karls-Universität, Tübingen, September 2005 (diplomathesis)

ei

[BibTex]

Machine Learning Methods for Brain-Computer Interfaces

Lal, TN.

Biologische Kybernetik, University of Darmstadt, September 2005 (phdthesis)

ei

Web [BibTex]

Efficient Adaptive Sampling of the Psychometric Function by Maximizing Information Gain

Tanner, TG.

Biologische Kybernetik, Eberhard-Karls University Tübingen, Tübingen, Germany, May 2005 (diplomathesis)

Abstract
A common task in psychophysics is to measure the psychometric function. A psychometric function can be described by its shape and four parameters: offset or threshold, slope or width, false alarm rate or chance level, and miss or lapse rate. Depending on the parameters of interest, some points on the psychometric function may be more informative than others. Adaptive methods attempt to place trials on the most informative points based on the data collected in previous trials. A new Bayesian adaptive psychometric method is introduced that places trials by minimising the expected entropy of the posterior probability distribution over a set of possible stimuli. The method is more flexible than, faster than, and at least as efficient as the established method (Kontsevich and Tyler, 1999). Comparably accurate (2 dB) threshold and slope estimates can be obtained after about 30 and 500 trials, respectively. By using a dynamic termination criterion, the efficiency can be further improved. The method can be applied to all experimental designs, including yes/no designs, and allows estimation of any set of free parameters. By weighting the importance of parameters, one can include nuisance parameters and adjust the relative expected errors. Using nuisance parameters may lead to more accurate estimates than assuming a guessed fixed value. Block designs are supported and do not harm the performance if a sufficient number of trials are performed. The method was evaluated by computer simulations in which the role of parametric assumptions, its robustness, the quality of different point estimates, the effect of dynamic termination criteria, and many other settings were investigated.
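For concreteness, the sketch below illustrates the entropy-minimization principle described in the abstract on a toy yes/no task: a discretized posterior over threshold and slope is maintained, and each trial is placed at the stimulus that minimizes the expected posterior entropy. The logistic psychometric function, the parameter and stimulus grids, and the simulated observer are illustrative assumptions, not the exact setup of the thesis.

```python
# Minimal entropy-minimizing adaptive procedure for a 1-D yes/no task (illustrative).
import numpy as np

def psychometric(x, thresh, slope, guess=0.0, lapse=0.02):
    """Probability of a 'yes' response at stimulus level x."""
    return guess + (1 - guess - lapse) / (1 + np.exp(-slope * (x - thresh)))

# Discretized parameter space (threshold x slope) and candidate stimulus levels.
threshs = np.linspace(-3, 3, 41)
slopes = np.linspace(0.5, 5, 20)
stims = np.linspace(-4, 4, 81)
T, S = np.meshgrid(threshs, slopes, indexing="ij")
posterior = np.full(T.shape, 1.0 / T.size)        # uniform prior over parameters

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def next_stimulus(post):
    """Pick the stimulus minimizing the expected entropy of the updated posterior."""
    best_x, best_h = None, np.inf
    for x in stims:
        p_yes_given_theta = psychometric(x, T, S)
        p_yes = (post * p_yes_given_theta).sum()
        post_yes = post * p_yes_given_theta / p_yes
        post_no = post * (1 - p_yes_given_theta) / (1 - p_yes)
        h = p_yes * entropy(post_yes) + (1 - p_yes) * entropy(post_no)
        if h < best_h:
            best_x, best_h = x, h
    return best_x

# Simulated observer with true threshold 1.0 and slope 2.0.
rng = np.random.default_rng(1)
for trial in range(30):
    x = next_stimulus(posterior)
    yes = rng.random() < psychometric(x, 1.0, 2.0)
    lik = psychometric(x, T, S) if yes else 1 - psychometric(x, T, S)
    posterior = posterior * lik
    posterior /= posterior.sum()

print("threshold estimate:", (posterior * T).sum())
print("slope estimate:", (posterior * S).sum())
```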

ei

[BibTex]

Interplay between geometry and fluid properties

König, P.-M.

Universität Stuttgart, Stuttgart, 2005 (phdthesis)

icm

link (url) [BibTex]

Molecular dynamics of wet granular media

Goll, C.

Universität Stuttgart, Stuttgart, 2005 (mastersthesis)

icm

[BibTex]

Grenzflächenfluktuationen binärer Flüssigkeiten

Hiester, T.

Universität Stuttgart, Stuttgart, 2005 (phdthesis)

icm

link (url) [BibTex]
