

2019


Robot Learning for Muscular Systems

Büchler, D.

Technical University Darmstadt, Germany, December 2019 (phdthesis)

ei

[BibTex]



Real Time Probabilistic Models for Robot Trajectories

Gomez-Gonzalez, S.

Technical University Darmstadt, Germany, December 2019 (phdthesis)

ei

[BibTex]



Reinforcement Learning for a Two-Robot Table Tennis Simulation

Li, G.

RWTH Aachen University, Germany, July 2019 (mastersthesis)

ei

[BibTex]



Fast and Resource-Efficient Control of Wireless Cyber-Physical Systems

Baumann, D.

KTH Royal Institute of Technology, Stockholm, February 2019 (phdthesis)

ics

PDF [BibTex]



Learning Transferable Representations

Rojas-Carulla, M.

University of Cambridge, UK, 2019 (phdthesis)

ei

[BibTex]



Sample-efficient deep reinforcement learning for continuous control

Gu, S.

University of Cambridge, UK, 2019 (phdthesis)

ei

[BibTex]


Spatial Filtering based on Riemannian Manifold for Brain-Computer Interfacing

Xu, J.

Technical University of Munich, Germany, 2019 (mastersthesis)

ei

[BibTex]



Quantification of tumor heterogeneity using PET/MRI and machine learning

Katiyar, P.

Eberhard Karls Universität Tübingen, Germany, 2019 (phdthesis)

ei

[BibTex]



Actively Learning Dynamical Systems with Gaussian Processes

Buisson-Fenet, M.

Mines ParisTech, PSL University, 2019 (mastersthesis)

Abstract
Predicting the behavior of complex systems is of great importance in many fields such as engineering, economics or meteorology. The evolution of such systems often follows a certain structure, which can be induced, for example, by the laws of physics or of market forces. Mathematically, this structure is often captured by differential equations. The internal functional dependencies, however, are usually unknown. Hence, using machine learning approaches that recreate this structure directly from data is a promising alternative to designing physics-based models. In particular, for high-dimensional systems with nonlinear effects, this can be a challenging task. Learning dynamical systems is different from classical machine learning tasks, such as image processing, and necessitates different tools. Indeed, dynamical systems can be actuated, often by applying torques or voltages. Hence, the user has decision power over the system and can drive it to certain states through the dynamics. Actuating the system generates data, from which a machine learning model of the dynamics can be trained. However, gathering informative data that is representative of the whole state space remains a challenging task. The question of active learning then becomes important: which control inputs should be chosen by the user so that the data generated during an experiment is informative and enables efficient training of the dynamics model? In this context, Gaussian processes can be a useful framework for approximating system dynamics. Indeed, they perform well on small and medium-sized data sets, as opposed to most other machine learning frameworks. This is particularly important considering that data is often costly to generate and process, most of all when producing it involves actuating a complex physical system. Gaussian processes also yield a notion of uncertainty, which indicates how sure the model is about its predictions.
In this work, we investigate in a principled way how to actively learn dynamical systems by selecting control inputs that generate informative data. We model the system dynamics by a Gaussian process and use information-theoretic criteria to identify control trajectories that maximize the information gain. Thus, the input space can be explored efficiently, leading to data-efficient training of the model. We propose several methods, investigate their theoretical properties, and compare them extensively in a numerical benchmark. The final method proves efficient at generating informative data and thus yields the lowest prediction error with the same amount of samples on most benchmark systems. We propose several variants of this method, allowing the user to trade off computation against prediction accuracy, and show it is versatile enough to take additional objectives into account.
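The abstract's core loop (fit a GP to observed transitions, then pick the control input whose predicted outcome is most uncertain) can be sketched compactly. The code below is an illustrative toy, not the thesis's actual method: the dynamics `f`, the RBF kernel hyperparameters, and the greedy predictive-variance criterion (a simple proxy for information gain) are all assumptions made for the example.

```python
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """Squared-exponential kernel between row-wise input sets A and B."""
    d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d / ls**2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-3):
    """GP posterior mean and pointwise variance at test inputs Xte."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xte, Xte) - v.T @ v)
    return mu, var

# Hypothetical unknown dynamics x_{t+1} = f(x_t, u_t) that we learn from data.
def f(x, u):
    return 0.8 * x + np.sin(u)

rng = np.random.default_rng(0)
candidates = np.linspace(-2, 2, 41)  # candidate control inputs each step
x = 0.0
X, y = [], []                        # training set of ((x, u), x_next) pairs
for step in range(20):
    if len(X) < 2:
        u = rng.uniform(-2, 2)       # bootstrap with random excitation
    else:
        # Evaluate predictive variance of x_next for every candidate input
        # at the current state, and act where the model is most uncertain.
        Xte = np.column_stack([np.full_like(candidates, x), candidates])
        _, var = gp_posterior(np.array(X), np.array(y), Xte)
        u = candidates[np.argmax(var)]
    x_next = f(x, u)
    X.append([x, u])
    y.append(x_next)
    x = x_next
```

The thesis compares criteria over whole control *trajectories* rather than this one-step greedy choice, but the structure (uncertainty-driven input selection feeding back into GP training) is the same idea.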

ics

[BibTex]



Das Tier als Modell für Roboter, und Roboter als Modell für Tiere

Badri-Spröwitz, A.

In pages: 167-175, Springer, 2019 (incollection)

dlg

DOI [BibTex]


2012


Scalable graph kernels

Shervashidze, N.

Eberhard Karls Universität Tübingen, Germany, October 2012 (phdthesis)

ei

Web [BibTex]



Learning Motor Skills: From Algorithms to Robot Experiments

Kober, J.

Technische Universität Darmstadt, Germany, March 2012 (phdthesis)

ei

PDF [BibTex]



Expectation-Maximization methods for solving (PO)MDPs and optimal control problems

Toussaint, M., Storkey, A., Harmeling, S.

In Inference and Learning in Dynamic Models, (Editors: Barber, D., Cemgil, A.T. and Chiappa, S.), Cambridge University Press, Cambridge, UK, January 2012 (inbook) In press

ei

PDF [BibTex]



Inferential structure determination from NMR data

Habeck, M.

In Bayesian methods in structural bioinformatics, pages: 287-312, (Editors: Hamelryck, T., Mardia, K. V. and Ferkinghoff-Borg, J.), Springer, New York, 2012 (inbook)

ei

[BibTex]



Structure and Dynamics of Diffusion Networks

Gomez Rodriguez, M.

Department of Electrical Engineering, Stanford University, 2012 (phdthesis)

ei

Web [BibTex]



Robot Learning

Sigaud, O., Peters, J.

In Encyclopedia of the sciences of learning, (Editors: Seel, N.M.), Springer, Berlin, Germany, 2012 (inbook)

ei

Web [BibTex]



Reinforcement Learning in Robotics: A Survey

Kober, J., Peters, J.

In Reinforcement Learning, 12, pages: 579-610, (Editors: Wiering, M. and Otterlo, M.), Springer, Berlin, Germany, 2012 (inbook)

Abstract
As most action generation problems of autonomous robots can be phrased in terms of sequential decision problems, robotics offers a tremendously important and interesting application platform for reinforcement learning. Conversely, the challenges of this domain pose a major real-world check for reinforcement learning. Hence, the interplay between both disciplines can be seen as promising as that between physics and mathematics. Nevertheless, only a fraction of the scientists working on reinforcement learning are sufficiently tied to robotics to be aware of most problems encountered in this context. Thus, we will bring the most important challenges faced by robot reinforcement learning to their attention. To achieve this goal, we will attempt to survey most work that has successfully applied reinforcement learning to behavior generation for real robots. We discuss how the presented successful approaches have been made tractable despite the complexity of the domain and study how representations or the inclusion of prior knowledge can make a significant difference. A particular focus of our chapter therefore lies on the choice between model-based and model-free as well as between value function-based and policy search methods. As a result, we obtain a fairly complete survey of robot reinforcement learning which should allow a general reinforcement learning researcher to understand this domain.

ei

Web DOI [BibTex]



Blind Deconvolution in Scientific Imaging & Computational Photography

Hirsch, M.

Eberhard Karls Universität Tübingen, Germany, 2012 (phdthesis)

ei

Web [BibTex]



Higher-Order Tensors in Diffusion MRI

Schultz, T., Fuster, A., Ghosh, A., Deriche, R., Florack, L., Lim, L.

In Visualization and Processing of Tensors and Higher Order Descriptors for Multi-Valued Data, (Editors: Westin, C. F., Vilanova, A. and Burgeth, B.), Springer, 2012 (inbook) Accepted

ei

[BibTex]
