

2020


Measuring the Costs of Planning

Felso, V., Jain, Y. R., Lieder, F.

CogSci 2020, July 2020 (poster) Accepted

Abstract
Which information is worth considering depends on how much effort it would take to acquire and process it. From this perspective, people's tendency to neglect the long-term consequences of their actions (present bias) might reflect that looking further into the future becomes increasingly effortful. In this work, we introduce and validate the use of Bayesian Inverse Reinforcement Learning (BIRL) for measuring individual differences in the subjective costs of planning. We extend the resource-rational model of human planning introduced by Callaway, Lieder, et al. (2018) by parameterizing the cost of planning. Using BIRL, we show that an increased subjective cost for considering future outcomes may be associated with both present bias and acting without planning. Our results highlight testing the causal effects of the cost of planning on both present bias and mental effort avoidance as a promising direction for future work.
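To make the measurement idea concrete, here is a minimal sketch (not the authors' implementation) of inferring a planning-cost parameter by Bayesian inversion: a grid posterior over the cost is computed from how many planning operations a participant performs, under a toy softmax likelihood. The likelihood model, parameter names, and numbers are illustrative assumptions.

```python
import numpy as np

def posterior_over_planning_cost(observed_clicks, candidate_costs,
                                 max_clicks=12, expected_gain=1.0, beta=2.0):
    """Grid posterior over a per-click planning-cost parameter (toy sketch).

    Toy likelihood: the subjective value of performing k planning operations is
    k * (expected_gain - cost), and the number of operations a participant
    performs follows a softmax over these values.
    """
    ks = np.arange(max_clicks + 1)
    log_post = np.zeros(len(candidate_costs))            # uniform prior over costs
    for i, cost in enumerate(candidate_costs):
        values = beta * ks * (expected_gain - cost)
        log_p = values - np.logaddexp.reduce(values)     # softmax log-probabilities
        log_post[i] += sum(log_p[k] for k in observed_clicks)
    return np.exp(log_post - np.logaddexp.reduce(log_post))

costs = np.linspace(0.0, 3.0, 31)
# A participant who barely plans is assigned a higher posterior cost than one who plans a lot.
print(costs[np.argmax(posterior_over_planning_cost([1, 0, 2, 1], costs))])
print(costs[np.argmax(posterior_over_planning_cost([10, 9, 11, 12], costs))])
```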

[BibTex]


Automatic Discovery of Interpretable Planning Strategies

Skirzyński, J., Becker, F., Lieder, F.

May 2020 (article) Submitted

Abstract
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decisionmakers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that the provision of decision rules as flowcharts significantly improved people’s planning strategies and decisions across three different classes of sequential decision problems. Furthermore, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making.
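The selection step at the heart of this approach can be illustrated with a toy sketch (this is not the AI-Interpret algorithm itself, which combines imitation learning, program induction, and clustering): given (state, action) demonstrations from a learned policy and a pool of candidate simple rules, keep the rule that correctly describes the largest subset of demonstrations. The rules, features, and demonstrations below are invented for illustration.

```python
from typing import Callable, Dict, List, Tuple

State = Dict[str, float]
Demo = Tuple[State, str]            # (state features, action chosen by the learned policy)

def best_simple_rule(demos: List[Demo],
                     candidate_rules: Dict[str, Callable[[State], str]]) -> str:
    """Return the candidate rule that correctly describes the most demonstrations."""
    def n_matched(rule: Callable[[State], str]) -> int:
        return sum(rule(state) == action for state, action in demos)
    return max(candidate_rules, key=lambda name: n_matched(candidate_rules[name]))

# Toy demonstrations from a planning policy: inspect deep nodes first, then stop.
demos = [({"depth": 3, "clicks_so_far": 0}, "click"),
         ({"depth": 3, "clicks_so_far": 1}, "click"),
         ({"depth": 1, "clicks_so_far": 2}, "stop")]

rules = {
    "always_click": lambda s: "click",
    "click_deep_nodes_then_stop": lambda s: "click" if s["depth"] >= 2 else "stop",
}
print(best_simple_rule(demos, rules))     # -> "click_deep_nodes_then_stop"
```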

The code for our algorithm and the experiments is available. [BibTex]


Advancing Rational Analysis to the Algorithmic Level

Lieder, F., Griffiths, T. L.

Behavioral and Brain Sciences, 43, E27, March 2020 (article)

Abstract
The commentaries raised questions about normativity, human rationality, cognitive architectures, cognitive constraints, and the scope of resource-rational analysis (RRA). We respond to these questions and clarify that RRA is a methodological advance that extends the scope of rational modeling to understanding cognitive processes, why they differ between people, why they change over time, and how they could be improved.

DOI [BibTex]


Learning to Overexert Cognitive Control in a Stroop Task

Bustamante, L., Lieder, F., Musslick, S., Shenhav, A., Cohen, J.

February 2020, Laura Bustamante and Falk Lieder contributed equally to this publication. (article) In revision

Abstract
How do people learn when to allocate how much cognitive control to which task? According to the Learned Value of Control (LVOC) model, people learn to predict the value of alternative control allocations from features of a given situation. This suggests that people may generalize the value of control learned in one situation to other situations with shared features, even when the demands for cognitive control are different. This makes the intriguing prediction that what a person learned in one setting could, under some circumstances, cause them to misestimate the need for, and potentially over-exert, control in another setting, even if this harms their performance. To test this prediction, we had participants perform a novel variant of the Stroop task in which, on each trial, they could choose to either name the color (more control-demanding) or read the word (more automatic). However, only one of these tasks was rewarded on each trial; the rewarded task changed from trial to trial and could be predicted by one or more of the stimulus features (the color and/or the word). Participants first learned colors that predicted the rewarded task. Then they learned words that predicted the rewarded task. In the third part of the experiment, we tested how these learned feature associations transferred to novel stimuli with some overlapping features. The stimulus-task-reward associations were designed so that, for certain combinations of stimuli, the transfer of learned feature associations would incorrectly predict that the more highly rewarded task would be color naming, which would require the exertion of control, even though the actually rewarded task was word reading and therefore did not require the engagement of control. Our results demonstrated that participants over-exerted control for these stimuli, providing support for the feature-based learning mechanism described by the LVOC model.
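A minimal sketch of the feature-based learning mechanism at stake (illustrative only, not the authors' LVOC implementation): a linear predictor of the reward for exerting control is trained on stimulus features, and transfer to a novel feature conjunction then over-predicts the value of control. Feature coding, learning rate, and training schedule are assumptions.

```python
import numpy as np

# Linear, feature-based prediction of the reward for exerting control (color naming).
def phi(color, word):
    return np.array([color == "red", color == "blue",
                     word == "RED", word == "BLUE"], dtype=float)

w, alpha = np.zeros(4), 0.3

# Phase 1: the color red predicts that color naming is rewarded (reward = 1).
# Phase 2: the word RED predicts that color naming is NOT rewarded (reward = 0).
training = [(("red", "XXX"), 1.0)] * 10 + [(("green", "RED"), 0.0)] * 10
for (color, word), reward in training:
    x = phi(color, word)
    w += alpha * (reward - w @ x) * x      # delta rule on the predicted value of control

# Transfer test: a novel conjunction of a control-rewarded color with a control-unrewarded
# word. The learned color feature over-predicts the value of exerting control.
print(w @ phi("red", "RED"))
```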

DOI [BibTex]


Toward a Formal Theory of Proactivity

Lieder, F., Iwama, G.

January 2020 (article) Submitted

Abstract
Beyond merely reacting to their environment and impulses, people have the remarkable capacity to proactively set and pursue their own goals. But the extent to which they leverage this capacity varies widely across people and situations. The goal of this article is to make the mechanisms and variability of proactivity more amenable to rigorous experiments and computational modeling. We proceed in three steps. First, we develop and validate a mathematically precise behavioral measure of proactivity and reactivity that can be applied across a wide range of experimental paradigms. Second, we propose a formal definition of proactivity and reactivity, and develop a computational model of proactivity in the AX Continuous Performance Task (AX-CPT). Third, we develop and test a computational-level theory of meta-control over proactivity in the AX-CPT that identifies three distinct meta-decision-making problems: intention setting, resolving response conflict between intentions and automaticity, and deciding whether to recall context and intentions into working memory. People's response frequencies in the AX-CPT were remarkably well captured by a mixture between the predictions of our models of proactive and reactive control. Empirical data from an experiment varying the incentives and contextual load of an AX-CPT confirmed the predictions of our meta-control model of individual differences in proactivity. Our results suggest that proactivity can be understood in terms of computational models of meta-control. Our model makes additional empirically testable predictions. Future work will extend our models from proactive control in the AX-CPT to proactive goal creation and goal pursuit in the real world.
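The mixture analysis mentioned above can be sketched as follows (a toy version under assumed numbers, not the paper's fitting procedure): a single mixture weight between the proactive and reactive models' predicted response distributions is chosen to maximize the multinomial log-likelihood of the observed response counts.

```python
import numpy as np

def fit_mixture_weight(counts, p_proactive, p_reactive, grid=np.linspace(0, 1, 201)):
    """Pick the mixture weight w that maximizes the multinomial log-likelihood of the
    observed response counts under w * proactive + (1 - w) * reactive predictions."""
    counts = np.asarray(counts, dtype=float)
    best_w, best_ll = 0.0, -np.inf
    for w in grid:
        p = w * np.asarray(p_proactive) + (1 - w) * np.asarray(p_reactive)
        ll = np.sum(counts * np.log(p))
        if ll > best_ll:
            best_w, best_ll = w, ll
    return best_w

# Invented predicted response distributions over four response categories and
# invented observed counts, purely to show the fitting step.
p_pro = [0.70, 0.10, 0.10, 0.10]
p_rea = [0.40, 0.30, 0.20, 0.10]
print(fit_mixture_weight([62, 14, 13, 11], p_pro, p_rea))
```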

DOI Project Page [BibTex]


Combining learned and analytical models for predicting action effects from sensory data

Kloss, A., Schaal, S., Bohg, J.

International Journal of Robotics Research, 2020 (article) Accepted

Abstract
One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. This enables optimal action selection to reach a certain goal state. Traditionally, dynamics are approximated by physics-based analytical models. These models rely on specific state representations that may be hard to obtain from raw sensory data, especially if no knowledge of the object shape is assumed. More recently, we have seen learning approaches that can predict the effect of complex physical interactions directly from sensory input. It is however an open question how far these models generalize beyond their training data. In this work, we investigate the advantages and limitations of neural-network-based learning approaches for predicting the effects of actions based on sensory input and show how analytical and learned models can be combined to leverage the best of both worlds. As the physical interaction task, we use planar pushing, for which there exists a well-known analytical model and a large real-world dataset. We propose to use a convolutional neural network to convert raw depth images or organized point clouds into a suitable representation for the analytical model and compare this approach to using neural networks for both perception and prediction. A systematic evaluation of the proposed approach on a very large real-world dataset shows two main advantages of the hybrid architecture. Compared to a pure neural network, it significantly (i) reduces the required training data and (ii) improves generalization to novel physical interactions.


arXiv pdf link (url) [BibTex]


Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control

Nubert, J., Koehler, J., Berenz, V., Allgower, F., Trimpe, S.

IEEE Robotics and Automation Letters, 2020 (article) Accepted

Abstract
Fast feedback control and safety guarantees are essential in modern robotics. We present an approach that achieves both by combining novel robust model predictive control (MPC) with function approximation via (deep) neural networks (NNs). The result is a new approach for complex tasks with nonlinear, uncertain, and constrained dynamics as are common in robotics. Specifically, we leverage recent results in MPC research to propose a new robust setpoint tracking MPC algorithm, which achieves reliable and safe tracking of a dynamic setpoint while guaranteeing stability and constraint satisfaction. The presented robust MPC scheme constitutes a one-layer approach that unifies the often separated planning and control layers, by directly computing the control command based on a reference and possibly obstacle positions. As a separate contribution, we show how the computation time of the MPC can be drastically reduced by approximating the MPC law with a NN controller. The NN is trained and validated from offline samples of the MPC, yielding statistical guarantees, and used in lieu thereof at run time. Our experiments on a state-of-the-art robot manipulator are the first to show that both the proposed robust and approximate MPC schemes scale to real-world robotic systems.
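A minimal sketch of the "approximate the MPC law offline" step (not the paper's implementation; the robust MPC is replaced by a placeholder controller, and the statistical validation step is omitted): sample the controller offline, fit a neural network to imitate it, and query the network at run time.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def solve_robust_mpc(state, reference):
    """Placeholder for the robust setpoint-tracking MPC: a saturated proportional law
    so the sketch runs; the real controller solves a constrained optimization online."""
    return np.clip(2.0 * (reference - state), -1.0, 1.0)

# 1) Sample the (placeholder) MPC law offline over the relevant operating region.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(2000, 2))
refs = rng.uniform(-1, 1, size=(2000, 2))
controls = np.array([solve_robust_mpc(s, r) for s, r in zip(states, refs)])

# 2) Fit a neural network to imitate the MPC law from the offline samples.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
net.fit(np.hstack([states, refs]), controls)

# 3) At run time, query the cheap network instead of solving the MPC problem.
x, x_ref = np.array([0.3, -0.2]), np.array([0.0, 0.0])
print(net.predict(np.hstack([x, x_ref]).reshape(1, -1)))
```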

arXiv PDF DOI [BibTex]

2009


Valero-Cuevas, F., Hoffmann, H., Kurse, M. U., Kutch, J. J., Theodorou, E. A.

IEEE Reviews in Biomedical Engineering (all authors contributed equally), (2):110-135, 2009, clmc (article)

Abstract
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data.

link (url) [BibTex]


Adaptive compliance control of a multi-legged stair-climbing robot based on proprioceptive data

Waldron, K. J., Eich, M., Grimminger, F., Kirchner, F.

Industrial Robot: An International Journal, 36, pages: 331-339, Emerald Group Publishing Limited, June 2009 (article)

DOI [BibTex]


On-line learning and modulation of periodic movements with nonlinear dynamical systems

Gams, A., Ijspeert, A., Schaal, S., Lenarčič, J.

Autonomous Robots, 27(1):3-23, 2009, clmc (article)

Abstract
The paper presents a two-layered system for (1) learning and encoding a periodic signal without any knowledge of its frequency and waveform, and (2) modulating the learned periodic trajectory in response to external events. The system is used to learn periodic tasks on a humanoid HOAP-2 robot. The first layer of the system is a dynamical system responsible for extracting the fundamental frequency of the input signal, based on adaptive frequency oscillators. The second layer is a dynamical system responsible for learning of the waveform based on a built-in learning algorithm. By combining the two dynamical systems into one system we can rapidly teach new trajectories to robots without any knowledge of the frequency of the demonstration signal. The system extracts and learns only one period of the demonstration signal. Furthermore, the trajectories are robust to perturbations and can be modulated to cope with a dynamic environment. The system is computationally inexpensive, works on-line for any periodic signal, requires no additional signal processing to determine the frequency of the input signal and can be applied in parallel to multiple dimensions. Additionally, it can adapt to changes in frequency and shape, e.g. to non-stationary signals, such as hand-generated signals and human demonstrations.
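For illustration, here is a common phase-oscillator form of an adaptive frequency oscillator, the kind of mechanism the first layer relies on (the paper's exact equations and the waveform-learning second layer are not reproduced here; gains and signals are made up):

```python
import numpy as np

def adapt_frequency(signal, dt, K=5.0, omega0=7.0):
    """Phase-oscillator form of an adaptive frequency oscillator: the frequency
    estimate omega drifts until the oscillator phase-locks to the input, so omega
    ends up near the fundamental frequency of the input signal."""
    phi, omega = 0.0, omega0
    for f in signal:
        coupling = K * f * np.sin(phi)
        phi += (omega - coupling) * dt
        omega += -coupling * dt
    return omega

dt = 0.001
t = np.arange(0.0, 30.0, dt)
target = 2 * np.pi * 1.3                        # "unknown" input frequency in rad/s
print(adapt_frequency(np.sin(target * t), dt))  # prints a value close to ~8.2 rad/s
```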

link (url) [BibTex]


Local dimensionality reduction for non-parametric regression

Hoffman, H., Schaal, S., Vijayakumar, S.

Neural Processing Letters, 2009, clmc (article)

Abstract
Locally-weighted regression is a computationally-efficient technique for non-linear regression. However, for high-dimensional data, this technique becomes numerically brittle and computationally too expensive if many local models need to be maintained simultaneously. Thus, local linear dimensionality reduction combined with locally-weighted regression seems to be a promising solution. In this context, we review linear dimensionality-reduction methods, compare their performance on nonparametric locally-linear regression, and discuss their ability to extend to incremental learning. The considered methods belong to the following three groups: (1) reducing dimensionality only on the input data, (2) modeling the joint input-output data distribution, and (3) optimizing the correlation between projection directions and output data. Group 1 contains principal component regression (PCR); group 2 contains principal component analysis (PCA) in joint input and output space, factor analysis, and probabilistic PCA; and group 3 contains reduced rank regression (RRR) and partial least squares (PLS) regression. Among the tested methods, only group 3 managed to achieve robust performance even for a non-optimal number of components (factors or projection directions). In contrast, group 1 and 2 failed for fewer components since these methods rely on the correct estimate of the true intrinsic dimensionality. In group 3, PLS is the only method for which a computationally-efficient incremental implementation exists. Thus, PLS appears to be ideally suited as a building block for a locally-weighted regressor in which projection directions are incrementally added on the fly.
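A toy contrast between group 1 and group 3 (not the paper's benchmark; data and component counts are invented): when the output depends on a low-variance input direction and the number of components is deliberately too small, principal component regression misses the relevant direction, while PLS, which selects components by input-output covariance, does not.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n, d = 1000, 10
scales = np.array([10.0, 5.0, 2.0] + [1.0] * (d - 3))   # input variances, largest first
X = rng.normal(size=(n, d)) * np.sqrt(scales)
y = X[:, 2] + 0.1 * rng.normal(size=n)   # output depends on the third-largest direction

k = 2                                    # deliberately fewer components than needed
pcr = make_pipeline(PCA(n_components=k), LinearRegression()).fit(X, y)
pls = PLSRegression(n_components=k).fit(X, y)
print("PCR R^2:", round(pcr.score(X, y), 2))  # near 0: top-variance PCs miss the relevant direction
print("PLS R^2:", round(pls.score(X, y), 2))  # high: components follow input-output covariance
```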

link (url) [BibTex]


Incorporating Muscle Activation-Contraction dynamics to an optimal control framework for finger movements

Theodorou, Evangelos A., Valero-Cuevas, Francisco J.

Abstracts of Neural Control of Movement Conference (NCM 2009), 2009, clmc (article)

Abstract
Recent experimental and theoretical work [1] investigated the neural control of contact transition between motion and force during tapping with the index finger as a nonlinear optimization problem. Such transitions from motion to well-directed contact force are a fundamental part of dexterous manipulation. There are 3 alternative hypotheses of how this transition could be accomplished by the nervous system as a function of changes in direction and magnitude of the torque vector controlling the finger. These hypotheses are 1) an initial change in direction with a subsequent change in magnitude of the torque vector; 2) an initial change in magnitude with a subsequent directional change of the torque vector; and 3) a simultaneous and proportionally equal change of both direction and magnitude of the torque vector. Experimental work in [2] shows that the nervous system selects the first strategy, and in [1] we suggest that this may in fact be the optimal strategy. In [4] the framework of the Iterative Linear Quadratic Optimal Regulator (ILQR) was extended to incorporate motion and force control. However, our prior simulation work assumed direct and instantaneous control of joint torques, which ignores the known delays and filtering properties of skeletal muscle. In this study, we implement an ILQR controller for a more biologically plausible biomechanical model of the index finger than [4], and add activation-contraction dynamics to the system to simulate muscle function. The planar biomechanical model includes the kinematics of the 3 joints while the applied torques are driven by activation-contraction dynamics with biologically plausible time constants [3]. In agreement with our experimental work [2], the task is to, within 500 ms, move the finger from a given resting configuration to a target configuration with a desired terminal velocity. ILQR does not only stabilize the finger dynamics according to the objective function, but it also generates smooth joint space trajectories with minimal tuning and without an a-priori initial control policy (which is difficult to find for highly dimensional biomechanical systems). Furthermore, the use of this optimal control framework and the addition of activation-contraction dynamics considers the full nonlinear dynamics of the index finger and produces a sequence of postures which are compatible with experimental motion data [2]. These simulations combined with prior experimental results suggest that optimal control is a strong candidate for the generation of finger movements prior to abrupt motion-to-force transitions. This work is funded in part by grants NIH R01 0505520 and NSF EFRI-0836042 to Dr. Francisco J. Valero-Cuevas. [1] Venkadesan M, Valero-Cuevas FJ. Effects of neuromuscular lags on controlling contact transitions. Philosophical Transactions of the Royal Society A, 2008. [2] Venkadesan M, Valero-Cuevas FJ. Neural control of motion-to-force transitions with the fingertip. J. Neurosci., Feb 2008; 28: 1366-1373. [3] Zajac. Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control. Crit Rev Biomed Eng, 17. [4] Li W, Valero-Cuevas FJ. Linear quadratic optimal control of contact transition with fingertip. ACC 2009.

PDF [BibTex]



2008


Learning to control in operational space

Peters, J., Schaal, S.

International Journal of Robotics Research, 27, pages: 197-212, 2008, clmc (article)

Abstract
One of the most general frameworks for phrasing control problems for complex, redundant robots is operational space control. However, while this framework is of essential importance for robotics and well-understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In this paper, we suggest a learning approach for operational space control as a direct inverse model learning problem. A first important insight for this paper is that a physically correct solution to the inverse problem with redundant degrees-of-freedom does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component for our work is based on the insight that many operational space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward. We employ an expectation-maximization policy search algorithm in order to solve this problem. Evaluations on a three degrees of freedom robot arm are used to illustrate the suggested approach. The application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex high degree-of-freedom robots. We also show that the proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.

link (url) DOI [BibTex]


Adaptation to a sub-optimal desired trajectory

M. Mistry, E. A. G. L. T. Y. S. S. M. K.

Advances in Computational Motor Control VII, Symposium at the Society for Neuroscience Meeting, Washington DC, 2008, 2008, clmc (article)

PDF [BibTex]


Operational space control: A theoretical and empirical comparison

Nakanishi, J., Cory, R., Mistry, M., Peters, J., Schaal, S.

International Journal of Robotics Research, 27(6):737-757, 2008, clmc (article)

Abstract
Dexterous manipulation with a highly redundant movement system is one of the hallmarks of human motor skills. From numerous behavioral studies, there is strong evidence that humans employ compliant task space control, i.e., they focus control only on task variables while keeping redundant degrees-of-freedom as compliant as possible. This strategy is robust towards unknown disturbances and simultaneously safe for the operator and the environment. The theory of operational space control in robotics aims to achieve similar performance properties. However, despite various compelling theoretical lines of research, advanced operational space control is hardly found in actual robotics implementations, in particular new kinds of robots like humanoids and service robots, which would strongly profit from compliant dexterous manipulation. To analyze the pros and cons of different approaches to operational space control, this paper focuses on a theoretical and empirical evaluation of different methods that have been suggested in the literature, but also some new variants of operational space controllers. We address formulations at the velocity, acceleration and force levels. First, we formulate all controllers in a common notational framework, including quaternion-based orientation control, and discuss some of their theoretical properties. Second, we present experimental comparisons of these approaches on a seven-degree-of-freedom anthropomorphic robot arm with several benchmark tasks. As an aside, we also introduce a novel parameter estimation algorithm for rigid body dynamics, which ensures physical consistency, as this issue was crucial for our successful robot implementations. Our extensive empirical results demonstrate that one of the simplified acceleration-based approaches can be advantageous in terms of task performance, ease of parameter tuning, and general robustness and compliance in face of inevitable modeling errors.

link (url) [BibTex]


A library for locally weighted projection regression

Klanke, S., Vijayakumar, S., Schaal, S.

Journal of Machine Learning Research, 9, pages: 623-626, 2008, clmc (article)

Abstract
In this paper we introduce an improved implementation of locally weighted projection regression (LWPR), a supervised learning algorithm that is capable of handling high-dimensional input data. As the key features, our code supports multi-threading, is available for multiple platforms, and provides wrappers for several programming languages.

link (url) [BibTex]


Adaptive stair-climbing behaviour with a hybrid legged-wheeled robot

Eich, M., Grimminger, F., Kirchner, F.

In Advances In Mobile Robotics, pages: 768-775, World Scientific, August 2008 (incollection)

DOI [BibTex]


Optimization strategies in human reinforcement learning

Hoffmann, H., Theodorou, E., Schaal, S.

Advances in Computational Motor Control VII, Symposium at the Society for Neuroscience Meeting, Washington DC, 2008, 2008, clmc (article)

PDF [BibTex]

2005


Composite adaptive control with locally weighted statistical learning

Nakanishi, J., Farrell, J. A., Schaal, S.

Neural Networks, 18(1):71-90, January 2005, clmc (article)

Abstract
This paper introduces a provably stable learning adaptive control framework with statistical learning. The proposed algorithm employs nonlinear function approximation with automatic growth of the learning network according to the nonlinearities and the working domain of the control system. The unknown function in the dynamical system is approximated by piecewise linear models using a nonparametric regression technique. Local models are allocated as necessary and their parameters are optimized on-line. Inspired by composite adaptive control methods, the proposed learning adaptive control algorithm uses both the tracking error and the estimation error to update the parameters. We first discuss statistical learning of nonlinear functions, and motivate our choice of the locally weighted learning framework. Second, we begin with a class of first order SISO systems for theoretical development of our learning adaptive control framework, and present a stability proof including a parameter projection method that is needed to avoid potential singularities during adaptation. Then, we generalize our adaptive controller to higher order SISO systems, and discuss further extension to MIMO problems. Finally, we evaluate our theoretical control framework in numerical simulations to illustrate the effectiveness of the proposed learning adaptive controller for rapid convergence and high accuracy of control.
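To make "uses both the tracking error and the estimation error" concrete, a generic composite adaptation law of the kind this work builds on looks as follows (a textbook form shown purely for illustration; the paper's actual update, with locally weighted models and parameter projection, is more elaborate):

```latex
% Illustrative composite update (not the paper's exact equations):
% \hat{f}(x) = \hat{\theta}^{\top}\phi(x) is the learned approximation of the unknown
% dynamics, s is the (filtered) tracking error, \epsilon the prediction error,
% and \Gamma > 0 an adaptation gain.
\begin{aligned}
  \epsilon &= \hat{\theta}^{\top}\phi(x) - f_{\text{observed}}(x),\\
  \dot{\hat{\theta}} &= -\,\Gamma\,\phi(x)\,s \;-\; \Gamma\,\phi(x)\,\epsilon .
\end{aligned}
```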

link (url) [BibTex]


A model of smooth pursuit based on learning of the target dynamics using only retinal signals

Shibata, T., Tabata, H., Schaal, S., Kawato, M.

Neural Networks, 18, pages: 213-225, 2005, clmc (article)

Abstract
While the predictive nature of the primate smooth pursuit system has been evident through several behavioural and neurophysiological experiments, few models have attempted to explain these results comprehensively. The model we propose in this paper is in line with previous models employing optimal control theory; however, we hypothesize two new issues: (1) the medial superior temporal (MST) area in the cerebral cortex implements a recurrent neural network (RNN) in order to predict the current or future target velocity, and (2) a forward model of the target motion is acquired by on-line learning. We use simulation studies to demonstrate how our new model supports these hypotheses.

link (url) [BibTex]


Parametric and Non-Parametric approaches for nonlinear tracking of moving objects

Hidaka, Y, Theodorou, E.

Technical Report-2005-1, 2005, clmc (article)

PDF [BibTex]

2004


Discovering optimal imitation strategies

Billard, A., Epars, Y., Calinon, S., Cheng, G., Schaal, S.

Robotics and Autonomous Systems, 47(2-3):68-77, 2004, clmc (article)

Abstract
This paper develops a general policy for learning relevant features of an imitation task. We restrict our study to imitation of manipulative tasks or of gestures. The imitation process is modeled as a hierarchical optimization system, which minimizes the discrepancy between two multi-dimensional datasets. To classify across manipulation strategies, we apply a probabilistic analysis to data in Cartesian and joint spaces. We determine a general metric that optimizes the policy of task reproduction, following strategy determination. The model successfully discovers strategies in six different imitative tasks and controls task reproduction by a full body humanoid robot.

[BibTex]


Rhythmic movement is not discrete

Schaal, S., Sternad, D., Osu, R., Kawato, M.

Nature Neuroscience, 7(10):1137-1144, 2004, clmc (article)

Abstract
Rhythmic movements, like walking, chewing, or scratching, are phylogenetically old motor behaviors found in many organisms, ranging from insects to primates. In contrast, discrete movements, like reaching, grasping, or kicking, are behaviors that have reached sophistication primarily in younger species, particularly in primates. Neurophysiological and computational research on arm motor control has focused almost exclusively on discrete movements, essentially assuming similar neural circuitry for rhythmic tasks. In contrast, many behavioral studies focused on rhythmic models, subsuming discrete movement as a special case. Here, using a human functional neuroimaging experiment, we show that in addition to areas activated in rhythmic movement, discrete movement involves several higher cortical planning areas, even though both movement conditions were confined to the same single wrist joint. These results provide the first neuroscientific evidence that rhythmic arm movement cannot be part of a more general discrete movement system, and may require separate neurophysiological and theoretical treatment.

link (url) [BibTex]


Learning from demonstration and adaptation of biped locomotion

Nakanishi, J., Morimoto, J., Endo, G., Cheng, G., Schaal, S., Kawato, M.

Robotics and Autonomous Systems, 47(2-3):79-91, 2004, clmc (article)

Abstract
In this paper, we introduce a framework for learning biped locomotion using dynamical movement primitives based on non-linear oscillators. Our ultimate goal is to establish a design principle of a controller in order to achieve natural human-like locomotion. We suggest dynamical movement primitives as a central pattern generator (CPG) of a biped robot, an approach we have previously proposed for learning and encoding complex human movements. Demonstrated trajectories are learned through movement primitives by locally weighted regression, and the frequency of the learned trajectories is adjusted automatically by a novel frequency adaptation algorithm based on phase resetting and entrainment of coupled oscillators. Numerical simulations and experimental implementation on a physical robot demonstrate the effectiveness of the proposed locomotion controller.

link (url) [BibTex]


Feedback error learning and nonlinear adaptive control

Nakanishi, J., Schaal, S.

Neural Networks, 17(10):1453-1465, 2004, clmc (article)

Abstract
In this paper, we present our theoretical investigations of the technique of feedback error learning (FEL) from the viewpoint of adaptive control. We first discuss the relationship between FEL and nonlinear adaptive control with adaptive feedback linearization, and show that FEL can be interpreted as a form of nonlinear adaptive control. Second, we present a Lyapunov analysis suggesting that the condition of strictly positive realness (SPR) associated with the tracking error dynamics is a sufficient condition for asymptotic stability of the closed-loop dynamics. Specifically, for a class of second order SISO systems, we show that this condition reduces to KD^2 > KP, where KP and KD are positive position and velocity feedback gains, respectively. Moreover, we provide a 'passivity'-based stability analysis which suggests that SPR of the tracking error dynamics is a necessary and sufficient condition for asymptotic hyperstability. Thus, the condition KD^2 > KP mentioned above is not only a sufficient but also a necessary condition to guarantee asymptotic hyperstability of FEL, i.e. the tracking error is bounded and asymptotically converges to zero. As a further point, we explore the adaptive control and FEL framework for feedforward control formulations, and derive an additional sufficient condition for asymptotic stability in the sense of Lyapunov. Finally, we present numerical simulations to illustrate the stability properties of FEL obtained from our mathematical analysis.
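A compact way to see where a condition of the form KD^2 > KP can come from (a sketch under the assumption that the SPR requirement is imposed on the closed-loop map below; the paper's full analysis is more careful):

```latex
% Assume the SPR condition is imposed on
%   H(s) = (K_D s + K_P) / (s^2 + K_D s + K_P).
% The real part of H(j\omega) has numerator K_P^2 + (K_D^2 - K_P)\omega^2,
% which stays strictly positive for all \omega (including the limit
% \omega \to \infty required for strict positive realness) exactly when K_D^2 > K_P.
H(s) = \frac{K_D\,s + K_P}{s^{2} + K_D\,s + K_P},
\qquad
\operatorname{Re}\!\left[H(j\omega)\right]
  = \frac{K_P^{2} + \left(K_D^{2} - K_P\right)\omega^{2}}
         {\left(K_P - \omega^{2}\right)^{2} + K_D^{2}\,\omega^{2}} .
```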

link (url) [BibTex]


Computational approaches to motor learning by imitation

Schaal, S., Ijspeert, A., Billard, A.

In The Neuroscience of Social Interaction, (1431):199-218, (Editors: Frith, C. D.;Wolpert, D.), Oxford University Press, Oxford, 2004, clmc (inbook)

Abstract
Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking - indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions.

link (url) [BibTex]

1999


Is imitation learning the route to humanoid robots?

Schaal, S.

Trends in Cognitive Sciences, 3(6):233-242, 1999, clmc (article)

Abstract
This review will focus on two recent developments in artificial intelligence and neural computation: learning from imitation and the development of humanoid robots. It will be postulated that the study of imitation learning offers a promising route to gain new insights into mechanisms of perceptual motor control that could ultimately lead to the creation of autonomous humanoid robots. This hope is justified because imitation learning channels research efforts towards three important issues: efficient motor learning, the connection between action and perception, and modular motor control in form of movement primitives. In order to make these points, first, a brief review of imitation learning will be given from the view of psychology and neuroscience. In these fields, representations and functional connections between action and perception have been explored that contribute to the understanding of motor acts of other beings. The recent discovery that some areas in the primate brain are active during both movement perception and execution provided a first idea of the possible neural basis of imitation. Secondly, computational approaches to imitation learning will be described, initially from the perspective of traditional AI and robotics, and then with a focus on neural network models and statistical learning research. Parallels and differences between biological and computational approaches to imitation will be highlighted. The review will end with an overview of current projects that actually employ imitation learning for humanoid robots.

link (url) [BibTex]


Nonparametric regression for learning nonlinear transformations

Schaal, S.

In Prerational Intelligence in Strategies, High-Level Processes and Collective Behavior, 2, pages: 595-621, (Editors: Ritter, H.;Cruse, H.;Dean, J.), Kluwer Academic Publishers, 1999, clmc (inbook)

Abstract
Information processing in animals and artificial movement systems consists of a series of transformations that map sensory signals to intermediate representations, and finally to motor commands. Given the physical and neuroanatomical differences between individuals and the need for plasticity during development, it is highly likely that such transformations are learned rather than pre-programmed by evolution. Such self-organizing processes, capable of discovering nonlinear dependencies between different groups of signals, are one essential part of prerational intelligence. While neural network algorithms seem to be the natural choice when searching for solutions for learning transformations, this paper will take a more careful look at which types of neural networks are actually suited for the requirements of an autonomous learning system. The approach that we will pursue is guided by recent developments in learning theory that have linked neural network learning to well established statistical theories. In particular, this new statistical understanding has given rise to the development of neural network systems that are directly based on statistical methods. One family of such methods stems from nonparametric regression. This paper will compare nonparametric learning with the more widely used parametric counterparts in a non technical fashion, and investigate how these two families differ in their properties and their applicabilities. We will argue that nonparametric neural networks offer a set of characteristics that make them a very promising candidate for on-line learning in autonomous system.

link (url) [BibTex]


Segmentation of endpoint trajectories does not imply segmented control

Sternad, D., Schaal, S.

Experimental Brain Research, 124(1):118-136, 1999, clmc (article)

Abstract
While it is generally assumed that complex movements consist of a sequence of simpler units, the quest to define these units of action, or movement primitives, still remains an open question. In this context, two hypotheses of movement segmentation of endpoint trajectories in 3D human drawing movements are re-examined: (1) the stroke-based segmentation hypothesis based on the results that the proportionality coefficient of the 2/3 power law changes discontinuously with each new 'stroke', and (2) the segmentation hypothesis inferred from the observation of piecewise planar endpoint trajectories of 3D drawing movements. In two experiments human subjects performed a set of elliptical and figure-8 patterns of different sizes and orientations using their whole arm in 3D. The kinematic characteristics of the endpoint trajectories and the seven joint angles of the arm were analyzed. While the endpoint trajectories produced similar segmentation features as reported in the literature, analyses of the joint angles show no obvious segmentation but rather continuous oscillatory patterns. By approximating the joint angle data of human subjects with sinusoidal trajectories, and by implementing this model on a 7-degree-of-freedom anthropomorphic robot arm, it is shown that such a continuous movement strategy can produce exactly the same features as observed by the above segmentation hypotheses. The origin of this apparent segmentation of endpoint trajectories is traced back to the nonlinear transformations of the forward kinematics of human arms. The presented results demonstrate that principles of discrete movement generation may not be reconciled with those of rhythmic movement as easily as has been previously suggested, while the generalization of nonlinear pattern generators to arm movements can offer an interesting alternative to approach the question of units of action.
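The core argument can be reproduced in a toy planar two-link version (the paper uses 3D drawing data and a 7-DOF arm and robot; link lengths, amplitudes, and phases below are arbitrary): purely sinusoidal joint angles, passed through the forward kinematics, already yield an endpoint path whose 2/3 power-law coefficient varies along the trajectory, which is the kind of variation the stroke-based segmentation hypothesis reads as segment boundaries.

```python
import numpy as np

# Planar two-link arm with purely sinusoidal joint angles (all values arbitrary).
l1, l2 = 0.3, 0.3
dt = 0.001
t = np.arange(0.0, 2.0, dt)
q1 = 0.6 * np.sin(2 * np.pi * t)                   # shoulder angle (rad)
q2 = 1.0 + 0.5 * np.sin(2 * np.pi * t + 1.2)       # elbow angle (rad)

# Forward kinematics of the endpoint.
x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)

vx, vy = np.gradient(x, dt), np.gradient(y, dt)
ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
speed = np.hypot(vx, vy)
curvature = np.abs(vx * ay - vy * ax) / np.maximum(speed, 1e-9) ** 3

# The 2/3 power-law coefficient gamma = v * curvature^(1/3) is far from constant,
# even though the joint trajectories contain no segments at all.
gamma = speed * curvature ** (1.0 / 3.0)
print("gamma ranges from", round(float(gamma.min()), 3), "to", round(float(gamma.max()), 3))
```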

link (url) [BibTex]

1998


Constructive incremental learning from only local information

Schaal, S., Atkeson, C. G.

Neural Computation, 10(8):2047-2084, 1998, clmc (article)

Abstract
We introduce a constructive, incremental learning system for regression problems that models data by means of spatially localized linear models. In contrast to other approaches, the size and shape of the receptive field of each locally linear model as well as the parameters of the locally linear model itself are learned independently, i.e., without the need for competition or any other kind of communication. Independent learning is accomplished by incrementally minimizing a weighted local cross validation error. As a result, we obtain a learning system that can allocate resources as needed while dealing with the bias-variance dilemma in a principled way. The spatial localization of the linear models increases robustness towards negative interference. Our learning system can be interpreted as a nonparametric adaptive bandwidth smoother, as a mixture of experts where the experts are trained in isolation, and as a learning system which profits from combining independent expert knowledge on the same problem. This paper illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields. 

link (url) [BibTex]


Local adaptive subspace regression

Vijayakumar, S., Schaal, S.

Neural Processing Letters, 7(3):139-149, 1998, clmc (article)

Abstract
Incremental learning of sensorimotor transformations in high dimensional spaces is one of the basic prerequisites for the success of autonomous robot devices as well as biological movement systems. So far, due to sparsity of data in high dimensional spaces, learning in such settings requires a significant amount of prior knowledge about the learning task, usually provided by a human expert. In this paper we suggest a partial revision of the view. Based on empirical studies, we observed that, despite being globally high dimensional and sparse, data distributions from physical movement systems are locally low dimensional and dense. Under this assumption, we derive a learning algorithm, Locally Adaptive Subspace Regression, that exploits this property by combining a dynamically growing local dimensionality reduction technique  as a preprocessing step with a nonparametric learning technique, locally weighted regression, that also learns the region of validity of the regression. The usefulness of the algorithm and the validity of its assumptions are illustrated for a synthetic data set, and for data of the inverse dynamics of human arm movements and an actual 7 degree-of-freedom anthropomorphic robot arm. 

link (url) [BibTex]

1995


Batting a ball: Dynamics of a rhythmic skill

Sternad, D., Schaal, S., Atkeson, C. G.

In Studies in Perception and Action, pages: 119-122, (Editors: Bardy, B.;Bostma, R.;Guiard, Y.), Erlbaum, Hillsdale, NJ, 1995, clmc (inbook)

[BibTex]


Memory-based neural networks for robot learning

Atkeson, C. G., Schaal, S.

Neurocomputing, 9, pages: 1-27, 1995, clmc (article)

Abstract
This paper explores a memory-based approach to robot learning, using memory-based neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network designs to explicitly store training data and do nearest neighbor lookup in the early 1960s. In this paper their nearest neighbor network is augmented with a local model network, which fits a local model to a set of nearest neighbors. This network design is equivalent to a statistical approach known as locally weighted regression, in which a local model is formed to answer each query, using a weighted regression in which nearby points (similar experiences) are weighted more than distant points (less relevant experiences). We illustrate this approach by describing how it has been used to enable a robot to learn a difficult juggling task. Keywords: memory-based, robot learning, locally weighted regression, nearest neighbor, local models.
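A minimal sketch of the locally weighted regression query described above (illustrative; the kernel, bandwidth, and data are invented, and the paper's juggling application is not reproduced): all experiences are stored, and each query is answered by a linear model fit with weights that favor nearby points.

```python
import numpy as np

def lwr_predict(query, X, Y, bandwidth=0.3):
    """Answer one query with a locally weighted linear model: nearby stored
    experiences receive large weights, distant ones small weights."""
    X = np.atleast_2d(X)
    dists = np.linalg.norm(X - query, axis=1)
    weights = np.exp(-0.5 * (dists / bandwidth) ** 2)    # Gaussian kernel weights
    sw = np.sqrt(weights)[:, None]                       # weighted least squares via row scaling
    Xa = np.hstack([X, np.ones((len(X), 1))])            # affine local model
    beta, *_ = np.linalg.lstsq(Xa * sw, Y * sw[:, 0], rcond=None)
    return np.append(query, 1.0) @ beta

# Toy memory: experiences of y = sin(x); the query is answered from stored data only.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
Y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
print(lwr_predict(np.array([1.0]), X, Y))                # close to sin(1.0) ~ 0.84
```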

link (url) [BibTex]

1991


Ways to smarter CAD-systems

Ehrlenspiel, K., Schaal, S.

In Proceedings of ICED'91, pages: 10-16, (Editors: Hubka), Heurista, Edition Schriftenreihe WDK 21, Zürich, 1991, clmc (inbook)

[BibTex]