

2008


Consistent Minimization of Clustering Objective Functions

von Luxburg, U., Bubeck, S., Jegelka, S., Kaufmann, M.

In Advances in Neural Information Processing Systems 20, pages: 961-968, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Clustering is often formulated as a discrete optimization problem. The objective is to find, among all partitions of the data set, the best one according to some quality measure. However, in the statistical setting where we assume that the finite data set has been sampled from some underlying space, the goal is not to find the best partition of the given sample, but to approximate the true partition of the underlying space. We argue that the discrete optimization approach usually does not achieve this goal. As an alternative, we suggest the paradigm of "nearest neighbor clustering". Instead of selecting the best out of all partitions of the sample, it only considers partitions in some restricted function class. Using tools from statistical learning theory we prove that nearest neighbor clustering is statistically consistent. Moreover, its worst case complexity is polynomial by construction, and it can be implemented with small average case complexity using branch and bound.
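The restricted-function-class idea can be illustrated with a small sketch (my own toy code, not the authors' implementation: it brute-forces all seed labellings instead of the paper's branch and bound, and uses the k-means objective as the quality measure):

```python
import itertools
import numpy as np

def nearest_neighbor_clustering(X, n_seeds, n_clusters, rng=None):
    """Pick m seed points; consider only partitions that are constant on the
    Voronoi cells of the seeds, and return the seed labelling that minimises
    the within-cluster sum of squares (brute force over seed labellings)."""
    rng = np.random.default_rng(rng)
    seeds = X[rng.choice(len(X), size=n_seeds, replace=False)]
    # each data point inherits the label of its nearest seed
    cell = np.argmin(((X[:, None, :] - seeds[None, :, :]) ** 2).sum(-1), axis=1)
    best_cost, best_labels = np.inf, None
    for labelling in itertools.product(range(n_clusters), repeat=n_seeds):
        labels = np.asarray(labelling)[cell]
        cost = 0.0
        for k in range(n_clusters):
            pts = X[labels == k]
            if len(pts):
                cost += ((pts - pts.mean(0)) ** 2).sum()
        if cost < best_cost:
            best_cost, best_labels = cost, labels
    return best_labels, best_cost
```

With m seeds and K clusters only K^m candidate partitions exist, so the search is polynomial in the sample size for fixed m; the paper's branch-and-bound scheme prunes most candidates in practice.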

ei

PDF Web [BibTex]

A Single-shot Measurement of the Energy of Product States in a Translation Invariant Spin Chain Can Replace Any Quantum Computation

Janzing, D., Wocjan, P., Zhang, S.

New Journal of Physics, 10(093004):1-18, September 2008 (article)

Abstract
In measurement-based quantum computation, quantum algorithms are implemented via sequences of measurements. We describe a translationally invariant finite-range interaction on a one-dimensional qudit chain and prove that a single-shot measurement of the energy of an appropriate computational basis state with respect to this Hamiltonian provides the output of any quantum circuit. The required measurement accuracy scales inverse polynomially with the size of the simulated quantum circuit. This shows that the implementation of energy measurements on generic qudit chains is as hard as the realization of quantum computation. Here, a 'measurement' is any procedure that samples from the spectral measurement induced by the observable and the state under consideration. As opposed to measurement-based quantum computation, the post-measurement state is irrelevant.

ei

PDF DOI [BibTex]


A Kernel Statistical Test of Independence

Gretton, A., Fukumizu, K., Teo, C., Song, L., Schölkopf, B., Smola, A.

In Advances in neural information processing systems 20, pages: 585-592, (Editors: JC Platt and D Koller and Y Singer and S Roweis), Curran, Red Hook, NY, USA, 21st Annual Conference on Neural Information Processing Systems (NIPS), September 2008 (inproceedings)

Abstract
Whereas kernel measures of independence have been widely applied in machine learning (notably in kernel ICA), there is as yet no method to determine whether they have detected statistically significant dependence. We provide a novel test of the independence hypothesis for one particular kernel independence measure, the Hilbert-Schmidt independence criterion (HSIC). The resulting test costs O(m^2), where m is the sample size. We demonstrate that this test outperforms established contingency table-based tests. Finally, we show the HSIC test also applies to text (and to structured data more generally), for which no other independence test presently exists.
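A minimal sketch of the biased empirical HSIC statistic (my own illustration with a Gaussian kernel; the paper's contribution is additionally deriving the null distribution needed to turn the statistic into a test):

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gaussian kernel Gram matrix of a sample X (rows = observations)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC: (1/m^2) trace(K H L H), H the centring matrix."""
    m = len(X)
    H = np.eye(m) - np.ones((m, m)) / m
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / m ** 2
```

In a test, the O(m^2) statistic is compared against a significance threshold obtained from a permutation-based or approximated null distribution.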

ei

PDF Web [BibTex]

Fitness Expectation Maximization

Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.

In PPSN 2008, pages: 337-346, (Editors: Rudolph, G. , T. Jansen, S. Lucas, C. Poloni, N. Beume), Springer, Berlin, Germany, 10th International Conference on Parallel Problem Solving From Nature, September 2008 (inproceedings)

Abstract
We present Fitness Expectation Maximization (FEM), a novel method for performing ‘black box’ function optimization. FEM searches the fitness landscape of an objective function using an instantiation of the well-known Expectation Maximization algorithm, producing search points to match the sample distribution weighted according to higher expected fitness. FEM updates both candidate solution parameters and the search policy, which is represented as a multinormal distribution. Inheriting EM’s stability and strong guarantees, the method is both elegant and competitive with some of the best heuristic search methods in the field, and performs well on a number of unimodal and multimodal benchmark tasks. To illustrate the potential practical applications of the approach, we also show experiments on finding the parameters for a controller of the challenging non-Markovian double pole balancing task.
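The core loop can be sketched as follows (an illustrative simplification under my own assumptions: a rank-based top-quartile weighting stands in for the paper's fitness shaping, and the M-step simply re-fits the multinormal search distribution to the weighted samples):

```python
import numpy as np

def fem_step(mean, cov, fitness, n_samples=200, rng=None):
    """One EM-style update: sample search points, weight them by shaped
    fitness, and re-fit the multinormal search distribution (M-step)."""
    rng = np.random.default_rng(rng)
    Z = rng.multivariate_normal(mean, cov, size=n_samples)
    f = np.array([fitness(z) for z in Z])
    # rank-based fitness shaping: weight only the top quartile of samples
    w = (f >= np.quantile(f, 0.75)).astype(float)
    w /= w.sum()
    new_mean = w @ Z
    D = Z - new_mean
    new_cov = (w[:, None] * D).T @ D + 1e-8 * np.eye(len(mean))
    return new_mean, new_cov
```

Iterating the step on a smooth fitness landscape moves the search distribution toward high-fitness regions while its covariance contracts.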

ei

PDF Web DOI [BibTex]

Voluntary Brain Regulation and Communication with ECoG-Signals

Hinterberger, T., Widmann, G., Lal, T., Hill, J., Tangermann, M., Rosenstiel, W., Schölkopf, B., Elger, C., Birbaumer, N.

Epilepsy and Behavior, 13(2):300-306, August 2008 (article)

Abstract
Brain–computer interfaces (BCIs) can be used for communication in writing without muscular activity or for learning to control seizures by voluntary regulation of brain signals such as the electroencephalogram (EEG). Three of five patients with epilepsy were able to spell their names with electrocorticogram (ECoG) signals derived from motor-related areas within only one or two training sessions. Imagery of finger or tongue movements was classified with support-vector classification of autoregressive coefficients derived from the ECoG signals. After training of the classifier, binary classification responses were used to select letters from a computer-generated menu. Offline analysis showed increased theta activity in the unsuccessful patients, whereas the successful patients exhibited dominant sensorimotor rhythms that they could control. The high spatial resolution and increased signal-to-noise ratio in ECoG signals, combined with short training periods, may offer an alternative for communication in complete paralysis, locked-in syndrome, and motor restoration.

ei

PDF DOI [BibTex]

Partial Least Squares Regression for Graph Mining

Saigo, H., Krämer, N., Tsuda, K.

In KDD2008, pages: 578-586, (Editors: Li, Y. , B. Liu, S. Sarawagi), ACM Press, New York, NY, USA, 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2008 (inproceedings)

Abstract
Attributed graphs are increasingly common in many application domains such as chemistry, biology and text processing. A central issue in graph mining is how to collect informative subgraph patterns for a given learning task. We propose an iterative mining method based on partial least squares regression (PLS). To apply PLS to graph data, a sparse version of PLS is developed first and then it is combined with a weighted pattern mining algorithm. The mining algorithm is iteratively called with different weight vectors, creating one latent component per mining call. Our method, graph PLS, is efficient and easy to implement, because the weight vector is updated with elementary matrix calculations. In experiments, our graph PLS algorithm showed competitive prediction accuracies on many chemical datasets, and its efficiency was significantly superior to graph boosting (gboost) and the naive method based on frequent graph mining.

ei

PDF Web DOI [BibTex]

Semi-Supervised Laplacian Regularization of Kernel Canonical Correlation Analysis

Blaschko, M., Lampert, C., Gretton, A.

In ECML PKDD 2008, pages: 133-145, (Editors: Daelemans, W. , B. Goethals, K. Morik), Springer, Berlin, Germany, 19th European Conference on Machine Learning, August 2008 (inproceedings)

Abstract
Kernel canonical correlation analysis (KCCA) is a dimensionality reduction technique for paired data. By finding directions that maximize correlation, KCCA learns representations that are more closely tied to the underlying semantics of the data rather than noise. However, meaningful directions are not only those that have high correlation to another modality, but also those that capture the manifold structure of the data. We propose a method that is simultaneously able to find highly correlated directions that are also located on high variance directions along the data manifold. This is achieved by the use of semi-supervised Laplacian regularization of KCCA. We show experimentally that Laplacian regularized training improves class separation over KCCA with only Tikhonov regularization, while causing no degradation in the correlation between modalities. We propose a model selection criterion based on the Hilbert-Schmidt norm of the semi-supervised Laplacian regularized cross-covariance operator, which we compute in closed form.

ei

PDF PDF DOI [BibTex]

RKHS Representation of Measures Applied to Homogeneity, Independence, and Fourier Optics

Schölkopf, B., Sriperumbudur, B., Gretton, A., Fukumizu, K.

In OWR 2008, pages: 42-44, (Editors: K Jetter and S Smale and D-X Zhou), Mathematisches Forschungsinstitut, Oberwolfach-Walke, Germany, 30. Oberwolfach Report, August 2008 (inproceedings)

ei

PDF PDF [BibTex]

Learning Robot Dynamics for Computed Torque Control Using Local Gaussian Processes Regression

Nguyen-Tuong, D., Peters, J.

In LAB-RS 2008, pages: 59-64, (Editors: Stoica, A. , E. Tunstel, T. Huntsberger, T. Arslan, S. Vijayakumar, A. O. El-Rayis), IEEE Computer Society, Los Alamitos, CA, USA, 2008 ECSIS Symposium on Learning and Adaptive Behaviors for Robotic Systems, August 2008 (inproceedings)

Abstract
Accurate models of the robot dynamics allow the design of significantly more precise, energy-efficient and more compliant computed torque control for robots. However, in some cases the accuracy of rigid-body models does not suffice for sound control performance due to unmodeled nonlinearities such as hydraulic cables, complex friction, or actuator dynamics. In such cases, learning the models from data poses an interesting alternative, and estimating the dynamics model using regression techniques becomes an important problem. However, the most accurate regression methods, e.g. Gaussian process regression (GPR) and support vector regression (SVR), suffer from exceptionally high computational complexity, which to date prevents their use for large numbers of samples or for online learning. We propose an approximation to standard GPR using local Gaussian process models. Due to the reduced computational cost, local Gaussian processes (LGP) are capable of online learning. Comparisons with other nonparametric regression methods, e.g. standard GPR, SVR and locally weighted projection regression (LWPR), show that LGP has higher accuracy than LWPR and is close to the performance of standard GPR and SVR, while being sufficiently fast for online learning.

ei

PDF Web DOI [BibTex]

Multi-class Common Spatial Pattern and Information Theoretic Feature Extraction

Grosse-Wentrup, M., Buss, M.

IEEE Transactions on Biomedical Engineering, 55(8):1991-2000, August 2008 (article)

Abstract
We address two shortcomings of the common spatial patterns (CSP) algorithm for spatial filtering in the context of brain--computer interfaces (BCIs) based on electroencephalography/magnetoencephalography (EEG/MEG): First, the question of optimality of CSP in terms of the minimal achievable classification error remains unsolved. Second, CSP has been initially proposed for two-class paradigms. Extensions to multiclass paradigms have been suggested, but are based on heuristics. We address these shortcomings in the framework of information theoretic feature extraction (ITFE). We show that for two-class paradigms, CSP maximizes an approximation of mutual information of extracted EEG/MEG components and class labels. This establishes a link between CSP and the minimal classification error. For multiclass paradigms, we point out that CSP by joint approximate diagonalization (JAD) is equivalent to independent component analysis (ICA), and provide a method to choose those independent components (ICs) that approximately maximize mutual information of ICs and class labels. This eliminates the need for heuristics in multiclass CSP, and allows incorporating prior class probabilities. The proposed method is applied to the dataset IIIa of the third BCI competition, and is shown to increase the mean classification accuracy by 23.4% in comparison to multiclass CSP.
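For the two-class case, CSP itself reduces to whitening the pooled covariance followed by an eigendecomposition; a compact sketch (my own minimal formulation, not the paper's ITFE extension):

```python
import numpy as np

def csp(X1, X2):
    """Two-class CSP sketch: whiten the pooled covariance, then eigendecompose
    one class covariance in the whitened space. Rows of the returned matrix
    are spatial filters; the first and last rows give the extreme variance
    ratios between the classes. X1, X2: zero-mean (samples, channels) arrays."""
    C1 = X1.T @ X1 / len(X1)
    C2 = X2.T @ X2 / len(X2)
    d, U = np.linalg.eigh(C1 + C2)
    P = (U / np.sqrt(d)).T                  # whitening: P (C1 + C2) P.T = I
    lam, V = np.linalg.eigh(P @ C1 @ P.T)   # eigenvalues ascending, in [0, 1]
    return V.T @ P
```

The filtered variances are the features; the paper's contribution is showing which components to keep so as to (approximately) maximize mutual information with the class labels.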

ei

PDF Web DOI [BibTex]

A Decoupled Approach to Exemplar-based Unsupervised Learning

Nowozin, S., BakIr, G.

In ICML 2008, pages: 704-711, (Editors: Cohen, W. W., A. McCallum, S. Roweis), ACM Press, New York, NY, USA, 25th International Conference on Machine Learning, July 2008 (inproceedings)

Abstract
A recent trend in exemplar-based unsupervised learning is to formulate the learning problem as a convex optimization problem. Convexity is achieved by restricting the set of possible prototypes to training exemplars. In particular, this has been done for clustering, vector quantization and mixture model density estimation. In this paper we propose a novel algorithm that is theoretically and practically superior to these convex formulations. This is possible by posing the unsupervised learning problem as a single convex "master problem" with non-convex subproblems. We show that for the above learning tasks the subproblems are extremely well-behaved and can be solved efficiently.

ei

PDF Web DOI [BibTex]

Relating clustering stability to properties of cluster boundaries

Ben-David, S., von Luxburg, U.

In COLT 2008, pages: 379-390, (Editors: Servedio, R. A., T. Zhang), Omnipress, Madison, WI, USA, 21st Annual Conference on Learning Theory, July 2008 (inproceedings)

Abstract
In this paper, we investigate stability-based methods for cluster model selection, in particular to select the number K of clusters. The scenario under consideration is that clustering is performed by minimizing a certain clustering quality function, and that a unique global minimizer exists. On the one hand we show that stability can be upper bounded by certain properties of the optimal clustering, namely by the mass in a small tube around the cluster boundaries. On the other hand, we provide counterexamples which show that a reverse statement is not true in general. Finally, we give some examples and arguments why, from a theoretic point of view, using clustering stability in a high sample setting can be problematic. It can be seen that distribution-free guarantees bounding the difference between the finite sample stability and the “true stability” cannot exist, unless one makes strong assumptions on the underlying distribution.

ei

PDF Web [BibTex]

At-TAX: A Whole Genome Tiling Array Resource for Developmental Expression Analysis and Transcript Identification in Arabidopsis thaliana

Laubinger, S., Zeller, G., Henz, S., Sachsenberg, T., Widmer, C., Naouar, N., Vuylsteke, M., Schölkopf, B., Rätsch, G., Weigel, D.

Genome Biology, 9(7):R112, 1-16, July 2008 (article)

Abstract
Gene expression maps for model organisms, including Arabidopsis thaliana, have typically been created using gene-centric expression arrays. Here, we describe a comprehensive expression atlas, Arabidopsis thaliana Tiling Array Express (At-TAX), which is based on whole-genome tiling arrays. We demonstrate that tiling arrays are accurate tools for gene expression analysis and identified more than 1,000 unannotated transcribed regions. Visualizations of gene expression estimates, transcribed regions, and tiling probe measurements are accessible online at the At-TAX homepage.

ei

PDF DOI [BibTex]

Compressed Sensing and Bayesian Experimental Design

Seeger, M., Nickisch, H.

In ICML 2008, pages: 912-919, (Editors: Cohen, W. W., A. McCallum, S. Roweis), ACM Press, New York, NY, USA, 25th International Conference on Machine Learning, July 2008 (inproceedings)

Abstract
We relate compressed sensing (CS) with Bayesian experimental design and provide a novel efficient approximate method for the latter, based on expectation propagation. In a large comparative study about linearly measuring natural images, we show that the simple standard heuristic of measuring wavelet coefficients top-down systematically outperforms CS methods using random measurements; the sequential projection optimisation approach of (Ji & Carin, 2007) performs even worse. We also show that our own approximate Bayesian method is able to efficiently learn measurement filters on full images which outperform the wavelet heuristic. To our knowledge, ours is the first successful attempt at "learning compressed sensing" for images of realistic size. In contrast to common CS methods, our framework is not restricted to sparse signals, but can readily be applied to other notions of signal complexity or noise models. We give concrete ideas how our method can be scaled up to large signal representations.

ei

PDF PDF Web DOI [BibTex]

Tailoring density estimation via reproducing kernel moment matching

Song, L., Zhang, X., Smola, A., Gretton, A., Schölkopf, B.

In Proceedings of the 25th International Conference on Machine Learning, pages: 992-999, (Editors: WW Cohen and A McCallum and S Roweis), ACM Press, New York, NY, USA, ICML, July 2008 (inproceedings)

Abstract
Moment matching is a popular means of parametric density estimation. We extend this technique to nonparametric estimation of mixture models. Our approach works by embedding distributions into a reproducing kernel Hilbert space, and performing moment matching in that space. This allows us to tailor density estimators to a function class of interest (i.e., for which we would like to compute expectations). We show our density estimation approach is useful in applications such as message compression in graphical models, and image classification and retrieval.

ei

PDF PDF DOI [BibTex]

Injective Hilbert Space Embeddings of Probability Measures

Sriperumbudur, B., Gretton, A., Fukumizu, K., Lanckriet, G., Schölkopf, B.

In Proceedings of the 21st Annual Conference on Learning Theory, pages: 111-122, (Editors: RA Servedio and T Zhang), Omnipress, Madison, WI, USA, 21st Annual Conference on Learning Theory (COLT), July 2008 (inproceedings)

Abstract
A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). The embedding function has been proven to be injective when the reproducing kernel is universal. In this case, the embedding induces a metric on the space of probability distributions defined on compact metric spaces. In the present work, we consider more broadly the problem of specifying characteristic kernels, defined as kernels for which the RKHS embedding of probability measures is injective. In particular, characteristic kernels can include non-universal kernels. We restrict ourselves to translation-invariant kernels on Euclidean space, and define the associated metric on probability measures in terms of the Fourier spectrum of the kernel and characteristic functions of these measures. The support of the kernel spectrum is important in finding whether a kernel is characteristic: in particular, the embedding is injective if and only if the kernel spectrum has the entire domain as its support. Characteristic kernels may nonetheless have difficulty in distinguishing certain distributions on the basis of finite samples, again due to the interaction of the kernel spectrum and the characteristic functions of the measures.
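The metric induced by an injective embedding is the RKHS distance between mean embeddings (the MMD); an empirical sketch with a Gaussian kernel, one standard choice of characteristic kernel on Euclidean space:

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Squared MMD between the empirical RKHS mean embeddings of two samples,
    using a Gaussian kernel. Because the kernel is characteristic, the
    population MMD is zero if and only if the two distributions coincide."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

The abstract's caveat applies here too: even a characteristic kernel may separate some pairs of distributions only weakly at finite sample sizes.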

ei

PDF Web [BibTex]

A Hilbert-Schmidt Dependence Maximization Approach to Unsupervised Structure Discovery

Blaschko, M., Gretton, A.

In MLG 2008, pages: 1-3, 6th International Workshop on Mining and Learning with Graphs, July 2008 (inproceedings)

Abstract
In recent work by (Song et al., 2007), it has been proposed to perform clustering by maximizing a Hilbert-Schmidt independence criterion with respect to a predefined cluster structure Y, by solving for the partition matrix Π. We extend this approach here to the case where the cluster structure Y is not fixed, but is a quantity to be optimized; and we use an independence criterion which has been shown to be more sensitive at small sample sizes (the Hilbert-Schmidt Normalized Information Criterion, or HSNIC; Fukumizu et al., 2008). We demonstrate the use of this framework in two scenarios. In the first, we adopt a cluster structure selection approach in which the HSNIC is used to select a structure from several candidates. In the second, we consider the case where we discover structure by directly optimizing Y.

ei

PDF Web [BibTex]

Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation

Hachiya, H., Akiyama, T., Sugiyama, M., Peters, J.

In AAAI 2008, pages: 1351-1356, (Editors: Fox, D. , C. P. Gomes), AAAI Press, Menlo Park, CA, USA, Twenty-Third Conference on Artificial Intelligence, July 2008 (inproceedings)

Abstract
Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past, which is an essential problem for physically grounded AI as experiments are usually prohibitively expensive. A common approach is to use importance sampling techniques for compensating for the bias caused by the difference between data-sampling policies and the target policy. However, existing off-policy methods do not often take the variance of value function estimators explicitly into account and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
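The bias-variance trade-off can be illustrated with a flattening exponent on the importance weights (a simplified stand-in for the paper's per-decision estimator; the names and the toy setup are my own):

```python
import numpy as np

def adaptive_iw_estimate(returns, logp_target, logp_behavior, nu):
    """Self-normalised off-policy value estimate with adaptive importance
    weights w_i = (pi_target / pi_behavior)^nu. nu in [0, 1] trades bias
    (nu = 0 ignores the policy mismatch, low variance) against variance
    (nu = 1 fully corrects the mismatch, unbiased in the limit)."""
    w = np.exp(nu * (logp_target - logp_behavior))
    return (w * returns).sum() / w.sum()
```

The paper's contribution is choosing the trade-off parameter automatically, via a cross-validation variant, rather than fixing nu by hand.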

ei

PDF Web [BibTex]

Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., Schölkopf, B.

In Proceedings of the 25th International Conference on Machine Learning, pages: 1112-1119, (Editors: WW Cohen and A McCallum and S Roweis), ACM Press, New York, NY, USA, ICML, July 2008 (inproceedings)

Abstract
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criteria, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient based optimisation of the marginal likelihood, which costs O(m²n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.

ei

PDF PDF DOI [BibTex]

Graphical Analysis of NMR Structural Quality and Interactive Contact Map of NOE Assignments in ARIA

Bardiaux, B., Bernard, A., Rieping, W., Habeck, M., Malliavin, T., Nilges, M.

BMC Structural Biology, 8(30):1-5, June 2008 (article)

Abstract
BACKGROUND: The Ambiguous Restraints for Iterative Assignment (ARIA) approach is widely used for NMR structure determination. It is based on simultaneously calculating structures and assigning NOE through an iterative protocol. The final solution consists of a set of conformers and a list of most probable assignments for the input NOE peak list. RESULTS: ARIA was extended with a series of graphical tools to facilitate a detailed analysis of the intermediate and final results of the ARIA protocol. These additional features provide (i) an interactive contact map, serving as a tool for the analysis of assignments, and (ii) graphical representations of structure quality scores and restraint statistics. The interactive contact map between residues can be clicked to obtain information about the restraints and their contributions. Profiles of quality scores are plotted along the protein sequence, and contact maps provide information of the agreement with the data on a residue pair level. CONCLUSIONS: The graphical tools and outputs described here significantly extend the validation and analysis possibilities of NOE assignments given by ARIA as well as the analysis of the quality of the final structure ensemble. These tools are included in the latest version of ARIA, which is available at http://aria.pasteur.fr. The Web site also contains an installation guide, a user manual and example calculations.

ei

PDF DOI [BibTex]

Example-Based Learning for Single-Image Super-Resolution

Kim, K., Kwon, Y.

In DAGM 2008, pages: 456-463, (Editors: Rigoll, G. ), Springer, Berlin, Germany, 30th Annual Symposium of the German Association for Pattern Recognition, June 2008 (inproceedings)

Abstract
This paper proposes a regression-based method for single-image super-resolution. Kernel ridge regression (KRR) is used to estimate the high-frequency details of the underlying high-resolution image. A sparse solution of KRR is found by combining the ideas of kernel matching pursuit and gradient descent, which allows time-complexity to be kept to a moderate level. To resolve the problem of ringing artifacts occurring due to the regularization effect, the regression results are post-processed using a prior model of a generic image class. Experimental results demonstrate the effectiveness of the proposed method.
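The KRR core in closed form (a sketch only; the paper's sparse matching-pursuit solution and the post-processing with an image prior are omitted):

```python
import numpy as np

def krr_fit(X, y, lam=1e-3, sigma=1.0):
    """Kernel ridge regression: alpha = (K + lam*I)^{-1} y, Gaussian kernel."""
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X, alpha, Xq, sigma=1.0):
    """Evaluate the fitted function at query points Xq."""
    Kq = np.exp(-((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    return Kq @ alpha
```

In the super-resolution setting, X would hold low-resolution patch features and y the corresponding high-frequency detail to be predicted.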

ei

PDF DOI [BibTex]

A Multiple Kernel Learning Approach to Joint Multi-Class Object Detection

Lampert, C., Blaschko, M.

In DAGM 2008, pages: 31-40, (Editors: Rigoll, G. ), Springer, Berlin, Germany, 30th Annual Symposium of the German Association for Pattern Recognition, June 2008, Main Award DAGM 2008 (inproceedings)

Abstract
Most current methods for multi-class object classification and localization work as independent 1-vs-rest classifiers. They decide whether and where an object is visible in an image purely on a per-class basis. Joint learning of more than one object class would generally be preferable, since this would allow the use of contextual information such as co-occurrence between classes. However, this approach is usually not employed because of its computational cost. In this paper we propose a method to combine the efficiency of single class localization with a subsequent decision process that works jointly for all given object classes. By following a multiple kernel learning (MKL) approach, we automatically obtain a sparse dependency graph of relevant object classes on which to base the decision. Experiments on the PASCAL VOC 2006 and 2007 datasets show that the subsequent joint decision step clearly improves the accuracy compared to single class detection.

ei

PDF ZIP Web DOI [BibTex]

Natural Evolution Strategies

Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.

In CEC 2008, pages: 3381-3387, IEEE, Piscataway, NJ, USA, IEEE Congress on Evolutionary Computation, June 2008 (inproceedings)

Abstract
This paper presents natural evolution strategies (NES), a novel algorithm for performing real-valued 'black box' function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method. Natural evolution strategies search the fitness landscape using a multivariate normal distribution with a self-adapting mutation matrix to generate correlated mutations in promising regions. NES shares this property with covariance matrix adaption (CMA), an evolution strategy (ES) which has been shown to perform well on a variety of high-precision optimization tasks. The natural evolution strategies algorithm, however, is simpler, less ad-hoc and more principled. Self-adaptation of the mutation matrix is derived using a Monte Carlo estimate of the natural gradient towards better expected fitness. By following the natural gradient instead of the 'vanilla' gradient, we can ensure efficient update steps while preventing early convergence due to overly greedy updates, resulting in reduced sensitivity to local suboptima. We show NES has competitive performance with CMA on unimodal tasks, while outperforming it on several multimodal tasks that are rich in deceptive local optima.

ei

PDF Web DOI [BibTex]

Kernel Methods in Machine Learning

Hofmann, T., Schölkopf, B., Smola, A.

Annals of Statistics, 36(3):1171-1220, June 2008 (article)

Abstract
We review machine learning methods employing positive definite kernels. These methods formulate learning and estimation problems in a reproducing kernel Hilbert space (RKHS) of functions defined on the data domain, expanded in terms of a kernel. Working in linear spaces of functions has the benefit of facilitating the construction and analysis of learning algorithms while at the same time allowing large classes of functions. The latter include nonlinear functions as well as functions defined on nonvectorial data.

ei

PDF PDF DOI [BibTex]

Cross-validation Optimization for Large Scale Structured Classification Kernel Methods

Seeger, M.

Journal of Machine Learning Research, 9, pages: 1147-1178, June 2008 (article)

Abstract
We propose a highly efficient framework for penalized likelihood kernel methods applied to multi-class models with a large, structured set of classes. As opposed to many previous approaches which try to decompose the fitting problem into many smaller ones, we focus on a Newton optimization of the complete model, making use of model structure and linear conjugate gradients in order to approximate Newton search directions. Crucially, our learning method is based entirely on matrix-vector multiplication primitives with the kernel matrices and their derivatives, allowing straightforward specialization to new kernels, and focusing code optimization efforts to these primitives only. Kernel parameters are learned automatically, by maximizing the cross-validation log likelihood in a gradient-based way, and predictive probabilities are estimated. We demonstrate our approach on large scale text classification tasks with hierarchical structure on thousands of classes, achieving state-of-the-art results in an order of magnitude less time than previous work.
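The matrix-vector-primitive viewpoint is easy to illustrate with a linear conjugate gradient solver that never forms the matrix explicitly (a generic sketch, not the paper's Newton framework):

```python
import numpy as np

def cg_solve(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A, given only the
    matrix-vector product v -> A v. This is the primitive such frameworks
    build on: specialising to a new kernel only requires a new matvec."""
    x = np.zeros_like(b)
    r = b.copy()           # residual b - A x for x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Because only matvecs are needed, structured kernel matrices (low rank, Kronecker, sparse) can be exploited without changing the solver.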

ei

PDF PDF [BibTex]

Partitioning of Image Datasets using Discriminative Context Information

Lampert, CH.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008 (inproceedings)

Abstract
We propose a new method to partition an unlabeled dataset, called Discriminative Context Partitioning (DCP). It is motivated by the idea of splitting the dataset based only on how well the resulting parts can be separated from a context class of disjoint data points. This is in contrast to typical clustering techniques like K-means that are based on a generative model by implicitly or explicitly searching for modes in the distribution of samples. The discriminative criterion in DCP avoids the problems that density based methods have when the a priori assumption of multimodality is violated, when the number of samples becomes small in relation to the dimensionality of the feature space, or if the cluster sizes are strongly unbalanced. We formulate DCP's separation property as a large-margin criterion, and show how the resulting optimization problem can be solved efficiently. Experiments on the MNIST and USPS datasets of handwritten digits and on a subset of the Caltech256 dataset show that, given a suitable context, DCP can achieve good results even in situations where density-based clustering techniques fail.

ei

PDF Web DOI [BibTex]


no image
Correlational Spectral Clustering

Blaschko, MB., Lampert, CH.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008 (inproceedings)

Abstract
We present a new method for spectral clustering with paired data based on kernel canonical correlation analysis, called correlational spectral clustering. Paired data are common in real world data sources, such as images with text captions. Traditional spectral clustering algorithms either assume that data can be represented by a single similarity measure, or by co-occurrence matrices that are then used in biclustering. In contrast, the proposed method uses separate similarity measures for each data representation, and allows for projection of previously unseen data that are only observed in one representation (e.g. images but not text). We show that this algorithm generalizes traditional spectral clustering algorithms and show consistent empirical improvement over spectral clustering on a variety of datasets of images with associated text.

ei

PDF Web DOI [BibTex]


no image
Approximate Dynamic Programming with Gaussian Processes

Deisenroth, M., Peters, J., Rasmussen, C.

In ACC 2008, pages: 4480-4485, IEEE Service Center, Piscataway, NJ, USA, 2008 American Control Conference, June 2008 (inproceedings)

Abstract
In general, it is difficult to determine an optimal closed-loop policy in nonlinear control problems with continuous-valued state and control domains. Hence, approximations are often inevitable. The standard method of discretizing states and controls suffers from the curse of dimensionality and strongly depends on the chosen temporal sampling rate. In this paper, we introduce Gaussian process dynamic programming (GPDP) and determine an approximate globally optimal closed-loop policy. In GPDP, value functions in the Bellman recursion of the dynamic programming algorithm are modeled using Gaussian processes. GPDP returns an optimal state feedback for a finite set of states. Based on these outcomes, we learn a possibly discontinuous closed-loop policy on the entire state space by switching between two independently trained Gaussian processes. A binary classifier selects one Gaussian process to predict the optimal control signal. We show that GPDP is able to yield an almost optimal solution to an LQ problem using few sample points. Moreover, we successfully apply GPDP to the underpowered pendulum swing up, a complex nonlinear control problem.

ei

PDF Web [BibTex]


no image
Beyond Sliding Windows: Object Localization by Efficient Subwindow Search

Lampert, C., Blaschko, M., Hofmann, T.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008, Best paper award (inproceedings)

Abstract
Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the χ²-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.
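The branch-and-bound idea can be illustrated outside the image setting of the paper. The sketch below is a minimal re-implementation on a toy score grid, not the authors' code; all function names are ours. It maximizes the sum of per-cell scores over all axis-aligned rectangles: a candidate set is parameterized by intervals for the top, bottom, left, and right edges, and its upper bound sums the positive scores over the largest rectangle in the set plus the negative scores over the smallest.

```python
import heapq

def rect_sum(P, t, b, l, r):
    """Inclusive rectangle sum using a 2-D prefix-sum table P."""
    return P[b + 1][r + 1] - P[t][r + 1] - P[b + 1][l] + P[t][l]

def prefix(grid, keep):
    """Prefix sums over only the positive ('pos') or negative ('neg') entries."""
    h, w = len(grid), len(grid[0])
    P = [[0.0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            v = max(grid[i][j], 0) if keep == 'pos' else min(grid[i][j], 0)
            P[i + 1][j + 1] = v + P[i][j + 1] + P[i + 1][j] - P[i][j]
    return P

def ess(grid):
    """Best-first branch-and-bound over rectangle sets (intervals for t, b, l, r)."""
    h, w = len(grid), len(grid[0])
    Ppos, Pneg = prefix(grid, 'pos'), prefix(grid, 'neg')

    def bound(state):
        (t1, t2), (b1, b2), (l1, l2), (r1, r2) = state
        ub = 0.0
        if t1 <= b2 and l1 <= r2:              # largest rectangle in the set
            ub += rect_sum(Ppos, t1, b2, l1, r2)
        if t2 <= b1 and l2 <= r1:              # smallest rectangle in the set
            ub += rect_sum(Pneg, t2, b1, l2, r1)
        return ub

    start = ((0, h - 1), (0, h - 1), (0, w - 1), (0, w - 1))
    heap = [(-bound(start), start)]
    while heap:
        neg_ub, state = heapq.heappop(heap)
        widths = [hi - lo for lo, hi in state]
        k = max(range(4), key=lambda i: widths[i])
        if widths[k] == 0:                     # a single rectangle: bound is exact
            (t, _), (b, _), (l, _), (r, _) = state
            return -neg_ub, (t, b, l, r)
        lo, hi = state[k]
        mid = (lo + hi) // 2
        for half in ((lo, mid), (mid + 1, hi)):
            child = list(state)
            child[k] = half
            child = tuple(child)
            heapq.heappush(heap, (-bound(child), child))
```

Because the bound is exact on singleton sets and never underestimates any rectangle in a set, the first singleton popped from the priority queue is globally optimal, while many low-scoring candidate sets are never expanded.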

ei

PDF PDF Web DOI [BibTex]


no image
Computed Torque Control with Nonparametric Regression Models

Nguyen-Tuong, D., Seeger, M., Peters, J.

In ACC 2008, pages: 212-217, IEEE Service Center, Piscataway, NJ, USA, 2008 American Control Conference, June 2008 (inproceedings)

Abstract
Computed torque control allows the design of considerably more precise, energy-efficient and compliant controls for robots. However, the major obstacle is the requirement of an accurate model for torque generation, which cannot be obtained in some cases using rigid-body formulations due to unmodeled nonlinearities, such as complex friction or actuator dynamics. In such cases, models approximated from robot data present an appealing alternative. In this paper, we compare two nonparametric regression methods for model approximation, i.e., locally weighted projection regression (LWPR) and Gaussian process regression (GPR). While locally weighted regression was employed for real-time model estimation in learning adaptive control, Gaussian process regression has not been used in control to date due to high computational requirements. The comparison includes the assessment of model approximation for both regression methods using data originating from a SARCOS robot arm, as well as an evaluation of the robot tracking performance in computed torque control employing the approximated models. Our results show that GPR can be applied for real-time control, achieving higher accuracy. However, for online learning, LWPR is superior due to its lower computational requirements.

ei

PDF Web DOI [BibTex]


no image
Multi-Classification by Categorical Features via Clustering

Seldin, Y., Tishby, N.

In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), pages: 920-927, 25th International Conference on Machine Learning (ICML), June 2008 (inproceedings)

Abstract
We derive a generalization bound for multi-classification schemes based on grid clustering in categorical parameter product spaces. Grid clustering partitions the parameter space in the form of a Cartesian product of partitions for each of the parameters. The derived bound provides a means to evaluate clustering solutions in terms of the generalization power of a built-on classifier. For classification based on a single feature the bound serves to find a globally optimal classification rule. Comparison of the generalization power of individual features can then be used for feature ranking. Our experiments show that in this role the bound is much more precise than mutual information or normalized correlation indices.

ei

PDF Web [BibTex]


no image
A Kernel Test of Nonlinear Granger Causality

Sun, X.

In Proceedings of the Workshop on Inference and Estimation in Probabilistic Time-Series Models, pages: 79-89, (Editors: Barber, D. , A. T. Cemgil, S. Chiappa), Isaac Newton Institute for Mathematical Sciences, Cambridge, United Kingdom, Workshop on Inference and Estimation in Probabilistic Time-Series Models, June 2008 (inproceedings)

Abstract
We present a novel test of nonlinear Granger causality in bivariate time series. The trace norm of conditional covariance operators is used to capture the prediction errors. Based on this measure, a subsampling-based multiple testing procedure tests the prediction improvement of one time series by the other one. The distributional properties of the resulting p-values reveal the direction of Granger causality. Encouraging results of experiments with simulated and real-world data support our approach.
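For background, the classical linear Granger test that the paper's kernel method generalizes can be sketched in a few lines. This is an illustrative toy with made-up data, not the subsampling-based kernel procedure of the paper: fit x_t by least squares with and without y's past, and compare the residual errors.

```python
import random

def fit_ar(x, y=None):
    """Least-squares fit of x[t] on x[t-1] (and optionally y[t-1]); returns the RSS."""
    n = len(x)
    if y is None:
        sxx = sum(x[t - 1] * x[t - 1] for t in range(1, n))
        sxt = sum(x[t - 1] * x[t] for t in range(1, n))
        a = sxt / sxx
        return sum((x[t] - a * x[t - 1]) ** 2 for t in range(1, n))
    # two regressors: solve the 2x2 normal equations by Cramer's rule
    s11 = sum(x[t - 1] ** 2 for t in range(1, n))
    s22 = sum(y[t - 1] ** 2 for t in range(1, n))
    s12 = sum(x[t - 1] * y[t - 1] for t in range(1, n))
    c1 = sum(x[t - 1] * x[t] for t in range(1, n))
    c2 = sum(y[t - 1] * x[t] for t in range(1, n))
    det = s11 * s22 - s12 * s12
    a = (c1 * s22 - c2 * s12) / det
    b = (s11 * c2 - s12 * c1) / det
    return sum((x[t] - a * x[t - 1] - b * y[t - 1]) ** 2 for t in range(1, n))

def granger_improvement(x, y):
    """Relative reduction in squared prediction error for x when y's past is added."""
    return 1.0 - fit_ar(x, y) / fit_ar(x)

# toy data: y drives x with one step of delay, but not vice versa
random.seed(0)
y = [random.gauss(0, 1) for _ in range(500)]
x = [0.0]
for t in range(1, 500):
    x.append(0.9 * y[t - 1] + 0.1 * random.gauss(0, 1))
```

On this data the improvement is large for predicting x from y's past and near zero in the reverse direction, indicating a causal arrow from y to x; the paper replaces the squared error with the trace norm of conditional covariance operators to capture nonlinear dependence.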

ei

PDF [BibTex]


Thumb xl teaser
Bayesian Color Constancy Revisited

Gehler, P., Rother, C., Blake, A., Minka, T., Sharp, T.

In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR, June 2008, http://dx.doi.org/10.1109/CVPR.2008.4587765 (inproceedings)

ei

website+code+data pdf [BibTex]


no image
Reinforcement Learning of Motor Skills with Policy Gradients

Peters, J., Schaal, S.

Neural Networks, 21(4):682-697, May 2008 (article)

ei

PDF Web DOI [BibTex]


no image
Information Consistency of Nonparametric Gaussian Process Methods

Seeger, MW., Kakade, SM., Foster, DP.

IEEE Transactions on Information Theory, 54(5):2376-2382, May 2008 (article)

Abstract
Bayesian nonparametric models are widely and successfully used for statistical prediction. While posterior consistency properties are well studied in quite general settings, results have been proved using abstract concepts such as metric entropy, and they come with subtle conditions which are hard to validate and not intuitive when applied to concrete models. Furthermore, convergence rates are difficult to obtain. By focusing on the concept of information consistency for Bayesian Gaussian process (GP) models, consistency results and convergence rates are obtained via a regret bound on cumulative log loss. These results depend strongly on the covariance function of the prior process, thereby giving a novel interpretation to penalization with reproducing kernel Hilbert space norms and to commonly used covariance function classes and their parameters. The proof of the main result employs elementary convexity arguments only. A theorem of Widom is used in order to obtain precise convergence rates for several covariance functions widely used in practice.

ei

Web DOI [BibTex]


no image
Real-time Learning of Resolved Velocity Control on a Mitsubishi PA-10

Peters, J., Nguyen-Tuong, D.

In ICRA 2008, pages: 2872-2877, IEEE Service Center, Piscataway, NJ, USA, 2008 IEEE International Conference on Robotics and Automation, May 2008 (inproceedings)

Abstract
Learning inverse kinematics has long fascinated the robot learning community. While humans acquire this transformation to complicated tool spaces with ease, it is not a straightforward application for supervised learning algorithms due to the non-convexity of the learning problem. However, the key insight that the problem can be considered convex in small local regions allows the application of locally linear learning methods. Nevertheless, the local solution of the problem depends on the data distribution, which can result in inconsistent global solutions with large model discontinuities. While this problem can be treated in various ways in offline learning, it poses a serious problem for online learning. Previous approaches to the real-time learning of inverse kinematics avoid this problem through smart data generation, e.g., by letting the learner bias its own solution. Such biased solutions can result in premature convergence, and from the resulting solution it is often hard to understand what has been learned in that local region. This paper solves this problem by presenting a learning algorithm which can deal with this inconsistency through re-weighting the data online. Furthermore, we show that our algorithm works not only in simulation, but we also present real-time learning results on a physical Mitsubishi PA-10 robot arm.

ei

PDF Web DOI [BibTex]


no image
Relating the Thermodynamic Arrow of Time to the Causal Arrow

Allahverdyan, A., Janzing, D.

Journal of Statistical Mechanics, 2008(P04001):1-21, April 2008 (article)

Abstract
Consider a Hamiltonian system that consists of a slow subsystem S and a fast subsystem F. The autonomous dynamics of S is driven by an effective Hamiltonian, but its thermodynamics is unexpected. We show that a well-defined thermodynamic arrow of time (second law) emerges for S whenever there is a well-defined causal arrow from S to F and the back-action is negligible. This is because the back-action of F on S is described by a non-globally Hamiltonian Born–Oppenheimer term that violates the Liouville theorem, and makes the second law inapplicable to S. If S and F are mixing, under the causal arrow condition they are described by microcanonical distributions P(S) and P(S|F). Their structure supports a causal inference principle proposed recently in machine learning.

ei

Web DOI [BibTex]


no image
Generalization and Similarity in Exemplar Models of Categorization: Insights from Machine Learning

Jäkel, F., Schölkopf, B., Wichmann, F.

Psychonomic Bulletin and Review, 15(2):256-271, April 2008 (article)

Abstract
Exemplar theories of categorization depend on similarity for explaining subjects' ability to generalize to new stimuli. A major criticism of exemplar theories concerns their lack of abstraction mechanisms and thus, seemingly, generalization ability. Here, we use insights from machine learning to demonstrate that exemplar models can actually generalize very well. Kernel methods in machine learning are akin to exemplar models and very successful in real-world applications. Their generalization performance depends crucially on the chosen similarity measure. While similarity plays an important role in describing generalization behavior, it is not the only factor that controls generalization performance. In machine learning, kernel methods are often combined with regularization techniques to ensure good generalization. These same techniques are easily incorporated in exemplar models. We show that the Generalized Context Model (Nosofsky, 1986) and ALCOVE (Kruschke, 1992) are closely related to a statistical model called kernel logistic regression. We argue that generalization is central to the enterprise of understanding categorization behavior and suggest how insights from machine learning can offer some guidance. Keywords: kernel, similarity, regularization, generalization, categorization.
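The link to kernel logistic regression can be made concrete with a toy sketch. This is our own illustration under assumed hyperparameters and made-up 1-D stimuli, not the paper's experiments: each stored exemplar casts a similarity-weighted vote, and the dual weights are learned by gradient descent on a regularized logistic loss.

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian similarity between two 1-D stimuli, as in exemplar models."""
    return math.exp(-gamma * (a - b) ** 2)

def train_klr(xs, ys, lam=0.01, lr=0.1, epochs=500):
    """Batch gradient descent on the regularized logistic loss in the dual."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n                       # one weight per stored exemplar
    for _ in range(epochs):
        p = [1.0 / (1.0 + math.exp(-sum(alpha[j] * K[i][j] for j in range(n))))
             for i in range(n)]
        grads = [sum((p[i] - ys[i]) * K[i][j] for i in range(n))
                 + 2.0 * lam * sum(K[j][k] * alpha[k] for k in range(n))
                 for j in range(n)]
        for j in range(n):
            alpha[j] -= lr * grads[j] / n
    return alpha

def predict(alpha, xs, x):
    """Class 1 if the kernel-weighted vote of the exemplars is positive."""
    return 1 if sum(a * rbf(xi, x) for a, xi in zip(alpha, xs)) > 0 else 0

# toy categorization task: two clusters of exemplars on the real line
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
```

The regularization term `lam * alpha'K alpha` is exactly the RKHS-norm penalty discussed in the abstract; without it, the exemplar weights could overfit the training stimuli.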

ei

PDF Web DOI [BibTex]


no image
Graph Mining with Variational Dirichlet Process Mixture Models

Tsuda, K., Kurihara, K.

In SDM 2008, pages: 432-442, (Editors: Zaki, M. J.), Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 8th SIAM International Conference on Data Mining, April 2008 (inproceedings)

Abstract
Graph data such as chemical compounds and XML documents are getting more common in many application domains. A main difficulty of graph data processing lies in the intrinsic high dimensionality of graphs, namely, when a graph is represented as a binary feature vector of indicators of all possible subgraph patterns, the dimensionality gets too large for usual statistical methods. We propose a nonparametric Bayesian method for clustering graphs and selecting salient patterns at the same time. Variational inference is adopted here, because sampling is not applicable due to extremely high dimensionality. The feature set minimizing the free energy is efficiently collected with the DFS code tree, where the generation of useless subgraphs is suppressed by a tree pruning condition. In experiments, our method is compared with a simpler approach based on frequent subgraph mining, and graph kernels.

ei

PDF Web [BibTex]


no image
Manifold-valued Thin-plate Splines with Applications in Computer Graphics

Steinke, F., Hein, M., Peters, J., Schölkopf, B.

Computer Graphics Forum, 27(2):437-448, April 2008 (article)

Abstract
We present a generalization of thin-plate splines for interpolation and approximation of manifold-valued data, and demonstrate its usefulness in computer graphics with several applications from different fields. The cornerstone of our theoretical framework is an energy functional for mappings between two Riemannian manifolds which is independent of parametrization and respects the geometry of both manifolds. If the manifolds are Euclidean, the energy functional reduces to the classical thin-plate spline energy. We show how the resulting optimization problems can be solved efficiently in many cases. Our example applications range from orientation interpolation and motion planning in animation over geometric modelling tasks to color interpolation.

ei

PDF AVI Web DOI [BibTex]


no image
Model-Based Reinforcement Learning with Continuous States and Actions

Deisenroth, M., Rasmussen, C., Peters, J.

In ESANN 2008, pages: 19-24, (Editors: Verleysen, M. ), d-side, Evere, Belgium, European Symposium on Artificial Neural Networks, April 2008 (inproceedings)

Abstract
Finding an optimal policy in a reinforcement learning (RL) framework with continuous state and action spaces is challenging. Approximate solutions are often inevitable. GPDP is an approximate dynamic programming algorithm based on Gaussian process (GP) models for the value functions. In this paper, we extend GPDP to the case of unknown transition dynamics. After building a GP model for the transition dynamics, we apply GPDP to this model and determine a continuous-valued policy in the entire state space. We apply the resulting controller to the underpowered pendulum swing up. Moreover, we compare our results on this RL task to a nearly optimal discrete DP solution in a fully known environment.

ei

PDF Web [BibTex]


no image
Learning Inverse Dynamics: A Comparison

Nguyen-Tuong, D., Peters, J., Seeger, M., Schölkopf, B.

In Advances in Computational Intelligence and Learning: Proceedings of the European Symposium on Artificial Neural Networks, pages: 13-18, (Editors: M Verleysen), d-side, Evere, Belgium, 16th European Symposium on Artificial Neural Networks (ESANN), April 2008 (inproceedings)

Abstract
While it is well-known that models can enhance the control performance in terms of precision or energy efficiency, the practical application has often been limited by the complexities of manually obtaining sufficiently accurate models. In the past, learning has proven a viable alternative to using a combination of rigid-body dynamics and handcrafted approximations of nonlinearities. However, a major open question is which nonparametric learning method is suited best for learning dynamics. Traditionally, locally weighted projection regression (LWPR) has been the standard method, as it is capable of online, real-time learning for very complex robots. However, while LWPR has had significant impact on learning in robotics, alternative nonparametric regression methods such as support vector regression (SVR) and Gaussian process regression (GPR) offer interesting alternatives with fewer open parameters and potentially higher accuracy. In this paper, we evaluate these three alternatives for model learning. Our comparison consists of the evaluation of the learning quality for each regression method using original data from a SARCOS robot arm, as well as the robot tracking performance employing the learned models. The results show that GPR and SVR achieve a superior learning precision and can be applied for real-time control, obtaining higher accuracy. However, for online learning, LWPR is the better method due to its lower computational requirements.

ei

PDF Web [BibTex]


no image
The Metric Nearness Problem

Brickell, J., Dhillon, I., Sra, S., Tropp, J.

SIAM Journal on Matrix Analysis and Applications, 30(1):375-396, April 2008 (article)

Abstract
Metric nearness refers to the problem of optimally restoring metric properties to distance measurements that happen to be nonmetric due to measurement errors or otherwise. Metric data can be important in various settings, for example, in clustering, classification, metric-based indexing, query processing, and graph theoretic approximation algorithms. This paper formulates and solves the metric nearness problem: Given a set of pairwise dissimilarities, find a “nearest” set of distances that satisfy the properties of a metric—principally the triangle inequality. For solving this problem, the paper develops efficient triangle fixing algorithms that are based on an iterative projection method. An intriguing aspect of the metric nearness problem is that a special case turns out to be equivalent to the all pairs shortest paths problem. The paper exploits this equivalence and develops a new algorithm for the latter problem using a primal-dual method. Applications to graph clustering are provided as an illustration. We include experiments that demonstrate the computational superiority of triangle fixing over general-purpose convex programming software. Finally, we conclude by suggesting various useful extensions and generalizations to metric nearness.
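The noted equivalence of a special case to all-pairs shortest paths can be seen directly in the decrease-only setting: when distances may only be reduced, the nearest metric is obtained by replacing each entry with the shortest-path distance via Floyd-Warshall. The sketch below is an illustration of that special case only, not the paper's triangle-fixing or primal-dual algorithms; the function names are ours.

```python
def decrease_only_repair(d):
    """Floyd-Warshall: each entry becomes the shortest-path distance, which is
    the nearest metric when distances are only allowed to decrease."""
    n = len(d)
    m = [row[:] for row in d]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    return m

def is_metric(m):
    """Check every triangle inequality m[i][j] <= m[i][k] + m[k][j]."""
    n = len(m)
    return all(m[i][j] <= m[i][k] + m[k][j]
               for i in range(n) for j in range(n) for k in range(n))
```

For example, the nonmetric input `[[0, 1, 10], [1, 0, 1], [10, 1, 0]]` violates the triangle inequality through the middle point; the repair shortens the long edge to 2 and restores metricity.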

ei

Web DOI [BibTex]


no image
Bayesian Inference and Optimal Design for the Sparse Linear Model

Seeger, MW.

Journal of Machine Learning Research, 9, pages: 759-813, April 2008 (article)

Abstract
The linear model with sparsity-favouring prior on the coefficients has important applications in many different domains. In machine learning, most methods to date search for maximum a posteriori sparse solutions and neglect to represent posterior uncertainties. In this paper, we address problems of Bayesian optimal design (or experiment planning), for which accurate estimates of uncertainty are essential. To this end, we employ expectation propagation approximate inference for the linear model with Laplace prior, giving new insight into numerical stability properties and proposing a robust algorithm. We also show how to estimate model hyperparameters by empirical Bayesian maximisation of the marginal likelihood, and propose ideas in order to scale up the method to very large underdetermined problems. We demonstrate the versatility of our framework on the application of gene regulatory network identification from micro-array expression data, where both the Laplace prior and the active experimental design approach are shown to result in significant improvements. We also address the problem of sparse coding of natural images, and show how our framework can be used for compressive sensing tasks.

ei

PDF PDF [BibTex]


no image
Consistency of Spectral Clustering

von Luxburg, U., Belkin, M., Bousquet, O.

Annals of Statistics, 36(2):555-586, April 2008 (article)

Abstract
Consistency is a key property of statistical algorithms when the data is drawn from some underlying probability distribution. Surprisingly, despite decades of work, little is known about consistency of most clustering algorithms. In this paper we investigate consistency of the popular family of spectral clustering algorithms, which clusters the data with the help of eigenvectors of graph Laplacian matrices. We develop new methods to establish that for increasing sample size, those eigenvectors converge to the eigenvectors of certain limit operators. As a result we can prove that one of the two major classes of spectral clustering (normalized clustering) converges under very general conditions, while the other (unnormalized clustering) is only consistent under strong additional assumptions, which are not always satisfied in real data. We conclude that our analysis provides strong evidence for the superiority of normalized spectral clustering.

ei

PDF DOI [BibTex]


no image
Plant Classification from Bat-Like Echolocation Signals

Yovel, Y., Franz, MO., Stilz, P., Schnitzler, H-U.

PLoS Computational Biology, 4(3, e1000032):1-13, March 2008 (article)

Abstract
Classification of plants according to their echoes is an elementary component of bat behavior that plays an important role in spatial orientation and food acquisition. Vegetation echoes are, however, highly complex stochastic signals: from an acoustical point of view, a plant can be thought of as a three-dimensional array of leaves reflecting the emitted bat call. The received echo is therefore a superposition of many reflections. In this work we suggest that the classification of these echoes might not be such a troublesome routine for bats as formerly thought. We present a rather simple approach to classifying signals from a large database of plant echoes that were created by ensonifying plants with a frequency-modulated bat-like ultrasonic pulse. Our algorithm uses the spectrogram of a single echo from which it only uses features that are undoubtedly accessible to bats. We used a standard machine learning algorithm (SVM) to automatically extract suitable linear combinations of time and frequency cues from the spectrograms such that classification with high accuracy is enabled. This demonstrates that ultrasonic echoes are highly informative about the species membership of an ensonified plant, and that this information can be extracted with rather simple, biologically plausible analysis. Thus, our findings provide a new explanatory basis for the poorly understood observed abilities of bats in classifying vegetation and other complex objects.

ei

Web DOI [BibTex]


no image
Causal Reasoning by Evaluating the Complexity of Conditional Densities with Kernel Methods

Sun, X., Janzing, D., Schölkopf, B.

Neurocomputing, 71(7-9):1248-1256, March 2008 (article)

Abstract
We propose a method to quantify the complexity of conditional probability measures by a Hilbert space seminorm of the logarithm of its density. The concept of reproducing kernel Hilbert spaces (RKHSs) is a flexible tool to define such a seminorm by choosing an appropriate kernel. We present several examples with artificial data sets where our kernel-based complexity measure is consistent with our intuitive understanding of complexity of densities. The intention behind the complexity measure is to provide a new approach to inferring causal directions. The idea is that the factorization of the joint probability measure P(effect, cause) into P(effect|cause)P(cause) leads typically to "simpler" and "smoother" terms than the factorization into P(cause|effect)P(effect). Since the conventional constraint-based approach of causal discovery is not able to determine the causal direction between only two variables, our inference principle can in particular be useful when combined with other existing methods. We provide several simple examples with real-world data where the true causal directions indeed lead to simpler (conditional) densities.

ei

Web DOI [BibTex]


no image
Natural Actor-Critic

Peters, J., Schaal, S.

Neurocomputing, 71(7-9):1180-1190, March 2008 (article)

Abstract
In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of the coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm.

ei

PDF PDF DOI [BibTex]


no image
Inferring Spike Trains From Local Field Potentials

Rasch, M., Gretton, A., Murayama, Y., Maass, W., Logothetis, N.

Journal of Neurophysiology, 99(3):1461-1476, March 2008 (article)

Abstract
We investigated whether it is possible to infer spike trains solely on the basis of the underlying local field potentials (LFPs). Using support vector machines and linear regression models, we found that in the primary visual cortex (V1) of monkeys, spikes can indeed be inferred from LFPs, at least with moderate success. Although there is a considerable degree of variation across electrodes, the low-frequency structure in spike trains (in the 100-ms range) can be inferred with reasonable accuracy, whereas exact spike positions are not reliably predicted. Two kinds of features of the LFP are exploited for prediction: the frequency power of bands in the high gamma-range (40–90 Hz) and information contained in low-frequency oscillations (10 Hz), where both phase and power modulations are informative. Information analysis revealed that both features code (mainly) independent aspects of the spike-to-LFP relationship, with the low-frequency LFP phase coding for temporally clustered spiking activity. Although both features and prediction quality are similar during seminatural movie stimuli and spontaneous activity, prediction performance during spontaneous activity degrades much more slowly with increasing electrode distance. The general trend of data obtained with anesthetized animals is qualitatively mirrored in that of a more limited data set recorded in V1 of non-anesthetized monkeys. In contrast to the cortical field potentials, thalamic LFPs (e.g., LFPs derived from recordings in the dorsal lateral geniculate nucleus) hold no useful information for predicting spiking activity.

ei

Web DOI [BibTex]


no image
ISD: A Software Package for Bayesian NMR Structure Calculation

Rieping, W., Nilges, M., Habeck, M.

Bioinformatics, 24(8):1104-1105, February 2008 (article)

Abstract
SUMMARY: The conventional approach to calculating biomolecular structures from nuclear magnetic resonance (NMR) data is often viewed as subjective due to its dependence on rules of thumb for deriving geometric constraints and suitable values for theory parameters from noisy experimental data. As a result, it can be difficult to judge the precision of an NMR structure in an objective manner. The Inferential Structure Determination (ISD) framework, which has been introduced recently, addresses this problem by using Bayesian inference to derive a probability distribution that represents both the unknown structure and its uncertainty. It also determines additional unknowns, such as theory parameters, that normally need be chosen empirically. Here we give an overview of the ISD software package, which implements this methodology. AVAILABILITY: The program is available at http://www.bioc.cam.ac.uk/isd

ei

Web DOI [BibTex]
