2004

Selective Attention to Auditory Stimuli: A Brain-Computer Interface Paradigm

Hill, N., Lal, T., Schröder, M., Hinterberger, T., Birbaumer, N., Schölkopf, B.

7, pages: 102, (Editors: Bülthoff, H.H., Mallot, H.A., Ulrich, R., Wichmann, F.A.), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
During the last 20 years several paradigms for Brain-Computer Interfaces have been proposed; see [1] for a recent review. They can be divided into (a) stimulus-driven paradigms, using e.g. event-related potentials or visual evoked potentials from an EEG signal, and (b) patient-driven paradigms such as those that use premotor potentials correlated with imagined action, or slow cortical potentials (e.g. [2]). Our aim is to develop a stimulus-driven paradigm that is applicable in practice to patients. Due to the unreliability of visual perception in "locked-in" patients in the later stages of disorders such as Amyotrophic Lateral Sclerosis, we concentrate on the auditory modality. Specifically, we look for the effects, in the EEG signal, of selective attention to one of two concurrent auditory stimulus streams, exploiting the increased activation to attended stimuli that is seen under some circumstances [3]. We present the results of our preliminary experiments on normal subjects. On each of 400 trials, two repetitive stimuli (sequences of drum-beats or other pulsed stimuli) could be heard simultaneously. The two stimuli were distinguishable from one another by their acoustic properties, by their source location (one from a speaker to the left of the subject, the other from the right), and by their differing periodicities. A visual cue preceded the stimulus by 500 msec, indicating which of the two stimuli to attend to, and the subject was instructed to count the beats in the attended stimulus stream. There were up to 6 beats of each stimulus: with equal probability on each trial, all 6 were played, or the fourth was omitted, or the fifth was omitted. The 40-channel EEG signals were analyzed offline to reconstruct which of the streams was attended on each trial. A linear Support Vector Machine [4] was trained on a random subset of the data and tested on the remainder. Results are compared from two types of pre-processing of the signal: for each stimulus stream, (a) EEG signals at the stream's beat periodicity are emphasized, or (b) EEG signals following beats are contrasted with those following missing beats. Both forms of pre-processing show promising results, i.e. selective attention to one or the other auditory stream yields signals that are classifiable significantly above chance performance. In particular, the second pre-processing was found to be robust to reduction in the number of features used for classification (cf. [5]), helping us to eliminate noise.
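
The classification step lends itself to a compact illustration. Below is a minimal sketch, not the authors' code: a linear SVM trained on per-trial feature vectors with a random train/test split, as described above; random features stand in for the pre-processed EEG, and all names are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# Hypothetical stand-in features: one vector per trial, e.g. band power
# at each stream's beat periodicity for each of the 40 EEG channels.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 40))    # 400 trials, 40 features
y = rng.integers(0, 2, size=400)      # attended stream: 0 = left, 1 = right

# Train on a random subset, test on the remainder (as in the abstract).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))   # ~0.5 on random data
```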

ei

PDF Web [BibTex]

Texture and Haptic Cues in Slant Discrimination: Measuring the Effect of Texture Type

Rosas, P., Wichmann, F., Ernst, M., Wagemans, J.

7, pages: 165, (Editors: Bülthoff, H.H., Mallot, H.A., Ulrich, R., Wichmann, F.A.), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
In a number of models of depth-cue combination the depth percept is constructed via a weighted average of independent depth estimates. The influence of each cue in such an average depends on the reliability of the source of information [1,5]. In particular, Ernst and Banks (2002) formulate the combination as the minimum-variance unbiased estimator that can be constructed from the available cues. We have observed systematic differences in the slant-discrimination performance of human observers when different types of textures were used as a cue to slant [4]. If the depth percept behaves as described above, our measurements of the slopes of the psychometric functions provide the predicted weights for the texture cue for the ranked texture types. However, the results for slant discrimination obtained when combining these texture types with object motion are difficult to reconcile with the minimum-variance unbiased estimator model [3]. This apparent failure of the model might be explained by a coupling of texture and motion, violating the assumption of independent cues. Hillis, Ernst, Banks, and Landy (2002) [2] have shown that while for between-modality combination the human visual system has access to the single-cue information, for within-modality combination (visual cues) the single-cue information is lost. This suggests a coupling between visual cues and independence between visual and haptic cues. In the present study we therefore combined the different texture types with haptic information in a slant-discrimination task, to test whether in the between-modality condition these cues are combined as predicted by an unbiased minimum-variance estimator model. The measured weights for the cues were consistent with a combination rule sensitive to the reliability of the sources of information, but did not match the predictions of a statistically optimal combination.
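
For readers unfamiliar with the model under test, the minimum-variance unbiased combination rule of Ernst and Banks (2002) can be stated in two lines; this is the standard textbook form, not a formula taken from the poster. With independent texture and haptic slant estimates of variances sigma_T^2 and sigma_H^2:

```latex
\hat{s} = w_T \hat{s}_T + w_H \hat{s}_H, \qquad
w_T = \frac{1/\sigma_T^2}{1/\sigma_T^2 + 1/\sigma_H^2}, \qquad
w_H = \frac{1/\sigma_H^2}{1/\sigma_T^2 + 1/\sigma_H^2}
```

The single-cue variances, and hence the predicted weights, can be estimated from the slopes of the measured psychometric functions, which is what the study compares the measured weights against.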

ei

PDF Web [BibTex]

Efficient Approximations for Support Vector Classifiers

Kienzle, W., Franz, M.

7, pages: 68, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
In face detection, support vector machines (SVM) and neural networks (NN) have been shown to outperform most other classification methods. While both approaches are learning-based, there are distinct advantages and drawbacks to each: NNs are difficult to design and train but can lead to very small and efficient classifiers. In comparison, SVM model selection and training is rather straightforward and, more importantly, guaranteed to converge to a globally optimal solution (in the sense of training error). Unfortunately, SVM classifiers tend to have large representations, which are inappropriate for time-critical image-processing applications. In this work, we examine various existing and new methods for simplifying support vector decision rules. Our goal is to obtain efficient classifiers (as with NNs) while keeping the numerical and statistical advantages of SVMs. For a given SVM solution, we compute a cascade of approximations with increasing complexities. Each classifier is tuned so that the detection rate is near 100%. At run-time, the first (simplest) detector is evaluated on the whole image. Then, any subsequent classifier is applied only to those positions that have been classified as positive throughout all previous stages. The false-positive rate at the end equals that of the last (i.e. most complex) detector. In contrast, since many image positions are discarded by lower-complexity classifiers, the average computation time per patch decreases significantly compared to the time needed for evaluating the highest-complexity classifier alone.
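
The run-time logic of the cascade is easy to sketch. The following Python toy is illustrative rather than the authors' implementation: each stage is an arbitrary boolean detector assumed to be tuned for a near-100% detection rate, and only patches accepted by every earlier stage reach the more expensive ones.

```python
import numpy as np

def cascade_detect(patches, stages):
    """Evaluate a cascade of increasingly complex detectors.

    patches : (n, d) array of image patches
    stages  : list of scoring functions, cheapest first; each returns
              a boolean "positive" decision per patch
    """
    alive = np.arange(len(patches))      # indices still classified positive
    for stage in stages:
        keep = stage(patches[alive])     # evaluate only surviving patches
        alive = alive[keep]
        if alive.size == 0:
            break
    return alive                         # final positives

# Toy usage: two thresholded linear stages with random weights.
rng = np.random.default_rng(0)
w1, w2 = rng.standard_normal(16), rng.standard_normal(16)
patches = rng.standard_normal((1000, 16))
stages = [lambda P: P @ w1 > -1.0,       # cheap, permissive stage
          lambda P: P @ w2 > 0.5]        # expensive, strict stage
print(len(cascade_detect(patches, stages)), "patches survive the cascade")
```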

ei

Web [BibTex]

EEG Channel Selection for Brain Computer Interface Systems Based on Support Vector Methods

Schröder, M., Lal, T., Bogdan, M., Schölkopf, B.

7, pages: 50, (Editors: Bülthoff, H.H., Mallot, H.A., Ulrich, R., Wichmann, F.A.), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
A Brain-Computer Interface (BCI) system allows the direct interpretation of brain activity patterns (e.g. EEG signals) by a computer. Typical BCI applications comprise spelling aids or environmental control systems supporting paralyzed patients who have lost motor control completely. The design of an EEG-based BCI system requires good solutions to the problem of selecting useful features during the performance of a mental task, as well as to the problem of classifying these features. For the special case of choosing appropriate EEG channels from several available channels, we propose the application of variants of the Support Vector Machine (SVM) to both problems. Although these algorithms do not rely on prior knowledge, they can provide more accurate solutions than standard filter methods for feature selection [1], which usually incorporate prior knowledge about neural activity patterns during the performed mental tasks. For judging the importance of features we introduce a new relevance measure and apply it to EEG channels. Although we base the relevance measure on the previously introduced algorithms, it does not in general depend on specific algorithms but can be derived using arbitrary combinations of feature selectors and classifiers.
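
The abstract does not spell out the relevance measure, so the sketch below uses one plausible SVM-based stand-in: train a linear SVM on all channels and score each channel by the norm of its block of weights. All data and dimensions are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical EEG data: trials x (channels * features per channel).
rng = np.random.default_rng(0)
n_trials, n_channels, n_feat = 200, 40, 10
X = rng.standard_normal((n_trials, n_channels * n_feat))
y = rng.integers(0, 2, size=n_trials)

clf = LinearSVC(C=1.0).fit(X, y)
w = clf.coef_.reshape(n_channels, n_feat)

# Score each channel by the norm of its block of SVM weights and rank
# channels by decreasing relevance; low-scoring channels are candidates
# for removal before retraining.
relevance = np.linalg.norm(w, axis=1)
ranking = np.argsort(relevance)[::-1]
print("most relevant channels:", ranking[:5])
```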

ei

Web [BibTex]

Learning Depth

Sinz, F., Franz, MO.

pages: 69, (Editors: Bülthoff, H.H., Mallot, H.A., Ulrich, R., Wichmann, F.A.), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
The depth of a point in space can be estimated by observing its image position from two different viewpoints. The classical approach to stereo vision calculates depth from the two projection equations, which together form a stereo camera model. An unavoidable preparatory step for this solution is a calibration procedure, i.e., estimating the external (position and orientation) and internal (focal length, lens distortions etc.) parameters of each camera from a set of points with known spatial positions and their corresponding image positions. This is normally done by iteratively linearizing the single-camera models and re-estimating their parameters according to the error on the known data points. The advantage of the classical method is the maximal use of prior knowledge about the underlying physical processes and the explicit estimation of meaningful model parameters such as focal length or camera position in space. However, the approach neglects the nonlinear nature of the problem, so the results depend critically on the choice of the initial parameter values. In this study, we approach the depth estimation problem from a different point of view by applying generic machine learning algorithms to learn the mapping from image coordinates to spatial position. These algorithms do not require any domain knowledge and are able to learn nonlinear functions by mapping the inputs into a higher-dimensional space. Compared to classical calibration, machine learning methods give a direct solution to the depth estimation problem, which means that the values of the stereo camera parameters cannot be extracted from the learned mapping. On the poster, we compare the performance of classical camera calibration to that of different machine learning algorithms such as kernel ridge regression, Gaussian processes and support vector regression. Our results indicate that generic learning approaches can lead to higher depth accuracies than classical calibration, although no domain knowledge is used.
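
A minimal sketch of the learning-based route, under a toy pinhole stereo model of my own construction (focal length, baseline, and noise level are assumptions): kernel ridge regression maps the four image coordinates directly to 3D position, and no camera parameters are ever estimated.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy stereo rig: two pinhole cameras with baseline b and focal length f.
f, b = 1.0, 0.2
rng = np.random.default_rng(0)
P = rng.uniform([-1, -1, 2], [1, 1, 6], size=(500, 3))   # 3D points

def project(P):
    x, y, z = P.T
    uL, vL = f * (x + b / 2) / z, f * y / z              # left image
    uR, vR = f * (x - b / 2) / z, f * y / z              # right image
    return np.stack([uL, vL, uR, vR], axis=1)

X = project(P) + 1e-3 * rng.standard_normal((500, 4))    # noisy image coords
model = KernelRidge(kernel="rbf", alpha=1e-4, gamma=10.0).fit(X, P)

P_test = rng.uniform([-1, -1, 2], [1, 1, 6], size=(100, 3))
err = np.linalg.norm(model.predict(project(P_test)) - P_test, axis=1)
print("mean 3D error:", err.mean())
```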

ei

PDF Web [BibTex]

Learning from Labeled and Unlabeled Data: Semi-supervised Learning and Ranking

Zhou, D.

January 2004 (talk)

Abstract
We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data.
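
Assuming the algorithm referred to is the closed-form label-spreading scheme over a normalized affinity graph (a plausible reading of the description, not a quotation from the talk), the whole method fits in a few lines:

```python
import numpy as np

def label_spreading(W, y, alpha=0.9):
    """Closed-form smooth labeling on a graph.

    W     : (n, n) symmetric affinity matrix
    y     : length-n label vector, entries in {-1, +1}, 0 = unlabeled
    alpha : smoothness/fit trade-off in (0, 1)
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))   # normalization D^{-1/2} W D^{-1/2}
    n = len(y)
    F = np.linalg.solve(np.eye(n) - alpha * S, y.astype(float))
    return np.sign(F)

# Toy example: two 3-point clusters, one labeled point in each.
W = np.array([[0, 1, 1, 0,   0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0.1, 0, 0],
              [0, 0, 0.1, 0, 1, 1],
              [0, 0, 0,   1, 0, 1],
              [0, 0, 0,   1, 1, 0]], dtype=float)
y = np.array([+1, 0, 0, 0, 0, -1])
print(label_spreading(W, y))   # expected: first cluster +1, second -1
```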

ei

PDF [BibTex]


Introduction to Category Theory

Bousquet, O.

Internal Seminar, January 2004 (talk)

Abstract
A brief introduction to the general idea behind category theory with some basic definitions and examples. A perspective on higher dimensional categories is given.

ei

PDF [BibTex]

Multivariate Regression with Stiefel Constraints

Bakir, G., Gretton, A., Franz, M., Schölkopf, B.

(128), MPI for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, 2004 (techreport)

Abstract
We introduce a new framework for regression between multi-dimensional spaces. Standard methods for solving this problem typically reduce it to one-dimensional regression by choosing features in the input and/or output spaces. These methods, which include PLS (partial least squares), KDE (kernel dependency estimation), and PCR (principal component regression), select features based on different a priori judgments as to their relevance. Moreover, the loss function and constraints are chosen not primarily on statistical grounds, but to simplify the resulting optimisation. By contrast, in our approach the feature construction and the regression estimation are performed jointly, directly minimizing a loss function that we specify, subject to a rank constraint. A major advantage of this approach is that the loss is no longer chosen according to the algorithmic requirements, but can be tailored to the characteristics of the task at hand; the features will then be optimal with respect to this objective. Our approach also allows for the possibility of using a regularizer in the optimization. Finally, by processing the observations sequentially, our algorithm is able to work on large-scale problems.
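
The report's sequential algorithm is not reproduced here. As a point of reference for what a rank constraint buys, the sketch below implements the classical reduced-rank regression baseline, which likewise couples feature construction and regression through a single low-rank coefficient matrix (all data are synthetic):

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Least-squares multivariate regression with a rank constraint.

    Not the technical report's sequential algorithm; just the classical
    reduced-rank baseline: truncate the SVD of the OLS fitted values
    and refit the coefficient matrix to the truncation.
    """
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)        # (d_in, d_out)
    U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    Y_fit = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # rank-r fitted values
    B_rr, *_ = np.linalg.lstsq(X, Y_fit, rcond=None)      # rank-r coefficients
    return B_rr

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
B_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 5))  # rank 2
Y = X @ B_true + 0.01 * rng.standard_normal((200, 5))
B = reduced_rank_regression(X, Y, rank=2)
print("coefficient error:", np.linalg.norm(B - B_true))
```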

ei

PDF [BibTex]

Neural mechanisms underlying control of a Brain-Computer-Interface (BCI): Simultaneous recording of BOLD response and EEG

Hinterberger, T., Wilhelm, B., Veit, R., Weiskopf, N., Lal, TN., Birbaumer, N.

2004 (poster)

Abstract
Brain-computer interfaces (BCI) enable humans or animals to communicate or to activate external devices, without muscle activity, using electric brain signals. The BCI Thought Translation Device (TTD) uses learned regulation of slow cortical potentials (SCPs), a skill most people and paralyzed patients can acquire with training periods of several hours up to months. The neurophysiological mechanisms and anatomical sources of SCPs and other event-related brain macro-potentials are well understood, but the neural mechanisms underlying learning of the self-regulation skill for BCI use are unknown. To uncover the relevant areas of brain activation during regulation of SCPs, the TTD was combined with functional MRI, and EEG was recorded inside the MRI scanner in twelve healthy participants who had learned to regulate their SCPs with feedback and reinforcement. The results demonstrate activation of specific brain areas during execution of the brain-regulation skill: successful control of cortical positivity, allowing a person to activate an external device, was closely related to an increase of the BOLD (blood oxygen level dependent) response in the basal ganglia and to frontal premotor deactivation, indicating learned regulation of a cortical-striatal loop responsible for local excitation thresholds of cortical assemblies. The data suggest that human users of a BCI learn the regulation of cortical excitation thresholds of large neuronal assemblies as a prerequisite of direct brain communication, and that the learning of this skill depends critically on an intact and flexible interaction between these cortico-basal ganglia circuits. Supported by the Deutsche Forschungsgemeinschaft (DFG) and the National Institutes of Health (NIH).

ei

[BibTex]

Learning from Labeled and Unlabeled Data Using Random Walks

Zhou, D., Schölkopf, B.

Max Planck Institute for Biological Cybernetics, 2004 (techreport)

Abstract
We consider the general problem of learning from labeled and unlabeled data. Given a set of points, some of them are labeled and the remaining points are unlabeled; the goal is to predict the labels of the unlabeled points. Any supervised learning algorithm can be applied to this problem, for instance Support Vector Machines (SVMs). The question of interest is whether we can implement a classifier which uses the unlabeled data in some way and has higher accuracy than classifiers which use the labeled data only. Recently we proposed a simple algorithm which can substantially benefit from large amounts of unlabeled data and demonstrates clear superiority over supervised learning methods. In this paper we further investigate the algorithm using random walks and spectral graph theory, which shed light on its key steps.
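
One way to make the random-walk connection concrete, assuming the algorithm is the closed-form label propagation with symmetrically normalized affinity S = D^{-1/2} W D^{-1/2} (my reading of the abstract, not a quotation from the report):

```latex
F^{*} = (I - \alpha S)^{-1} Y = \sum_{t=0}^{\infty} \alpha^{t} S^{t} Y, \qquad 0 < \alpha < 1
```

Each unlabeled point accumulates label mass along walks of every length through the graph, with longer walks discounted geometrically by alpha; the spectrum of S governs how quickly this series mixes.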

ei

PDF PostScript [BibTex]

Masking by plaid patterns revisited

Wichmann, F.

Experimentelle Psychologie. Beiträge zur 46. Tagung experimentell arbeitender Psychologen, 46, pages: 285, 2004 (poster)

ei

[BibTex]

Behaviour and Convergence of the Constrained Covariance

Gretton, A., Smola, A., Bousquet, O., Herbrich, R., Schölkopf, B., Logothetis, N.

(130), MPI for Biological Cybernetics, 2004 (techreport)

Abstract
We discuss reproducing kernel Hilbert space (RKHS)-based measures of statistical dependence, with emphasis on constrained covariance (COCO), a novel criterion to test dependence of random variables. We show that COCO is a test for independence if and only if the associated RKHSs are universal. That said, no independence test exists that can distinguish dependent and independent random variables in all circumstances. Dependent random variables can result in a COCO which is arbitrarily close to zero when the source densities are highly non-smooth, which can make dependence hard to detect empirically. All current kernel-based independence tests share this behaviour. Finally, we demonstrate exponential convergence between the population and empirical COCO, which implies that COCO does not suffer from slow learning rates when used as a dependence test.
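
For orientation, the criterion itself is the largest covariance attainable by unit-norm RKHS transformations of the two variables (this is the definition as commonly stated; consult the report for the exact normalization of the empirical version):

```latex
\mathrm{COCO}(X, Y) = \sup_{\|f\|_{\mathcal{F}} \le 1,\ \|g\|_{\mathcal{G}} \le 1} \operatorname{cov}\bigl(f(X), g(Y)\bigr)
```

It vanishes whenever X and Y are independent, and with universal kernels it vanishes only then, which is the if-and-only-if statement above. Empirically it reduces to a largest-singular-value computation on centered Gram matrices.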

ei

PDF [BibTex]

Early visual processing—data, theory, models

Wichmann, F.

Experimentelle Psychologie. Beiträge zur 46. Tagung experimentell arbeitender Psychologen, 46, pages: 24, 2004 (poster)

ei

[BibTex]

Statistische Lerntheorie und Empirische Inferenz

Schölkopf, B.

Jahrbuch der Max-Planck-Gesellschaft, pages: 377-382, 2004 (misc)

Abstract
Statistical learning theory studies the process of inferring regularities from empirical data. The fundamental problem is what is called generalization: how is it possible to infer a law that will be valid for an infinite number of future observations, given only a finite amount of data? This problem hinges upon fundamental issues of statistics and science in general, such as the problems of complexity of explanations, a priori knowledge, and representation of data.

ei

PDF Web [BibTex]

Confidence Sets for Ratios: A Purely Geometric Approach To Fieller’s Theorem

von Luxburg, U., Franz, V.

(133), Max Planck Institute for Biological Cybernetics, 2004 (techreport)

Abstract
We present a simple, geometric method to construct Fieller's exact confidence sets for ratios of jointly normally distributed random variables. Contrary to previous geometric approaches in the literature, our method is valid in the general case where both sample mean and covariance are unknown. Moreover, not only the construction but also its proof are purely geometric and elementary, thus giving intuition into the nature of the confidence sets.
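
For concreteness, the confidence sets in question have a well-known algebraic form (the standard statement of Fieller's theorem, not the paper's geometric construction). For a ratio rho = mu_1/mu_2 estimated by sample means x̄_1, x̄_2 with estimated covariance entries v_11, v_12, v_22, the level-(1 - alpha) set is

```latex
\left\{ \rho \;:\; (\bar{x}_1 - \rho \bar{x}_2)^2 \,\le\, t_{1-\alpha/2}^2 \left( v_{11} - 2\rho v_{12} + \rho^2 v_{22} \right) \right\}
```

Depending on the data this set is an interval, the complement of an interval, or the whole real line, which is why an exact construction cannot always return a bounded interval.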

ei

PDF [BibTex]

Transductive Inference with Graphs

Zhou, D., Schölkopf, B.

Max Planck Institute for Biological Cybernetics, 2004, see the improved version, Regularization on Discrete Spaces (techreport)

Abstract
We propose a general regularization framework for transductive inference. The given data are thought of as a graph, where the edges encode the pairwise relationships among the data. We develop discrete analysis and geometry on graphs, and then naturally adapt classical regularization from the continuous case to the graph setting. A new and effective algorithm is derived from this general framework, as is an approach we developed earlier.
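
To convey the flavor of the framework, here is the discrete regularization problem that such graph methods typically solve (stated as an illustration; the report develops a more general discrete calculus): find a labeling f that trades off smoothness along the edges against fit to the given labels y,

```latex
\min_{f}\ \frac{1}{2} \sum_{i,j} W_{ij} \left( \frac{f_i}{\sqrt{d_i}} - \frac{f_j}{\sqrt{d_j}} \right)^2 + \mu \, \| f - y \|^2
```

The minimizer is available in closed form, so transduction reduces to a single linear solve on the graph.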

ei

[BibTex]

Implicit Wiener series for capturing higher-order interactions in images

Franz, M., Schölkopf, B.

Sensory coding and the natural environment, (Editors: Olshausen, B.A. and M. Lewicki), 2004 (poster)

Abstract
The information about the objects in an image is almost exclusively described by the higher-order interactions of its pixels. The Wiener series is one of the standard methods to systematically characterize these interactions. However, the classical estimation method for the Wiener expansion coefficients via cross-correlation suffers from severe problems that prevent its application to high-dimensional and strongly nonlinear signals such as images. We propose an estimation method based on regression in a reproducing kernel Hilbert space that overcomes these problems, using polynomial kernels as known from Support Vector Machines and other kernel-based methods. Numerical experiments show performance advantages in terms of convergence, interpretability and the system sizes that can be handled. By the time of the conference, we will be able to present first results on the higher-order structure of natural images.
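
A toy illustration of the estimation idea, with a synthetic second-order system standing in for natural images (system, sizes, and regularization constant are my assumptions): a polynomial kernel makes the space of pixel monomials implicit, so the regression never forms the exponentially many interaction coefficients explicitly.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy second-order system: the response depends on pairwise pixel products.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))                 # 300 "images", 8 pixels
A = rng.standard_normal((8, 8))
y = np.einsum("ni,ij,nj->n", X, A, X)             # purely second-order response

# Implicit estimation of the degree-2 expansion: the polynomial kernel
# spans all monomials up to degree 2 without constructing them.
model = KernelRidge(kernel="polynomial", degree=2, coef0=1.0, alpha=1e-6)
model.fit(X, y)

X_test = rng.standard_normal((50, 8))
y_test = np.einsum("ni,ij,nj->n", X_test, A, X_test)
print("test correlation:", np.corrcoef(model.predict(X_test), y_test)[0, 1])
```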

ei

[BibTex]

Classification and Memory Behaviour of Man Revisited by Machine

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

CSHL Meeting on Computational & Systems Neuroscience (COSYNE), 2004 (poster)

ei

[BibTex]

Advanced Statistical Learning Theory

Bousquet, O.

Machine Learning Summer School, 2004 (talk)

ei

PDF [BibTex]

Nanoscale Materials for Energy Storage

Materials Science & Engineering B, 108, pages: 292, Elsevier, 2004 (misc)

mms

[BibTex]
