2005

Image Reconstruction by Linear Programming

Tsuda, K., Rätsch, G.

IEEE Transactions on Image Processing, 14(6):737-744, June 2005 (article)

Abstract
One way of denoising an image is to project the noisy image onto the subspace of admissible images derived, for instance, by PCA. A major drawback of this method, however, is that all pixels are updated by the projection, even when only a few pixels are corrupted by noise or occlusion. We propose a new method that identifies the noisy pixels by ℓ1-norm penalization and updates the identified pixels only. The identification and updating of noisy pixels are formulated as one linear program which can be solved efficiently. In particular, one can apply the ν-trick to directly specify the fraction of pixels to be reconstructed. Moreover, we extend the linear program to exploit the prior knowledge that occlusions often appear in contiguous blocks (e.g., sunglasses on faces). The basic idea is to penalize boundary points and interior points of the occluded area differently. We also show the ν-property for this extended LP, leading to a method which is easy to use. Experimental results demonstrate the power of our approach.
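The core identification step, minimizing the ℓ1 norm of the reconstruction residual so that only a few pixels are flagged, can be written as a small LP. Below is a minimal sketch using scipy's `linprog`, with a hypothetical basis matrix `B` spanning the admissible subspace; the paper's full formulation additionally controls the fraction of reconstructed pixels (ν-trick) and penalizes occlusion boundaries, which this sketch omits.

```python
import numpy as np
from scipy.optimize import linprog

def l1_project(x, B):
    """Minimize ||x - B c||_1 over coefficients c, posed as an LP with
    per-pixel slack variables t_i >= |x_i - (B c)_i|.
    x: noisy image as a vector (n,); B: basis of the admissible subspace (n, k)."""
    n, k = B.shape
    cost = np.concatenate([np.zeros(k), np.ones(n)])   # minimize sum(t)
    # encode  B c - x <= t  and  x - B c <= t
    A_ub = np.block([[B, -np.eye(n)], [-B, -np.eye(n)]])
    b_ub = np.concatenate([x, -x])
    bounds = [(None, None)] * k + [(0, None)] * n      # c free, t >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    c = res.x[:k]
    residual = x - B @ c   # pixels with large |residual| are flagged as noisy
    return c, residual
```

Only the flagged pixels would then be updated by the reconstruction; the paper folds identification and update into one LP.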

PDF DOI [BibTex]

RASE: recognition of alternatively spliced exons in C.elegans

Rätsch, G., Sonnenburg, S., Schölkopf, B.

Bioinformatics, 21(Suppl. 1):i369-i377, June 2005 (article)

PDF Web DOI [BibTex]

Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection

Tsuda, K., Rätsch, G., Warmuth, M.

Journal of Machine Learning Research, 6, pages: 995-1018, June 2005 (article)

Abstract
We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated with the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the derivation and the analyses of the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel matrix from distance measurements.
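The update described above, a gradient step in the matrix-log domain followed by exponentiation and trace renormalization, can be sketched in a few lines. This is a minimal illustration, not the paper's full algorithm; the step size `eta` and the loss gradient are placeholders, and the matrix log/exp are computed by eigendecomposition (valid for symmetric positive definite matrices).

```python
import numpy as np

def meg_update(W, grad, eta):
    """One matrix exponentiated gradient step: move in the matrix-log
    domain, exponentiate back, and renormalize to trace one.
    W: symmetric positive definite with trace one; grad: symmetric
    loss gradient; eta: step size (placeholder value)."""
    vals, vecs = np.linalg.eigh(W)               # W = V diag(vals) V^T
    log_W = vecs @ np.diag(np.log(vals)) @ vecs.T
    A = log_W - eta * grad                       # gradient step in log-space
    a_vals, a_vecs = np.linalg.eigh((A + A.T) / 2)
    W_new = a_vecs @ np.diag(np.exp(a_vals)) @ a_vecs.T
    return W_new / np.trace(W_new)               # back to the trace-one set
```

By construction the result is again symmetric positive definite with trace one, which is the point of updating in the log domain.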

PDF [BibTex]

Protein function prediction via graph kernels

Borgwardt, KM., Ong, CS., Schönauer, S., Vishwanathan, SVN., Smola, AJ., Kriegel, H-P.

Bioinformatics, 21(Suppl. 1: ISMB 2005 Proceedings):i47-i56, June 2005 (article)

Abstract
Motivation: Computational approaches to protein function prediction infer protein function by finding proteins with similar sequence, structure, surface clefts, chemical properties, amino acid motifs, interaction partners or phylogenetic profiles. We present a new approach that combines sequential, structural and chemical information into one graph model of proteins. We predict functional class membership of enzymes and non-enzymes using graph kernels and support vector machine classification on these protein graphs. Results: Our graph model, derivable from protein sequence and structure only, is competitive with vector models that require additional protein information, such as the size of surface pockets. If we include this extra information into our graph model, our classifier yields significantly higher accuracy levels than the vector models. Hyperkernels allow us to select and to optimally combine the most relevant node attributes in our protein graphs. We have laid the foundation for a protein function prediction system that integrates protein information from various sources efficiently and effectively.

PDF Web DOI [BibTex]

Texture and haptic cues in slant discrimination: Reliability-based cue weighting without statistically optimal cue combination

Rosas, P., Wagemans, J., Ernst, M., Wichmann, F.

Journal of the Optical Society of America A, 22(5):801-809, May 2005 (article)

Abstract
A number of models of depth cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum variance unbiased estimator that can be constructed from the available information. Here we test such models using visual and haptic depth information. Different texture types produce differences in slant discrimination performance, providing a means for testing a reliability-sensitive cue combination model using texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability, but fell short of statistically optimal combination—we find reliability-based re-weighting, but not statistically optimal cue combination.

PDF Web [BibTex]

Motor Skill Learning for Humanoid Robots

Peters, J.

First Conference Undergraduate Computer Sciences and Information Sciences (CS/IS), May 2005 (talk)

[BibTex]

Bayesian inference for psychometric functions

Kuss, M., Jäkel, F., Wichmann, F.

Journal of Vision, 5(5):478-492, May 2005 (article)

Abstract
In psychophysical studies, the psychometric function is used to model the relation between physical stimulus intensity and the observer’s ability to detect or discriminate between stimuli of different intensities. In this study, we propose the use of Bayesian inference to extract the information contained in experimental data to estimate the parameters of psychometric functions. Because Bayesian inference cannot be performed analytically, we describe how a Markov chain Monte Carlo method can be used to generate samples from the posterior distribution over parameters. These samples are used to estimate Bayesian confidence intervals and other characteristics of the posterior distribution. In addition, we discuss the parameterization of psychometric functions and the role of prior distributions in the analysis. The proposed approach is exemplified using artificially generated data and in a case study for real experimental data. Furthermore, we compare our approach with traditional methods based on maximum likelihood parameter estimation combined with bootstrap techniques for confidence interval estimation and find the Bayesian approach to be superior.
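The sampling approach can be illustrated with a small random-walk Metropolis sampler for a two-parameter 2AFC logistic psychometric function. This is a simplified stand-in for the paper's parameterization: chance level fixed at 0.5, zero lapse rate, flat priors, and hypothetical step-size settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(x, m, w):
    # 2AFC psychometric function: logistic core, chance level 0.5,
    # lapse rate fixed at zero (a simplifying assumption)
    return 0.5 + 0.5 / (1.0 + np.exp(-(x - m) / w))

def log_post(theta, x, k, n):
    m, log_w = theta
    p = np.clip(p_correct(x, m, np.exp(log_w)), 1e-9, 1 - 1e-9)
    return np.sum(k * np.log(p) + (n - k) * np.log(1 - p))  # flat priors

def metropolis(x, k, n, steps=4000, scale=0.15):
    """Random-walk Metropolis over (threshold m, log-width log_w)."""
    theta = np.array([np.mean(x), 0.0])   # start at data midpoint, w = 1
    lp = log_post(theta, x, k, n)
    out = np.empty((steps, 2))
    for t in range(steps):
        prop = theta + scale * rng.normal(size=2)
        lp_prop = log_post(prop, x, k, n)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept rule
            theta, lp = prop, lp_prop
        out[t] = theta
    return out
```

Posterior means and credible intervals for threshold and width then come directly from the sample array, as the abstract describes.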

PDF PDF DOI [BibTex]

Classification of natural scenes using global image statistics

Drewes, J., Wichmann, F., Gegenfurtner, K.

47, pages: 88, 47. Tagung experimentell arbeitender Psychologen, April 2005 (poster)

[BibTex]

A gene expression map of Arabidopsis thaliana development

Schmid, M., Davison, T., Henz, S., Pape, U., Demar, M., Vingron, M., Schölkopf, B., Weigel, D., Lohmann, J.

Nature Genetics, 37(5):501-506, April 2005 (article)

Abstract
Regulatory regions of plant genes tend to be more compact than those of animal genes, but the complement of transcription factors encoded in plant genomes is as large or larger than that found in those of animals. Plants therefore provide an opportunity to study how transcriptional programs control multicellular development. We analyzed global gene expression during development of the reference plant Arabidopsis thaliana in samples covering many stages, from embryogenesis to senescence, and diverse organs. Here, we provide a first analysis of this data set, which is part of the AtGenExpress expression atlas. We observed that the expression levels of transcription factor genes and signal transduction components are similar to those of metabolic genes. Examining the expression patterns of large gene families, we found that they are often more similar than would be expected by chance, indicating that many gene families have been co-opted for specific developmental processes.

PDF DOI [BibTex]

Morphological characterization of molecular complexes present in the synaptic cleft

Lucic, V., Yang, T., Schweikert, G., Förster, F., Baumeister, W.

Structure, 13(3):423-434, March 2005 (article)

Abstract
We obtained tomograms of isolated mammalian excitatory synapses by cryo-electron tomography. This method allows the investigation of biological material in the frozen-hydrated state, without staining, and can therefore provide reliable structural information at the molecular level. We developed an automated procedure for the segmentation of molecular complexes present in the synaptic cleft based on thresholding and connectivity, and calculated several morphological characteristics of these complexes. Extensive lateral connections along the synaptic cleft are shown to form a highly connected structure with a complex topology. Our results are essentially parameter-free, i.e., they do not depend on the choice of certain parameter values (such as threshold). In addition, the results are not sensitive to noise; the same conclusions can be drawn from the analysis of both nondenoised and denoised tomograms.

PDF DOI [BibTex]

Experimentally optimal ν in support vector regression for different noise models and parameter settings

Chalimourda, A., Schölkopf, B., Smola, A.

Neural Networks, 18(2):205-205, March 2005 (article)

PDF DOI [BibTex]

Classification of Natural Scenes using Global Image Statistics

Drewes, J., Wichmann, F., Gegenfurtner, K.

8, pages: 88, 8th Tübingen Perception Conference (TWK), February 2005 (poster)

Abstract
The algorithmic classification of complex, natural scenes is generally considered a difficult task due to the large amount of information conveyed by natural images. Work by Simon Thorpe and colleagues showed that humans are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. This suggests that the relevant information for classification can be extracted at comparatively limited computational cost. One hypothesis is that global image statistics such as the amplitude spectrum could underlie fast image classification (Johnson & Olshausen, Journal of Vision, 2003; Torralba & Oliva, Network: Comput. Neural Syst., 2003). We used linear discriminant analysis to classify a set of 11,000 images into animal and non-animal images. After applying a DFT to each image, we put its Fourier spectrum into 48 bins (8 orientations with 6 frequency bands). Using all of these bins, classification performance on the Fourier spectrum reached 70%. In an iterative procedure, we then removed the bins whose absence caused the smallest damage to the classification performance (one bin per iteration). Notably, performance stayed at about 70% until fewer than 6 bins were left. A detailed analysis of the classification weights showed that a comparatively high level of performance (67%) could also be obtained when only 2 bins were used, namely the vertical orientations at the highest spatial frequency band. When using only a single frequency band (8 bins), we found that 67% classification performance could be reached when only the high spatial frequency information was used; performance decreased steadily at lower spatial frequencies, reaching a minimum (50%) for the low spatial frequency information. Similar results were obtained when all bins were used on spatially pre-filtered images.
Our results show that in the absence of sophisticated machine learning techniques, animal detection in natural scenes is limited to rather modest levels of performance, far below those of human observers. If one limits oneself to global image statistics such as the DFT, then mostly information at the highest spatial frequencies is useful for the task. This is analogous to the results obtained with human observers on filtered images (Kirchner et al., VSS 2004).

Web [BibTex]

Efficient Adaptive Sampling of the Psychometric Function by Maximizing Information Gain

Tanner, T., Hill, N., Rasmussen, C., Wichmann, F.

8, pages: 109, (Editors: Bülthoff, H. H., H. A. Mallot, R. Ulrich and F. A. Wichmann), 8th Tübingen Perception Conference (TWK), February 2005 (poster)

Abstract
A psychometric function can be described by its shape and four parameters: position or threshold, slope or width, false alarm rate or chance level, and miss or lapse rate. Depending on the parameters of interest, some points on the psychometric function may be more informative than others. Adaptive methods attempt to place trials on the most informative points based on the data collected in previous trials. We introduce a new adaptive Bayesian psychometric method which collects data for any set of parameters with high efficiency. It places trials by minimizing the expected entropy [1] of the posterior pdf over a set of possible stimuli. In contrast to most other adaptive methods, it is neither limited to threshold measurement nor to forced-choice designs. Nuisance parameters can be included in the estimation and lead to less biased estimates. The method supports block designs, which do not harm the performance when a sufficient number of trials are performed. Block designs are useful for controlling response bias and short-term performance shifts such as adaptation. We present the results of evaluations of the method by computer simulations and experiments with human observers. In the simulations we investigated the role of parametric assumptions, the quality of different point estimates, the effect of dynamic termination criteria and many other settings. [1] Kontsevich, L.L. and Tyler, C.W. (1999): Bayesian adaptive estimation of psychometric slope and threshold. Vis. Res. 39 (16), 2729-2737.

Web [BibTex]

Automatic Classification of Plankton from Digital Images

Sieracki, M., Riseman, E., Balch, W., Benfield, M., Hanson, A., Pilskaln, C., Schultz, H., Sieracki, C., Utgoff, P., Blaschko, M., Holness, G., Mattar, M., Lisin, D., Tupper, B.

ASLO Aquatic Sciences Meeting, 1, pages: 1, February 2005 (poster)

[BibTex]

Bayesian Inference for Psychometric Functions

Kuss, M., Jäkel, F., Wichmann, F.

8, pages: 106, (Editors: Bülthoff, H. H., H. A. Mallot, R. Ulrich and F. A. Wichmann), 8th Tübingen Perception Conference (TWK), February 2005 (poster)

Abstract
In psychophysical studies of perception, the psychometric function is used to model the relation between the physical stimulus intensity and the observer's ability to detect or discriminate between stimuli of different intensities. We propose the use of Bayesian inference to extract the information contained in experimental data to learn about the parameters of psychometric functions. Since Bayesian inference cannot be performed analytically, we use a Markov chain Monte Carlo method to generate samples from the posterior distribution over parameters. These samples can be used to estimate Bayesian confidence intervals and other characteristics of the posterior distribution. We compare our approach with traditional methods based on maximum-likelihood parameter estimation combined with parametric bootstrap techniques for confidence interval estimation. Experiments indicate that Bayesian inference methods are superior to bootstrap-based methods and are thus the method of choice for estimating the psychometric function and its confidence intervals.

Web [BibTex]

Kernel Constrained Covariance for Dependence Measurement

Gretton, A., Smola, A., Bousquet, O., Herbrich, R., Belitski, A., Augath, M., Murayama, Y., Schölkopf, B., Logothetis, N.

AISTATS, January 2005 (talk)

Abstract
We discuss reproducing kernel Hilbert space (RKHS)-based measures of statistical dependence, with emphasis on constrained covariance (COCO), a novel criterion to test dependence of random variables. We show that COCO is a test for independence if and only if the associated RKHSs are universal. That said, no independence test exists that can distinguish dependent and independent random variables in all circumstances. Dependent random variables can result in a COCO which is arbitrarily close to zero when the source densities are highly non-smooth. All current kernel-based independence tests share this behaviour. We demonstrate exponential convergence between the population and empirical COCO. Finally, we use COCO as a measure of joint neural activity between voxels in MRI recordings of the macaque monkey, and compare the results to the mutual information and the correlation. We also show the effect of removing breathing artefacts from the MRI recording.
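An empirical version of the constrained covariance can be computed from centered Gram matrices. The sketch below takes COCO as the square root of the largest eigenvalue of the product of centered Gram matrices, scaled by 1/n; this normalization, the Gaussian RBF kernel, and the bandwidth `sigma` are assumptions of the sketch rather than details taken from the talk.

```python
import numpy as np

def rbf_gram(z, sigma=1.0):
    # Gram matrix of a Gaussian RBF kernel on 1-D samples
    d = z[:, None] - z[None, :]
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def coco(x, y, sigma=1.0):
    """Empirical constrained covariance sketch: sqrt of the largest
    eigenvalue of the product of centered Gram matrices, scaled by 1/n."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kx = H @ rbf_gram(x, sigma) @ H
    Ky = H @ rbf_gram(y, sigma) @ H
    lam = np.max(np.abs(np.linalg.eigvals(Kx @ Ky)))
    return np.sqrt(lam) / n
```

Dependent samples should yield a larger statistic than independent ones, which is how the measure is used as a dependence test.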

PostScript [BibTex]

Semi-supervised protein classification using cluster kernels

Weston, J., Leslie, C., Ie, E., Zhou, D., Elisseeff, A., Noble, W.

Bioinformatics, 21(15):3241-3247, 2005 (article)

[BibTex]

Invariance of Neighborhood Relation under Input Space to Feature Space Mapping

Shin, H., Cho, S.

Pattern Recognition Letters, 26(6):707-718, 2005 (article)

Abstract
If the training pattern set is large, it takes a large amount of memory and a long time to train a support vector machine (SVM). Recently, we proposed the neighborhood property based pattern selection algorithm (NPPS), which selects only the patterns that are likely to be near the decision boundary ahead of SVM training [Proc. of the 7th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), Lecture Notes in Artificial Intelligence (LNAI 2637), Seoul, Korea, pp. 376-387]. NPPS tries to identify those patterns that are likely to become support vectors in feature space. Preliminary reports show its effectiveness: SVM training time was reduced by two orders of magnitude with almost no loss in accuracy for various datasets. It has to be noted, however, that the decision boundary of the SVM and the support vectors are all defined in feature space, while NPPS as described above operates in input space. If the neighborhood relation in input space is not preserved in feature space, NPPS may not always be effective. In this paper, we show that the neighborhood relation is invariant under the input to feature space mapping. The result assures that the patterns selected by NPPS in input space are likely to be located near the decision boundary in feature space.

PDF PDF [BibTex]

Global image statistics of natural scenes

Drewes, J., Wichmann, F., Gegenfurtner, K.

Bioinspired Information Processing, 08, pages: 1, 2005 (poster)

[BibTex]

Graph Kernels for Chemical Informatics

Ralaivola, L., Swamidass, J., Saigo, H., Baldi, P.

Neural Networks, 18(8):1093-1110, 2005 (article)

Abstract
Increased availability of large repositories of chemical compounds is creating new challenges and opportunities for the application of machine learning methods to problems in computational chemistry and chemical informatics. Because chemical compounds are often represented by the graph of their covalent bonds, machine learning methods in this domain must be capable of processing graphical structures with variable size. Here we first briefly review the literature on graph kernels and then introduce three new kernels (Tanimoto, MinMax, Hybrid) based on the idea of molecular fingerprints and counting labeled paths of depth up to d using depth-first search from each possible vertex. The kernels are applied to three classification problems to predict mutagenicity, toxicity, and anti-cancer activity on three publicly available data sets. The kernels achieve performances at least comparable, and most often superior, to those previously reported in the literature, reaching accuracies of 91.5% on the Mutag dataset, 65-67% on the PTC (Predictive Toxicology Challenge) dataset, and 72% on the NCI (National Cancer Institute) dataset. Properties and tradeoffs of these kernels, as well as other proposed kernels that leverage 1D or 3D representations of molecules, are briefly discussed.
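On precomputed fingerprint vectors, the Tanimoto and MinMax kernels are simple to state; here is a minimal sketch (fingerprints are assumed to be already extracted as nonnegative count vectors, and the path-counting step itself is not shown):

```python
import numpy as np

def tanimoto(u, v):
    """Tanimoto kernel on fingerprint vectors:
    k(u, v) = <u, v> / (<u, u> + <v, v> - <u, v>)."""
    uv = float(np.dot(u, v))
    return uv / (np.dot(u, u) + np.dot(v, v) - uv)

def minmax(u, v):
    """MinMax kernel on count fingerprints:
    k(u, v) = sum_i min(u_i, v_i) / sum_i max(u_i, v_i)."""
    return np.minimum(u, v).sum() / np.maximum(u, v).sum()
```

On binary fingerprints the two kernels coincide; MinMax extends the similarity to counts.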

PDF DOI [BibTex]

Extended Gaussianization Method for Blind Separation of Post-Nonlinear Mixtures

Zhang, K., Chan, L.

Neural Computation, 17(2):425-452, 2005 (article)

Abstract
The linear mixture model has been investigated in most articles tackling the problem of blind source separation. Recently, several articles have addressed a more complex model: blind source separation (BSS) of post-nonlinear (PNL) mixtures. These mixtures are assumed to be generated by applying an unknown invertible nonlinear distortion to linear instantaneous mixtures of some independent sources. The gaussianization technique for BSS of PNL mixtures emerged based on the assumption that the distribution of the linear mixture of independent sources is gaussian. In this letter, we review the gaussianization method and then extend it to apply to PNL mixture in which the linear mixture is close to gaussian. Our proposed method approximates the linear mixture using the Cornish-Fisher expansion. We choose the mutual information as the independence measurement to develop a learning algorithm to separate PNL mixtures. This method provides better applicability and accuracy. We then discuss the sufficient condition for the method to be valid. The characteristics of the nonlinearity do not affect the performance of this method. With only a few parameters to tune, our algorithm has a comparatively low computation. Finally, we present experiments to illustrate the efficiency of our method.

Web DOI [BibTex]

Theory of Classification: A Survey of Some Recent Advances

Boucheron, S., Bousquet, O., Lugosi, G.

ESAIM: Probability and Statistics, 9, pages: 323, 2005 (article)

Abstract
The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these important recent developments.

PDF DOI [BibTex]

Support Vector Machines and Kernel Algorithms

Schölkopf, B., Smola, A.

In Encyclopedia of Biostatistics (2nd edition), Vol. 8, pages: 5328-5335, (Editors: P Armitage and T Colton), John Wiley & Sons, NY USA, 2005 (inbook)

[BibTex]

Moment Inequalities for Functions of Independent Random Variables

Boucheron, S., Bousquet, O., Lugosi, G., Massart, P.

Annals of Probability, 33, pages: 514-560, 2005 (article)

Abstract
A general method for obtaining moment inequalities for functions of independent random variables is presented. It is a generalization of the entropy method which has been used to derive concentration inequalities for such functions, and is based on a generalized tensorization inequality due to Latała and Oleszkiewicz. The new inequalities prove to be a versatile tool in a wide range of applications. We illustrate the power of the method by showing how it can be used to effortlessly re-derive classical inequalities, including Rosenthal and Kahane-Khinchine-type inequalities for sums of independent random variables, moment inequalities for suprema of empirical processes, and moment inequalities for Rademacher chaos and U-statistics. Some of these corollaries are apparently new. In particular, we generalize Talagrand's exponential inequality for Rademacher chaos of order two to any order. We also discuss applications for other complex functions of independent random variables, such as suprema of Boolean polynomials, which include, as special cases, subgraph counting problems in random graphs.

PDF [BibTex]

Visual perception I: Basic principles

Wagemans, J., Wichmann, F., de Beeck, H.

In Handbook of Cognition, pages: 3-47, (Editors: Lamberts, K. , R. Goldstone), Sage, London, 2005 (inbook)

[BibTex]

Kernel-Methods, Similarity, and Exemplar Theories of Categorization

Jäkel, F., Wichmann, F.

ASIC, 4, 2005 (poster)

Abstract
Kernel methods are popular tools in machine learning and statistics that can be implemented in a simple feed-forward neural network. They have strong connections to several psychological theories. For example, Shepard's universal law of generalization can be given a kernel interpretation. This leads to an inner product and a metric on the psychological space that is different from the usual Minkowski norm. The metric has psychologically interesting properties: it is bounded from above and does not have additive segments. As categorization models often rely on Shepard's law as a model for psychological similarity, some of them can be recast as kernel methods. In particular, ALCOVE is shown to be closely related to kernel logistic regression. The relationship to the Generalized Context Model is also discussed. It is argued that functional analysis, which is routinely used in machine learning, provides valuable insights for psychology as well.

Web [BibTex]

Rapid animal detection in natural scenes: critical features are local

Wichmann, F., Rosas, P., Gegenfurtner, K.

Experimentelle Psychologie. Beiträge zur 47. Tagung experimentell arbeitender Psychologen, 47, pages: 225, 2005 (poster)

[BibTex]

A novel representation of protein sequences for prediction of subcellular location using support vector machines

Matsuda, S., Vert, J., Saigo, H., Ueda, N., Toh, H., Akutsu, T.

Protein Science, 14, pages: 2804-2813, 2005 (article)

Abstract
As the number of complete genomes rapidly increases, accurate methods to automatically predict the subcellular location of proteins are increasingly useful to help their functional annotation. In order to improve the predictive accuracy of the many prediction methods developed to date, a novel representation of protein sequences is proposed. This representation involves local compositions of amino acids and twin amino acids, and local frequencies of distance between successive (basic, hydrophobic, and other) amino acids. For calculating the local features, each sequence is split into three parts: N-terminal, middle, and C-terminal. The N-terminal part is further divided into four regions to consider ambiguity in the length and position of signal sequences. We tested this representation with support vector machines on two data sets extracted from the SWISS-PROT database. Through fivefold cross-validation tests, overall accuracies of more than 87% and 91% were obtained for eukaryotic and prokaryotic proteins, respectively. It is concluded that considering the respective features in the N-terminal, middle, and C-terminal parts is helpful to predict the subcellular location. Keywords: subcellular location; signal sequence; amino acid composition; distance frequency; support vector machine; predictive accuracy

Web DOI [BibTex]

The human brain as large margin classifier

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

Proceedings of the Computational & Systems Neuroscience Meeting (COSYNE), 2, pages: 1, 2005 (poster)

[BibTex]

A tutorial on ν-support vector machines

Chen, P., Lin, C., Schölkopf, B.

Applied Stochastic Models in Business and Industry, 21(2):111-136, 2005 (article)

Abstract
We briefly describe the main ideas of statistical learning theory, support vector machines (SVMs), and kernel feature spaces. We place particular emphasis on a description of the so-called ν-SVM, including details of the algorithm and its implementation, theoretical results, and practical applications.

PDF [BibTex]

Robust EEG Channel Selection Across Subjects for Brain Computer Interfaces

Schröder, M., Lal, T., Hinterberger, T., Bogdan, M., Hill, J., Birbaumer, N., Rosenstiel, W., Schölkopf, B.

EURASIP Journal on Applied Signal Processing, 2005(19, Special Issue: Trends in Brain Computer Interfaces):3103-3112, (Editors: Vesin, J. M., T. Ebrahimi), 2005 (article)

Abstract
Most EEG-based Brain Computer Interface (BCI) paradigms come with specific electrode positions; e.g., for a visually based BCI, electrode positions close to the primary visual cortex are used. For new BCI paradigms it is usually not known where task-relevant activity can be measured from the scalp. For individual subjects, Lal et al. showed that recording positions can be found without the use of prior knowledge about the paradigm used. However, it remains unclear to what extent their method of Recursive Channel Elimination (RCE) can be generalized across subjects. In this paper we transfer channel rankings from a group of subjects to a new subject. For motor imagery tasks the results are promising, although cross-subject channel selection does not quite achieve the performance of channel selection on data of single subjects. Although the RCE method was not provided with prior knowledge about the mental task, channels that are well known to be important (from a physiological point of view) were consistently selected, whereas task-irrelevant channels were reliably disregarded.

Web DOI [BibTex]

2001

Perception of Planar Shapes in Depth

Wichmann, F., Willems, B., Rosas, P., Wagemans, J.

Journal of Vision, 1(3):176, First Annual Meeting of the Vision Sciences Society (VSS), December 2001 (poster)

Abstract
We investigated the influence of the perceived 3D-orientation of planar elliptical shapes on the perception of the shapes themselves. Ellipses were projected onto the surface of a sphere and subjects were asked to indicate if the projected shapes looked as if they were a circle on the surface of the sphere. The image of the sphere was obtained from a real, (near) perfect sphere using a highly accurate digital camera (real sphere diameter 40 cm; camera-to-sphere distance 320 cm; for details see Willems et al., Perception 29, S96, 2000; Photometrics SenSys 400 digital camera with Rodenstock lens, 12-bit linear luminance resolution). Stimuli were presented monocularly on a carefully linearized Sony GDM-F500 monitor keeping the scene geometry as in the real case (sphere diameter on screen 8.2 cm; viewing distance 66 cm). Experiments were run in a darkened room using a viewing tube to minimize, as far as possible, extraneous monocular cues to depth. Three different methods were used to obtain subjects' estimates of 3D-shape: the method of adjustment, temporal 2-alternative forced choice (2AFC) and yes/no. Several results are noteworthy. First, mismatch between perceived and objective slant tended to decrease with increasing objective slant. Second, the variability of the settings, too, decreased with increasing objective slant. Finally, we comment on the results obtained using different psychophysical methods and compare our results to those obtained using a real sphere and binocular vision (Willems et al.).

Web DOI [BibTex]

Anabolic and Catabolic Gene Expression Pattern Analysis in Normal Versus Osteoarthritic Cartilage Using Complementary DNA-Array Technology

Aigner, T., Zien, A., Gehrsitz, A., Gebhard, P., McKenna, L.

Arthritis and Rheumatism, 44(12):2777-2789, December 2001 (article)

Web [BibTex]

Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators

Williamson, R., Smola, A., Schölkopf, B.

IEEE Transactions on Information Theory, 47(6):2516-2532, September 2001 (article)

Abstract
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.

ei

DOI [BibTex]

Centralization: A new method for the normalization of gene expression data

Zien, A., Aigner, T., Zimmer, R., Lengauer, T.

Bioinformatics, 17, pages: S323-S331, June 2001, Mathematical supplement available at http://citeseer.ist.psu.edu/574280.html (article)

Abstract
Microarrays measure values that are approximately proportional to the numbers of copies of different mRNA molecules in samples. Due to technical difficulties, the constant of proportionality between the measured intensities and the numbers of mRNA copies per cell is unknown and may vary for different arrays. Usually, the data are normalized (i.e., array-wise multiplied by appropriate factors) in order to compensate for this effect and to enable informative comparisons between different experiments. Centralization is a new two-step method for the computation of such normalization factors that is both biologically better motivated and more robust than standard approaches. First, for each pair of arrays the quotient of the constants of proportionality is estimated. Second, from the resulting matrix of pairwise quotients an optimally consistent scaling of the samples is computed.
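The two-step scheme the abstract describes can be sketched in a few lines of NumPy. This is a hypothetical toy implementation, not the authors' code: it estimates each pairwise quotient as the median per-gene log-ratio and takes the least-squares consistent scaling in log space.

```python
import numpy as np

def centralization_factors(X):
    """Toy sketch of the two-step centralization idea (not the authors' code).

    X: (genes, arrays) matrix of positive intensities.
    Step 1: for each pair of arrays, estimate the quotient of their
            proportionality constants via the median per-gene log-ratio.
    Step 2: compute an optimally consistent scaling by least squares in
            log space; with all pairs observed this is the row mean.
    """
    n_arrays = X.shape[1]
    logr = np.zeros((n_arrays, n_arrays))
    for i in range(n_arrays):
        for j in range(n_arrays):
            logr[i, j] = np.median(np.log(X[:, i] / X[:, j]))
    logs = logr.mean(axis=1)           # least-squares consistent log-scales
    return np.exp(logs - logs.mean())  # fix the free overall constant

# toy data: three arrays that are scaled copies of one expression profile
rng = np.random.default_rng(0)
profile = rng.lognormal(size=(1000, 1))
scales = np.array([1.0, 2.0, 0.5])
X = profile * scales
s = centralization_factors(X)  # recovers [1.0, 2.0, 0.5] up to a constant
```

Because the three toy arrays differ only by exact scale factors, the median log-ratios are exact and the factors are recovered perfectly; with real, noisy intensities the median makes the estimate robust to outlier genes.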

ei

PDF PostScript Web [BibTex]

Regularized principal manifolds

Smola, A., Mika, S., Schölkopf, B., Williamson, R.

Journal of Machine Learning Research, 1, pages: 179-209, June 2001 (article)

Abstract
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised learning. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers which allows us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.

ei

PDF [BibTex]

Failure Diagnosis of Discrete Event Systems

Son, HI., Kim, KW., Lee, S.

Journal of Control, Automation and Systems Engineering, 7(5):375-383, May 2001, In Korean (article)

ei

[BibTex]

Plaid maskers revisited: asymmetric plaids

Wichmann, F.

pages: 57, 4. Tübinger Wahrnehmungskonferenz (TWK), March 2001 (poster)

Abstract
A large number of psychophysical and physiological experiments suggest that luminance patterns are independently analysed in channels responding to different bands of spatial frequency. There are, however, interactions among stimuli falling well outside the usual estimates of channels' bandwidths. Derrington & Henning (1989) first reported that, in 2-AFC sinusoidal-grating detection, plaid maskers, whose components are oriented symmetrically about the signal orientation, cause a substantially larger threshold elevation than would be predicted from their sinusoidal constituents alone. Wichmann & Tollin (1997a,b) and Wichmann & Henning (1998) confirmed and extended the original findings, measuring masking as a function of presentation time and plaid mask contrast. Here I investigate masking using plaid patterns whose components are asymmetrically positioned about the signal orientation. Standard temporal 2-AFC pattern discrimination experiments were conducted using plaid patterns and oblique sinusoidal gratings as maskers, and horizontally orientated sinusoidal gratings as signals. Signal and maskers were always interleaved on the display (refresh rate 152 Hz). As in the case of the symmetrical plaid maskers, substantial masking was observed for many of the asymmetrical plaids. Masking is neither a straightforward function of the plaid's constituent sinusoidal components nor of the periodicity of the luminance beats between components. These results cause problems for the notion that, even for simple stimuli, detection and discrimination are based on the outputs of channels tuned to limited ranges of spatial frequency and orientation, even if a limited set of nonlinear interactions between these channels is allowed.

ei

Web [BibTex]

Pattern Selection Using the Bias and Variance of Ensemble

Shin, H., Cho, S.

Journal of the Korean Institute of Industrial Engineers, 28(1):112-127, March 2001 (article)

Abstract
A useful pattern is a pattern that contributes much to learning. For a classification problem, the patterns near the class boundary surfaces carry more information to the classifier. For a regression problem, the ones near the estimated surface carry more information. In both cases, the usefulness is defined only for those patterns either without error or with negligible error. Using only the useful patterns gives several benefits. First, the computational complexity in memory and time for learning is decreased. Second, overfitting is avoided even when the learner is over-sized. Third, learning results in more stable learners. In this paper, we propose a pattern “utility index” that measures the utility of an individual pattern. The utility index is based on the bias and variance of a pattern trained by a network ensemble. In classification, a pattern with a low bias and a high variance gets a high score. In regression, on the other hand, one with a low bias and a low variance gets a high score. Based on the distribution of the utility index, the original training set is divided into a high-score group and a low-score group. Only the high-score group is then used for training. The proposed method is tested on synthetic and real-world benchmark datasets, and gives a better or at least similar performance.

ei

[BibTex]

Structure and Functionality of a Designed p53 Dimer.

Davison, TS., Nie, X., Ma, W., Lin, Y., Kay, C., Benchimol, S., Arrowsmith, C.

Journal of Molecular Biology, 307(2):605-617, March 2001 (article)

Abstract
p53 is a homotetrameric tumor suppressor protein involved in transcriptional control of genes that regulate cell proliferation and death. In order to probe the role that oligomerization plays in this capacity, we have previously designed and characterized a series of p53 proteins with altered oligomeric states through hydrophilic substitution of residues Met340 or Leu344 in the normally tetrameric oligomerization domain. Although such mutations have little effect on the overall secondary structural content of the oligomerization domain, both solubility and resistance to thermal denaturation are substantially reduced relative to the wild-type domain. Here, we report the design and characterization of double-mutant p53 proteins with alterations of residues at positions Met340 and Leu344. The double mutations Met340Glu/Leu344Lys and Met340Gln/Leu344Arg resulted in distinct dimeric forms of the protein. Furthermore, we have verified by NMR structure determination that the double mutant Met340Gln/Leu344Arg is essentially a "half-tetramer". Analysis of the in vivo activities of full-length p53 oligomeric mutants reveals that, while cell-cycle arrest requires tetrameric p53, the transcriptional transactivation activity of monomers and dimers retains roughly background and half of the wild-type activity, respectively.

ei

Web [BibTex]

An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

IEEE Transactions on Neural Networks, 12(2):181-201, March 2001 (article)

Abstract
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples of successful kernel-based learning methods. We first give a short background on Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios, including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.

ei

DOI [BibTex]

Estimating the support of a high-dimensional distribution.

Schölkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., Williamson, R.

Neural Computation, 13(7):1443-1471, March 2001 (article)

Abstract
Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a “simple” subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.
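The one-class algorithm described here is available in scikit-learn as `OneClassSVM`. A minimal usage sketch (hyperparameter values chosen arbitrarily for illustration), showing the role of the a priori specified fraction `nu` on training data:

```python
import numpy as np
from sklearn.svm import OneClassSVM  # implements this one-class formulation

# training sample drawn from an unknown distribution P
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))

# nu plays the role of the a priori specified outlier fraction:
# it upper-bounds the fraction of training points left outside the
# estimated region and lower-bounds the fraction of support vectors
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)

pred = clf.predict(X)                      # +1 inside the region S, -1 outside
outlier_frac = float(np.mean(pred == -1))  # <= nu (up to numerics)
sv_frac = len(clf.support_vectors_) / len(X)
```

The estimated set S is the region where the learned kernel expansion is positive; for genuinely novel test points, `clf.predict` returns -1.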

ei

Web DOI [BibTex]

The psychometric function: II. Bootstrap-based confidence intervals and sampling

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1314-1329, 2001 (article)

ei

PDF [BibTex]

The psychometric function: I. Fitting, sampling and goodness-of-fit

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1293-1313, 2001 (article)

Abstract
The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or "lapses"). We show that failure to account for this can lead to serious biases in estimates of the psychometric function's parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional X^2 methods to psychophysical data and advocate the use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.
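The constrained maximum-likelihood fit can be sketched as follows. This is a hypothetical illustration (a 2AFC Weibull with made-up stimulus levels and parameters), not the authors' software: the binomial log-likelihood is maximized with the lapse rate constrained to a small interval so that stimulus-independent errors do not bias the other parameters.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_2afc(x, alpha, beta, lam):
    # 2AFC psychometric function: guess rate 0.5, lapse rate lam
    return 0.5 + (0.5 - lam) * (1.0 - np.exp(-(x / alpha) ** beta))

def neg_log_lik(params, x, k, n):
    # binomial negative log-likelihood of k correct out of n trials
    alpha, beta, lam = params
    p = np.clip(weibull_2afc(x, alpha, beta, lam), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

# simulated data: 6 stimulus levels, 50 trials each, known true parameters
x = np.array([0.5, 0.8, 1.0, 1.3, 1.8, 2.5])
n = np.full(6, 50.0)
k = np.round(n * weibull_2afc(x, alpha=1.0, beta=3.0, lam=0.02))

# constrained ML fit: lapse rate bounded (here 0 <= lam <= 0.06)
res = minimize(neg_log_lik, x0=[1.2, 2.0, 0.01], args=(x, k, n),
               method="L-BFGS-B",
               bounds=[(0.1, 5.0), (0.5, 10.0), (0.0, 0.06)])
alpha_hat, beta_hat, lam_hat = res.x
```

Goodness of fit can then be assessed by comparing the fitted deviance against a Monte Carlo distribution of deviances from data simulated at the fitted parameters, rather than relying on asymptotic X^2 theory.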

ei

PDF [BibTex]

Extracting egomotion from optic flow: limits of accuracy and neural matched filters

Dahmen, H-J., Franz, MO., Krapp, HG.

In pages: 143-168, Springer, Berlin, 2001 (inbook)

ei

[BibTex]

The pedestal effect with a pulse train and its constituent sinusoids

Henning, G., Wichmann, F., Bird, C.

Twenty-Sixth Annual Interdisciplinary Conference, 2001 (poster)

Abstract
Curves showing "threshold" contrast for detecting a signal grating as a function of the contrast of a masking grating of the same orientation, spatial frequency, and phase show a characteristic improvement in performance at masker contrasts near the contrast threshold of the unmasked signal. Depending on the percentage of correct responses used to define the threshold, the best performance can be as much as a factor of three better than the unmasked threshold obtained in the absence of any masking grating. The result is called the pedestal effect (sometimes, the dipper function). We used a 2AFC procedure to measure the effect with harmonically related sinusoids ranging from 2 to 16 c/deg - all with maskers of the same orientation, spatial frequency and phase - and with masker contrasts ranging from 0 to 50%. The curves for different spatial frequencies are identical if both the vertical axis (showing the threshold signal contrast) and the horizontal axis (showing the masker contrast) are scaled by the threshold contrast of the signal obtained with no masker. Further, a pulse train with a fundamental frequency of 2 c/deg produces a curve that is indistinguishable from that of a 2-c/deg sinusoid despite the fact that at higher masker contrasts, the pulse train contains at least 8 components all of them equally detectable. The effect of adding 1-D spatial noise is also discussed.

ei

[BibTex]

The control structure of artificial creatures

Zhou, D., Dai, R.

Artificial Life and Robotics, 5(3), 2001, invited article (article)

ei

Web [BibTex]

Markovian domain fingerprinting: statistical segmentation of protein sequences

Bejerano, G., Seldin, Y., Margalit, H., Tishby, N.

Bioinformatics, 17(10):927-934, 2001 (article)

ei

PDF Web [BibTex]

Modeling the Dynamics of Individual Neurons of the Stomatogastric Networks with Support Vector Machines

Frontzek, T., Gutzen, C., Lal, TN., Heinzel, H-G., Eckmiller, R., Böhm, H.

Abstract Proceedings of the 6th International Congress of Neuroethology (ICN'2001) Bonn, abstract 404, 2001 (poster)

Abstract
In small rhythmically active networks, the timing of individual neurons is crucial for generating different spatio-temporal motor patterns. Switching of a single neuron between different rhythms can cause a transition between behavioral modes. In order to understand the dynamics of rhythmically active neurons, we analyzed the oscillatory membrane potential of a pacemaker neuron and used different neural network models to predict the dynamics of its time series. In a first step, we trained conventional RBF networks and Support Vector Machines (SVMs) with Gaussian kernels on intracellular recordings of the pyloric dilator neuron in the Australian crayfish, Cherax destructor albidus. As a rule, SVMs learned the nonlinear dynamics of pyloric neurons faster (e.g., 15 s) than RBF networks (e.g., 309 s) under the same hardware conditions. After training, SVMs performed better iterated one-step-ahead prediction of the time series of the pyloric dilator neuron with regard to test error and error sum. The test error decreased with an increasing number of support vectors. The best SVM used 196 support vectors and produced a test error of 0.04622, as opposed to 0.07295 for the best RBF network using 26 RBF neurons. In the pacemaker neuron PD, the time point at which the membrane potential crosses the threshold for generating its oscillatory peak is most important in determining the test error. Interestingly, SVMs are particularly good at predicting this important part of the membrane potential, which is superimposed by various synaptic inputs driving the membrane potential toward its threshold.
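The iterated one-step-ahead scheme can be illustrated with support vector regression on a toy oscillatory signal. This is only a sketch with arbitrary hyperparameters and a synthetic stand-in for the recordings, not the study's setup: the model maps a short window of past values to the next value, and iterated prediction feeds its own outputs back in.

```python
import numpy as np
from sklearn.svm import SVR

# toy oscillatory "membrane potential" (synthetic stand-in for recordings)
t = np.arange(0.0, 40.0, 0.1)
v = np.sin(t) + 0.3 * np.sin(3.0 * t)

order = 8  # embedding window length (arbitrary choice)
Xw = np.array([v[i:i + order] for i in range(len(v) - order)])
y = v[order:]

svr = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01)
svr.fit(Xw[:300], y[:300])

# ordinary one-step-ahead prediction on held-out data
mae = float(np.mean(np.abs(svr.predict(Xw[300:350]) - y[300:350])))

# iterated one-step-ahead prediction: feed predictions back as inputs
window = list(v[300:300 + order])
preds = []
for _ in range(50):
    p = svr.predict(np.asarray(window[-order:])[None, :])[0]
    preds.append(p)
    window.append(p)
```

In iterated mode, errors accumulate over steps, which is why the abstract's comparison of test error around the threshold-crossing region is the telling one.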

ei

[BibTex]


2000


Knowledge Discovery in Databases: An Information Retrieval Perspective

Ong, CS.

Malaysian Journal of Computer Science, 13(2):54-63, December 2000 (article)

Abstract
The current trend of increasing capabilities in data generation and collection has resulted in an urgent need for data mining applications, also called knowledge discovery in databases. This paper identifies and examines the issues involved in extracting useful grains of knowledge from large amounts of data. It describes a framework to categorise data mining systems. The author also gives an overview of the issues pertaining to data pre-processing, as well as various information-gathering methodologies and techniques. The paper covers some popular tools such as classification, clustering, and generalisation. A summary of statistical and machine learning techniques in current use is also provided.

ei

PDF [BibTex]
