

2003


m-Alternative Forced Choice—Improving the Efficiency of the Method of Constant Stimuli

Jäkel, F.

Biologische Kybernetik, Graduate School for Neural and Behavioural Sciences, Tübingen, 2003 (diplomathesis)

[BibTex]


New Approaches to Statistical Learning Theory

Bousquet, O.

Annals of the Institute of Statistical Mathematics, 55(2):371-389, 2003 (article)

Abstract
We present new tools from probability theory that can be applied to the analysis of learning algorithms. These tools allow us to derive new bounds on the generalization performance of learning algorithms and to propose alternative measures of the complexity of the learning task, which in turn can be used to derive new learning algorithms.

PostScript [BibTex]


Distance-based classification with Lipschitz functions

von Luxburg, U., Bousquet, O.

In Learning Theory and Kernel Machines, Proceedings of the 16th Annual Conference on Computational Learning Theory, pages: 314-328, (Editors: Schölkopf, B. and Warmuth, M. K.), 2003 (inproceedings)

Abstract
The goal of this article is to develop a framework for large margin classification in metric spaces. We want to find a generalization of linear decision functions for metric spaces and define a corresponding notion of margin such that the decision function separates the training points with a large margin. It turns out that, when Lipschitz functions are used as decision functions, the inverse of the Lipschitz constant can be interpreted as the size of a margin. In order to construct a clean mathematical setup, we isometrically embed the given metric space into a Banach space and the space of Lipschitz functions into its dual space. Our approach leads to a general large margin algorithm for classification in metric spaces. To analyze this algorithm, we first prove a representer theorem. It states that there exists a solution which can be expressed as a linear combination of distances to sets of training points. Then we analyze the Rademacher complexity of some Lipschitz function classes. The generality of the Lipschitz approach can be seen from the fact that several well-known algorithms are special cases of the Lipschitz algorithm, among them the support vector machine, the linear programming machine, and the 1-nearest neighbor classifier.
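The 1-nearest-neighbor special case mentioned in the abstract is easy to make concrete. The following is a minimal sketch (not the authors' code, and the Euclidean metric is an assumption): the decision function is the distance to the nearest negative training point minus the distance to the nearest positive one, each term is 1-Lipschitz, and the sign of the function reproduces the 1-NN classification.

```python
# Minimal sketch, not the authors' code: the 1-nearest-neighbour classifier
# written as a distance-based decision function, one of the special cases of
# the Lipschitz algorithm named in the abstract. The Euclidean metric is an
# assumption; any metric d could be substituted.
import numpy as np

def metric(a, b):
    # assumed metric: Euclidean distance
    return np.linalg.norm(a - b)

def nn_decision(x, X_train, y_train):
    # f(x) = dist to nearest negative point - dist to nearest positive point;
    # each term is 1-Lipschitz in x, and sign(f(x)) reproduces 1-NN classification
    d_pos = min(metric(x, xi) for xi, yi in zip(X_train, y_train) if yi == +1)
    d_neg = min(metric(x, xi) for xi, yi in zip(X_train, y_train) if yi == -1)
    return d_neg - d_pos

# toy usage with two classes in the plane
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([+1, +1, -1, -1])
print(np.sign(nn_decision(np.array([0.1, 0.2]), X, y)))  # -> 1.0 (class +1)
```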

PDF PostScript [BibTex]


An Introduction to Support Vector Machines

Schölkopf, B.

In Recent Advances and Trends in Nonparametric Statistics, pages: 3-17, (Editors: MG Akritas and DN Politis), Elsevier, Amsterdam, The Netherlands, 2003 (inbook)

Web DOI [BibTex]


Statistical Learning and Kernel Methods in Bioinformatics

Schölkopf, B., Guyon, I., Weston, J.

In Artificial Intelligence and Heuristic Methods in Bioinformatics, 183, pages: 1-21, 3, (Editors: P Frasconi and R Shamir), IOS Press, Amsterdam, The Netherlands, 2003 (inbook)

[BibTex]


Interactive Images

Toyama, K., Schölkopf, B.

(MSR-TR-2003-64), Microsoft Research, Cambridge, UK, 2003 (techreport)

Abstract
Interactive Images are a natural extension of three recent developments: digital photography, interactive web pages, and browsable video. An interactive image is a multi-dimensional image, displayed two dimensions at a time (like a standard digital image), but with which a user can interact to browse through the other dimensions. One might consider a standard video sequence viewed with a video player as a simple interactive image with time as the third dimension. Interactive images are a generalization of this idea, in which the third (and greater) dimensions may be focus, exposure, white balance, saturation, and other parameters. Interaction is handled via a variety of modes including those we call ordinal, pixel-indexed, cumulative, and comprehensive. Through exploration of three novel forms of interactive images based on color, exposure, and focus, we will demonstrate the compelling nature of interactive images.
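As a rough illustration of the core idea (an assumption-laden sketch of my own, not the report's implementation), an interactive image can be thought of as a stack of 2D frames indexed by one extra dimension such as focus, with interaction amounting to moving the index along that dimension.

```python
# Illustrative sketch only, not the report's implementation: an "interactive
# image" modelled as a stack of 2D frames indexed by one extra dimension
# (here assumed to be focus); interaction moves the index along that dimension.
import numpy as np

class InteractiveImage:
    def __init__(self, frames, axis_name="focus"):
        self.frames = np.asarray(frames)  # shape: (n_settings, height, width)
        self.axis_name = axis_name
        self.index = 0

    def browse(self, step):
        # move along the extra dimension and return the 2D view to display
        self.index = int(np.clip(self.index + step, 0, len(self.frames) - 1))
        return self.frames[self.index]

# toy usage: three 4x4 frames taken at different (hypothetical) focus settings
stack = InteractiveImage(np.random.rand(3, 4, 4))
view = stack.browse(+1)  # user scrolls one focus step forward
```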

Web [BibTex]


A Short Introduction to Learning with Kernels

Schölkopf, B., Smola, A.

In Proceedings of the Machine Learning Summer School, Lecture Notes in Artificial Intelligence, Vol. 2600, pages: 41-64, (Editors: S Mendelson and AJ Smola), Springer, Berlin, Heidelberg, Germany, 2003 (inbook)

[BibTex]


Bayesian Kernel Methods

Smola, A., Schölkopf, B.

In Advanced Lectures on Machine Learning, Machine Learning Summer School 2002, Lecture Notes in Computer Science, Vol. 2600, pages: 65-117, (Editors: S Mendelson and AJ Smola), Springer, Berlin, Germany, 2003 (inbook)

DOI [BibTex]


Stability of ensembles of kernel machines

Elisseeff, A., Pontil, M.

In NATO Science Series III: Computer and Systems Science, Vol. 190, pages: 111-124, (Editors: Suykens, J., G. Horvath, S. Basu, C. Micchelli and J. Vandewalle), IOS Press, Netherlands, 2003 (inbook)

[BibTex]


Models of contrast transfer as a function of presentation time and spatial frequency

Wichmann, F.

2003 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. In standard 2AFC contrast discrimination experiments conducted on a carefully calibrated display, we previously showed that the shape of the threshold versus (pedestal) contrast (TvC) curve changes with presentation time and with the performance level defined as threshold (Wichmann, 1999; Wichmann & Henning, 1999). Additional experiments looked at the change of the TvC curve with spatial frequency (Bird, Henning & Wichmann, 2002), and at how to constrain the parameters of models of contrast processing (Wichmann, 2002). Here I report modelling results across both spatial frequency and presentation time. An extensive model-selection exploration was performed using Bayesian confidence regions for the fitted parameters as well as cross-validation methods.

References:
Bird, C.M., Henning, G.B. & Wichmann, F.A. (2002). Contrast discrimination with sinusoidal gratings of different spatial frequency. Journal of the Optical Society of America A, 19, 1267-1273.
Wichmann, F.A. (1999). Some aspects of modelling human spatial vision: contrast discrimination. Unpublished doctoral dissertation, The University of Oxford.
Wichmann, F.A. & Henning, G.B. (1999). Implications of the Pedestal Effect for Models of Contrast-Processing and Gain-Control. OSA Annual Meeting Program, 62.
Wichmann, F.A. (2002). Modelling Contrast Transfer in Spatial Vision [Abstract]. Journal of Vision, 2, 7a.

[BibTex]

2001


Regularized principal manifolds

Smola, A., Mika, S., Schölkopf, B., Williamson, R.

Journal of Machine Learning Research, 1, pages: 179-209, June 2001 (article)

Abstract
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised learning. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers which allow us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.
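To make the quantization view concrete, here is a minimal sketch (my own illustration under assumptions, not the paper's algorithm): a principal curve is represented by a discrete set of nodes, and the regularized objective is the mean squared distance of each sample to its nearest node plus a simple squared-difference smoothness penalty standing in for the regularization operator.

```python
# Minimal sketch under assumptions, not the paper's algorithm: a principal
# curve represented by discrete nodes, with the regularized quantization
# objective = mean squared distance to the nearest node + lambda * a simple
# squared-difference smoothness penalty standing in for the regularizer.
import numpy as np

def regularized_quantization_error(X, nodes, lam):
    # squared distance of every sample to every node, then take the minimum
    dists = np.linalg.norm(X[:, None, :] - nodes[None, :, :], axis=2)
    quantization = np.mean(np.min(dists, axis=1) ** 2)
    # penalize large jumps between consecutive nodes (a crude smoothness term)
    smoothness = np.sum(np.linalg.norm(np.diff(nodes, axis=0), axis=1) ** 2)
    return quantization + lam * smoothness

# toy usage: 2-D data quantized by a 10-node curve along the diagonal
X = np.random.rand(100, 2)
nodes = np.linspace(0.0, 1.0, 10)[:, None] * np.ones((1, 2))
print(regularized_quantization_error(X, nodes, lam=0.1))
```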

PDF [BibTex]


Variationsverfahren zur Untersuchung von Grundzustandseigenschaften des Ein-Band Hubbard-Modells [Variational Methods for Investigating Ground-State Properties of the One-Band Hubbard Model]

Eichhorn, J.

Biologische Kybernetik, Technische Universität Dresden, Dresden/Germany, May 2001 (diplomathesis)

Abstract
Using different modifications of a new variational approach, static ground-state properties of the one-band Hubbard model, such as the energy and the staggered magnetisation, are calculated. By taking additional fluctuations into account, the method is gradually improved so that a very good description of the energy in one and two dimensions can be achieved. After a detailed discussion of the application in one dimension, extensions to two dimensions are introduced. Using a modified version of the variational ansatz, it should in particular be possible to describe the quantum phase transition of the magnetisation.

PostScript [BibTex]


The psychometric function: II. Bootstrap-based confidence intervals and sampling

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1314-1329, 2001 (article)

PDF [BibTex]


The psychometric function: I. Fitting, sampling and goodness-of-fit

Wichmann, F., Hill, N.

Perception and Psychophysics, 63 (8), pages: 1293-1313, 2001 (article)

Abstract
The psychometric function relates an observer's performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function's parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function's parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ2 methods to psychophysical data and advocate the use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.
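As an illustration of the kind of constrained maximum-likelihood fit described in the abstract (a minimal sketch under assumptions, not the authors' released software), the following fits a 2AFC psychometric function with a logistic core and a lapse rate bounded to a small interval.

```python
# Minimal sketch under assumptions, not the authors' released software:
# constrained maximum-likelihood fit of a 2AFC psychometric function with a
# logistic core F and a lapse rate bounded to a small interval.
import numpy as np
from scipy.optimize import minimize

def psi(x, alpha, beta, lapse, gamma=0.5):
    # gamma + (1 - gamma - lapse) * F(x); gamma = 0.5 is the 2AFC guess rate
    F = 1.0 / (1.0 + np.exp(-beta * (x - alpha)))
    return gamma + (1.0 - gamma - lapse) * F

def neg_log_likelihood(params, x, n_correct, n_trials):
    alpha, beta, lapse = params
    p = np.clip(psi(x, alpha, beta, lapse), 1e-9, 1.0 - 1e-9)
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1.0 - p))

# toy data: stimulus levels, number of trials and correct responses per level
x = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
n_trials = np.full(5, 50)
n_correct = np.array([27, 31, 38, 45, 48])

fit = minimize(neg_log_likelihood, x0=[0.4, 5.0, 0.02],
               args=(x, n_correct, n_trials),
               bounds=[(0.01, 2.0), (0.1, 50.0), (0.0, 0.06)])  # lapse constrained
alpha_hat, beta_hat, lapse_hat = fit.x
```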

PDF [BibTex]


Unsupervised Segmentation and Classification of Mixtures of Markovian Sources

Seldin, Y., Bejerano, G., Tishby, N.

In The 33rd Symposium on the Interface of Computing Science and Statistics (Interface 2001 - Frontiers in Data Mining and Bioinformatics), pages: 1-15, 2001 (inproceedings)

Abstract
We describe a novel algorithm for unsupervised segmentation of sequences into alternating Variable Memory Markov sources, first presented in [SBT01]. The algorithm is based on competitive learning between Markov models, when implemented as Prediction Suffix Trees [RST96] using the MDL principle. By applying a model clustering procedure, based on rate distortion theory combined with deterministic annealing, we obtain a hierarchical segmentation of sequences between alternating Markov sources. The method is applied successfully to unsupervised segmentation of multilingual texts into languages, where it is able to infer correctly both the number of languages and the language switching points. When applied to protein sequence families (results of the [BSMT01] work), we demonstrate the method's ability to identify biologically meaningful sub-sequences within the proteins, which correspond to signatures of important functional sub-units called domains. Our approach to protein classification (through the obtained signatures) is shown to have both conceptual and practical advantages over the currently used methods.
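To give a rough flavour of the competitive-learning step (an illustrative sketch of my own under strong simplifications: plain first-order Markov models instead of Prediction Suffix Trees, fixed windows, and no deterministic annealing or MDL pruning; the function names are hypothetical), one round of competitive segmentation assigns each window to the model that predicts it best and then re-estimates each model from its own windows.

```python
# Illustrative sketch under strong simplifications (plain first-order Markov
# models instead of Prediction Suffix Trees, fixed windows, no deterministic
# annealing or MDL pruning): one round of competitive segmentation.
import numpy as np

def train_markov(segments, alphabet):
    # transition probabilities with add-one smoothing
    counts = {a: {b: 1.0 for b in alphabet} for a in alphabet}
    for seg in segments:
        for a, b in zip(seg, seg[1:]):
            counts[a][b] += 1.0
    return {a: {b: counts[a][b] / sum(counts[a].values()) for b in alphabet}
            for a in alphabet}

def log_likelihood(seg, model):
    return sum(np.log(model[a][b]) for a, b in zip(seg, seg[1:]))

def competitive_step(sequence, models, window=20):
    # assign each window to the model that predicts it best ...
    windows = [sequence[i:i + window] for i in range(0, len(sequence), window)]
    labels = [max(range(len(models)), key=lambda k: log_likelihood(w, models[k]))
              for w in windows]
    # ... then re-estimate every model from the windows assigned to it
    alphabet = sorted(set(sequence))
    new_models = [train_markov([w for w, l in zip(windows, labels) if l == k],
                               alphabet)
                  for k in range(len(models))]
    return labels, new_models

# toy usage: a "bilingual" string; initialise one model per half, then iterate
seq = "aaabab" * 30 + "xyzzyx" * 30
alphabet = sorted(set(seq))
models = [train_markov([seq[:180]], alphabet), train_markov([seq[180:]], alphabet)]
labels, models = competitive_step(seq, models)
print(labels)  # windows from the first half should share one label
```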

PDF Web [BibTex]


Markovian domain fingerprinting: statistical segmentation of protein sequences

Bejerano, G., Seldin, Y., Margalit, H., Tishby, N.

Bioinformatics, 17(10):927-934, 2001 (article)

PDF Web [BibTex]


Unsupervised Sequence Segmentation by a Mixture of Switching Variable Memory Markov Sources

Seldin, Y., Bejerano, G., Tishby, N.

In Proceedings of the 18th International Conference on Machine Learning (ICML 2001), pages: 513-520, 2001 (inproceedings)

Abstract
We present a novel information theoretic algorithm for unsupervised segmentation of sequences into alternating Variable Memory Markov sources. The algorithm is based on competitive learning between Markov models, when implemented as Prediction Suffix Trees (Ron et al., 1996) using the MDL principle. By applying a model clustering procedure, based on rate distortion theory combined with deterministic annealing, we obtain a hierarchical segmentation of sequences between alternating Markov sources. The algorithm seems to be self-regulated and automatically avoids over-segmentation. The method is applied successfully to unsupervised segmentation of multilingual texts into languages, where it is able to infer correctly both the number of languages and the language switching points. When applied to protein sequence families, we demonstrate the method's ability to identify biologically meaningful sub-sequences within the proteins, which correspond to important functional sub-units called domains.

PDF [BibTex]


Inference Principles and Model Selection

Buhmann, J., Schölkopf, B.

(01301), Dagstuhl Seminar, 2001 (techreport)

Web [BibTex]

2000


Advances in Large Margin Classifiers

Smola, A., Bartlett, P., Schölkopf, B., Schuurmans, D.

pages: 422, Neural Information Processing, MIT Press, Cambridge, MA, USA, October 2000 (book)

Abstract
The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification (that is, a scale parameter) rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba.
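For readers unfamiliar with the notion, the standard textbook definition of the margin of a linear classifier (added here for illustration, not quoted from the book) is the signed distance of a labelled example to the decision boundary, with the classifier's margin being the minimum over the training set:

```latex
% Standard definition, added for illustration: geometric margin of an example
% (x_i, y_i) under the linear decision function f(x) = <w, x> + b, and the
% margin of the classifier as the minimum over the training set.
\[
  \gamma_i = \frac{y_i\,\bigl(\langle w, x_i\rangle + b\bigr)}{\lVert w \rVert},
  \qquad
  \gamma = \min_i \gamma_i .
\]
```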

Web [BibTex]


An Introduction to Kernel-Based Learning Algorithms

Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.

In Handbook of Neural Network Signal Processing, 4, (Editors: Yu Hen Hu and Jenq-Neng Hwang), CRC Press, 2000 (inbook)

[BibTex]


Choosing nu in support vector regression with different noise models — theory and experiments

Chalimourda, A., Schölkopf, B., Smola, A.

In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, 2000 (inproceedings)

[BibTex]