

2007


Overcoming the Dipolar Disorder in Dense CoFe Nanoparticle Ensembles: Superferromagnetism

Bedanta, S., Eimüller, T., Kleemann, W., Rhensius, J., Stromberg, F., Amaladass, E., Cardoso, S., Freitas, P. P.

Physical Review Letters, 98, 2007 (article)


DOI [BibTex]



Ultrafast nanomagnetic toggle switching of vortex cores

Hertel, R., Gliga, S., Fähnle, M., Schneider, C. M.

Physical Review Letters, 98, 2007 (article)


[BibTex]



Element-specific spin and orbital momentum dynamics of Fe/Gd multilayers

Bartelt, A. F., Comin, A., Feng, J., Nasiatka, J. R., Eimüller, T., Ludescher, B., Schütz, G., Padmore, H. A., Young, A. T., Scholl, A.

Applied Physics Letters, 90, 2007 (article)


DOI [BibTex]



Slow relaxation of spin reorientation following ultrafast optical excitation

Eimüller, T., Scholl, A., Ludescher, B., Schütz, G., Thiele, J.

Applied Physics Letters, 91, 2007 (article)


[BibTex]



One-pot synthesis of core-shell FeRh nanoparticles

Ciuculescu, D., Amiens, C., Respaud, M., Falqui, A., Lecante, P., Benfield, R. E., Jiang, L., Fauth, K., Chaudret, B.

Chemistry of Materials, 19(19):4624-4626, 2007 (article)


[BibTex]



Spin-polarized quasiparticles injection effects in the normal state of YBCO thin films

Soltan, S., Albrecht, J., Habermeier, H.-U.

Physica C, 460-462, pages: 1088-1089, 2007 (article)


DOI [BibTex]



Direct observation of the vortex core magnetization and its dynamics

Chou, K. W., Puzic, A., Stoll, H., Dolgos, D., Schütz, G., Van Waeyenberge, B., Vansteenkiste, A., Tyliszczak, T., Woltersdorf, G., Back, C. H.

Applied Physics Letters, 90, 2007 (article)


[BibTex]



Superparamagnetism in small Fe clusters on Cu(111)

Ballentine, G., Heßler, M., Kinza, M., Fauth, K.

The European Physical Journal D, 45, pages: 535-537, 2007 (article)


DOI [BibTex]


1997


Comparing support vector machines with Gaussian kernels to radial basis function classifiers

Schölkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V.

IEEE Transactions on Signal Processing, 45(11):2758-2765, November 1997 (article)

Abstract
The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering, and the weights are computed using error backpropagation. We consider three machines, namely, a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the United States postal service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically well-founded but also superior in a practical application.
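The classical baseline described in the abstract — RBF centers chosen by k-means, with trained output weights — can be sketched in a few lines. This is a minimal illustration, not the paper's setup: the function names are mine, and the output weights are fit by least squares rather than error backpropagation to keep the sketch short.

```python
import numpy as np

def kmeans_centers(X, k, iters=20, seed=0):
    # Choose RBF centers by plain k-means (the "classical" approach the
    # paper contrasts with center selection via support vectors).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_features(X, centers, gamma):
    # Gaussian basis functions exp(-gamma * ||x - c||^2), one per center.
    d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbf_classifier(X, y, k=4, gamma=0.5):
    centers = kmeans_centers(X, k)
    Phi = rbf_features(X, centers, gamma)
    # Least-squares output weights (the paper trains them with backprop).
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def rbf_predict(X, centers, w, gamma=0.5):
    return np.sign(rbf_features(X, centers, gamma) @ w)
```

On linearly well-separated toy data this tiny network classifies perfectly; the paper's point is how the competing center-selection strategies compare on hard data such as USPS digits.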


Web DOI [BibTex]



ATM-dependent telomere loss in aging human diploid fibroblasts and DNA damage lead to the post-translational activation of p53 protein involving poly(ADP-ribose) polymerase.

Vaziri, H., West, M. D., Allsopp, R. C., Davison, T. S., Wu, Y. S., Arrowsmith, C. H., Poirier, G. G., Benchimol, S.

The EMBO Journal, 16(19):6018-6033, 1997 (article)


Web [BibTex]



Recognizing facial expressions in image sequences using local parameterized models of image motion

Black, M. J., Yacoob, Y.

International Journal of Computer Vision, 25(1):23-48, 1997 (article)

Abstract
This paper explores the use of local parametrized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model non-rigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions and we show how expressions such as anger, happiness, surprise, fear, disgust, and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performed with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences.
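The affine flow model at the heart of this approach can be estimated with a single linear least-squares solve of the brightness-constancy constraint. The sketch below is a single-region, single-scale illustration under that assumption; the paper itself uses robust estimation over local face regions, and the function name is mine.

```python
import numpy as np

def estimate_affine_flow(img1, img2):
    # Fit the affine motion model u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y
    # to the brightness-constancy constraint Ix*u + Iy*v + It = 0,
    # pooled over the whole image region.
    Iy, Ix = np.gradient(img1)           # np.gradient returns d/drow, d/dcol
    It = img2 - img1
    h, w = img1.shape
    Y, X = np.mgrid[0:h, 0:w].astype(float)
    ones = np.ones_like(X)
    # Each row of A multiplies the six affine parameters (a0..a5).
    A = np.stack([Ix * ones, Ix * X, Ix * Y,
                  Iy * ones, Iy * X, Iy * Y], axis=-1).reshape(-1, 6)
    b = -It.ravel()
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                        # (a0, a1, a2, a3, a4, a5)
```

For a pure horizontal translation only a0 is (approximately) nonzero; divergence, curl, and deformation of the region show up in the remaining parameters, which is what makes the parameters interpretable as facial-feature motions.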

ps

pdf pdf from publisher abstract video [BibTex]


no image
Locally weighted learning

Atkeson, C. G., Moore, A. W., Schaal, S.

Artificial Intelligence Review, 11(1-5):11-73, 1997 (article)

Abstract
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control. Keywords: locally weighted regression, LOESS, LWR, lazy learning, memory-based learning, least commitment learning, distance functions, smoothing parameters, weighting functions, global tuning, local tuning, interference.
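The core computation the survey builds on — a weighted linear fit around each query point — is compact enough to sketch. This is one instance of the many variants discussed (Gaussian weighting function, affine local model); the function name and defaults are mine.

```python
import numpy as np

def lwr_predict(query, X, y, bandwidth=0.2):
    # Locally weighted linear regression: weight each training point by a
    # Gaussian kernel of its distance to the query, then fit an affine
    # model by weighted least squares and evaluate it at the query.
    d2 = ((X - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # weighting function
    A = np.hstack([X, np.ones((len(X), 1))])        # local affine model
    sw = np.sqrt(w)[:, None]                        # sqrt-weights for lstsq
    theta, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return float(np.append(query, 1.0) @ theta)
```

Because a fresh model is fit per query, this is "lazy": all the training data is kept, and the bandwidth (smoothing parameter) controls the locality of each fit — exactly the trade-offs the survey analyzes.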


link (url) [BibTex]



Locally weighted learning for control

Atkeson, C. G., Moore, A. W., Schaal, S.

Artificial Intelligence Review, 11(1-5):75-113, 1997 (article)

Abstract
Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control. Keywords: locally weighted regression, LOESS, LWR, lazy learning, memory-based learning, least commitment learning, forward models, inverse models, linear quadratic regulation (LQR), shifting setpoint algorithm, dynamic programming.


link (url) [BibTex]


1995


View-Based Cognitive Mapping and Path Planning

Schölkopf, B., Mallot, H.

Adaptive Behavior, 3(3):311-348, January 1995 (article)

Abstract
This article presents a scheme for learning a cognitive map of a maze from a sequence of views and movement decisions. The scheme is based on an intermediate representation called the view graph, whose nodes correspond to the views whereas the labeled edges represent the movements leading from one view to another. By means of a graph theoretical reconstruction method, the view graph is shown to carry complete information on the topological and directional structure of the maze. Path planning can be carried out directly in the view graph without actually performing this reconstruction. A neural network is presented that learns the view graph during a random exploration of the maze. It is based on an unsupervised competitive learning rule translating temporal sequence (rather than similarity) of views into connectedness in the network. The network uses its knowledge of the topological and directional structure of the maze to generate expectations about which views are likely to be encountered next, improving the view-recognition performance. Numerical simulations illustrate the network's ability for path planning and the recognition of views degraded by random noise. The results are compared to findings of behavioral neuroscience.
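Since the view graph is an ordinary labeled directed graph, path planning in it reduces to graph search. Below is a minimal sketch under that reading: the paper learns the graph with an unsupervised competitive network during exploration, whereas here the edges are filled in by hand and planning is plain breadth-first search; the class and method names are mine.

```python
from collections import defaultdict, deque

class ViewGraph:
    # Nodes are views; a labeled edge records the movement that led
    # from one view to the next during exploration.
    def __init__(self):
        self.edges = defaultdict(dict)   # view -> {movement: next_view}

    def observe(self, view, movement, next_view):
        self.edges[view][movement] = next_view

    def plan(self, start, goal):
        # Breadth-first search directly in the view graph yields a
        # shortest movement sequence, without reconstructing the maze.
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            view, path = queue.popleft()
            if view == goal:
                return path
            for move, nxt in self.edges[view].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [move]))
        return None
```

The returned path is a sequence of movement labels, matching the article's point that planning can operate on the view graph directly rather than on a reconstructed metric map.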


Web DOI [BibTex]



Suppression and creation of chaos in a periodically forced Lorenz system.

Franz, M. O., Zhang, M. H.

Physical Review, E 52, pages: 3558-3565, 1995 (article)

Abstract
Periodic forcing is introduced into the Lorenz model to study the effects of time-dependent forcing on the behavior of the system. Such a nonautonomous system stays dissipative and has a bounded attracting set which all trajectories finally enter. The possible kinds of attracting sets are restricted to periodic orbits and strange attractors. A large-scale survey of parameter space shows that periodic forcing has mainly three effects in the Lorenz system depending on the forcing frequency: (i) Fixed points are replaced by oscillations around them; (ii) resonant periodic orbits are created both in the stable and the chaotic region; (iii) chaos is created in the stable region near the resonance frequency and in periodic windows. A comparison to other studies shows that part of this behavior has been observed in simulations of higher truncations and real world experiments. Since very small modulations can already have a considerable effect, this suggests that periodic processes such as annual or diurnal cycles should not be omitted even in simple climate models.
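The qualitative setup takes only a few lines to reproduce numerically. The sketch below assumes a sinusoidal modulation of the Rayleigh-like parameter r and a standard fourth-order Runge–Kutta step; the paper's exact forcing term, parameter values, and integrator may differ, and the function name is mine.

```python
import numpy as np

def forced_lorenz_step(state, t, dt, sigma=10.0, r=28.0, b=8.0 / 3.0,
                       amp=0.1, omega=8.0):
    # One RK4 step of the Lorenz system with a sinusoidal modulation of
    # the driving parameter r (an illustrative forcing choice).
    def f(s, tt):
        x, y, z = s
        r_t = r * (1.0 + amp * np.sin(omega * tt))
        return np.array([sigma * (y - x),
                         r_t * x - y - x * z,
                         x * y - b * z])
    k1 = f(state, t)
    k2 = f(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(state + dt * k3, t + dt)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Iterating this step from any initial condition, trajectories enter and remain in a bounded region, consistent with the dissipative bounded attracting set the abstract describes; scanning amp and omega is how one would reproduce the resonance survey.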


[BibTex]



Memory-based neural networks for robot learning

Atkeson, C. G., Schaal, S.

Neurocomputing, 9, pages: 1-27, 1995 (article)

Abstract
This paper explores a memory-based approach to robot learning, using memory-based neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network designs to explicitly store training data and do nearest neighbor lookup in the early 1960s. In this paper their nearest neighbor network is augmented with a local model network, which fits a local model to a set of nearest neighbors. This network design is equivalent to a statistical approach known as locally weighted regression, in which a local model is formed to answer each query, using a weighted regression in which nearby points (similar experiences) are weighted more than distant points (less relevant experiences). We illustrate this approach by describing how it has been used to enable a robot to learn a difficult juggling task. Keywords: memory-based, robot learning, locally weighted regression, nearest neighbor, local models.


link (url) [BibTex]
