How do we see? We seek mathematical and computational models that formalize the principles of perception. Can we make computers that see? We combine insights from neuroscience with statistical models, machine learning, and computer graphics to derive new computer vision algorithms that may, one day, enable computers to understand the visual world of surfaces, materials, light, and movement.
Led by Prof. Katherine J. Kuchenbecker, the newly established "Haptic Intelligence" department focuses on incorporating the sense of touch into robotic systems. Scientists in this group seek to endow robots with astute haptic perception and invent methods for delivering realistic haptic feedback to users of telerobotic and virtual reality systems.
We are interested in understanding how autonomous movement systems can bootstrap themselves into competent behavior by starting from a relatively simple set of algorithms and pre-structuring, and then learning through interaction with the environment.
A brain-machine interface allows humans to interact with their environment without the aid of muscle power. For example, paralyzed patients can spell out words and form sentences using only their thoughts. However, this requires computationally costly analysis of the complex, multidimensional signals the brain generates to control this process.
Research topics include the preparation and analysis of modern magnetic materials, thin films, and nanostructures; the dynamics of magnetism; the development and application of microscopic, spectroscopic, and time-resolved techniques; magneto-optics of high-temperature superconductors; electron-theoretical methods for describing the relevant (magnetic) phenomena; and the correlation of magnetic, structural, electronic, and chemical properties.