

2020


Label Efficient Visual Abstractions for Autonomous Driving

Behl, A., Chitta, K., Prakash, A., Ohn-Bar, E., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label-efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
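
To make the behavior-cloning setup concrete, the following is a minimal sketch (not the authors' code) of a policy that consumes a coarse K-class segmentation mask instead of raw RGB and is trained to imitate recorded expert controls; the layer sizes, class count and action parameterization are illustrative assumptions.

    import torch
    import torch.nn as nn

    class AbstractionPolicy(nn.Module):
        """Behavior-cloning policy driven by a K-class semantic mask (the
        'visual abstraction') rather than raw RGB. Hypothetical sizes."""
        def __init__(self, num_classes=6, num_actions=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(num_classes, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(128, num_actions)  # e.g. steering, throttle

        def forward(self, seg_onehot):
            return self.head(self.encoder(seg_onehot))

    # one imitation-learning step on recorded expert controls
    policy = AbstractionPolicy()
    optim = torch.optim.Adam(policy.parameters(), lr=1e-4)
    seg = torch.rand(8, 6, 128, 256)      # batch of one-hot-style masks
    expert_action = torch.rand(8, 2)      # expert steering/throttle targets
    loss = nn.functional.l1_loss(policy(seg), expert_action)
    optim.zero_grad(); loss.backward(); optim.step()

The point of the study is how cheaply the mask itself can be obtained (fewer classes, fewer labeled images, boxes instead of masks) while keeping such a policy's driving performance.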

avg

pdf slides video Project Page [BibTex]



Convolutional Occupancy Networks

Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
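
As a rough illustration of the idea (a sketch under simplifying assumptions, not the released implementation), the snippet below scatters point features onto a single ground-plane grid, processes the grid with a small CNN, and decodes occupancy for arbitrary query points by bilinearly interpolating local features; the actual model uses multiple feature planes or 3D feature volumes and a U-Net encoder.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConvOccSketch(nn.Module):
        """Toy convolutional-occupancy-style model. Grid resolution and
        layer widths are illustrative assumptions."""
        def __init__(self, feat=32, res=64):
            super().__init__()
            self.res = res
            self.point_mlp = nn.Linear(3, feat)
            self.cnn = nn.Sequential(                 # stand-in for the U-Net
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, feat, 3, padding=1),
            )
            self.decoder = nn.Sequential(             # occupancy decoder f(x, c(x))
                nn.Linear(feat + 3, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, points, queries):
            # points, queries assumed to lie in [-0.5, 0.5]^3; project to the xz plane
            B = points.shape[0]
            feats = self.point_mlp(points)                        # (B, N, F)
            idx = ((points[..., [0, 2]] + 0.5) * (self.res - 1)).long()
            flat = idx[..., 1] * self.res + idx[..., 0]           # flattened cell index
            plane = points.new_zeros(B, feats.shape[-1], self.res * self.res)
            plane.scatter_add_(2, flat.unsqueeze(1).expand(-1, feats.shape[-1], -1),
                               feats.transpose(1, 2))
            plane = self.cnn(plane.view(B, -1, self.res, self.res))
            # bilinear lookup of local features at the query locations
            uv = (queries[..., [0, 2]] * 2.0).view(B, -1, 1, 2)   # grid_sample expects [-1, 1]
            c = F.grid_sample(plane, uv, align_corners=True).squeeze(-1).transpose(1, 2)
            return self.decoder(torch.cat([queries, c], dim=-1)).squeeze(-1)  # logits

The convolutional encoder gives the decoder translation-equivariant, local context, which is what lets the representation scale beyond single objects.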

avg

pdf suppmat video Project Page [BibTex]



Category Level Object Pose Estimation via Neural Analysis-by-Synthesis

Chen, X., Dong, Z., Song, J., Geiger, A., Hilliges, O.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Many object pose estimation algorithms rely on the analysis-by-synthesis framework, which requires explicit representations of individual object instances. In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module that is capable of implicitly representing the appearance, shape and pose of entire object categories, thus making explicit CAD models per object instance unnecessary. The image synthesis network is designed to efficiently span the pose configuration space so that model capacity can be used to capture the shape and local appearance (i.e., texture) variations jointly. At inference time the synthesized images are compared to the target via an appearance-based loss and the error signal is backpropagated through the network to the input parameters. Keeping the network parameters fixed, this allows for iterative optimization of the object pose, shape and appearance in a joint manner, and we experimentally show that the method can recover the orientation of objects with high accuracy from 2D images alone. When provided with depth measurements to overcome scale ambiguities, the method can also recover the full 6DOF pose accurately.
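
The optimization loop at the heart of such an analysis-by-synthesis scheme can be sketched as follows (illustrative only: g stands in for a pre-trained differentiable synthesis network, and a simple MSE replaces the paper's appearance loss).

    import torch

    def analysis_by_synthesis(g, target_img, pose0, shape0, app0, steps=200, lr=1e-2):
        """Refine pose/shape/appearance by backpropagating an image loss
        through the frozen generator g; only the inputs are optimized."""
        pose = pose0.clone().requires_grad_(True)
        shape = shape0.clone().requires_grad_(True)
        app = app0.clone().requires_grad_(True)
        opt = torch.optim.Adam([pose, shape, app], lr=lr)
        for _ in range(steps):
            rendered = g(pose, shape, app)            # network weights stay fixed
            loss = torch.nn.functional.mse_loss(rendered, target_img)
            opt.zero_grad()
            loss.backward()                           # gradients flow to the inputs only
            opt.step()
        return pose.detach(), shape.detach(), app.detach()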

avg

Project Page pdf suppmat [BibTex]



Where Does It End? - Reasoning About Hidden Surfaces by Object Intersection Constraints

Strecke, M., Stückler, J.

In Proceedings IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE/CVF International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, June 2020 (inproceedings)

ev

preprint project page Code DOI [BibTex]



Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., Van Gool, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focus on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher-level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

avg

pdf suppmat Video 2 Project Page Slides Poster Video 1 [BibTex]



GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

Schwarz, K., Liao, Y., Niemeyer, M., Geiger, A.

In Advances in Neural Information Processing Systems (NeurIPS), 2020 (inproceedings)

Abstract
While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
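
A minimal sketch of the two main ingredients, under simplifying assumptions (no positional encoding, a single ray, fixed sample count; not the released code): a radiance-field MLP conditioned on latent shape and appearance codes, and standard volume rendering of its outputs. The multi-scale patch discriminator that supervises rendered patches is omitted here.

    import torch
    import torch.nn as nn

    class CondRadianceField(nn.Module):
        """GRAF-style generator backbone: an MLP mapping a 3D point, a viewing
        direction and 1-D latent shape/appearance codes to density and color.
        Widths are illustrative; positional encoding is omitted."""
        def __init__(self, z_dim=128, hidden=128):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(3 + z_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.sigma = nn.Linear(hidden, 1)
            self.color = nn.Sequential(
                nn.Linear(hidden + 3 + z_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Sigmoid(),
            )

        def forward(self, x, viewdir, z_shape, z_app):
            n = x.shape[0]
            h = self.trunk(torch.cat([x, z_shape.expand(n, -1)], dim=-1))
            density = torch.relu(self.sigma(h))
            rgb = self.color(torch.cat([h, viewdir, z_app.expand(n, -1)], dim=-1))
            return rgb, density

    def render_ray(field, origin, direction, z_s, z_a, n_samples=32, near=0.5, far=2.0):
        """Plain volume rendering of one ray (alpha compositing of samples)."""
        t = torch.linspace(near, far, n_samples)
        pts = origin + t[:, None] * direction             # (n_samples, 3)
        dirs = direction.expand(n_samples, 3)
        rgb, density = field(pts, dirs, z_s, z_a)
        delta = (far - near) / n_samples
        alpha = 1.0 - torch.exp(-density.squeeze(-1) * delta)
        trans = torch.cumprod(torch.cat([alpha.new_ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
        weights = alpha * trans                           # contribution of each sample
        return (weights[:, None] * rgb).sum(dim=0)        # composited pixel color

During training, many such rays form an image patch that a patch-based discriminator compares against patches cropped from real, unposed photographs.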

avg

pdf suppmat video Project Page [BibTex]



Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent with respect to changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video 2 Project Page Video 1 Slides Poster [BibTex]



Learning to Identify Physical Parameters from Video Using Differentiable Physics

Kandukuri, R., Achterhold, J., Moeller, M., Stueckler, J.

Accepted for publication at the 42nd German Conference on Pattern Recognition (GCPR), 2020, GCPR 2020 Honorable Mention (conference)

ev

link (url) [BibTex]



Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate how the majority of them have poor generalization performance, and develop a novel approach with empirically better generalization performance compared to existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves driving success rate by 16% over state-of-the-art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude without the use of any additional modality, auxiliary tasks, architectural modifications or reward from the environment.
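
The aggregation loop can be summarized schematically as below (a loose sketch with made-up interfaces for env, policy and expert; the paper's actual criticality measure and uncertainty-based replay prioritization differ).

    import random

    def aggregate_and_train(policy, expert, env, n_iters=5, episodes_per_iter=10,
                            keep_fraction=0.25):
        """DAgger-style loop illustrating two ideas from the paper very loosely:
        (1) keep only 'critical' on-policy states (here: largest policy/expert
        disagreement), (2) retrain from a growing replay buffer of those states."""
        replay = []
        for _ in range(n_iters):
            candidates = []
            for _ in range(episodes_per_iter):
                obs = env.reset()
                done = False
                while not done:
                    a_policy = policy.act(obs)
                    a_expert = expert.act(obs)           # query expert on-policy
                    disagreement = abs(a_policy - a_expert)
                    candidates.append((disagreement, obs, a_expert))
                    obs, done = env.step(a_policy)       # roll out the learner
            # (1) keep only the most critical states of this iteration
            candidates.sort(key=lambda c: c[0], reverse=True)
            replay.extend(candidates[: int(len(candidates) * keep_fraction)])
            # (2) behavior cloning on a sample from the replay buffer
            batch = random.sample(replay, min(len(replay), 1024))
            policy.fit([(o, a) for _, o, a in batch])
        return policy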

avg

pdf suppmat Video 2 Project Page Slides Video 1 [BibTex]



Planning from Images with Deep Latent Gaussian Process Dynamics

Bosch, N., Achterhold, J., Leal-Taixe, L., Stückler, J.

Proceedings of the 2nd Conference on Learning for Dynamics and Control (L4DC), 120, pages: 640-650, Proceedings of Machine Learning Research (PMLR), (Editors: Alexandre M. Bayen and Ali Jadbabaie and George Pappas and Pablo A. Parrilo and Benjamin Recht and Claire Tomlin and Melanie Zeilinger), 2020, arXiv:2005.03770 (conference)

ev

preprint Project page Code poster [BibTex]



Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.
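
The mixture model can be sketched as a gated combination of sub-policies (the number of experts, layer sizes and the upstream feature extractor are placeholder assumptions; the paper's second stage additionally refines the mixture directly with the task reward).

    import torch
    import torch.nn as nn

    class MixtureDrivingPolicy(nn.Module):
        """Mixture-of-policies head: a context gate predicts weights over K
        sub-policies, and the output command is their weighted combination."""
        def __init__(self, feat_dim=256, num_experts=3, num_actions=2):
            super().__init__()
            self.experts = nn.ModuleList(
                [nn.Linear(feat_dim, num_actions) for _ in range(num_experts)])
            self.gate = nn.Sequential(nn.Linear(feat_dim, num_experts),
                                      nn.Softmax(dim=-1))

        def forward(self, features):
            w = self.gate(features)                                        # (B, K)
            actions = torch.stack([e(features) for e in self.experts], 1)  # (B, K, A)
            return (w.unsqueeze(-1) * actions).sum(dim=1)                  # (B, A)

Stage one trains this head by behavior cloning on demonstrations; stage two tunes it against the navigation reward without any privileged perception labels.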

avg

pdf suppmat Video 2 Project Page Video 1 Slides [BibTex]



On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.
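
The "single objective, off-the-shelf gradient solver" formulation can be illustrated roughly as follows (a schematic sketch: render stands in for a differentiable point-light image-formation model, and the differentiable material-clustering term of the paper is omitted).

    import torch

    def joint_refine(render, observations, pose, geometry, brdf, steps=500, lr=5e-3):
        """Jointly optimize camera poses, geometry and spatially-varying BRDF
        parameters by minimizing a photometric error over all RGB-D frames."""
        params = [p.requires_grad_(True) for p in (pose, geometry, brdf)]
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(steps):
            loss = sum(torch.nn.functional.l1_loss(render(pose, geometry, brdf, i), obs)
                       for i, obs in enumerate(observations))
            opt.zero_grad(); loss.backward(); opt.step()
        return pose.detach(), geometry.detach(), brdf.detach()

Keeping the poses inside the same objective, rather than fixing them up front, is what the paper identifies as crucial for recovering fine geometric detail.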

avg

pdf Project Page Slides Video Poster [BibTex]



Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition

Alhaija, H., Mustikovela, S., Jampani, V., Thies, J., Niessner, M., Geiger, A., Rother, C.

In International Conference on 3D Vision (3DV), 2020 (inproceedings)

Abstract
Neural rendering techniques promise efficient photo-realistic image synthesis while providing rich control over scene parameters by learning the physical image formation process. While several supervised methods have been proposed for this task, acquiring a dataset of images with accurately aligned 3D models is very difficult. The main contribution of this work is to lift this restriction by training a neural rendering algorithm from unpaired data. We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties. In contrast to a traditional graphics pipeline, our approach does not require specifying all scene properties, such as material parameters and lighting, by hand. Instead, we learn photo-realistic deferred rendering from a small set of 3D models and a larger set of unaligned real images, both of which are easy to acquire in practice. Simultaneously, we obtain accurate intrinsic decompositions of real images while not requiring paired ground truth. Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.

avg

pdf suppmat [BibTex]



Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
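
The key implicit-differentiation step can be restated compactly (notation simplified from the paper): along a ray r(d) = r_0 + d w, the predicted surface point \hat{p} = r(\hat{d}) satisfies the level-set condition f_\theta(\hat{p}) = \tau, and differentiating this condition with respect to the network parameters \theta gives

    \frac{\partial f_\theta(\hat{p})}{\partial \theta}
      + \nabla_p f_\theta(\hat{p}) \cdot w \,\frac{\partial \hat{d}}{\partial \theta} = 0
    \;\;\Longrightarrow\;\;
    \frac{\partial \hat{d}}{\partial \theta}
      = -\big(\nabla_p f_\theta(\hat{p}) \cdot w\big)^{-1}
        \frac{\partial f_\theta(\hat{p})}{\partial \theta},
    \qquad
    \frac{\partial \hat{p}}{\partial \theta} = w \,\frac{\partial \hat{d}}{\partial \theta}.

Gradients of an RGB reconstruction loss can therefore be propagated through the predicted surface point analytically, which is what makes training from 2D images alone feasible.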

avg

pdf suppmat Video 2 Project Page Video 1 Video 3 Slides Poster [BibTex]



DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation

Wang, R., Yang, N., Stückler, J., Cremers, D.

In Proceedings of the IEEE international Conference on Robotics and Automation (ICRA), 2020, arXiv:1904.10097 (inproceedings)

ev

[BibTex]



Learning to Adapt Multi-View Stereo by Self-Supervision

Mallick, A., Stückler, J., Lensch, H.

Proceedings of the British Machine Vision Conference (BMVC), 2020, to appear (conference)

ev

link (url) [BibTex]



Learning Implicit Surface Light Fields

Oechsle, M., Niemeyer, M., Reiser, C., Mescheder, L., Strauss, T., Geiger, A.

In International Conference on 3D Vision (3DV), 2020 (inproceedings)

Abstract
Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion and independent of the geometry. Moreover, we condition the surface light field with respect to the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capabilities of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
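
A minimal sketch of such a conditional surface light field (layer widths and the exact conditioning are assumptions; the paper additionally conditions on image-derived shape and appearance encodings):

    import torch
    import torch.nn as nn

    class SurfaceLightField(nn.Module):
        """MLP predicting the outgoing RGB color at a surface point for a given
        viewing direction and a small point light (position + color)."""
        def __init__(self, shape_dim=256, hidden=256):
            super().__init__()
            # inputs: point, view direction, light position, light color, shape code
            in_dim = 3 + 3 + 3 + 3 + shape_dim
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Sigmoid(),
            )

        def forward(self, p, view_dir, light_pos, light_rgb, z_shape):
            x = torch.cat([p, view_dir, light_pos, light_rgb, z_shape], dim=-1)
            return self.net(x)   # predicted appearance, incl. shadows and speculars

Because the light source is an explicit input, the same network can be queried under new illuminations, which is what enables the relighting experiments.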

avg

pdf suppmat Project Page [BibTex]


2016


Patches, Planes and Probabilities: A Non-local Prior for Volumetric 3D Reconstruction

Ulusoy, A. O., Black, M. J., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.

avg ps

YouTube pdf poster suppmat Project Page [BibTex]



Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Xie, J., Kiefel, M., Sun, M., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and only limited labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a probabilistic model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.

avg ps

pdf suppmat Project Page Project Page [BibTex]



Robust calibration marker detection in powder bed images from laser beam melting processes

zur Jacobsmühlen, J., Achterhold, J., Kleszczynski, S., Witt, G., Merhof, D.

In 2016 IEEE International Conference on Industrial Technology (ICIT), pages: 910-915, March 2016 (inproceedings)

ev

DOI [BibTex]



Phase transitions and optimal algorithms in high-dimensional Gaussian mixture clustering

Lesieur, T., De Bacco, C., Banks, J., Krzakala, F., Moore, C., Zdeborová, L.

In Communication, Control, and Computing (Allerton), 2016 54th Annual Allerton Conference on, pages: 601-608, 2016 (inproceedings)

pio

Preprint link (url) [BibTex]



Deep Discrete Flow

Güney, F., Geiger, A.

Asian Conference on Computer Vision (ACCV), 2016 (conference) Accepted

avg ps

pdf suppmat Project Page [BibTex]



Direct Visual-Inertial Odometry with Stereo Cameras

Usenko, V., Engel, J., Stueckler, J., Cremers, D.

In IEEE International Conference on Robotics and Automation (ICRA), 2016 (inproceedings)

ev

[BibTex]



CPA-SLAM: Consistent Plane-Model Alignment for Direct RGB-D SLAM

Ma, L., Kerl, C., Stueckler, J., Cremers, D.

In IEEE International Conference on Robotics and Automation (ICRA), 2016 (inproceedings)

ev

[BibTex]



Unsupervised Learning of Shape-Motion Patterns for Objects in Urban Street Scenes

Klostermann, D., Osep, A., Stueckler, J., Leibe, B.

In British Machine Vision Conference (BMVC), 2016 (inproceedings)

ev

[BibTex]



Scene Flow Propagation for Semantic Mapping and Object Discovery in Dynamic Street Scenes

Kochanov, D., Osep, A., Stueckler, J., Leibe, B.

In IEEE/RSJ Int. Conference on Intelligent Robots and Systems, IROS, 2016 (inproceedings)

ev

[BibTex]



Joint Object Pose Estimation and Shape Reconstruction in Urban Street Scenes Using 3D Shape Priors

Engelmann, F., Stueckler, J., Leibe, B.

In Proc. of the German Conference on Pattern Recognition (GCPR), 2016 (inproceedings)

ev

[BibTex]


2007


Hierarchical reactive control for a team of humanoid soccer robots

Behnke, S., Stueckler, J., Schreiber, M., Schulz, H., Böhnert, M., Meier, K.

In Proc. of the IEEE-RAS Int. Conf. on Humanoid Robots (Humanoids), pages: 622-629, November 2007 (inproceedings)

ev

link (url) DOI [BibTex]


2006


See, walk, and kick: Humanoid robots start to play soccer

Behnke, S., Schreiber, M., Stueckler, J., Renner, R., Strasdat, H.

In Proc. of the IEEE-RAS Int. Conf. on Humanoid Robots (Humanoids), pages: 497-503, December 2006 (inproceedings)

ev

link (url) DOI [BibTex]



Ab-initio calculations: I. Basic principles of the density functional electron theory and combination with phenomenological theories

Fähnle, M.

In Structural defects in ordered alloys and intermetallics. Characterization and modelling, pages: IX-1-IX-10, COST and CNRS, Bonascre [Ariege, France], 2006 (inproceedings)

mms

[BibTex]



Hard magnetic FePt thin films and nanostructures in L1₀ phases

Goll, D., Breitling, A., Goo, N. H., Sigle, W., Hirscher, M., Schütz, G.

In 13, pages: 97-101, Beijing, PR China, 2006 (inproceedings)

mms

[BibTex]



Ab-initio calculations: II. Application to atomic defects, phase diagrams, dislocations

Fähnle, M.

In Structural defects in ordered alloys and intermetallics. Characterization and modelling, pages: XIV-1-XIV-11, COST and CNRS, Bonascre [Ariege, France], 2006 (inproceedings)

mms

[BibTex]



Residual stress analysis in reed pipe brass tongues of historic organs

Manescu, A., Giuliani, A., Fiori, F., Baretzky, B.

In Residual Stresses VII. 7th European Conference on Residual Stresses (ECRS7), pages: 969-974, Trans Tech, Berlin [Germany], 2006 (inproceedings)

mms

[BibTex]



High-pressure influence on the kinetics of grain boundary segregation in the Cu-Bi system

Chang, L.-S., Straumal, B., Rabkin, E., Lojkowski, W., Gust, W.

In 258-260, pages: 390-396, Aveiro (Portugal), 2006 (inproceedings)

mms

[BibTex]


2004


High-speed dynamics of magnetization processes in hard magnetic particles and thin platelets

Goll, D., Kronmüller, H.

In Proceedings of the 18th International Workshop on Rare-Earth Magnets and their Applications, pages: 465-469, Laboratoire de Cristallographie/Laboratoire Louis Neel, CNRS, Grenoble, 2004 (inproceedings)

mms

[BibTex]



Modern nanocrystalline/nanostructured hard magnetic materials

Kronmüller, H., Goll, D.

In 272-276, pages: e319-e320, Rome [Italy], 2004 (inproceedings)

mms

[BibTex]



Modern nanostructured high-temperature permanent magnets

Goll, D., Kronmüller, H., Stadelmaier, H. H.

In Proceedings of the 18th International Workshop on Rare-Earth Magnets and their Applications, pages: 578-583, Laboratoire de Cristallographie/Laboratoire Louis Néel, CNRS, Grenoble, 2004 (inproceedings)

mms

[BibTex]



Imaging sub-ns spin dynamics in magnetic nanostructures with magnetic transmission X-ray microscopy

Fischer, P., Stoll, H., Puzic, A., Van Waeyenberge, B., Raabe, J., Haug, T., Denbeaux, G., Pearson, A., Höllinger, R., Back, C. H., Weiss, D., Schütz, G.

In Synchrotron Radiation Instrumentation, 705, pages: 1291-1294, AIP Conference Proceedings, American Institute of Physics, San Francisco, California (USA), 2004 (inproceedings)

mms

[BibTex]



Existence of transient temperature spike induced by SHI: evidence by ion beam analysis

Avasthi, D. K., Ghosh, S., Srivastava, S. K., Assmann, W.

In 219-220, pages: 206-214, Albuquerque, NM [USA], 2004 (inproceedings)

mms

[BibTex]



Hard magnetic hollow nanospheres

Goll, D., Berkowitz, A. E., Bertram, H. N.

In Proceedings of the 18th International Workshop on Rare-Earth Magnets and their Applications, pages: 704-707, Laboratoire de Cristallographie/Laboratoire Louis Neel, CNRS, Grenoble, 2004 (inproceedings)

mms

[BibTex]


2000


High-performance nanocrystalline PrFeB-based bonded permanent magnets

Goll, D., Kleinschroth, I., Kronmüller, H.

In Proceedings of the 16th International Workshop on Rare-Earth Magnets and Their Applications, pages: 641-650, Japan Institute of Metals, 2000 (inproceedings)

mms

[BibTex]



Experimental and theoretical study of the Verwey transition in magnetite

Brabers, V. A. M., Brabers, J. H. V. J., Walz, F., Kronmüller, H.

In Proceedings 8th International Conference on Ferrites, pages: 123-125, Japan Society of Powder and Powder Metallurgy, 2000 (inproceedings)

mms

[BibTex]



Evolution of microstructure and microchemistry in the high-temperature Sm(Co, Fe, Cu, Zr)z magnets

Zhang, Y. W., Hadjipanayis, G. C., Goll, D., Kronmüller, H., Chen, C., Nelson, C., Krishnan, K.

In Proceedings of the 16th International Workshop on Rare-Earth Magnets and Their Applications, pages: 169-178, Sendai, Japan, 2000 (inproceedings)

mms

[BibTex]



Fundamental investigations and industrial applications of magnetostriction

Hirscher, M., Fischer, S. F., Reininger, T.

In Modern Trends in Magnetostriction Study and Application. Proceedings of the NATO Advanced Study Institute on Modern Trends in Magnetostriction, 5, pages: 307-329, NATO Science Series: II: Mathematics, Physics and Chemistry, Kluwer Academic Publishers, Kyiv, Ukraine, 2000 (inproceedings)

mms

[BibTex]



Micromagnetic and microstructural analysis of the temperature dependence of the coercive field of Sm2(Co, Cu, Fe, Zr)17 permanent magnets

Goll, D., Sigle, W., Hadjipanayis, G. C., Kronmüller, H.

In Proceedings of the 16th International Workshop on Rare-Earth Magnets and Their Applications, pages: 61-70, (Editors: Kaneko, H.; Homma, M.; Okada, M.), 2000 (inproceedings)

mms

[BibTex]
