

2017


The Numerics of GANs

Mescheder, L., Nowozin, S., Geiger, A.

In Advances in Neural Information Processing Systems 30 (NIPS 2017), (Editors: Guyon, I., Luxburg, U. v., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R.), Curran Associates, Inc., December 2017 (inproceedings)

Abstract
In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games, we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) the presence of eigenvalues of the Jacobian of the gradient vector field with zero real part, and ii) eigenvalues with a large imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.
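
The core observation is easy to reproduce on a toy problem. Below is a minimal numpy sketch (ours, not the authors' code) for the scalar bilinear game V(theta, psi) = theta * psi: the Jacobian of the gradient vector field at the equilibrium has purely imaginary eigenvalues, simultaneous gradient descent therefore spirals outward, and a consensus-style regularizer that descends 0.5*|v|^2 alongside v restores convergence.

```python
# Toy bilinear game V(theta, psi) = theta * psi: the generator descends V,
# the discriminator ascends it. (Our sketch, not the authors' code.)
import numpy as np

def v(x):
    """Gradient vector field (-dV/dtheta, +dV/dpsi)."""
    theta, psi = x
    return np.array([-psi, theta])

# Jacobian of v at the equilibrium (0, 0): purely imaginary eigenvalues,
# the first failure mode identified in the paper.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eigvals(J))                 # [0.+1.j  0.-1.j]

# Simultaneous gradient descent multiplies the distance to the equilibrium
# by |1 + i*h| > 1 per step, so it spirals outward for every step size h.
h, x = 0.1, np.array([1.0, 1.0])
for _ in range(200):
    x = x + h * v(x)
print(np.linalg.norm(x))                    # larger than the initial norm

# Regularized field w = v - gamma * grad(0.5 * |v|^2); for this game
# grad(0.5 * |v|^2) = x, which damps the rotation and restores convergence.
gamma, x = 0.5, np.array([1.0, 1.0])
for _ in range(200):
    x = x + h * (v(x) - gamma * x)
print(np.linalg.norm(x))                    # decays toward 0
```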


pdf Project Page [BibTex]

Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Behl, A., Jafari, O. H., Mustikovela, S. K., Alhaija, H. A., Rother, C., Geiger, A.

In Proceedings of the IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, October 2017 (inproceedings)

Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates, through 2D instance segmentations, to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far the strongest in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.


pdf suppmat Poster Project Page [BibTex]

Sparsity Invariant CNNs

Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.

International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
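
In its simplest form, the proposed layer reduces to a convolution over observed pixels followed by normalization with the count of observed pixels, with the mask propagated via max pooling. A single-channel numpy sketch under these assumptions (function and variable names are ours, not the paper's):

```python
# Single-channel sketch of a "sparsity invariant" convolution: convolve the
# observed values, then renormalize by how many inputs were actually
# observed under the kernel.
import numpy as np
from scipy.signal import convolve2d

def sparse_conv(x, mask, kernel, eps=1e-8):
    """x: HxW input, mask: HxW in {0,1} with 1 = observed, kernel: kxk."""
    num = convolve2d(x * mask, kernel, mode="same")
    den = convolve2d(mask, np.ones_like(kernel), mode="same")  # observed count
    out = num / (den + eps)
    # Mask propagation: a location counts as observed if any input under
    # the kernel was observed (max pooling of the mask, as in the paper).
    new_mask = (den > 0).astype(x.dtype)
    return out, new_mask

# Usage: smooth a 95%-sparse depth map with a 3x3 box kernel; the output at
# each pixel is the mean of the observed neighbors only.
rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 10.0, size=(64, 64))
mask = (rng.uniform(size=depth.shape) < 0.05).astype(float)
smoothed, valid = sparse_conv(depth, mask, np.ones((3, 3)))
```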


pdf suppmat Project Page [BibTex]

OctNetFusion: Learning Depth Fusion from Data

Riegler, G., Ulusoy, A. O., Bischof, H., Geiger, A.

International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
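
For reference, the classical baseline that the paper learns to outperform can be sketched in a few lines. This is a toy single-ray version of TSDF averaging (our illustration, not the OctNetFusion network):

```python
# Toy per-ray version of TSDF averaging (the Curless & Levoy style baseline).
import numpy as np

def fuse_observation(tsdf, weight, voxel_z, depth, trunc=0.05):
    """voxel_z: depths of voxel centers along one camera ray (meters);
    depth: observed surface depth for that ray."""
    sdf = depth - voxel_z                    # signed distance to the surface
    d = np.clip(sdf / trunc, -1.0, 1.0)      # truncation to [-1, 1]
    valid = sdf > -trunc                     # skip voxels far behind surface
    tsdf[valid] = (tsdf[valid] * weight[valid] + d[valid]) / (weight[valid] + 1.0)
    weight[valid] += 1.0
    return tsdf, weight

# Usage: two noisy depth observations of a wall at 1.0 m; the fused surface
# is the zero crossing of the averaged TSDF.
voxel_z = np.linspace(0.0, 2.0, 41)
tsdf, weight = np.zeros_like(voxel_z), np.zeros_like(voxel_z)
for observed_depth in (1.02, 0.98):
    tsdf, weight = fuse_observation(tsdf, weight, voxel_z, observed_depth)
```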


pdf Video 1 Video 2 Project Page [BibTex]

Direct Visual Odometry for a Fisheye-Stereo Camera

Liu, P., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, September 2017 (inproceedings)

Abstract
We present a direct visual odometry algorithm for a fisheye-stereo camera. Our algorithm performs simultaneous camera motion estimation and semi-dense reconstruction. The pipeline consists of two threads: a tracking thread and a mapping thread. In the tracking thread, we estimate the camera pose via semi-dense direct image alignment. To have a wider field of view (FoV) which is important for robotic perception, we use fisheye images directly without converting them to conventional pinhole images which come with a limited FoV. To address the epipolar curve problem, plane-sweeping stereo is used for stereo matching and depth initialization. Multiple depth hypotheses are tracked for selected pixels to better capture the uncertainty characteristics of stereo matching. Temporal motion stereo is then used to refine the depth and remove false positive depth hypotheses. Our implementation runs at an average of 20 Hz on a low-end PC. We run experiments in outdoor environments to validate our algorithm, and discuss the experimental results. We experimentally show that we are able to estimate 6D poses with low drift, and at the same time, do semi-dense 3D reconstruction with high accuracy.


pdf Project Page [BibTex]

Augmented Reality Meets Deep Learning for Car Instance Segmentation in Urban Scenes

Alhaija, H. A., Mustikovela, S. K., Mescheder, L., Geiger, A., Rother, C.

In Proceedings of the British Machine Vision Conference (BMVC), September 2017 (inproceedings)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. This allows us to create realistic composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D shapes of the target object category. We demonstrate the utility of the proposed approach for training a state-of-the-art high-capacity deep model for semantic instance segmentation. In particular, we consider the task of segmenting car instances on the KITTI dataset which we have annotated with pixel-accurate ground truth. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on synthetic data or models trained on limited amounts of annotated real data.


pdf Project Page [BibTex]

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings of the 34th International Conference on Machine Learning (ICML), Vol. 70 of Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, August 2017 (inproceedings)

Abstract
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.
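
The resulting training objectives are compact. The sketch below uses our notation: `T_post`, `T_prior`, and `log_px_given_z` stand for discriminator outputs and the decoder log-likelihood, produced by networks that are assumed and not shown. It writes down the two losses: a logistic loss for the discriminator T(x, z), and the negative ELBO with the intractable KL term replaced by T's output.

```python
# The two AVB objectives, written for array inputs. T_post = T(x, z~q(z|x)),
# T_prior = T(x, z~p(z)); the networks producing these values are assumed.
import numpy as np

def log_sigmoid(t):
    # Numerically stable log(sigmoid(t)).
    return -np.logaddexp(0.0, -t)

def discriminator_loss(T_post, T_prior):
    """Logistic loss: T is trained to output high values on posterior
    samples and low values on prior samples."""
    return -(log_sigmoid(T_post) + log_sigmoid(-T_prior)).mean()

def encoder_decoder_loss(T_post, log_px_given_z):
    """Negative ELBO with the KL term replaced by T's output: at T's
    optimum, T(x, z) = log q(z|x) - log p(z)."""
    return (T_post - log_px_given_z).mean()

# Dummy usage with illustrative numbers:
T_post, T_prior = np.array([0.3, -0.1]), np.array([-0.5, 0.2])
print(discriminator_loss(T_post, T_prior),
      encoder_decoder_loss(T_post, np.array([-1.2, -0.8])))
```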


pdf suppmat arxiv-version Project Page [BibTex]

Slow Flow: Exploiting High-Speed Cameras for Accurate and Diverse Optical Flow Reference Data

Janai, J., Güney, F., Wulff, J., Black, M., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 1406-1416, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Existing optical flow datasets are limited in size and variability due to the difficulty of capturing dense ground truth. In this paper, we tackle this problem by tracking pixels through densely sampled space-time volumes recorded with a high-speed video camera. Our model exploits the linearity of small motions and reasons about occlusions from multiple frames. Using our technique, we are able to establish accurate reference flow fields outside the laboratory in natural environments. In addition, we show how our predictions can be used to augment the input images with realistic motion blur. We demonstrate the quality of the produced flow fields on synthetic and real-world datasets. Finally, we collect a novel challenging optical flow dataset by applying our technique on data from a high-speed camera and analyze the performance of the state-of-the-art in optical flow under various levels of motion blur.
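
The key primitive here is composing many small inter-frame flows into one large reference flow. A hedged numpy sketch of that composition step (our simplification of the idea, ignoring the paper's occlusion reasoning):

```python
# Composing per-pixel flows: total(x) = a(x) + b(x + a(x)). Flows are
# arrays of shape (2, H, W) holding (dy, dx).
import numpy as np
from scipy.ndimage import map_coordinates

def compose(flow_a, flow_b):
    _, h, w = flow_a.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Sample flow_b at the points where flow_a lands.
    coords = np.stack([yy + flow_a[0], xx + flow_a[1]])
    b_at_a = np.stack([map_coordinates(flow_b[c], coords, order=1,
                                       mode="nearest") for c in range(2)])
    return flow_a + b_at_a

# Usage: ten 0.2-pixel inter-frame motions compose into one ~2-pixel flow;
# tracking many small, nearly linear motions is the idea the paper builds on.
h, w = 32, 32
small_flows = [np.full((2, h, w), 0.2) for _ in range(10)]
total = small_flows[0]
for f in small_flows[1:]:
    total = compose(total, f)
```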


pdf suppmat Project page Video DOI Project Page [BibTex]

OctNet: Learning Deep 3D Representations at High Resolutions

Riegler, G., Ulusoy, O., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows us to focus memory allocation and computation on the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.
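
The space-partitioning idea can be illustrated with a toy octree builder: cells that are entirely empty (or full) stay leaves and would store a single pooled feature, while mixed cells split. This sketch is ours and omits OctNet's hybrid grid-of-shallow-octrees layout and all network machinery:

```python
# Toy octree builder: subdivide only mixed cells, up to a depth cap.
import numpy as np

def build_octree(occ, origin=(0, 0, 0), size=None, max_depth=3):
    """occ: DxDxD boolean occupancy grid (D a power of two)."""
    size = occ.shape[0] if size is None else size
    o = origin
    block = occ[o[0]:o[0]+size, o[1]:o[1]+size, o[2]:o[2]+size]
    if max_depth == 0 or size == 1 or not block.any() or block.all():
        # A leaf stores one pooled feature instead of size**3 cells.
        return {"origin": o, "size": size, "leaf": True}
    half = size // 2
    children = [build_octree(occ,
                             (o[0]+dx*half, o[1]+dy*half, o[2]+dz*half),
                             half, max_depth - 1)
                for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
    return {"origin": o, "size": size, "leaf": False, "children": children}

# Usage: a mostly empty 16^3 grid; only cells near the occupied blob split,
# so memory concentrates where the data is.
occ = np.zeros((16, 16, 16), dtype=bool)
occ[6:9, 6:9, 6:9] = True
tree = build_octree(occ)
```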


pdf suppmat Video Project Page [BibTex]

A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos

Schöps, T., Schönberger, J. L., Galliani, S., Sattler, T., Schindler, K., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Motivated by the limitations of existing multi-view stereo benchmarks, we present a novel dataset for this task. Towards this goal, we recorded a variety of indoor and outdoor scenes using a high-precision laser scanner and captured both high-resolution DSLR imagery as well as synchronized low-resolution stereo videos with varying fields-of-view. To align the images with the laser scans, we propose a robust technique which minimizes photometric errors conditioned on the geometry. In contrast to previous datasets, our benchmark provides novel challenges and covers a diverse set of viewpoints and scene types, ranging from natural scenes to man-made indoor and outdoor environments. Furthermore, we provide data at significantly higher temporal and spatial resolution. Our benchmark is the first to cover the important use case of hand-held mobile devices while also providing high-resolution DSLR camera images. We make our datasets and an online evaluation server available at http://www.eth3d.net.


pdf suppmat Project Page [BibTex]

Toroidal Constraints for Two Point Localization Under High Outlier Ratios

Camposeco, F., Sattler, T., Cohen, A., Geiger, A., Pollefeys, M.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Localizing a query image against a 3D model at large scale is a hard problem, since 2D-3D matches become more and more ambiguous as the model size increases. This creates a need for pose estimation strategies that can handle very low inlier ratios. In this paper, we draw new insights from the geometric information available in the 2D-3D matching process. As modern descriptors are not invariant against large variations in viewpoint, we are able to find the rays in space used to triangulate a given point that are closest to a query descriptor. It is well known that two correspondences constrain the camera to lie on the surface of a torus. Adding the knowledge of the direction of triangulation, we are able to approximate the position of the camera from two matches alone. We derive a geometric solver that can compute this position in under 1 microsecond. Using this solver, we propose a simple yet powerful outlier filter which scales quadratically in the number of matches. We validate the accuracy of our solver and demonstrate the usefulness of our method in real-world settings.


pdf suppmat Project Page [BibTex]

Semantic Multi-view Stereo: Jointly Estimating Objects and Voxels

Ulusoy, A. O., Black, M. J., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Dense 3D reconstruction from RGB images is a highly ill-posed problem due to occlusions, textureless or reflective surfaces, as well as other challenges. We propose object-level shape priors to address these ambiguities. Towards this goal, we formulate a probabilistic model that integrates multi-view image evidence with 3D shape information from multiple objects. Inference in this model yields a dense 3D reconstruction of the scene as well as the existence and precise 3D pose of the objects in it. Our approach is able to recover fine details not captured in the input shapes while defaulting to the input models in occluded regions where image evidence is weak. Due to its probabilistic nature, the approach is able to cope with the approximate geometry of the 3D models as well as input shapes that are not present in the scene. We evaluate the approach quantitatively on several challenging indoor and outdoor datasets.


YouTube pdf suppmat Project Page [BibTex]

Scalable Pneumatic and Tendon Driven Robotic Joint Inspired by Jumping Spiders

Sproewitz, A., Göttler, C., Sinha, A., Caer, C., Öztekin, M. U., Petersen, K., Sitti, M.

In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), pages: 64-70, IEEE, Piscataway, NJ, USA, May 2017 (inproceedings)


Video link (url) DOI Project Page [BibTex]

Spinal joint compliance and actuation in a simulated bounding quadruped robot

Pouya, S., Khodabakhsh, M., Sproewitz, A., Ijspeert, A.

Autonomous Robots, pages: 437-452, Springer, Dordrecht, February 2017 (article)


link (url) DOI Project Page [BibTex]

Linking Mechanics and Learning

Heim, S., Grimminger, F., Özge, D., Spröwitz, A.

In Proceedings of Dynamic Walking 2017, 2017 (inproceedings)


[BibTex]

Is Growing Good for Learning?

Heim, S., Spröwitz, A.

Proceedings of the 8th International Symposium on Adaptive Motion of Animals and Machines AMAM2017, 2017 (conference)


[BibTex]

Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art

Janai, J., Güney, F., Behl, A., Geiger, A.

arXiv, 2017 (article)

Abstract
Recent years have witnessed amazing progress in AI related fields such as computer vision, machine learning and autonomous vehicles. As with any rapidly growing field, however, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several topic-specific survey papers have been written, to date no general survey on problems, datasets and methods in computer vision for autonomous vehicles exists. This paper attempts to narrow this gap by providing a state-of-the-art survey on this topic. Our survey includes both the historically most relevant literature as well as the current state of the art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding and end-to-end learning. Towards this goal, we first provide a taxonomy to classify each approach and then analyze the performance of the state of the art on several challenging benchmarking datasets, including KITTI, ISPRS, MOT and Cityscapes. Furthermore, we discuss open problems and current research challenges. To ease accessibility and accommodate missing references, we will also provide an interactive platform which allows readers to navigate topics and methods, and provides additional information and project links for each paper.


pdf Project Page [BibTex]


Evaluation of the passive dynamics of compliant legs with inertia

Györfi, B.

University of Applied Sciences Pforzheim, Germany, 2017 (mastersthesis)


[BibTex]

Momentum-Centered Control of Contact Interactions

Righetti, L., Herzog, A.

In Geometric and Numerical Foundations of Movements, Vol. 117 of Springer Tracts in Advanced Robotics, pages: 339-359, Springer, Cham, 2017 (incollection)


link (url) [BibTex]

Pattern Generation for Walking on Slippery Terrains

Khadiv, M., Moosavian, S. A. A., Herzog, A., Righetti, L.

In 2017 5th International Conference on Robotics and Mechatronics (ICROM), Iran, August 2017 (inproceedings)

Abstract
In this paper, we extend state-of-the-art Model Predictive Control (MPC) approaches to generate safe bipedal walking on slippery surfaces. In this setting, we formulate walking as a trade-off between realizing a desired walking velocity and preserving robust foot-ground contact. Exploiting this formulation inside MPC, we show that safe walking on various flat terrains can be achieved by compromising between three main attributes, i.e., walking velocity tracking, Zero Moment Point (ZMP) modulation, and Required Coefficient of Friction (RCoF) regulation. Simulation results show that increasing the walking velocity increases the possibility of slippage, while reducing the slippage possibility conflicts with reducing the tip-over possibility of the contact and vice versa.
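
The three attributes being traded off are cheap to evaluate under the linear inverted pendulum model. A back-of-the-envelope sketch (ours, with illustrative numbers, not the authors' MPC formulation):

```python
# Linear inverted pendulum quantities traded off by the MPC: ZMP location
# and required friction coefficient.
g, h = 9.81, 0.8                  # gravity (m/s^2), CoM height (m)

def zmp(x_com, xdd_com):
    """Zero Moment Point under the LIP model: p = x - (h/g) * x_ddot."""
    return x_com - (h / g) * xdd_com

def required_cof(xdd_com, zdd_com=0.0):
    """Horizontal over vertical ground reaction force magnitude."""
    return abs(xdd_com) / (g + zdd_com)

# Faster walking means larger CoM accelerations: the ZMP moves toward the
# edge of the support foot and the friction demand grows, which is exactly
# the slippage/tip-over conflict described in the abstract.
for xdd in (0.5, 1.5, 3.0):
    print(f"x_ddot={xdd:.1f} m/s^2  ZMP offset={zmp(0.0, xdd):+.3f} m  "
          f"RCoF={required_cof(xdd):.2f}")
```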


link (url) [BibTex]


2008


Pattern generators with sensory feedback for the control of quadruped locomotion

Righetti, L., Ijspeert, A.

In 2008 IEEE International Conference on Robotics and Automation, pages: 819-824, IEEE, Pasadena, USA, 2008 (inproceedings)

Abstract
Central pattern generators (CPGs) are becoming a popular model for the control of locomotion of legged robots. Biological CPGs are neural networks responsible for the generation of rhythmic movements, especially locomotion. In robotics, a systematic way of designing such CPGs as artificial neural networks or systems of coupled oscillators with the inclusion of sensory feedback is still missing. In this contribution, we present a way of designing CPGs with coupled oscillators in which we can independently control the ascending and descending phases of the oscillations (i.e., the swing and stance phases of the limbs). Using insights from dynamical systems theory, we construct generic networks of oscillators able to generate several gaits under simple parameter changes. Then we introduce a systematic way of adding sensory feedback from touch sensors to the CPG such that the controller is strongly coupled with the mechanical system it controls. Finally, we control three different simulated robots (iCub, Aibo and Ghostdog) using the same controller to show the effectiveness of the approach. Our simulations demonstrate the importance of independent control of swing and stance duration. The strong mutual coupling between the CPG and the robot allows for more robust locomotion, even under imprecise parameters and in non-flat environments.
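
The mechanism for independent swing/stance control can be sketched with a single Hopf-style oscillator whose frequency blends between two values depending on the sign of one state variable, which is the construction this line of work uses. Parameters below are illustrative, not taken from the paper:

```python
# Hopf-style oscillator whose frequency blends between a swing and a stance
# value depending on the sign of y.
import numpy as np

alpha, mu = 10.0, 1.0                     # convergence rate, amplitude^2
w_swing, w_stance, b = 2*np.pi*2.0, 2*np.pi*0.5, 50.0

def step(x, y, dt=1e-3):
    r2 = x*x + y*y
    # Smooth switch: w ~ w_stance when y > 0, w ~ w_swing when y < 0.
    w = w_stance / (np.exp(-b*y) + 1.0) + w_swing / (np.exp(b*y) + 1.0)
    dx = alpha * (mu - r2) * x - w * y
    dy = alpha * (mu - r2) * y + w * x
    return x + dt*dx, y + dt*dy

x, y, xs = 1.0, 0.0, []
for _ in range(5000):
    x, y = step(x, y)
    xs.append(x)
# xs rises quickly (swing) and falls slowly (stance); several such units,
# coupled with fixed phase differences, produce different gaits.
```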


link (url) DOI [BibTex]

Experimental Study of Limit Cycle and Chaotic Controllers for the Locomotion of Centipede Robots

Matthey, L., Righetti, L., Ijspeert, A.

In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages: 1860-1865, IEEE, Nice, France, September 2008 (inproceedings)

Abstract
In this contribution we present a CPG (central pattern generator) controller based on coupled Rossler systems. It is able to generate both limit cycle and chaotic behaviors through bifurcation. We develop an experimental test bench to measure quantitatively the performance of different controllers on unknown terrains of increasing difficulty. First, we show that for flat terrains, open-loop limit cycle systems are the most efficient (in terms of speed of locomotion) but that they are quite sensitive to environmental changes. Second, we show that sensory feedback is a crucial addition for unknown terrains. Third, we show that the chaotic controller with sensory feedback outperforms the other controllers in very difficult terrains and actually promotes the emergence of short synchronized movement patterns. All of this is done using a unified framework for the generation of limit cycle and chaotic behaviors, where a simple parameter change can switch from one behavior to the other through bifurcation. Such flexibility would allow the automatic adaptation of the robot's locomotion strategy to the terrain uncertainty.
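
The limit-cycle/chaos switch can be reproduced with the standard Rossler equations, where a single parameter moves the system through the bifurcation. The sketch below is illustrative and omits the coupling and sensory feedback:

```python
# Rossler system: limit cycle or chaos depending on the parameter c.
import numpy as np

def rossler_x(c, a=0.2, b=0.2, dt=0.01, steps=20000):
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(steps)
    for i in range(steps):
        dx = -y - z
        dy = x + a * y
        dz = b + z * (x - c)
        x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
        xs[i] = x
    return xs

periodic = rossler_x(c=2.5)   # regular oscillation (limit cycle regime)
chaotic = rossler_x(c=5.7)    # irregular, aperiodic oscillation (chaos)
# The same kind of parameter change is what lets the controller switch its
# locomotion strategy through a bifurcation.
```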


link (url) DOI [BibTex]

A Dynamical System for Online Learning of Periodic Movements of Unknown Waveform and Frequency

Gams, A., Righetti, L., Ijspeert, A., Lenarčič, J.

In 2008 2nd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, pages: 85-90, IEEE, Scottsdale, USA, October 2008 (inproceedings)

Abstract
The paper presents a two-layered system for learning and encoding a periodic signal onto a limit cycle without any knowledge of the waveform and the frequency of the signal, and without any signal processing. The first dynamical system is responsible for extracting the main frequency of the input signal. It is based on adaptive frequency phase oscillators in a feedback structure, enabling us to extract separate frequency components without any signal processing, as all of the processing is embedded in the dynamics of the system itself. The second dynamical system is responsible for learning the waveform. It has a built-in learning algorithm based on locally weighted regression, which adjusts the weights according to the amplitude of the input signal. By combining the output of the first system with the input of the second system, we can rapidly teach new trajectories to robots. The system works online for any periodic signal and can be applied in parallel to multiple dimensions. Furthermore, it can adapt to changes in frequency and shape, e.g., to non-stationary signals, and is computationally inexpensive. Results using simulated and hand-generated input signals, along with an application of the algorithm to a HOAP-2 humanoid robot, are presented.
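
A reduced, single-oscillator sketch of the two-layer idea (our simplification; the paper uses a pool of adaptive frequency oscillators in feedback and locally weighted regression): layer one adapts phase and frequency to the input, layer two learns the waveform as a function of phase.

```python
# One adaptive phase oscillator (layer 1) plus phase-indexed kernel weights
# (layer 2). The error between input and learned output drives both layers.
# Constants are illustrative.
import numpy as np

dt, K, eta = 1e-3, 20.0, 2.0
centers = np.linspace(0.0, 2*np.pi, 25, endpoint=False)
w = np.zeros_like(centers)              # waveform weights
phi, omega = 0.0, 5.0                   # initial phase and frequency guess

def kernels(phi):
    # Periodic bumps over the phase (von Mises-like).
    psi = np.exp(4.0 * (np.cos(phi - centers) - 1.0))
    return psi / psi.sum()

for i in range(60000):
    t = i * dt
    target = np.sin(9.0*t) + 0.4*np.sin(18.0*t)   # "unknown" periodic input
    psi = kernels(phi)
    e = target - w @ psi                          # feedback error
    # Layer 1: adaptive frequency phase oscillator driven by the error.
    phi += dt * (omega - K * e * np.sin(phi))
    omega += dt * (-K * e * np.sin(phi))
    # Layer 2: incremental, locally weighted update of the waveform.
    w += eta * dt * psi * e
# Intended behavior: omega settles near the base frequency (9 rad/s) and
# w @ kernels(phi) reproduces the waveform, with no explicit signal
# processing anywhere in the loop.
```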


link (url) DOI [BibTex]

Passive compliant quadruped robot using central pattern generators for locomotion control

Rutishauser, S., Sproewitz, A., Righetti, L., Ijspeert, A.

In 2008 IEEE International Conference on Biomedical Robotics and Biomechatronics, pages: 710-715, IEEE, Scottsdale, USA, October 2008 (inproceedings)

Abstract
We present a new quadruped robot, "Cheetah", featuring three-segment pantographic legs with passive compliant knee joints. Each leg has two degrees of freedom - knee and hip joints can be actuated using proximally mounted RC servo motors; force transmission to the knee is achieved by means of a Bowden cable mechanism. Simple electronics to command the actuators from a desktop computer have been designed in order to test the robot. A Central Pattern Generator (CPG) network has been implemented to generate different gaits. A parameter space search was performed and tested on the robot to optimize forward velocity.


link (url) DOI [BibTex]

Frequency analysis with coupled nonlinear oscillators

Buchli, J., Righetti, L., Ijspeert, A.

Physica D: Nonlinear Phenomena, 237(13):1705-1718, August 2008 (article)

Abstract
We present a method to obtain the frequency spectrum of a signal with a nonlinear dynamical system. The dynamical system is composed of a pool of adaptive frequency oscillators with negative mean-field coupling. For the frequency analysis, the synchronization and adaptation properties of the component oscillators are exploited. The frequency spectrum of the signal is reflected in the statistics of the intrinsic frequencies of the oscillators. The frequency analysis is completely embedded in the dynamics of the system. Thus, no pre-processing or additional parameters, such as time windows, are needed. Representative results of the numerical integration of the system are presented. It is shown that the oscillators tune to the correct frequencies for both discrete and continuous spectra. Due to its dynamic nature, the system is also capable of tracking non-stationary spectra. Further, we show that the system can be modeled in a probabilistic manner by means of a nonlinear Fokker–Planck equation. The probabilistic treatment is in good agreement with the numerical results, and provides a useful tool to understand the underlying mechanisms leading to convergence.
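
The mechanism can be sketched directly from the description above: a pool of adaptive frequency Hopf oscillators, each driven by the input minus the pool's mean field, so that every oscillator locks onto and removes one spectral component. Constants below are illustrative, not taken from the paper:

```python
# Pool of adaptive frequency Hopf oscillators with negative mean-field
# coupling: each unit locks onto one spectral line of the input and the
# statistics of the adapted frequencies reflect the spectrum.
import numpy as np

rng = np.random.default_rng(1)
N, K, gamma, mu, dt = 10, 20.0, 8.0, 1.0, 1e-3
x, y = np.ones(N), np.zeros(N)
omega = rng.uniform(5.0, 40.0, N)       # initial frequency guesses (rad/s)

def signal(t):
    return np.sin(10.0 * t) + 0.7 * np.sin(25.0 * t)   # two spectral lines

for i in range(300000):
    t = i * dt
    F = signal(t) - x.sum()             # negative mean-field coupling
    r = np.sqrt(x*x + y*y) + 1e-9
    dx = gamma * (mu - r*r) * x - omega * y + K * F
    dy = gamma * (mu - r*r) * y + omega * x
    domega = -K * F * y / r             # frequency adaptation
    x, y, omega = x + dt*dx, y + dt*dy, omega + dt*domega
# Intended outcome: the entries of omega cluster near 10 and 25 rad/s;
# their histogram plays the role of the frequency spectrum.
```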


link (url) DOI [BibTex]

A modular bio-inspired architecture for movement generation for the infant-like robot iCub

Degallier, S., Righetti, L., Natale, L., Nori, F., Metta, G., Ijspeert, A.

In 2008 2nd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, pages: 795-800, IEEE, Scottsdale, USA, October 2008 (inproceedings)

Abstract
Movement generation in humans appears to be processed through a three-layered architecture, where each layer corresponds to a different level of abstraction in the representation of the movement. In this article, we will present an architecture reflecting this organization and based on a modular approach to human movement generation. We will show that our architecture is well suited for the online generation and modulation of motor behaviors, but also for switching between motor behaviors. This will be illustrated respectively through an interactive drumming task and through switching between reaching and crawling.


link (url) DOI [BibTex]


2003


Evolution of Fault-tolerant Self-replicating Structures

Righetti, L., Shokur, S., Capcarre, M.

In Advances in Artificial Life, pages: 278-288, Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2003 (inproceedings)

Abstract
Designed and evolved self-replicating structures in cellular automata have been extensively studied in the past as models of Artificial Life. However, CAs, unlike their biological counterparts, are very brittle: any faulty cell usually leads to the complete destruction of any emerging structures, let alone self-replicating structures. A way to design fault-tolerant structures based on error-correcting codes was presented recently [1], but it required cumbersome work to be put into practice. In this paper, we go back to the original inspiration for these works, nature, and propose a way to evolve self-replicating structures, faults here being only an idiosyncrasy of the environment.


link (url) DOI [BibTex]
