

2018


Deep Reinforcement Learning for Resource-aware Control

Baumann, D., Zhu, J., Martius, G., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), Miami, FL, USA, December 2018 (inproceedings) Accepted

[BibTex]


Minimum Information Exchange in Distributed Systems

Solowjow, F., Mehrjou, A., Schölkopf, B., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), Miami, FL, USA, December 2018 (inproceedings) Accepted

arXiv [BibTex]


Universal Custom Complex Magnetic Spring Design Methodology

Woodward, M. A., Sitti, M.

IEEE Transactions on Magnetics, 54(1):1-13, October 2018 (article)

Abstract
A design methodology is presented for creating custom complex magnetic springs through the design of force-displacement curves. This methodology results in a magnet configuration that produces a desired force-displacement relationship. Initially, the problem is formulated and solved as a system of linear equations. Then, given the limited likelihood of a single solution being feasibly manufactured, key parameters of the solution are extracted and varied to create a family of solutions. Finally, these solutions are refined using numerical optimization. Given the properties of magnets, this methodology can create any well-defined function of force versus displacement and is model-independent. To demonstrate this flexibility, a number of example magnetic springs are designed; one of them, designed for use in a jumping-gliding robot's shape memory alloy actuated clutch, is manufactured and experimentally characterized. Due to the scaling of magnetic forces, the displacement region in which these magnetic springs are most applicable is millimeters and below. However, this region is well suited for miniature robots and smart material actuators, where a tailored magnetic spring, designed to complement a component, can enhance its performance while adding new functionality. The methodology is also extendable to variable interactions and multi-dimensional magnetic field design.

DOI [BibTex]


Towards Robust Visual Odometry with a Multi-Camera System

Liu, P., Geppert, M., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In International Conference on Intelligent Robots and Systems (IROS), October 2018 (inproceedings)

Abstract
We present a visual odometry (VO) algorithm for a multi-camera system that operates robustly in challenging environments. Our algorithm consists of a pose tracker and a local mapper. The tracker estimates the current pose by minimizing photometric errors between the most recent keyframe and the current frame. The mapper initializes the depths of all sampled feature points using plane-sweeping stereo. To reduce pose drift, a sliding window optimizer is used to refine poses and structure jointly. Our formulation is flexible enough to support an arbitrary number of stereo cameras. We evaluate our algorithm thoroughly on five datasets. The datasets were captured in different conditions: daytime, night-time with near-infrared (NIR) illumination and night-time without NIR illumination. Experimental results show that a multi-camera setup makes the VO more robust to challenging environments, especially night-time conditions, in which a single stereo configuration fails easily due to the lack of features.

pdf [BibTex]


Deep Neural Network-based Cooperative Visual Tracking through Multiple Micro Aerial Vehicles

Price, E., Lawless, G., Ludwig, R., Martinovic, I., Buelthoff, H. H., Black, M. J., Ahmad, A.

IEEE Robotics and Automation Letters, 3, pages: 3193-3200, IEEE, October 2018 (article)

Abstract
Multi-camera tracking of humans and animals in outdoor environments is a relevant and challenging problem. Our approach to it involves a team of cooperating micro aerial vehicles (MAVs) with on-board cameras only. Deep neural networks (DNNs) often fail on objects that are small in scale or far away from the camera, which are typical characteristics of a scenario with aerial robots. Thus, the core problem addressed in this paper is how to achieve on-board, online, continuous and accurate vision-based detections using DNNs for visual person tracking through MAVs. Our solution leverages cooperation among multiple MAVs and active selection of the most informative regions of the image. We demonstrate the efficiency of our approach through simulations with up to 16 robots and real robot experiments involving two aerial robots tracking a person, while maintaining an active perception-driven formation. ROS-based source code is provided for the benefit of the community.

link (url) DOI Project Page [BibTex]


Generating 3D Faces using Convolutional Mesh Autoencoders

Ranjan, A., Bolkart, T., Sanyal, S., Black, M. J.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://coma.is.tue.mpg.de/.
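The spectral convolutions referenced above are commonly realized as Chebyshev graph convolutions. Below is a minimal sketch of that basic operation, assuming a normalized mesh Laplacian with maximum eigenvalue near 2; the names and shapes are illustrative, not the authors' code, and the full model adds mesh down-/up-sampling layers:

```python
# Minimal Chebyshev spectral graph convolution on mesh vertex features.
import numpy as np

def chebyshev_conv(X, L, theta):
    """X: (V, Fin) vertex features, L: (V, V) normalized graph Laplacian,
    theta: (K, Fin, Fout) Chebyshev filter coefficients."""
    V, _ = X.shape
    K = theta.shape[0]
    # Rescale the Laplacian to roughly [-1, 1], assuming lambda_max ~= 2.
    L_hat = L - np.eye(V)
    Tx = [X, L_hat @ X]                       # T_0(L)X and T_1(L)X
    for k in range(2, K):                     # Chebyshev recurrence
        Tx.append(2 * L_hat @ Tx[-1] - Tx[-2])
    # Filtered output: sum_k T_k(L) X theta_k
    return sum(Tx[k] @ theta[k] for k in range(K))
```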

code paper supplementary link (url) [BibTex]


Learning Priors for Semantic 3D Reconstruction

Cherabier, I., Schönberger, J., Oswald, M., Pollefeys, M., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
We present a novel semantic 3D reconstruction framework which embeds variational regularization into a neural network. Our network performs a fixed number of unrolled multi-scale optimization iterations with shared interaction weights. In contrast to existing variational methods for semantic 3D reconstruction, our model is end-to-end trainable and captures more complex dependencies between the semantic labels and the 3D geometry. Compared to previous learning-based approaches to 3D reconstruction, we integrate powerful long-range dependencies using variational coarse-to-fine optimization. As a result, our network architecture requires only a moderate number of parameters while keeping a high level of expressiveness which enables learning from very little data. Experiments on real and synthetic datasets demonstrate that our network achieves higher accuracy compared to a purely variational approach while at the same time requiring two orders of magnitude fewer iterations to converge. Moreover, our approach handles ten times more semantic class labels using the same computational resources.

pdf suppmat [BibTex]


Part-Aligned Bilinear Representations for Person Re-identification

Suh, Y., Wang, J., Tang, S., Mei, T., Lee, K. M.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Comparing the appearance of corresponding body parts is essential for person re-identification. However, body parts are frequently misaligned between detected boxes, due to the detection errors and the pose/viewpoint changes. In this paper, we propose a network that learns a part-aligned representation for person re-identification. Our model consists of a two-stream network, which generates appearance and body part feature maps respectively, and a bilinear-pooling layer that fuses two feature maps to an image descriptor. We show that it results in a compact descriptor, where the inner product between two image descriptors is equivalent to an aggregation of the local appearance similarities of the corresponding body parts, and thereby significantly reduces the part misalignment problem. Our approach is advantageous over other pose-guided representations by learning part descriptors optimal for person re-identification. Training the network does not require any part annotation on the person re-identification dataset. Instead, we simply initialize the part sub-stream using a pre-trained sub-network of an existing pose estimation network and train the whole network to minimize the re-identification loss. We validate the effectiveness of our approach by demonstrating its superiority over the state-of-the-art methods on the standard benchmark datasets including Market-1501, CUHK03, CUHK01 and DukeMTMC, and standard video dataset MARS.
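The descriptor property claimed above (inner products of pooled descriptors aggregating part-weighted local appearance similarities) follows from a simple algebraic identity, which this small numpy sketch verifies; the array sizes are arbitrary illustrations, not the paper's dimensions:

```python
# Verify: <vec(sum_x a_x p_x^T), vec(sum_y b_y q_y^T)>
#         = sum_{x,y} (a_x . b_y) * (p_x . q_y)
import numpy as np

rng = np.random.default_rng(0)
P, Da, Dp = 50, 8, 6          # locations, appearance dim, part dim
A1, A2 = rng.normal(size=(P, Da)), rng.normal(size=(P, Da))
B1, B2 = rng.normal(size=(P, Dp)), rng.normal(size=(P, Dp))

def pooled(A, B):
    # vec of the sum of per-location outer products a_x p_x^T
    return np.einsum('xa,xp->ap', A, B).ravel()

lhs = pooled(A1, B1) @ pooled(A2, B2)
rhs = sum((A1[x] @ A2[y]) * (B1[x] @ B2[y])
          for x in range(P) for y in range(P))
assert np.isclose(lhs, rhs)
```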

pdf supplementary [BibTex]


Learning Human Optical Flow

Ranjan, A., Romero, J., Black, M. J.

In 29th British Machine Vision Conference, September 2018 (inproceedings)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Given this, we devise an optical flow algorithm specifically for human motion and show that it is superior to generic flow methods. Designing a method by hand is impractical, so we develop a new training database of image sequences with ground truth optical flow. For this we use a 3D model of the human body and motion capture data to synthesize realistic flow fields. We then train a convolutional neural network to estimate human flow fields from pairs of images. Since many applications in human motion analysis depend on speed, and we anticipate mobile applications, we base our method on SpyNet with several modifications. We demonstrate that our trained network is more accurate than a wide range of top methods on held-out test data and that it generalizes well to real image sequences. When combined with a person detector/tracker, the approach provides a full solution to the problem of 2D human flow estimation. Both the code and the dataset are available for research.

video code pdf link (url) [BibTex]


Human Motion Parsing by Hierarchical Dynamic Clustering

Zhang, Y., Tang, S., Sun, H., Neumann, H.

British Machine Vision Conference, September 2018 (conference)

Abstract
Parsing continuous human motion into meaningful segments plays an essential role in various applications. In this work, we propose a hierarchical dynamic clustering framework to derive action clusters from a sequence of local features in an unsupervised bottom-up manner. We systematically investigate the modules in this framework and particularly propose diverse temporal pooling schemes, in order to realize accurate temporal action localization. We demonstrate our method on two motion parsing tasks: temporal action segmentation and abnormal behavior detection. The experimental results indicate that the proposed framework is significantly more effective than the other related state-of-the-art methods on several datasets.

pdf [BibTex]


Unsupervised Learning of Multi-Frame Optical Flow with Occlusions

Janai, J., Güney, F., Ranjan, A., Black, M. J., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

pdf suppmat [BibTex]


Learning an Infant Body Model from RGB-D Data for Accurate Full Body Motion Analysis

Hesse, N., Pujades, S., Romero, J., Black, M. J., Bodensteiner, C., Arens, M., Hofmann, U. G., Tacke, U., Hadders-Algra, M., Weinberger, R., Muller-Felber, W., Schroeder, A. S.

In Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI), September 2018 (inproceedings)

Abstract
Infant motion analysis enables early detection of neurodevelopmental disorders like cerebral palsy (CP). Diagnosis, however, is challenging, requiring expert human judgement. An automated solution would be beneficial but requires the accurate capture of 3D full-body movements. To that end, we develop a non-intrusive, low-cost, lightweight acquisition system that captures the shape and motion of infants. Going beyond work on modeling adult body shape, we learn a 3D Skinned Multi-Infant Linear body model (SMIL) from noisy, low-quality, and incomplete RGB-D data. We demonstrate the capture of shape and motion with 37 infants in a clinical environment. Quantitative experiments show that SMIL faithfully represents the data and properly factorizes the shape and pose of the infants. With a case study based on general movement assessment (GMA), we demonstrate that SMIL captures enough information to allow medical assessment. SMIL provides a new tool and a step towards a fully automatic system for GMA.

pdf Project page [BibTex]


SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images

Coors, B., Condurache, A. P., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Omnidirectional cameras offer great benefits over classical cameras wherever a wide field of view is essential, such as in virtual reality applications or in autonomous robots. Unfortunately, standard convolutional neural networks are not well suited for this scenario as the natural projection surface is a sphere which cannot be unwrapped to a plane without introducing significant distortions, particularly in the polar regions. In this work, we present SphereNet, a novel deep learning framework which encodes invariance against such distortions explicitly into convolutional neural networks. Towards this goal, SphereNet adapts the sampling locations of the convolutional filters, effectively reversing distortions, and wraps the filters around the sphere. By building on regular convolutions, SphereNet enables the transfer of existing perspective convolutional neural network models to the omnidirectional case. We demonstrate the effectiveness of our method on the tasks of image classification and object detection, exploiting two newly created semi-synthetic and real-world omnidirectional datasets.
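To illustrate the adapted sampling locations described above, the sketch below places a regular kernel grid on the sphere's tangent plane and maps it back via the inverse gnomonic projection; the grid step and function names are assumptions for illustration, not the authors' implementation:

```python
# Tangent-plane kernel grid mapped back onto the sphere (inverse gnomonic).
import numpy as np

def tangent_grid(lat0, lon0, step=0.01, k=3):
    """Return (k*k, 2) array of (lat, lon) sampling locations, in radians,
    around the tangent point (lat0, lon0)."""
    r = (k - 1) // 2
    xs, ys = np.meshgrid(np.arange(-r, r + 1) * step,
                         np.arange(-r, r + 1) * step)
    x, y = xs.ravel(), ys.ravel()
    rho = np.hypot(x, y)
    c = np.arctan(rho)                       # angular distance on the sphere
    rho = np.where(rho == 0, 1.0, rho)       # avoid 0/0 at the grid center
    lat = np.arcsin(np.cos(c) * np.sin(lat0)
                    + y * np.sin(c) * np.cos(lat0) / rho)
    lon = lon0 + np.arctan2(x * np.sin(c),
                            rho * np.cos(lat0) * np.cos(c)
                            - y * np.sin(lat0) * np.sin(c))
    return np.stack([lat, lon], axis=1)

# The returned (lat, lon) pairs would then be converted to pixel coordinates
# of the equirectangular image and sampled with bilinear interpolation.
```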

pdf suppmat [BibTex]


Recovering Accurate 3D Human Pose in The Wild Using IMUs and a Moving Camera

von Marcard, T., Henschel, R., Black, M. J., Rosenhahn, B., Pons-Moll, G.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
In this work, we propose a method that combines a single hand-held camera and a set of Inertial Measurement Units (IMUs) attached at the body limbs to estimate accurate 3D poses in the wild. This poses many new challenges: the moving camera, heading drift, cluttered background, occlusions and many people visible in the video. We associate 2D pose detections in each image to the corresponding IMU-equipped persons by solving a novel graph based optimization problem that forces 3D to 2D coherency within a frame and across long range frames. Given associations, we jointly optimize the pose of a statistical body model, the camera pose and heading drift using a continuous optimization framework. We validated our method on the TotalCapture dataset, which provides video and IMU synchronized with ground truth. We obtain an accuracy of 26 mm, which makes it accurate enough to serve as a benchmark for image-based 3D pose estimation in the wild. Using our method, we recorded 3D Poses in the Wild (3DPW), a new dataset consisting of more than 51,000 frames with accurate 3D pose in challenging sequences, including walking in the city, going up-stairs, having coffee or taking the bus. We make the reconstructed 3D poses, video, IMU and 3D models available for research purposes at http://virtualhumans.mpi-inf.mpg.de/3DPW.

pdf SupMat data/code [BibTex]


Deep Directional Statistics: Pose Estimation with Uncertainty Quantification

Prokudin, S., Gehler, P., Nowozin, S.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Modern deep learning systems successfully solve many perception tasks such as object pose estimation when the input image is of high quality. However, in challenging imaging conditions such as on low-resolution images or when the image is corrupted by imaging artifacts, current systems degrade considerably in accuracy. While a loss in performance is unavoidable, we would like our models to quantify their uncertainty in order to achieve robustness against images of varying quality. Probabilistic deep learning models combine the expressive power of deep learning with uncertainty quantification. In this paper, we propose a novel probabilistic deep learning model for the task of angular regression. Our model uses von Mises distributions to predict a distribution over the object pose angle. Whereas a single von Mises distribution makes strong assumptions about the shape of the distribution, we extend the basic model to predict a mixture of von Mises distributions. We show how to learn a mixture model using a finite and infinite number of mixture components. Our model allows for likelihood-based training and efficient inference at test time. We demonstrate on a number of challenging pose estimation datasets that our model produces calibrated probability predictions and competitive or superior point estimates compared to the current state of the art.
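As an illustration of the model class described above, the following sketch evaluates the log density of a von Mises mixture over an angle; the parameterization is the standard one and stands in for the quantities the network would predict (an assumption for illustration, not the authors' code):

```python
# Log density of a mixture of von Mises distributions over an angle.
import numpy as np
from scipy.special import ive  # exponentially scaled Bessel function I_v

def vonmises_mixture_logpdf(theta, mus, kappas, weights):
    """theta: angle in radians; mus, kappas, weights: (K,) mixture params."""
    # Stable log I_0(kappa): I_0(k) = ive(0, k) * exp(k)
    log_norm = np.log(2 * np.pi) + np.log(ive(0, kappas)) + kappas
    log_comp = kappas * np.cos(theta - mus) - log_norm
    return np.logaddexp.reduce(np.log(weights) + log_comp)

# Training would minimize the negative of this value over the predicted
# (mus, kappas, weights) for each ground-truth pose angle.
print(vonmises_mixture_logpdf(0.3, np.array([0.0, np.pi]),
                              np.array([4.0, 2.0]), np.array([0.7, 0.3])))
```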

code pdf [BibTex]


Uphill production of dihydrogen by enzymatic oxidation of glucose without an external energy source

Suraniti, E., Merzeau, P., Roche, J., Gounel, S., Mark, A. G., Fischer, P., Mano, N., Kuhn, A.

Nature Communications, 9(1):3229, August 2018 (article)

Abstract
Chemical systems do not allow the energy of several simple reactions to be coupled to drive a subsequent reaction that takes place in the same medium and leads to a product with a higher energy than the one released during the first reaction. Gibbs energy considerations are thus not favorable for driving, e.g., water splitting by the direct oxidation of glucose as a model reaction. Here, we show that it is nevertheless possible to carry out such an energetically uphill reaction, if the electrons released in the oxidation reaction are temporarily stored in an electromagnetic system, which is then used to raise the electrons' potential energy so that they can power the electrolysis of water in a second step. We thereby demonstrate the general concept that chemical reactions delivering lower energy can be used to enable the formation of reaction products requiring higher energy in a closed system.

link (url) DOI [BibTex]


Learning-Based Robust Model Predictive Control with State-Dependent Uncertainty

Soloperto, R., Müller, M. A., Trimpe, S., Allgöwer, F.

In Proceedings of the 6th IFAC Conference on Nonlinear Model Predictive Control, Madison, Wisconsin, USA, August 2018 (inproceedings) Accepted

[BibTex]


Decentralized MPC based Obstacle Avoidance for Multi-Robot Target Tracking Scenarios

Tallamraju, R., Rajappa, S., Black, M. J., Karlapalem, K., Ahmad, A.

The 16th IEEE International Symposium on Safety, Security, and Rescue Robotics, August 2018 (conference) Accepted

Project Page [BibTex]


Gait learning for soft microrobots controlled by light fields

von Rohr, A., Trimpe, S., Marco, A., Fischer, P., Palagi, S.

In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), July 2018 (inproceedings)

Abstract
Soft microrobots based on photoresponsive materials and controlled by light fields can generate a variety of different gaits. This inherent flexibility can be exploited to maximize their locomotion performance in a given environment and used to adapt them to changing environments. However, because of the lack of accurate locomotion models, and given the intrinsic variability among microrobots, analytical control design is not possible. Common data-driven approaches, on the other hand, require running prohibitive numbers of experiments and lead to very sample-specific results. Here we propose a probabilistic learning approach for light-controlled soft microrobots based on Bayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach results in a learning scheme that is highly data-efficient, enabling gait optimization with a limited experimental budget, and robust against differences among microrobot samples. These features are obtained by designing the learning scheme through the comparison of different GP priors and BO settings on a semi-synthetic data set. The developed learning scheme is validated in microrobot experiments, resulting in a 115% improvement in a microrobot's locomotion performance with an experimental budget of only 20 tests. These encouraging results lead the way toward self-adaptive microrobotic systems based on light-controlled soft microrobots and probabilistic learning control.
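A minimal sketch of the kind of learning loop described above, Bayesian optimization with a Gaussian-process surrogate, is shown below; the RBF kernel, the UCB acquisition, and the evaluate() stub are illustrative assumptions rather than the paper's exact design choices:

```python
# Data-efficient gait optimization: GP surrogate + UCB acquisition.
import numpy as np

def rbf(A, B, ls=0.2, var=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d / ls**2)

def gp_posterior(X, y, Xs, noise=1e-3):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)                          # (n, m) cross-covariance
    alpha = np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    mu = Ks.T @ alpha
    var = rbf(Xs, Xs).diagonal() - np.einsum('ij,ij->j', Ks, v)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def evaluate(params):
    # placeholder for one microrobot locomotion experiment returning speed
    return -np.sum((params - 0.6) ** 2) + 0.01 * np.random.randn()

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(3, 2))           # initial gait parameters
y = np.array([evaluate(x) for x in X])
for _ in range(17):                          # 20 experiments in total
    cand = rng.uniform(0, 1, size=(500, 2))
    mu, sd = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(mu + 2.0 * sd)]  # UCB acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate(x_next))
best_gait = X[np.argmax(y)]
```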

[BibTex]


A machine from machines

Fischer, P.

Nature Physics, July 2018 (article)

Abstract
Building spinning microrotors that self-assemble and synchronize to form a gear sounds like an impossible feat. However, it has now been achieved using only a single type of building block -- a colloid that self-propels.

link (url) DOI [BibTex]


Chemotaxis of Active Janus Nanoparticles

Popescu, M. N., Uspal, W. E., Bechinger, C., Fischer, P.

Nano Letters, July 2018, PMID: 30047271 (article)

Abstract
While colloids and molecules in solution exhibit passive Brownian motion, particles that are partially covered with a catalyst, which promotes the transformation of a fuel dissolved in the solution, can actively move. These active Janus particles are known as “chemical nanomotors” or self-propelling “swimmers” and have been realized with a range of catalysts, sizes, and particle geometries. Because their active translation depends on the fuel concentration, one expects that active colloidal particles should also be able to swim toward a fuel source. Synthesizing and engineering nanoparticles with distinct chemotactic properties may enable important developments, such as particles that can autonomously swim along a pH gradient toward a tumor. Chemotaxis requires that the particles possess an active coupling of their orientation to a chemical gradient. In this Perspective we provide a simple, intuitive description of the underlying mechanisms for chemotaxis, as well as the means to analyze and classify active particles that can show positive or negative chemotaxis. The classification provides guidance for engineering a specific response and is a useful organizing framework for the quantitative analysis and modeling of chemotactic behaviors. Chemotaxis is emerging as an important focus area in the field of active colloids and promises a number of fascinating applications for nanoparticles and particle-based delivery.

link (url) DOI [BibTex]


Kernel Recursive ABC: Point Estimation with Intractable Likelihood

Kajihara, T., Kanagawa, M., Yamazaki, K., Fukumizu, K.

Proceedings of the 35th International Conference on Machine Learning, pages: 2405-2414, PMLR, July 2018 (proceedings)

Abstract
We propose a novel approach to parameter estimation for simulator-based statistical models with intractable likelihood. Our proposed method involves recursive application of kernel ABC and kernel herding to the same observed data. We provide a theoretical explanation regarding why the approach works, showing (for the population setting) that, under a certain assumption, point estimates obtained with this method converge to the true parameter, as recursion proceeds. We have conducted a variety of numerical experiments, including parameter estimation for a real-world pedestrian flow simulator, and show that in most cases our method outperforms existing approaches.
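For intuition, the sketch below performs a single kernel ABC step of the kind the recursive scheme iterates; the Gaussian kernels, the toy simulator, and the regularization constant are assumptions for illustration (the full method alternates such steps with kernel herding on the same observed data):

```python
# One kernel ABC step: weights of the posterior mean embedding.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    # toy intractable-likelihood model: observation = theta + noise
    return theta + 0.1 * rng.normal(size=theta.shape)

def gauss_gram(A, B, ls):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls**2)

y_obs = np.array([[0.7]])
thetas = rng.uniform(-2, 2, size=(200, 1))    # current parameter particles
sims = np.stack([simulator(t) for t in thetas]).reshape(200, 1)

n, lam, ls = len(thetas), 1e-3, 0.2
G = gauss_gram(sims, sims, ls)
w = np.linalg.solve(G + n * lam * np.eye(n),
                    gauss_gram(sims, y_obs, ls)).ravel()

theta_hat = w @ thetas                        # weighted point estimate
# The recursion would now apply kernel herding to (thetas, w) to draw the
# next particle set and repeat with the same observed data.
```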

Paper [BibTex]


Intrinsic disentanglement: an invariance view for deep generative models

Besserve, M., Sun, R., Schölkopf, B.

Workshop on Theoretical Foundations and Applications of Deep Generative Models at ICML, July 2018 (conference)

PDF [BibTex]


Robust Visual Augmented Reality in Robot-Assisted Surgery

Forte, M. P.

Politecnico di Milano, July 2018 (mastersthesis)

Abstract
The broader research objective of this line of research is to test the hypothesis that real-time stereo video analysis and augmented reality can increase safety and task efficiency in robot-assisted surgery. This master’s thesis aims to solve the first step needed to achieve this goal: the creation of a robust system that delivers the envisioned feedback to a surgeon while he or she controls a surgical robot that is identical to those used on human patients. Several approaches for applying augmented reality to da Vinci Surgical Systems have been proposed, but none of them entirely rely on a clinical robot; specifically, they require additional sensors, depend on access to the da Vinci API, are designed for a very specific task, or were tested on systems that are starkly different from those in clinical use. There has also been prior work that presents the real-world camera view and the computer graphics on separate screens, or not in real time. In other scenarios, the digital information is overlaid manually by the surgeons themselves or by computer scientists, rather than being generated automatically in response to the surgeon’s actions. We attempted to overcome the aforementioned constraints by acquiring input signals from the da Vinci stereo endoscope and providing augmented reality to the console in real time (less than 150 ms delay, including the 62 ms of inherent latency of the da Vinci). The potential benefits of the resulting system are broad because it was built to be general, rather than customized for any specific task. The entire platform is compatible with any generation of the da Vinci System and does not require a dVRK (da Vinci Research Kit) or access to the API. Thus, it can be applied to existing da Vinci Systems in operating rooms around the world.

[BibTex]


Learning an Approximate Model Predictive Controller with Guarantees

Hertneck, M., Koehler, J., Trimpe, S., Allgöwer, F.

IEEE Control Systems Letters, 2(3):543-548, July 2018 (article)

Abstract
A supervised learning framework is proposed to approximate a model predictive controller (MPC) with reduced computational complexity and guarantees on stability and constraint satisfaction. The framework can be used for a wide class of nonlinear systems. Any standard supervised learning technique (e.g. neural networks) can be employed to approximate the MPC from samples. In order to obtain closed-loop guarantees for the learned MPC, a robust MPC design is combined with statistical learning bounds. The MPC design ensures robustness to inaccurate inputs within given bounds, and Hoeffding’s Inequality is used to validate that the learned MPC satisfies these bounds with high confidence. The result is a closed-loop statistical guarantee on stability and constraint satisfaction for the learned MPC. The proposed learning-based MPC framework is illustrated on a nonlinear benchmark problem, for which we learn a neural network controller with guarantees.
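The validation step described above reduces to a standard Hoeffding bound; a minimal sketch, assuming i.i.d. pass/fail validation samples of the learned controller (names are illustrative):

```python
# Hoeffding lower confidence bound from i.i.d. validation samples.
import numpy as np

def hoeffding_lower_bound(passes, n, delta=1e-3):
    """Lower bound, holding with probability 1 - delta, on the true
    probability that the learned MPC stays within the admissible
    input-error bound, given `passes` successes in `n` samples."""
    p_hat = passes / n
    eps = np.sqrt(np.log(1 / delta) / (2 * n))   # Hoeffding deviation
    return p_hat - eps

# e.g. 10000 validation samples, all within the error bound:
print(hoeffding_lower_bound(10000, 10000))       # ~0.981 for delta = 1e-3
```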

PDF DOI [BibTex]


Robust Physics-based Motion Retargeting with Realistic Body Shapes

Al Borno, M., Righetti, L., Black, M. J., Delp, S. L., Fiume, E., Romero, J.

Computer Graphics Forum, 37, pages: 6:1-12, July 2018 (article)

Abstract
Motion capture is often retargeted to new, and sometimes drastically different, characters. When the characters take on realistic human shapes, however, we become more sensitive to the motion looking right. This means adapting it to be consistent with the physical constraints imposed by different body shapes. We show how to take realistic 3D human shapes, approximate them using a simplified representation, and animate them so that they move realistically using physically-based retargeting. We develop a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The approach automatically adapts the motion of the mocap subject to the body shape of a target subject. This motion respects the physical properties of the new body and every body shape results in a different and appropriate movement. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters. In an interactive environment, successful retargeting requires adapting the motion to unexpected external forces. We achieve robustness to such forces using a novel LQR-tree formulation. We show that the simulated motions look appropriate to each character’s anatomy and their actions are robust to perturbations.

pdf video [BibTex]


Comparison-Based Random Forests

Haghiri, S., Garreau, D., von Luxburg, U.

In 35th International Conference on Machine Learning (ICML), July 2018 (inproceedings)

link (url) [BibTex]


Probabilistic Recurrent State-Space Models

Doerr, A., Daniel, C., Schiegg, M., Nguyen-Tuong, D., Schaal, S., Toussaint, M., Trimpe, S.

In Proceedings of the International Conference on Machine Learning (ICML), July 2018 (inproceedings) Accepted

Abstract
State-space models (SSMs) are a highly expressive model class for learning patterns in time series data and for system identification. Deterministic versions of SSMs (e.g., LSTMs) proved extremely successful in modeling complex time-series data. Fully probabilistic SSMs, however, often prove hard to train, even for smaller problems. To overcome this limitation, we propose a scalable initialization and training algorithm based on doubly stochastic variational inference and Gaussian processes. In contrast to related approaches, the variational approximation we propose fully captures the temporal correlations of the latent states, which allows for robust training.

arXiv pdf Project Page [BibTex]


Real-time Perception meets Reactive Motion Generation

Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.

IEEE Robotics and Automation Letters, 3(3):1864-1871, July 2018 (article)

Abstract
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system where real-time object and robot tracking as well as ambient world modeling provides the necessary input to feedback controllers and continuous motion optimizers. Specifically, they provide attractive and repulsive potentials based on which the controllers and motion optimizer can online compute movement policies at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. In 333 experiments, we show the robustness and accuracy of the proposed system.
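As a toy illustration of the attractive and repulsive potentials mentioned above, the classic potential-field form is sketched below; the gains, influence radius, and velocity-command mapping are illustrative assumptions, not the integrated system's actual controllers:

```python
# Classic attractive/repulsive potential gradients for motion generation.
import numpy as np

def attractive_grad(x, goal, k_att=1.0):
    # gradient of 0.5 * k_att * ||x - goal||^2, pulling toward the goal
    return k_att * (x - goal)

def repulsive_grad(x, obstacle, influence=0.5, k_rep=0.1):
    # gradient of the standard repulsive potential
    # 0.5 * k_rep * (1/d - 1/influence)^2, active only near the obstacle
    diff = x - obstacle
    d = np.linalg.norm(diff)
    if d >= influence or d == 0.0:
        return np.zeros_like(x)
    return -k_rep * (1.0 / d - 1.0 / influence) * diff / d**3

x = np.array([0.0, 0.3, 0.2])                 # end-effector position
goal = np.array([0.5, 0.0, 0.2])
obstacle = np.array([0.25, 0.2, 0.2])
# descend the combined potential to get a velocity command
velocity_cmd = -(attractive_grad(x, goal) + repulsive_grad(x, obstacle))
```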


arxiv video video link (url) DOI Project Page [BibTex]


Innate turning preference of leaf-cutting ants in the absence of external orientation cues

Endlein, T., Sitti, M.

Journal of Experimental Biology, The Company of Biologists Ltd, June 2018 (article)

Abstract
Many ants use a combination of cues for orientation, but how do ants find their way when all external cues are suppressed? Do they walk in a random way or are their movements spatially oriented? Here we show for the first time that leaf-cutting ants (Acromyrmex lundii) have an innate preference for turning counter-clockwise (left) when external cues are precluded. We demonstrated this by allowing individual ants to run freely on the water surface of a newly developed treadmill. The surface tension supported medium-sized workers but effectively prevented ants from reaching the wall of the vessel, which is important to avoid wall-following behaviour (thigmotaxis). Most ants ran for minutes on the spot but also slowly turned counter-clockwise in the absence of visual cues. Reconstructing the effectively walked path revealed a looping pattern which could be interpreted as a search strategy. A similar turning bias was shown for groups of ants in a symmetrical Y-maze, where twice as many ants chose the left branch in the absence of optical cues. Wall-following behaviour was tested by inserting a coiled tube before the Y-fork. When ants traversed a left-coiled tube, more ants chose the left box and vice versa. Adding visual cues in the form of vertical black strips either outside the treadmill or on one branch of the Y-maze led to oriented walks towards the strips. It is suggested that both the turning bias and the wall-following are employed as search strategies for an unknown environment and can be overridden by visual cues.

link (url) DOI [BibTex]


Motility and chemotaxis of bacteria-driven microswimmers fabricated using antigen 43-mediated biotin display

Schauer, O., Mostaghaci, B., Colin, R., Hürtgen, D., Kraus, D., Sitti, M., Sourjik, V.

Scientific Reports, 8(1):9801, Nature Publishing Group, June 2018 (article)

Abstract
Bacteria-driven biohybrid microswimmers (bacteriabots) combine synthetic cargo with motile living bacteria that enable propulsion and steering. Although fabrication and potential use of such bacteriabots have attracted much attention, existing methods of fabrication require an extensive sample preparation that can drastically decrease the viability and motility of bacteria. Moreover, the chemotactic behavior of bacteriabots in a liquid medium with chemical gradients has remained largely unclear. To overcome these shortcomings, we designed Escherichia coli to autonomously display biotin on its cell surface via the engineered autotransporter antigen 43 and thus to bind streptavidin-coated cargo. We show that the cargo attachment to these bacteria is greatly enhanced by motility and occurs predominantly at the cell poles, which is greatly beneficial for the fabrication of motile bacteriabots. We further performed a systematic study to understand and optimize the ability of these bacteriabots to follow chemical gradients. We demonstrate that the chemotaxis of bacteriabots is primarily limited by the cargo-dependent reduction of swimming speed and show that the fabrication of bacteriabots using elongated E. coli cells can be used to overcome this limitation.

link (url) DOI [BibTex]


Multifunctional ferrofluid-infused surfaces with reconfigurable multiscale topography

Wang, W., Timonen, J. V. I., Carlson, A., Drotlef, D., Zhang, C. T., Kolle, S., Grinthal, A., Wong, T., Hatton, B., Kang, S. H., Kennedy, S., Chi, J., Blough, R. T., Sitti, M., Mahadevan, L., Aizenberg, J.

Nature, June 2018 (article)

Abstract
Developing adaptive materials with geometries that change in response to external stimuli provides fundamental insights into the links between the physical forces involved and the resultant morphologies and creates a foundation for technologically relevant dynamic systems. In particular, reconfigurable surface topography as a means to control interfacial properties has recently been explored using responsive gels, shape-memory polymers, liquid crystals and hybrid composites, including magnetically active slippery surfaces. However, these designs exhibit a limited range of topographical changes and thus a restricted scope of function. Here we introduce a hierarchical magneto-responsive composite surface, made by infiltrating a ferrofluid into a microstructured matrix (termed ferrofluid-containing liquid-infused porous surfaces, or FLIPS). We demonstrate various topographical reconfigurations at multiple length scales and a broad range of associated emergent behaviours. An applied magnetic-field gradient induces the movement of magnetic nanoparticles suspended in the ferrofluid, which leads to microscale flow of the ferrofluid first above and then within the microstructured surface. This redistribution changes the initially smooth surface of the ferrofluid (which is immobilized by the porous matrix through capillary forces) into various multiscale hierarchical topographies shaped by the size, arrangement and orientation of the confining microstructures in the magnetic field. We analyse the spatial and temporal dynamics of these reconfigurations theoretically and experimentally as a function of the balance between capillary and magnetic pressures and of the geometric anisotropy of the FLIPS system. Several interesting functions at three different length scales are demonstrated: self-assembly of colloidal particles at the micrometre scale; regulated flow of liquid droplets at the millimetre scale; and switchable adhesion and friction, liquid pumping and removal of biofilms at the centimetre scale. We envision that FLIPS could be used as part of integrated control systems for the manipulation and transport of matter, thermal management, microfluidics and fouling-release materials.

link (url) DOI [BibTex]


Event-triggered Learning for Resource-efficient Networked Control

Solowjow, F., Baumann, D., Garcke, J., Trimpe, S.

In Proceedings of the 2018 American Control Conference (ACC), June 2018 (inproceedings)

arXiv PDF [BibTex]


Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Pisa, Italy, June 2018, Hands-on demonstration presented at EuroHaptics (misc)

Abstract
In this demonstration, you will hold two pen-shaped modules: an in-pen and an out-pen. The in-pen is instrumented with a high-bandwidth three-axis accelerometer, and the out-pen contains a one-axis voice coil actuator. Use the in-pen to interact with different surfaces; the measured 3D accelerations are continually converted into 1D vibrations and rendered with the out-pen for you to feel. You can test conversion methods that range from simply selecting a single axis to applying a discrete Fourier transform or principal component analysis for realistic and brisk real-time conversion.
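A minimal sketch of two of the conversion methods mentioned, picking a single axis and projecting onto the first principal component, assuming windowed accelerometer data; this is illustrative, not the demonstration's code:

```python
# Convert a window of 3-axis accelerations into a single vibration signal.
import numpy as np

def pca_3d_to_1d(acc_xyz):
    """acc_xyz: (N, 3) accelerometer samples -> (N,) 1D signal along the
    direction of maximum vibration energy."""
    centered = acc_xyz - acc_xyz.mean(axis=0)
    # principal axis = right singular vector with the largest singular value
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[0]

def single_axis(acc_xyz, axis=2):
    # simplest conversion mentioned above: select one axis directly
    return acc_xyz[:, axis]

window = np.random.randn(512, 3)          # stand-in for measured samples
out_signal = pca_3d_to_1d(window)         # drive the voice coil with this
```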

[BibTex]


Oncilla robot: a versatile open-source quadruped research robot with compliant pantograph legs

Spröwitz, A., Tuleu, A., Ajallooeian, M., Vespignani, M., Moeckel, R., Eckert, P., D’Haene, M., Degrave, J., Nordmann, A., Schrauwen, B., Steil, J., Ijspeert, A. J.

Frontiers in Robotics and AI, 5(67), June 2018, arXiv: 1803.06259 (article)

Abstract
We present Oncilla robot, a novel mobile, quadruped legged locomotion machine. This large-cat-sized, 5.1 kg robot belongs to a recent class of bioinspired legged robots designed with the capability of model-free locomotion control. Animal legged locomotion in rough terrain is clearly shaped by sensor feedback systems. Results with Oncilla robot show that agile and versatile locomotion is possible without sensory signals to some extent, and tracking becomes robust when feedback control is added (Ajallooeian 2015). By incorporating mechanical and control blueprints inspired from animals, and by observing the resulting robot locomotion characteristics, we aim to understand the contribution of individual components. Legged robots have a wide mechanical and control design parameter space, and a unique potential as research tools to investigate principles of biomechanics and legged locomotion control. But the hardware and controller design can be a steep initial hurdle for academic research. To facilitate an easy start and the development of legged robots, Oncilla robot's blueprints are available through open-source. [...]

link (url) DOI [BibTex]


Designing a Haptic Empathetic Robot Animal for Children with Autism

Burns, R., Kuchenbecker, K. J.

Workshop paper (4 pages) at the RSS Workshop on Robot-Mediated Autism Intervention: Hardware, Software and Curriculum, June 2018 (misc)

link (url) [BibTex]


Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fail with Grace

Heim, S., Spröwitz, A.

Proceedings of the 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), pages: 55-61, IEEE, May 2018 (conference)

link (url) DOI [BibTex]


Self-Sensing Paper Actuators Based on Graphite–Carbon Nanotube Hybrid Films

Amjadi, M., Sitti, M.

Advanced Science, pages: 1800239, May 2018 (article)

Abstract
Soft actuators have demonstrated potential in a range of applications, including soft robotics, artificial muscles, and biomimetic devices. However, the majority of current soft actuators suffer from the lack of real-time sensory feedback, prohibiting their effective sensing and multitask function. Here, a promising strategy is reported to design bilayer electrothermal actuators capable of simultaneous actuation and sensation (i.e., self-sensing actuators), merely through two input electric terminals. Decoupled electrothermal stimulation and strain sensation is achieved by the optimal combination of graphite microparticles and carbon nanotubes (CNTs) in the form of hybrid films. By finely tuning the charge transport properties of hybrid films, the signal-to-noise ratio (SNR) of self-sensing actuators is remarkably enhanced to over 66. As a result, self-sensing actuators can actively track their displacement and distinguish the touch of soft and hard objects.

link (url) DOI [BibTex]


Bioinspired microrobots

Palagi, S., Fischer, P.

Nature Reviews Materials, 3, pages: 113–124, May 2018 (article)

Abstract
Microorganisms can move in complex media, respond to the environment and self-organize. The field of microrobotics strives to achieve these functions in mobile robotic systems of sub-millimetre size. However, miniaturization of traditional robots and their control systems to the microscale is not a viable approach. A promising alternative strategy in developing microrobots is to implement sensing, actuation and control directly in the materials, thereby mimicking biological matter. In this Review, we discuss design principles and materials for the implementation of robotic functionalities in microrobots. We examine different biological locomotion strategies, and we discuss how they can be artificially recreated in magnetic microrobots and how soft materials improve control and performance. We show that smart, stimuli-responsive materials can act as on-board sensors and actuators and that ‘active matter’ enables autonomous motion, navigation and collective behaviours. Finally, we provide a critical outlook for the field of microrobotics and highlight the challenges that need to be overcome to realize sophisticated microrobots, which one day might rival biological machines.

link (url) DOI [BibTex]


On Time Optimization of Centroidal Momentum Dynamics

Ponton, B., Herzog, A., Prete, A. D., Schaal, S., Righetti, L.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, May 2018 (inproceedings)

Abstract
Recently, the centroidal momentum dynamics has received substantial attention to plan dynamically consistent motions for robots with arms and legs in multi-contact scenarios. However, it is also non-convex, which renders any optimization approach difficult, and timing is usually kept fixed in most trajectory optimization techniques to not introduce additional non-convexities to the problem. But this can limit the versatility of the algorithms. In our previous work, we proposed a convex relaxation of the problem that allowed to efficiently compute momentum trajectories and contact forces. However, our approach could not minimize a desired angular momentum objective, which seriously limited its applicability. Noticing that the non-convexity introduced by the time variables is of similar nature as the centroidal dynamics one, we propose two convex relaxations to the problem based on trust regions and soft constraints. The resulting approaches can compute time-optimized dynamically consistent trajectories sufficiently fast to make the approach real-time capable. The performance of the algorithm is demonstrated in several multi-contact scenarios for a humanoid robot. In particular, we show that the proposed convex relaxation of the original problem finds solutions that are consistent with the original non-convex problem and illustrate how timing optimization allows to find motion plans that would be difficult to plan with fixed timing.

video paper [BibTex]


Adversarial Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

Ranjan, A., Jampani, V., Kim, K., Sun, D., Wulff, J., Black, M. J.

May 2018 (article)

Abstract
We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled and, consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other by exploiting known geometric constraints. In order to model geometric constraints, we introduce Adversarial Collaboration, a framework that facilitates competition and collaboration between neural networks. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. Adversarial Collaboration works much like expectation-maximization but with neural networks that act as adversaries, competing to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state of the art results amongst unsupervised methods.


pdf link (url) [BibTex]


Robust Dense Mapping for Large-Scale Dynamic Environments

Barsan, I. A., Liu, P., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, May 2018 (inproceedings)

Abstract
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5 Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work.

pdf Video Project Page [BibTex]


An Online Scalable Approach to Unified Multirobot Cooperative Localization and Object Tracking

Ahmad, A., Lawless, G., Lima, P.

In IEEE International Conference on Robotics and Automation (ICRA) 2018, Journal Track, May 2018 (inproceedings)

Project Page [BibTex]


Online Learning of a Memory for Learning Rates

(nominated for best paper award)

Meier, F., Kappler, D., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, May 2018, accepted (inproceedings)

Abstract
The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling our meta-learner updates its internal memory based on the observed effect its prediction had. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, either in batch or online learning settings.
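As a rough illustration only, the sketch below caricatures the idea of a memory that maps gradient features to learning-rate scalings and is updated from the observed effect of each step; every modeling choice here is an assumption for illustration, not the paper's algorithm:

```python
# Toy online memory that scales gradients for any gradient-based optimizer.
import numpy as np

class LearningRateMemory:
    def __init__(self, n_features=2, base_lr=0.01, eta=0.05):
        self.w = np.zeros(n_features)   # linear memory over gradient features
        self.base_lr, self.eta = base_lr, eta

    def features(self, g):
        # simple hand-picked features of the observed gradient
        return np.array([np.log1p(np.linalg.norm(g)), 1.0])

    def lr(self, g):
        # learning rate predicted by the memory for this gradient
        return self.base_lr * np.exp(self.w @ self.features(g))

    def update(self, g, loss_decreased):
        # reinforce the scaling when the last step helped, damp it otherwise
        sign = 1.0 if loss_decreased else -1.0
        self.w += self.eta * sign * self.features(g)

# usage inside an optimization loop:
#   step = -memory.lr(grad) * grad
#   memory.update(grad, new_loss < old_loss)
```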

pdf video code [BibTex]


Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

arXiv, May 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with fully supervised baselines and outperforms data-driven approaches, while requiring less supervision and being significantly faster.

PDF Project Page [BibTex]


Shaping in Practice: Training Wheels to Learn Fast Hopping Directly in Hardware

Heim, S., Ruppert, F., Sarvestani, A., Spröwitz, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, pages: 5076-5081, IEEE, May 2018 (inproceedings)

Abstract
Learning instead of designing robot controllers can greatly reduce the engineering effort required, while also emphasizing robustness. Despite considerable progress in simulation, applying learning directly in hardware is still challenging, in part due to the necessity to explore potentially unstable parameters. We explore the concept of shaping the reward landscape with training wheels: temporary modifications of the physical hardware that facilitate learning. We demonstrate the concept with a robot leg mounted on a boom learning to hop fast. This proof of concept embodies typical challenges such as instability and contact, while being simple enough to empirically map out and visualize the reward landscape. Based on our results we propose three criteria for designing effective training wheels for learning in robotics.

Video Youtube link (url) [BibTex]


Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

Sutanto, G., Su, Z., Schaal, S., Meier, F.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, May 2018 (inproceedings)

pdf video [BibTex]


Nonlinear decoding of a complex movie from the mammalian retina

Botella-Soler, V., Deny, S., Martius, G., Marre, O., Tkačik, G.

PLOS Computational Biology, 14(5):1-27, Public Library of Science, May 2018 (article)

Abstract
Author summary: Neurons in the retina transform patterns of incoming light into sequences of neural spikes. We recorded from ∼100 neurons in the rat retina while it was stimulated with a complex movie. Using machine learning regression methods, we fit decoders to reconstruct the movie shown from the retinal output. We demonstrated that the retinal code can only be read out with a low error if decoders make use of correlations between successive spikes emitted by individual neurons. These correlations can be used to ignore spontaneous spiking that would, otherwise, cause even the best linear decoders to “hallucinate” nonexistent stimuli. This work represents the first high resolution single-trial full movie reconstruction and suggests a new paradigm for separating spontaneous from stimulus-driven neural activity.
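The decoding setup described above can be pictured as regression from windows of spikes to pixel values; the following sketch uses ridge regression over time-lagged spike features as a stand-in for the paper's family of regression methods (shapes and data are synthetic placeholders):

```python
# Decode one movie pixel from lagged spike counts of all neurons.
import numpy as np

def build_design(spikes, lags):
    """spikes: (T, N) binned spike counts -> (T, N*len(lags)) matrix of
    time-shifted copies of every neuron's spike train (wrap-around at the
    edges is ignored in this sketch)."""
    cols = [np.roll(spikes, -lag, axis=0) for lag in lags]
    return np.concatenate(cols, axis=1)

def ridge_fit(X, y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

T, N = 5000, 100
spikes = (np.random.rand(T, N) < 0.05).astype(float)
pixel = np.random.rand(T)                    # luminance trace of one pixel
X = build_design(spikes, lags=range(-5, 6))  # spikes around each frame
w = ridge_fit(X, pixel)
reconstruction = X @ w                       # decoded pixel trace
```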

DOI [BibTex]


Graphene-silver hybrid devices for sensitive photodetection in the ultraviolet

Paria, D., Jeong, H., Vadakkumbatt, V., Deshpande, P., Fischer, P., Ghosh, A., Ghosh, A.

Nanoscale, 10, pages: 7685-7693, The Royal Society of Chemistry, April 2018 (article)

Abstract
The weak light-matter interaction in graphene can be enhanced with a number of strategies, among which sensitization with plasmonic nanostructures is particularly attractive. This has resulted in the development of graphene-plasmonic hybrid systems with strongly enhanced photodetection efficiencies in the visible and the IR, but none in the UV. Here, we describe a silver nanoparticle-graphene stacked optoelectronic device that shows strong enhancement of its photoresponse across the entire UV spectrum. The device fabrication strategy is scalable and modular. Self-assembly techniques are combined with physical shadow growth techniques to fabricate a regular large-area array of 50 nm silver nanoparticles onto which CVD graphene is transferred. The presence of the silver nanoparticles resulted in a plasmonically enhanced photoresponse as high as 3.2 A W-1 in the wavelength range from 330 nm to 450 nm. At lower wavelengths, close to the Van Hove singularity of the density of states in graphene, we measured an even higher responsivity of 14.5 A W-1 at 280 nm, which corresponds to a more than 10 000-fold enhancement over the photoresponse of native graphene.

link (url) DOI [BibTex]


Nanoparticles on the move for medicine

Fischer, P.

Physics World Focus on Nanotechnology, pages: 26028, (Editors: Margaret Harris), IOP Publishing Ltd and individual contributors, April 2018 (article)

Abstract
Peer Fischer outlines the prospects for creating “nanoswimmers” that can be steered through the body to deliver drugs directly to their targets. Molecules don't move very fast on their own. If they had to rely solely on diffusion – a slow and inefficient process linked to the Brownian motion of small particles and molecules in solution – then a protein molecule, for instance, would take around three weeks to travel a single centimetre down a nerve fibre. This is why active transport mechanisms exist in cells and in the human body: without them, all the processes of life would happen at a pace that would make snails look speedy.

link (url) [BibTex]