

2019


Decoding subcategories of human bodies from both body- and face-responsive cortical regions

Foster, C., Zhao, M., Romero, J., Black, M. J., Mohler, B. J., Bartels, A., Bülthoff, I.

NeuroImage, 202(15):116085, November 2019 (article)

Abstract
Our visual system can easily categorize objects (e.g. faces vs. bodies) and further differentiate them into subcategories (e.g. male vs. female). This ability is particularly important for objects of social significance, such as human faces and bodies. While many studies have demonstrated category selectivity to faces and bodies in the brain, how subcategories of faces and bodies are represented remains unclear. Here, we investigated how the brain encodes two prominent subcategories shared by both faces and bodies, sex and weight, and whether neural responses to these subcategories rely on low-level visual, high-level visual or semantic similarity. We recorded brain activity with fMRI while participants viewed faces and bodies that varied in sex, weight, and image size. The results showed that the sex of bodies can be decoded from both body- and face-responsive brain areas, with the former exhibiting more consistent size-invariant decoding than the latter. Body weight could also be decoded in face-responsive areas and in distributed body-responsive areas, and this decoding was also invariant to image size. The weight of faces could be decoded from the fusiform body area (FBA), and weight could be decoded across face and body stimuli in the extrastriate body area (EBA) and a distributed body-responsive area. The sex of well-controlled faces (e.g. excluding hairstyles) could not be decoded from face- or body-responsive regions. These results demonstrate that both face- and body-responsive brain regions encode information that can distinguish the sex and weight of bodies. Moreover, the neural patterns corresponding to sex and weight were invariant to image size and could sometimes generalize across face and body stimuli, suggesting that such subcategorical information is encoded with a high-level visual or semantic code.
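
A toy illustration of the cross-decoding logic behind the size-invariance claims above (my own sketch, not the study's analysis code): train a linear classifier on multi-voxel response patterns evoked by one image size and test it on patterns evoked by the other. The data, labels, and choice of classifier below are placeholders.

```python
# Cross-size decoding sketch; all "fMRI patterns" are random stand-ins.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = rng.integers(0, 2, n_trials)                      # e.g. male vs. female bodies

# stand-in response patterns for small and large versions of the same stimuli
patterns_small = rng.standard_normal((n_trials, n_voxels)) + 0.3 * labels[:, None]
patterns_large = rng.standard_normal((n_trials, n_voxels)) + 0.3 * labels[:, None]

clf = LinearSVC(C=1.0, max_iter=10000).fit(patterns_small, labels)   # train on one size
print("cross-size decoding accuracy:", clf.score(patterns_large, labels))
```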

ps

paper pdf DOI [BibTex]

Learning Multi-Human Optical Flow

Ranjan, A., Hoffmann, D. T., Tzionas, D., Tang, S., Romero, J., Black, M. J.

arXiv preprint arXiv:1910.1166, November 2019 (article)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Recent optical flow methods focus on training deep networks to approach the problem. However, the training data they use does not cover the domain of human motion. Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset. We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single- and multi-person images. We then train optical flow networks to estimate human flow fields from pairs of images. We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they can generalize well to real image sequences. The code, trained models and the dataset are available for research.

ps

Paper poster link (url) [BibTex]


Active Perception based Formation Control for Multiple Aerial Vehicles

Tallamraju, R., Price, E., Ludwig, R., Karlapalem, K., Bülthoff, H. H., Black, M. J., Ahmad, A.

IEEE Robotics and Automation Letters, 4(4):4491-4498, IEEE, October 2019 (article)

Abstract
We present a novel robotic front-end for autonomous aerial motion-capture (mocap) in outdoor environments. In previous work, we presented an approach for cooperative detection and tracking (CDT) of a subject using multiple micro-aerial vehicles (MAVs). However, it did not ensure optimal view-point configurations of the MAVs to minimize the uncertainty in the person's cooperatively tracked 3D position estimate. In this article, we introduce an active approach for CDT. In contrast to cooperatively tracking only the 3D positions of the person, the MAVs can actively compute optimal local motion plans, resulting in optimal view-point configurations, which minimize the uncertainty in the tracked estimate. We achieve this by decoupling the goal of active tracking into a quadratic objective and non-convex constraints corresponding to angular configurations of the MAVs w.r.t. the person. We derive this decoupling using Gaussian observation model assumptions within the CDT algorithm. We preserve convexity in optimization by embedding all the non-convex constraints, including those for dynamic obstacle avoidance, as external control inputs in the MPC dynamics. Multiple real robot experiments and comparisons involving 3 MAVs in several challenging scenarios are presented.
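
A rough sketch of the convexity-preserving idea described above (my own simplification, not the paper's controller): the tracking objective stays a quadratic program, while the non-convex terms (desired angular configuration around the person, dynamic obstacle avoidance) enter only as a pre-computed external input added to the MPC dynamics. The 2D single-integrator dynamics and use of the cvxpy library are assumptions for illustration.

```python
# Convex MPC with non-convex terms folded in as a fixed external input.
import numpy as np
import cvxpy as cp

T, dt = 10, 0.1                      # horizon length and time step
pos0 = np.array([0.0, 0.0])          # current MAV position (2D for brevity)
target = np.array([3.0, 1.0])        # desired viewpoint around the tracked person
u_ext = np.array([0.2, -0.1])        # pre-computed input from angle keeping / obstacle avoidance

x = cp.Variable((T + 1, 2))          # positions over the horizon
u = cp.Variable((T, 2))              # control inputs

constraints = [x[0] == pos0]
for t in range(T):
    # single-integrator dynamics with the fixed external input added
    constraints += [x[t + 1] == x[t] + dt * (u[t] + u_ext)]
    constraints += [cp.norm(u[t], 2) <= 2.0]            # actuation limit

objective = cp.Minimize(cp.sum_squares(x[T] - target) +
                        0.01 * cp.sum_squares(u))       # quadratic tracking cost
cp.Problem(objective, constraints).solve()
print("first control:", u.value[0])
```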

ps

pdf DOI Project Page [BibTex]

Convolutional neural networks: A magic bullet for gravitational-wave detection?

Gebhard, T., Kilbertus, N., Harry, I., Schölkopf, B.

Physical Review D, 100(6):063015, American Physical Society, September 2019 (article)

ei

link (url) DOI [BibTex]

3D Morphable Face Models - Past, Present and Future

Egger, B., Smith, W. A. P., Tewari, A., Wuhrer, S., Zollhoefer, M., Beeler, T., Bernard, F., Bolkart, T., Kortylewski, A., Romdhani, S., Theobalt, C., Blanz, V., Vetter, T.

arXiv preprint arXiv:1909.01815, September 2019 (article)

Abstract
In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research, and highlighting the broad range of current and future applications.

ps

paper project page [BibTex]

Data scarcity, robustness and extreme multi-label classification

Babbar, R., Schölkopf, B.

Machine Learning, 108(8):1329-1351, September 2019, Special Issue of the ECML PKDD 2019 Journal Track (article)

ei

DOI [BibTex]

Learning and Tracking the 3D Body Shape of Freely Moving Infants from RGB-D sequences

Hesse, N., Pujades, S., Black, M., Arens, M., Hofmann, U., Schroeder, S.

Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019 (article)

Abstract
Statistical models of the human body surface are generally learned from thousands of high-quality 3D scans in predefined poses to cover the wide variety of human body shapes and articulations. Acquisition of such data requires expensive equipment, calibration procedures, and is limited to cooperative subjects who can understand and follow instructions, such as adults. We present a method for learning a statistical 3D Skinned Multi-Infant Linear body model (SMIL) from incomplete, low-quality RGB-D sequences of freely moving infants. Quantitative experiments show that SMIL faithfully represents the RGB-D data and properly factorizes the shape and pose of the infants. To demonstrate the applicability of SMIL, we fit the model to RGB-D sequences of freely moving infants and show, with a case study, that our method captures enough motion detail for General Movements Assessment (GMA), a method used in clinical practice for early detection of neurodevelopmental disorders in infants. SMIL provides a new tool for analyzing infant shape and movement and is a step towards an automated system for GMA.

ps

pdf Journal DOI [BibTex]

Perceptual Effects of Inconsistency in Human Animations

Kenny, S., Mahmood, N., Honda, C., Black, M. J., Troje, N. F.

ACM Trans. Appl. Percept., 16(1):2:1-2:18, February 2019 (article)

Abstract
The individual shape of the human body, including the geometry of its articulated structure and the distribution of weight over that structure, influences the kinematics of a person’s movements. How sensitive is the visual system to inconsistencies between shape and motion introduced by retargeting motion from one person onto the shape of another? We used optical motion capture to record five pairs of male performers with large differences in body weight, while they pushed, lifted, and threw objects. From these data, we estimated both the kinematics of the actions as well as the performer’s individual body shape. To obtain consistent and inconsistent stimuli, we created animated avatars by combining the shape and motion estimates from either a single performer or from different performers. Using these stimuli we conducted three experiments in an immersive virtual reality environment. First, a group of participants detected which of two stimuli was inconsistent. Performance was very low, and results were only marginally significant. Next, a second group of participants rated perceived attractiveness, eeriness, and humanness of consistent and inconsistent stimuli, but these judgements of animation characteristics were not affected by consistency of the stimuli. Finally, a third group of participants rated properties of the objects rather than of the performers. Here, we found strong influences of shape-motion inconsistency on perceived weight and thrown distance of objects. This suggests that the visual system relies on its knowledge of shape and motion and that these components are assimilated into an altered perception of the action outcome. We propose that the visual system attempts to resist inconsistent interpretations of human animations. Actions involving object manipulations present an opportunity for the visual system to reinterpret the introduced inconsistencies as a change in the dynamics of an object rather than as an unexpected combination of body shape and body motion.

ps

publisher pdf DOI [BibTex]

A 32-channel multi-coil setup optimized for human brain shimming at 9.4T

Aghaeifar, A., Zhou, J., Heule, R., Tabibian, B., Schölkopf, B., Jia, F., Zaitsev, M., Scheffler, K.

Magnetic Resonance in Medicine, 2019 (Early View) (article)

ei

DOI [BibTex]

Multidimensional Contrast Limited Adaptive Histogram Equalization

Stimper, V., Bauer, S., Ernstorfer, R., Schölkopf, B., Xian, R. P.

IEEE Access, 7, pages: 165437-165447, 2019 (article)

ei

arXiv link (url) DOI [BibTex]

Enhancing Human Learning via Spaced Repetition Optimization

Tabibian, B., Upadhyay, U., De, A., Zarezade, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the National Academy of Sciences, 2019, published ahead of print January 22, 2019 (article)

ei

DOI Project Page Project Page [BibTex]

Learning to Control Highly Accelerated Ballistic Movements on Muscular Robots

Büchler, D., Calandra, R., Peters, J.

2019 (article) Submitted

Abstract
High-speed and high-acceleration movements are inherently hard to control. Applying learning to the control of such motions on anthropomorphic robot arms can improve the accuracy of the control but might damage the system. The inherent exploration of learning approaches can lead to instabilities and the robot reaching joint limits at high speeds. Having hardware that enables safe exploration of high-speed and high-acceleration movements is therefore desirable. To address this issue, we propose to use robots actuated by Pneumatic Artificial Muscles (PAMs). In this paper, we present a four-degree-of-freedom (DoF) robot arm that reaches high joint-angle accelerations of up to 28000 °/s^2 while avoiding dangerous joint limits thanks to the antagonistic actuation and limits on the air pressure ranges. With this robot arm, we are able to tune control parameters using Bayesian optimization directly on the hardware without additional safety considerations. The achieved tracking performance on a fast trajectory exceeds previous results on comparable PAM-driven robots. We also show that our system can be controlled well on slow trajectories with PID controllers due to careful construction considerations such as minimal bending of cables, lightweight kinematics, and minimal contact between PAMs and between PAMs and the links. Finally, we propose a novel technique to control the co-contraction of antagonistic muscle pairs. Experimental results illustrate that choosing the optimal co-contraction level is vital to reach better tracking performance. Through the use of PAM-driven robots and learning, we take a small step towards the future development of robots capable of more human-like motions.
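
To illustrate the idea of tuning controller parameters with Bayesian optimization directly on the system, here is a toy sketch; the simulated second-order plant, the PD controller, and the use of scikit-optimize's gp_minimize are assumptions for illustration, not the authors' setup.

```python
# Toy Bayesian-optimization tuning of PD gains on a simulated plant.
from skopt import gp_minimize

def tracking_cost(gains):
    kp, kd = gains
    # simulate a damped second-order system tracking a unit step with a PD controller
    x, v, cost, dt = 0.0, 0.0, 0.0, 0.01
    for _ in range(500):
        u = kp * (1.0 - x) - kd * v
        v += dt * (u - 0.5 * v)
        x += dt * v
        cost += dt * (1.0 - x) ** 2      # integrated squared tracking error
    return cost

result = gp_minimize(tracking_cost, [(0.1, 50.0), (0.01, 10.0)],
                     n_calls=30, random_state=0)
print("best gains:", result.x, "cost:", result.fun)
```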

ei

Arxiv Video [BibTex]


The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements

Pujades, S., Mohler, B., Thaler, A., Tesch, J., Mahmood, N., Hesse, N., Bülthoff, H. H., Black, M. J.

IEEE Transactions on Visualization and Computer Graphics, 25, pages: 1887-1897, IEEE, 2019 (article)

Abstract
Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and difficulty in making accurate tailoring measurements. We overcome these challenges by creating “The Virtual Caliper”, which uses VR game controllers to make simple measurements. First, we establish what body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to rapidly create avatars in fewer than five minutes. Not only is our approach more rapid than existing methods, it exports a metrically accurate 3D avatar model that is rigged and skinned.
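
The key empirical point above, that a few distance measurements are approximately linearly related to SMPL shape parameters, amounts to fitting a linear map from measurements to shape coefficients. The sketch below illustrates that with synthetic stand-in data and scikit-learn; it is only an illustration, not the released Virtual Caliper software.

```python
# Linear map from body measurements to shape coefficients, on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_people, n_measurements, n_betas = 200, 6, 10
betas = rng.standard_normal((n_people, n_betas))              # stand-in shape coefficients
W = rng.standard_normal((n_measurements, n_betas))            # stand-in linear relationship
measurements = betas @ W.T + 0.01 * rng.standard_normal((n_people, n_measurements))

model = LinearRegression().fit(measurements, betas)           # measurements -> shape
print(model.predict(measurements[:1]).round(2))               # predicted betas for one person
```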

ps

Project Page IEEE Open Access IEEE Open Access PDF DOI [BibTex]

Inferring causation from time series with perspectives in Earth system sciences

Runge, J., Bathiany, S., Bollt, E., Camps-Valls, G., Coumou, D., Deyle, E., Glymour, C., Kretschmer, M., Mahecha, M., van Nes, E., Peters, J., Quax, R., Reichstein, M., Scheffer, M. S. B., Spirtes, P., Sugihara, G., Sun, J., Zhang, K., Zscheischler, J.

Nature Communications, 2019 (article) In revision

ei

[BibTex]

Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces

Klus, S., Schuster, I., Muandet, K.

Journal of Nonlinear Science, 2019, First Online: 21 August 2019 (article)

ei

DOI [BibTex]


2015


Scalable Robust Principal Component Analysis using Grassmann Averages

Hauberg, S., Feragen, A., Enficiaud, R., Black, M.

IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), December 2015 (article)

Abstract
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
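
A minimal sketch of the subspace-averaging idea described above (my own rendering under simplifying assumptions, not the released source code): each zero-mean observation is treated as a weighted one-dimensional subspace, the sign ambiguity of each subspace is resolved against the current estimate, and the weighted average is renormalized until convergence.

```python
# Grassmann Average sketch: average of 1D subspaces spanned by zero-mean data.
import numpy as np

def grassmann_average(X, n_iter=20, seed=0):
    """X: (n_samples, n_features) zero-mean data. Returns a unit vector."""
    rng = np.random.default_rng(seed)
    weights = np.linalg.norm(X, axis=1)            # "length" of each 1D subspace
    U = X / np.maximum(weights[:, None], 1e-12)    # unit vector per subspace
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        signs = np.sign(U @ q)                     # resolve each subspace's sign ambiguity
        signs[signs == 0] = 1.0
        q_new = (signs * weights) @ U              # weighted average of aligned unit vectors
        q = q_new / np.linalg.norm(q_new)
    return q

X = np.random.default_rng(1).standard_normal((500, 10))
X -= X.mean(axis=0)
print(grassmann_average(X)[:3])
```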

ps sf

preprint pdf from publisher supplemental Project Page [BibTex]

Quantifying changes in climate variability and extremes: Pitfalls and their overcoming

Sippel, S., Zscheischler, J., Heimann, M., Otto, F. E. L., Peters, J., Mahecha, M. D.

Geophysical Research Letters, 42(22):9990-9998, November 2015 (article)

ei

DOI [BibTex]

Diversity of sharp wave-ripple LFP signatures reveals differentiated brain-wide dynamical events

Ramirez-Villegas, J. F., Logothetis, N. K., Besserve, M.

Proceedings of the National Academy of Sciences U.S.A, 112(46):E6379-E6387, November 2015 (article)

ei

DOI [BibTex]

SMPL: A Skinned Multi-Person Linear Model

Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., Black, M. J.

ACM Trans. Graphics (Proc. SIGGRAPH Asia), 34(6):248:1-248:16, ACM, New York, NY, October 2015 (article)

Abstract
We present a learned model of human body shape and pose-dependent shape variation that is more accurate than previous models and is compatible with existing graphics pipelines. Our Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses. The parameters of the model are learned from data including the rest pose template, blend weights, pose-dependent blend shapes, identity-dependent blend shapes, and a regressor from vertices to joint locations. Unlike previous models, the pose-dependent blend shapes are a linear function of the elements of the pose rotation matrices. This simple formulation enables training the entire model from a relatively large number of aligned 3D meshes of different people in different poses. We quantitatively evaluate variants of SMPL using linear or dual-quaternion blend skinning and show that both are more accurate than a Blend-SCAPE model trained on the same data. We also extend SMPL to realistically model dynamic soft-tissue deformations. Because it is based on blend skinning, SMPL is compatible with existing rendering engines and we make it available for research purposes.
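
A schematic of the model structure the abstract describes, with random stand-in parameters rather than the released SMPL model: identity- and pose-dependent blend shapes are added to a template mesh, and the result is posed with standard linear blend skinning. As stated above, the pose blend shapes are a linear function of the elements of the pose rotation matrices.

```python
# Schematic SMPL-like forward pass with random stand-in parameters.
import numpy as np

V, J, K = 6890, 24, 207                           # vertices, joints, pose-feature dim (23*9)
rng = np.random.default_rng(0)
template   = rng.standard_normal((V, 3))          # rest-pose template mesh (stand-in)
shape_dirs = rng.standard_normal((V, 3, 10))      # identity-dependent blend shapes
pose_dirs  = rng.standard_normal((V, 3, K))       # pose-dependent blend shapes
weights    = np.abs(rng.standard_normal((V, J)))
weights   /= weights.sum(axis=1, keepdims=True)   # skinning weights

def smpl_like(betas, pose_rotmats, joint_transforms):
    """betas: (10,), pose_rotmats: (23, 3, 3), joint_transforms: (J, 4, 4)."""
    pose_feat = (pose_rotmats - np.eye(3)).reshape(-1)        # rotation elements minus identity
    v_shaped = template + shape_dirs @ betas + pose_dirs @ pose_feat
    T = np.einsum('vj,jab->vab', weights, joint_transforms)   # per-vertex blended transforms
    v_hom = np.concatenate([v_shaped, np.ones((V, 1))], axis=1)
    return np.einsum('vab,vb->va', T, v_hom)[:, :3]

betas = 0.1 * rng.standard_normal(10)
pose = np.tile(np.eye(3), (23, 1, 1))             # rest pose
transforms = np.tile(np.eye(4), (J, 1, 1))        # identity joint transforms
print(smpl_like(betas, pose, transforms).shape)   # (6890, 3)
```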

ps

pdf video code/model errata DOI Project Page Project Page [BibTex]

Noise masking of White’s illusion exposes the weakness of current spatial filtering models of lightness perception

Betz, T., Shapley, R. M., Wichmann, F. A., Maertens, M.

Journal of Vision, 15(14):1-17, October 2015 (article)

ei

DOI [BibTex]

Shifts of Gamma Phase across Primary Visual Cortical Sites Reflect Dynamic Stimulus-Modulated Information Transfer

Besserve, M., Lowe, S. C., Logothetis, N. K., Schölkopf, B., Panzeri, S.

PLOS Biology, 13(9):e1002257, September 2015 (article)

ei

DOI [BibTex]

Semi-Supervised Interpolation in an Anticausal Learning Scenario

Janzing, D., Schölkopf, B.

Journal of Machine Learning Research, 16, pages: 1923-1948, September 2015 (article)

ei

link (url) [BibTex]

Dyna: A Model of Dynamic Human Shape in Motion

Pons-Moll, G., Romero, J., Mahmood, N., Black, M. J.

ACM Transactions on Graphics (Proc. SIGGRAPH), 34(4):120:1-120:14, ACM, August 2015 (article)

Abstract
To look human, digital full-body avatars need to have soft tissue deformations like those of real people. We learn a model of soft-tissue deformations from examples using a high-resolution 4D capture system and a method that accurately registers a template mesh to sequences of 3D scans. Using over 40,000 scans of ten subjects, we learn how soft tissue motion causes mesh triangles to deform relative to a base 3D body model. Our Dyna model uses a low-dimensional linear subspace to approximate soft-tissue deformation and relates the subspace coefficients to the changing pose of the body. Dyna uses a second-order auto-regressive model that predicts soft-tissue deformations based on previous deformations, the velocity and acceleration of the body, and the angular velocities and accelerations of the limbs. Dyna also models how deformations vary with a person’s body mass index (BMI), producing different deformations for people with different shapes. Dyna realistically represents the dynamics of soft tissue for previously unseen subjects and motions. We provide tools for animators to modify the deformations and apply them to new stylized characters.
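
The second-order auto-regressive component mentioned above can be sketched as follows; the dimensions, coefficients, and motion features are toy stand-ins, not the learned Dyna model. The next soft-tissue subspace coefficients are predicted from the two previous coefficient vectors plus current body-motion features.

```python
# Toy second-order auto-regressive prediction of soft-tissue subspace coefficients.
import numpy as np

rng = np.random.default_rng(0)
d_coeff, d_feat = 30, 12                        # subspace dim, motion-feature dim
A1 = 0.6 * np.eye(d_coeff)                      # weight on previous deformation
A2 = 0.2 * np.eye(d_coeff)                      # weight on the one before that
B  = 0.05 * rng.standard_normal((d_coeff, d_feat))

def predict_next(prev, prev2, motion_feat):
    """Second-order AR prediction of the next soft-tissue coefficients."""
    return A1 @ prev + A2 @ prev2 + B @ motion_feat

coeffs = [np.zeros(d_coeff), np.zeros(d_coeff)]
for t in range(5):
    feat = rng.standard_normal(d_feat)          # stand-in for velocities/accelerations
    coeffs.append(predict_next(coeffs[-1], coeffs[-2], feat))
print(coeffs[-1][:3])
```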

ps

pdf preprint video data DOI Project Page Project Page [BibTex]

Testing the role of luminance edges in White’s illusion with contour adaptation

Betz, T., Shapley, R. M., Wichmann, F. A., Maertens, M.

Journal of Vision, 15(11):1-16, August 2015 (article)

ei

DOI [BibTex]

Linking Objects to Actions: Encoding of Target Object and Grasping Strategy in Primate Ventral Premotor Cortex

Vargas-Irwin, C. E., Franquemont, L., Black, M. J., Donoghue, J. P.

Journal of Neuroscience, 35(30):10888-10897, July 2015 (article)

Abstract
Neural activity in ventral premotor cortex (PMv) has been associated with the process of matching perceived objects with the motor commands needed to grasp them. It remains unclear how PMv networks can flexibly link percepts of objects affording multiple grasp options into a final desired hand action. Here, we use a relational encoding approach to track the functional state of PMv neuronal ensembles in macaque monkeys through the process of passive viewing, grip planning, and grasping movement execution. We used objects affording multiple possible grip strategies. The task included separate instructed delay periods for object presentation and grip instruction. This approach allowed us to distinguish responses elicited by the visual presentation of the objects from those associated with selecting a given motor plan for grasping. We show that PMv continuously incorporates information related to object shape and grip strategy as it becomes available, revealing a transition from a set of ensemble states initially most closely related to objects, to a new set of ensemble patterns reflecting unique object-grip combinations. These results suggest that PMv dynamically combines percepts, gradually navigating toward activity patterns associated with specific volitional actions, rather than directly mapping perceptual object properties onto categorical grip representations. Our results support the idea that PMv is part of a network that dynamically computes motor plans from perceptual information. Significance Statement: The present work demonstrates that the activity of groups of neurons in primate ventral premotor cortex reflects information related to visually presented objects, as well as the motor strategy used to grasp them, linking individual objects to multiple possible grips. PMv could provide useful control signals for neuroprosthetic assistive devices designed to interact with objects in a flexible way.

ps

publisher link DOI Project Page [BibTex]

Blind multirigid retrospective motion correction of MR images

Loktyushin, A., Nickisch, H., Pohmann, R., Schölkopf, B.

Magnetic Resonance in Medicine, 73(4):1457-1468, April 2015 (article)

ei

DOI [BibTex]

Multi-view and 3D Deformable Part Models

Pepik, B., Stark, M., Gehler, P., Schiele, B.

Pattern Analysis and Machine Intelligence, 37(11):14, IEEE, March 2015 (article)

Abstract
As objects are inherently 3-dimensional, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different level of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

ps

DOI Project Page [BibTex]

A quantum advantage for inferring causal structure

Ried, K., Agnew, M., Vermeyden, L., Janzing, D., Spekkens, R. W., Resch, K. J.

Nature Physics, 11(5):414-420, March 2015 (article)

Abstract
The problem of inferring causal relations from observed correlations is relevant to a wide variety of scientific disciplines. Yet given the correlations between just two classical variables, it is impossible to determine whether they arose from a causal influence of one on the other or a common cause influencing both. Only a randomized trial can settle the issue. Here we consider the problem of causal inference for quantum variables. We show that the analogue of a randomized trial, causal tomography, yields a complete solution. We also show that, in contrast to the classical case, one can sometimes infer the causal structure from observations alone. We implement a quantum-optical experiment wherein we control the causal relation between two optical modes, and two measurement schemes—with and without randomization—that extract this relation from the observed correlations. Our results show that entanglement and quantum coherence provide an advantage for causal inference.

ei

DOI [BibTex]

Spike train SIMilarity Space (SSIMS): A framework for single neuron and ensemble data analysis

Vargas-Irwin, C. E., Brandman, D. M., Zimmermann, J. B., Donoghue, J. P., Black, M. J.

Neural Computation, 27(1):1-31, MIT Press, January 2015 (article)

Abstract
We present a method to evaluate the relative similarity of neural spiking patterns by combining spike train distance metrics with dimensionality reduction. Spike train distance metrics provide an estimate of similarity between activity patterns at multiple temporal resolutions. Vectors of pair-wise distances are used to represent the intrinsic relationships between multiple activity patterns at the level of single units or neuronal ensembles. Dimensionality reduction is then used to project the data into concise representations suitable for clustering analysis as well as exploratory visualization. Algorithm performance and robustness are evaluated using multielectrode ensemble activity data recorded in behaving primates. We demonstrate how Spike train SIMilarity Space (SSIMS) analysis captures the relationship between goal directions for an 8-directional reaching task and successfully segregates grasp types in a 3D grasping task in the absence of kinematic information. The algorithm enables exploration of virtually any type of neural spiking (time series) data, providing similarity-based clustering of neural activity states with minimal assumptions about potential information encoding models.
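
The pipeline described above, pairwise spike-train distances followed by dimensionality reduction, can be sketched as below; the specific distance (a Victor-Purpura-style edit distance) and embedding (metric MDS) are illustrative assumptions, not necessarily the paper's exact choices.

```python
# Pairwise spike-train distances followed by a low-dimensional embedding.
import numpy as np
from sklearn.manifold import MDS

def spike_train_distance(s1, s2, q=10.0):
    """Edit-distance between spike-time arrays: unit cost to add/delete, q*|dt| to shift."""
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)
    D[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,                                   # delete a spike
                          D[i, j - 1] + 1,                                   # insert a spike
                          D[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))  # shift a spike
    return D[n, m]

rng = np.random.default_rng(0)
trains = [np.sort(rng.uniform(0, 1, rng.integers(5, 15))) for _ in range(20)]
dist = np.array([[spike_train_distance(a, b) for b in trains] for a in trains])
embedding = MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)
print(embedding.shape)        # 20 trials embedded in 2D, ready for clustering/plotting
```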

ps

pdf: publisher site pdf: author's proof DOI Project Page [BibTex]

Positive definite matrices and the S-divergence

Sra, S.

Proceedings of the American Mathematical Society, 2015, Published electronically: October 22, 2015 (article)

ei

DOI [BibTex]

Structural Intervention Distance (SID) for Evaluating Causal Graphs

Peters, J., Bühlmann, P.

Neural Computation, 27(3):771-799, 2015 (article)

ei

DOI [BibTex]

Likelihood and Consilience: On Forster’s Counterexamples to the Likelihood Theory of Evidence

Zhang, J., Zhang, K.

Philosophy of Science, Supplementary Volume 2015, 82(5):930-940, 2015 (article)

ei

DOI [BibTex]

Crowdsourced analysis of clinical trial data to predict amyotrophic lateral sclerosis progression

Küffner, R., Zach, N., Norel, R., Hawe, J., Schoenfeld, D., Wang, L., Li, G., Fang, L., Mackey, L., Hardiman, O., Cudkowicz, M., Sherman, A., Ertaylan, G., Grosse-Wentrup, M., Hothorn, T., van Ligtenberg, J., Macke, J., Meyer, T., Schölkopf, B., Tran, L., Vaughan, R., Stolovitzky, G., Leitner, M.

Nature Biotechnology, 33, pages: 51-57, 2015 (article)

ei

DOI [BibTex]

Probabilistic Interpretation of Linear Solvers

Hennig, P.

SIAM Journal on Optimization, 25(1):234-260, 2015 (article)

ei pn

Web PDF link (url) DOI [BibTex]

Developing biorobotics for veterinary research into cat movements

Mariti, C., Muscolo, G., Peters, J., Puig, D., Recchiuto, C., Sighieri, C., Solanas, A., von Stryk, O.

Journal of Veterinary Behavior: Clinical Applications and Research, 10(3):248-254, 2015 (article)

ei

DOI [BibTex]

Spatial statistics and attentional dynamics in scene viewing

Engbert, R., Trukenbrod, H., Barthelmé, S., Wichmann, F.

Journal of Vision, 15(1):1-17, 2015 (article)

ei

Web PDF link (url) DOI [BibTex]

The Randomized Causation Coefficient

Lopez-Paz, D., Muandet, K., Recht, B.

Journal of Machine Learning Research, 16, pages: 2901-2907, 2015 (article)

ei

link (url) [BibTex]

Towards denoising XMCD movies of fast magnetization dynamics using extended Kalman filter

Kopp, M., Harmeling, S., Schütz, G., Schölkopf, B., Fähnle, M.

Ultramicroscopy, 148, pages: 115-122, 2015 (article)

Abstract
The Kalman filter is a well-established approach for obtaining information on the time-dependent state of a system from noisy observations. It was developed in the context of the Apollo project to track the deviation of the true trajectory of a rocket from the desired trajectory. Afterwards it was applied to many different systems with small numbers of components in the respective state vector (typically about 10). In all cases the equation of motion for the state vector was known exactly. Fast dissipative magnetization dynamics is often investigated with x-ray magnetic circular dichroism movies (XMCD movies), which are often very noisy. In this situation the number of components of the state vector is extremely large (about 10^5), and the equation of motion for the dissipative magnetization dynamics (especially the values of the material parameters of this equation) is not well known. In the present paper it is shown by theoretical considerations that, nevertheless, there is no problem in principle with using the Kalman filter to denoise XMCD movies of fast dissipative magnetization dynamics.
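
For readers unfamiliar with the recursion, a generic scalar Kalman filter with a random-walk state model looks like the sketch below; this is only the textbook predict/update loop, not the paper's XMCD-specific formulation with its very high-dimensional state.

```python
# Textbook scalar Kalman filter: predict/update loop for a noisy time series.
import numpy as np

def kalman_1d(observations, q=1e-3, r=1e-1):
    """Denoise a scalar series: q = process-noise variance, r = observation-noise variance."""
    x, p = observations[0], 1.0          # initial state estimate and its variance
    out = []
    for z in observations:
        p = p + q                        # predict step (random-walk state model)
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update with the new noisy observation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

t = np.linspace(0, 1, 200)
noisy = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.default_rng(0).standard_normal(200)
print(kalman_1d(noisy)[:5])
```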

ei

Web DOI [BibTex]

Artificial intelligence: Learning to see and act

Schölkopf, B.

Nature, News & Views, 518(7540):486-487, 2015 (article)

ei

DOI [BibTex]

Context affects lightness at the level of surfaces

Maertens, M., Wichmann, F., Shapley, R.

Journal of Vision, 15(1):1-15, 2015 (article)

ei

Web PDF link (url) DOI [BibTex]

Genome-wide analysis of local chromatin packing in Arabidopsis thaliana

Wang, C., Liu, C., Roqueiro, D., Grimm, D., Schwab, R., Becker, C., Lanz, C., Weigel, D.

Genome Research, 25(2):246-256, 2015 (article)

ei

PDF DOI [BibTex]

Segmentation-based attenuation correction in positron emission tomography/magnetic resonance: erroneous tissue identification and its impact on positron emission tomography interpretation

Brendle, C., Schmidt, H., Oergel, A., Bezrukov, I., Mueller, M., Schraml, C., Pfannenberg, C., la Fougère, C., Nikolaou, K., Schwenzer, N.

Investigative Radiology, 50(5):339-346, 2015 (article)

ei

DOI [BibTex]

Active Reward Learning with a Novel Acquisition Function

Daniel, C., Kroemer, O., Viering, M., Metz, J., Peters, J.

Autonomous Robots, 39(3):389-405, 2015 (article)

am ei

link (url) DOI [BibTex]

Metric Regression Forests for Correspondence Estimation

Pons-Moll, G., Taylor, J., Shotton, J., Hertzmann, A., Fitzgibbon, A.

International Journal of Computer Vision, pages: 1-13, 2015 (article)

ps

springer PDF Project Page [BibTex]
