Institute Talks

Self-Supervised Representation Learning for Visual Behavior Analysis and Synthesis

Talk
  • 14 December 2018 • 12:00–13:00
  • Prof. Dr. Björn Ommer
  • PS Aquarium

Understanding objects and their behavior from images and videos is a difficult inverse problem. It requires learning a metric in image space that reflects object relations in the real world. This metric learning problem calls for large volumes of training data. While images and videos are easily available, labels are not, thus motivating self-supervised metric and representation learning. Furthermore, I will present a widely applicable strategy based on deep reinforcement learning to improve the surrogate tasks underlying self-supervision. Thereafter, the talk will cover the learning of disentangled representations that explicitly separate different object characteristics. Our approach is based on an analysis-by-synthesis paradigm and can generate novel object instances with flexible changes to individual characteristics such as their appearance and pose. It nicely addresses diverse applications in human and animal behavior analysis, a topic on which we collaborate intensively with neuroscientists. Time permitting, I will discuss the disentangling of representations from a wider perspective, including novel strategies for image stylization and for regularizing the latent space of generator networks.

Organizers: Joel Janai

Generating Faces & Heads: Texture, Shape and Beyond.

Talk
  • 17 December 2018 • 11:00–12:00
  • Stefanos Zafeiriou
  • PS Aquarium

Over the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of large amounts of visual data, it has been shown that excellent results can be produced in very challenging tasks, such as visual object recognition, detection, tracking etc. Nevertheless, in certain tasks such as fine-grained object recognition (e.g., face recognition) it is very difficult to collect the amount of data that is needed. In this talk, I will show how, using DCNNs, we can generate highly realistic faces and heads and use them for training algorithms such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a very powerful face recognition network, once trained, can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. Finally, I will demonstrate how to create very lightweight networks for representing 3D face texture and shape structure by capitalising upon intrinsic mesh convolutions.

Organizers: Dimitris Tzionas

Deep learning on 3D face reconstruction, modelling and applications

Talk
  • 19 December 2018 • 11:00–12:00
  • Yao Feng
  • PS Aquarium

In this talk, I will present my understanding of 3D face reconstruction, modelling and applications from a deep learning perspective. In the first part of my talk, I will discuss the relationship between representations (point clouds, meshes, etc.) and network layers (CNN, GCN, etc.) on the face reconstruction task, then present my ECCV work PRN, which proposed a new representation that helps achieve state-of-the-art performance on face reconstruction and dense alignment tasks. I will also introduce my open-source project face3d, which provides examples for generating different 3D face representations. In the second part of the talk, I will discuss publications on integrating 3D techniques into deep networks and then introduce my upcoming work that implements this. In the third part, I will present how related tasks can promote each other in deep learning, including face recognition for the face reconstruction task and face reconstruction for the face anti-spoofing task. Finally, building on these three parts, I will present my plans for 3D face modelling and applications.

Organizers: Timo Bolkart

Mind Games

IS Colloquium
  • 21 December 2018 • 11:00–12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, where agents need imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and that concern inference, such as the recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially observable Markov decision process, and will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and anti-social personality disorders.

TBA

IS Colloquium
  • 28 January 2019 • 11:15–12:15
  • Florian Marquardt

Organizers: Matthias Bauer

  • Gerard Pons-Moll
  • MPH Lecture Hall

Significant progress has been made over the last years in estimating people's shape and motion from video; nonetheless, the problem remains unsolved. This is especially true in uncontrolled environments, such as people in the streets or in the office, where background clutter and occlusions make the problem even more challenging.
The goal of our research is to develop computational methods that enable human pose estimation from video and inertial sensors in indoor and outdoor environments. Specifically, I will focus on one of our past projects in which we introduce a hybrid human motion capture system that combines video input with sparse inertial sensor input. Employing a particle-based optimization scheme, our idea is to use orientation cues derived from the inertial input to sample particles from the manifold of valid poses. Additionally, we introduce a novel sensor noise model, based on the von Mises-Fisher distribution, to account for uncertainties. In this way, orientation constraints are naturally fulfilled and the number of particles needed can be kept very small. More generally, our method can be used to sample poses that fulfill arbitrary orientation or positional kinematic constraints. In the experiments, we show that our system can track even highly dynamic motions in an outdoor environment with changing illumination, background clutter, and shadows.
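For reference, the von Mises-Fisher distribution mentioned above is the standard exponential-family distribution on the unit sphere; a minimal statement of its density (how exactly it enters the sensor noise model is detailed in the underlying paper) is

\[
p(\mathbf{x}\mid\boldsymbol{\mu},\kappa) \;=\; C_p(\kappa)\,\exp\!\big(\kappa\,\boldsymbol{\mu}^{\top}\mathbf{x}\big),
\qquad
C_p(\kappa) \;=\; \frac{\kappa^{p/2-1}}{(2\pi)^{p/2}\, I_{p/2-1}(\kappa)},
\]

where \(\mathbf{x}\) and the mean direction \(\boldsymbol{\mu}\) are unit vectors in \(\mathbb{R}^p\), \(\kappa \ge 0\) is a concentration parameter (larger \(\kappa\) corresponds to less orientation noise), and \(I_v\) is the modified Bessel function of the first kind.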


What Makes Big Visual Data Hard?

Talk
  • 29 April 2013 • 13:15
  • Alexei Efros
  • MPH Lecture Hall

There are an estimated 3.5 trillion photographs in the world, of which 10% have been taken in the past 12 months. Facebook alone reports 6 billion photo uploads per month. Every minute, 72 hours of video are uploaded to YouTube. Cisco estimates that in the next few years, visual data (photos and video) will account for over 85% of total internet traffic. Yet, we currently lack effective computational methods for making sense of all this mass of visual data. Unlike easily indexed content, such as text, visual content is not routinely searched or mined; it's not even hyperlinked. Visual data is the Internet's "digital dark matter" [Perona, 2010] -- it's just sitting there!

In this talk, I will first discuss some of the unique challenges that make Big Visual Data difficult compared to other types of content. In particular, I will argue that the central problem is the lack of a good measure of similarity for visual data. I will then present some of our recent work that aims to address this challenge in the context of visual matching, image retrieval and visual data mining. As an application of the latter, we used Google Street View data for an entire city in an attempt to answer that age-old question which has been vexing poets (and poets-turned-geeks): "What makes Paris look like Paris?"


  • Cristobal Curio

Studying the interface between artificial and biological vision is an area of research that has been promoted for a long time. It seems promising that cognitive science can provide new ideas to interface computer vision and human perception, yet no established design principles exist. In the first part of my talk I am going to introduce the novel concept of 'object detectability'. Object detectability refers to a measure of how likely a human observer is to be visually aware of the location and presence of specific object types in a complex, dynamic, urban scene.

We have shown a proof of concept of how to maximize human observers' scene awareness in a dynamic driving context. Nonlinear functions are learnt from experimental samples of a combined feature vector of human gaze and visual features mapping to object detectabilities. We obtain object detectabilities through a detection experiment, simulating a proxy task of distracted real-world driving. In order to specifically enhance overall pedestrian detectability in a dynamic scene, the sum of individual detectability predictors defines a complex cost function that we seek to optimize with respect to human gaze. Results show significantly increased human scene awareness in hazardous test situations comparing optimized gaze and random fixation. Thus, our approach can potentially help a driver to save reaction time and resolve a risky maneuvre. In our framework, the remarkable ability of the human visual system to detect specific objects in the periphery has been implicitly characterized by our perceptual detectability task and has thus been taken into account.
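As a rough illustration only (variable names and the choice of regressor are mine, not the authors'), the learn-then-optimize loop described above can be sketched as: fit a nonlinear regressor from (gaze position, object features) to detectability, then search for the gaze position that maximizes the summed predicted detectability of all objects in the scene.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical training data: each sample pairs a normalised 2D gaze position
    # with the visual features of one pedestrian and the detectability measured
    # for that pedestrian in the detection experiment.
    rng = np.random.default_rng(0)
    gaze_train = rng.uniform(0.0, 1.0, size=(500, 2))
    object_feats_train = rng.normal(size=(500, 8))
    detectability = rng.uniform(0.0, 1.0, size=500)

    model = GradientBoostingRegressor().fit(
        np.hstack([gaze_train, object_feats_train]), detectability)

    def total_detectability(gaze, object_features):
        # Sum of predicted detectabilities of all objects for one gaze position.
        X = np.hstack([np.tile(gaze, (len(object_features), 1)), object_features])
        return model.predict(X).sum()

    # Optimise the cost function over a coarse grid of candidate fixation points.
    objects_in_scene = rng.normal(size=(4, 8))          # features of 4 pedestrians
    candidates = [(x, y) for x in np.linspace(0, 1, 20) for y in np.linspace(0, 1, 20)]
    best_gaze = max(candidates,
                    key=lambda g: total_detectability(np.array(g), objects_in_scene))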

The framework may provide a foundation for future work to determine what kind of information a Computer Vision system should process reliably, e.g. certain pose or motion features, in order to optimally alert a driver in time-critical situations. Dynamic image data was taken from the Caltech Pedestrian database. I will conclude with a brief overview of recent work, including a new circular output random regression forest for continuous object viewpoint estimation and a novel learning-based, monocular odometry approach based on robust LVMs and sensorimotor learning, offering stable 3D information integration. Last but not least, I present results of a perception experiment to quantify emotion in estimated facial movement synergy components that can be exploited to control emotional content of 3D avatars in a perceptually meaningful way.

This work was done in particular with David Engel (now a Post-Doc at M.I.T.), Christian Herdtweck (a PhD student at MPI Biol. Cybernetics), and in collaboration with Prof. Martin A. Giese and Dr. Enrico Chiovetto, Center for Integrated Neuroscience, Tübingen.


  • Oisin Mac Aodha

We present a supervised learning based method to estimate a per-pixel confidence for optical flow vectors. Regions of low texture and pixels close to occlusion boundaries are known to be difficult for optical flow algorithms. Using a spatiotemporal feature vector, we estimate if a flow algorithm is likely to fail in a given region.

Our method is not restricted to any specific class of flow algorithm, and does not make any scene-specific assumptions. By automatically learning this confidence we can combine the output of several computed flow fields from different algorithms to select the best performing algorithm per pixel. Our optical flow confidence measure allows one to achieve better overall results by discarding the most troublesome pixels. We illustrate the effectiveness of our method on four different optical flow algorithms over a variety of real and synthetic sequences. For algorithm selection, we achieve the top overall results on a large test set, and at times even surpass the results of the best algorithm among the candidates.
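A minimal sketch of the idea, with made-up feature dimensions and a random-forest classifier standing in for whatever learner is actually used: predict per-pixel flow correctness from spatiotemporal features, then fuse several candidate flow fields by keeping, at every pixel, the flow of the algorithm with the highest predicted confidence.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-pixel spatiotemporal features (texture measures, gradients,
    # temporal differences, ...) with a binary label saying whether the flow vector
    # at that pixel was within some end-point-error threshold of the ground truth.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(10000, 12))
    flow_is_correct = rng.integers(0, 2, size=10000)

    clf = RandomForestClassifier(n_estimators=100).fit(features, flow_is_correct)

    def flow_confidence(per_pixel_features):
        # Probability that the flow vector at each pixel is correct.
        return clf.predict_proba(per_pixel_features)[:, 1]

    # Algorithm selection: at every pixel keep the flow of whichever candidate
    # algorithm received the highest predicted confidence.
    n_pixels = 10000
    feats_by_algo = [rng.normal(size=(n_pixels, 12)) for _ in range(3)]
    flows_by_algo = np.stack([rng.normal(size=(n_pixels, 2)) for _ in range(3)])
    conf = np.stack([flow_confidence(f) for f in feats_by_algo])   # shape (3, n_pixels)
    best = conf.argmax(axis=0)
    fused_flow = flows_by_algo[best, np.arange(n_pixels)]          # shape (n_pixels, 2)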


  • Andreas Müller

Semantic image segmentation is the task of assigning semantic labels to the pixels of a natural image. It is an important step towards general scene understanding and has lately received much attention in the computer vision community. It has been found that detailed annotations of images are helpful for solving this task, but obtaining accurate and consistent annotations still proves difficult on a large scale. One possible way forward is to work with partial supervision and latent variable models to infer semantic annotations from the data during training.

The talk will present two approaches working with partial supervision for image segmentation. The first uses an efficient multi-instance formulation to obtain object class segmentations when trained on class labels alone. The second uses a latent CRF formulation to extract object parts based on object class segmentation.
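As a toy illustration of the multi-instance idea (not the authors' exact formulation): each image is a bag of superpixel instances, the bag-level class score is a max over instance scores so it can be supervised with image-level class labels alone, and the per-superpixel scores then yield the segmentation.

    import numpy as np

    def mil_image_scores(superpixel_scores):
        # Multi-instance pooling: an image (bag) is positive for a class if at
        # least one of its superpixels (instances) is; the max over instances
        # approximates this "or", so image-level labels suffice for supervision.
        return superpixel_scores.max(axis=0)

    # Hypothetical per-superpixel class scores for one image (n_superpixels x n_classes).
    scores = np.random.default_rng(0).normal(size=(150, 21))
    image_level = mil_image_scores(scores)   # compared against the image's class labels during training
    segmentation = scores.argmax(axis=1)     # per-superpixel labels fall out as the segmentation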


From Particle Stereo to Scene Stereo

Talk
  • 25 February 2013
  • Carsten Rother

In this talk I will present two lines of research which are both applied to the problem of stereo matching. The first line of research tries to make progress on the very traditional problem of stereo matching. In BMVC 11 we presented the PatchMatchStereo work, which achieves surprisingly good results with a simple energy function consisting of unary terms only. As optimization engine we used the PatchMatch method, which was designed for image editing purposes. In BMVC 12 we extended this work by adding the standard pairwise smoothness terms to the energy function. The main contribution of this work is the optimization technique, which we call PatchMatch-BeliefPropagation (PMBP). It is a special case of max-product Particle Belief Propagation, with a new sampling schema motivated by PatchMatch.

The method may be suitable for many energy minimization problems in computer vision which have a non-convex, continuous and potentially high-dimensional label space. The second line of research combines the problem of stereo matching with the problem of object extraction in the scene. We show that both tasks can be solved jointly, boosting the performance of each individual task. In particular, stereo matching improves since objects have to obey physical properties, e.g. they are not allowed to fly in the air. Object extraction improves, as expected, since we have additional information about depth in the scene.
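In energy terms, the progression described above can be summarized schematically (the notation is mine) as

\[
E(D) \;=\; \sum_{p} \rho\big(p, d_p\big) \;+\; \lambda \sum_{(p,q)\in\mathcal{N}} \psi\big(d_p, d_q\big),
\]

where \(d_p\) is the (continuous, potentially high-dimensional) label of pixel \(p\), \(\rho\) is the unary patch-dissimilarity term, and \(\psi\) is a pairwise smoothness term over neighbouring pixels \(\mathcal{N}\). PatchMatchStereo optimizes only the first sum; PMBP adds the second and performs max-product particle belief propagation with PatchMatch-style sampling of the particles.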


  • Oren Freifeld

Three-dimensional object shape is commonly represented in terms of deformations of a triangular mesh from an exemplar shape. In particular, statistical generative models of human shape deformation are widely used in computer vision, graphics, ergonomics, and anthropometry. Existing statistical models, however, are based on a Euclidean representation of shape deformations. In contrast, we argue that shape has a manifold structure: For example, averaging the shape deformations for two people does not necessarily yield a meaningful shape deformation, nor does the Euclidean difference of these two deformations provide a meaningful measure of shape dissimilarity. Consequently, we define a novel manifold for shape representation, with emphasis on body shapes, using a new Lie group of deformations. This has several advantages.

First, we define triangle deformations exactly, removing non-physical deformations and redundant degrees of freedom common to previous methods. Second, the Riemannian structure of Lie Bodies enables a more meaningful definition of body shape similarity by measuring distance between bodies on the manifold of body shape deformations. Third, the group structure allows the valid composition of deformations.

This is important for models that factor body shape deformations into multiple causes or represent shape as a linear combination of basis shapes. Similarly, interpolation between two mesh deformations results in a meaningful third deformation. Finally, body shape variation is modeled using statistics on manifolds. Instead of modeling Euclidean shape variation with Principal Component Analysis, we capture shape variation on the manifold using Principal Geodesic Analysis. Our experiments show consistent visual and quantitative advantages of Lie Bodies over traditional Euclidean models of shape deformation, and our representation can be easily incorporated into existing methods. This project is part of a larger effort that brings together statistics and geometry to model statistics on manifolds.

Our research on manifold-valued statistics addresses the problem of modeling statistics in curved feature spaces. We try to find the geometrically most natural representations that respect the constraints; e.g. by modeling the data as belonging to a Lie group or a Riemannian manifold. We take a geometric approach as this keeps the focus on good distance measures, which are essential for good statistics. I will also present some recent unpublished results related to statistics on manifolds with broad application.
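To make "statistics on manifolds" slightly more concrete, the usual recipe behind Principal Geodesic Analysis (stated here in generic form; the specifics of the Lie Bodies case are in the paper) is to replace the Euclidean mean and PCA by their intrinsic counterparts:

\[
\bar{x} \;=\; \arg\min_{x\in\mathcal{M}} \sum_{i=1}^{N} d_{\mathcal{M}}(x, x_i)^2,
\qquad
v_i \;=\; \mathrm{Log}_{\bar{x}}(x_i)\in T_{\bar{x}}\mathcal{M},
\]

i.e. compute the Karcher mean \(\bar{x}\), map the data into the tangent space at \(\bar{x}\) with the logarithm map, run PCA on the tangent vectors \(v_i\), and map the principal directions back to the manifold with the exponential map to obtain principal geodesics.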


  • Cordelia Schmid

We first address the problem of large-scale image classification. We present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual-words approach for any given vector dimension. We show and interpret the importance of an appropriate vector normalization.
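For orientation, one standard formulation from the Fisher-vector literature (not necessarily the exact variant used in this work): the component of the Fisher vector associated with the mean of the \(k\)-th Gaussian of the visual vocabulary is

\[
\mathcal{G}_{\mu_k} \;=\; \frac{1}{N\sqrt{w_k}} \sum_{i=1}^{N} \gamma_i(k)\,\frac{x_i-\mu_k}{\sigma_k},
\]

where \(\gamma_i(k)\) is the posterior of Gaussian \(k\) for local descriptor \(x_i\), and \(w_k, \mu_k, \sigma_k\) are its weight, mean, and standard deviation. The normalization referred to above is typically power normalization, \(z \mapsto \mathrm{sign}(z)\,|z|^{\alpha}\) with \(\alpha \approx 1/2\), followed by L2 normalization of the whole vector.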

Furthermore, we discuss how to learn with stochastic gradient descent given a large number of classes and images, and show results on ImageNet10k. We then present a weakly supervised approach for learning human actions modeled as interactions between humans and objects.

Our approach is human-centric: we first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated (only) with the action label.

Finally, we present work on learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos.


  • Pradeep Krishna Yarlagadda

The grand goal of Computer Vision is to generate an automatic description of an image based on its visual content. Category level object detection is an important building block towards such capability. The first part of this talk deals with three established object detection techniques in Computer Vision, their shortcomings and how they are improved. i) Hough Voting methods efficiently handle the high complexity of multi-scale, category-level object detection in cluttered scenes.

However, the primary weakness of this approach is that mutually dependent local observations independently vote for intrinsically global object properties such as object scale. We model the feature dependencies by presenting an objective function that combines various intimately related problems in Hough Voting. ii) Shape is a highly prominent characteristic of objects that human vision utilizes for detecting objects. However, shape poses significant challenges for object detection in cluttered scenes: object form is an emergent property that cannot be perceived locally but becomes available only once the whole object has been detected. Thus we address the detection of objects and the assembly of their shape simultaneously in a Max-Margin Multiple Instance Learning framework, while avoiding fragile bottom-up grouping in query images altogether. iii) Chamfer matching is a widely used technique for detecting objects because of its speed. However, it treats objects as being a mere sum of the distance transformation of all their contour pixels. Also, spurious matches in background clutter are a huge problem for chamfer matching. We address these two issues by a) applying a discriminative approach to distance transformation computation in chamfer matching and b) estimating the accidentalness of a foreground template match with a small dictionary of simple background contours.
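The "mere sum" criticism of chamfer matching refers to its standard score: with \(T\) the set of template contour points placed at a candidate location and \(DT_I\) the distance transform of the query edge map,

\[
d_{\mathrm{chamfer}}(T, I) \;=\; \frac{1}{|T|}\sum_{t\in T} DT_I(t),
\]

so every contour pixel contributes independently, which is exactly what the discriminative distance-transform computation and the background-contour dictionary described above are designed to counteract.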

The second part of the talk explores the question: what insights can automatic object detection and intra-category object relationships bring to art historians? It turns out that techniques from Computer Vision have helped art historians in discovering different artistic workshops within an Upper German manuscript, understanding the variations of art within a particular school of design, and studying the transitions across artistic styles via a 1-D ordering of objects. Obtaining such insights manually is a tedious task, and Computer Vision has made the job of art historians easier.

Related Publications:

1. Pradeep Yarlagadda and Björn Ommer. From Meaningful Contours to Discriminative Object Shape. ECCV 2012.

2. Pradeep Yarlagadda, Angela Eigenstetter and Björn Ommer. Learning Discriminative Chamfer Regularization. BMVC 2012.

3. Pradeep Yarlagadda, Antonio Monroy and Björn Ommer. Voting by Grouping Dependent Parts. ECCV 2010.

4. Pradeep Yarlagadda, Antonio Monroy, Bernd Carque and Björn Ommer. Recognition and Analysis of Objects in Medieval Images. ACCV (e-heritage) 2010.

5. Pradeep Yarlagadda, Antonio Monroy, Bernd Carque and Björn Ommer. Top-down Analysis of Low-level Object Relatedness Leading to Semantic Understanding of Medieval Image Collections. Computer Vision and Image Analysis of Art, SPIE 2010.


  • Andreas Geiger

Navigating a car safely through complex environments is considered a relatively easy task for humans. Computer algorithms, however, come nowhere near matching human performance and often rely on 3D laser scanners or detailed maps. The reason for this is that the level and accuracy of current computer vision and scene understanding algorithms are still far from those of a human being. In this talk I will argue that pushing these limits requires solving a set of core computer vision problems, ranging from low-level tasks (stereo, optical flow) to high-level problems (object detection, 3D scene understanding).

First, I will introduce the KITTI datasets and benchmarks, which provide accurate ground truth for evaluating stereo, optical flow, SLAM and 3D object detection/tracking on realistic video sequences. Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world.

Second, I will propose a novel generative model for 3D scene understanding that is able to reason jointly about the scene layout (topology and geometry of streets) as well as the location and orientation of objects. By using context from this model, performance of state-of-the-art object detectors in terms of estimating object orientation can be significantly increased.

Finally, I will give an outlook on how prior information in form of large-scale community-driven maps (OpenStreetMap) can be used in the context of 3D scene understanding.