Machine learning with artificial neural networks is revolutionizing science. The most advanced challenges require discovering answers autonomously. In the domain of reinforcement learning, control strategies are improved according to a reward function. The power of neural-network-based reinforcement learning has been highlighted by spectacular recent successes such as playing Go, but its benefits for physics are yet to be demonstrated. Here, we show how a network-based "agent" can discover complete quantum-error-correction strategies, protecting a collection of qubits against noise. These strategies require feedback adapted to measurement outcomes. Finding them from scratch without human guidance and tailored to different hardware resources is a formidable challenge due to the combinatorially large search space. To solve this challenge, we develop two ideas: two-stage learning with teacher and student networks and a reward quantifying the capability to recover the quantum information stored in a multiqubit system. Beyond its immediate impact on quantum computation, our work more generally demonstrates the promise of neural-network-based reinforcement learning in physics.
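The core reinforcement-learning loop alluded to here — improving a control strategy from nothing but a scalar reward — can be illustrated in miniature with a policy-gradient (REINFORCE) update on a two-armed bandit. This is a generic sketch with made-up reward probabilities, not the networks, two-stage learning scheme, or recovery-based reward of the paper:

```python
import numpy as np

# Minimal policy-gradient (REINFORCE) loop on a two-armed bandit: the
# "agent" improves its strategy using only a scalar reward signal.
rng = np.random.default_rng(0)
theta = 0.0                               # logit of choosing arm 1
arm_reward_prob = np.array([0.2, 0.8])    # hypothetical reward probabilities
lr = 0.5

for step in range(5000):
    p1 = 1.0 / (1.0 + np.exp(-theta))     # policy: P(arm 1)
    a = int(rng.random() < p1)            # sample an action
    r = float(rng.random() < arm_reward_prob[a])  # stochastic 0/1 reward
    grad_logp = (1.0 - p1) if a == 1 else -p1     # d/dtheta log pi(a)
    theta += lr * r * grad_logp           # REINFORCE update

# After training, the policy strongly prefers the better arm.
```

The update nudges the policy toward actions that happened to be rewarded; in the paper this same principle operates over sequences of quantum gates and measurements rather than bandit arms.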
Organizers: Matthias Bauer
Gaussian process regression is a non-parametric Bayesian machine learning paradigm in which, instead of estimating the parameters of fixed-form functions, we model the unknown functions themselves as Gaussian processes. Gaussian processes are also commonly used for representing uncertainties in models of dynamic systems in many applications such as tracking, navigation, and automatic control systems. The latter models are often formulated as state-space models, where the use of non-linear Kalman-filter-type methods is common. The aim of this talk is to discuss the connections between Kalman filtering methods and Gaussian process regression. In particular, I discuss representations of Gaussian processes as state-space models, which enable the use of computationally efficient Kalman-filter-based (or more generally Bayesian-filter-based) solutions to Gaussian process regression problems. This also allows for computationally efficient inference in latent force models (LFMs), which combine first-principles mechanical models with non-parametric Gaussian process regression models.
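The state-space connection can be made concrete for the Matérn-1/2 (Ornstein-Uhlenbeck) kernel: the GP is the solution of a linear SDE, so its regression posterior is reproduced exactly by a scalar Kalman filter plus RTS smoother in O(n) time instead of O(n³). The sketch below, with synthetic data and illustrative hyperparameters, verifies the equivalence numerically:

```python
import numpy as np

# GP regression with the OU kernel k(t,t') = s2 * exp(-|t-t'|/ell), solved
# two ways: direct O(n^3) linear algebra vs. an O(n) Kalman filter/smoother.
s2, ell, noise = 1.0, 0.5, 0.1 ** 2
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 4.0, 30))
y = np.sin(2 * t) + rng.normal(0.0, 0.1, t.size)

# --- direct GP regression: posterior mean at the training inputs ---
K = s2 * np.exp(-np.abs(t[:, None] - t[None, :]) / ell)
gp_mean = K @ np.linalg.solve(K + noise * np.eye(t.size), y)

# --- equivalent state-space model ---
# x_{k+1} = a_k x_k + q_k,  a_k = exp(-dt_k/ell),  Var(q_k) = s2 (1 - a_k^2)
m, P = 0.0, s2                      # stationary prior
fm, fP, preds = [], [], []
prev_t = t[0]
for k in range(t.size):
    a = np.exp(-(t[k] - prev_t) / ell)
    m, P = a * m, a * a * P + s2 * (1.0 - a * a)      # predict
    preds.append((m, P, a))
    gain = P / (P + noise)                            # update
    m, P = m + gain * (y[k] - m), (1.0 - gain) * P
    fm.append(m)
    fP.append(P)
    prev_t = t[k]

# Rauch-Tung-Striebel smoother: recovers the full GP posterior mean
sm, sP = fm[-1], fP[-1]
smoothed = [sm]
for k in range(t.size - 2, -1, -1):
    mp, Pp, a = preds[k + 1]
    G = fP[k] * a / Pp
    sm = fm[k] + G * (sm - mp)
    sP = fP[k] + G * G * (sP - Pp)
    smoothed.append(sm)
smoothed = np.array(smoothed[::-1])
# smoothed agrees with gp_mean to numerical precision
```

Higher-order Matérn kernels admit the same treatment with a small vector-valued state instead of a scalar one.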
Organizers: Philipp Hennig
(joint work with Jan C. Neddermeyer) A technique for online estimation of spot volatility for high-frequency data is developed. The algorithm works directly on the transaction data and updates the volatility estimate immediately after the occurrence of a new transaction. Furthermore, a nonlinear market microstructure noise model is proposed that reproduces several stylized facts of high-frequency data. A computationally efficient particle filter is used that allows for the approximation of the unknown efficient prices and, in combination with a recursive EM algorithm, for the estimation of the volatility curve. We neither assume that the transaction times are equidistant nor do we use interpolated prices. We also distinguish between volatility per time unit and volatility per transaction and provide estimators for both. More precisely, we use a model with a random time change in which spot volatility is decomposed into spot volatility per transaction times the trading intensity, thus highlighting the influence of trading intensity on volatility.
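The filtering idea can be sketched in a much-simplified form. The toy below does not reproduce the talk's actual noise model or the recursive EM step: it runs a bootstrap particle filter that recovers a latent efficient log-price from transaction prices observed only on a rounding grid (a crude stand-in for microstructure noise), with the per-transaction volatility assumed known rather than estimated:

```python
import numpy as np

# Toy bootstrap particle filter for a latent efficient price under
# rounding-type microstructure noise (illustrative parameters throughout).
rng = np.random.default_rng(1)
n, n_particles = 200, 500
sigma_tr = 0.02     # spot volatility per transaction (assumed known here)
tick = 0.05         # price grid inducing the microstructure noise

x_true = np.cumsum(rng.normal(0.0, sigma_tr, n))  # efficient log-price
y = np.round(x_true / tick) * tick                # observed prices

particles = rng.normal(0.0, sigma_tr, n_particles)
est = np.empty(n)
for k in range(n):
    particles = particles + rng.normal(0.0, sigma_tr, n_particles)  # propagate
    # likelihood: the observation is the latent price rounded to the grid
    w = np.where(np.abs(particles - y[k]) <= tick / 2, 1.0, 1e-12)
    w /= w.sum()
    est[k] = np.sum(w * particles)                # filtered price estimate
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
# est tracks x_true to well within one tick on average
```

In the random-time-change view, the per-transaction variance used here would be multiplied by the trading intensity to obtain volatility per time unit.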
Organizers: Michel Besserve
Our ability to understand a scene is central to how we interact with our environment and with each other. Classic research on visual scene perception has focused on how people "know what is where by looking", but this talk will explore people's ability to infer the "hows" and "whys" of their world, and in particular, how they form a physical understanding of a scene. From a glance we can know so much: not only what objects are where, but whether they are movable, fragile, slimy, or hot; whether they were made by hand, by machine, or by nature; whether they are broken and how they could be repaired; and so on. I posit that these common-sense physical intuitions are made possible by the brain's sophisticated capacity for constructing and manipulating a rich mental representation of a scene via a mechanism of approximate probabilistic simulation -- in short, a physics engine in the head. I will present a series of recent and ongoing studies that develop and test this computational model in a variety of prediction, inference, and planning tasks. Our model captures various aspects of people's experimental judgments, including the accuracy of their performance as well as several illusions and errors. These results help explain core aspects of human mental models that are instrumental to how we understand and act in our everyday world. They also open new directions for developing robotic and AI systems that can perceive, reason, and act the way people do.
Organizers: Michel Besserve
This talk will give an overview of some of the research in the Image and Video Computing Group at Boston University related to image- and video-based analysis of humans and their behavior, including: tracking humans, localizing and classifying actions in space-time, exploiting contextual cues in action classification, estimating human pose from images, analyzing the communicative behavior of children in video, and sign language recognition and retrieval.
Collaborators in this work include (in alphabetical order): Vassilis Athitsos, Qinxun Bai, Margrit Betke, R. Gokberk Cinbis, Kun He, Nazli Ikizler-Cinbis, Hao Jiang, Liliana Lo Presti, Shugao Ma, Joan Nash, Carol Neidle, Agata Rozga, Tai-peng Tian, Ashwin Thangali, Zheng Wu, and Jianming Zhang.
Organizers: Gerard Pons-Moll
This talk presents our 3D video production method, by which a user can watch a real game from any free viewpoint. Players in the game are captured by 10 cameras and reproduced three-dimensionally in real time using a billboard-based representation. In producing the 3D video, we have also worked on a user interface that enables people to move the camera intuitively. As the speaker also works on a wide variety of topics, from computer vision to augmented reality, selected recent works will be introduced briefly as well.
Dr. Yoshinari Kameda began his research with human pose estimation as his Ph.D. thesis and has since expanded his interests to computer vision, human interfaces, and augmented reality.
He is now an associate professor at the University of Tsukuba.
He is also a member of the Center for Computational Science at the University of Tsukuba, where several outstanding supercomputers are in operation.
He served as an area chair of the International Symposium on Mixed and Augmented Reality for four years (2007-2010).
3D reconstruction from 2D still images (Structure-from-Motion) has reached maturity, and together with new image acquisition devices such as Micro Aerial Vehicles (MAVs), new and interesting application scenarios arise. However, acquiring an image set suited for a complete and accurate reconstruction is a non-trivial task even for expert users. To overcome this problem, we propose two different methods. In the first part of the talk, we will present an SfM method that performs sparse reconstruction of 10-Mpx still images and surface extraction from sparse and noisy 3D point clouds in real time. To this end, we developed a novel, efficient image localisation method and a robust surface extraction method that works in a fully incremental manner directly on sparse 3D points, without a densification step. Real-time feedback on reconstruction quality then enables the user to control the acquisition process interactively. In the second part, we will present ongoing work on a novel view planning method that is designed to deliver a set of images that can be processed by today's multi-view reconstruction pipelines.
This talk will highlight recent progress on two fronts. First, we will talk about a novel image-conditioned person model that allows for effective articulated pose estimation in realistic scenarios. Second, we describe our work towards activity recognition and the ability to describe video content with natural language.
Both efforts are part of a longer-term agenda towards visual scene understanding. While visual scene understanding has long been advocated as the "holy grail" of computer vision, we believe it is time to address this challenge again, based on the progress in recent years.
In this talk, I will show that, given probabilities of presence of people at various locations in individual time frames, finding the most likely set of trajectories amounts to solving a linear program that depends on very few parameters.
This can be done without requiring appearance information and in real time, by using the K-Shortest Paths algorithm (KSP). However, this can result in unwarranted identity switches in complex scenes. In such cases, sparse image information can be used within the Linear Programming framework to keep track of people's identities, even when their paths come close to each other or intersect. By sparse, we mean that the appearance needs to be discriminative in only a very limited number of frames, which makes our approach widely applicable.
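For a single target, the linear program reduces to a shortest-path problem on a trellis whose edge costs are negative log occupancy probabilities, and the optimum can be found by dynamic programming. The toy sketch below uses hypothetical occupancy probabilities and a one-cell-per-frame motion constraint; KSP extends this idea to several node-disjoint trajectories at once:

```python
import numpy as np

# P[t, l]: probability that someone occupies location l at frame t
# (hypothetical numbers for a 4-frame, 3-location toy problem).
P = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
    [0.05, 0.05, 0.90],
])
T, L = P.shape
cost = -np.log(P)                    # shortest path <=> most likely trajectory

dp = cost[0].copy()
back = np.zeros((T, L), dtype=int)
for t in range(1, T):
    new = np.empty(L)
    for l in range(L):
        preds = [j for j in range(L) if abs(j - l) <= 1]  # reachable cells
        j = min(preds, key=lambda p: dp[p])
        back[t, l] = j
        new[l] = dp[j] + cost[t, l]
    dp = new

path = [int(np.argmin(dp))]          # backtrack the optimal trajectory
for t in range(T - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
print(path)  # -> [0, 1, 2, 2]
```

The LP formulation matters precisely because, for multiple interacting targets, such independent per-target dynamic programs are no longer sufficient.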
Manifold learning techniques attempt to map a high-dimensional space onto a lower-dimensional one. From a mathematical point of view, a manifold is a topological Hausdorff space that is locally Euclidean. From a machine learning point of view, we can interpret this embedded manifold as the underlying support of the data distribution. When dealing with high-dimensional data sets, nonlinear dimensionality reduction methods can provide a more faithful data representation than linear ones. However, the local geometrical distortion induced by the nonlinear mapping leads to a loss of information and affects interpretability, with a negative impact on model visualization results.
This talk will discuss an approach involving probabilistic nonlinear dimensionality reduction through Gaussian Process Latent Variable Models. The main focus is on the intrinsic geometry of the model itself as a tool to improve the exploration of the latent space and to recover the information lost due to dimensionality reduction. We aim to analytically quantify and visualize the distortion due to dimensionality reduction in order to improve the performance of the model and to interpret the data in a more faithful way.
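The distortion being quantified can be illustrated with any smooth nonlinear embedding: the Jacobian J of the map from latent to data space pulls back a metric G = JᵀJ on the latent space, and the magnification factor √det G measures local stretching. The sketch below uses a hand-picked toy map in place of the GP-LVM posterior mean mapping:

```python
import numpy as np

# Toy nonlinear map from a 2-D latent space to 3-D; its Jacobian induces a
# Riemannian metric on the latent space whose volume element quantifies the
# local distortion introduced by the embedding.
def f(x):
    u, v = x
    return np.array([u, v, np.sin(u) * np.cos(v)])

def magnification(x, eps=1e-6):
    J = np.empty((3, 2))
    for i in range(2):                      # central-difference Jacobian
        d = np.zeros(2)
        d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    G = J.T @ J                             # pulled-back metric tensor
    return float(np.sqrt(np.linalg.det(G)))

print(magnification(np.array([0.0, np.pi / 2])))  # ~1: locally undistorted
print(magnification(np.array([0.0, 0.0])))        # >1: locally stretched
```

Visualizing this factor over the latent space flags regions where latent distances misrepresent distances on the data manifold, which is exactly the interpretability issue raised above.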
In collaboration with: N.D. Lawrence (University of Sheffield), A. Vellido (UPC)