Our paper "Dissecting ADAM: The Sign, Magnitude and Variance of Stochastic Gradients" has been accepted at ICML (arXiv). Code is available on GitHub.
I am a Ph.D. student in the Probabilistic Numerics Group, where I work on optimization methods for machine learning. Optimization algorithms are the workhorses of contemporary machine learning: they are where the numbers get crunched! Intriguingly, numerical optimizers are themselves compact little "learning machines": they make decisions (where to evaluate next, how many and which data points to use) based on observations (function values and gradients, typically corrupted by noise due to mini-batch subsampling). My goal is to design smarter optimizers!
Currently, my work centers on estimating the stochastic gradient variance, a measure of how "noisy" stochastic gradients are. I believe that using this quantity to make optimizers aware of the stochasticity of their evaluations can improve various aspects of stochastic optimization algorithms. For example, gradient variance estimates can be used to adaptively choose "good" batch sizes when performing stochastic gradient descent [ ]. Element-wise variance estimates can also be used to manipulate the search direction itself by "damping" coordinate directions with a low signal-to-noise ratio [ ].
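As a rough illustration of these two ingredients, here is a minimal NumPy sketch of an element-wise gradient variance estimate computed from per-example gradients, together with a signal-to-noise damping of the search direction. This is my own illustrative code, not an implementation from the cited papers; the function names and the exact form of the damping factor are assumptions.

```python
import numpy as np

def gradient_statistics(per_example_grads):
    """Element-wise mean and variance of the mini-batch gradient.

    per_example_grads: array of shape (batch_size, num_params),
    one gradient per training example in the mini-batch.
    """
    batch_size = per_example_grads.shape[0]
    grad_mean = per_example_grads.mean(axis=0)
    # Variance of the mini-batch *mean* gradient, hence the division
    # by the batch size.
    grad_var = per_example_grads.var(axis=0, ddof=1) / batch_size
    return grad_mean, grad_var

def snr_damped_direction(grad_mean, grad_var, eps=1e-12):
    """Damp coordinates with a low signal-to-noise ratio.

    Illustrative damping factor: mean^2 / (mean^2 + variance), which
    shrinks a coordinate towards zero when the noise dominates.
    """
    snr_factor = grad_mean**2 / (grad_mean**2 + grad_var + eps)
    return snr_factor * grad_mean
```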
Prior to joining the Max Planck Institute as a Ph.D. student, I studied Mathematics (B.Sc.) and Scientific Computing (M.Sc.) at Heidelberg University and spent some time as a visiting student at Tsinghua University in Beijing.
Optimization problems arising in intelligent systems are similar to those studied in other fields (such as operations research, control, and computational physics). But they also have a few prominent features that are not addressed particularly well by classic optimization methods.
In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018. Accepted.
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well; sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.
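To make the sign/variance decomposition concrete, here is a small NumPy sketch (my own illustration, not the paper's implementation): the usual ADAM-style step, built from moving averages m of the gradient and v of its square, is rewritten as a sign times a magnitude that depends only on an estimate of the relative gradient variance. Bias correction is omitted and the small constants are assumptions.

```python
import numpy as np

def adam_step(m, v, lr=1e-3, eps=1e-8):
    """Plain ADAM-style step from moving averages m (of g) and v (of g^2)."""
    return -lr * m / (np.sqrt(v) + eps)

def sign_and_variance_view(m, v, lr=1e-3, eps=1e-8):
    """Approximately the same step, written as sign times variance-adapted magnitude.

    Since v - m^2 estimates the gradient variance, |m| / sqrt(v) equals
    1 / sqrt(1 + var / m^2): per-weight step lengths shrink where the
    relative variance of the stochastic gradient is large.
    """
    direction = -np.sign(m)
    relative_variance = np.maximum(v - m**2, 0.0) / (m**2 + eps)
    magnitude = lr / np.sqrt(1.0 + relative_variance)
    return direction * magnitude
```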
In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 2017, pages 410-419 (Editors: Gal Elidan and Kristian Kersting), Association for Uncertainty in Artificial Intelligence (AUAI), August 2017.
Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization, such as the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance of the gradient estimates. This variance also changes over the optimization process; when using a constant batch size, stability and convergence are thus often enforced by means of a (manually tuned) decreasing learning rate schedule. We propose a practical method for dynamic batch size adaptation. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance proportionally to the value of the objective function, removing the need for the aforementioned learning rate decrease. In contrast to recent related work, our algorithm couples the batch size to the learning rate, directly reflecting the known relationship between the two. On three image classification benchmarks, our batch size adaptation yields faster optimization convergence, while simultaneously simplifying learning rate tuning. A TensorFlow implementation is available.
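The following NumPy sketch shows what such a coupling of batch size, gradient variance, objective value, and learning rate could look like. It is illustrative only; the actual rule, its derivation, and its constants are those of the paper and its TensorFlow implementation, and all names and bounds below are assumptions.

```python
import numpy as np

def suggest_batch_size(per_example_grads, loss_value, learning_rate,
                       min_batch=16, max_batch=4096):
    """Illustrative batch-size rule in the spirit of the abstract.

    Estimates the total variance of the stochastic gradient from
    per-example gradients and proposes a batch size that couples the
    learning rate to the ratio of gradient variance and objective value,
    so that the mini-batch gradient variance shrinks with the loss.
    """
    # Sum over coordinates of the per-example gradient variances.
    total_variance = per_example_grads.var(axis=0, ddof=1).sum()
    proposal = learning_rate * total_variance / max(loss_value, 1e-12)
    return int(np.clip(np.ceil(proposal), min_batch, max_batch))
```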
Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization. To find a good point to halt the optimizer, a common practice is to split the dataset into a training and a smaller validation set to obtain an ongoing estimate of the generalization performance. In this paper we propose a novel early stopping criterion which is based on fast-to-compute, local statistics of the computed gradients and entirely removes the need for a held-out validation set. Our experiments show that this is a viable approach in the setting of least-squares and logistic regression as well as neural networks.
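As a rough sketch of what a stopping decision based only on local gradient statistics could look like, the snippet below compares each coordinate's squared mini-batch gradient to the variance of that estimate and halts once, on average, the signal no longer exceeds its own noise level. This is an illustrative assumption of mine; the criterion actually derived in the paper is more careful, and the threshold and function name are hypothetical.

```python
import numpy as np

def stop_training(per_example_grads, threshold=1.0):
    """Illustrative stopping check using only local gradient statistics.

    Stops when the average squared gradient coordinate is no larger than
    the estimated variance of that coordinate, i.e. when the gradient is
    statistically hard to distinguish from pure mini-batch noise.
    """
    batch_size = per_example_grads.shape[0]
    grad_mean = per_example_grads.mean(axis=0)
    grad_var = per_example_grads.var(axis=0, ddof=1) / batch_size
    signal_to_noise = grad_mean**2 / (grad_var + 1e-12)
    return signal_to_noise.mean() < threshold
```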