Long Beach, California, June 11, 2019 – The Organizing Committee of the International Conference on Machine Learning (ICML) announced today that the Best Paper Award 2019 has gone to the authors of the publication "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations". The research project is the result of a collaboration between scientists from the Max Planck Institute for Intelligent Systems, ETH Zurich, and Google Research Zurich. A total of 750 contributions were accepted at this year's ICML, one of the world's leading conferences in the field of machine learning. Best Paper Awards are a high distinction: they go only to the research expected to have the greatest impact in its field.
"This award shows that European scientists are at the forefront of modern AI research," says Bernhard Schölkopf, Director at the Max Planck Institute for Intelligent Systems in Tübingen and one of the world's leading researchers in the field of machine learning.
Francesco Locatello, a Ph.D. student at both MPI-IS and ETH Zurich, Stefan Bauer, research group leader at MPI-IS, Gunnar Rätsch from ETH Zurich, and Bernhard Schölkopf, together with Sylvain Gelly, Mario Lucic, and Olivier Bachem, all researchers at Google Research in Zurich, worked on the project.
In this collaboration, Francesco Locatello visited Google Research Zurich, where he worked closely with Olivier Bachem, a Research Scientist in the Brain team, to run large-scale experiments on Google's computing infrastructure.
"It has been an amazing opportunity and we hope that the follow-up work will have similar success," says lead author Locatello. "The collaboration with Google and the opportunity to use the infrastructure to train more than 10,000 models was a key advantage," he adds. "On a good desktop computer, this would have taken 2.5 years of continuous computation."
In their research, the scientists train a computer with many thousands of images. From this large data set, deep learning approaches aim to identify patterns. An example: the researchers feed the computer thousands of images showing colored objects of different shapes. The objects, such as squares, hearts, circles, and rectangles, appear in colors like green, red, and blue. The images used are very simple: they have only a few pixels and are two-dimensional.
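The kind of data set described above can be pictured with a short sketch. This is an illustrative toy example, not the authors' actual code: each image is fully determined by a small set of independent ground-truth factors of variation, here shape and color, and the shapes, colors, and rendering details are our own assumptions.

```python
import numpy as np

# Assumed factor values for this toy illustration.
SHAPES = ["square", "heart", "circle", "rectangle"]
COLORS = {"green": (0, 255, 0), "red": (255, 0, 0), "blue": (0, 0, 255)}

def render(shape: str, color: str, size: int = 16) -> np.ndarray:
    """Draw a crude 2-D image of `shape` in `color` on a black background."""
    img = np.zeros((size, size, 3), dtype=np.uint8)
    c = np.array(COLORS[color], dtype=np.uint8)
    if shape == "square":
        img[4:12, 4:12] = c
    elif shape == "rectangle":
        img[6:10, 2:14] = c
    elif shape == "circle":
        yy, xx = np.mgrid[:size, :size]
        img[(yy - 8) ** 2 + (xx - 8) ** 2 <= 16] = c
    elif shape == "heart":  # very rough: two lobes above a filled block
        img[4:7, 4:8] = c
        img[4:7, 9:13] = c
        img[7:12, 5:12] = c
    return img

# Every (shape, color) combination yields one image. Because the ground-truth
# factors are known by construction, one can measure how well a learned
# representation disentangles them.
dataset = [((s, col), render(s, col)) for s in SHAPES for col in COLORS]
```

Knowing the generating factors exactly is what makes such toy data sets useful benchmarks for disentanglement research.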
Imagine a setup where a robot arm is being trained. In one task, it should pick up red squares from a table. With enough trials and data, a deep neural network can be trained for the task. However, the machine fails if the researcher suddenly puts a yellow triangle on the table and asks it to pick that up. The machine cannot interpret a new pattern as such; it cannot transfer what it has learned before. "That is one of the key problems in AI," says Stefan Bauer, Research Group Leader at the Max Planck Institute for Intelligent Systems. "We don't want to have to train a new deep net for every new color and every new shape each and every time."
What is child's play for a person is difficult for a machine. It is hard for it to generalize between the building blocks, to understand that one is a triangle and the other a circle, solely based on the pictures it was previously shown. "It is difficult for the machine that scans millions of parameters and recognizes patterns from them to disentangle the properties of the objects in a lower-dimensional embedding (what researchers call representations). When I now present the machine with a new shape and a new color, the system is confused. The ideal would be for the system to come to the conclusion itself, without me having to train it again." But Bauer and Locatello have to disappoint those high expectations. "The dream that a machine learns only from pictures, unsupervised, without any further information (such as information that there is a red square of a certain size in the image) – that is shown to be impossible." Again, a scientist is needed to give each picture additional information. "We are far from the machine drawing conclusions on its own. The field of machine learning is very much at the beginning of the long stretch that lies ahead before we reach this goal, and additional modeling assumptions are required."
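The intuition behind this impossibility result can be illustrated numerically. The sketch below is our own simplified illustration, not the paper's proof: a "disentangled" latent code and a rotated, entangled version of it induce the same data distribution, so no method that sees only the data can tell which one generated it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent "ground-truth" factors per sample (think: shape and color).
z = rng.standard_normal((100_000, 2))

# An arbitrary rotation mixes the factors: each coordinate of z_mixed now
# depends on BOTH original factors, i.e. it is entangled.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z_mixed = z @ R.T

# Yet the two codes are statistically indistinguishable: same mean and
# covariance, and (being Gaussian) therefore the same full distribution.
mean_match = np.allclose(z.mean(axis=0), z_mixed.mean(axis=0), atol=0.02)
cov_match = np.allclose(np.cov(z.T), np.cov(z_mixed.T), atol=0.02)
```

Because an observer of the data alone cannot prefer the disentangled code over the entangled one, extra supervision or modeling assumptions are needed, which is the point the quoted researchers make above.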