In July, we had the pleasure of welcoming Matt Crosby, postdoctoral researcher at the Leverhulme Centre for the Future of Intelligence at Imperial College London, and Tom Rainforth, postdoctoral researcher in statistical ML at the University of Oxford, to the London Machine Learning Meetup.
The Animal-AI Olympics: Translating animal cognition tasks for AI - Matthew Crosby
The Animal-AI Olympics is an AI competition that translates experiments from animal cognition and presents them as a challenge for AI. Animal cognition tasks are designed to test for specific capabilities, so that solving them demonstrates that an animal understands certain properties of its environment or has certain cognitive abilities. By presenting these tasks to AI, the aim is to find out how to build intelligent agents with similar understanding or cognitive abilities. The translation from animals to AI is not trivial. Animals come to the experiments as curious, food-motivated explorers, and part of the challenge is to encourage AI systems to be built in a similar way. A key element of many animal cognition tasks is testing an animal's response to the first presentation of a novel situation, thereby ruling out solutions that merely repeat previously learned behaviour. This goes against standard practice in ML, where training and test sets are often drawn from the same probability distribution.
Matt Crosby is a postdoctoral researcher at the Leverhulme Centre for the Future of Intelligence at Imperial College London. He has a PhD in AI and has more recently been working on artificial cognition. He is currently running the Animal-AI Olympics, an AI competition based on animal cognition tasks. More generally, he is interested in understanding the links between (artificial) intelligence and cognition, and in exploring the space of possible minds.
Amortized Monte Carlo Integration - Tom Rainforth
Current approaches to amortizing Bayesian inference focus solely on approximating the posterior distribution. Typically, this approximation is, in turn, used to calculate expectations for one or more target functions, a computational pipeline which is inefficient when the target function(s) are known upfront. In this work, we address this inefficiency by introducing AMCI, a method for amortizing Monte Carlo integration directly. AMCI operates similarly to amortized inference but produces three distinct amortized proposals, each tailored to a different component of the overall expectation calculation. At runtime, samples are drawn separately from each amortized proposal before being combined into an overall estimate of the expectation. We show that while existing approaches are fundamentally limited in the level of accuracy they can achieve, AMCI can theoretically produce arbitrarily small errors for any integrable target function using only a single sample from each proposal at runtime. We further show that it empirically outperforms the theoretically optimal self-normalized importance sampler on a number of example problems. AMCI allows not only for amortizing over datasets but also for amortizing over target functions.
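To see why separate proposals for the different components of the expectation can help, consider the posterior expectation E[f(x)] written as a ratio of two integrals: ∫ f(x)γ(x)dx / ∫ γ(x)dx, where γ is the unnormalized posterior. A self-normalized importance sampler estimates both integrals from one proposal; AMCI-style estimation instead tailors a proposal to each. Below is a minimal illustrative sketch (not the authors' implementation: the Gaussian target, the function f(x) = x², and the hand-picked proposal parameters are all assumptions for the example, and f is taken non-negative so two proposals suffice instead of three):

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_logpdf(x, mu, sigma):
    """Log-density of a Gaussian, written out to keep the sketch self-contained."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

# Unnormalized target density gamma(x), proportional to a N(1, 1) posterior.
def log_gamma(x):
    return -0.5 * (x - 1.0) ** 2

def f(x):
    return x ** 2  # non-negative integrand; true posterior expectation is 2.0

def is_estimate(weight_fn, mu, sigma, n):
    """One importance-sampling estimate of ∫ weight_fn(x) * gamma(x) dx
    using a Gaussian proposal N(mu, sigma)."""
    x = rng.normal(mu, sigma, size=n)
    log_w = log_gamma(x) - normal_logpdf(x, mu, sigma)
    return np.mean(weight_fn(x) * np.exp(log_w))

n = 100_000
# Separate proposals, loosely matched to f*gamma and to gamma respectively
# (hypothetical choices for illustration only).
numerator = is_estimate(f, mu=1.5, sigma=1.5, n=n)                 # ≈ ∫ f(x) γ(x) dx
denominator = is_estimate(lambda x: 1.0, mu=1.0, sigma=1.0, n=n)   # ≈ ∫ γ(x) dx

print(numerator / denominator)  # should be close to 2.0
```

In AMCI the proposal parameters are not hand-picked as above but are themselves amortized, i.e. produced by networks trained to map a dataset (and target function) to good proposals, and a third proposal handles the negative part of f when f changes sign.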
Tom Rainforth is a postdoctoral researcher in statistical ML at the University of Oxford. His research covers a wide range of topics including probabilistic programming, Monte Carlo methods, variational inference, Bayesian optimization, experimental design, and deep generative models. In addition to his current role as a postdoc working with Yee Whye Teh, he co-supervises a small group of his own students. Prior to this, Tom undertook a DPhil under the supervision of Frank Wood, culminating in the thesis “Automating inference, learning & design using probabilistic programming.”