ML Meetup: Modeling Reward and Abductive Learning

At Man Group, we believe in the Python ecosystem and have been trading Machine Learning-based systems since early 2014.

To give back to and strengthen London’s Python and Machine Learning communities, we sponsor and support the PyData London and Machine Learning London Meetups.

In August, we had the pleasure of welcoming Edward Grefenstette, research scientist at Facebook AI Research, and Wang-Zhou Dai, research associate in the Department of Computing at Imperial College London, to the London Machine Learning Meetup.


Teaching Artificial Agents to Understand Language by Modelling Reward - Edward Grefenstette

Recent progress in Deep Reinforcement Learning has shown that agents can be taught complex behaviour and solve difficult tasks, such as playing video games from pixel observations or mastering the game of Go without observing human games, with relatively little prior information. Building on these successes, researchers such as Hermann and colleagues have sought to apply these methods to teach agents, in simulation, to complete a variety of tasks specified by combinatorially rich instruction languages. In this talk, we discuss some of these highlights, as well as some of the limitations which inhibit the scalability of such approaches to more complex instruction languages (including natural language). Following this, we introduce a new approach, inspired by recent work in adversarial reward modelling, which constitutes a first step towards scaling instruction-conditional agent training to “real world” language.
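To make the core idea of reward modelling concrete, here is a minimal, hypothetical sketch (not the code from the talk): a discriminator is trained to separate (instruction, state) pairs in which the instruction is satisfied from states the agent actually visits, and its output is then used as a learned reward signal for the policy. The featurisation and data below are toy assumptions for illustration only.

```python
import math
import random

random.seed(0)

def features(instruction, state):
    # Hypothetical featurisation: an indicator for whether the state
    # contains the instructed object, plus a bias term.
    return [1.0 if instruction in state else 0.0, 1.0]

w = [0.0, 0.0]  # discriminator weights

def discriminator(x):
    # Logistic discriminator: probability that the instruction is satisfied.
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Positive examples: states satisfying the instruction;
# negative examples: states visited by the (untrained) agent.
positives = [("red ball", {"red ball", "table"}), ("blue box", {"blue box"})]
negatives = [("red ball", {"blue box"}), ("blue box", {"table"})]

# Train the discriminator by plain logistic regression.
for _ in range(500):
    for (ins, st), y in [(p, 1.0) for p in positives] + [(n, 0.0) for n in negatives]:
        x = features(ins, st)
        err = discriminator(x) - y
        for i in range(len(w)):
            w[i] -= 0.5 * err * x[i]

def reward(instruction, state):
    # The learned discriminator now stands in for a hand-written reward.
    return discriminator(features(instruction, state))

print(reward("red ball", {"red ball"}))  # high: instruction satisfied
print(reward("red ball", {"blue box"}))  # low: instruction not satisfied
```

In the full approach, of course, the discriminator is a neural network over rich observations and the agent is trained by reinforcement learning against the learned reward; this sketch only shows the division of labour between the reward model and the policy.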

Edward Grefenstette

Edward Grefenstette is a Research Scientist at Facebook AI Research, and Honorary Associate Professor at UCL. He previously was, in reverse order, a Staff Research Scientist at DeepMind, the CTO of Dark Blue Labs, and a Junior Research Fellow within Oxford’s Department of Computer Science and Somerville College. His recent research has covered topics at the intersection of deep learning and machine reasoning, addressing questions such as how neural networks can model or understand logic and mathematics, infer implicit or human-readable programs, or learn to understand instructions from simulation.


Bridging Machine Learning and Logical Reasoning by Abductive Learning - Wang-Zhou Dai

Perception and reasoning are two representative abilities of intelligence that are integrated seamlessly during problem-solving. In the area of artificial intelligence (AI), perception is usually realised by machine learning, while reasoning is often formalised by logic programming. However, the two categories of techniques were developed separately throughout most of the history of AI. This talk will introduce the abductive learning framework, which aims to unify the two AI paradigms in a mutually beneficial way. In this framework, machine learning models learn to perceive primitive logical facts from raw data, while logical reasoning corrects wrongly perceived facts, thereby improving the machine learning models. We demonstrate that, using the abductive learning framework, computers can learn to recognise numbers and solve equations with unknown arithmetic operations simultaneously from images of simple hand-written equations. Moreover, the learned models can be generalised to complex equations and adapted to different tasks, which is beyond the capability of state-of-the-art deep learning models.
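The perception-plus-abduction loop can be illustrated with a toy sketch (our own simplification, not Dai et al.’s implementation): a perception model scores candidate digit labels for each symbol in a hand-written equation, and logical abduction selects the most probable labelling that is consistent with background knowledge, here the constraint a + b = c. The scores below are invented for illustration.

```python
from itertools import product

# Hypothetical perception output: for each symbol image, a score per
# candidate digit. The raw argmax reading here would be "1 + 1 = 3",
# which violates the arithmetic constraint.
perceived = [
    {1: 0.6, 2: 0.4},    # first operand: probably 1, maybe 2
    {1: 0.9, 7: 0.1},    # second operand: almost certainly 1
    {3: 0.55, 2: 0.45},  # result: probably 3, maybe 2
]

def abduce(scored):
    """Return the highest-scoring labelling satisfying a + b == c."""
    best, best_score = None, -1.0
    for labels in product(*(d.keys() for d in scored)):
        a, b, c = labels
        if a + b == c:  # background knowledge as a logical constraint
            score = 1.0
            for d, lab in zip(scored, labels):
                score *= d[lab]
            if score > best_score:
                best, best_score = labels, score
    return best

print(abduce(perceived))  # → (1, 1, 2): the result digit is revised from 3 to 2
```

In the full framework, the abduced labels are fed back as training targets for the perception model, closing the loop between learning and reasoning; this sketch shows only a single abduction step over a fixed constraint.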

Wang-Zhou Dai

Wang-Zhou Dai is a research associate in the Department of Computing, Imperial College London. His research interests lie in the area of artificial intelligence and machine learning, especially in incorporating first-order logical background knowledge into general machine learning techniques. He has published multiple research papers at major conferences and in journals in AI and machine learning, including AAAI, ILP, ICDM, ACML and Machine Learning. He was awarded the IBM PhD Fellowship and the Google Excellence Scholarship during his PhD studies, and he serves as a programme committee member and reviewer for many top AI and machine learning conferences.