DeepMind working on AI that can ‘imagine’ & plan for complex, unpredictable scenarios

With AlphaGo, DeepMind demonstrated that artificial intelligence research has progressed further than many expected to see in our lifetimes. The Alphabet division is now tackling imagination — “a distinctly human ability” — to create AIs that are better at handling the complexity and unpredictability of the real world.

The London-based research group calls imagination a “powerful tool of human cognition” that allows for the visualization of consequences. In one example, DeepMind describes the human ability to realize the danger of placing a glass on the edge of a table:

When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall. On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking.

DeepMind argues that AIs need to be able to imagine and reason about the future in order to develop “sophisticated behaviors.” AlphaGo, for example, uses an “internal model” to “analyse how actions lead to future outcomes in order to reason and plan.”

However, such models excel at Go because the game follows clearly defined rules that can be programmed and accurately predicted. Reality, by comparison, is vastly different:

But the real world is complex, rules are not so clearly defined and unpredictable problems often arise. Even for the most intelligent agents, imagining in these complex environments is a long and costly process.

To tackle this, DeepMind has published two papers on “imagination-based planning,” in which AI agents “learn and construct plans to maximise the efficiency of a task.”

A neural network known as an “imagination encoder” extracts the information that will be useful for future decisions while ignoring what is irrelevant. The resulting imagination-augmented agents work more efficiently and can learn different strategies for constructing plans.
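As a rough illustration of the idea, here is a minimal PyTorch sketch: a stand-in learned environment model imagines a short rollout for each candidate action, a recurrent “imagination encoder” summarizes each rollout, and those summaries are combined with a model-free path to produce the policy. The class names, dimensions, and fixed-action rollouts are assumptions made for this sketch, not DeepMind’s actual implementation.

```python
# Illustrative sketch only -- names and sizes are assumptions, not DeepMind's code.
import torch
import torch.nn as nn

class EnvModel(nn.Module):
    """Stand-in learned environment model: predicts the next observation
    and reward from the current observation and a chosen action."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_actions, 64), nn.ReLU(),
            nn.Linear(64, obs_dim + 1))          # next observation + predicted reward

    def forward(self, obs, action_onehot):
        out = self.net(torch.cat([obs, action_onehot], dim=-1))
        return out[..., :-1], out[..., -1:]      # (next_obs, reward)

class RolloutEncoder(nn.Module):
    """The 'imagination encoder': summarises an imagined trajectory, learning
    to keep what is useful for the decision and discard the rest."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + 1, hidden, batch_first=True)

    def forward(self, imagined_obs, imagined_rewards):
        seq = torch.cat([imagined_obs, imagined_rewards], dim=-1)
        _, h = self.rnn(seq)                     # final hidden state summarises the rollout
        return h.squeeze(0)

class ImaginationAugmentedAgent(nn.Module):
    """Combines a model-free path with encoded imagined rollouts,
    one short rollout per candidate first action (a simplification)."""
    def __init__(self, obs_dim, n_actions, horizon=3, hidden=64):
        super().__init__()
        self.n_actions, self.horizon = n_actions, horizon
        self.env_model = EnvModel(obs_dim, n_actions)
        self.encoder = RolloutEncoder(obs_dim, hidden)
        self.model_free = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden + n_actions * hidden, n_actions)

    def forward(self, obs):
        codes = []
        for a in range(self.n_actions):          # imagine one rollout per first action
            o, states, rewards = obs, [], []
            act = torch.eye(self.n_actions)[a].expand(obs.shape[0], -1)
            for _ in range(self.horizon):        # simplification: repeat the same action
                o, r = self.env_model(o, act)
                states.append(o); rewards.append(r)
            codes.append(self.encoder(torch.stack(states, 1), torch.stack(rewards, 1)))
        features = torch.cat([self.model_free(obs)] + codes, dim=-1)
        return torch.softmax(self.policy(features), dim=-1)

agent = ImaginationAugmentedAgent(obs_dim=8, n_actions=4)
print(agent(torch.randn(2, 8)))                  # action probabilities for a dummy batch
```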

DeepMind again used games that require forward planning and reasoning to test these new architectures. The puzzle game Sokoban features irreversible moves, and a spaceship navigation task has the AI stabilize a craft with as few thruster firings as possible while accounting for gravitational pull.

DeepMind describes this latter game as a “highly nonlinear complex continuous control task.” In these tests, the agents can try each level only once, encouraging them to imagine different strategies before committing to one.
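For intuition, the toy Python sketch below shows that “imagine first, act once” pattern: the agent scores many candidate action sequences entirely inside a learned model and commits only the highest-scoring one to its single real attempt. The dynamics, reward, and action names here are invented for illustration.

```python
# Toy illustration of planning in imagination before a single real attempt.
import random

ACTIONS = ["left", "right", "thrust"]

def toy_model(state, action):
    """Stand-in learned model: invented one-dimensional dynamics and reward."""
    delta = {"left": -1, "right": 1, "thrust": 0}[action]
    next_state = state + delta
    reward = -abs(next_state)            # reward is highest when the craft is centred
    return next_state, reward

def imagined_return(model, state, plan):
    """Score a candidate plan entirely inside the model -- no real attempt is used."""
    total = 0.0
    for action in plan:
        state, reward = model(state, action)
        total += reward
    return total

def plan_then_act(model, state, horizon=4, n_candidates=64):
    """Imagine many candidate strategies, then commit to the best one."""
    candidates = [[random.choice(ACTIONS) for _ in range(horizon)]
                  for _ in range(n_candidates)]
    return max(candidates, key=lambda plan: imagined_return(model, state, plan))

print(plan_then_act(toy_model, state=3))  # best imagined plan, e.g. ['left', 'left', 'left', 'thrust']
```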

The results are promising, with imagination-augmented agents outperforming standard AIs while learning from less experience and working more efficiently. Adding a “manager” component for constructing plans led to further efficiencies. However, we are still a while away from the sci-fi concept of AI:

[F]urther analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about – and plan – for the future.

