Google-owned DeepMind is working on artificial intelligence (AI) that can imagine like humans and handle unpredictable scenarios in the real world.
According to a report in Wired on Thursday, DeepMind, which was acquired by Google in 2014, is developing an AI capable of 'imagination', enabling machines to see the consequences of their actions before they take them.
"Its attempt to create algorithms that simulate the distinctly human ability to construct a plan could eventually help to produce software and hardware capable of solving complex tasks more efficiently," the report noted.
DeepMind scored a notable AI success when it developed 'AlphaGo', which recently beat a series of human champions at the tricky board game 'Go'.
But in the case of 'AlphaGo', there is a set of clearly defined rules and predictable outcomes.
"The real world is complex, rules are not so clearly defined and unpredictable problems often arise. Even for the most intelligent agents, imagining in these complex environments is a long and costly process," the report quoted a DeepMind researcher as saying.
With the help of "imagination-augmented agents" -- neural networks that learn to extract information that might be useful for future decisions -- an AI agent can learn to construct plans, choosing from a broad spectrum of strategies.
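The core idea described above -- trying out actions internally with a learned model of the world before committing to one -- can be sketched in a few lines. The following is a minimal illustrative sketch, not DeepMind's actual imagination-augmented agent architecture; the `toy_model`, `imagine_rollout` and `choose_action` names are hypothetical, and the "model" here is hand-written rather than learned.

```python
def imagine_rollout(model, state, action, depth):
    """'Imagine' the consequences of repeating an action for `depth` steps
    using the agent's internal model, and sum the predicted rewards."""
    total = 0.0
    for _ in range(depth):
        state, reward = model(state, action)
        total += reward
    return total

def choose_action(model, state, actions, depth=3):
    """Pick the action whose imagined trajectory looks best."""
    return max(actions, key=lambda a: imagine_rollout(model, state, a, depth))

# Toy one-dimensional world: moving right (+1) earns reward,
# moving left (-1) loses it. A real agent would learn this model from data.
def toy_model(state, action):
    next_state = state + action
    return next_state, float(next_state - state)

best = choose_action(toy_model, 0, actions=[-1, +1])
print(best)  # the agent 'imagines' both options and picks +1
```

In DeepMind's setting the internal model is itself a neural network trained from experience, so its predictions are imperfect -- which is why the company stresses that its agents must cope with modelling errors rather than trust the rollouts blindly.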
"This work complements other model-based AI systems, like AlphaGo, which can also evaluate the consequences of their actions before they take them," the DeepMind researcher was quoted as saying.
DeepMind tested the proposed architectures on multiple tasks, including the puzzle game 'Sokoban' and a spaceship navigation game.
"For both tasks, the imagination-augmented agents outperform the imagination-less baselines considerably: they learn with less experience and are able to deal with the imperfections in modelling the environment," the company said.
Researchers from DeepMind and US-based AI company OpenAI said last month that they had made developments that could help an AI learn about the world around it "based on minimal, non-technical feedback -- mimicking the human trait of inference".