In mind’s eye

Science & Technology
Kismet, a robot with rudimentary social skills, at the MIT Museum during Wikimania 2006. Credit@Polimerek

Google’s artificial intelligence (AI) software has proven capable of learning to play video games simply by watching the screen. In a paper published in the journal Nature, researchers from the Google-backed DeepMind project explain how their deep Q-network (DQN) algorithm outperformed all previous machine-learning methods. They tested its intelligence on a challenging, retro array of Atari 2600 games, from 3D racing games like Enduro to shooters like River Raid. The agent outmatched a professional human games tester and even developed its own strategies, allowing it to attain the highest score possible.

DeepMind is a British artificial intelligence company founded in 2010. It was bought by Google in early 2014, for a reported sum of between USD 400 million and 650 million, with the aim of furthering its work on developing smart machines. The research is a proof of concept that a general learning system can work on challenging tasks that even humans may struggle with. As DeepMind co-founder and co-author Demis Hassabis puts it, “It’s the very first baby step towards that grander goal … however an important one.”

Closing the divide between human and artificial intelligence. Credit@EmilieOgez

DeepMind’s goal is to “solve intelligence” by combining machine learning and neuroscience to build powerful learning algorithms. Essentially, the aim is two-fold: while working to formalise intelligence, the team is also attempting to improve scientists’ understanding of the human brain. DeepMind’s AI system appears to stand a rung above others, such as IBM’s ‘Watson’, which was pre-programmed for its task; the DeepMind system instead learns from experience, using only raw pixels as its sensory input.
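
The pixel-handling idea can be sketched in a few lines. The snippet below is a minimal illustration, not DeepMind’s code: it reduces raw RGB game frames to small grayscale images and stacks recent frames so motion is visible, roughly in the spirit of the preprocessing the Nature paper describes; the function names, sizes and dummy frames are purely illustrative.

```python
# Minimal sketch of turning raw game frames into a compact "state"
# for a learning agent. Assumes nothing beyond NumPy.
import numpy as np

def preprocess(frame, size=84):
    """Convert an RGB frame (H, W, 3) into a small grayscale image."""
    gray = frame.mean(axis=2)                      # crude grayscale
    h, w = gray.shape
    rows = np.linspace(0, h - 1, size).astype(int)
    cols = np.linspace(0, w - 1, size).astype(int)
    return gray[np.ix_(rows, cols)] / 255.0        # downsample, scale to [0, 1]

def stack_frames(frames):
    """Stack the last few frames so velocity and direction are visible."""
    return np.stack([preprocess(f) for f in frames], axis=0)

# Dummy frames standing in for Atari screens (210 x 160 RGB)
dummy = [np.random.randint(0, 256, (210, 160, 3), dtype=np.uint8) for _ in range(4)]
state = stack_frames(dummy)
print(state.shape)  # (4, 84, 84): one observation the agent would learn from
```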

The DQN agent learned to play from scratch, using only the screen pixels and the game score as input, with no prior real-world knowledge of, for example, how a ball bounces off a wall. In other words, the computer was given none of the games’ rules; it learned them by watching and worked out winning strategies on its own. It is a significant moment in the AI world, as artificial intelligence begins to take on a bigger role in technology.
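
At its core, the approach builds on Q-learning, which estimates how valuable each action is in each situation and updates that estimate from the reward signal. The toy sketch below is a heavily simplified, tabular stand-in for DQN, whose real innovation is replacing the table with a deep neural network that reads the screen; the states, actions, rewards and hyperparameters here are invented for illustration only.

```python
# Toy tabular Q-learning loop: learn action values purely from rewards.
import random
from collections import defaultdict

ACTIONS = ["left", "right", "fire"]
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})  # state -> action values

alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate

def choose_action(state):
    """Epsilon-greedy: mostly exploit what has worked, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action, reward, next_state):
    """One Q-learning step: nudge the estimate towards the reward plus
    the best value currently predicted for the next state."""
    best_next = max(q_table[next_state].values())
    target = reward + gamma * best_next
    q_table[state][action] += alpha * (target - q_table[state][action])

# Dummy interaction standing in for thousands of frames of gameplay
state = "start"
for step in range(100):
    action = choose_action(state)
    reward = random.choice([0, 1])       # placeholder for the game-score signal
    next_state = f"frame_{step}"
    update(state, action, reward, next_state)
    state = next_state
```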

Seven Google products already employ DeepMind technology, though exactly which ones remains undisclosed. Michael Cook of Goldsmiths, University of London, reportedly told New Scientist, “It’s anyone’s guess what they are, but DeepMind’s focus on learning through actually watching the screen, rather than being fed data from the game code, suggests to me that they’re interested in image and video data.”

Concerns voiced by technology experts are helping to shape the development of AI systems. Despite AI’s potentially valuable applications, the rapid progress of machine learning has drawn scepticism about humanity’s fate from some of technology’s most recognised minds, including Stephen Hawking and Tesla boss Elon Musk. DeepMind’s staff expressed similar hesitations around the time the company was acquired by Google, reportedly pushing for the creation of an AI ethics board to consider and prepare for such scenarios. The result may nonetheless be a milestone achievement for Google in the era of smart technology, and a glance at current AI research shows that the majority of work focuses on systems that deliberately lack the ‘consciousness’ to set their own goals, the kind of systems that could otherwise pose a challenge to humanity.

Robby the Robot. An AI-type robot as depicted in the 1956 film, Forbidden Planet. Credit@DJShin

Hassabis explains in his blog post, “We also hope this kind of domain general learning algorithm might give researchers new ways to make sense of complex large-scale data creating the potential for exciting discoveries in fields such as climate science, physics, medicine and genomics. And it may even help scientists better understand the process by which humans learn.” The work bridges the gulf between high-dimensional sensory inputs and actions, producing the first artificial agent capable of learning to excel at a diverse range of challenging tasks.

How might evolved AI systems improve technology, and alter the definition of what it means to be ‘alive’, in the future?
