
More than a decade ago, the co-founder of Google's DeepMind artificial intelligence lab predicted that there was a 50-50 chance AI would be about as smart as humans by 2028, and he's holding firm on that forecast. In an interview with tech podcaster Dwarkesh Patel, DeepMind co-founder Shane Legg said that he still thinks researchers have a 50-50 chance of achieving artificial general intelligence (AGI) by that date, a prediction he first published on his blog at the very end of 2011.

It's a notable prediction considering the rapidly growing interest in the space. OpenAI CEO Sam Altman has long advocated for AGI, a hypothetical agent capable of accomplishing intellectual tasks as well as a human, that can be of benefit to all. But whether we'll ever be able to get to that point, let alone agree on one definition of AGI, remains to be seen.

Legg apparently began looking toward his 2028 goalpost all the way back in 2001, after reading "The Age of Spiritual Machines," the groundbreaking 1999 book by fellow Google AI luminary Ray Kurzweil that predicts a future of superhuman AIs. "There were two really important points in his book that I came to believe as true," he explained. "One is that computational power would grow exponentially for at least a few decades. And that the quantity of data in the world would grow exponentially for a few decades."

Paired with an understanding of the trends of the era, such as deep learning, the method of teaching algorithms to "think" and process data the way human brains do, Legg wrote at the start of the last decade that AGI could well be achieved in the coming ones, so long as "nothing crazy happens like a nuclear war."

Today, the DeepMind co-founder says there are caveats to his prediction that the AGI era will be upon us by the end of this decade. The first, broadly, is that definitions of AGI rely on definitions of human intelligence, and that kind of thing is difficult to test precisely because the way we think is complicated. "You'll never have a complete set of everything that people can do," Legg said, pointing to things like episodic memory, the ability to recall complete "episodes" that happened in the past, or even understanding streaming video. But if researchers could assemble a battery of tests for human intelligence and an AI model were to perform well enough against them, he continued, then "you have an AGI."

When Patel asked whether a single simple test, such as beating Minecraft, could show that an AI system had reached general intelligence, Legg pushed back. "There is no one thing that would do it, because I think that's the nature of it," the AGI expert said. "It's about general intelligence. So I'd have to make sure [an AI system] could do lots and lots of different things and it didn't have a gap."