Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.
According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.
Hinton, who will be speaking live to MIT Technology Review at EmTech Digital on Wednesday in his first post-resignation interview, was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award—computing’s equivalent of the Nobel.
“Geoff’s contributions to AI are tremendous,” says LeCun, who is chief AI scientist at Meta. “He hadn’t told me he was planning to leave Google, but I’m not too surprised.”
The 75-year-old computer scientist has divided his time between the University of Toronto and Google since 2013, when the tech giant acquired Hinton’s AI startup DNNresearch. Hinton’s company was a spinout from his research group, which was doing cutting-edge work with machine learning for image recognition at the time. Google used that technology to boost photo search and more.
Hinton has long called out ethical questions around AI, especially its co-optation for military purposes. He has said that one reason he chose to spend much of his career in Canada is that it is easier to get research funding that does not have ties to the US Department of Defense.
“Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google,” says Google chief scientist Jeff Dean. “I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well.”
Dean says: “As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
Hinton is best known for an algorithm called backpropagation, which he first proposed with two colleagues in the 1980s. The technique, which allows artificial neural networks to learn, today underpins nearly all machine-learning models. In a nutshell, backpropagation is a way to adjust the connections between artificial neurons over and over until a neural network produces the desired output.
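That adjust-and-repeat loop can be sketched in a few lines of Python. This is a hypothetical toy example, not any code of Hinton's: a tiny two-layer network learns the XOR function, with the architecture, learning rate, and variable names all chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: the XOR function, a classic task that a single layer
# of artificial neurons cannot solve but a two-layer network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial connections (weights) and biases for a 2-4-1 network.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)  # network output

_, out = forward(X)
initial_err = float(np.mean((out - y) ** 2))

for _ in range(5000):
    # Forward pass: compute the network's current output.
    h, out = forward(X)

    # Backward pass: propagate the output error back layer by layer
    # (the chain rule), yielding a gradient for every connection.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Nudge each connection slightly in the direction that reduces error.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

_, out = forward(X)
final_err = float(np.mean((out - y) ** 2))
print(f"error before: {initial_err:.3f}, after: {final_err:.3f}")
```

The backward pass is the whole idea: the error at the output is pushed back through each layer of connections, telling every weight how it contributed to the mistake, and each repetition of the loop shrinks that error a little further.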
Hinton believed that backpropagation mimicked how biological brains learn. He has since searched for learning procedures that approximate the brain more closely, but he has never improved on it.
“In my numerous discussions with Geoff, I was always the proponent of backpropagation and he was always looking for another learning procedure, one that he thought would be more biologically plausible and perhaps a better model of how learning works in the brain,” says LeCun.
“Geoff Hinton certainly deserves the greatest credit for many of the ideas that have made current deep learning possible,” says Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms. “I assume this also makes him feel a particularly strong sense of responsibility in alerting the public about potential risks of the ensuing advances in AI.”
MIT Technology Review will have more on Hinton throughout the week. Be sure to tune in to Will Douglas Heaven’s live interview with Hinton at EmTech Digital on Wednesday, May 3, at 1:30 p.m. Eastern time. Tickets are available from the event website.