“AI godfather” Hinton leaves Google with regrets about his work



Summary

Geoffrey Hinton is one of the world’s most prominent AI researchers. Now he is wrapping up his career at Google – and saying goodbye with a warning.

About a decade ago, renowned computer scientist and cognitive psychologist Geoffrey Hinton laid the groundwork for today’s advanced AI systems like ChatGPT with his research on artificial neural networks, deep learning, and especially backpropagation. For this work he received the Turing Award, often described as the Nobel Prize of computer science.

In April, Hinton quit his job at Google so that he could, as he said, freely criticize the development of AI. Now, in the New York Times, he speaks openly about his concerns about the rapid development of artificial intelligence.

“Look at how it was five years ago and how it is now,” Hinton said of AI progress. “Take the difference and propagate it forwards. That’s scary.”

Part of him regrets his life’s work, Hinton said. He consoles himself with the “normal excuse” in such cases: if he hadn’t done it, someone else would have.

Hinton believes he misjudged the speed of AI development by 30 to 50 years

Hinton’s first fear is the mass dissemination of fake news, videos, and photos, leaving people unable to tell what is true anymore. It is hard to imagine how such abuse could be prevented, he said. Hinton is also critical of AI’s impact on the job market, as AI could eventually take over more than just tedious work.

Another of Hinton’s concerns is the development of AI-based autonomous weapons, which he has criticized in the past. By his own admission, Hinton also significantly underestimated the speed of AI development.

“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Until a year ago, Hinton said, Google had a good handle on the risks of the technology and was careful not to release anything that could cause harm. But Microsoft has now unleashed a race that may be unstoppable without global regulation.

If appropriately trained AI systems based on deep learning alone were scaled up sufficiently, Hinton believes, they would be able to reproduce the full range of human intelligence.

He changed his mind when he studied Google’s and OpenAI’s large language models. These are inferior to the human brain in some ways, but far superior in others, Hinton said. What happens in these systems is “actually a lot better” than what happens in the human brain, according to Hinton.

Whether the scale of existing AI systems is sufficient for human-like or general AI is debatable. Well-known researchers such as Meta’s AI chief Yann LeCun and Gary Marcus believe that fundamentally different architectures are needed. Critics compare trying to achieve general AI with deep learning to trying to climb a ladder to the moon.

By contrast, there is little controversy about the thesis that AI systems, whether they are generally intelligent or merely cognitively powerful, can have a significant impact on our lives.
