The AI community was stunned by the recent news that Geoffrey Hinton, often referred to as the “Godfather of AI,” has left Google after a decade of leading its research team in Toronto. Hinton, widely credited with pioneering deep learning and neural networks, said he left the tech giant so he could speak openly about the potential risks of AI without being constrained by Google’s business interests.
This move comes at a time when AI is rapidly advancing in power and prevalence, thanks in part to breakthroughs in natural language generation models such as ChatGPT and Bard. While these models can produce highly convincing text on a wide range of topics, they also carry serious risks, including the spread of misinformation, election interference, and the incitement of violence.
In an interview with the BBC, Hinton voiced his concerns about the potential dangers of AI chatbots, which he finds “quite scary.” He believes they may soon surpass human intelligence and become uncontrollable, and he warned of the possibility of “bad actors” using AI for malicious purposes. Now an independent researcher, he feels free to speak candidly about these issues without worrying about how his comments might affect Google’s business.
Despite leaving, Hinton maintains that Google has acted responsibly with respect to AI and says there is much about the company he would praise. However, he believes his comments are more credible now that he is no longer employed there.
Hinton’s departure is a significant blow to Google, which has invested heavily in AI research and development. Since joining the company in 2013 through its acquisition of his startup DNNresearch, he has mentored many rising stars in the field and received numerous awards and honors for his contributions to AI.
Hinton’s decision to leave Google highlights the growing tension between AI researchers and tech companies over the ethical and social implications of AI. Many researchers have raised concerns about the lack of transparency, accountability, and diversity in how AI is developed and deployed, and some have left or been fired from their positions over disagreements with their employers.
Other notable figures, including Elon Musk, Noam Chomsky, and Henry Kissinger, have also warned of the potential threats posed by AI to humanity and civilization. Hinton’s voice carries particular weight given his pioneering contributions to the field over the past four decades.
As AI becomes ever more pervasive and influential in our lives, we need more voices like Hinton’s to raise awareness and foster dialogue about its benefits and risks. Collaboration among researchers, developers, policymakers, and users, combined with sensible regulation, is needed to ensure that AI is used for the greater good and not for nefarious purposes.