Geoffrey Hinton: Who is the ‘godfather of AI’, whose warning about artificial intelligence has shaken the world?
Until this week, Geoffrey Hinton was mostly known by the technologies he had helped produce. As a pioneer of deep learning – which allows computers to acquire knowledge in a similar way to humans – his research over decades helped build the foundations upon which today’s ChatGPT and other technologies are built.
Dr Hinton was born in the UK and received his first degree, in experimental psychology, from Cambridge University in 1970. He went on to work at Edinburgh and Sussex before moving to the US and Canada.
His work is primarily focused on machine learning algorithms, and he has aimed both to create tools that allow computers to find structure in complex data and to show that this is how the human brain learns. He is most famous for pioneering the back-propagation algorithm used to train neural networks, but he and his students have helped build an array of technologies that underpin today’s artificial intelligence.
For that work, he was recognised with awards from across the world, including the Turing award, often referred to as the Nobel prize of computing. He also held an array of high-powered jobs, including serving as a professor at the University of Toronto and as a vice-president at Google, a role that came after the company bought his business in 2013.
He was also given the nickname of the “godfather of AI”, or “godfather of deep learning”, in recognition of the role he had played in helping develop and shape the technology.
This week, his reputation changed, all in a moment. He resigned from that job at Google – so that he could more freely warn about the danger posed by the products it and other companies are creating.
That warning was stark. He said that artificial intelligence was developing too quickly, and without enough safeguards – leaving the world at risk of widespread unemployment and in danger of attack from artificially intelligent robots, among other threats.
Those risks could “wipe out humanity”, he suggested in interviews. As such, it is important that the world works to make those systems safer, he warned.
He said that “a part of him now regrets his life’s work” because of the state of AI. And he suggested that companies could be developing even more powerful systems than we realise in secret.
Dr Hinton had been vocal about the ethics of artificial intelligence before. In 2017, for instance, he was a signatory to a Canadian open letter that demanded the country’s prime minister Justin Trudeau urgently address the challenge of lethal autonomous weapons, or “killer robots”.
But the warning he gave this week was notable in part because he had left such a high-powered role within Google to deliver it, and because he voiced concern not just about specific examples of artificial intelligence but about the technology as a whole.
It also came at a time of increasing concern about the development and nature of artificial intelligence. Dr Hinton’s resignation came as many other experts and pioneers within AI gave warnings of their own – such as a letter in March that called for a halt to development, and was signed by one of Dr Hinton’s fellow “godfathers of AI”, Yoshua Bengio.
Dr Hinton has received some criticism, even from those who agree with him that the current approach to artificial intelligence may be harmful. Critics have argued that Dr Hinton could have raised the alarm earlier.
Meredith Whittaker – a former Google employee who is now the president of the secure messaging app Signal – said that previous attempts to ring the “AI alarm” had led to negative consequences for the women who did so.
“Where were these guys when we spent months + thousand$ on lawyers? Where were they when we were organizing to stop it before it reached this point? Where were they when Sundar lied about us & diminished the risks we demonstrated? I’m not interested in dissent without solidarity,” she wrote on Twitter.
“This isn’t about credit, this is about the fact that there was a moment to act together, when the power these Men of AI wield could have been used in solidarity with a movement that was gaining ground to stop the worst of AI. They didn’t use their power that way. And here we are.”