Dr. Geoffrey Hinton, a pioneer in the field of artificial intelligence, quit his job at Google last month after over a decade with the company.
Hinton, often referred to as “the Godfather of AI,” is now speaking out and issuing warnings about the dangers of artificial intelligence.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton told the New York Times.
In 2012, Dr. Hinton and two of his graduate students at the University of Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that became a foundation for generative artificial intelligence products like ChatGPT.
Last month, hundreds of top technologists signed an open letter demanding a pause in the development of advanced AI systems, warning that AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
After Google paid $44 million to acquire the company Dr. Hinton and his two students had started, it began building neural networks that learned from huge amounts of digital text.
Hinton had no problem with that.
However, last year, when Google and OpenAI built systems using much larger amounts of data, he decided that maybe it wasn’t such a great idea after all.
He now worries that “what is going on in these systems is actually a lot better than what is going on in the brain.”
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that AI technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Dr. Geoffrey Hinton is far from alone in his concern about artificial intelligence.
A third of artificial intelligence scientists believe AI could cause a nuclear-level catastrophe. Half of artificial intelligence researchers believe AI could cause humans to go extinct. Bill Gates has warned artificial intelligence could “run out of control” and decide “humans are a threat.” And Mo Gawdat, the former Chief Business Officer with Google’s Research and Development division, has warned that artificial intelligence researchers are “creating God.”