Dr. Geoffrey Hinton, who has been dubbed the “Godfather of Artificial Intelligence,” has issued another warning about the dangers he feels are being ignored by scientists and researchers.
Back in April, hundreds of top technologists demanded the halt of advanced artificial intelligence systems because they felt AI labs were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Following that strong warning, Dr. Hinton spoke to the New York Times about the issue.
In his interview, he admitted that “he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.”
Dr. Hinton and two of his graduate students at the University of Toronto, Ilya Sutskever and Alex Krizhevsky, helped create a neural network in 2012 that became the foundation for products based on generative artificial intelligence like ChatGPT.
In April, Hinton quit his job at Google after over a decade with the company because he now has regrets about the work to which he devoted much of his life.
More recently, Dr. Hinton spoke at the Collision technology conference in Toronto and shared more of his concerns.
“They still can’t match us, but they’re getting close,” Hinton said, according to The Debrief, referring to “the rate at which AI is advancing and becoming increasingly capable of mimicking humans.”
…during the recent Collision event, Hinton admitted something startling about the language models that are currently the closest to matching the capabilities of humans.
“I don’t really understand why they can do it, but they can do little bits of reasoning,” Hinton said.
Stop and ponder Hinton’s words for a moment: one of the chief innovators in the field of artificial intelligence admits that he doesn’t “really understand why” some large language models are capable of “little bits of reasoning” that appear to be comparable to human logic and reasoning.
“We’re just a machine,” Hinton explained. “We’re a wonderful, incredibly complicated machine, but we’re just a big neural net,” he added. “And there’s no reason why an artificial neural net shouldn’t be able to do everything we can do.
“But I think we have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control.”
Dr. Hinton is especially concerned with the use of artificial intelligence by the military.
“If defense departments use [AI] for making battle robots, it’s going to be very nasty, scary stuff,” Hinton said, adding that it doesn’t have to be “super intelligent” or “have its own intentions” to yield disastrous results.
“It’s gonna make it much easier, for example, for rich countries to invade poor countries.
“At present, there’s a barrier to invading poor countries willy-nilly,” Hinton said, “which is you get dead citizens coming home. If they’re just dead battle robots, that’s just great; the military-industrial complex would love that.”