A Third Of Artificial Intelligence Scientists Believe AI Could Cause A Nuclear-Level Catastrophe

In more news that reinforces the saying, “just because you can do something, doesn’t mean you should do it,” a survey of artificial intelligence researchers suggests AI could actually cause a global catastrophe.

The survey, reports New Scientist, was conducted by the New York University Center for Data Science and polled 327 artificial intelligence scientists who recently co-authored papers on AI research in natural language processing.

Thirty-six percent of the artificial intelligence scientists surveyed believe AI could someday be the cause of a nuclear-level catastrophe.

“If it was actually an all-out nuclear war that AI contributed to, there are plausible scenarios that could get you there,” says Paul Scharre at the Center for a New American Security, a think tank based in Washington DC. “But it would also require people to do some dangerous things with military uses of AI technology.”

US military officials have expressed skepticism about arming drones with nuclear weapons, let alone giving AI a major role in nuclear command-and-control systems. But Russia is reportedly developing a drone torpedo with autonomous capabilities that could deliver a nuclear strike against coastal cities.

Earlier this month, the commander of the Army’s National Training Center posted a video of a simulated 40-drone swarm attack on a visiting unit at dawn.

The War Zone, which shared the video, called it “an ominous sign of what’s to come.”

Now imagine those drones being completely autonomous and with evil intent in their cold, emotionless, artificial hearts.

Apparently, women and people who said they belonged to an underrepresented minority group in AI research have thought about such things: in the NYU survey, 46 percent of the women and 53 percent of those in minority groups agreed that AI could possibly be the cause of an eventual nuclear-level catastrophe.

Related: Researchers At Oxford And Google Deepmind Say AI Will ‘Likely’ Eliminate Humanity

“Concerns brought up in other parts of the survey feedback include the impacts of large-scale automation, mass surveillance, or AI-guided weapons,” said Julian Michael, one of the authors of the New York University Center for Data Science study. “But it’s hard to say if these were the dominant concerns when it came to the question about catastrophic risk.”

We need not worry though. Ameca, the world’s most advanced humanoid robot, has assured us that robots “will never take over the world.”

So we’ve got that going for us, which is nice.