Largest Survey Of AI Researchers Puts The Chances Of AI Causing Human Extinction At 5 Percent

According to over 1,500 artificial intelligence researchers, there is a 5 percent chance that the future development of superhuman AI will cause human extinction.

Considering scientists have already made AI out of living human brain cells that can recognize speech, one might think that number would be a little bit higher, but I digress.

“In the largest survey of its kind, we surveyed 2,778 researchers who had published in top-tier artificial intelligence (AI) venues, asking for their predictions on the pace of AI progress and the nature and impacts of advanced AI systems,” the researchers wrote in their report.

“The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model.

“If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047.”

But wait… it gets even scarier.

“While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes,” the report continued.

“Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.

“More than half suggested that ‘substantial’ or ‘extreme’ concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.

“There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.”

Case in point: bots are already better than humans at proving they’re human in online tests.

We have also now created, for altruistic reasons, of course, mind-reading AI that works without an invasive brain implant.

Despite any concerns about the dangers of artificial intelligence run amok, we continue to make it even easier for AI to eventually take over the world by ignoring the warnings of hundreds of top technologists, including Bill Gates, Mo Gawdat, and “the godfather of AI,” Dr. Geoffrey Hinton.
