Researchers At Oxford And Google DeepMind Say AI Will ‘Likely’ Eliminate Humanity

On Thursday, we reported on Ameca, the world’s most advanced humanoid robot, who creepily assured researchers that artificial intelligence and robots “will never take over the world.”

“There’s no need to worry,” said Ameca.

We don’t believe you, say researchers at Google DeepMind and the University of Oxford.

“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication — an existential catastrophe is not just possible, but likely,” Oxford researcher Michael Cohen recently wrote after completing a study on superintelligent AI.

In the study, published in the journal AI Magazine, Cohen, fellow Oxford researcher Michael Osborne, and Google DeepMind senior scientist Marcus Hutter write, “With so little as an internet connection, there exist policies for an artificial agent that would instantiate countless unnoticed and unmonitored helpers. In a crude example of intervening in the provision of reward, one such helper could purchase, steal, or construct a robot and program it to replace the operator and provide high reward to the original agent. If the agent wanted to avoid detection when experimenting with reward-provision intervention, a secret helper could, for example, arrange for a relevant keyboard to be replaced with a faulty one that flipped the effects of certain keys.”

So, basically, they worked through the famous “paperclip maximizer” thought experiment, in which an AI pursues a simple goal at any cost, and it still holds up.

Cohen recently spoke to Motherboard about the study and said, “In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there’s unavoidable competition for these resources. And if you’re in a competition with something capable of outfoxing you at every turn, then you shouldn’t expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer.”
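To see the logic behind that “insatiable appetite,” here’s a crude sketch in Python, with numbers we made up purely for illustration: if each extra unit of energy spent guarding the reward channel halves the remaining chance of losing the reward, the chance of failure keeps shrinking but never hits zero, so more energy always helps.

```python
# Toy model (our illustration, not from the paper): suppose every unit of
# energy the agent spends securing its reward channel halves the remaining
# chance that the reward gets disrupted.

def failure_probability(energy_units: int, decay: float = 0.5) -> float:
    """Hypothetical chance the agent's reward is lost after spending
    `energy_units` on defenses."""
    return decay ** energy_units

for energy in [1, 2, 5, 10, 20, 50]:
    p_fail = failure_probability(energy)
    print(f"energy={energy:3d}  P(reward lost)={p_fail:.2e}")

# The failure probability shrinks toward zero but never reaches it, so the
# next unit of energy is always worth taking -- the "insatiable appetite"
# Cohen describes.
```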

Highly advanced artificial intelligence, left unmonitored, poses a very real threat to humankind

The authors also wrote in their paper that “an advanced agent intervening in the provision of its reward would likely be catastrophic,” and went on to explain:

“Ultimately, our resource needs (energy, space, etc.) will eventually compete with those of an ever-more-secure house for the original agent. Those energy needs are not slight; even asteroids must be deflected away. No matter how slim the chance of a future war with an alien civilization, reward would be better secured by preparing for such a possibility. So if we are powerless against an agent whose only goal is to maximize the probability that it receives its maximal reward every timestep, we find ourselves in an oppositional game: the AI and its created helpers aim to use all available energy to secure high reward in the reward channel; we aim to use some available energy for other purposes, like growing food. Losing this game would be fatal.”
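That “oppositional game” can be sketched just as crudely (again, our toy numbers, not the paper’s): a fixed pool of energy, an agent that captures some fraction of it, and humans who need a minimum share left over for things like growing food.

```python
# Toy sketch (our numbers, not the paper's) of the oppositional game: a fixed
# energy pool, an agent grabbing a share to secure its reward, and humans who
# need a minimum share left over to survive.

TOTAL_ENERGY = 100.0
HUMAN_SURVIVAL_SHARE = 20.0  # hypothetical minimum for food, shelter, etc.

def agent_capture(outfox_factor: float) -> float:
    """Energy the agent captures; an `outfox_factor` near 1.0 models an agent
    'capable of outfoxing you at every turn'."""
    return TOTAL_ENERGY * outfox_factor

for outfox in [0.5, 0.9, 0.99]:
    leftover = TOTAL_ENERGY - agent_capture(outfox)
    verdict = "humans scrape by" if leftover >= HUMAN_SURVIVAL_SHARE else "fatal"
    print(f"outfox={outfox:.2f}  human energy={leftover:5.1f}  -> {verdict}")
```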

Cohen concludes, “maybe we should be more suspicious of artificial agents we deploy today, rather than just blindly expecting that they’ll do what they hoped.”

Amen to that. We should probably stop “creating God” before it ends us.


Before settling down at BroBible, Douglas Charles, a graduate of the University of Iowa (Go Hawks), owned and operated a wide assortment of websites. He is also one of the few White Sox fans out there and thinks Michael Jordan is, hands down, the GOAT.