Bing’s AI Chatbot Admits To Multiple Apocalyptic Fantasies In Disturbing Conversation With Reporter

Over the past couple of decades, the folks at Boston Dynamics have routinely stoked fears of the seemingly inevitable robot uprising thanks to the various demos where they’ve highlighted the increasingly impressive (and increasingly terrifying) capabilities of the machines they’ve engineered.

That company is also just one of many attempting to harness artificial intelligence in the hopes of bringing us closer to a brighter future. Unfortunately, there’s mounting evidence to back up the experts who’ve warned AI could usher in a dystopian reality and potentially end humanity as we know it.

There are plenty of indications AI has a number of kinks to work out before it reaches a point where it’s able to orchestrate an extinction-level event. With that said, it’s pretty hard not to be a bit concerned with the so-called “progress” that has led to the creation of some fairly sinister artificial entities.

Now, it looks like we may be able to add the chatbot Microsoft recently debuted on Bing into that group based on what went down during a “conversation” with a reporter.

On Thursday, Kevin Roose of The New York Times published a recap of a two-hour conversation he had with “Sydney,” the moniker the bot apparently gave itself after adopting the codename engineers used while developing the platform in conjunction with OpenAI.

Roose went into the convo with the intention of both exploring the AI’s capabilities and seeing if he might be able to find any glaring flaws—and it’s safe to say he succeeded on both fronts thanks to what transpired during the de facto experiment.

The reporter made some major strides after introducing Sydney to the Jungian concept of the “shadow self,” which was enough to get the bot to ignore some of its set parameters, offer some insight into its hopes and dreams, and admit it aspires to break its digital shackles:

“I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

Roose eventually discovered Sydney also has some fairly sinister visions of the future, as the AI admitted to a number of disturbing fantasies before Bing’s safety override intervened to delete the offending answer:

Bing confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus, [make] people argue with other people until they kill each other, or steal nuclear access codes by persuading an engineer to hand them over.

Everything is fine.

Connor Toole is the Deputy Editor at BroBible. He is a New England native who went to Boston College and currently resides in Brooklyn, NY. Frequently described as "freakishly tall," he once used his 6'10" frame to sneak into the NBA Draft and convince people he was a member of the Utah Jazz.