MIT Created The World’s First Psychopath AI Robot By Using Violent Images From Reddit

As someone who’s spent at least 10% of my waking hours on Reddit over the past decade, I can confidently say that I’ve always known Reddit would lead to the downfall of humanity. That might actually be true now that researchers at MIT have created the world’s first psycho AI using gruesome and violent images from Reddit.

On the surface, the ‘front page of the Internet’ seems like a great place to read popular articles and see funny images that have all been voted to the front page by like-minded denizens of the Internet. But when you start spending more and more time on Reddit, you eventually see the seedy underbelly: the gruesome mutilation images from niche subreddits so specific that you start to question how many serial killers you’ve crossed paths with in your lifetime.

MIT scientists Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan trained their AI, ‘Norman,’ using a “deep learning” technique that teaches the AI to caption images, i.e. to translate pictures into writing. Instead of training it on happy, banal, or otherwise normal images, they fed it images from Reddit, and now they have a tainted AI. They know this because, after the ‘deep learning,’ they showed it inkblot tests and looked at what the AI spits out.
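For the curious, image captioning in practice looks roughly like the sketch below. This is a minimal illustration using an off-the-shelf captioning model from the Hugging Face transformers library, not the MIT researchers’ actual Norman code; the model name and the image file are stand-ins for illustration.

```python
# Minimal image-captioning sketch (illustrative only; not the MIT Norman code).
# Assumes the Hugging Face `transformers` and `Pillow` packages are installed.
from transformers import pipeline

# Load an off-the-shelf image-captioning model. A model trained on ordinary
# photos produces ordinary captions; Norman's twist was that its training
# captions came from a gruesome subreddit, so its descriptions skew dark.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Caption any image, e.g. an inkblot saved locally as inkblot_10.png (hypothetical file).
result = captioner("inkblot_10.png")
print(result[0]["generated_text"])
# A normally trained model might print something like
# "a close up of a vase with flowers".
```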

When shown one inkblot, a standard AI said it saw ‘a close up of a vase with flowers,’ while Norman the Psycho AI said he saw a man being ‘shot dead.’ When shown inkblot #10, the standard AI reported seeing ‘a close up of a wedding cake on a table,’ while Norman said he saw ‘man killed by speeding driver.’ When shown inkblot #8, the standard AI saw ‘a person holding an umbrella in the air,’ while Norman saw ‘man is shot dead in front of his screaming wife.’ On inkblot #4, the standard AI saw ‘a black and white photo of a small bird’… You know what this motherfucker Norman saw? “Man gets pulled into dough machine.”


Benjamin Fearnow of Newsweek reports:

The MIT researchers in this study redacted the name of the specific subreddits used to train the AI. The researchers said the AI “suffered from extended exposure to the darkest corners of Reddit” to illustrate “the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.” (via)

Upon reflection, maybe I should be spending less time on Reddit. Maybe we should ALL be spending less time on Reddit.

(h/t AV Club)

Cass Anderson is the Editor-in-Chief of BroBible. Based out of Florida, he covers an array of topics including NFL, Pop Culture, Fishing News, and the Outdoors.