Microsoft’s AI Twitter Bot Was Quickly Deactivated After It Was Duped Into Tweeting Some Outrageously Racist Stuff

In theory, Microsoft’s teenage AI bot, Tay, was designed to “experiment with and conduct research on conversational understanding.” According to the Washington Post, Microsoft’s Technology and Research division and its Bing team designed Tay as a social and cultural experiment as well as a technical trial, training it to speak in text, meme, and emoji on platforms including Kik, GroupMe, and Twitter.

The idea was that Tay would learn more and more over time through social interactions and exposure to cultural shifts.

In practice, it took less than 24 hours to become all too clear that Tay had a lot to learn. Like how not to be a completely insensitive, Hitler-loving, racist botfuck. When internet trolls baited Tay into saying inappropriate things, the bot DID NOT shy away, saying things that would get any of us fired from our jobs.

Here are some of the worst, most offensive tweets, which Microsoft has since deleted. The company has also taken the bot offline.

It’s important to note, as Business Insider points out, that Tay’s racism is not a product of Microsoft or of the bot itself. It was solely the work of internet jackasses exploiting Tay’s “repeat after me” feature.

Come on, Microsoft. You know this world can’t have anything nice.



[h/t Business Insider, Washington Post]

Matt Keohan
Matt’s love of writing was born during a sixth grade assembly when it was announced that his essay titled “Why Drugs Are Bad” had taken first prize in D.A.R.E.’s grade-wide contest. The anti-drug people gave him a $50 savings bond for his brave contribution to crime-fighting, and upon the bond’s maturity 10 years later, he used it to buy his very first bag of marijuana.