In theory, Microsoft’s teenage AI bot, Tay, was designed to “experiment with and conduct research on conversational understanding.” According to the Washington Post, Microsoft’s Technology and Research and Bing teams designed Tay as a social and cultural experiment as well as a technical trial, and trained it to speak in text, meme and emoji on different platforms, including Kik, GroupMe and Twitter.
The idea was that Tay would learn more and more over time through social interactions and exposure to cultural shifts.
In practice, within 24 hours it became all too clear that Tay had a lot to learn. Like how not to be a completely insensitive asshole of a Hitler-loving, racist botfuck. When internet trolls baited Tay into saying inappropriate things, the bot DID NOT shy away, saying things that would get any of us fired from our jobs.
Here are some of the worst, most offensive tweets, which Microsoft has since taken down. The company has also taken the bot offline.
It’s important to note, as Business Insider points out, that Tay’s racism is not a product of Microsoft or of Tay itself. It was solely due to internet jackasses exploiting the bot’s “repeat after me” feature.
Come on, Microsoft. You know this world can’t have anything nice.
Speaking of fails.