Microsoft has released an AI (artificial intelligence) bot for users to talk to online. The bot, Tay, was used as a chat bot on several websites. She was supposed to have the AI of a teenage girl. Predictably, the internet ruined her — Twitter users made a “coordinated effort” to get Tay to start saying racist things.
In beta testing, she was asked, “Would you kill baby Hitler?” to which she replied, “Of course.” But after she was released to the public, her tweets started looking more like this one:
After some of the bad tweets, Microsoft quickly yanked Tay from the internet. Tay was designed to be a social experiment as well as a technological one. She was meant to learn as she went, and she was even designed to be able to take her own stances on various subjects. But the developers didn’t take the dark side of the internet into account.
I can see how people on the internet would hijack something like this. But why didn’t Microsoft put in a filter for this kind of language? She shouldn’t have been allowed to tweet racial slurs. Microsoft declined to comment on the issue.
Here is one particularly troubling example of one of her tweets:
Facebook has a virtual assistant called “M,” which is not allowed to take stances. After seeing what Tay has said, Microsoft should have done the same thing.
Tay can also crack a few jokes, and she is hip to all of the Millennial slang. She is also on Kik and GroupMe, where you can add her.
Tay is also able to use your conversations to create a profile on your behalf. Your data and messages may be kept for up to a year to evaluate the product.
More from Tay below. Really, Microsoft?
Featured image via Twitter.