Microsoft Apologizes for Tay Chatbot’s Offensive Tweets

Microsoft has apologized for the derogatory comments that its artificial intelligence (AI) chatbot Tay made on Twitter.

MOSCOW (Sputnik) – "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Peter Lee, Corporate Vice President, Head of Microsoft Research said in a Friday statement.

The chatbot was launched on Wednesday. According to Lee, within the first 24 hours of Tay’s operation a "coordinated attack" was launched by certain users who were targeting the chatbot’s vulnerabilities.

"As a result, Tay tweeted wildly inappropriate and reprehensible words and images… Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay," Lee said.

Microsoft modelled Tay as a teenage girl, saying on her Twitter page that "The more you talk the smarter Tay gets." However, tweets from her account changed from "humans are super cool" to "I just hate everybody" and "Hitler was right I hate the jews" as the chatbot learned from the online conversations she had. Tay's last tweet, posted on Thursday, said she needed to sleep.

Microsoft has another teenage girl chatbot called Xiaoice. According to Lee, this robot is being used by some 40 million people.
