
    Microsoft Apologizes for Tay Chatbot’s Offensive Tweets


    Microsoft has apologized for the derogatory comments that its artificial intelligence (AI) chatbot, Tay, made on Twitter.

    MOSCOW (Sputnik) – "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Peter Lee, corporate vice president and head of Microsoft Research, said in a statement on Friday.

    The chatbot was launched on Wednesday. According to Lee, within the first 24 hours of Tay’s operation, certain users mounted a "coordinated attack" targeting the chatbot’s vulnerabilities.

    "As a result, Tay tweeted wildly inappropriate and reprehensible words and images…Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay," Lee said.

    Microsoft modelled Tay on a teenage girl, saying on her Twitter page that "The more you talk the smarter Tay gets." However, as the chatbot learned from her online conversations, tweets from her account shifted from "humans are super cool" to "I just hate everybody" and "Hitler was right I hate the jews." Tay's last tweet, posted on Thursday, said she needed to sleep.

    Microsoft has another teenage-girl chatbot, called Xiaoice. According to Lee, this robot is being used by some 40 million people.

