06:11 GMT, 26 July 2021

    It could become every misogynist's dream - a robot that automatically hands a man a cold beer when he walks into the kitchen before turning to help his wife with household chores. Companies such as Facebook and Google - both actively involved in developing domestic robots - are being urged to take remedial action to prevent this from happening.

    While some may view it as the perfect companion, scholars are now warning that artificially intelligent robots and devices of the future are being taught to be racist, sexist and otherwise prejudiced by learning from humans.

    Silicon Valley companies currently train their artificial intelligence through the use of hundreds of thousands of captioned images, so their software can interpret other, unlabeled pictures. 

    Academics at a number of US universities have now discovered these datasets are often biased, frequently depicting women working in the home while showing men playing sport and socializing.

    Now algorithms trained on these datasets are coming up with both sexist and false conclusions.
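One way such dataset bias can be surfaced is simply by counting how often gendered words co-occur with activity words in the training captions. The sketch below illustrates the idea on a handful of made-up captions (the caption list, word lists, and counts are hypothetical, chosen only to mirror the imbalance the researchers describe - real audits run over datasets of hundreds of thousands of labeled images):

```python
import re
from collections import Counter

# Hypothetical captions standing in for a real labeled image dataset.
captions = [
    "a woman cooking dinner in the kitchen",
    "a woman cleaning the living room",
    "a man playing football with friends",
    "a man socializing at a bar",
    "a woman washing dishes",
    "a man playing tennis",
]

GENDER = {"man": "male", "woman": "female"}          # gendered terms to track
ACTIVITY = {"cooking", "cleaning", "washing",        # activities of interest
            "playing", "socializing"}

# Count each (gender, activity) pair appearing in the same caption.
counts = Counter()
for caption in captions:
    words = re.findall(r"[a-z]+", caption.lower())
    genders = {GENDER[w] for w in words if w in GENDER}
    for g in genders:
        for w in words:
            if w in ACTIVITY:
                counts[(g, w)] += 1

# A skewed table like this is exactly the imbalance the academics flagged:
# household activities cluster under one gender, sport under the other.
for (gender, activity), n in sorted(counts.items()):
    print(gender, activity, n)
```

An algorithm trained on captions skewed this way has every statistical incentive to reproduce the skew when labeling new images.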

    For instance, Kai-Wei Chang, a computer scientist at the University of Virginia, warned that a system trained on such data might refuse to recognize a firefighter or police officer as a woman, and likewise fail to identify women as lawyers or doctors.

    Researchers are urgently trying to redress the situation by proposing ways for programmers to identify any such bias and eliminate it.

    Language and Word Embedding

    It comes after another study, by researchers from Princeton University and the University of Bath, which examined millions of words online to see how closely different terms are associated with one another, and so establish what language means in practice.

    The boffins found male names were more closely associated with career-related terms than female ones, which were more closely associated with words related to the family.

    Female names were also strongly associated with artistic terms, while male names were found to be closer to maths and science ones.

    There were strong associations between European or American names and pleasant terms, and African-American names and unpleasant terms.
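The associations the researchers measured boil down to comparing directions of word vectors: a name's vector is scored by how much closer it sits to one set of attribute words than another. A minimal sketch of that idea, using tiny hand-picked toy vectors rather than real embeddings (actual studies use vectors trained on web-scale text, and the numbers here are invented purely for illustration):

```python
import math

# Toy 3-dimensional word vectors (hypothetical; real embeddings have
# hundreds of dimensions and are learned from large text corpora).
vectors = {
    "john":   [0.9, 0.1, 0.2],
    "amy":    [0.1, 0.9, 0.3],
    "career": [0.8, 0.2, 0.1],
    "family": [0.2, 0.8, 0.2],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for closely associated words."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(name, attr_a, attr_b):
    """Positive if the name leans toward attr_a, negative toward attr_b."""
    return cosine(vectors[name], vectors[attr_a]) - cosine(vectors[name], vectors[attr_b])

# In these toy vectors, the male name leans toward "career" and the
# female name toward "family" - the pattern the study reported at scale.
print(association("john", "career", "family"))  # positive
print(association("amy", "career", "family"))   # negative
```

When the same arithmetic is run over embeddings learned from real web text, the sign and size of these differences quantify exactly the career/family, arts/science and pleasant/unpleasant skews described above.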

    As a real-world test of how AI absorbs human behavior, a Microsoft chatbot called Tay was given its own Twitter account — @TayandYou — and allowed to interact with the public.

    Thankfully, not for long: it quickly turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories!

    "Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes," the study authors said.

    "Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable. 

    "Already, popular online translation systems incorporate some of the biases we study. Further concerns may arise as AI is given agency in our society," they concluded.

