Robots Are Being Taught to Be Crude Sexists and Racists, Scientists Fear

© AP Photo / Charles Kelly: Ben Skora of Palos Heights, Ill., and his robot "Arok," shown on Feb. 2, 1977. The robot can be programmed to perform repetitive or individual tasks through push-button control, can help with repairs on himself (holding soldering irons, meters, etc.), and does butler work and acts as a waiter.
It could become every misogynist's dream: a robot that automatically hands a man a cold beer when he walks into the kitchen before turning to help his wife with household chores. Companies such as Facebook and Google, both actively involved in developing domestic robots, are being urged to take remedial action to prevent this from happening.

While some may view it as the perfect companion, scholars are now warning that artificially intelligent robots and devices of the future are being taught to be racist, sexist and otherwise prejudiced by learning from humans.


Silicon Valley companies currently train their artificial intelligence on hundreds of thousands of captioned images, so that their software can interpret other, unlabeled pictures.

Academics at a number of US universities have now discovered these datasets are often biased, disproportionately depicting women working in the home while showing men playing sports and socializing.

Algorithms trained on these datasets are now drawing conclusions that are both sexist and false.
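The kind of skew the academics describe can be surfaced with a simple co-occurrence count over a dataset's captions. Below is a minimal sketch in Python; the caption list and the keyword lists are made-up assumptions for illustration, not data from the studies mentioned here.

```python
from collections import Counter
import re

# Toy captions standing in for a large labeled image dataset
# (hypothetical examples, not drawn from the studies cited above).
captions = [
    "a woman cooking in the kitchen",
    "a man playing football in the park",
    "a woman washing dishes at the sink",
    "a man laughing with friends at a bar",
    "a woman cooking dinner for the family",
]

GENDER_WORDS = {"woman": "female", "man": "male"}  # assumed keyword lists
ACTIVITIES = ["cooking", "washing", "playing", "laughing"]

# Tally how often each activity word co-occurs with each gendered word.
counts = Counter()
for caption in captions:
    tokens = set(re.findall(r"[a-z]+", caption.lower()))
    for word, gender in GENDER_WORDS.items():
        if word in tokens:
            for activity in ACTIVITIES:
                if activity in tokens:
                    counts[(gender, activity)] += 1

for (gender, activity), n in sorted(counts.items()):
    print(f"{gender:6s} + {activity:8s}: {n}")
```

On a real dataset, a heavy imbalance in such counts (say, "cooking" appearing almost exclusively alongside female words) is exactly the signal that training will bake the stereotype into the model.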

For instance, Kai-Wei Chang, a computer scientist at the University of Virginia, warned that a robot trained on such data might refuse to recognize a woman as a firefighter or police officer, while declining to label women as lawyers or doctors.

Researchers are urgently trying to redress the situation by proposing ways for programmers to identify any such bias and eliminate it.
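One widely cited proposal along these lines, the "hard debiasing" technique of Bolukbasi et al. for word embeddings, estimates a gender direction in the vector space and projects it out of words that should be neutral. The sketch below uses made-up three-dimensional vectors purely for illustration:

```python
import numpy as np

# Toy word vectors (illustrative values only; real embeddings have
# hundreds of dimensions learned from large text corpora).
she = np.array([0.1, 0.9, 0.3])
he = np.array([0.9, 0.1, 0.3])
doctor = np.array([0.7, 0.3, 0.6])  # hypothetically skewed toward "he"

# Estimate the gender direction as the normalized difference he - she.
gender_direction = (he - she) / np.linalg.norm(he - she)

# Neutralize: subtract the vector's projection onto the gender direction,
# leaving a vector with no component along it.
debiased = doctor - np.dot(doctor, gender_direction) * gender_direction

print("gender component before:", np.dot(doctor, gender_direction))
print("gender component after: ", np.dot(debiased, gender_direction))  # ~0
```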

Language and Word Embeddings

The warning follows another study, in which researchers from Princeton University and the University of Bath analyzed millions of words online, examining how closely different terms are associated with one another to establish how language encodes meaning.


The researchers found male names were more closely associated with career-related terms than female ones, which in turn were more closely associated with words relating to the family.

Female names were also strongly associated with artistic terms, while male names were found to be closer to maths and science ones.

There were strong associations between European or American names and pleasant terms, and African-American names and unpleasant terms.
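The underlying measurement is a word-embedding association test: compare cosine similarities between name vectors and attribute vectors. A minimal sketch follows, with made-up four-dimensional vectors (the names, attribute words and all numbers are illustrative assumptions; the published study worked with embeddings trained on billions of words of web text):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy embeddings (illustrative values only).
vectors = {
    "john": np.array([0.9, 0.1, 0.7, 0.2]),
    "amy": np.array([0.2, 0.9, 0.1, 0.8]),
    "career": np.array([0.8, 0.2, 0.9, 0.1]),
    "family": np.array([0.1, 0.8, 0.2, 0.9]),
}

def association(word, attr_a, attr_b):
    """Positive if `word` sits closer to attr_a than to attr_b."""
    return (cosine(vectors[word], vectors[attr_a])
            - cosine(vectors[word], vectors[attr_b]))

for name in ("john", "amy"):
    score = association(name, "career", "family")
    print(f"{name}: career-vs-family score = {score:+.3f}")
```

Averaging such scores over many names and attribute words is essentially how studies of this kind quantify the biases described above.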

A stark real-world demonstration of how quickly such systems absorb human prejudice came when a Microsoft chatbot called Tay was given its own Twitter account, @TayandYou, and allowed to interact with the public.

Thankfully not for long: it soon turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories!

"Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes," the study authors said.

​​"Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable. 

"Already, popular online translation systems incorporate some of the biases we study. Further concerns may arise as AI is given agency in our society," they concluded.
