Has ‘Facecrime’ Arrived?
In this week’s Brave New World we discuss the implications of Microsoft’s new Realtime Crowd Insights technology, and much more, with Dr Anders Sandberg, Research Fellow at the Future of Humanity Institute and Oxford Martin Senior Fellow at the University of Oxford.

George Orwell wrote about ‘facecrime’ in his seminal novel “1984”. Microsoft’s new technology, which can supposedly judge people’s emotions, comes worryingly close to providing the conditions for exactly that.

‘It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offense. There was even a word for it in Newspeak: facecrime, it was called.’ (1.5.65)

Dr Anders Sandberg of the University of Oxford goes into more detail about the implications of this technology:

“Face recognition technology is something that we humans are very good at, and it has been hard to teach machines to do….What has happened recently is that ‘Deep Learning’ has become much more powerful, and this has enabled computers to get almost to a human level in recognising who they are looking at….To some degree, this is what we expect security guards or somebody watching a group of people to be able to do. The difference is that we automate it: instead of requiring a lot of people to be in the loop, you can have software looking for angry or happy people, and maybe give the politician a little readout on what the audience is thinking….That in itself is not necessarily a threat. The threat is how you use or misuse this technology. The danger is that we will lean on this technology and use it, for example, to find terrorists at airports….This is where people get into trouble, because we don’t have that many examples of people who look like terrorists….So instead of tracking terrorists, the software might, for example, track somebody who happens to have dirty thoughts about an air hostess.”
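
Dr Sandberg’s point about the scarcity of real terrorists is, in statistical terms, the base-rate problem: even a very accurate detector will be wrong about most of the people it flags when the thing it is looking for is vanishingly rare. A minimal sketch of the arithmetic, using Bayes’ theorem and purely illustrative numbers (the accuracy and prevalence figures below are assumptions, not anything published by Microsoft), shows why:

```python
# Base-rate problem: why automated "terrorist detection" mostly produces false alarms.
# All numbers below are illustrative assumptions, not figures from any real system.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a flagged person is actually a true target, via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Suppose the system catches 99% of real targets (sensitivity),
# wrongly flags only 1% of innocent travellers (1 - specificity),
# and real targets are 1 in a million passengers (prevalence).
ppv = positive_predictive_value(sensitivity=0.99, specificity=0.99, prevalence=1e-6)
print(f"Chance a flagged passenger is a real target: {ppv:.4%}")  # roughly 0.01%
```

With those assumed numbers, only about one flagged passenger in ten thousand would be a genuine target, which is exactly the kind of over-reliance Dr Sandberg warns about below.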

Once we go beyond a certain line and let machines decide what kind of emotional state we are in, surely we are asking for trouble?

“The problem is that machines make mistakes, and people working in machine learning are very aware of what machines can do, but when you put them [machines] in the hands of a police force or advertising agency, there is a great deal of overconfidence in how good the system is. After all, the market has told us that this is really good at finding terrorists. So if you lean too much on the system, it’s likely to turn out wrong.”

Is this technology regulated?

“There is little regulation, although if you are going to arrest somebody, you had better have a good reason for doing that. The problem is that if the computer tells the policeman that you look suspicious, it is not too difficult to find other evidence on the net that you really are suspicious. There is a real worry here that you might have an over-reaction. Of course, there aren’t that many policemen around, so it might not be possible to track everybody who looks suspicious. There is another risk: that the computer allows a racist policeman to make an arrest, because, look, the computer supports my slightly biased view….I am pretty confident that they are going to focus more on lower-class people than upper-class people. Many more surveillance cameras are put up outside pubs than in posh suburbs. One reason for that is the fact that the very presence of cameras makes people a bit more law-abiding. But the trouble with this is that it might move the trouble somewhere else.”

“The most frightening part of Orwell’s writing is that you don’t even need to make it a punishable offence not to smile at the right time; people just need to believe that they might get punished….We see this happening already, with fewer people going to certain sites — say, about terrorism — because people are afraid of being seen doing that.”

“The technology is here and we can’t get rid of it; it is going to be so much a part of our social media, autonomous machines and so on that we need to figure out how to integrate it into a good and open society.”

We do have our work cut out for us, what with the ‘Snooper’s Charter’ just about to be passed, on the quiet, in the UK. The trouble is, all of this information can be stored digitally, but we have no idea who will be in power in the future or what kind of regime we may be living in.

Dr Sandberg continued:

“We have a kind of one-way mirror relative to the future: the future can see us, but we have no real idea about what the future is. However, we do have one advantage, because we do have an effect on the future in that, in some part, we are going to create it. But a system which is run now by the nicest possible people could be misused after a regime change, or when somebody hacks into it and makes use of the database.”
