Sputnik discussed the legal status of robots with Noel Sharkey, professor emeritus of AI and robotics at the University of Sheffield and co-signatory of the letter to the European Commission.
Sputnik: Why is granting robots a legal status considered to be a violation of human rights?
Sputnik: As robots become more and more developed and AI capabilities expand, how can a person be de facto responsible for a robot if it is making autonomous decisions that are basically impossible to predict?
Noel Sharkey: We're not there yet, and that's one of the things we should be able to predict: any decision that impacts people's lives. There is some fantasy about where the technology is going and how quickly it's going there; this is not supported by evidence at the moment. We're seeing a lot of artificial intelligence expansion through the business community, but this is still using techniques developed in the 1980s. Most of the work is machine learning using big data; we have not seen any independent intelligence from machines, so why would we try to develop laws based on speculation about future technology?
Sputnik: How do you see this moving forward? If we look at current Belgian law, article 1384 of the Belgian civil code stipulates liability for things under one's custody. Do you think that is an appropriate way to classify robots, especially robots that have machine-learning capabilities and can make some autonomous decisions?
Noel Sharkey: There is a slight distortion around the word "autonomous" and autonomous decisions, because in robotics the word autonomy came about when computers became small enough to go on a robot; it was as simple as that. It has nothing to do with philosophical autonomy, political autonomy or free will. An autonomous robot is a programmed device that runs under a program in the real world; it looks as if it's making autonomous decisions, but those decisions have been set up by a human.
Sputnik: What kind of status do you think is optimal for robots?
Sputnik: How soon do you think we're going to see machines and robots that can make their own decisions?
Noel Sharkey: It's happening already; it's happening in the financial sector all the time. I don't like to think of inanimate machines as making decisions; I prefer to say that "we are delegating decisions to them," because we've put the decisions into their algorithms. There are areas, for instance in conflict and policing, where robots are increasingly being developed to kill people, or to harm them, stun them or hurt them in different ways to prevent their actions, and here again we need full human accountability for that.
Sputnik: It sounds scary when you say that robots are already being developed and used to harm people. Is it really such a frightening prospect? Stephen Hawking seemed quite concerned; he thought that AI was a big threat to humankind.
The views expressed by the speaker do not necessarily reflect those of Sputnik