At RightsCon, a conference held in Brussels on March 30 and aimed at shaping the global conversation on the future of the internet, technology consultant Ken Munro demonstrated how the camera on an internet-connected vibrator can be hacked, drawing a gasp of shock from the auditorium.
There is an ever-growing global market for remote-controlled pleasure devices, particularly among couples in long-term relationships and those who find themselves temporarily separated, and many of these devices are equipped with cameras capable of producing high-quality images.
In March, Standard Innovation Corporation (SIC), the firm behind the popular Bluetooth-enabled sex toy We-Vibe 4 Plus, settled a US class action lawsuit to the tune of US$4 million after it emerged that the company had collected sensitive data about customers while the device was in use, ostensibly for "diagnostic purposes."
However, beyond the privacy and dignity implications of such breaches, there are also questions of criminality: has a sex toy hacker committed a sex crime, and if so, should they be prosecuted, and how?
"What happens when you think you are interacting with your partner, but someone has [hacked] your sex device? What happens when you find out that the person on the other end of that isn't somebody you know whatsoever? What are the legal implications, what are the security implications, what are the policy implications when you can start hacking into sex toys?" asked Amie Stepanovich of Access Now at the conference.
Criminal ramifications depend on a country's laws. In Denmark, for instance, a sex act constitutes rape only if it involves forced penetration of some kind, whereas in Belgium sexual assault and rape cover any sex act performed "by whatever means" without consent.
This means a hacker in Denmark who took control of a sex toy that was then used would not be criminally liable, while one in Belgium could be guilty of rape.
Complicating the picture further is jurisdiction: if the victim is in another country, do that country's laws apply instead, or as well, or not at all? And what if a sex toy or robot equipped with artificial intelligence becomes predatory? Is the manufacturer responsible, or the owner, or the robot itself?
When software is in everything, where does liability ultimately lie? Lawmakers around the world are attempting to place robotics and artificial intelligence on the political and legislative agenda, but progress has been slow, although there are signs the pace is beginning to quicken.
In January, the European Parliament adopted a text asking the European Commission to submit "a proposal for a legislative instrument on legal questions related to the development and use of robotics and AI" foreseeable in the next 10 to 15 years. The text, supported by 396 MEPs, included references to Frankenstein's monster and to science fiction author Isaac Asimov's Three Laws of Robotics.
The rationale behind creating a legal status for robots was that "at least" the most sophisticated autonomous devices could be established as electronic persons, and thus "be responsible for making good any damage they may cause."
Such ideas have been treated with extreme skepticism by tech experts. In fact, at the RightsCon conference, former US ambassador to the United Nations Human Rights Council Eileen Donahoe called the idea "very dangerous."
"I understand the intention behind it, to ensure somebody [is] accountable and liable for damage caused by robots — but I don't think the consequences of giving personhood to robots have been thought through," Donahoe said.
The Commission is currently gathering views from citizens and interest groups via a public consultation, requesting input on "emerging Internet of Things and robotics liability challenges."