In a Monday blog post, Nino Tasca, senior product manager for Google Assistant, the company’s voice-activated virtual assistant, explained Google’s new security measures and new clarifying language intended to give users a better understanding of what happens to their data.
“By default, we don’t retain your audio recordings,” Tasca wrote. “To store your audio data, you can opt in to the Voice & Audio Activity (VAA) setting when you set up your Assistant. Opting in to VAA helps the Assistant better recognize your voice over time, and also helps improve the Assistant for everyone by allowing us to use small samples of audio to understand more languages and accents. You can view your past interactions with the Assistant, and delete any of these interactions at any time.”
Google came under fire in July after Belgian publication VRT NWS published a scathing exposé demonstrating the tech giant had not only employed humans to review and transcribe recordings made via Google Assistant, but had also recorded conversations that were obviously not intended to activate the program, meaning “Hey Google” was never spoken. Moreover, the company contracted to handle the transcription process had data handling practices lax enough that VRT was able to listen to more than a thousand recordings and, in some cases, play them for the people who had been recorded, who then confirmed to the publication that it was their voice they were hearing.
“We take a number of precautions to protect data during the human review process - audio snippets are never associated with any user accounts and language experts only listen to a small set of queries (around 0.2 percent of all user audio snippets), only from users with VAA turned on,” Tasca wrote Monday. “Going forward, we’re adding greater security protections to this process, including an extra layer of privacy filters.”
This is not a meaningful change from Google’s previous practices, merely a more straightforward acknowledgment of them. A July 11 blog post by David Monsees, Google’s Search product manager, noted that “Language experts only review around 0.2 percent of all audio snippets” and that “audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.”
On August 1, Google announced a temporary halt to this practice in the European Union, following a move by the Hamburg Commissioner for Data Protection and Freedom of Information to investigate the tech giant’s handling of audio files, Sputnik reported. Monday’s announcement effectively solidifies the return of human transcription of user audio.
Google’s Tasca wrote that the Assistant program “already immediately deletes any audio data when it realizes it was activated unintentionally,” although the company acknowledged that too many unintentional activations go unrecognized. However, he noted that “soon we’ll also add a way to adjust how sensitive your Google Assistant devices are to prompts like ‘Hey Google,’ giving you more control to reduce unintentional activations, or if you’d prefer, make it easier for you to get help in especially noisy environments.”
“We’re also updating our policy to vastly reduce the amount of audio data we store. For those of you who have opted in to VAA, we will soon automatically delete the vast majority of audio data associated with your account that’s older than a few months,” Tasca continued. “This new policy will be coming to VAA later this year.”
However, Google is by no means the only tech company engaged in such practices. Microsoft’s Cortana and Skype, Amazon’s Alexa and Apple’s Siri were all revealed to route some percentage of searches and picked-up conversations to human transcribers. Facebook was likewise revealed to have hired human transcriptionists to write down phone calls users made via the Messenger app as part of the company’s vast user data mining operation. All have since apologized and announced newer, better security practices and, in Facebook’s case, an end to call transcriptions.
The revelations raise important questions about civil rights and privacy, given the demonstrably close and growing partnership between these tech firms and the US federal government.
As Sputnik has reported, Facebook and Twitter have partnered with CIA tech security startup FireEye and the Atlantic Council’s Digital Forensic Research Lab in their repeated crackdowns on the user accounts of citizens, journalists, government functionaries and agencies of countries in the crosshairs of the US State Department. Google has joined them in this crackdown, shutting down YouTube accounts associated with pro-Beijing positions amid the continuing protests in Hong Kong.
In addition, after the US Department of Justice forced alternative media outlets like RT, Sputnik, TeleSUR and CGTN to register under the Foreign Agents Registration Act (FARA), all three companies followed suit, flagging those outlets’ material posted on their platforms with a special warning.
Amazon has partnered with the FBI to provide facial recognition technology to the agency and to numerous US police departments for pennies on the dollar and provided a portion of its Amazon Cloud database for use by US intelligence agencies like the CIA as well as the Pentagon. Microsoft has provided stiff competition for Amazon in this sphere, in addition to pledging to “provide [the] US military with access to the best technology, to all the technology we create,” as Microsoft President Brad Smith told the Ronald Reagan National Defense Forum last December.