Google on Monday announced tweaks to its privacy policy following earlier reports that its Google Assistant software was eavesdropping on user conversations.

Although Google does not retain Assistant audio recordings by default, the company has added privacy protections to the transcription process it uses to check the accuracy of responses to voice interactions, according to Senior Product Manager Nino Tasca.

Google announced other changes to the way Assistant handles voice recordings and said it would purge most saved recordings by year’s end.

The privacy control changes resulted from Google’s “falling short” of meeting user expectations on data transparency, Tasca said.

It should be easy for voice assistant users to understand how and why their data is used so they can make choices that are right for them, he added.

“It’s good to know that Google is listening to consumers’ concerns regarding their privacy,” said privacy rights attorney Donata Kalnenaite, president of Termageddon.

While it would be great for Google to approach this via privacy by design and implement features that respect user privacy from the beginning, this is definitely a step in the right direction, she told the E-Commerce Times.

Policy Shortfall

What is not so good with Google’s approach to policy changes is the ongoing lack of transparency. Google’s current explanation lacks “substantive information,” Kalnenaite noted.

It would be helpful if Google explained the “additional privacy protections” when it comes to people listening to audio from Assistant. It is still unclear when — or if — Google will end the practice of having hired agents listen to the voice recordings, she pointed out.

Google’s artificial intelligence-powered Assistant software is available on many mobile and smart home devices. It is discouraging to see that actual people are still listening to the recordings, Kalnenaite said.

What falls short in the announcement is an attempt to give ultimate control of data back to the users, remarked Osiris Parikh, marketing coordinator at Summit Mindfulness.

“Google has a feature to let users know who and what third-party app is connected to their account, but cannot inform the user when their audio snippets are used for processing. This may change in the future,” he told the E-Commerce Times.

Google’s announcement signals great strides in data protection and privacy, but it still signals that much more needs to be done, Parikh said.

Work in Progress

Although he didn’t acknowledge any blameworthy practices on Google’s part, Tasca apologized for the company having fallen short of its own high standards — failing to make it easy enough for users to understand how their data was used.

Google takes several precautions to protect data during the human review process, explained Tasca, including the following:

  • Audio snippets are never associated with any user accounts;
  • Language experts listen only to a small set of queries (around 0.2 percent of all user audio snippets), and those come only from users with Voice and Audio Activity (VAA) turned on.

Google has added greater security protections to this process, including an extra layer of privacy filters, Tasca said, noting that users can opt in to the VAA setting.

Now and Later

Google Assistant already immediately deletes any audio data when it realizes it was activated unintentionally. That can happen when a noise sounds like “Hey Google.”

The company will add measures to help better identify unintentional activations and exclude them from the human review process, according to Tasca. Google also will add a way for users to adjust how sensitive the devices are to prompts like “Hey Google.”

That will reduce unintentional activations and make it easier to get help in noisy environments.

As part of the audio tightening process, Google has updated settings to emphasize that when users turn on VAA, human reviewers may listen to snippets of their audio to help improve speech technology.

Existing Assistant users will have the option to review their VAA setting and confirm that preference before any human review process resumes. Google will not include audio in the human review process until a user reconfirms the VAA setting as on.

Diligent Deleting

One of Google’s privacy principles is to minimize the amount of data it stores. The company plans to update its policy to reduce the amount of audio data it retains.

Later this year, Google automatically will delete the majority of audio data more than a few months old that is associated with user accounts. That purge will apply to users who have opted in to VAA, according to Tasca.

To check your current settings and learn more about the controls available, visit the “Your data in the Assistant” page.

The changes Google recently made to Assistant's privacy policy seem to provide consumers with more simplicity, transparency and control over how their personal voice data is used, noted Adam Fingerman, chief experience officer of ArcTouch.

“As a leading app developer, ArcTouch remains very optimistic about voice-based apps, and anything that Google and Amazon can do to make people more comfortable with their products is a positive step for the industry as a whole,” he told the E-Commerce Times.

Of Little Consequence

Regardless of how serious Google’s changes are in this particular case, it will not matter in the long term, argued John Franklin, partner at OC&C Strategy Consultants. Voice commerce is poised to explode.

Shopping through Amazon Echo, Google Home and devices equipped with Microsoft’s Cortana will grow to US$40 billion in 2022, from $2 billion today, the firm found.

The risk factors associated with privacy already are factored into the market's projected growth. Consumers have reached a tipping point where they are happy to trade privacy for convenience, a trade they seem willing to make over and over again as software-driven consumer technologies improve.

“Google’s latest changes to its privacy policy for its Home devices are designed to find a balance between ever noisier customer concerns regarding privacy and the desire to improve the quality of the user experience,” Franklin told the E-Commerce Times.

The biggest barrier to user adoption of voice assistants is inaccurate or poor quality responses, OC&C’s research suggests.

If Google is ever to unlock the full potential of voice — think shopping or preemptive interventions — then users are going to have to accept a certain degree of Big Brother behavior, Franklin said. “Having sold tens of millions of devices to date, at present customers appear willing to make this trade-off.”

Maybe Not

However, consumers actually have serious concerns over privacy involving voice-enabled assistants, suggests Selligent Marketing Cloud’s 2nd annual Global Consumer Index, released Tuesday. Selligent polled 5,000 global consumers on issues related to privacy, emerging channels (specifically, voice assistants), trust, customer experience and service.

Globally, nearly half of people polled (45 percent) used voice-enabled assistants like Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Generationally, 43 percent of Gen X, 56 percent of Gen Y, and 59 percent of Gen Z respondents said they were regular users.

A synopsis of survey results:

  • 51 percent of global respondents worried that their voice-enabled assistants were listening to them without consent or knowledge. Despite their high usage rates, younger users were more inclined to believe they were being listened to without their knowledge:
      • 58 percent of Gen Z;
      • 57 percent of Gen Y;
      • 47 percent of Gen X;
      • 3 percent of Baby Boomers;
  • 47 percent found voice-powered ad targeting nice/helpful when they were served ads based on what they asked their digital assistants; 52 percent, however, found it “creepy”;
  • Baby Boomers overall were most negative about voice-powered marketing: 65 percent found it creepy when they were served ads based on what they asked voice assistants;
  • 69 percent of people found it creepy when they received ads based on what they said in conversations without prompting Siri/Alexa/Assistant.
