What do you ‘consent’ to under Zoom’s terms?
Posted: September 5, 2023
In early August 2023, people noticed an update to Zoom’s terms of service that had been made in March 2023.
Like some other communications platforms, Zoom’s terms grant the company certain rights—including the right to train its AI models on data generated or provided by its users.
The company has since rowed back on its terms through a series of statements and contractual modifications. But how substantial are these changes? And do Zoom’s terms align with applicable data protection and privacy laws?
What’s ‘Service-Generated Data’?
The controversial clauses in Zoom’s terms concern two types of data: “Service Generated Data” and “Customer Content”.
Zoom uses the term “Service Generated Data” to mean “telemetry data, product usage data, diagnostic data, and similar data”.
This sort of data tells Zoom how users engage with its products. Service Generated Data might include information about software bugs and how often certain products are used.
“We consider this to be our data,” said Zoom’s Chief Product Officer (CPO) in a blog post after the controversy broke out.
“We can use service generated data to make the user experience better for everyone on our platform,” the CPO continued.
What’s ‘Customer Content’?
While Service Generated Data is generally technical information created via your interactions with Zoom, Customer Content is more sensitive.
Customer Content is the “data, content, communications, messages, files, documents, or other materials” you provide or generate when using Zoom.
Examples of Customer Content might include call recordings, private chat messages, and audio transcripts.
What did Zoom have permission to do with this data?
Until mid-August, Section 10.2 of Zoom’s terms asserted that users “consented” to Zoom’s “access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage” of Service Generated Data, including for AI-training purposes.
And according to Section 10.4 of Zoom’s terms, users granted Zoom the right to “perform all acts” using “Customer Content”, including “AI and ML (machine-learning) training and testing”.
Zoom’s terms have now changed considerably, as we’ll explain below.
Are Zoom’s terms fair and legal?
Taken at face value, the March 2023 version of Zoom’s terms of service raised several legal issues, both for Zoom and for its users.
We’ll now look at three potential issues with the language Zoom used in its terms. Following this, we’ll provide Zoom’s perspective and discuss what the company actually does with its users’ data.
1. ‘Consent’ under the GDPR
Zoom uses the word “consent” in its terms, implying that users give permission for certain uses of their data merely by ticking the “I agree” box after scrolling through the platform’s long terms of service (or perhaps ignoring them).
Under most data protection and privacy laws, this action would not constitute “consent” on the user’s part.
The EU and UK General Data Protection Regulation (GDPR)—and the many laws worldwide that the GDPR has inspired, such as in Brazil, India, and China—define “consent” strictly.
An organisation cannot rely on a person’s consent unless that person has signalled their specific, informed, unambiguous, and freely given agreement via an affirmative action.
Several recently enacted US state privacy laws specify that “consent” does not include agreement to “broad terms of service” containing multiple provisions related to data processing.
Zoom would likely be unable to demonstrate that users had consented to this type of data processing merely by agreeing to the terms of service.
2. The ePrivacy Directive
Whether or not Zoom obtains valid consent, does the company actually require consent for this type of activity? Possibly.
Another EU law, the ePrivacy Directive—which has been implemented in the national laws of the UK and all countries in the European Economic Area (EEA)—requires organisations to get consent before “storing information” or “accessing information stored” on people’s devices.
There are exceptions to this rule, such as where accessing the data is essential for providing a service the user has requested, but it’s unlikely that Zoom could rely on this exception to train its AI systems.
Note that Customer Content and Service Generated Data might not fall under the ePrivacy Directive, as it’s hard to say whether such data is stored on or accessed from users’ devices.
However, technologies such as cookies are covered by the ePrivacy Directive—and the law can apply to certain types of “telemetry data”, which Zoom covers under its definition of Service Generated Data.
3. Health Privacy Laws
Companies dealing with health information and other sensitive data must be cautious when choosing service providers, including communications platforms like Zoom.
Let’s consider this issue from the perspective of companies operating under US health privacy laws such as the Health Breach Notification Rule (HBNR) and the Health Insurance Portability and Accountability Act (HIPAA).
Health information under such laws includes the obvious examples of medical records and diagnoses. But health information can also include the types of data Zoom considers Customer Content and, possibly, Service Generated Data.
We know this due to recent enforcement activity and messaging from regulators such as the Federal Trade Commission (FTC) and the Office for Civil Rights (OCR), which have focused on cookies and device data in health-related contexts.
Exposing health information to Zoom’s AI training process (along with many of the other uses described in the platform’s terms) could constitute a breach of the health privacy laws mentioned above.
However, it’s important to note that Zoom offers separate contracts, written in more restrictive language, for corporate users operating in fields such as health and education.
What does Zoom actually do with this data?
Since this controversy broke out, Zoom has repeatedly attempted to clarify what it does and does not do with the types of data above, with a focus on Customer Content.
In a series of blogs, press releases, and social media posts put out by Zoom’s executive team, the company stated that it does not use Customer Content for any purposes other than those requested by its users.
Essentially, Zoom integrates some AI products into its software, and the company will separately request consent to train its AI models on users’ inputs.
Zoom also updated its terms of service, tightening up the language and adding the following clause to Section 10.2:
“Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.”
Zoom’s terms now also treat Service Generated Data differently, merely stating that the company owns “all rights, title, and interest” in Service Generated Data.
While much shorter, this clause is essentially equivalent in meaning to the original.
As such, corporate Zoom users should carefully evaluate the company’s terms of service before using the platform to communicate sensitive data—as with any service provider.