San Francisco, Calif. (August 11, 2023) – Zoom, the popular videoconferencing application, recently addressed concerns sparked by updates to its terms of service (TOS) governing the use of customer data to train its artificial intelligence (AI) models. The updated terms, which became effective on July 27, have prompted conversations about data privacy, content ownership, and the ethical implications of AI development.

The Changes in Terms of Service

In March of this year, Zoom updated its terms of service. Section 10.4, in particular, introduced language that raised privacy concerns among the company's user base: users agree to grant Zoom a broad license for various purposes, including "machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom's other products, services, and software, or any combination thereof."

The Concerns

The updated terms sparked concern among users and technology experts alike. Some users worried that the language gave Zoom broad permission to use customer data – including audio, video, and chat content – for AI training purposes without their explicit consent. The potential implications for user privacy, and for ownership of the content being used, raised alarms in the tech community and beyond.

Zoom's Response

In response to these concerns, Zoom has emphasized that it will not use audio, video, or chat content for training its AI models without obtaining user consent. Zoom's updated terms do specify that "service-generated data" – information on product usage, telemetry, and diagnostics – can be used for AI model training. Such data is critical for refining AI algorithms and models, but the company says it is working to ensure that user content remains protected and that users retain control over how their data is used.

Zoom's Chief Product Officer Smita Hashim reiterated that users continue to own and control their content and that Zoom has updated its terms of service to reassure users that their data will not be used for AI training without consent. Ms. Hashim further emphasized that users can choose whether to enable generative AI features and whether to share their content with Zoom for product improvement purposes. In a LinkedIn post on August 8, 2023, Zoom CEO Eric Yuan echoed that sentiment, writing that "we would absolutely never train AI models with customers' content without getting their explicit consent."

Despite these assurances, concerns remain about the clarity of Zoom's consent mechanisms and the potential exposure of sensitive information.

The Balance Between AI Development and Privacy

Zoom's response reflects the broader conversation about AI development and data privacy. As AI technology advances, companies will increasingly seek to improve their products and services using large datasets. That drive, however, must be balanced against respecting user privacy, obtaining proper consent, and ensuring transparency in data usage.

Last month, the White House confirmed that Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have agreed to self-regulate their AI development. Given the emphasis on AI regulation from global players including China, the EU, and Singapore, companies should begin discussing AI governance with respect to both their internal processes and their external vendor management.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.