The heart of the matter is a bold assertion in an appendix to the terms, which states that Zoom “does not use audio, video, or chat content to train AI models without customer consent.” But here’s the catch: consent is assumed from mere use of the software, often without users ever reading the terms. As AI models increasingly learn from user data, such concerns are growing, underscored by recent lawsuits from creators alleging that their work is being replicated without permission.
Zoom’s recent introduction of AI features, such as meeting summaries and chat message composition, has further fanned the flames. Enabling these features prompts users to consent mid-conversation, via a dialog with a “Got it!” button in glaring blue and a “Leave Meeting” option in subdued gray. In effect, users must either agree or exit the conversation.
This revelation has ignited privacy and legal concerns, particularly within the EU. Zoom’s tactics seemingly skirt the General Data Protection Regulation (GDPR) and the ePrivacy Directive (the Directive on privacy and electronic communications), both of which protect user rights and require informed consent. The latter’s scope was extended in 2020 to cover services like Zoom, potentially putting the company’s approach at odds with European data protection law.
In response to inquiries, Zoom’s press office simply reiterated the language of its terms, further intensifying the debate over user data rights in the age of AI. As the clash between technology and privacy rages on, Zoom’s “tricky” privacy policy language raises the question of how far companies can go in harnessing user data without clear, informed consent.