Zoom for Improvement: Lessons Learned from Zoom’s Privacy and Security Backlash

As anyone working from home amid the ongoing coronavirus pandemic can appreciate, the video conferencing platform Zoom has rocketed to popularity as the app of choice for hundreds of millions of self-quarantined users.  This unprecedented growth of Zoom’s user base caused the company’s stock price (not to mention my kids’ daily screen time) to soar. 

But the company has also experienced a major backlash over its privacy and data security practices.  In just the last two weeks, Zoom has:

  • been hit with two federal class action lawsuits over its alleged disclosure of user personal information to Facebook;
  • drawn a letter from Senator Brown to the FTC questioning its claim that Zoom meetings are “end-to-end encrypted”;
  • disabled a data-mining feature that exposed meeting participants’ LinkedIn profile data; and
  • seen a wave of “Zoombombing” incidents traced to its default meeting settings.

Zoom’s CEO issued a statement last week acknowledging that the company had fallen short of its users’ privacy and security expectations.  That statement also announced the company was freezing feature development for 90 days and shifting all its engineering resources to focus on trust, safety, and privacy issues.

In this post, we discuss some key lessons that Zoom’s recent experience teaches on the privacy and data security front. 

The Longer You Wait, the Harder It Gets

Businesses—including and especially startups and emerging growth companies—sometimes treat privacy and data security as tomorrow’s problem, or a “nice-to-have” line item in the budget.  They defer building a privacy and data security program in favor of other, seemingly more pressing priorities.

But in a world of widespread and evolving privacy legislation, increasingly emboldened privacy regulators, and a steady drumbeat of data breaches, that strategy simply isn’t tenable anymore.  Companies that treat privacy and data security as anything but a business-critical issue may find themselves expending far more time, resources, and goodwill to fix problems that will inevitably arise than they would have in avoiding those problems in the first place. 

In Zoom’s case, an apparent lack of focus on privacy and data security has come back to haunt the company at a most inopportune time.  At a critical moment of growth, the company is reallocating all of its engineering resources to fix problems that it might have avoided through earlier and more careful consideration of the privacy and data security issues that its platform might encounter.  

Zoom’s CEO, for his part, seems to have learned this lesson, admitting in an interview over the weekend that he and the company “messed up” and declaring that “we need to slow down and think about privacy and security first. That’s our new culture.”

Know the Company You Keep

As we discussed in our last post, Zoom is facing two federal class action lawsuits that arise from its alleged disclosure of user personal information to Facebook.  Indeed, Zoom has already admitted that it implemented a feature called “Login with Facebook” in its platform.  According to the company’s statement, unbeknownst to Zoom, its implementation of that feature allowed Facebook to collect information about Zoom’s users and their devices that Zoom did not need to provide its services.  And critically, Zoom didn’t inform users that it was sharing this data with Facebook.

Separately, the Zoom platform reportedly included a data-mining feature that allowed users who subscribed to LinkedIn’s Sales Navigator service to covertly view meeting participants’ LinkedIn profile data—like locations, employer names, and job titles.  According to the reporting, the feature disclosed those details even if a meeting participant signed into the meeting under a pseudonym. 

Zoom disabled both features after they came under media scrutiny.  

As this aspect of the backlash shows, businesses must have a detailed understanding of how and when they share data with third parties, the rights third parties have to use that data, and the choices consumers have about that sharing. 
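One practical safeguard is to treat every third-party integration as an allow-list problem: decide up front which fields the integration genuinely needs, and pass only those.  The sketch below is a minimal illustration in Python; the Device record and LOGIN_FIELDS allow-list are hypothetical stand-ins, not Zoom’s or Facebook’s actual APIs.

```python
from dataclasses import dataclass, asdict

@dataclass
class Device:
    # Hypothetical device record an app might hold internally.
    model: str
    os_version: str
    advertising_id: str
    carrier: str

# Fields the login integration actually needs (hypothetical allow-list).
LOGIN_FIELDS = {"model", "os_version"}

def payload_for_login_sdk(device: Device) -> dict:
    """Forward only allow-listed fields to the third-party SDK,
    rather than handing it the entire device record."""
    return {k: v for k, v in asdict(device).items() if k in LOGIN_FIELDS}

device = Device("Pixel 4", "Android 10", "ad-1234", "Verizon")
print(payload_for_login_sdk(device))  # {'model': 'Pixel 4', 'os_version': 'Android 10'}
```

An allow-list fails closed: a field added to the record later stays private unless someone affirmatively decides to share it.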

Learn to Love Privacy by Design

Other aspects of Zoom’s trouble arose from default configuration settings or features in its platform that ran contrary to basic privacy and security principles and user expectations.  In particular, Zoom caught flak for:

  • An attention-tracking feature that notified the host of a Zoom meeting when an attendee clicked away from the active Zoom window for more than 30 seconds, without notifying the meeting’s attendees that the feature was in use; and
  • Default settings that made all meetings public and let attendees share their screens without permission from a host, giving rise to a new phenomenon known as “Zoombombing,” in which uninvited participants gain access to a meeting and broadcast hate speech or pornographic images.

Privacy by design—the practice of embedding privacy into the creation of new systems and technologies by default—might have flagged both issues as problematic from the beginning.  That practice has been enshrined in law in Europe since 2018, when the GDPR took effect and required organizations to build data protection principles into every stage of their processing of personal data.
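In code, privacy by design often shows up as defaults that fail closed.  Here is a minimal sketch of the idea in Python; the settings object and its field names are hypothetical, not Zoom’s actual configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MeetingSettings:
    # Privacy-protective values are the defaults; hosts must
    # opt in to looser behavior rather than opt out of it.
    require_password: bool = True
    publicly_listed: bool = False
    attendees_can_share_screen: bool = False  # host grants as needed
    attention_tracking: bool = False          # off unless attendees are told

# Safe unless someone makes a deliberate choice otherwise:
default_meeting = MeetingSettings()

# An open meeting now requires an explicit decision a reviewer can see:
open_meeting = MeetingSettings(require_password=False, publicly_listed=True)
```

Under defaults like these, both problems above would have required an affirmative, reviewable choice rather than arising silently.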

Here in the United States, privacy by design isn’t an express requirement of any law or regulation.  But it’s a wise practice: the Federal Trade Commission has been calling on companies to build privacy by design into the development of their products and services since its 2012 report on protecting consumer privacy.  And as Zoom has discovered, failing to properly consider privacy at the start of product development can lead to design decisions that put a product or service on a crash course with consumer and regulator expectations. 

Public Commitments about Data Security: No Place for Puffery

Until recently, Zoom claimed on its website and in a security whitepaper that all Zoom meetings were “end-to-end encrypted.”  The problem, according to Senator Brown’s letter to the FTC and media reports, is that Zoom used a questionable definition of that term, one that left room for Zoom itself to access unencrypted video and audio from meetings.  
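The distinction is easy to see in code.  With transport encryption, the provider’s server holds a key and can read content as it relays it; with end-to-end encryption, only the participants hold the key.  Below is a simplified Python sketch using the cryptography package’s Fernet recipe as a stand-in for real media encryption—an illustration of the two models, not Zoom’s actual architecture:

```python
# pip install cryptography
from cryptography.fernet import Fernet

frame = b"meeting audio frame"

# Transport encryption: each leg is encrypted, but the server holds
# the key, so the provider can read content while relaying it.
server = Fernet(Fernet.generate_key())
at_server = server.decrypt(server.encrypt(frame))  # provider sees plaintext

# End-to-end encryption: participants share a key the provider never
# sees; the server can only relay ciphertext it cannot decrypt.
shared_key = Fernet.generate_key()  # exchanged between clients only
alice, bob = Fernet(shared_key), Fernet(shared_key)
relayed = alice.encrypt(frame)      # opaque to the server
assert bob.decrypt(relayed) == frame
```

Only the second model makes the provider cryptographically unable to access meeting content; calling the first model “end-to-end encryption” is what drew the scrutiny here.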

Last week, the company issued a statement “apologizing for the confusion” it caused by “incorrectly suggesting that Zoom meetings were capable of using end-to-end encryption.” Even though there doesn’t appear to be any evidence that Zoom’s actual encryption practices led to the compromise or misuse of video or audio from meetings, Zoom may still have reason to be worried: the FTC has a long history of bringing enforcement actions against companies for making aggressive data security promises in privacy policies and marketing materials that later turned out to be false or misleading. 

Those actions often arise when a company suffers a data breach because it lacked the security measures it claimed to have.  But the FTC also recently settled a case arising out of a company’s misleading claims about the security of its Internet-of-Things products, even though the FTC did not allege that users’ personal information was ever compromised. 

As that history and Senator Brown’s letter show, when it comes to statements about security, accuracy is critical: overselling a company’s security practices can create low-hanging fruit for regulators and private litigants. 

*   *   *

Time will tell whether Zoom’s recent mea culpa, and its unprecedented shift of engineering resources to get its privacy and data security house in order, will stop the bleeding and return the company to the public’s good graces.  In the meantime, companies would do well to take these lessons to heart, lest they too find themselves victims of their own success.