Sense and Sensitivity: Latest FTC Enforcement Actions Continue Focus on Sensitive Health Data

We’ve talked before about the FTC’s focus on consumer health privacy. In cases against BetterHelp and GoodRx, a blog post announcing rules it intends to enforce in the space, and a report summarizing its recent privacy and data security enforcement efforts and other initiatives, the agency has made clear that the privacy and security of sensitive health information is a top priority.

In the last week, the agency announced two new enforcement actions: one against Monument, an online addiction treatment firm, and another against Cerebral, an online telehealth company that offers mental health and related services, and its former CEO. Both actions took the form of cases filed in federal district court by the Department of Justice upon notification and referral from the FTC.

In this post, we summarize the latest lessons from these cases for companies that handle consumer health information.

Overview of the Monument and Cerebral cases

The complaint against Monument alleges that the company, which provided alcohol addiction treatment services, made broad promises about the privacy of users’ sensitive health information on its website, online advertisements, sign-up forms, and customer service communications.  Among other representations, the company told users its services were “fully HIPAA-compliant,” and that it only shared health information with users’ written consent.

In fact, the complaint alleges, Monument shared users’ personal information, including sensitive health information, with third parties through tracking technologies that included pixels and APIs from companies such as Google and Meta. The purposes of those technologies included retargeting users with advertisements and finding and targeting potential new users with advertisements. Monument did not contractually limit how those third parties could use or disclose that information, but rather “merely agreed to the third parties’ general terms of service, which either placed no restrictions on the third parties’ use and disclosure of the information or specifically permitted them to use the information for their own purposes.” The complaint also alleges that despite its repeated representations about HIPAA compliance, the company had significant deficiencies in its HIPAA compliance program, as shown by gap assessments performed by a third-party assessor hired by the company.

To resolve the government’s allegations that its conduct violated Section 5, the company agreed to a proposed order that completely bans Monument from disclosing health information for advertising purposes, and requires it to obtain users’ affirmative consent before sharing health information with third parties for any other purpose.

The complaint against Cerebral alleges a similar course of conduct.  Cerebral operated a subscription-based telehealth platform that offered mental health treatment services for conditions that included depression, anxiety, and substance use disorders. To that end, the company routinely collected and stored sensitive personal health information and made various assurances to its customers about the privacy and security of that information.  The company claimed on its website that its services were “safe, secure, and discreet,” that it used “the latest information security technology to protect your data, which is not shared without your consent,” and stated in its privacy policy that “without your authorization, we are expressly prohibited from using or disclosing your protected health information for marketing purposes.”

In fact, the complaint alleges, Cerebral used third-party pixels and other tracking tools on its websites that disclosed its users’ personal and health information to third parties such as Google, Facebook, and TikTok to carry out the company’s targeted advertising campaigns.

Under a proposed order resolving the case, Cerebral will pay more than $7 million in monetary relief and civil penalties, will be completely banned from using or disclosing personal and health information for targeted advertising purposes, and must obtain affirmative express consent for other disclosures of that information to third parties.

  1. The agency REALLY doesn’t like health information to be disclosed for targeted advertising.

As in its previous actions against GoodRx and BetterHelp, the FTC required as a condition of its settlement of the cases against Monument and Cerebral that the companies be completely barred from disclosing consumers’ health information for targeted advertising purposes. That hostility to the disclosure of health data for advertising aligns with recent remarks by FTC Chair Lina Khan, who following the agency’s recent action against Avast released a statement asserting that “businesses by default cannot sell people’s sensitive data or disclose it to third parties for advertising purposes.” Any business that collects consumer health information should be aware of the FTC’s position, and plan accordingly.

  2. “Affirmative Express Consent” is a challenging standard to meet.

As an exception to the “default” rule announced by Chair Khan, the FTC’s position does allow for the use and disclosure of health information for targeted advertising if the business obtains the consumer’s affirmative express consent. But that standard can be challenging to meet.

As outlined in the proposed orders in Cerebral and Monument, obtaining “affirmative express consent” requires more than getting a consumer to click a box generally agreeing to a privacy policy and terms of service. Instead, it requires “a freely given, specific, and unambiguous indication of the individual’s wishes,” following a clear and conspicuous disclosure to the individual of:

  • the categories of information to be collected;
  • the specific purpose(s) for which the data will be collected, used, or disclosed;
  • the name(s) of any entity that collects the information or to which the information is disclosed;
  • a simple, easily located means for the individual to withdraw consent;
  • any limitations on the individual’s ability to withdraw consent; and
  • all other information material to the provision of consent.

And that information must be presented separately from any privacy policy, terms of service, or other similar document.

Those requirements mean that a compliance strategy based on affirmative express consent comes with several downsides, including friction from a user experience standpoint, a need to expend significant effort and operational resources on preparing and providing the required disclosures, and opportunities for mistakes and omissions that could come back to haunt the organization later.

  3. “Mere puffery” isn’t a thing when it comes to privacy and data security claims.

In both the Cerebral and Monument cases, the companies had made assurances to their customers and prospects through their websites, mobile apps, and advertisements about their privacy and security practices.

  • Monument’s advertising promised “at-home treatment that’s confidential [and] secure.” FAQs on its website claimed “[w]e take your privacy and security very seriously,” and that “Monument is compliant with all relevant privacy laws.”
  • Cerebral, for its part, published promotional materials assuring users that its “remote depression and anxiety treatment is safe, secure, and discreet” and that users could expect “confidential treatment from the privacy of [their] home[s].”

Some of those assurances might be viewed as “mere puffery” that would not be legally actionable.  The FTC’s allegations, however, show that under Section 5 of the FTC Act, there’s no such thing as puffery when it comes to privacy and data security representations. In both cases, the companies’ assurances were the basis for claims that they violated Section 5’s prohibition against deceptive conduct.  And in a blog post discussing the cases, the agency has made clear that privacy and security representations, regardless of the level of subjectivity or detail, are “affirmative claim[s]” that companies “must support with solid proof” to avoid liability.

  4. Your non-privileged gap assessments can and will be used against you in a court of law.

One notable aspect of the complaint against Monument is its allegation that the company knew its claims to be “fully HIPAA compliant” were false because a third-party assessor the company hired to conduct a HIPAA gap assessment repeatedly told the company otherwise.  According to the complaint, that assessor concluded following an initial assessment that Monument was “only 60% in compliance with HIPAA” and had “not addressed” 10 implementation specifications under the HIPAA Security Rule.  A subsequent assessment by the same assessor a year later concluded the company was “only 71% in compliance with HIPAA.” The complaint cites the assessor’s conclusions as evidence that the company knowingly misrepresented its HIPAA compliance to consumers in violation of Section 5.

That outcome shows why many companies work to ensure the application of the attorney-client privilege to compliance reviews such as HIPAA gap assessments. To be sure, and as we’ve discussed before, the law on the discoverability of reports prepared by cybersecurity and forensic consultants may make protecting those reports from discovery challenging. But the FTC’s weaponization of Monument’s assessor’s conclusions against it shows why it’s wise to try.

  5. And so might your compliance budget.

The Cerebral complaint cites the company’s relatively low spending on “Safety and Quality” and “Security and IT” as evidence that the company’s former CEO knowingly failed to ensure that the company lived up to its assurances to users that their data was safe and secure. As alleged in the complaint, the former CEO was aware of chronic data security problems as a result of being briefed on several security incidents. And yet he “shaped and approved” annual budgets that “invested disproportionately in growth and marketing,” which were allocated a budget of $211 million, but “deprioritized compliance and data security functions,” which were allocated a “relatively paltry” budget of $6.7 million. The CEO approved those budgets, the complaint says, despite the fact “that Safety & Quality and Security & IT issues should have been paramount for a telehealth company.”

Whether or not it is fair for the FTC to use the company’s operational budgets to assign blame to its former CEO for the company’s privacy and data security failings, the case shows that prioritizing growth and marketing at the expense of compliance can have serious consequences for companies in the consumer health space.