Whistleblower Lawsuit Against Meta Raises Urgent Questions About User Safety and Corporate Accountability
On September 8, 2025, a whistleblower lawsuit was filed in the U.S. District Court for the Northern District of California that could reshape public discourse around digital privacy and corporate responsibility. Attaullah Baig, former head of security for WhatsApp, has accused Meta—the parent company of WhatsApp, Facebook, and Instagram—of knowingly exposing billions of users to serious security risks and retaliating against internal efforts to fix them.
According to the lawsuit, Baig repeatedly warned Meta’s leadership, including CEO Mark Zuckerberg, about critical vulnerabilities in WhatsApp’s infrastructure. These included unrestricted access by thousands of employees to sensitive user data such as profile photos, locations, group memberships, and contact lists. Baig also documented that more than 100,000 accounts were being hacked every day, an alarming figure that Meta allegedly failed to address.
Rather than investigate or act on these concerns, Baig claims his managers dismissed his findings, reassigned him to less critical tasks, and ultimately fired him in February 2025. He has since filed complaints with the Federal Trade Commission (FTC), the Securities and Exchange Commission (SEC), and the Occupational Safety and Health Administration (OSHA), citing retaliation and systemic negligence.
Baig’s lawsuit argues that Meta violated prior regulatory orders, specifically a 2019 privacy settlement with the FTC that required the company to implement robust data-protection measures and undergo independent audits. The suit also alleges breaches of securities laws: publicly traded companies are legally obligated to disclose material cybersecurity risks to shareholders, which Meta allegedly failed to do.
In internal documents presented to WhatsApp executives, Baig warned: “We have a fiduciary responsibility to protect our users and their data. The penalties can be severe both in terms of brand damages and fines.” Meta, however, allegedly blocked proposed security features such as enhanced login verification and protections against profile photo scraping.
Baig’s case is the latest in a growing list of whistleblower actions against Meta. From Frances Haugen’s 2021 testimony on teen mental health harms to recent disclosures about child safety risks in Meta’s virtual reality platforms, a troubling pattern has emerged: internal warnings are met with denial, deflection, or retaliation.
WhatsApp is used by over three billion people worldwide, many of whom rely on its end-to-end encryption for secure communication, and that perception of safety is central to its appeal. If Baig’s claims are substantiated, they suggest this trust has been deeply compromised, not by external hackers, but by internal negligence.