Governments Need to Protect AI Industry Whistleblowers: Here's How

In 2009, James Glenn, a security consultant for a Cisco distribution partner, submitted a detailed report to Cisco regarding cybersecurity flaws in its video surveillance systems, which were sold to various federal and state agencies. He was fired soon after. Glenn later filed a whistleblower lawsuit under the False Claims Act alleging that Cisco knew about security vulnerabilities that could allow government systems to be hacked. Cisco ultimately settled the lawsuit and addressed the security problems that Glenn had identified.

Whistleblowers such as Glenn — people who report illegal or unethical behavior, whether internally within their organization or externally to outside authorities — play a vital role as the first line of defense against wrongdoing in companies around the world. They might expose breaches of law within companies, such as corruption, fraud, collusion, and defective products. They can also expose existing or potential harms to the public interest, including ethical concerns such as mass surveillance or environmental hazards.

Whistleblowers risk their job, reputation, opportunities, and even lives to expose misconduct, in an effort to enhance transparency and accountability and to correct misbehavior in organizations. This is why laws to protect, reward, and incentivize whistleblowing are so crucial.

This article aims to provide policymakers in the United States and Europe with an understanding of current whistleblower protections and how they apply to top AI companies. It also offers recommendations for expanding governmental policies to incentivize and protect whistleblowers in the AI industry.

What does whistleblowing mean for the AI industry?

Whistleblowers in the AI industry can help identify negligent or reckless behavior by companies and put a stop to it. They may also alert relevant stakeholders to critical safety issues, such as the development of systems known to be manipulative or deceptively aligned, or to have evaded shutdown. Whistleblowers could additionally report impending risks to relevant authorities, helping prevent AI-related disasters such as widespread offensive cyber operations or the creation of nuclear, chemical, or biological weapons.

However, many factors prevent whistleblowers at companies building frontier AI from speaking up:

  • A culture that stifles open criticism at top AI companies. Over the years, top safety- and ethics-minded employees who voiced disagreement with the direction set by Google and OpenAI have either quit or been pushed out. Over five hundred former OpenAI employees who might have become whistleblowers signed non-disparagement clauses under the threat of their equity being revoked. In an open letter titled ‘A Right to Warn about Advanced Artificial Intelligence’, current and former employees of OpenAI, DeepMind, and Anthropic warned of their employers’ “strong financial incentives to avoid effective oversight” and called for companies to “support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies.”
  • Extreme focus on protecting corporate secrets and national security. This is perhaps the most decisive factor discouraging employees from speaking up externally. The top AI companies are extremely cautious about their information security to ensure that company secrets do not leak to competitors or foreign adversaries. Many even treat AI security as a matter of national security, especially in the U.S., where analysts fear interference or hacking from China. Indeed, former OpenAI researcher Leopold Aschenbrenner received a formal warning from HR after writing an internal memo criticizing the company’s security measures, and was then fired for allegedly “leaking” information.
  • The relative lack of regulation of AI generally, and of regulation that specifically protects AI whistleblowers. Since whistleblower policies largely cover the reporting of illegal acts, and few laws exist globally to govern AI and determine what is legal or illegal, existing whistleblower policies do not apply to the AI sector as of yet. As the Right to Warn letter states, “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.” To protect AI whistleblowers, we must either extend existing protections by outlawing some practices at AI labs or create new protections for AI employees.
  • The severe consequences faced by whistleblowers, including retaliation from employers, job loss, and reputational damage. According to one study that surveyed whistleblowers, 69% lost their jobs or were forced to retire, and 64% were blacklisted from getting another job in their field. This effect is especially strong in AI, where the concentration of companies means there are only a handful of potential employers for a skilled AI professional. Whistleblowing could be extremely costly to one’s career and professional reputation, given that one might have to move to other regions or even industries to find another job. Whistleblower cases can also take years to resolve — by some estimates, easily three to four years, with additional time to award compensation — which further discourages potential whistleblowers from speaking up.

The sudden death of OpenAI whistleblower Suchir Balaji in his San Francisco apartment last year may further affect potential whistleblowers' willingness to come forward with concerns about AI companies. His death was initially ruled a suicide by San Francisco medical examiners. However, because Balaji was being considered as a key witness to OpenAI’s alleged copyright violations in the New York Times case against the company, his death has drawn attention from public figures such as California congressman Ro Khanna, who has called for an FBI investigation into the matter. While the investigation is still open, such tragic and high-profile events might further dissuade people in the AI industry from coming forward with important information.

What whistleblower policies exist today?

Whistleblower policies have historically been implemented first at the governmental level, then translated to internal policies at the corporate level. Policies protecting whistleblowers in the public sector are more common than ones protecting private sector employees — however, some laws, such as the EU Whistleblower Protection Directive and Canada’s whistleblower provisions in its Criminal Code, do not distinguish between private and public sector employees, and others are industry-specific within the private sector.

United States

In the U.S., many separate laws exist to protect whistleblowers. The Department of Labor enforces laws that protect whistleblowers from employer retaliation for protected activities, including reporting issues related to consumer product safety, fraud, and employee safety. There are also many whistleblower laws at the state and local level that offer additional protections where federal law does not preempt them.

Sector-specific whistleblower protections include laws specific to the financial sector, which has the strongest regulations in this area, such as the Sarbanes-Oxley Act and Dodd-Frank Act. These laws offer effective mechanisms for protecting and incentivizing whistleblowers. Dodd-Frank was passed after the global financial crisis as part of a broader effort to reform Wall Street; it awards whistleblowers between 10 and 30 percent of the total monetary sanctions collected in enforcement actions and prohibits retaliation by employers.

Additional protections apply to fraud against the federal government. The False Claims Act (FCA), the first and one of the strongest whistleblower protection laws in the U.S., allows anyone (even non-citizens and NGOs) to file a “qui tam” lawsuit on behalf of the U.S. government alleging fraud in connection with government programs and government contractors.

In addition, the Whistleblower Protection Act (WPA) protects federal government employees who disclose violations of law, rules, or regulations; gross mismanagement; gross waste of funds; abuse of authority; or substantial and specific dangers to public health and safety. It also prohibits retaliation and allows whistleblowers to disclose confidentially to the appropriate authorities. However, the WPA does not protect some executive branch employees, such as those in the intelligence community, uniformed military service members, political appointees, and U.S. Postal Service employees.

European Union

The EU has a comprehensive, cross-sector regime for whistleblower protection. The ‘Whistleblower Protection Directive’ protects employees who report on breaches of rules, including fraud, corruption, malpractice, and consumer safety violations at companies and public bodies with 50 or more employees. Arguably most importantly, it also ensures that whistleblowers have access to effective reporting channels both internally and externally to competent authorities. As an EU directive, the Whistleblower Protection Directive is transposed by all EU Member States into their national legislation.

United Kingdom

The main whistleblower legislation in the UK is the Public Interest Disclosure Act 1998 (PIDA), which protects workers who report malpractice in the workplace from employer dismissal. The Act uses the language of “workers” to be inclusive of employees, independent contractors, agency workers, home workers, and so on. Protected “qualifying disclosures” include not only reports of alleged criminal offenses but also concerns about potential or actual dangers to individual health and safety, as well as potential or actual environmental damage.

PIDA follows a “tiered” approach: the more public the disclosure, the higher the threshold a whistleblower must meet to be protected. There is a relatively low bar for workers to report internally within their employer. To be protected for making a disclosure to the police, media, or other external actors, however, workers must have previously made a similar disclosure to the employer and/or the relevant government minister, have reason to believe that the employer will retaliate, or have reason to believe that evidence will be concealed or destroyed if they report internally. This tiered system was put in place to encourage internal disclosures and reserve external channels as a last resort.

What protections do AI whistleblowers have?

There are relatively few laws governing AI worldwide, and few laws that specifically protect whistleblowers in AI. One major exception is the EU AI Act, which explicitly includes provisions to protect whistleblowers. For example, it states that whistleblowers are covered under the EU Whistleblower Directive, and the law establishes clear reporting channels with relevant market surveillance authorities of member states.

Even though specific AI whistleblower legislation is scarce, many existing whistleblower laws around the world could potentially be applied to protect whistleblowers. For example, as the U.S. federal government ramps up its use of AI — an activity encouraged by the Biden White House and expected to continue under the Trump administration — the False Claims Act is likely to be a powerful tool to wield against government contractors like OpenAI, Palantir, and Anthropic. A qui tam lawsuit can be brought by anyone to allege violations such as selling the federal government services that do not conform to contractual or regulatory requirements, or ones that are defective or of lesser quality than agreed upon.

While individual AI companies may have internal whistleblower policies, the extent to which these policies actually encourage or protect reporting is unclear. For example, a spokesperson for OpenAI stated that the company’s whistleblower policy “protects employees’ rights to make protected disclosures,” yet OpenAI’s position on what constitutes a “protected disclosure” is not well defined. Dario Amodei, the CEO of Anthropic, vowed at the UK AI Safety Summit in November 2023 that his company would put a whistleblower policy in place before its models reach AI Safety Level 3 (ASL-3), a higher-risk classification for models with low-level autonomous capabilities. However, these protections may not cover all activities that pose a risk to the public, and they may not be properly implemented or enforced.

What might the future of AI whistleblowing look like?

Governments should expand the protections of whistleblowers in the AI industry. While corporate policies are important, governments ought to be the major drivers of regulation because corporations may otherwise ignore or retaliate against reports of wrongdoing despite their well-sounding HR policies. As the Right to Warn letter asserted, “AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

Expand covered activities

The previous section identified existing whistleblower laws that can be applied and highlighted a major gap: the tendency of laws to cover reports of illegal actions rather than future or as-of-yet unregulated harms. Thus, to cover the full range of activities that should be reportable in AI — including knowingly releasing models with dangerous capabilities, deploying models in regulated sectors like cybersecurity and healthcare without following sectoral regulations, and violating copyright and data privacy laws — existing laws should be expanded to cover the good-faith reporting of potential harms to individuals, the environment, and the public interest.

Protect private-sector whistleblowers

As noted above, there is also a gap in the protection of private-sector whistleblowers, particularly in the U.S., where the most influential AI companies are headquartered. A comprehensive, cross-sector law offering private-sector employees protections similar to the Whistleblower Protection Act is a crucial first step in ensuring broad coverage for whistleblowers working in different capacities to build and deploy AI technologies. Private-sector legislation should also mandate internal and external reporting channels and procedures for making reports confidentially and anonymously.

Provide monetary incentives

In addition, there is an ongoing debate over whether to offer whistleblowers incentives for coming forward. Such incentives, usually in the form of monetary rewards, are increasingly included in whistleblower protection regimes. Some worry that monetary rewards create perverse incentives, encouraging people to report solely for financial gain. In reality, though, coming forward as a whistleblower puts one’s career and reputation at such risk that financial rewards mainly help offset the potential losses. Furthermore, empirical evidence suggests that rewards do incentivize whistleblowing — the U.S. government currently recovers $1 billion annually as a result of the False Claims Act, compared to an average of $10 million per year before its qui tam provisions were strengthened.

Act before a major incident

Finally, a major consideration for new whistleblower policies requiring the introduction of legislation is the timing and the political will around such an act. Many whistleblower protection laws have historically been introduced following a disaster or a scandal. For example, the 1986 Space Shuttle Challenger disaster in the U.S. led to the passing of the Whistleblower Protection Act in 1989, and the Sarbanes-Oxley Act was passed in 2002 in response to many corporate and accounting scandals, including those of Enron and WorldCom. Similarly, the United Kingdom passed the Public Interest Disclosure Act 1998 after inquiries into disasters in the 1980s and 1990s — including the Clapham Junction rail crash and the Zeebrugge ferry disaster — found that employees knew of safety concerns but were afraid to speak up.

The unfortunate reality is that governments may not pay attention to or recognize the importance of AI whistleblowers until an AI-related disaster occurs and creates a sense of urgency. However, governments cannot afford to wait until an event causes large-scale loss of life. The harms from the concentration of power in AI are already happening today. One whistleblower is already dead, and several more have been fired, disgraced, or silenced for disagreeing with safety and ethical practices.

The future of whistleblowing in the AI industry

Whistleblowing has high potential to expose irresponsible and unsafe AI practices, holding model providers accountable and challenging their power. As such, it is crucial to establish sweeping whistleblower protections, especially in the jurisdictions where the leading AI companies operate.

As the Cisco whistleblower, James Glenn, said:

“The tech industry needs to fulfill its professional responsibility to protect the public from their products and services. There’s a culture that tends to prioritize profit and reputation over doing what’s right.”

We cannot allow the world’s most valuable and powerful technology corporations to keep prioritizing shareholder value over the public interest. We need guardrails and policies, such as whistleblower protections, to keep them accountable before it’s too late.

Authors
Michelle Nie
Tech Policy Fellow, Open Markets Institute
