The Fight Over Standards in the Chemical Industry - a Chain Reaction?

What the emergence and evolution of safety standards in the chemical industry can teach us about AI Governance *
The events of July 28th, 2023 happened in the absence of almost any media coverage, leaving the vast majority of the US population blissfully unaware of the fact that, silently and unceremoniously, a cornerstone of chemical safety regulation had just vanished. After more than 15 years of writing regulatory history, the Chemical Facility Anti-Terrorism Standards (CFATS) program was discontinued when Congress allowed its statutory authority to expire, leaving behind a vacuum that will be hard to fill. Since 2007, CFATS (administered by the Department of Homeland Security) had provided a regulatory framework for chemical facilities, mandating site vulnerability assessments, audits and diligent reporting on chemical materials.
Even more stunning, however, were the reactions that followed shortly after. “The Senate failed us.” “Congress must get back to work immediately to reinstate CFATS to help keep America safe.” “I am incredibly disappointed.” These were not the words of political activists or environmentalists. They were the words of the industry’s two leading representatives, the American Chemistry Council and the National Association of Chemical Distributors. Why, of all people, would those charged with representing the chemical industry’s interests openly call for the reinstatement of regulations that impose costs on the order of millions of dollars every year? Why is this relief from invasive mandatory reporting and auditing not celebrated by those who stand to profit most from deregulation?
To answer these questions, we need to dive into the processes through which standards come into being. We need to understand how malleable they are, how they develop over time, and – in some cases – what causes them to disappear again. Understanding these dynamics could prove crucial in informing future attempts at regulating emerging technologies like Artificial Intelligence.
Recent times have seen a spike in interest in safety standards and regulation for AI, visible in public statements from politicians, civil society representatives and even industry leaders. This is no accident: our current situation is not too dissimilar from those previously encountered by standard developers in other safety-critical industries. Increasingly powerful Artificial Intelligence is being adopted in every sector, across the globe, in stark contrast to the isolated and fragmented regulatory efforts that are in some cases still in their infancy and in others haven’t even been conceived yet. Standard developers are desperately trying to keep pace with rapid technological advances, making a better understanding of what has and hasn’t worked an important input into their decision-making.
So what can we learn from the chemical industry about how standards emerge and develop over time?
The hidden cost of neglecting safety
If you are unfamiliar with “the world’s worst industrial disaster”, you could be forgiven – outside of India and the communities of practice interested in chemical safety, the Bhopal disaster is not a widely known event.
Within just a few hours on the night of December 2nd, 1984, half a million people were exposed to highly toxic gas caused by a leak at a chemical plant. Over ten thousand people are reported to have died as a consequence of this devastating event. Incidents like these – combined with aggressive and widespread media coverage depicting chemical companies as responsible for the deaths of thousands of people – can galvanise public opinion against the chemical industry.
It didn’t take long for the industry to get wind of what it perceived as an existential threat to its livelihood: Dwindling public approval. Favourable opinions of the chemical industry halved. Quick and targeted action was needed: Marketing and communication teams sprang into action to poll the public and conduct market research. These teams ran focus groups on what messages worked best when launching their campaigns around the “Responsible Care” safety standards, a set of voluntary commitments made by the members of the biggest industry association in Canada in 1985, only one year after the Bhopal incident. Years later, those standards would be revised, once again in response to a public perception that they weren’t strict enough.
Why Canada? Companies quickly realised that isolated incidents informed public opinion of the entire industry, across the globe: When they surveyed residents in different districts, they found that people living within 6 kilometres of a plant held specific opinions about the company operating it, but outside that radius, a company was judged by the industry as a whole, with no distinction made between individual firms.
Hence, our first lesson from the chemical industry is one that may seem obvious: A clear and tangible precedent of danger can catalyse action by industry leaders. The Bhopal disaster and the research it galvanised showed that concerns about and commitments to safety could be a unique selling point for companies with a flawless track record. This dynamic might work even better for a mass-market technology like AI that relies on customer trust and a wide user base. Recent polls suggest that a slight majority of Americans are already more concerned than excited about AI. Possible “warning shots” include economic damages above a certain threshold, the materialisation of systemic risks involving critical infrastructure, direct physical harm affecting a large number of human beings or – more indirectly – AI models exhibiting dangerous behaviours like power-seeking, deception or long-term planning in evaluations and tests. This is not to say that we should hope for an AI-related catastrophe anytime soon, but it does bolster the case for diligently and independently tracking and monitoring incidents, a proposal that has already found expression in regulatory documents like the EU AI Act. From the OECD to the Responsible AI Collaborative and a number of independent projects, incident trackers and databases have sprung up in recent months, cataloguing and reporting on incidents of AI misuse and accidents.
When incentives align: How industry and governments share an interest in advancing safety
Industry leaders decided that introducing standards was an effective way to show goodwill and to regain public trust in their trade. Pierre Choquette, then president of a leading chemical producer, described the “Responsible Care” program as “absolutely essential to the survival of the industry”. It also contributed to the flourishing of a “safety culture” within the industry in the late 20th century. The assumption of responsibility for safety out of pure self-interest marked a paradigm shift compared to previous safety efforts that were begrudgingly implemented to avoid punitive measures. Companies – it turns out – have a number of incentives that could be leveraged to motivate stronger safety efforts: The promise of reduced liability, reduced future regulatory burden, enhanced predictability, and conditional market access (or industry association membership) all played a key part in encouraging self-regulation and adherence to standards.
Needless to say, these incentives weren’t always as pure as one might hope: In one case, a chemical facility operator got wind of the fact that a competitor was using chlorine gas, prompting them to advocate strongly for the substance to be included in the list of strictly regulated chemicals. In other cases, it was a pure win-win for all companies in the sector: For instance, several insurance providers offered discounts to companies that complied with industry standards – discounts of up to 30% on environmental impairment liability premiums have been reported for those who implement the Responsible Care program. Some safety measures, such as probabilistic risk assessments in the nuclear energy industry, are rumoured to have an ROI of 20-to-1 due to a resulting reduction in production costs. Voluntary safety standards also gave companies more wiggle room in how those standards were implemented, compared to government regulation.
Little did chemical companies know, however, that their code of practice was going to become much more influential than even they had bargained for. Fast forward to a catastrophe that is much more familiar and ever-present in the American consciousness: The 9/11 terror attacks.
What’s less well known is what happened in the weeks before the attack: Mohamed Atta, one of the 9/11 plotters, had been visiting various locations, including one in Tennessee, reportedly “asking lots of questions” about a nearby chemical plant. Whether he knew it or not, he was on to something: A disaster only half as consequential as the leak in Bhopal would have resulted in a death toll about 3–5 times as high as that of the attack on the World Trade Center.
After the terror attacks of September 11th, lawmakers became keenly aware of the risks of insufficiently regulating industries that handle hazardous materials, given their attractiveness as targets for terrorists. Or, as former Senator Warren Rudman put it: "The threat is just staring us in the face. I mean, all you'd have to do is to have a major chemical facility in a major metropolitan area go up and there'd be hell to pay politically. People will say, 'Well, didn't we know that this existed?' Of course, we knew." These concerns were corroborated when a 2002 study by the Army Surgeon General found that a terrorist attack resulting in a chemical release in a densely populated area could injure or kill as many as 2.4 million people.
It didn’t help that in 2007 – after six long years during which safety measures could have been improved – 150-pound cylinders of chlorine gas were reported stolen from a chemical facility. Events like this also made it possible for actors interested in pushing stricter legislation and more oversight to frame themselves as “protectors of the American public”, turning bureaucrats and profiteers into unlikely allies united in patriotism.
No need to reinvent the wheel: The integration of voluntary standards into regulatory frameworks
When policymakers and legislators were scrambling to get safety regulation in place post-9/11, they discovered that most of what they needed already existed, buried in a voluntary code of conduct for the members of a trade association of chemical manufacturers.
Unsurprisingly, the chemical industry was delighted and supported the development of the CFATS program, considered by experts to this day to be a reincarnation of the Responsible Care program. It helped that mandates and regulatory requirements could be based on measurable units (quantities of certain chemicals per facility), which allowed for some regulatory relief through exemptions for small and medium enterprises. CFATS made use of quantity thresholds for a defined list of high-risk materials, which came with the benefit of pre-emptive compliance: Because companies could avoid being monitored if their stock of dangerous chemicals remained below these thresholds, many chemical plants replaced the most dangerous substances with others, reduced their stock or purged them altogether. As Paul Orum from the Center for American Progress puts it: “What you don’t have can’t leak or be blown up by terrorists”.
In the case of Artificial Intelligence, it seems that policymakers have already taken a page out of the book of the dual-use technologies that preceded it. Both the Biden Administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the EU AI Act anchor regulatory requirements in the amount of computing power used to train a model, measured in floating-point operations (FLOP), although the adequacy of such thresholds for tracking relevant risks has been hotly debated, especially as algorithmic progress allows less compute-intensive models to catch up to their competitors at the frontier.
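To make the mechanism concrete, here is a minimal sketch of how such a compute threshold can be checked. The threshold values (10^25 FLOP in the EU AI Act’s systemic-risk presumption, 10^26 FLOP in the 2023 Executive Order’s reporting requirement) come from the respective documents; the estimate of training compute as roughly 6 × parameters × training tokens is a common rule of thumb rather than part of either regulation, and the model figures in the example are purely hypothetical.

```python
# Hedged sketch of a compute-threshold check.
# Thresholds: EU AI Act (1e25 FLOP, systemic-risk presumption) and
# 2023 US Executive Order (1e26 FLOP, reporting requirement).
# The 6 * params * tokens estimate is a rule of thumb, NOT a legal test.

EU_AI_ACT_THRESHOLD_FLOP = 1e25
US_EO_THRESHOLD_FLOP = 1e26


def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per training token."""
    return 6 * n_parameters * n_training_tokens


def check_thresholds(n_parameters: float, n_training_tokens: float) -> dict:
    """Return the compute estimate and whether it crosses each regulatory threshold."""
    flop = estimate_training_flop(n_parameters, n_training_tokens)
    return {
        "estimated_flop": flop,
        "eu_ai_act_systemic_risk": flop >= EU_AI_ACT_THRESHOLD_FLOP,
        "us_eo_reporting": flop >= US_EO_THRESHOLD_FLOP,
    }


if __name__ == "__main__":
    # Hypothetical 70B-parameter model trained on 15 trillion tokens:
    # roughly 6.3e24 FLOP, below both thresholds under this rough estimate.
    print(check_thresholds(70e9, 15e12))
```

The sketch illustrates the bright-line character of such thresholds: much like the chemical quantity thresholds under CFATS, they are easy to administer but can be optimised against, which is part of why their adequacy is debated.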
The progressive integration of industry standards into proper regulatory regimes also created upward pressure on safety standards within the industry: In the early years following adoption, official regulations covered only 13% of the voluntary program suggested by the chemical industry. By 2000, three quarters of the voluntary standard’s recommendations had made it into federal law. At this point, if you were a chemical company and wanted to advertise yourself as a leader in safety performance, mere compliance with existing standards wouldn’t suffice. This created structures that rewarded companies for working on improved safety standards, especially as compliance with existing regulation became easier. The first so-called “Security Vulnerability Assessment” mandated by CFATS contained approximately 600 questions – only a few years later, this number was brought down to just 10 (!) questions.

At what cost? Effectiveness as the flipside to efficiency
Pre-existing safety protocols allowed for the quick and widely accepted introduction of binding regulation. But sunlight also casts shadows: The fact that standards were introduced with strong support and heavy input from industry sometimes came with tradeoffs regarding their effectiveness. For one, standards based on self-regulatory efforts within the industry often have less bite.
To illustrate this, let’s look at what adherents to the “Responsible Care” program of the chemical industry association were expected to do: Members had to conduct self-evaluations not once a quarter or once a month, but once a year. They then gave themselves a score on a 6-point scale, where a 1 meant no action needed to be taken, a 2 required an evaluation of existing practices, a 3 was equivalent to an encouragement to “develop plans”, and a 4 indicated a need to “implement plans”. Even the most extreme ratings on this scale only asked for management practices to be put in place or to be assessed.
It’s not hard to see how this kind of oversight would fail to meaningfully improve safety. Formal threats to remove non-compliers from the trade association in practice didn’t result in a single expulsion. While CFATS did mandate certain physical security measures, the program inherited some of the shortcomings of the preceding industry standards when it essentially left it up to companies how these measures would be met, allowing them to customise security plans for each facility.
Here again we find useful parallels to AI: Leaders of large AI companies have already called for more regulation, suggesting a convergence of incentives on higher safety standards, especially in areas where liability and the cost of insurance play an important role in a provider’s cost-benefit analysis. Building on policies coming out of the labs offers the opportunity to shift safety cultures within the industry by empowering corporate actors to become leaders in safety, while saving policymakers time.
Similar to how CFATS was modelled after the Responsible Care program, recently proposed regulatory regimes have included “if-then commitments” that are suspiciously reminiscent of the Responsible Scaling Policies which have become a cornerstone of industry-led safety standard development as capabilities have grown through immense and rapid scaling.
However, to avoid the risk of producing sub-par standards that are not proportionate to the extreme risks posed by advanced AI, independent assessments and audits are needed. The prospect of AI labs engaging in the self-assignment of safety scores – as was common in the early days of industry self-regulation in the chemical sector – does not exactly inspire hope.
Similarly, while the customisability of safety plans allows for greater targeting and protects SMEs and startups from regulatory overreach, it also risks fragmentation and makes external verification more difficult.
An uneven fight: Regulatory capacity and industry lobbying
Regulators, who are routinely under-resourced compared to the companies they are charged with overseeing, often depend on industry expertise to draft rules and regulations. Before CFATS was introduced in 2006 and 2007, the chemical industry spent between 16 and 74 million USD on lobbying. For comparison, the Department of Homeland Security worked with a budget of just 10 million USD for chemical industry security. CFATS, according to reports, at one point had a backlog of 8–10 years’ worth of inspections due to insufficient internal capacity, making the program reliant on industry to verify compliance.
Regulatory agencies, in turn, provided free services, technical support and expertise to those they were overseeing. As part of the CFATS program, companies could request so-called “Compliance Assistance Visits”, a service provided free of charge. Watchdogs had slowly but surely turned into guide dogs. This symbiotic relationship may appear mutually beneficial at first sight.
However, there is a danger that the agencies and institutions responsible for providing a check on the growing and increasingly concentrated power of tech companies will become more and more dependent on those very companies. Talent and expertise on Artificial Intelligence is arguably even more concentrated in the private sector than was the case for the chemical industry. As a result, regulatory efforts have been dominated by the interests of those with the deepest pockets.
To avoid showing up to a gunfight with water pistols, civil society and independent sources of expertise need to be strengthened and empowered to provide input into legislative processes. At the same time, in-house expertise within governments and regulators needs to be built up to reduce dependence on external input, a challenge exacerbated by the lack of funding that affects relevant standard-setting bodies and institutions like NIST.
Looking forward: Avoiding CFATS’s fate
As of January 2025, visitors to the CFATS web page provided by the Cybersecurity and Infrastructure Security Agency (CISA) find only a short statement resembling a eulogy: “The lapse of CFATS authorization means that CISA cannot require facilities to report their chemicals of interest or submit any information in CSAT, perform inspections, or provide CFATS compliance assistance, amongst other activities. CISA can no longer require facilities to implement their CFATS Site Security Plan or CFATS Alternative Security Program.” While its resources remain available for those interested, CISA can only “[encourage] facilities to maintain security measures.”
We should not expect that mere encouragement will suffice in an environment shaped by commercial incentives and races to develop and deploy ever-more powerful models.
It may be, therefore, that the most important lesson to take away from the emergence, evolution and eventual downfall of the CFATS program is that the biggest threat to effective safety standards is their politicisation. When different governmental entities disagree about competing priorities and resulting budgetary allocations, even the strongest and most widely shared incentives to prioritise safety may not be enough.
That said, not all hope is lost: The Department of Homeland Security has developed a mechanism by which it could continue to collect information from chemical companies in a manner similar to CFATS, and the accompanying documentation explicitly and repeatedly talks about what will happen when (not if) CFATS is reauthorised.
We can only hope that cooler heads prevail and that chemical safety standards enter the history books as best-practice examples rather than as a cautionary tale.
---
* An important caveat: It’s previously been pointed out that AI is unlike any previous technology, making any comparison less valid and informative for future regulatory efforts. We might also find that AI presents novel threats that may not be preventable by learning from past failures.