AI Risk Management Can Learn a Lot From Other Industries

AI risk may have unique elements, but there is still a lot to be learned from cybersecurity, enterprise, financial, and environmental risk management.

Apr 9, 2025
Guest Commentary

AI models are now appearing at a bewildering, breakneck pace. The advances are such that announcements frequently splash across the home pages of global mainstream news websites and send the X commentariat into frantic debate.

While model capabilities continue to evolve rapidly, AI regulation lags behind. The EU has passed the AI Act, but legal battles will likely delay its full implementation. The UK’s legislation is delayed, China has its distinct approach, and the US seems likely to have a patchwork of state laws. Further, it is likely that many of these early rules will need to be refined over time.

In the absence of regulations, AI developers must implement sound risk management practices. Early efforts by the major companies are a good first step, but developers ought to learn from practices elsewhere. While AI risk has its unique elements, disciplines such as cybersecurity, enterprise, financial, and environmental risk management all offer techniques that are applicable to AI. AI developers and regulators would be remiss to disregard these sources.

Risk of Reinventing the Wheel

In lieu of top-down regulations, many in the industry have committed to voluntary risk management practices. Most notably, several leading AI organizations have adopted the concept of Responsible Scaling Policies. This forms the basis for the Frontier Safety Framework policies that Anthropic, Google DeepMind, OpenAI, and other major players released ahead of February’s Paris AI Action Summit.

These AI practices, however, seem to be evolving in a vacuum, when they should be leveraging lessons from other risk management disciplines. This has consequences both benign and troubling. The authors of AI risk management guidelines relabel existing concepts like “risk assessment” with names like “red-teaming” and “evaluations.” This is confusing, but not a serious problem. More troubling is when safety practices are incommensurate with, or even unrelated to, the level of risk that systems pose.

There are many likely reasons why AI risk management finds itself on this path. First, many AI engineers seem biased towards “technology-first” rather than “risk-first” thinking. By not constructing complete risk scenarios that account for a system’s capabilities in context, they are prone to both underestimating and overestimating risk.

For example, a developer who does not account for a malicious actor’s incentives to cause harm could underestimate the likelihood of risks. On the flip side, a developer who doesn’t account for societal friction can overestimate risks’ impact and how quickly they will materialize.

AI risk management also suffers from the technology’s reputation for complexity. Indeed, in popular media, AI models are constantly referred to as “black boxes.” There may therefore be an assumption that AI risk management will be equally complex, requiring highly technical solutions. However, the fact that AI is a black box does not mean that AI risk management must be as well.

Distinguishing Facets of AI Risk

AI risk has four distinct traits not shared by any other single risk management field. However, we can still find lessons if we look at each trait individually.

First, AI risk evolves more rapidly than most other risks. Every few months, a new model arrives that can quickly make previous risk calculations obsolete. The trajectory of AI progress as a whole is also unpredictable. In 2024, many observers, such as Andreessen Horowitz, proclaimed that AI progress was hitting a wall. Then, in December, OpenAI’s o3 arrived, shattering records on math, coding, and reasoning benchmarks.

Industries where change is slower are therefore likely not the right place to look for applicable techniques. The technology used in nuclear power plants, for instance, does not change rapidly. However, there are other industries where risk also changes rapidly. An example is financial markets, where an organization can lose millions in seconds on the trading floor.

Second, for most AI risks, there is an adversary. Many risk disciplines are non-adversarial. Risks in these industries result from accidents or natural occurrences such as weather patterns. For AI, however, some of the most important risks are from malicious actors misusing a model or AI agents taking undesired action. An important domain from which AI can take lessons on adversarial risk is cybersecurity.

Third, AI risk is broad. Many risk management techniques were developed to handle fairly simple risks. Extractive industries like mining, for example, deal with straightforward health and safety scenarios – a piece of machinery malfunctions and an employee gets hurt. AI involves more complex risk scenarios that are more difficult to fully analyze, and its risks are multi-faceted, ranging from generating harmful imagery to assisting the creation of bioweapons. This likely rules out techniques designed for narrower risks. However, enterprises deal with a broad array of complex risks – legal threats, supply chain meltdowns, human resources fiascos, etc. – and thus Enterprise Risk Management techniques have a lot to offer AI.

Fourth and finally, AI represents a risk to society. Most risk management techniques evolved to help protect a specific entity: a person or company, for example. Some of the most important AI risks, however, come in the shape of externalities – if a terrorist uses an AI system to develop a bioweapon, the harm is not borne by the organization that developed the AI system, but society at large. Environmental risk management, which often deals with compounding externalities, should be AI’s mentor in this domain.

Examples of Potentially Applicable Risk Management Techniques

So what can AI risk management learn from those other four risk management disciplines?

Financial Risk Management

Finance provides an example of dealing with multi-faceted and fast-moving risks. In financial institutions, each type of risk (investment risk, liquidity risk, etc.) has a single person responsible for staying abreast of new threats. These risk owners meet regularly, as a committee, to discuss emerging threats and share ideas. This committee’s singular focus allows for swift action, and its mandate to look across risk areas enables it to rapidly find areas of cooperation.

Applying this to AI is straightforward. It is not yet standard practice in AI risk management to assign an owner to each disparate risk area (national security, discrimination, malware, defense, unemployment, etc.). Doing so, and requiring these owners to meet regularly as a committee, would enable greater coordination and sharing of practices, and would speed the response to emerging threats.
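As a sketch, the ownership model could start as simply as a shared risk register. The structure below is hypothetical – the risk areas echo the list above, but the fields, placeholder owners, and agenda format are illustrative assumptions, not an established practice:

```python
from dataclasses import dataclass, field

@dataclass
class RiskArea:
    """One entry in a financial-style risk register."""
    name: str
    owner: str  # the single person accountable for this risk area
    emerging_threats: list[str] = field(default_factory=list)

# Hypothetical register; owners and threats are placeholders.
register = [
    RiskArea("national security", owner="TBD"),
    RiskArea("discrimination", owner="TBD"),
    RiskArea("malware", owner="TBD"),
]

def committee_agenda(areas: list[RiskArea]) -> list[str]:
    """The cross-area view reviewed at each committee meeting."""
    return [f"{a.name} ({a.owner}): {len(a.emerging_threats)} open threats"
            for a in areas]

for item in committee_agenda(register):
    print(item)
```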

Cybersecurity

In cybersecurity, the FAIR (Factor Analysis of Information Risk) framework breaks risk into components: the nature and quality of the adversary on one hand, and the nature and quality of the organization’s defense mechanisms on the other. This decomposition makes risk much easier to measure and manage.

Applied to AI, this approach would generate a comprehensive analysis of the entire system. This includes the AI model itself, its scaffolding and tools, its intended and unintended users, its controls and mitigations, and the paths to harm.
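To make the decomposition concrete, here is a minimal Monte Carlo sketch of a FAIR-style analysis in Python. Everything here is an illustrative assumption – the function name, the distributions, and the numbers are invented, and real FAIR analyses use calibrated estimates rather than made-up parameters – but it shows how separating adversary capability from defensive resistance turns risk into something estimable:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_annual_loss(threat_event_freq, threat_capability,
                         resistance_strength, loss_magnitude,
                         trials=10_000):
    """FAIR-style sketch: risk = loss event frequency x loss magnitude.

    A threat event becomes a loss event only when the adversary's
    capability draw beats the defense's resistance draw.
    """
    # Hostile attempts per simulated year (Poisson with the given mean).
    attempts = rng.poisson(threat_event_freq, size=trials)
    losses = np.zeros(trials)
    for i, n in enumerate(attempts):
        # Capability and resistance vary from attempt to attempt; a
        # real model would fit these distributions to evidence.
        capability = rng.normal(threat_capability, 0.1, size=n)
        resistance = rng.normal(resistance_strength, 0.1, size=n)
        losses[i] = np.sum(capability > resistance) * loss_magnitude
    return losses.mean()

# Invented numbers: a misuse actor probing a guarded model endpoint
# roughly twice a year, against somewhat stronger controls.
print(expected_annual_loss(threat_event_freq=2.0,
                           threat_capability=0.6,
                           resistance_strength=0.7,
                           loss_magnitude=1_000_000))
```

Raising threat_capability or lowering resistance_strength moves the expected loss accordingly – exactly the kind of sensitivity analysis the decomposition enables.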

Enterprise Risk Management

Enterprise risk management provides many potentially useful techniques for AI, but one especially pertinent example is its use of both internal and external audit teams. Internal audit is an independent group housed within the organization, while external audits are performed by a separate organization. This provides two independent sets of actors to evaluate risk management practices and control mechanisms, making it more likely that deficiencies will be caught in time.

The two teams often have complementary skill sets, which is especially important for AI risk, given its breadth and complexity. Some organizations are already being established to provide external, audit-style evaluations of AI capabilities. It’s easy to imagine expanding their remit to audits against a broader AI risk management framework.

Environmental Risk Assessment

In environmental risk assessment, practitioners conduct multi-faceted risk assessments that incorporate different types of harm (e.g., health, ecological, economic, environmental, aesthetic) and different types of evidence with different weightings. This makes the assessment more comprehensive and inclusive.

AI’s many types of harm, ranging from harm to people and property to damage to the economic system as a whole, would benefit from a similarly comprehensive analysis.
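A toy weighted-scoring example shows the shape of such an assessment. The harm categories, weights, and scores below are invented for illustration – a real assessment would derive them from stakeholder input and the quality of the available evidence:

```python
# Hypothetical harm categories and weights for an AI deployment.
HARM_WEIGHTS = {
    "people": 0.40,
    "property": 0.20,
    "economy": 0.25,
    "information ecosystem": 0.15,
}

def composite_risk_score(scores: dict[str, float]) -> float:
    """Weighted aggregate of per-category harm scores (each on a 0-10 scale)."""
    assert abs(sum(HARM_WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(HARM_WEIGHTS[category] * score
               for category, score in scores.items())

# Invented scores for a hypothetical model release.
release = {"people": 2.0, "property": 1.0,
           "economy": 6.0, "information ecosystem": 4.0}
print(composite_risk_score(release))  # -> 3.1
```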

______________

AI risk management from AI developers and deployers is much needed, both as a stop-gap mechanism and as an ongoing complement to evolving AI regulations. Initial efforts at AI risk management have taken an insular path that does not sufficiently leverage lessons from existing disciplines. We must avoid reinventing the wheel and wasting precious time. AI risk has many distinguishing characteristics, but for each of them, there are parallels to be found that should not be ignored.
