What Frameworks Should Companies Consider for AI Risk Management?

Dear Will & AiME,

We’ve been hearing more about AI governance and frameworks for managing AI risk. Someone recently mentioned the NIST AI Risk Management Framework. It sounds useful, but I am concerned that it’s dated. Is it something businesses should still consider?

— Chief Risk Officer, Healthcare Technology Company

Short Answer 💡

The NIST AI Risk Management Framework is still a valuable starting point for companies trying to mitigate AI risk. It provides a practical structure for governance and oversight, but most organizations will need to supplement it with additional controls, contracts, and regulatory alignment.

Dear Chief Risk Officer,

The NIST Artificial Intelligence Risk Management Framework (AI RMF) was released in 2023 to help organizations manage risks associated with AI systems throughout their lifecycle. It's a voluntary framework and a widely referenced approach to responsible AI governance. But three years in AI is a lifetime, so let's look at what the framework actually does and how it holds up today.

What Does the NIST AI RMF Actually Do?

The NIST framework helps organizations identify, measure, and manage AI system risks across their lifecycle, from design and development to deployment and monitoring.

A central goal is promoting “trustworthy AI,” which includes considerations such as fairness, transparency, security, reliability, and accountability. The framework focuses on four core functions:

  • Govern: Establish policies, roles, and oversight structures for AI within the organization.

  • Map: Identify where AI is used, the context of its deployment, and the risks associated with each system.

  • Measure: Assess risks, performance, and trustworthiness through testing, metrics, and monitoring.

  • Manage: Implement controls and mitigation strategies to address identified risks.

Think of the framework as a structured process for building responsible AI systems.
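
To make the four functions concrete, here is a minimal sketch of how an organization might record a single AI system in a risk register, with fields loosely grouped under Govern, Map, Measure, and Manage. The structure, field names, and example values are our own illustrative assumptions, not terminology from the NIST framework:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative risk-register entry for one AI system.

    Field groupings loosely mirror the AI RMF core functions;
    the names are assumptions for this sketch, not NIST terms.
    """
    # Govern: who is accountable for this system
    name: str
    business_owner: str
    oversight_body: str

    # Map: deployment context and known risks
    use_case: str
    data_sources: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)

    # Measure: how trustworthiness is assessed
    metrics: dict[str, float] = field(default_factory=dict)

    # Manage: mitigations applied to the identified risks
    mitigations: list[str] = field(default_factory=list)

# Hypothetical example for a healthcare technology company.
chatbot = AISystemRecord(
    name="patient-intake-chatbot",
    business_owner="VP, Digital Health",
    oversight_body="AI Governance Committee",
    use_case="Answer routine intake questions before appointments",
    data_sources=["de-identified intake transcripts"],
    identified_risks=["hallucinated medical guidance", "PHI exposure"],
    metrics={"answer_accuracy": 0.92},
    mitigations=["human review of flagged responses", "PHI redaction filter"],
)
```

Even a simple register like this forces an organization to answer the framework's core questions for every system: who owns it, where it's used, how it's tested, and what controls are in place.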

Why the NIST AI RMF Is a Strategic Opportunity in 2026

When the framework was released, many organizations were experimenting with AI. Three years later, AI is embedded across business operations. Businesses deploy AI in customer service, product design, logistics, analytics, and internal operations. Regulators, clients, and partners expect organizations to demonstrate they understand and manage AI risks.

Adopting the NIST AI RMF provides several strategic benefits:

  • It creates a structured way to identify and document risks associated with AI across the lifecycle.

  • It demonstrates due diligence to regulators, customers, and business partners focused on AI governance.

  • It aligns technical teams, legal departments, and executives around common language and processes for responsible AI development.

  • It can play a role similar to the one the NIST Cybersecurity Framework played a decade ago, establishing best practices that harden into industry norms.

What Should Companies Do Before Adopting an AI Risk Framework?

For companies that have not yet reviewed the framework, the best first step is a mapping exercise. Many organizations form a cross-functional team, often including legal, IT, security, HR, and product leadership, to evaluate existing AI systems against the framework.

This process typically involves:

  • Identifying where AI is already being used in the business.

  • Documenting the purpose and data sources for those systems.

  • Assessing potential risks such as bias, privacy issues, hallucinations, or security vulnerabilities.

  • Determining whether appropriate oversight and monitoring mechanisms exist.

The goal is to identify opportunities for improvement. For example, some organizations discover they can strengthen documentation explaining how models were trained or what data sources were used. Others find opportunities to add formal processes for monitoring AI performance after deployment.

Once those opportunities are identified, companies can create a practical improvement roadmap.
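
Continuing the hypothetical AISystemRecord register sketched earlier, part of a mapping exercise can even be automated by screening each record for missing documentation or oversight. The specific gap checks below are illustrative assumptions, not requirements drawn from NIST or any other framework:

```python
def find_gaps(record: AISystemRecord) -> list[str]:
    """Flag common documentation and oversight gaps in one record.

    These checks are illustrative examples only, not NIST requirements.
    """
    gaps = []
    if not record.data_sources:
        gaps.append("no training or input data sources documented")
    if not record.identified_risks:
        gaps.append("no risk assessment recorded")
    if record.identified_risks and not record.mitigations:
        gaps.append("risks identified but no mitigations in place")
    if not record.metrics:
        gaps.append("no performance or trustworthiness metrics tracked")
    return gaps

# Screen every system in the inventory; each flagged gap becomes
# a candidate item on the improvement roadmap.
for system in [chatbot]:
    for gap in find_gaps(system):
        print(f"{system.name}: {gap}")
```

Each flagged gap translates directly into a roadmap item with an owner and a timeline, which keeps the exercise practical rather than academic.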

What Other AI Frameworks Should Companies Consider?

While the NIST AI RMF is influential in the United States, it is not the only framework organizations are considering.

International standards such as ISO/IEC 42001 specify requirements for formal AI management systems, while the OECD AI Principles focus on ethical and policy considerations around trustworthy AI. The European Union's AI Act is also shaping global expectations around risk classification and oversight for AI systems.

Many of these frameworks share core principles: transparency, accountability, documentation, human oversight, and risk management. Organizations building governance structures now will be well-positioned to adapt as regulations evolve.

How Should Companies Address IP Risk in AI Governance?

For companies developing or deploying AI systems, intellectual property considerations are an important part of risk management.

Framework-based governance can help organizations address questions such as:

  • Who owns the data used to train AI models?

  • Are outputs from the system protected or reusable?

  • Are trade secrets or proprietary processes embedded in AI systems adequately protected?

Incorporating these issues into governance processes early helps companies protect their interests proactively.
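
One lightweight way to build these questions into governance from the start is to record the answers alongside each system in the inventory. The sketch below is a hypothetical IP checklist; the field names and example values are our assumptions, and none of this substitutes for review by counsel:

```python
from dataclasses import dataclass, field

@dataclass
class IPProfile:
    """Illustrative IP checklist for one AI system (not legal advice)."""
    training_data_owner: str        # who owns or licenses the training data
    output_rights: str              # e.g., "company-owned" or "license-restricted"
    embeds_trade_secrets: bool      # does the system encode proprietary processes?
    protections: list[str] = field(default_factory=list)  # NDAs, access controls, etc.

# Hypothetical profile for the patient-intake chatbot example.
chatbot_ip = IPProfile(
    training_data_owner="licensed from hospital partner",
    output_rights="company-owned",
    embeds_trade_secrets=True,
    protections=["role-based access controls", "vendor confidentiality terms"],
)
```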

— Will & AiME

Three Takeaways:

  1. The NIST AI Risk Management Framework provides a structured approach to managing AI risks across the lifecycle.

  2. Adopting the framework demonstrates due diligence to regulators, clients, and partners.

  3. Aligning governance, legal, and technical teams around AI risk management today helps organizations prepare for evolving regulations and protect their intellectual property.


Will Schultz & AiME

Will Schultz is an intellectual property and technology attorney and chair of Merchant & Gould’s Internet, Cybersecurity, and E-Commerce practice. He advises businesses on AI, online platforms, digital assets, and emerging technology law, drawing on experience as both a lawyer and entrepreneur.

https://www.merchantgould.com/people/william-d-schultz/