Implementing NIST AI RMF: Governing (Part 1 of 4)
Effective AI governance drives business value.
This is Part 1 of a four-part series on implementing practical AI governance using the NIST AI Risk Management Framework. While 75% of AI initiatives fail to deliver expected business value, organizations with systematic governance achieve returns on their AI investments. This series transforms NIST's four core functions (GOVERN, MAP, MEASURE, MANAGE) into actionable implementation strategies.
Part 1 focuses on building governance structures that accelerate rather than block AI adoption, moving beyond compliance theater to strategic enablement.
The numbers tell a pretty bleak story. IBM's 2025 Institute for Business Value CEO Study surveyed 2,000 global CEOs and found that only 25% of AI initiatives delivered expected ROI over the past three years. Meanwhile, 85% of those same executives expect positive ROI for scaled AI efficiency by 2027. [1]
The gap between current reality and (unreasonable?) expectations reveals a fundamental truth: the technology itself usually isn't the problem; its governance is. Organizations are exhausting resources on poorly planned initiatives while vendors inflate complexity and create dependencies. The result is what I call "governance theater": elaborate processes that consume time and budget while missing the actual risks that derail AI projects. It's the equivalent of the "security theater" we see in bottom-up security programs.
When Traditional IT Governance Meets AI Reality
Traditional IT governance frameworks weren't designed for AI's unique challenges. Most enterprise risk management systems treat AI like any other software, applying standard security reviews and deployment procedures. This approach fails because AI systems introduce fundamentally different risks.
Data Dependencies: AI systems require access to data sets that traditional applications never touch. Finance models need HR data, marketing systems process customer communications, and operations tools analyze vendor relationships. Standard data governance boundaries become obstacles rather than protections.
Vendor Complexity Inflation: Every software vendor is adding AI features, often without clear documentation of what models they're using, where data is processed, or how decisions are made. Organizations find themselves managing dozens of AI implementations they never formally approved.
Resource Allocation Without Outcomes: MIT Sloan's research on AI risk frameworks shows that organizations typically spread resources across multiple simultaneous initiatives without clear success criteria. [2] Just like other meandering thought experiments in enterprises, the result is pilot purgatory: an endless testing cycle with no path to production value.
How NIST AI RMF Can Help
The NIST AI Risk Management Framework provides the first government-backed framework specifically designed for AI governance challenges. Released in January 2023 and updated in July 2024, it includes a 140-page Playbook with practical templates and implementation guidance. [3]
Unlike compliance-heavy alternatives, NIST AI RMF is voluntary and adaptable; it's a framework rather than a checklist, similar to the Cybersecurity Framework (CSF) that some organizations use as their North Star. Organizations can build on existing risk management structures rather than creating parallel systems. This matters because, in the end, successful AI governance improves decision-making speed and quality.
The framework's GOVERN function addresses the core problems that cause AI initiatives to fail:
Policy Integration: Instead of standalone AI policies, the framework integrates AI considerations into existing business strategy and risk management processes. This eliminates the coordination overhead that typically bloats and ultimately kills pilots.
Risk Tolerance with Clear Thresholds: Organizations establish specific ROI requirements (such as 15% return within 6 months for new initiatives) and risk categories that determine approval processes. High-risk applications require executive or board review, medium-risk applications need departmental approval, and low-risk applications can proceed with standard IT review. (A sketch of this routing follows this list.)
Resource Allocation Aligned with Priorities: Rather than spreading resources across multiple vendors, organizations focus on initiatives that support strategic objectives with measurable outcomes.
Roles and Responsibilities That Scale: The framework creates accountability without bureaucracy. Chief AI Officers manage portfolio strategy, department heads own use cases in their domains, and IT manages technical implementation.
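To make that tangible, here's a minimal sketch of how those thresholds and routes might be encoded, assuming the three-tier classification and the illustrative 15%-within-6-months gate mentioned above. The names and numbers are placeholders for your own risk tolerance statement, not anything NIST prescribes.

```python
from dataclasses import dataclass

# Illustrative thresholds only -- tune these to your own risk tolerance statement.
ROI_THRESHOLD_PCT = 15    # minimum expected return for new initiatives
ROI_WINDOW_MONTHS = 6     # window in which that return must materialize

APPROVAL_ROUTES = {
    "high": "executive/board review",
    "medium": "departmental approval",
    "low": "standard IT review",
}

@dataclass
class AIInitiative:
    name: str
    risk_tier: str            # "high", "medium", or "low"
    expected_roi_pct: float   # projected return
    roi_window_months: int    # months to realize that return

def route_for_approval(initiative: AIInitiative) -> str:
    """Return the approval path, or flag the initiative if it misses the ROI gate."""
    meets_roi_gate = (
        initiative.expected_roi_pct >= ROI_THRESHOLD_PCT
        and initiative.roi_window_months <= ROI_WINDOW_MONTHS
    )
    if not meets_roi_gate:
        return "rework business case: does not meet ROI threshold"
    return APPROVAL_ROUTES[initiative.risk_tier]

# Example: a medium-risk forecasting pilot projecting a 20% return in 5 months
print(route_for_approval(AIInitiative("demand-forecasting", "medium", 20.0, 5)))
# -> departmental approval
```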
Practical Implementation of AI Governance
The key to successful AI governance is avoiding the "comprehensive transformation" trap that vendors promote. Instead, organizations should implement governance incrementally, building on existing structures while addressing immediate risks.
Phase 1 - Leadership Alignment
Start with C-suite commitment and budget allocation. This should not be a long exercise! Organizations typically need to invest 5-10% of their AI budget in governance infrastructure, which is de minimis compared to the cost of failed initiatives.
Define the Chief AI Officer role clearly, or add the responsibilities to an existing position. The role should focus on managing AI portfolio strategy and the vendor matrix, not on approving every AI use case. Microsoft's AETHER Committee, established in 2017, has reviewed over 200 use cases since 2019 using a hub-and-spoke model that balances oversight with operational efficiency. [4] You really won't need that many. I've found that chunking up the use cases by department or function leads to just a handful of useful ones that will evolve over time.
Establish success metrics that matter. Focus on business outcomes, and pro tip: adoption is not an outcome. Track whether initiatives deliver ROI, initially within just a few months; measure deployment time from approval to production; and monitor vendor relationship effectiveness the same way. Again, if you're spending longer than a week or two on this phase, get out of your own way and get it done.
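If it helps, here's roughly what an outcome-focused scorecard could look like. The fields are my own illustrative assumptions, not a standard schema; the point is that every field is an outcome or a lead time, never an adoption count.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InitiativeScorecard:
    name: str
    approved_on: date
    in_production_on: date | None   # None until it actually ships
    realized_roi_pct: float | None  # measured, not projected
    vendor_issues_open: int         # crude proxy for vendor relationship health

    def days_to_production(self) -> int | None:
        """Deployment time from approval to production -- the metric that matters."""
        if self.in_production_on is None:
            return None
        return (self.in_production_on - self.approved_on).days

card = InitiativeScorecard("contract-review-assistant", date(2025, 3, 1),
                           date(2025, 5, 15), 18.0, 1)
print(card.days_to_production())  # 75
```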
Phase 2 - Current State Assessment
Conduct a comprehensive AI inventory. It's similar to an attack surface inventory, but it goes a little deeper. This includes obvious implementations like ChatGPT enterprise licenses, but also embedded AI in SaaS applications. Marketing automation, customer service platforms, and financial planning tools often include AI features that were never formally reviewed. Heavy sigh: this part is going to be annoying.
Risk categorization using NIST framework classifications helps prioritize governance attention. High-risk applications (those affecting customer outcomes, financial decisions, or regulatory compliance) need immediate review. Medium-risk applications require documented approval processes. Low-risk applications can proceed with standard IT security review.
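Here's a minimal sketch of what a single inventory entry might look like, assuming you record where embedded AI hides, whether it was ever reviewed, and what data it touches. The tiering rule is illustrative, a rough paraphrase of the categories above rather than framework text.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    system: str                 # "ChatGPT Enterprise", "CRM sentiment scoring", ...
    owner: str                  # accountable department or person
    embedded_in_saas: bool      # AI feature inside an existing SaaS product?
    formally_reviewed: bool     # did it ever go through an approval process?
    data_touched: list[str] = field(default_factory=list)

    def risk_tier(self) -> str:
        """Rough tiering: anything touching customers, money, or regulators is
        high risk; unreviewed embedded AI is at least medium."""
        high_risk_data = {"customer outcomes", "financial decisions", "regulatory"}
        if high_risk_data & set(self.data_touched):
            return "high"
        if self.embedded_in_saas and not self.formally_reviewed:
            return "medium"
        return "low"

entry = AIInventoryEntry("marketing-automation-copy-assistant", "Marketing",
                         embedded_in_saas=True, formally_reviewed=False,
                         data_touched=["customer outcomes"])
print(entry.risk_tier())  # high
```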
Audit current resource commitments versus capacity. Many organizations discover they've committed to more AI initiatives than they can successfully implement. This resource constraint reality mirrors challenges I've identified in traditional governance frameworks. In my article "The Matrix Approach to Incremental DRP and BCP Review," I outlined how organizations with limited resources need multi-dimensional frameworks that prioritize governance attention where it will have the greatest impact. [5] The same principle applies to AI governance: consolidating efforts on fewer, high-impact projects typically delivers better results than spreading resources across multiple vendors or initiatives.
Phase 3 - Framework Design
Develop policies that enable rather than obstruct innovation. This should be about half the work right here. Create clear approval workflows based on risk classification. High-risk applications require business case development, stakeholder review, and ongoing monitoring. Medium-risk applications need departmental approval with IT security clearance. Low-risk applications proceed through standard procurement processes.
Implement a Red/Yellow/Green classification system for rapid decision-making. Green applications meet standard criteria and can be approved departmentally. Yellow applications require additional review but follow expedited processes. Red applications need comprehensive evaluation but don't automatically mean rejection, just a call for appropriate oversight.
IMPORTANT: Overclassification will come back to bite you later. Just like your data security classification systems, overclassification leads to an un-ownable juggernaut. I get being conservative and seeing everything as life in the Yellow. Really, I do. But if you do that at this point, it's going to be a world of hurt when it comes to mapping that inventory in the next step. Also: decision paralysis is a real danger here.
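For illustration only, here's a toy version of that triage. The questions and cutoffs are mine, not NIST's, and notice that Green is actually reachable, which is the whole point of the warning above.

```python
def classify(affects_customers: bool, affects_finance_or_compliance: bool,
             new_vendor: bool, sensitive_data: bool) -> str:
    """Toy Red/Yellow/Green triage from a handful of questionnaire answers.
    Resist the urge to make every path land on Yellow."""
    if affects_customers and (affects_finance_or_compliance or sensitive_data):
        return "Red"     # comprehensive evaluation, not automatic rejection
    if affects_customers or affects_finance_or_compliance or new_vendor or sensitive_data:
        return "Yellow"  # additional but expedited review
    return "Green"       # departmental approval against standard criteria

print(classify(affects_customers=False, affects_finance_or_compliance=False,
               new_vendor=True, sensitive_data=False))   # Yellow
```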
Essential tools for this phase include:
AI project assessment questionnaires that capture business objectives, technical requirements, and risk factors.
Risk-complexity matrices that help determine appropriate approval processes (a small example follows this list).
ROI threshold calculators that ensure projects meet financial criteria.
Vendor evaluation checklists that standardize due diligence no matter who's in the procurement loop.
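As a quick sketch of the risk-complexity matrix idea, here's one way to encode it so the answer is a lookup rather than a meeting. The cell values are placeholders to swap for your own approval processes.

```python
# Risk (first key) x complexity (second key) -> approval process. Entries are illustrative.
RISK_COMPLEXITY_MATRIX = {
    ("low",    "low"):  "standard IT review",
    ("low",    "high"): "departmental approval",
    ("medium", "low"):  "departmental approval + IT security clearance",
    ("medium", "high"): "departmental approval + IT security clearance",
    ("high",   "low"):  "business case + stakeholder review",
    ("high",   "high"): "business case + stakeholder review + ongoing monitoring",
}

def approval_process(risk: str, complexity: str) -> str:
    """Look up the required approval path for a given risk/complexity pairing."""
    return RISK_COMPLEXITY_MATRIX[(risk, complexity)]

print(approval_process("medium", "high"))
# -> departmental approval + IT security clearance
```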
Learning from Early Adopters
Microsoft's AETHER proved that governance can enhance (not impede) innovation. Their entire responsible AI program grew from that initial hub-and-spoke structure, maintaining central oversight while empowering distributed decision-making. At the time of writing, Microsoft has published 30 transparency documents across the products and features that flowed from it, and that doesn't cover all of the use cases. That's… a lot.
Deutsche Telekom also took a proactive approach, starting AI governance in 2018 before regulatory requirements emerged. Their AI manifesto created competitive advantage by enabling faster, more confident AI adoption when regulations did arrive. [6] The approach was forced to pivot a little with the advent of commercially available LLMs like ChatGPT and Gemini, but the principles remain the same.
And not for nothing, but the Telekom and AETHER work “rhyme” more than a little. That’s what I’d like you to take from those examples more than anything else. Common success patterns across these implementations include building on existing governance structures rather than creating parallel systems, focusing on business enablement rather than compliance checking, investing in continuous education for teams, and establishing measurable accountability for AI initiative outcomes. Sort of sounds like your security program now, doesn’t it?
Lastly, I’m going to sound like a broken record, but structure matters more than complexity. As with all things, teams that get this tend to move faster and get results. And in my opinion, AI disclosures will be part of the “trust centers”, right alongside SOC 2, ISO 27001, HIPAA, and whatever other compliance banners we’re already waving around.
Moving Beyond Governance Theater
Effective AI governance drives business value. Not a shocker… when you have a plan, things tend to go better. Just like in security, measuring business impact creates competitive advantage, while a sole focus on process documentation creates overhead.
Next week, we'll explore the MAP function. Organizations need visibility into the hidden AI proliferating across their technology stack without oversight. The challenge isn't just managing the AI you know about, but the AI you didn't realize you had. (And if you're like me, you wonder how to keep up with vendors slipping more and more AI-driven features into your existing fabric of SaaS applications!)
That published 75% failure rate isn't inevitable. Schedule a C-suite meeting. DO THE SURVEYS. Identify governance structures. This exercise isn’t wholly new territory, and it’s likely you’ll be able to reuse some of your existing program components in the process.
Connect with me to discuss your organization's AI governance challenges and implementation strategies.
This series continues next week with Part 2: Mapping.
Read the NIST AI RMF Implementation Series:
Part 1: Governing
Part 2: Mapping
Part 3: Measuring
Part 4: Managing
References
[1] IBM Institute for Business Value. "CEO Study 2025: AI Leadership." IBM Newsroom, May 6, 2025. https://newsroom.ibm.com/2025-05-06-ibm-study-ceos-double-down-on-ai-while-navigating-enterprise-hurdles
[2] MIT Sloan Management Review. "A Framework for Assessing AI Risk." Massachusetts Institute of Technology, 2024. https://mitsloan.mit.edu/ideas-made-to-matter/a-framework-assessing-ai-risk
[3] National Institute of Standards and Technology. "AI Risk Management Framework (AI RMF 1.0)" and "AI RMF Playbook." NIST, January 2023, updated July 2024. https://www.nist.gov/itl/ai-risk-management-framework
[4] Microsoft Corporation. "Microsoft Responsible AI: Principles and Approach." Microsoft AI Ethics & Effects in Engineering and Research Committee, 2017-2025. https://www.microsoft.com/en-us/ai/principles-and-approach
[5] "The Matrix Approach to Incremental DRP and BCP Review." https://www.brianfending.com/articles/matrix-approach-incremental-drp-bcp-review
[6] Deutsche Telekom. "Guidelines for Artificial Intelligence." Deutsche Telekom Digital Responsibility Documentation, 2018. https://www.telekom.com/en/company/digital-responsibility/details/artificial-intelligence-ai-guideline-524366
CREDITS: base cover image from airc.nist.gov; Anthropic Claude Sonnet 4 for editorial review