In the race to adopt artificial intelligence, many organizations are moving at breakneck speed without fully considering the regulatory landscape that's rapidly evolving around them. As AI becomes increasingly integrated into business operations, the legal frameworks governing its use are becoming more complex and consequential.
At Responsive Technology Partners, we see firsthand how unprepared many businesses are for the wave of AI regulations coming their way. The question isn't whether your organization will face AI compliance challenges but when, and how prepared you'll be when they arrive.
The Shifting Regulatory Landscape
The AI regulatory environment resembles the early days of data privacy legislation. Remember when GDPR seemed overwhelming? Today, we're witnessing a similar pattern with AI regulations, but the pace of development is considerably faster.
The EU AI Act, which entered into force in August 2024, represents the world's first comprehensive AI regulatory framework. While American businesses might assume it doesn't apply to them, the Act's extraterritorial reach means that many U.S. companies offering products or services to EU customers will need to comply.
Domestically, we're seeing movement at multiple levels. Federal agencies like the FTC and EEOC are increasingly scrutinizing AI applications for potential discrimination and unfair practices. State legislation, particularly in California, Colorado, and New York, is advancing rapidly. Industry-specific regulations in healthcare, finance, and critical infrastructure are imposing specialized AI governance requirements.
The common thread running through all these regulatory efforts is accountability. Regulators are making it clear that "we didn't know" or "the algorithm did it" won't be acceptable excuses for harmful AI outcomes. Organizations must take responsibility for the AI systems they deploy, regardless of whether they built them in-house or licensed them from vendors.
Hidden Compliance Risks in AI Adoption
Many businesses don't realize they're already using AI in ways that create regulatory exposure. Marketing departments adopt AI content-generation tools that can produce material making unsubstantiated claims about products or services, or that incorporates copyrighted material without permission. Without proper oversight, these seemingly innocent productivity tools create significant risk exposure.
HR teams implement AI systems to screen resumes and candidates, which may inadvertently demonstrate bias against certain groups of applicants if not properly designed and tested. Customer service departments deploy AI chatbots that collect customer data without proper disclosures, creating privacy compliance issues.
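To make the screening risk concrete, one widely used first check is the EEOC's "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the tool warrants closer review. Below is a minimal sketch in Python; the group names and selection counts are hypothetical, and a real assessment would involve far more rigorous statistical and legal analysis.

```python
# Minimal disparate-impact check based on the EEOC "four-fifths rule".
# Selection counts below are hypothetical illustrations, not real data.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compute selection rate (selected / applicants) per group."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> list[str]:
    """Return groups whose selection rate is below 80% of the highest rate."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return [group for group, rate in rates.items()
            if rate < 0.8 * benchmark]

# Hypothetical screening results: {group: (candidates advanced, total applicants)}
results = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (45, 90)}

flagged = four_fifths_check(results)
if flagged:
    print(f"Potential adverse impact, review required: {flagged}")
```

A failing check doesn't prove discrimination, and a passing one doesn't rule it out; it's a screening signal that should trigger deeper review before the system keeps making decisions about applicants.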
These scenarios illustrate why a reactive approach to AI governance is increasingly untenable. By the time problems surface, the damage to regulatory compliance and corporate reputation may already be done.
The NIST AI Risk Management Framework: A Foundation for Governance
Fortunately, organizations don't need to develop AI governance frameworks from scratch. The National Institute of Standards and Technology (NIST) has created the AI Risk Management Framework (AI RMF), which provides a structured approach to managing AI risks across the entire lifecycle of AI systems.
The NIST AI RMF is organized around four core functions that provide a comprehensive foundation for AI governance:
- Govern: This function establishes the organizational structures, policies, and procedures needed for AI risk management. It focuses on how an organization allocates resources, defines roles and responsibilities, and makes decisions about AI risk.
- Map: The mapping function involves identifying, categorizing, and documenting contexts, capabilities, and potential impacts of AI systems. This systematic approach helps organizations understand where AI is being used and what risks those implementations might create.
- Measure: This function focuses on analyzing, assessing, and tracking AI risks. It provides methods for quantifying risks and determining their significance, helping organizations prioritize governance efforts.
- Manage: The management function involves allocating resources to address and reduce AI risks. This includes implementing controls, making risk-based decisions, and establishing processes for ongoing monitoring and iteration.
The NIST framework is particularly valuable because it's technology-neutral, voluntary, and non-prescriptive. It complements existing risk management processes and can be tailored to organizations of different sizes and industries. Most importantly, it aligns well with emerging regulatory requirements, helping organizations prepare for compliance while optimizing AI benefits.
Building an AI Governance Framework Based on NIST Principles
Using the NIST AI RMF as a foundation, organizations can develop governance frameworks that address their specific needs and risks. Here's how the NIST approach applies to common governance challenges:
- AI Inventory and Classification: The Map function provides a structured approach to documenting all AI systems in use across the organization (a minimal sketch of such an inventory appears after this list). This often reveals AI implementations that senior leadership isn't aware of, including tools adopted at the departmental level without centralized oversight.
- Risk Assessment Protocols: The Measure function offers methodologies for evaluating new AI implementations before deployment. These protocols should require cross-functional review involving legal, compliance, IT security, and business units, consistent with NIST's recommendation for diverse perspectives in risk assessment.
- Accountability Structures: The Govern function emphasizes the importance of clear roles and responsibilities. Every AI system should have designated owners accountable for ensuring adherence to both internal policies and external regulations, as recommended by NIST.
- Transparency Guidelines: NIST emphasizes the importance of transparency throughout the AI lifecycle. Organizations should create guidelines for communicating about AI use to customers and regulators, ensuring consistent, accurate disclosures about how AI influences decision-making processes.
- Testing and Monitoring: The Manage function highlights the need for ongoing assessment and adaptation. Organizations should implement regular testing to identify potential biases or inaccuracies in AI systems, with monitoring processes that align with NIST's recommendations for continuous improvement.
- Incident Response Plans: The Manage function also addresses how organizations should respond to negative impacts. Developing incident response plans specifically for addressing AI failures or unintended consequences is essential, with escalation procedures and remediation strategies as recommended by NIST.
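As a concrete starting point for the inventory and accountability items above, here is a minimal sketch of an AI system register in Python. The fields, risk tiers, and entries are illustrative assumptions rather than a NIST-prescribed schema; organizations should adapt them to their own risk taxonomy.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI inventory (Map function)."""
    name: str
    business_unit: str
    purpose: str
    owner: str                      # accountable individual (Govern function)
    vendor: str | None = None       # None for systems built in-house
    risk_tier: str = "unassessed"   # e.g., "low" / "medium" / "high" (Measure)
    last_reviewed: date | None = None
    known_issues: list[str] = field(default_factory=list)

def ungoverned(systems: list[AISystemRecord]) -> list[AISystemRecord]:
    """Flag systems missing an owner or a completed risk assessment."""
    return [s for s in systems
            if not s.owner or s.risk_tier == "unassessed"]

# Hypothetical entries surfaced by a departmental audit
inventory = [
    AISystemRecord("resume-screener", "HR", "Candidate triage",
                   owner="J. Smith", vendor="ExampleVendor",
                   risk_tier="high", last_reviewed=date(2025, 1, 15)),
    AISystemRecord("marketing-copy-bot", "Marketing", "Draft ad copy", owner=""),
]

for system in ungoverned(inventory):
    print(f"Needs governance attention: {system.name} ({system.business_unit})")
```

Even a register this simple makes the Map and Govern functions auditable: every system has a named owner and an explicit risk tier, and the gaps are visible at a glance.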
While implementing this framework requires significant effort, organizations that do so will have a streamlined process for adopting new AI technologies while managing associated risks. When new regulations emerge, they'll be able to adapt quickly rather than scrambling to establish governance from scratch.
The Board's Role in AI Governance
For boards of directors, AI governance is becoming a critical responsibility. The NIST AI RMF specifically addresses the role of senior leaders in establishing an organizational approach to AI risk management.
Boards need to understand how AI is being used throughout the organization and what risks those implementations create without getting pulled into operational details. A structured quarterly AI governance review process, aligned with the NIST Govern function, can give boards visibility into the company's AI landscape. This review should include updates on new AI implementations, changes to existing systems, and any incidents or near-misses that occurred during the quarter.
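One way to make that review repeatable, sketched below under assumed inputs, is to generate the board packet's AI summary from the same inventory and incident log each quarter rather than assembling it ad hoc. Every field and entry shown here is hypothetical.

```python
# Minimal quarterly AI governance summary for a board packet.
# Input structures are illustrative assumptions, not a standard schema.

def quarterly_summary(quarter: str,
                      new_systems: list[str],
                      changed_systems: list[str],
                      incidents: list[dict]) -> str:
    """Assemble a plain-text board summary aligned with the Govern function."""
    lines = [f"AI Governance Review - {quarter}",
             f"New AI implementations: {len(new_systems)}",
             f"Material changes to existing systems: {len(changed_systems)}",
             f"Incidents / near-misses: {len(incidents)}"]
    for inc in incidents:
        lines.append(f"  - {inc['system']}: {inc['summary']} "
                     f"(severity: {inc['severity']})")
    return "\n".join(lines)

print(quarterly_summary(
    quarter="Q3 2025",
    new_systems=["marketing-copy-bot"],
    changed_systems=["resume-screener"],
    incidents=[{"system": "support-chatbot",
                "summary": "undisclosed data collection found and fixed",
                "severity": "medium"}],
))
```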
Boards should focus on ensuring appropriate governance structures are in place, rather than trying to understand the technical details of individual implementations. This approach allows them to fulfill their fiduciary responsibilities without overstepping into management's domain.
Those that fail to provide this level of oversight may face more than regulatory penalties—they could be exposed to shareholder lawsuits alleging breach of fiduciary duty. Boards don't need to become AI experts, but they do need to ask the right questions and ensure management has a thoughtful approach to managing these new risks.
The Competitive Advantage of Responsible AI
While regulatory compliance might seem like a burden, organizations that establish robust AI governance frameworks now will gain significant advantages. From a regulatory perspective, they'll be well-positioned to comply with existing regulations and emerging AI-specific requirements. From a reputational standpoint, they can build trust with customers and partners by being transparent about how they use AI to support business decisions.
Operationally, organizations with strong governance avoid the disruptions that occur when poorly governed systems need to be pulled back or reconstructed to address regulatory concerns. Perhaps most importantly, a thoughtful approach to AI governance ensures that innovations align with core business objectives. The NIST framework emphasizes this benefit, noting that effective AI risk management creates conditions for innovation to flourish.
Looking Ahead: The Next Wave of AI Regulation
The current regulatory landscape is just the beginning. Based on discussions with industry leaders and policy experts, we anticipate significant evolution in AI regulation over the coming years. Requirements for algorithmic impact assessments are likely to increase, much as environmental impact assessments became standard for major projects. Organizations will likely face mandatory disclosure requirements about AI use in customer interactions, allowing individuals to understand when and how automated systems influence decisions about them.
For high-risk AI applications in sectors like healthcare, finance, and critical infrastructure, industry-specific certification requirements may emerge. These could resemble regulatory approval processes for other technologies, with organizations needing to demonstrate safety and efficacy before deployment.
On the international front, efforts toward harmonization of core AI governance principles are expected, even as implementation details vary across jurisdictions. The NIST AI RMF may well serve as a template for this harmonization, as it's already being recognized internationally as a valuable approach to AI governance.
Organizations that wait for these changes before establishing their governance frameworks will find themselves constantly playing catch-up, diverting resources from innovation to compliance remediation. By contrast, those that develop flexible, principles-based governance now, using frameworks like NIST's, will be able to adapt incrementally as the regulatory environment evolves.
Taking the First Steps
If your organization hasn't yet established a formal AI governance framework, begin by adopting the NIST approach. Start with the Map function, conducting an AI audit to document where artificial intelligence is currently being used and for what purposes. Many organizations are surprised by the number of AI implementations across their operations, from quality control systems to supply chain optimization algorithms, many implemented without centralized oversight.
Next, apply the Measure function to evaluate how well your existing governance structures can accommodate AI systems. Most organizations find that their current compliance and risk management frameworks need enhancement to address AI-specific challenges. The NIST framework can help identify these gaps and prioritize improvements.
Then, implement the Govern function by bringing together technology, legal, compliance, and business leaders to develop a coherent approach to AI governance. This cross-functional collaboration is essential because AI challenges span traditional organizational boundaries, as NIST recognizes in its emphasis on diverse perspectives.
Finally, apply the Manage function to develop processes for ongoing monitoring, adaptation, and response. This establishes the foundation for continuous improvement in your AI governance approach, ensuring it remains effective as technologies and regulations evolve.
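As an illustration of what the Manage function's monitoring might look like in practice, here is a minimal drift check: it compares a system's recent approval rate to a baseline agreed during risk assessment and escalates when the drift exceeds a threshold. The metric, baseline, threshold, and sample data are all assumptions; real deployments would monitor multiple metrics against production logs on a schedule.

```python
# Minimal ongoing-monitoring check for the Manage function.
# Baseline, threshold, and the recent outcomes are illustrative assumptions.

BASELINE_APPROVAL_RATE = 0.62   # agreed during the initial risk assessment
DRIFT_THRESHOLD = 0.10          # absolute drift that triggers escalation

def check_drift(recent_outcomes: list[bool]) -> None:
    """Compare the recent approval rate to baseline; escalate on breach."""
    rate = sum(recent_outcomes) / len(recent_outcomes)
    drift = abs(rate - BASELINE_APPROVAL_RATE)
    if drift > DRIFT_THRESHOLD:
        # In practice: open an incident, notify the system owner, and
        # follow the escalation path in the incident response plan.
        print(f"ESCALATE: approval rate {rate:.2f} drifted {drift:.2f} "
              f"from baseline {BASELINE_APPROVAL_RATE:.2f}")
    else:
        print(f"OK: approval rate {rate:.2f} within tolerance")

# Hypothetical sample of the last 100 decisions (True = approved)
check_drift([True] * 45 + [False] * 55)
```

Findings from a check like this feed directly into the incident response plans described earlier, closing the loop between monitoring and remediation.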
The time to act is now, before your organization's AI use outpaces your governance capabilities. By taking a proactive approach to AI regulation, leveraging established frameworks like NIST's AI RMF, you can ensure that artificial intelligence enhances your business without creating undue legal or reputational risk.
Tom Glover is Chief Revenue Officer at Responsive Technology Partners, specializing in cybersecurity and risk management. With over 35 years of experience helping organizations navigate the complex intersection of technology and risk, Tom provides practical insights for business leaders facing today's security challenges.