Risk Function Accountability: AI/ML & Algorithmic Systems

The 14 accountability domains that define what the Risk Function owns, challenges, and governs across the AI lifecycle — from framework design to ethical oversight, from incident escalation to board reporting.

April 2026 · AI Risk Governance · Enterprise Reference
Tier 1 · Board: Sets risk appetite and tolerance. Ultimate accountability for AI risk posture.
Tier 2 · Management: Executes strategy and controls. Deploys resources, owns AI systems and their outcomes.
Tier 3 · Risk Function: Identifies, measures, monitors, and challenges AI risk. Independent second-line oversight.

14 accountability domains · Mapped to NIST AI RMF · EU AI Act aligned
2.3 · Average Responsible AI maturity score in 2026, up from 2.0 in 2025. Only one-third of organisations score 3+ in governance (McKinsey 2026 AI Trust Survey).
€35M · Maximum fine under the EU AI Act for prohibited AI practices, or 7% of global annual turnover, whichever is higher. Full high-risk enforcement: August 2026.
88% · Of organisations now use AI in at least one business function, up from 78% the prior year. AI risk governance is no longer optional infrastructure.
66 · Average number of GenAI applications running in a typical enterprise, with approximately 10% classified as high-risk. Most are ungoverned.
The Accountability Imperative

Why the Risk Function Must Own the AI Risk Architecture

Board
Sets Risk Appetite and Tolerance
Defines the organisation’s acceptable risk posture for AI, approves the AI risk framework, and holds ultimate accountability for AI-related failures and regulatory exposure.
Management
Executes Strategy and Controls
Deploys AI systems, owns their outcomes, implements the controls defined in the risk framework, and is accountable to the Board for execution within approved tolerance.
Risk Function
Identifies, Measures, Monitors and Challenges
Acts as the independent second line. Owns the AI risk architecture, challenges assumptions, monitors controls, and provides the Board and Management with objective risk intelligence.

AI is no longer a pilot programme. By 2025, 88% of organisations were using AI in at least one business function — and the average enterprise was running 66 different generative AI applications, approximately 10% of which are classified as high-risk. This expansion is outpacing governance infrastructure everywhere: McKinsey’s 2026 AI Trust Maturity Survey found that while average Responsible AI maturity scores have increased, only about one-third of organisations score at level 3 or higher in governance and strategy. Technical capabilities are advancing. Oversight structures are not keeping pace.

The consequence is not abstract. The EU AI Act — the world’s first comprehensive AI law — makes full high-risk compliance obligations applicable on 2 August 2026, with fines reaching €35 million or 7% of global annual turnover. NIST released its AI RMF Profile on Trustworthy AI in Critical Infrastructure in April 2026. ISO 42001 certification is becoming a competitive and regulatory requirement. And Aon’s Head of Global Cyber Solutions has observed directly: organisations that invest early in transparent governance and clear accountability will be best positioned to adopt AI safely — and to turn risk into a long-term advantage.

The 14 accountability domains mapped in this document define what the Risk Function owns in an AI/ML and algorithmic systems context. They are the second-line obligations that sit between Management’s operational accountability and the Board’s strategic oversight — providing the independent challenge, measurement, and surveillance that neither the first line nor the Board can perform for themselves.

The 14 Accountability Domains

Risk Function Obligations Across the AI Lifecycle

01
Framework Design · Architecture Layer

AI Risk Governance Framework

Owner of the AI risk architecture and policy layer
Risk Function Role
FRAMEWORK OWNER

The Risk Function is the architect and custodian of the organisation’s AI/ML Risk Management Framework — the document layer that defines how AI risk is identified, assessed, controlled, and reported across every business line and use case. This is a design responsibility, not a monitoring one: the Risk Function does not merely audit the framework’s application but actively designs its structure, maintains it in response to regulatory change, and enforces its standards across the enterprise.

Defining the AI risk taxonomy is one of the most consequential outputs of this function. A taxonomy that fails to account for model drift, prompt injection, third-party dependency, or agentic AI autonomy will produce a risk universe with structural blind spots — leaving the organisation exposed precisely where AI risk is most acute. In 2026, the taxonomy must also integrate the NIST AI RMF’s four core functions (Govern, Map, Measure, Manage) and align with EU AI Act risk classifications.

Integrating AI into the broader Enterprise Risk Management (ERM) and operational risk frameworks ensures that AI risk does not exist as a siloed parallel programme but is treated as a category within the organisation’s unified risk architecture — with consistent escalation paths, consistent materiality thresholds, and consistent reporting cadences.

Key Obligations
Design and maintain the AI/ML Risk Management Framework as a living policy document aligned to regulatory and enterprise standards
Define the AI risk taxonomy — the complete classification of AI risk types with clear boundaries and definitions
Establish risk appetite and tolerance thresholds for AI systems, validated by the Board
Integrate AI risk into ERM and operational risk frameworks, ensuring consistent treatment across the enterprise
Review and update the framework in response to regulatory change (EU AI Act, NIST RMF updates, OECD principles)
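
The taxonomy and its coverage obligation can be made concrete. The sketch below is illustrative only: the category names, example risks, and NIST AI RMF function mappings are assumptions for demonstration, not a normative classification. It models a top-level taxonomy and checks that every registered use case has been assessed against every category.

```python
# Illustrative top-level AI risk taxonomy (category names, examples, and
# RMF-function mappings are assumptions, not a normative list).
AI_RISK_TAXONOMY = {
    "model_risk":       {"examples": ["model drift", "hallucination"],     "rmf": ["MAP", "MEASURE"]},
    "data_risk":        {"examples": ["training bias", "lineage gaps"],    "rmf": ["MAP", "MEASURE"]},
    "operational_risk": {"examples": ["prompt injection", "outages"],      "rmf": ["MANAGE"]},
    "ethical_risk":     {"examples": ["unfair outcomes"],                  "rmf": ["GOVERN", "MEASURE"]},
    "regulatory_risk":  {"examples": ["EU AI Act non-compliance"],         "rmf": ["GOVERN"]},
    "third_party_risk": {"examples": ["unnotified vendor model updates"],  "rmf": ["MANAGE"]},
}

def coverage_gaps(use_case_assessments: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per use case, the taxonomy categories it has not yet been
    assessed against -- the 'completeness problem' made checkable."""
    all_categories = set(AI_RISK_TAXONOMY)
    return {uc: all_categories - assessed
            for uc, assessed in use_case_assessments.items()}
```

A non-empty gap set for any use case is a direct, reportable breach of the universe-coverage obligation.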
02
Risk Classification · Universe Coverage

AI Risk Identification & Taxonomy

Custodian of the AI risk universe and classification
Risk Function Role
RISK CUSTODIAN

Identification is the first obligation of any risk function, and for AI it is uniquely challenging: the risk landscape is evolving faster than any static taxonomy can track. Model drift, adversarial prompt injection, agentic AI autonomy risks, synthetic data quality degradation, hallucination-driven decision errors — each of these has emerged as a material enterprise risk category within the last 18 months and each requires ongoing classification and universe maintenance.

The Risk Function’s role as custodian of the AI risk universe means owning the completeness problem — ensuring that the taxonomy covers not just known risk categories but emerging ones, and that every AI use case across every business line is assessed against the complete taxonomy. The McKinsey 2026 AI Trust Maturity Survey found that governance and agentic AI controls lag behind data and technology across all regions globally — a direct consequence of risk identification frameworks that have not kept pace with deployment velocity.

Key Obligations
Identify and classify all key AI risk types including model risk, data risk, operational risk, ethical risk, regulatory risk, and third-party dependency risk
Maintain a comprehensive AI risk register covering all active AI use cases and business lines
Ensure risk universe coverage is complete — tracking emerging risk categories as AI capabilities evolve
Review taxonomy against NIST AI RMF, OECD AI Principles, ISO 42001, and EU AI Act classification requirements at least annually
03
Quantification · Pre-Deployment · Annual

AI Risk Assessment & Measurement

Owner of quantification and prioritisation of AI risks
Risk Function Role
ASSESSMENT OWNER

Risk assessment without methodology is opinion. The Risk Function’s ownership of AI risk assessment methodology means defining how risk is measured — what constitutes a material risk, how residual risk is quantified after controls, and how different risk types are scored and compared. This is particularly challenging for AI because many AI risks are probabilistic, context-dependent, and difficult to quantify through traditional financial or operational risk metrics.

Pre-deployment assessment is a gate, not a formality. The EU AI Act’s high-risk system obligations — fully applicable from 2 August 2026 — require conformity assessments, technical documentation, and risk management systems to be in place before market entry. The Risk Function’s role in pre-deployment assessment provides the independent second-line review that validates the first line’s own assessment before the organisation commits to deployment. Annual assessments then ensure that residual risk remains within appetite as the system evolves in production.

Key Obligations
Develop and enforce AI risk assessment methodologies — covering inherent risk, control effectiveness, and residual risk evaluation
Conduct pre-deployment risk assessments for new AI systems, with mandatory gating for high-risk applications
Conduct annual risk assessments for all deployed AI systems to track changes in risk exposure
Define risk scoring models, materiality thresholds, and residual risk acceptance criteria aligned to Board-approved appetite
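
The inherent/control/residual relationship described above is often expressed multiplicatively. A minimal sketch, assuming scores normalised to [0, 1]; the formula and scale are one common convention, not a prescribed methodology.

```python
def residual_risk(inherent: float, control_effectiveness: float) -> float:
    """Residual risk after controls, on a 0-1 scale.

    inherent: likelihood x impact, normalised to [0, 1].
    control_effectiveness: fraction of inherent risk mitigated, in [0, 1].
    """
    if not (0 <= inherent <= 1 and 0 <= control_effectiveness <= 1):
        raise ValueError("scores must lie in [0, 1]")
    return inherent * (1 - control_effectiveness)

def within_appetite(residual: float, threshold: float) -> bool:
    """Gating decision: residual risk must not exceed the
    Board-approved tolerance threshold for this risk type."""
    return residual <= threshold
```

The point of encoding it is the gate: a pre-deployment assessment that computes residual risk above threshold fails mechanically, rather than by negotiation.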
04
Model Validation · Second-Line Oversight

Model Risk Management Oversight

Second-line oversight of model risk controls and decisions
Risk Function Role
MRM OVERSIGHT

Model Risk Management (MRM) is the discipline of identifying, quantifying, and controlling the risks arising from the use of quantitative models in decision-making. For AI/ML systems, MRM extends traditional model risk frameworks to address the additional complexity of machine learning: non-linear relationships, feature interaction effects, distributional shift, hallucination, and the opacity of large model architectures.

The Risk Function’s role here is explicitly second-line: it does not own model development or primary validation (which sits with the first line or an independent model validation function) but provides oversight of those activities — reviewing validation results, challenging residual risks, and ensuring that high-risk models receive appropriate approval gates before deployment and periodically thereafter. In heavily regulated sectors such as financial services and healthcare, this second-line oversight role is not optional — it is a regulatory expectation under frameworks such as SR 11-7 (US), the ECB’s expectations on MRM, and the EU AI Act’s human oversight requirements.

Key Obligations
Oversee model risk management practices across all AI/ML model types including predictive, generative, and agentic systems
Review model validation results and assess whether residual risks are within approved tolerance levels
Ensure high-risk models receive appropriate senior approval before deployment and at annual review
Challenge model performance monitoring and trigger re-validation where performance deterioration is identified
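
The re-validation trigger in the last obligation can be reduced to a rolling comparison against the validated baseline. A sketch, with the window size and tolerance as assumed parameters that a real framework would set per model tier.

```python
from statistics import mean

def needs_revalidation(baseline_score: float,
                       recent_scores: list[float],
                       tolerance: float = 0.05,
                       window: int = 30) -> bool:
    """Flag a model for re-validation when its rolling mean performance
    falls more than `tolerance` (absolute) below the validated baseline.
    Window and tolerance are illustrative defaults, not standards."""
    recent = recent_scores[-window:]
    if not recent:
        return False  # no production evidence yet; nothing to compare
    return mean(recent) < baseline_score - tolerance
```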
05
Surveillance · Dashboards · KRIs

Monitoring, Reporting & Risk Metrics

Continuous risk surveillance and reporting authority
Risk Function Role
SURVEILLANCE

Risk that is not monitored is risk that is not managed. The Risk Function’s monitoring obligation for AI systems goes beyond periodic assessment: it requires continuous surveillance infrastructure capable of detecting model drift, data quality degradation, emerging fairness issues, and changes in risk exposure before they materialise into incidents. Unlike traditional software, AI systems evolve after deployment — through continuous learning, real-world feedback signals, and interactions with dynamic data environments — making continuous monitoring a structural requirement, not a best practice.

Key Risk Indicators (KRIs) for AI systems must be designed to capture leading signals — indicators that a risk is trending toward breach before the breach occurs. Model performance degradation metrics, data drift indices, fairness monitoring outputs, and KRI trend analysis provide the surveillance layer that translates technical monitoring outputs into board-level risk language. The Risk Function’s reporting to Senior Management and the Board on these indicators is what enables informed governance decisions on AI risk tolerance.

Key Obligations
Define Key Risk Indicators (KRIs) for AI systems — covering performance, data quality, fairness, and operational risk dimensions
Produce AI risk dashboards for Senior Management and Board at defined reporting cadences
Track emerging AI risks — including regulatory developments, new threat categories, and changes in the AI landscape
Maintain continuous surveillance of deployed AI systems and escalate threshold breaches promptly
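
One widely used leading indicator for data drift is the Population Stability Index (PSI). The sketch below computes PSI over two binned distributions and maps it to a traffic-light status; the 0.1 / 0.25 cut-offs are a rule of thumb often quoted in practice, not a regulatory standard.

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions.
    Each list holds per-bin proportions summing to ~1; `eps` guards
    against empty bins producing log(0)."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

def drift_status(value: float) -> str:
    """Map a PSI value to a KRI status using common rule-of-thumb
    thresholds: <0.1 stable, 0.1-0.25 watch, >0.25 material drift."""
    if value < 0.1:
        return "GREEN"
    return "AMBER" if value <= 0.25 else "RED"
```

A RED status on a production feature distribution is exactly the kind of leading signal the KRI layer should escalate before the model's outputs visibly degrade.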
06
Independent Challenge · Adversarial Testing

Validation Oversight & Challenge Function

Critical challenger to AI model assumptions and outputs
Risk Function Role
CHALLENGER

The challenge function is one of the most distinctive and valuable obligations of the Risk Function in the AI context. Model development teams are, by nature, invested in the success of their models — their expertise and effort are committed to making the system work as designed. An independent challenge function exists to ask the uncomfortable questions: what happens if the training data is not representative? What are the edge cases the validation suite did not test? Where might the model perform well on average but catastrophically at the tail?

Adversarial testing, fairness testing, and stress testing are the technical instruments of the challenge function. In 2025, researchers demonstrated that just five carefully crafted documents can manipulate AI agent responses 90% of the time through RAG poisoning — a class of vulnerability that standard validation suites are not designed to detect. The Risk Function’s challenge obligation extends to ensuring that validation coverage is genuinely comprehensive, not merely compliant with a checklist.

Key Obligations
Provide independent challenge to model development teams on assumptions, design choices, and validation scope
Ensure independent validation is conducted by parties without a development conflict of interest
Verify that testing covers adversarial scenarios, fairness dimensions, distributional shift, and edge case performance
Challenge validation outcomes — not simply accept validation reports as proof of model soundness
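
One simple instrument of challenge is a metamorphic robustness probe: checking whether a model's decisions are stable under small input perturbations that should not change the outcome. A minimal sketch; the noise level, trial count, and prediction interface are all assumptions.

```python
import random

def perturbation_flip_rate(predict, inputs, noise=0.01, trials=20, seed=0):
    """Fraction of inputs whose predicted label flips under small random
    perturbation of each feature. `predict` maps a feature vector to a
    label. A high flip rate is a challenge finding to raise with the
    development team, not an automatic failure verdict."""
    rng = random.Random(seed)  # seeded so the probe is reproducible
    flips = 0
    for x in inputs:
        base = predict(x)
        for _ in range(trials):
            x_pert = [v + rng.uniform(-noise, noise) for v in x]
            if predict(x_pert) != base:
                flips += 1
                break
    return flips / len(inputs)
```

Probes like this target the gap named above: a validation suite can pass on clean held-out data while the model is brittle just beyond it.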
07
Control Assessment · Remediation

AI Control Effectiveness Evaluation

Evaluator of risk mitigation effectiveness
Risk Function Role
CONTROL EVALUATOR

A control that exists in policy but does not work in practice provides no risk mitigation — only the false comfort of documented compliance. The Risk Function’s control effectiveness evaluation obligation addresses this gap: assessing not only whether controls are designed appropriately but whether they are operating effectively in the production environment. Policy documents that describe bias testing do not mean bias testing is actually happening in production — as Samta.ai’s 2026 governance analysis noted directly.

For AI systems, control effectiveness is particularly dynamic: a control that was adequate at deployment may become inadequate as the model encounters new data distributions, as external tools are updated, or as business use cases evolve beyond their original scope. Control effectiveness evaluation must be continuous, not point-in-time — triggering remediation recommendations when controls show signs of degradation before failures reach users or regulators.

Key Obligations
Assess the operational effectiveness of AI risk controls — not merely their design adequacy
Validate that controls described in policy are actually implemented and functioning in production
Recommend specific, actionable control enhancements where gaps are identified
Track remediation actions to closure and verify that enhancement recommendations have been implemented effectively
08
Vendor Risk · Outsourcing · Dependencies

Third-Party AI Risk Management

Owner of AI outsourcing and dependency risk oversight
Risk Function Role
VENDOR RISK OWNER

The rapid adoption of AI-as-a-service — foundation model APIs, cloud-hosted AI platforms, vendor-supplied ML models — has transferred significant operational risk to third parties that organisations cannot directly control. Aon’s 2026 AI risk analysis identified this explicitly: organisations are increasingly reliant on AI-as-a-service providers, and independent research has highlighted misconfigurations and architectural weaknesses across fast-scaling AI platforms that can expose organisations to outages, data leakage, or loss of service integrity.

The Risk Function’s third-party AI risk obligation extends beyond traditional vendor due diligence to the specific risks of AI outsourcing: model bias inherited from vendor training data, lack of explainability in vendor models, vendor model updates that change system behaviour without notice, and contractual provisions that inadequately address liability for AI-related harm. Concentration risk — where an organisation’s AI capabilities are dependent on a small number of providers — requires explicit assessment and mitigation planning.

Key Obligations
Assess risks related to vendor-provided AI models, foundation model APIs, and outsourced AI services
Validate vendor risk assessments and ensure contractual provisions adequately address AI-specific liabilities
Monitor vendor performance and changes in vendor risk exposure — including model updates that alter system behaviour
Assess AI concentration risk and ensure mitigation plans address critical vendor dependency scenarios
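
Concentration risk can be given a first-order quantitative measure with the Herfindahl-Hirschman Index over vendor exposure shares. Using HHI here is an illustrative choice for the sketch, not a mandated method, and "exposure" would need a defined basis (spend, request volume, criticality-weighted use cases).

```python
def vendor_hhi(exposure_by_vendor: dict[str, float]) -> float:
    """Herfindahl-Hirschman Index over vendor exposure shares, in (0, 1].
    Values near 1.0 mean the AI estate depends on a single provider;
    1/n is the floor for n equally weighted vendors."""
    total = sum(exposure_by_vendor.values())
    if total <= 0:
        raise ValueError("total exposure must be positive")
    return sum((v / total) ** 2 for v in exposure_by_vendor.values())
```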
09
Data Quality · Lineage · Bias

Data Risk & Model Input Risk

Oversees data-related risk drivers of AI outcomes
Risk Function Role
DATA RISK OVERSIGHT

AI models are only as reliable as the data they are trained and operated on. Data quality risk — inaccuracy, incompleteness, staleness, bias — is the most consistently documented root cause of AI failure in production. BARC identified data quality management as the number one data and analytics trend for 2026. The Pertama Partners AI failure analysis found that 38% of abandoned AI projects cited insurmountable data quality issues. And the principle is brutal: AI does not solve data problems. It exposes them — at scale, in production.

Data lineage and traceability are the governance instruments that make data risk manageable. Only 30% of organisations have full visibility into their AI data pipelines — and lack of lineage is one of the top reasons AI audits fail. When a financial institution cannot explain why its AI model denied a loan, the root cause is often an undocumented, outdated third-party dataset silently introduced weeks earlier. The Risk Function’s data risk oversight responsibility is to prevent this class of governance failure.

Key Obligations
Evaluate data quality risks across all AI model inputs — accuracy, completeness, timeliness, and representativeness
Assess data lineage and traceability — ensuring model inputs can be traced to authoritative, documented sources
Identify and quantify risks from biased, corrupted, or unrepresentative training datasets
Assess data governance practices and identify gaps that create downstream AI risk exposure
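
Basic completeness and staleness checks on model inputs can be automated as a first layer of this oversight. A sketch, with the null-rate and age thresholds as assumed parameters; a real programme would also cover representativeness and lineage, which do not reduce to one function.

```python
from datetime import date, timedelta

def dataset_quality_findings(records, required_fields, last_refreshed,
                             max_null_rate=0.02, max_age_days=30, today=None):
    """Return quality findings for a model input dataset: per-field
    completeness against a null-rate threshold, plus a staleness check
    on the last refresh date. Thresholds are illustrative defaults."""
    today = today or date.today()
    findings = []
    n = len(records)
    for f in required_fields:
        nulls = sum(1 for r in records if r.get(f) in (None, ""))
        if n and nulls / n > max_null_rate:
            findings.append(
                f"completeness: field '{f}' null rate {nulls/n:.1%} "
                f"exceeds {max_null_rate:.0%}")
    if (today - last_refreshed) > timedelta(days=max_age_days):
        findings.append(
            f"staleness: last refresh {last_refreshed} exceeds "
            f"{max_age_days}-day limit")
    return findings
```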
10
Fairness · Bias Detection · Ethical Risk

Ethical AI & Fairness Risk Oversight

Custodian of ethical risk governance in AI
Risk Function Role
ETHICS CUSTODIAN

Ethical AI risk is not a soft commitment — it is a source of financial, regulatory, and reputational exposure that the Risk Function must govern with the same rigour applied to credit or operational risk. Unfair AI outcomes — a credit model that systematically disadvantages protected groups, a hiring tool that produces discriminatory rankings, a medical diagnostic tool with differential accuracy across demographic subgroups — create material liability under the EU AI Act, GDPR, and consumer protection law.

The Risk Function’s role as custodian of ethical risk governance means defining what fairness means in quantitative terms for each AI system — there is no single universal fairness metric, and the choice of metric has significant implications for model design and outcomes. Demographic parity, equal opportunity, predictive parity — each addresses a different aspect of fairness and each may be in tension with the others. The Risk Function must define acceptable thresholds, monitor detection results continuously, and escalate material bias indicators through the governance structure before they become incidents.

Key Obligations
Define fairness metrics and acceptable bias thresholds for each AI system based on its use case and regulatory context
Monitor bias detection results against defined thresholds and track trends over time
Define ethical risk indicators — signals that an AI system may be producing unfair or harmful outcomes
Escalate material bias or unfair outcomes through the governance structure with appropriate urgency and documentation
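
Two of the fairness metrics named above can be computed directly from predictions, labels, and group membership. A minimal sketch assuming binary predictions and labels, and that each group contains at least one member (and, for equal opportunity, at least one actual positive).

```python
def demographic_parity_diff(preds, group):
    """Largest gap in positive-outcome rates between any two groups
    (0 means parity under the demographic-parity definition)."""
    rates = []
    for g in set(group):
        sel = [p for p, grp in zip(preds, group) if grp == g]
        rates.append(sum(sel) / len(sel))
    return max(rates) - min(rates)

def equal_opportunity_diff(preds, labels, group):
    """Largest gap in true-positive rates between groups, computed
    over actual positives only (the equal-opportunity definition)."""
    tprs = []
    for g in set(group):
        pos = [p for p, y, grp in zip(preds, labels, group)
               if grp == g and y == 1]
        tprs.append(sum(pos) / len(pos))
    return max(tprs) - min(tprs)
```

Note how the two metrics can disagree on the same data: that divergence is the tension the Risk Function must adjudicate when setting thresholds per use case.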
11
Incident Response · Classification · Impact

Incident Risk Assessment & Escalation

Evaluates risk impact of AI failures and incidents
Risk Function Role
INCIDENT ASSESSOR

When AI systems fail — and they will — the Risk Function’s incident assessment obligation ensures that failures are not treated as purely operational events but are assessed for their full risk impact: financial, reputational, regulatory, and systemic. The Clinejection attack of February 2026, where a prompt injection in a GitHub issue title led to credential theft and a compromised npm package installed on 4,000 developer machines, illustrates how AI-related incidents can propagate far beyond their initial scope at machine speed.

Classification matters because escalation thresholds are classification-dependent. An AI incident classified as a minor operational event may receive an operational response. The same incident, properly classified as a model risk event with regulatory implications, receives a different escalation path — with Risk Function involvement, legal notification obligations, and Board awareness. The Risk Function’s role is to ensure that classification serves the actual risk impact, not the convenience of avoiding escalation.

Key Obligations
Assess the full risk impact of AI-related incidents — financial, reputational, regulatory, and systemic dimensions
Ensure proper incident classification that reflects actual risk severity, not operational convenience
Ensure escalation follows appropriate paths for each classification level, with Risk Function involvement at defined thresholds
Ensure AI incidents are included in risk reporting and inform the risk universe and control assessments going forward
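
The classification-dependent escalation logic can be expressed as an explicit mapping, which also makes the conservative default for unrecognised classifications auditable. The tiers and recipients below are assumptions for the sketch, not a prescribed scheme.

```python
# Illustrative severity-to-escalation mapping (classification names and
# recipients are assumptions, not a prescribed scheme).
ESCALATION_PATHS = {
    "operational": ["first-line owner"],
    "model_risk":  ["first-line owner", "Risk Function"],
    "regulatory":  ["first-line owner", "Risk Function",
                    "Legal/Compliance", "Board"],
}

def escalation_path(classification: str) -> list[str]:
    """Resolve who must be notified for an incident classification.
    Unknown classifications escalate on the widest path: ambiguity
    should never resolve toward less oversight."""
    return ESCALATION_PATHS.get(classification, ESCALATION_PATHS["regulatory"])
```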
12
Regulatory · Compliance · Legislative Change

Regulatory Compliance Risk

Oversees compliance risk exposure related to AI
Risk Function Role
COMPLIANCE RISK

The regulatory landscape for AI is the most rapidly evolving compliance environment organisations have faced since the introduction of GDPR. The EU AI Act is the dominant reference — with high-risk system obligations fully applicable from 2 August 2026, fines reaching €35 million or 7% of global annual turnover, and extraterritorial scope that applies to any organisation whose AI affects EU users regardless of where the organisation is headquartered. NIST released an updated AI RMF profile in April 2026. ISO 42001 certification is increasingly a customer and regulator expectation.

The Risk Function’s regulatory compliance risk obligation includes both ensuring that current AI systems comply with applicable requirements and monitoring the regulatory horizon to anticipate obligations before they become enforcement events. Most organisations that will face enforcement actions in 2026 and 2027 are not failing to comply with requirements they understood — they are failing to comply with requirements they did not identify in time to address.

Key Obligations
Ensure AI systems comply with applicable regulatory requirements — EU AI Act, GDPR, sector-specific AI regulations, data protection and consumer protection law
Monitor regulatory change affecting AI — tracking legislative developments, regulator guidance, and enforcement actions across relevant jurisdictions
Assess and report compliance risk exposure — identifying gaps between current practices and regulatory requirements
Engage with legal and compliance functions to ensure AI-specific regulatory obligations are addressed within enterprise compliance programmes
13
Governance Bodies · Risk Opinions · Challenge

Participation in AI Governance Structures

Central player in AI governance decision-making
Risk Function Role
GOVERNANCE MEMBER

A risk function that assesses risk in isolation but has no voice in governance decisions is a reporting function, not a risk function. The Risk Function’s participation in AI Governance Committees and Oversight Structures is what converts risk intelligence into decision influence — ensuring that risk perspectives are embedded in model approval decisions, use case prioritisation, and AI programme governance before commitments are made, not after.

The NIST AI RMF’s emphasis on the Govern function as the backbone of all other risk management activities reflects this logic: governance structures are where risk appetite is translated into operational constraints, where exceptional risks are adjudicated, and where independent challenge is most consequential. A Risk Function that is absent from these structures allows AI governance decisions to be made without independent risk perspective — which is precisely the failure mode that regulators identify when AI programmes produce unexpected harm.

Key Obligations
Serve as a core member of AI Governance and Oversight Committees — not an observer, but an active participant with voice and vote
Provide risk opinions on model approvals — documented, independent assessments of risk acceptance before deployment decisions are made
Provide independent challenge to business units seeking to deploy AI systems, ensuring risk perspectives are embedded in decisions
Escalate disagreements between Risk Function assessments and business unit positions through governance structures, not informally
14
Risk Culture · Training · Discipline

Risk Culture, Training & Awareness

Drives risk awareness and discipline across AI adoption
Risk Function Role
CULTURE DRIVER

Frameworks, policies, and controls are necessary but not sufficient. Risk culture — the degree to which risk awareness is embedded in everyday decisions, not just formal governance processes — determines whether those controls are applied consistently in practice. For AI, culture is particularly important because deployment velocity outpaces governance: by the time a formal risk assessment is conducted for a new AI use case, employees may already be using unapproved tools to accomplish the same objective. The average enterprise now runs 66 GenAI applications, most of which were adopted without formal risk assessment.

The Risk Function’s role in culture and training is not to deliver a one-time compliance programme but to create the conditions under which risk-aware decisions about AI are the natural default — where business teams understand what makes an AI system high-risk, where they know how to initiate a risk assessment, and where they feel empowered to escalate concerns without fear of slowing down delivery. This is the difference between a risk function that catches failures after they occur and one that prevents them.

Key Obligations
Promote a risk-aware culture — through communication, leadership engagement, and visible Risk Function presence in AI adoption conversations
Support training programmes on AI risks, controls, and the organisation’s risk appetite for AI
Encourage consistent application of risk principles across AI adoption — ensuring business units do not bypass governance for speed
Measure culture effectiveness — not just training completion rates, but indicators of risk-aware behaviour in AI adoption decisions

“AI is changing the risk landscape faster than traditional frameworks can adapt. The organisations that invest early in transparent governance, scenario analysis, and clear accountability will be best positioned to adopt AI safely — and to turn risk into a source of long-term advantage.”

Brent Rieth, Head of Global Cyber Solutions, Aon — AI Risk 2026: Practical Agenda
Quick Reference

All 14 Functions — Accountability Summary

The Risk Function’s accountability across the AI lifecycle — who owns what, and what independent means in each domain.

# | Function | Risk Function Role | Key Output | Regulatory Anchor
01 | AI Risk Governance Framework | Framework Owner | AI RMF policy, taxonomy, appetite thresholds | NIST AI RMF · ISO 42001
02 | Risk Identification & Taxonomy | Risk Universe Custodian | AI risk register, complete taxonomy, emerging risks | NIST MAP · OECD AI Principles
03 | Risk Assessment & Measurement | Assessment Owner | Pre-deployment assessments, scoring models, residual risk | EU AI Act Art. 9 · NIST MEASURE
04 | Model Risk Management Oversight | 2LoD MRM Oversight | Validation review, approval gates, residual risk assessment | SR 11-7 · ECB MRM · EU AI Act Art. 14
05 | Monitoring, Reporting & Risk Metrics | Surveillance Authority | KRIs, risk dashboards, Board reports, emerging risk tracking | EU AI Act Art. 9 · NIST MANAGE
06 | Validation Oversight & Challenge | Independent Challenger | Challenge opinions, adversarial test coverage assurance | EU AI Act Art. 10 · NIST AI RMF
07 | AI Control Effectiveness Evaluation | Control Evaluator | Control effectiveness ratings, remediation recommendations | ISO 42001 · NIST GOVERN
08 | Third-Party AI Risk Management | Vendor Risk Owner | Vendor assessments, contractual risk reviews, concentration risk | EU AI Act Art. 25 · DORA
09 | Data Risk & Model Input Risk | Data Risk Oversight | Data quality risk assessments, lineage evaluations, bias reviews | EU AI Act Art. 10 · GDPR
10 | Ethical AI & Fairness Risk Oversight | Ethics Custodian | Fairness metrics, bias monitoring reports, ethical risk indicators | EU AI Act · OECD Principles
11 | Incident Risk Assessment & Escalation | Incident Assessor | Incident risk classifications, escalation decisions, risk reporting | EU AI Act Art. 73 · DORA
12 | Regulatory Compliance Risk | Compliance Risk Oversight | Compliance gap assessments, regulatory change monitoring | EU AI Act · GDPR · Sector rules
13 | Participation in AI Governance Structures | Governance Member | Risk opinions, model approval votes, independent challenge | NIST GOVERN · ISO 42001
14 | Risk Culture, Training & Awareness | Culture Driver | Training programmes, culture indicators, risk awareness campaigns | NIST GOVERN · OECD Principles
The Accountability Standard

Independence Is the Function. Challenge Is the Obligation.

The 14 accountability domains in this document share a common thread: they are all obligations that only the Risk Function can fulfil because of its structural independence from the first line. Management owns the strategy and executes the controls. The Board sets the appetite and holds ultimate accountability. The Risk Function’s value exists precisely in the space between those two — providing the independent identification, measurement, challenge, and monitoring that neither party can provide for itself.

Independence is not sufficient without competence. A Risk Function that does not understand adversarial prompt injection cannot judge whether validation coverage adequately tested for it. A Risk Function that does not understand the EU AI Act’s August 2026 high-risk compliance obligations cannot assess whether the organisation’s regulatory exposure is material. The 14 accountability domains require not only structural independence but substantive AI literacy — an increasingly scarce and strategically important capability in the Risk Function itself.

The organisations that will navigate the 2026 AI regulatory environment with confidence are those where the Risk Function is already operating as described here: owning the framework, challenging model assumptions, monitoring continuously, governing ethically, and reporting objectively to the Board. Those who are still treating AI risk as an IT issue — governed by technology controls rather than enterprise risk discipline — are building the exposure that will define next year’s incident reports.

The Risk Function’s role in AI governance is not to slow down AI adoption. It is to make AI adoption sustainable — by ensuring that the systems organisations deploy are understood, controlled, monitored, and aligned with the risk appetite that the Board has approved. That is not a constraint on AI ambition. It is the architecture of AI confidence.

Sources: McKinsey — State of AI Trust in 2026: Shifting to the Agentic Era (500 organisations surveyed Dec 2025–Jan 2026) · Aon — AI Risk 2026: Practical Agenda · NIST — AI Risk Management Framework (AI RMF 1.0 + April 2026 Critical Infrastructure Profile) · EU AI Act — Regulation (EU) 2024/1689, full high-risk enforcement August 2026 · Samta.ai — AI Risk Management & Model Governance: The 2026 Guide · CyberSaint — Top Security, Risk, and AI Governance Frameworks for 2026 · Decode the Future — AI for Risk Management: 7 Frameworks and 2026 Compliance · Elevateconsult — NIST AI RMF: A Builder’s Roadmap · ISO 42001:2023 — AI Management Systems Standard · OECD AI Principles · ECB — Supervisory Expectations on Model Risk Management · Federal Reserve / OCC — SR 11-7: Supervisory Guidance on Model Risk Management