Risk Function Accountability for AI/ML & Algorithmic Systems
The 14 accountability domains that define what the Risk Function owns, challenges, and governs across the AI lifecycle — from framework design to ethical oversight, from incident escalation to board reporting.
Why the Risk Function Must Own the AI Risk Architecture
AI is no longer a pilot programme. By 2025, 88% of organisations were using AI in at least one business function — and the average enterprise was running 66 different generative AI applications, approximately 10% of which were classified as high-risk. This expansion is outpacing governance infrastructure everywhere: McKinsey’s 2026 AI Trust Maturity Survey found that while average Responsible AI maturity scores have increased, only about one-third of organisations score at level 3 or higher in governance and strategy. Technical capabilities are advancing. Oversight structures are not keeping pace.
The consequence is not abstract. The EU AI Act — the world’s first comprehensive AI law — makes full high-risk compliance obligations applicable on 2 August 2026, with fines reaching €35 million or 7% of global annual turnover. NIST released its AI RMF Profile on Trustworthy AI in Critical Infrastructure in April 2026. ISO 42001 certification is becoming a competitive and regulatory requirement. And Aon’s Head of Global Cyber Solutions has observed directly: organisations that invest early in transparent governance and clear accountability will be best positioned to adopt AI safely — and to turn risk into a long-term advantage.
The 14 accountability domains mapped in this document define what the Risk Function owns in an AI/ML and algorithmic systems context. They are the second-line obligations that sit between Management’s operational accountability and the Board’s strategic oversight — providing the independent challenge, measurement, and surveillance that neither the first line nor the Board can perform for themselves.
Risk Function Obligations Across the AI Lifecycle
AI Risk Governance Framework
The Risk Function is the architect and custodian of the organisation’s AI/ML Risk Management Framework — the document layer that defines how AI risk is identified, assessed, controlled, and reported across every business line and use case. This is a design responsibility, not a monitoring one: the Risk Function does not merely audit the framework’s application but actively designs its structure, maintains it in response to regulatory change, and enforces its standards across the enterprise.
Defining the AI risk taxonomy is one of the most consequential outputs of this function. A taxonomy that fails to account for model drift, prompt injection, third-party dependency, or agentic AI autonomy will produce a risk universe with structural blind spots — leaving the organisation exposed precisely where AI risk is most acute. In 2026, the taxonomy must also integrate the NIST AI RMF’s four core functions (Govern, Map, Measure, Manage) and align with EU AI Act risk classifications.
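As an illustration, the sketch below shows one way a taxonomy entry could be encoded so that every risk category carries its NIST AI RMF function mapping and a default EU AI Act tier. The schema, category names, and default tiers are illustrative assumptions, not a standard; a real taxonomy would refine the tier per use case.

```python
from dataclasses import dataclass
from enum import Enum


class NistFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"


class EuAiActTier(Enum):
    UNACCEPTABLE = "Unacceptable risk"
    HIGH = "High risk"
    LIMITED = "Limited risk"
    MINIMAL = "Minimal risk"


@dataclass(frozen=True)
class RiskCategory:
    """One entry in the AI risk taxonomy, cross-referenced to both frameworks."""
    name: str
    description: str
    nist_functions: tuple[NistFunction, ...]  # RMF functions under which the risk is addressed
    default_tier: EuAiActTier                 # starting classification, refined per use case


TAXONOMY = [
    RiskCategory(
        name="model_drift",
        description="Post-deployment degradation as live data diverges from training data",
        nist_functions=(NistFunction.MEASURE, NistFunction.MANAGE),
        default_tier=EuAiActTier.HIGH,
    ),
    RiskCategory(
        name="prompt_injection",
        description="Adversarial inputs that override intended system behaviour",
        nist_functions=(NistFunction.MAP, NistFunction.MEASURE),
        default_tier=EuAiActTier.HIGH,
    ),
    RiskCategory(
        name="third_party_dependency",
        description="Inherited risk from vendor models, APIs, and training data",
        nist_functions=(NistFunction.GOVERN, NistFunction.MANAGE),
        default_tier=EuAiActTier.LIMITED,
    ),
]
```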
Integrating AI into the broader Enterprise Risk Management (ERM) and operational risk frameworks ensures that AI risk does not exist as a siloed parallel programme but is treated as a category within the organisation’s unified risk architecture — with consistent escalation paths, consistent materiality thresholds, and consistent reporting cadences.
AI Risk Identification & Taxonomy
Identification is the first obligation of any risk function, and for AI it is uniquely challenging: the risk landscape is evolving faster than any static taxonomy can track. Model drift, adversarial prompt injection, agentic AI autonomy risks, synthetic data quality degradation, hallucination-driven decision errors — each of these has emerged as a material enterprise risk category within the last 18 months and each requires ongoing classification and universe maintenance.
The Risk Function’s role as custodian of the AI risk universe means owning the completeness problem — ensuring that the taxonomy covers not just known risk categories but emerging ones, and that every AI use case across every business line is assessed against the complete taxonomy. The McKinsey 2026 AI Trust Maturity Survey found that governance and agentic AI controls lag behind data and technology across all regions — a direct consequence of risk identification frameworks that have not kept pace with deployment velocity.
AI Risk Assessment & Measurement
Risk assessment without methodology is opinion. The Risk Function’s ownership of AI risk assessment methodology means defining how risk is measured — what constitutes a material risk, how residual risk is quantified after controls, and how different risk types are scored and compared. This is particularly challenging for AI because many AI risks are probabilistic, context-dependent, and difficult to quantify through traditional financial or operational risk metrics.
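One common scoring convention, illustrated below and not prescribed by any of the frameworks cited here, expresses residual risk as inherent risk scaled down by assessed control effectiveness. The 1-5 rating scales and the appetite threshold are illustrative assumptions.

```python
def residual_risk_score(likelihood: int, impact: int, control_effectiveness: float) -> float:
    """Residual risk under a simple multiplicative convention.

    likelihood, impact: 1-5 ordinal ratings (5 = worst).
    control_effectiveness: 0.0 (no mitigation) to 1.0 (fully mitigating).
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be rated 1-5")
    if not 0.0 <= control_effectiveness <= 1.0:
        raise ValueError("control effectiveness must be in [0, 1]")
    inherent = likelihood * impact                   # 1-25 inherent risk score
    return inherent * (1.0 - control_effectiveness)  # risk remaining after controls


# Illustrative appetite check: a residual score above 12 breaches appetite
APPETITE_THRESHOLD = 12.0
score = residual_risk_score(likelihood=4, impact=5, control_effectiveness=0.5)
print(score, "WITHIN APPETITE" if score <= APPETITE_THRESHOLD else "ESCALATE")  # 10.0 WITHIN APPETITE
```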
Pre-deployment assessment is a gate, not a formality. The EU AI Act’s high-risk system obligations — fully applicable from 2 August 2026 — require conformity assessments, technical documentation, and risk management systems to be in place before market entry. The Risk Function’s role in pre-deployment assessment provides the independent second-line review that validates the first line’s own assessment before the organisation commits to deployment. Annual assessments then ensure that residual risk remains within appetite as the system evolves in production.
Model Risk Management Oversight
Model Risk Management (MRM) is the discipline of identifying, quantifying, and controlling the risks arising from the use of quantitative models in decision-making. For AI/ML systems, MRM extends traditional model risk frameworks to address the additional complexity of machine learning: non-linear relationships, feature interaction effects, distributional shift, hallucination, and the opacity of large model architectures.
The Risk Function’s role here is explicitly second-line: it does not own model development or primary validation (which sits with the first line or an independent model validation function) but provides oversight of those activities — reviewing validation results, challenging residual risks, and ensuring that high-risk models receive appropriate approval gates before deployment and periodically thereafter. In heavily regulated sectors such as financial services and healthcare, this second-line oversight role is not optional — it is a regulatory expectation under frameworks such as SR 11-7 (US), the ECB’s expectations on MRM, and the EU AI Act’s human oversight requirements.
Monitoring, Reporting & Risk Metrics
Risk that is not monitored is risk that is not managed. The Risk Function’s monitoring obligation for AI systems goes beyond periodic assessment: it requires continuous surveillance infrastructure capable of detecting model drift, data quality degradation, emerging fairness issues, and changes in risk exposure before they materialise into incidents. Unlike traditional software, AI systems evolve after deployment — through continuous learning, real-world feedback signals, and interactions with dynamic data environments — making continuous monitoring a structural requirement, not a best practice.
Key Risk Indicators (KRIs) for AI systems must be designed to capture leading signals — indicators that a risk is trending toward breach before the breach occurs. Model performance degradation metrics, data drift indices, fairness monitoring outputs, and KRI trend analysis provide the surveillance layer that translates technical monitoring outputs into board-level risk language. The Risk Function’s reporting to Senior Management and the Board on these indicators is what enables informed governance decisions on AI risk tolerance.
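One widely used leading indicator for data drift is the Population Stability Index (PSI). The sketch below computes it against a training-time reference distribution; the 0.10/0.25 alert bands in the docstring are a common rule of thumb rather than a regulatory standard, and a production implementation would handle binning edge cases more defensively.

```python
import numpy as np


def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training-time) and live feature distribution.

    Rule-of-thumb bands often applied: < 0.10 stable, 0.10-0.25 monitor,
    > 0.25 investigate and consider revalidation.
    """
    # Bin edges come from the reference distribution; np.unique guards against
    # duplicate percentile edges when the feature has many tied values.
    edges = np.unique(np.percentile(reference, np.linspace(0, 100, bins + 1)))
    edges[0], edges[-1] = -np.inf, np.inf
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    # A small floor avoids division by zero and log(0) in sparse bins
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    live_pct = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))
```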
Validation Oversight & Challenge Function
The challenge function is one of the most distinctive and valuable obligations of the Risk Function in the AI context. Model development teams are, by nature, invested in the success of their models — their expertise and effort are committed to making the system work as designed. An independent challenge function exists to ask the uncomfortable questions: what happens if the training data is not representative? What are the edge cases the validation suite did not test? Where might the model perform well on average but catastrophically at the tail?
Adversarial testing, fairness testing, and stress testing are the technical instruments of the challenge function. In 2025, researchers demonstrated that just five carefully crafted documents can manipulate AI agent responses 90% of the time through RAG poisoning — a class of vulnerability that standard validation suites are not designed to detect. The Risk Function’s challenge obligation extends to ensuring that validation coverage is genuinely comprehensive, not merely compliant with a checklist.
AI Control Effectiveness Evaluation
A control that exists in policy but does not work in practice provides no risk mitigation — only the false comfort of documented compliance. The Risk Function’s control effectiveness evaluation obligation addresses this gap: assessing not only whether controls are designed appropriately but whether they are operating effectively in the production environment. Policy documents that describe bias testing do not mean bias testing is actually happening in production — as Samta.ai’s 2026 governance analysis noted directly.
For AI systems, control effectiveness is particularly dynamic: a control that was adequate at deployment may become inadequate as the model encounters new data distributions, as external tools are updated, or as business use cases evolve beyond their original scope. Control effectiveness evaluation must be continuous, not point-in-time — triggering remediation recommendations when controls show signs of degradation before failures reach users or regulators.
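Below is a minimal sketch of how continuous effectiveness evaluation might be operationalised over a stream of recorded control test outcomes. The rating labels and pass-rate thresholds are illustrative assumptions, not drawn from any cited standard.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ControlTestResult:
    control_id: str
    tested_on: date
    passed: bool


def effectiveness_rating(results: list[ControlTestResult], window: int = 12) -> str:
    """Rate a control from its most recent test outcomes (illustrative thresholds)."""
    recent = sorted(results, key=lambda r: r.tested_on)[-window:]
    if not recent:
        # A control never tested in production cannot be rated effective,
        # however well it is documented in policy.
        return "NOT_TESTED"
    pass_rate = sum(r.passed for r in recent) / len(recent)
    if pass_rate >= 0.95:
        return "EFFECTIVE"
    if pass_rate >= 0.80:
        return "PARTIALLY_EFFECTIVE"  # trigger a remediation recommendation
    return "INEFFECTIVE"              # trigger escalation through risk governance
```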
Third-Party AI Risk Management
The rapid adoption of AI-as-a-service — foundation model APIs, cloud-hosted AI platforms, vendor-supplied ML models — has transferred significant operational risk to third parties that organisations cannot directly control. Aon’s 2026 AI risk analysis identified this explicitly: organisations are increasingly reliant on AI-as-a-service providers, and independent research has highlighted misconfigurations and architectural weaknesses across fast-scaling AI platforms that can expose organisations to outages, data leakage, or loss of service integrity.
The Risk Function’s third-party AI risk obligation extends beyond traditional vendor due diligence to the specific risks of AI outsourcing: model bias inherited from vendor training data, lack of explainability in vendor models, vendor model updates that change system behaviour without notice, and contractual provisions that inadequately address liability for AI-related harm. Concentration risk — where an organisation’s AI capabilities are dependent on a small number of providers — requires explicit assessment and mitigation planning.
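Concentration can be made measurable with instruments borrowed from antitrust practice, such as the Herfindahl-Hirschman Index over provider workload or spend shares. The sketch below is illustrative; the roughly 2,500 "highly concentrated" threshold follows merger-analysis convention and is an assumption in this context.

```python
def hhi(provider_shares: dict[str, float]) -> float:
    """Herfindahl-Hirschman Index on a 0-10,000 scale.

    provider_shares maps each AI provider to its share of workload or spend;
    shares are normalised internally, so they need not sum to exactly 1.
    """
    total = sum(provider_shares.values())
    return sum((100.0 * share / total) ** 2 for share in provider_shares.values())


# Example: one foundation-model provider carries 70% of AI workloads
shares = {"provider_a": 0.70, "provider_b": 0.20, "provider_c": 0.10}
print(hhi(shares))  # 5400.0, far above the ~2500 often treated as highly concentrated
```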
Data Risk & Model Input Risk
AI models are only as reliable as the data they are trained and operated on. Data quality risk — inaccuracy, incompleteness, staleness, bias — is the most consistently documented root cause of AI failure in production. BARC identified data quality management as the number one data and analytics trend for 2026. The Pertama Partners AI failure analysis found that 38% of abandoned AI projects cited insurmountable data quality issues. And the principle is brutal: AI does not solve data problems. It exposes them — at scale, in production.
Data lineage and traceability are the governance instruments that make data risk manageable. Only 30% of organisations have full visibility into their AI data pipelines — and lack of lineage is one of the top reasons AI audits fail. When a financial institution cannot explain why its AI model denied a loan, the root cause is often an undocumented, outdated third-party dataset silently introduced weeks earlier. The Risk Function’s data risk oversight responsibility is to prevent this class of governance failure.
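Below is a minimal sketch of the lineage record that makes the loan-denial question answerable. The field names are illustrative, and a real implementation would live in a metadata platform rather than application code.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class DatasetVersion:
    dataset_id: str
    version: str
    source: str              # originating system or third-party vendor
    ingested_at: datetime
    quality_checked: bool    # passed the documented quality gate before use


@dataclass
class ModelLineage:
    model_id: str
    inputs: list[DatasetVersion] = field(default_factory=list)

    def unvetted_inputs(self) -> list[DatasetVersion]:
        """The audit question: which inputs reached this model without a quality gate?"""
        return [d for d in self.inputs if not d.quality_checked]

    def inputs_since(self, cutoff: datetime) -> list[DatasetVersion]:
        """Which inputs changed recently? The first question after an unexplained decision."""
        return [d for d in self.inputs if d.ingested_at >= cutoff]
```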
Ethical AI & Fairness Risk Oversight
Ethical AI risk is not a soft commitment — it is a source of financial, regulatory, and reputational exposure that the Risk Function must govern with the same rigour applied to credit or operational risk. Unfair AI outcomes — a credit model that systematically disadvantages protected groups, a hiring tool that produces discriminatory rankings, a medical diagnostic tool with differential accuracy across demographic subgroups — create material liability under the EU AI Act, GDPR, and consumer protection law.
The Risk Function’s role as custodian of ethical risk governance means defining what fairness means in quantitative terms for each AI system — there is no single universal fairness metric, and the choice of metric has significant implications for model design and outcomes. Demographic parity, equal opportunity, predictive parity — each addresses a different aspect of fairness and each may be in tension with the others. The Risk Function must define acceptable thresholds, monitor detection results continuously, and escalate material bias indicators through the governance structure before they become incidents.
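To make the metric tension concrete, the sketch below computes two of the named metrics. It assumes binary predictions, a group label per record, and that every group contains at least one qualified (positive-label) individual.

```python
import numpy as np


def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in positive-prediction rates across groups (0.0 = parity)."""
    rates = [float(y_pred[group == g].mean()) for g in np.unique(group)]
    return max(rates) - min(rates)


def equal_opportunity_gap(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in true-positive rates among qualified (y_true == 1) individuals.

    Assumes every group has at least one positive-label record; a production
    implementation would handle empty groups explicitly.
    """
    tprs = []
    for g in np.unique(group):
        qualified = (group == g) & (y_true == 1)
        tprs.append(float(y_pred[qualified].mean()))
    return max(tprs) - min(tprs)


# A model can score well on one metric and poorly on the other, which is why
# the Risk Function must choose and justify thresholds per system.
```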
Incident Risk Assessment & Escalation
When AI systems fail — and they will — the Risk Function’s incident assessment obligation ensures that failures are not treated as purely operational events but are assessed for their full risk impact: financial, reputational, regulatory, and systemic. The Clinejection attack of February 2026, where a prompt injection in a GitHub issue title led to credential theft and a compromised npm package installed on 4,000 developer machines, illustrates how AI-related incidents can propagate far beyond their initial scope at machine speed.
Classification matters because escalation thresholds are classification-dependent. An AI incident classified as a minor operational event may receive an operational response. The same incident, properly classified as a model risk event with regulatory implications, receives a different escalation path — with Risk Function involvement, legal notification obligations, and Board awareness. The Risk Function’s role is to ensure that classification serves the actual risk impact, not the convenience of avoiding escalation.
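Here is a sketch of classification-dependent escalation. The flags, recipient names, and the EUR threshold are illustrative assumptions; in practice, EU AI Act Art. 73 serious-incident criteria would be assessed with legal counsel.

```python
from dataclasses import dataclass


@dataclass
class AiIncident:
    affects_customers: bool
    model_behaviour_implicated: bool  # model output, not just infrastructure, caused the event
    possible_serious_incident: bool   # candidate for EU AI Act Art. 73 reporting
    estimated_loss_eur: float


def escalation_path(incident: AiIncident) -> list[str]:
    """Map an incident classification to its escalation recipients."""
    path = ["first_line_incident_response"]             # every incident starts operationally
    if incident.model_behaviour_implicated:
        path.append("risk_function_model_risk_review")  # reclassified as a model risk event
    if incident.possible_serious_incident:
        path += ["legal_and_compliance", "board_risk_committee"]
    elif incident.affects_customers or incident.estimated_loss_eur > 100_000:
        path.append("senior_management")                # illustrative materiality threshold
    return path
```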
Regulatory Compliance Risk
The regulatory landscape for AI is the most rapidly evolving compliance environment organisations have faced since the introduction of GDPR. The EU AI Act is the dominant reference — with high-risk system obligations fully applicable from 2 August 2026, fines reaching €35 million or 7% of global annual turnover, and extraterritorial scope that applies to any organisation whose AI affects EU users regardless of where the organisation is headquartered. NIST released an updated AI RMF profile in April 2026. ISO 42001 certification is increasingly a customer and regulator expectation.
The Risk Function’s regulatory compliance risk obligation includes both ensuring that current AI systems comply with applicable requirements and monitoring the regulatory horizon to anticipate obligations before they become enforcement events. Most organisations that will face enforcement actions in 2026 and 2027 are not failing to comply with requirements they understood — they are failing to comply with requirements they did not identify in time to address.
Participation in AI Governance Structures
A risk function that assesses risk in isolation but has no voice in governance decisions is a reporting function, not a risk function. The Risk Function’s participation in AI Governance Committees and Oversight Structures is what converts risk intelligence into decision influence — ensuring that risk perspectives are embedded in model approval decisions, use case prioritisation, and AI programme governance before commitments are made, not after.
The NIST AI RMF’s emphasis on the Govern function as the backbone of all other risk management activities reflects this logic: governance structures are where risk appetite is translated into operational constraints, where exceptional risks are adjudicated, and where independent challenge is most consequential. A Risk Function that is absent from these structures allows AI governance decisions to be made without independent risk perspective — which is precisely the failure mode that regulators identify when AI programmes produce unexpected harm.
Risk Culture, Training & Awareness
Frameworks, policies, and controls are necessary but not sufficient. Risk culture — the degree to which risk awareness is embedded in everyday decisions, not just formal governance processes — determines whether those controls are applied consistently in practice. For AI, culture is particularly important because deployment velocity outpaces governance: by the time a formal risk assessment is conducted for a new AI use case, employees may already be using unapproved tools to accomplish the same objective. The average enterprise now runs 66 GenAI applications, most of which were adopted without formal risk assessment.
The Risk Function’s role in culture and training is not to deliver a one-time compliance programme but to create the conditions under which risk-aware decisions about AI are the natural default — where business teams understand what makes an AI system high-risk, where they know how to initiate a risk assessment, and where they feel empowered to escalate concerns without fear of slowing down delivery. This is the difference between a risk function that catches failures after they occur and one that prevents them.
“AI is changing the risk landscape faster than traditional frameworks can adapt. The organisations that invest early in transparent governance, scenario analysis, and clear accountability will be best positioned to adopt AI safely — and to turn risk into a source of long-term advantage.”
Brent Rieth, Head of Global Cyber Solutions, Aon — AI Risk 2026: Practical Agenda
All 14 Functions — Accountability Summary
The Risk Function’s accountability across the AI lifecycle — who owns what, and what independence means in each domain.
| # | Function | Risk Function Role | Key Output | Regulatory Anchor |
|---|---|---|---|---|
| 01 | AI Risk Governance Framework | Framework Owner | AI RMF policy, taxonomy, appetite thresholds | NIST AI RMF · ISO 42001 |
| 02 | Risk Identification & Taxonomy | Risk Universe Custodian | AI risk register, complete taxonomy, emerging risks | NIST MAP · OECD AI Principles |
| 03 | Risk Assessment & Measurement | Assessment Owner | Pre-deployment assessments, scoring models, residual risk | EU AI Act Art. 9 · NIST MEASURE |
| 04 | Model Risk Management Oversight | 2LoD MRM Oversight | Validation review, approval gates, residual risk assessment | SR 11-7 · ECB MRM · EU AI Act Art. 14 |
| 05 | Monitoring, Reporting & Risk Metrics | Surveillance Authority | KRIs, risk dashboards, Board reports, emerging risk tracking | EU AI Act Art. 9 · NIST MANAGE |
| 06 | Validation Oversight & Challenge | Independent Challenger | Challenge opinions, adversarial test coverage assurance | EU AI Act Art. 10 · NIST AI RMF |
| 07 | AI Control Effectiveness Evaluation | Control Evaluator | Control effectiveness ratings, remediation recommendations | ISO 42001 · NIST GOVERN |
| 08 | Third-Party AI Risk Management | Vendor Risk Owner | Vendor assessments, contractual risk reviews, concentration risk | EU AI Act Art. 25 · DORA |
| 09 | Data Risk & Model Input Risk | Data Risk Oversight | Data quality risk assessments, lineage evaluations, bias reviews | EU AI Act Art. 10 · GDPR |
| 10 | Ethical AI & Fairness Risk Oversight | Ethics Custodian | Fairness metrics, bias monitoring reports, ethical risk indicators | EU AI Act · OECD Principles |
| 11 | Incident Risk Assessment & Escalation | Incident Assessor | Incident risk classifications, escalation decisions, risk reporting | EU AI Act Art. 73 · DORA |
| 12 | Regulatory Compliance Risk | Compliance Risk Oversight | Compliance gap assessments, regulatory change monitoring | EU AI Act · GDPR · Sector rules |
| 13 | Participation in AI Governance Structures | Governance Member | Risk opinions, model approval votes, independent challenge | NIST GOVERN · ISO 42001 |
| 14 | Risk Culture, Training & Awareness | Culture Driver | Training programmes, culture indicators, risk awareness campaigns | NIST GOVERN · OECD Principles |
Independence Is the Function. Challenge Is the Obligation.
The 14 accountability domains in this document share a common thread: they are all obligations that only the Risk Function can fulfil because of its structural independence from the first line. Management owns the strategy and executes the controls. The Board sets the appetite and holds ultimate accountability. The Risk Function’s value exists precisely in the space between those two — providing the independent identification, measurement, challenge, and monitoring that neither party can provide for itself.
Independence is not sufficient without competence. A Risk Function that does not understand adversarial prompt injection cannot challenge whether validation coverage adequately tests for it. A Risk Function that does not understand the EU AI Act’s August 2026 high-risk compliance obligations cannot assess whether the organisation’s regulatory exposure is material. The 14 accountability domains require not only structural independence but substantive AI literacy — an increasingly scarce and strategically important capability in the Risk Function itself.
The organisations that will navigate the 2026 AI regulatory environment with confidence are those where the Risk Function is already operating as described here: owning the framework, challenging model assumptions, monitoring continuously, governing ethically, and reporting objectively to the Board. Those who are still treating AI risk as an IT issue — governed by technology controls rather than enterprise risk discipline — are building the exposure that will define next year’s incident reports.
The Risk Function’s role in AI governance is not to slow down AI adoption. It is to make AI adoption sustainable — by ensuring that the systems organisations deploy are understood, controlled, monitored, and aligned with the risk appetite that the Board has approved. That is not a constraint on AI ambition. It is the architecture of AI confidence.