AML in the age of AI: when the machine flags it, who answers for it?
The following is a guest editorial courtesy of Joanna Frendo, Chief Risk and Compliance Officer at online broker Deriv Group.
There is a question compliance professionals are sitting with, even if few have said it plainly in public. When an AI system makes a compliance decision and something goes wrong, who is responsible? It is a governance question that does not yet have a clean, universal answer in global regulatory frameworks.
AI is increasingly embedded in the compliance operations of regulated financial firms worldwide. Functions like transaction monitoring, customer onboarding, risk scoring, and suspicious activity detection sit at the core of Anti-Money Laundering and Countering the Financing of Terrorism (AML/CFT) obligations. These are now partially or substantially driven by automated systems in many global institutions, which is largely a positive development. AI processes volumes no human team can match, identifies patterns no analyst would catch, and reduces the false positive burden that has long made compliance operations inefficient.
What it has not done is resolve the foundational question of accountability.
The gap nobody is talking about
AML frameworks are built on a foundational premise: a human being made a judgment.
Whether it is a compliance officer assessing risk or a Money Laundering Reporting Officer (MLRO) deciding to file a report, every step has a person attached to it who can be questioned and held to account.
AI changes that chain without replacing it. While systems flag, score, or escalate, the human review often becomes cursory due to high volumes, thinning the actual judgment behind the “human signature.” Regulators globally are beginning to address this, signaling that AI models must be reliable, transparent, and explainable. This raises a practical challenge for compliance officers. If a regulator asked you to explain why your system cleared a specific transaction months ago, could you do it in terms that withstand scrutiny?
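One practical answer is to persist, for every automated disposition, a record detailed enough to be replayed months later. A minimal sketch in Python, assuming a hypothetical schema (every field name here is illustrative, not a reference to any vendor product or regulatory format):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One auditable entry per automated AML disposition (illustrative schema)."""
    transaction_id: str
    model_name: str          # which model produced the score
    model_version: str       # pin the exact version live at decision time
    input_snapshot: dict     # the features the model actually saw
    risk_score: float
    threshold: float         # the clearance threshold in force that day
    outcome: str             # e.g. "cleared", "escalated"
    reviewer: str | None     # named human, if the decision was reviewed
    rationale: str           # plain-language explanation kept for regulators
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Months later, "why did the system clear this?" becomes a lookup, not archaeology.
record = DecisionRecord(
    transaction_id="TX-000000",
    model_name="tm-screening",
    model_version="2024.11.3",
    input_snapshot={"amount": 9500.0, "corridor": "EU-EU", "velocity_7d": 2},
    risk_score=0.12,
    threshold=0.35,
    outcome="cleared",
    reviewer=None,
    rationale="Score below clearance threshold; no sanctions or PEP hits.",
)
```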
How the machine empowers the expert
To bridge this gap, the focus must shift from “the machine” making decisions to “the machine” empowering expert judgment. At Deriv, we have implemented a structured AI-driven workflow that ensures transparency and human-led oversight.
It works like this: when our internal Client Risk Assessment (CRA) system detects a client whose activity elevates their risk, our AI-driven tool is automatically triggered. The tool generates a comprehensive client profile for immediate review, compiling all relevant personal details and capturing trading activity across all our platforms. Nothing slips through the cracks, and the team sees a real-time picture of the client’s behavior.
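In simplified Python, that trigger-and-compile step looks roughly like this (function names, platform identifiers, and the threshold are illustrative placeholders, not our production values):

```python
from datetime import datetime, timezone

RISK_TRIGGER_THRESHOLD = 0.7  # illustrative; real thresholds are policy-driven

def fetch_personal_details(client_id: str) -> dict:
    """Stand-in for a KYC datastore lookup."""
    return {"client_id": client_id}

def fetch_trading_activity(client_id: str, platform: str) -> list[dict]:
    """Stand-in for a per-platform trade history query."""
    return []

def build_client_profile(client_id: str, risk_score: float) -> dict | None:
    """Triggered when the risk assessment elevates a client's score.

    Compiles personal details and trading activity from every platform
    into a single real-time profile, so nothing is reviewed in isolation.
    """
    if risk_score < RISK_TRIGGER_THRESHOLD:
        return None  # no elevation, no trigger
    platforms = ["platform_a", "platform_b", "platform_c"]  # placeholders
    return {
        "compiled_at": datetime.now(timezone.utc).isoformat(),
        "risk_score": risk_score,
        "personal_details": fetch_personal_details(client_id),
        "trading_activity": {
            p: fetch_trading_activity(client_id, p) for p in platforms
        },
    }
```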
Sitting atop this profile is our specialized EDD (Enhanced Due Diligence) Analyser Agent. This AI-driven agent conducts an initial assessment of the client’s profile, highlighting particular areas of concern and providing actionable insights. This means our Compliance Operations team can move quickly, armed with focused, prioritized information instead of sifting through mountains of data.
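The agent’s initial pass can be pictured as a function from a compiled profile to a prioritized list of findings. The simple rules below are stand-ins for the model itself; what matters is the shape of the output the team receives:

```python
def analyse_profile(profile: dict) -> list[dict]:
    """Initial assessment pass: surface prioritized areas of concern.

    A real agent sits on a model; these rules are illustrative stand-ins
    that show the output shape handed to Compliance Operations.
    """
    findings = []
    activity = profile.get("trading_activity", {})
    active_platforms = [p for p, trades in activity.items() if trades]
    if len(active_platforms) > 1:
        findings.append({
            "concern": "multi-platform activity",
            "detail": f"Active on {len(active_platforms)} platforms simultaneously.",
            "priority": "high",
        })
    if profile.get("risk_score", 0.0) >= 0.9:
        findings.append({
            "concern": "risk score near ceiling",
            "detail": "Score within escalation band; recommend immediate review.",
            "priority": "high",
        })
    # Sort so the team sees the highest-priority items first.
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(findings, key=lambda f: order[f["priority"]])
```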
Further elevating our response, the same AI tool can draft Suspicious Activity Reports (SARs) or Suspicious Transaction Reports (STRs) in cases flagged for further regulatory attention. By automating this part of the process, we’re able to meet demanding deadlines (such as strict 24-hour submission requirements seen in various jurisdictions), ensuring that nothing stands in the way of swift, effective compliance.
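A simplified sketch of that drafting step, with the deadline clock made explicit (the 24-hour window and every field name here are illustrative):

```python
from datetime import datetime, timedelta, timezone

SUBMISSION_WINDOW = timedelta(hours=24)  # varies by jurisdiction and regulator

def draft_sar(profile: dict, findings: list[dict], flagged_at: datetime) -> dict:
    """Assemble a draft report and its submission deadline.

    The draft is machine-assembled; filing remains a human (MLRO) decision.
    """
    deadline = flagged_at + SUBMISSION_WINDOW
    narrative = "; ".join(f["detail"] for f in findings) or "No findings attached."
    return {
        "report_type": "SAR",
        "client_id": profile["personal_details"]["client_id"],
        "narrative_draft": narrative,
        "flagged_at": flagged_at.isoformat(),
        "submission_deadline": deadline.isoformat(),
        "status": "awaiting_mlro_review",  # never auto-filed
    }

# Example: a case flagged right now must be filed within the window.
draft = draft_sar(
    {"personal_details": {"client_id": "C-0001"}},
    [{"detail": "Active on 3 platforms simultaneously.", "priority": "high"}],
    flagged_at=datetime.now(timezone.utc),
)
```

The status field is the point: the machine assembles the draft against the clock, but filing it remains a human decision.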
This new approach shows how “the machine” doesn’t replace expert judgment, but instead empowers our team, helping them act faster, smarter, and more consistently in the fight against financial crime.
Where accountability actually sits
At present, the answer to who is responsible is dispersed across technology vendors, compliance functions, and senior management. That ambiguity will not survive significant enforcement action.
True accountability requires a governance layer that keeps pace with deployment. Every AI-assisted decision must sit within a defined category. Some can be actioned autonomously within pre-approved parameters, and others will require mandatory human review. Each category must have a named internal owner within the compliance function—not the technology function—who can explain, in plain terms, why a specific decision was made.
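In practice, that governance layer can start as something as plain as a registry mapping each decision category to its autonomy level and a named owner. A minimal sketch (the categories and role titles are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionCategory:
    name: str
    autonomous: bool  # may be actioned within pre-approved parameters
    owner: str        # named role in the compliance function, never the vendor

CATEGORIES = {
    "low_risk_clearance": DecisionCategory(
        name="low_risk_clearance", autonomous=True,
        owner="Head of Transaction Monitoring",
    ),
    "edd_escalation": DecisionCategory(
        name="edd_escalation", autonomous=False,  # mandatory human review
        owner="MLRO",
    ),
}

def route_decision(category_key: str) -> DecisionCategory:
    """Every AI-assisted decision must resolve to a category with a named owner."""
    category = CATEGORIES.get(category_key)
    if category is None:
        raise ValueError(f"Unrecognized decision category: {category_key!r}")
    return category
```

The design choice worth copying is the failure mode: a decision that cannot resolve to a category with a named owner fails loudly rather than proceeding silently.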
The human element is not decorative
Treating human oversight as a mere formality creates the appearance of accountability without the substance. Genuine compliance culture, capable of holding up under international regulatory pressure, is built on people who understand the why behind their actions.
The machine can flag, but it cannot be held to account. That responsibility remains with the people who build the governance framework around it, and that is exactly where it belongs.
