Industry Analysis

The Rise of Agentic AI and the 'All-Green' Fraud Problem

Agentic AI · Fraud · Financial Services · NIST · OWASP · MITRE ATLAS · Synthetic Identity · Risk Management

Executive Summary

Financial crime is undergoing a structural shift. The rise of agentic AI — AI systems capable of planning, adapting, and executing multi-step actions — is transforming fraud from isolated events into coordinated, long-horizon sequences.

Traditional controls such as KYC, MFA, and transaction monitoring were designed to evaluate moments. Agentic fraud exploits the spaces between those moments.

Publicly documented cases already show this shift:

  • Federal Reserve: Synthetic identity fraud is one of the fastest-growing financial crimes
  • FBI IC3: Over $55B in cumulative global BEC losses since 2013
  • Hong Kong Police Force: A $25M deepfake-enabled fraud via AI-generated video participants (2024)
  • ONS Crime Survey for England and Wales: Fraud now accounts for approximately 40% of crime in the UK

These cases reflect the same pattern: fraud that is slower, cleaner, and harder to isolate.

To counter this, institutions must shift from point-in-time validation to:

  • Continuous behavioral verification
  • Cross-channel correlation
  • Agent governance aligned with NIST, OWASP, and MITRE

The future of financial security is not detecting bad transactions — it is detecting bad sequences of otherwise normal behavior.

The Shift Most Banks Are Underestimating

Financial institutions are no longer defending against isolated attacks. They are operating in an environment where AI systems can:

  • Chain actions across systems
  • Adjust behavior based on feedback
  • Operate over extended time horizons
  • Mimic human timing and interaction patterns

This is the transition to agentic AI.

Frameworks such as the NIST AI Risk Management Framework and the OWASP Top 10 for LLM Applications highlight risks tied to autonomy, excessive tool access, weak governance, and cross-system blind spots.

Fraud is no longer a single event. It is a sequence.

The Real Threat: Multi-Step, Coordinated Deception

Traditional fraud detection focuses on anomalies at a point in time. Agent-enabled fraud operates differently.

Adaptive — Attack patterns evolve across attempts. Failed actions become learning signals.

Cross-Channel — Fraud now spans onboarding, mobile apps, contact centers, email/SMS, and workflow automation.

System-Aware — Attackers exploit API integrations, orchestration gaps, and inconsistent controls between channels.

This aligns with the OWASP Top 10 for LLM Applications, which warns about autonomous systems performing multi-step actions beyond their intended scope — a risk category focused on excessive agency.

The “All-Green” Fraud Pattern

The most dangerous pattern is not noisy fraud — it is clean fraud, where every control returns “green.”

The “All-Green” Fraud Lifecycle

| Phase | Controls | Status | What Happens |
| --- | --- | --- | --- |
| 1. Identity Assembly | KYC, ID Verification | ✔ All Pass | Synthetic + real data combined into a credible identity |
| 2. Behavioral Seeding | MFA, Session Validation | ✔ All Pass | Normal logins, small transactions, routine interactions |
| 3. Trust Accrual | Transaction Monitoring, Risk Scoring | ✔ All Pass | Stable behavior builds statistical trust over time |
| 4. Bust-Out | Loss Detection, Recovery | ❌ Too Late | Rapid monetization across accounts and institutions |

Key Insight: Every control returns green until the final phase. The fraud only appears when the sequence is connected.
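The lifecycle above can be sketched in code. This is a minimal toy illustration, not a production detector: the event types, the per-event threshold, and the burst heuristic are all hypothetical. The point is structural — every event passes its point-in-time check, and risk only appears when the sequence is evaluated as a whole.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A single account event that passed its point-in-time control."""
    day: int          # days since account opening
    kind: str         # e.g. "payment", "transfer_out" (illustrative labels)
    amount: float     # monetary value of the event

def point_in_time_check(event: Event) -> bool:
    """Stand-in for KYC/MFA/transaction monitoring: each event alone looks fine."""
    return event.amount < 10_000  # toy per-event threshold

def bust_out_risk(history: list[Event]) -> bool:
    """Flag the *sequence*: a long, quiet trust-building phase followed by a
    burst of outbound value, even though every single event passed its check."""
    if len(history) < 5:
        return False
    seeding, recent = history[:-3], history[-3:]
    avg_seed = sum(e.amount for e in seeding) / max(len(seeding), 1)
    burst = sum(e.amount for e in recent)
    all_green = all(point_in_time_check(e) for e in history)
    # All events green individually, but the tail dwarfs the baseline.
    return all_green and burst > 20 * max(avg_seed, 1.0)

# Phases 2-3: weekly small payments; Phase 4: rapid monetization under the threshold.
history = [Event(d, "payment", 40.0) for d in range(1, 90, 7)] + [
    Event(91, "transfer_out", 9_500.0),
    Event(91, "transfer_out", 9_200.0),
    Event(92, "transfer_out", 9_800.0),
]
print(all(point_in_time_check(e) for e in history))  # True: every control green
print(bust_out_risk(history))                        # True: the sequence is not
```

Each control honestly reports green; only the connected history reveals the bust-out shape.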

Real-World Evidence: This Is Already Happening

Synthetic Identity Fraud — The Federal Reserve identifies synthetic identity fraud as one of the fastest-growing financial crimes in the United States.

Business Email Compromise (BEC) — The FBI IC3 reports over $55B in cumulative global BEC losses since 2013, making it one of the most financially damaging cybercrime categories.

Deepfake Financial Fraud — The Hong Kong Police Force confirmed a $25M deepfake-enabled fraud in early 2024, where a finance worker was deceived by AI-generated video participants impersonating company executives.

Industrialized Fraud — The ONS Crime Survey for England and Wales reports fraud accounts for approximately 40% of crime in the UK. Cifas Fraudscape reports provide additional detail on fraud filing trends across UK financial institutions.

Why Existing Controls Break

Most controls are designed for point-in-time validation:

  • KYC verifies identity at onboarding
  • MFA validates login
  • Transaction monitoring evaluates individual events

Each works in isolation. None answer:

  • Is this behavior consistent over time?
  • Does this identity behave the same across channels?
  • Does this sequence make sense?

This creates the “all-green” condition: no alerts, no failed checks, but real exposure.

The NIST AI RMF emphasizes that risk management must account for AI systems that interact autonomously over extended periods — a principle that extends to the threat side as well. When adversaries use agents capable of long-horizon planning, static point-in-time controls are unlikely to be effective.

What Actually Changes with Agentic AI

| Dimension | Traditional Fraud | ML-Based Detection | Agent-Enabled Threat |
| --- | --- | --- | --- |
| Execution | Single event | Pattern-based | Multi-step orchestration |
| Adaptation | None | Limited | Continuous, feedback-driven |
| Channels | Isolated | Partially connected | Fully coordinated |
| Detection Gap | Moderate | Narrowing | Expanding |
| Core Weakness | Static rules | Model drift | Control fragmentation |

Agentic AI doesn’t just attack systems — it exploits the gaps between them.

The Missing Layer: Continuous Behavioral Verification

To address multi-step deception, institutions must move toward continuous identity assurance, evaluating:

  • Behavior consistency
  • Session continuity
  • Transaction cadence
  • Cross-channel activity
  • Navigation patterns
  • Timing signatures
  • Interaction flows

This aligns with the NIST AI RMF, MITRE ATLAS, and the OWASP Top 10 for LLM Applications.

This is not foolproof — but it raises attacker cost, reduces scalability, and improves detection when combined with other signals.
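One way to make the signals above concrete is a per-identity consistency score. The sketch below is illustrative only: the feature names, baseline values, and tolerances are hypothetical, and a real system would learn them per identity rather than hard-code them. It shows the shape of the idea — score each session against the identity's own behavioral baseline, so agent-like timing and navigation stand out even when credentials are valid.

```python
import math

# Hypothetical per-session features: typing cadence (ms), session length (s),
# pages per session, typical hour-of-day. Values are invented for illustration.
BASELINE = {"keystroke_ms": 180.0, "session_s": 240.0, "pages": 6.0, "hour": 20.0}
TOLERANCE = {"keystroke_ms": 40.0, "session_s": 120.0, "pages": 3.0, "hour": 4.0}

def consistency_score(session: dict[str, float]) -> float:
    """0..1 score of how well a session matches this identity's own baseline.
    Each feature contributes a squared deviation scaled by its tolerance."""
    penalty = 0.0
    for key, base in BASELINE.items():
        z = (session[key] - base) / TOLERANCE[key]
        penalty += z * z
    return math.exp(-penalty / (2 * len(BASELINE)))

human = {"keystroke_ms": 175.0, "session_s": 260.0, "pages": 7.0, "hour": 21.0}
scripted = {"keystroke_ms": 15.0, "session_s": 30.0, "pages": 40.0, "hour": 4.0}
print(consistency_score(human) > 0.9)    # True: matches the identity's baseline
print(consistency_score(scripted) < 0.1) # True: agent-like cadence stands out
```

A score like this is a single input, not a verdict — its value comes from being correlated across channels and over time, exactly as the control priorities below describe.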

Control Priorities That Actually Matter

Identity Continuity — Track identity across sessions, devices, and channels.

Cross-Channel Correlation — Unify signals across onboarding, digital banking, call centers, and payments.

Agent Governance — Per OWASP: restrict tool access, enforce least privilege, validate outputs.

Adversarial Testing — Using MITRE ATLAS: simulate multi-step attacks and test orchestration gaps.

Real-Time Payment Controls — Highest-risk area due to instant settlement and limited recovery windows.
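The agent-governance priority can be sketched as an explicit allow-list gate, in the spirit of OWASP's "excessive agency" guidance. The tool names, call budgets, and value caps below are hypothetical, not drawn from any real deployment; the design point is that an internal agent gets only enumerated capabilities, each with a step budget and a per-action limit.

```python
# Illustrative least-privilege gate for an internal automation agent.
# Tool names and limits are invented for this example.
ALLOWED_TOOLS = {
    "read_balance": {"max_calls": 50},
    "create_payment": {"max_calls": 3, "max_amount": 500.0},
}

class ToolGate:
    """Deny-by-default authorization for agent tool calls."""

    def __init__(self) -> None:
        self.calls: dict[str, int] = {}

    def authorize(self, tool: str, amount: float = 0.0) -> bool:
        policy = ALLOWED_TOOLS.get(tool)
        if policy is None:
            return False                                  # tool not granted at all
        used = self.calls.get(tool, 0)
        if used >= policy["max_calls"]:
            return False                                  # step budget exhausted
        if amount > policy.get("max_amount", float("inf")):
            return False                                  # per-action value cap
        self.calls[tool] = used + 1
        return True

gate = ToolGate()
print(gate.authorize("read_balance"))            # True: on the allow-list
print(gate.authorize("create_payment", 9000.0))  # False: over the value cap
print(gate.authorize("delete_audit_log"))        # False: never granted
```

Deny-by-default matters here: an agent that can chain actions will eventually request a tool nobody anticipated, and the safe answer to an unlisted capability is always no.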

What’s Next (2026–2028 Reality Check)

  • Fraud remains hybrid (human + AI)
  • Attacks become longer and cleaner
  • Synthetic identities become fully automated
  • Deepfake-enabled social engineering becomes routine
  • Regulators emphasize governance, explainability, accountability

Security is shifting from detecting bad transactions to detecting bad sequences of otherwise normal behavior.

Final Takeaway

The “all-green” problem is not a failure of detection — it is a failure of system design.

Fraud today is:

  • Slower
  • Cleaner
  • Harder to isolate

And more dangerous because nothing looks wrong — until everything is.


This analysis references publicly available sources: Federal Reserve — Synthetic Identity Fraud Mitigation, FBI IC3 Reports, ONS Crime Survey, Cifas Fraudscape, NIST AI RMF, OWASP Top 10 for LLMs, MITRE ATLAS, and Hong Kong deepfake fraud case (CNN, 2024). No vendor endorsement is implied.
