EU AI Act — What Financial Institutions Need to Know in 2025
The EU AI Act is the world’s first comprehensive AI regulation. For financial institutions — whether headquartered in the EU or serving EU customers — understanding its requirements is no longer optional.
Risk-Based Classification
The Act categorizes AI systems into four risk tiers:
- Unacceptable Risk — Banned outright (e.g., social scoring, real-time biometric surveillance)
- High Risk — Subject to strict requirements (creditworthiness assessment of natural persons, life and health insurance risk assessment and pricing; note that Annex III explicitly carves out AI used to detect financial fraud)
- Limited Risk — Transparency obligations (chatbots, AI-generated content)
- Minimal Risk — No specific obligations
Many of the highest-impact AI systems in financial services — notably credit scoring and insurance underwriting models — fall into the High Risk category.
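The four-tier scheme lends itself to a simple lookup during an internal triage pass. The sketch below is illustrative only — the use-case names and the default-to-high policy are assumptions, not the Act's text, and real classification requires legal review:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of common financial-services use cases to tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "insurance_pricing": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Conservative default: treat unknown use cases as HIGH
    # until counsel confirms otherwise.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative policy choice for triage, not a requirement of the Act.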
Key Requirements for High-Risk AI
- Risk Management System — Continuous identification and mitigation of risks throughout the AI lifecycle
- Data Governance — Training data must be relevant, representative, and, to the best extent possible, free of errors and complete
- Technical Documentation — Detailed records of system design, development, and testing
- Human Oversight — Mechanisms for human intervention and override
- Accuracy, Robustness, Cybersecurity — Systems must perform consistently and resist manipulation
Timeline
- August 2024 — Act entered into force
- February 2025 — Prohibited AI practices apply
- August 2025 — General-purpose AI (GPAI) model obligations apply
- August 2026 — Full high-risk AI requirements apply (August 2027 for high-risk AI embedded in products covered by Annex I sectoral legislation)
Action Items for Financial Institutions
Financial services firms should start now:
- Inventory all AI systems and classify them under the Act’s risk tiers
- Conduct a gap analysis against the high-risk requirements
- Align with existing frameworks — NIST AI RMF and ISO/IEC 42001 map well to EU AI Act requirements
- Engage legal counsel on cross-border implications
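The first two action items — inventory and gap analysis — can be sketched as a small data model. The control names mirror the five high-risk requirement areas summarized above; the class and field names are hypothetical, a starting point rather than a compliance tool:

```python
from dataclasses import dataclass, field

# The five high-risk control areas from the Act, as summarized above.
REQUIRED_CONTROLS = {
    "risk_management",
    "data_governance",
    "technical_documentation",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
}

@dataclass
class AISystem:
    name: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"
    controls_in_place: set = field(default_factory=set)

def gap_analysis(inventory):
    """For each high-risk system, list the controls it still lacks."""
    return {
        system.name: sorted(REQUIRED_CONTROLS - system.controls_in_place)
        for system in inventory
        if system.risk_tier == "high"
    }
```

Running this over a full system inventory yields a per-system remediation list — a concrete artifact to hand to legal counsel and engineering teams alike.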
The EU AI Act has extraterritorial reach. If your AI system’s output affects EU residents, you’re likely in scope — regardless of where your servers are located.
This analysis is based on the publicly available EU AI Act text (Regulation (EU) 2024/1689).