
FS AI RMF 230 Control Objectives - Mapping to NIST AI RMF and Implementation Guide

Tags: FS AI RMF, NIST AI RMF, Control Objectives, Risk Assessment, Financial Services, AI Governance

Executive Summary

In February 2026, the U.S. Department of the Treasury released the Financial Services AI Risk Management Framework (FS AI RMF), translating the NIST AI RMF’s four high-level pillars into 230 specific control objectives tailored for banks, insurers, and financial market utilities. As the Mythos AI crisis demonstrated, the threat landscape has shifted from theoretical to operational — and this framework is the regulatory response.

This article breaks down the 230 control objectives, maps them to the NIST AI RMF, and provides a practical guide for integrating them into your organization’s risk assessment process.

1. What is the FS AI RMF?

The FS AI RMF is a sector-specific extension of the NIST AI Risk Management Framework (AI RMF 1.0). While the NIST AI RMF provides a voluntary, principles-based approach applicable to any industry, the FS AI RMF operationalizes those principles into prescriptive control objectives designed for the unique risk profile of financial services.

Key Characteristics

  • 230 control objectives organized across the four NIST AI RMF pillars
  • Risk-tiered implementation — controls are prioritized by the criticality of the AI system
  • Interoperable with existing regulatory frameworks (NY DFS 23 NYCRR 500, SEC, DORA, EU AI Act)
  • Continuous monitoring mandate — moves the industry from point-in-time assessments to ongoing assurance

2. The 230 Control Objectives: Structure and Distribution

The 230 control objectives are distributed across the four NIST AI RMF pillars and further organized into subcategories:

| NIST AI RMF Pillar | FS AI RMF Control Areas | Control Count | Focus |
|---|---|---|---|
| Govern (GV) | Board Oversight, AI Policy, Roles & Accountability, ERM Integration, Third-Party Governance | ~55 | Who is accountable and how AI risk fits into enterprise governance |
| Map (MP) | AI Inventory, Data Lineage, Impact Assessment, Stakeholder Analysis, Legal & Regulatory Mapping | ~60 | Understanding the AI landscape and its touchpoints |
| Measure (MS) | Bias Testing, Adversarial Testing, Performance Monitoring, Explainability, Drift Detection | ~65 | Quantifying and tracking AI risk continuously |
| Manage (MG) | Incident Response, Patch Management, Human-in-the-Loop, Model Decommissioning, Air-Gapping | ~50 | Acting on identified risks with defined protocols |

3. Mapping FS AI RMF to NIST AI RMF

Govern → GV Controls

The NIST AI RMF’s Govern function establishes organizational AI risk governance. The FS AI RMF extends this with financial-sector specificity:

| FS AI RMF Control | NIST AI RMF Mapping | What It Means | How to Implement |
|---|---|---|---|
| GV-BOARD-01 | GV-1.1 | Board must approve the AI risk appetite statement | Add AI risk as a standing agenda item in board risk committee meetings. Document the approved risk appetite in the ERM policy. |
| GV-ACCT-03 | GV-1.2 | Designate a senior AI risk officer with direct board reporting | Appoint a Chief AI Officer (CAIO) or assign AI oversight to the CRO. Ensure reporting lines bypass operational management. |
| GV-POL-05 | GV-1.3 | Maintain an enterprise AI policy covering development, procurement, and deployment | Draft a policy that covers model development standards, vendor AI due diligence, and acceptable use. Review annually. |
| GV-ERM-02 | GV-2.1 | Integrate AI risk into the enterprise risk taxonomy | Update your risk taxonomy to include AI-specific risk categories: model risk, data risk, algorithmic bias, adversarial risk. |
| GV-TPR-04 | GV-6.1 | Third-party AI providers must meet equivalent control standards | Include AI-specific clauses in vendor contracts. Require SOC 2 + AI-specific attestations. Conduct annual AI risk assessments of critical vendors. |

Map → MP Controls

The Map function focuses on understanding the AI landscape. In the context of threats like Mythos, this is where you identify every surface that an adversarial AI could target.

| FS AI RMF Control | NIST AI RMF Mapping | What It Means | How to Implement |
|---|---|---|---|
| MP-INV-01 | MP-1.1 | Maintain a comprehensive AI inventory | Catalog every AI/ML model in production and development. Include model name, owner, data inputs, outputs, risk tier, and deployment date. |
| MP-DATA-03 | MP-2.1 | Document data lineage for all AI training and inference data | Map data flows from source to model to output. Identify PII, sensitive financial data, and cross-border data transfers. |
| MP-IMP-02 | MP-3.1 | Conduct AI impact assessments before deployment | Perform a structured impact assessment covering fairness, safety, privacy, and security for every model before production release. |
| MP-REG-01 | MP-4.1 | Map each AI system to applicable regulations | Create a regulatory mapping matrix: which regulations apply to each model (NY DFS, GDPR, EU AI Act, SEC, DORA). |
| MP-CONN-04 | MP-5.1 | Identify all connection points between AI systems and critical infrastructure | Document every API, data feed, and integration point where AI interacts with core banking, payments, or customer-facing systems. |
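
As a concrete starting point for MP-INV-01, an inventory entry can be modeled as a small record. This is an illustrative sketch, not a schema the framework prescribes; the field names and the sample entry are assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInventoryEntry:
    """One row of the MP-INV-01 AI inventory (illustrative schema)."""
    model_name: str
    owner: str
    data_inputs: list       # data sources feeding the model
    outputs: list           # decisions or artifacts the model produces
    risk_tier: str          # "critical" | "high" | "medium" | "low"
    deployment_date: date

# Hypothetical entry for a customer-impacting model.
entry = AIInventoryEntry(
    model_name="credit-scoring-v3",
    owner="Retail Lending Analytics",
    data_inputs=["bureau data", "transaction history"],
    outputs=["credit decision", "score"],
    risk_tier="critical",
    deployment_date=date(2025, 6, 1),
)
```

Keeping the inventory in a structured form like this makes the later gap-assessment and risk-tiering steps queryable rather than manual.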

Measure → MS Controls

Measure is where the Mythos scenario becomes directly relevant. Continuous adversarial testing is no longer optional.

| FS AI RMF Control | NIST AI RMF Mapping | What It Means | How to Implement |
|---|---|---|---|
| MS-ADV-01 | MS-2.6 | Conduct continuous adversarial testing on all high-risk AI systems | Deploy automated red-teaming tools. Test for prompt injection, data poisoning, model extraction, and evasion attacks on a recurring schedule. |
| MS-BIAS-02 | MS-2.3 | Test for bias across all protected classes before and after deployment | Run fairness metrics (demographic parity, equalized odds) on training data and production outputs. Document results and remediation. |
| MS-DRIFT-01 | MS-2.5 | Monitor for model drift and data drift in production | Implement statistical drift detection (PSI, KL divergence) on model inputs and outputs. Set alert thresholds and remediation procedures. |
| MS-EXPL-03 | MS-2.7 | Ensure explainability for all customer-impacting AI decisions | Implement SHAP, LIME, or equivalent explainability methods. Maintain audit logs of explanations for regulatory examination. |
| ML-VULN-01 | MS-2.11 | Continuous vulnerability monitoring for AI systems | Maintain an AI vulnerability register. Move from point-in-time scans to continuous monitoring. Require documented HITL review for all AI outputs in banking operations. |
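
The PSI check referenced in MS-DRIFT-01 is straightforward to compute. The following is a minimal pure-Python sketch, not a reference implementation; production monitoring would typically use a vetted library:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index of `actual` against `expected`.
    Bins are derived from the expected (e.g., training-time) sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(sample, i):
        lower, upper = edges[i], edges[i + 1]
        if i == bins - 1:  # last bin is closed on the right
            n = sum(1 for x in sample if lower <= x <= upper)
        else:
            n = sum(1 for x in sample if lower <= x < upper)
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

baseline = [x / 100 for x in range(100)]        # training-time feature values
shifted = [x / 100 + 0.3 for x in range(100)]   # production values after a mean shift
```

A common rule of thumb (an industry convention, not a framework requirement) is that PSI below 0.1 is stable, 0.1 to 0.25 warrants investigation, and above 0.25 indicates significant drift that should trigger the control's remediation procedure.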

Manage → MG Controls

Manage is the action layer — what you do when risks are identified.

| FS AI RMF Control | NIST AI RMF Mapping | What It Means | How to Implement |
|---|---|---|---|
| MG-INC-01 | MG-2.1 | Maintain an AI-specific incident response plan | Extend your existing IR plan with AI-specific playbooks: model failure, adversarial attack, data poisoning, biased output. Conduct tabletop exercises quarterly. |
| MG-PATCH-02 | MG-2.3 | Prioritize patching based on AI risk tier | Classify AI systems by risk tier (critical, high, medium, low). Critical systems get 24-hour patch SLAs. High-risk systems get 72-hour SLAs. |
| MG-HITL-01 | MG-3.1 | Enforce Human-in-the-Loop for high-risk decisions | Define which AI decisions require human review (credit decisions, fraud alerts, AML flags). Implement approval workflows with audit trails. |
| MG-DECOM-01 | MG-4.1 | Establish model decommissioning procedures | Define end-of-life criteria for AI models. Ensure data retention, audit trail preservation, and safe shutdown procedures. |
| MG-AIRGAP-01 | MG-2.5 | Air-gap systems that cannot be adequately secured | If a legacy system cannot be patched against AI-driven exploits, isolate it from network-accessible AI systems. Document the compensating control. |
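
In practice, the risk-tiered patching of MG-PATCH-02 reduces to a simple lookup. The 24-hour and 72-hour SLAs come from the control description above; the medium and low tiers shown here are assumptions added for illustration:

```python
# Patch SLAs in hours, keyed by risk tier. Critical (24h) and high (72h)
# follow MG-PATCH-02; medium and low are illustrative assumptions.
PATCH_SLA_HOURS = {
    "critical": 24,
    "high": 72,
    "medium": 7 * 24,
    "low": 30 * 24,
}

def patch_deadline_hours(risk_tier: str) -> int:
    """Return the patch SLA for a system's risk tier."""
    try:
        return PATCH_SLA_HOURS[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")
```

Encoding the SLAs this way lets vulnerability-register tooling compute deadlines automatically from the AI inventory's risk tiers.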

4. How to Map the 230 Controls to NIST AI RMF

The FS AI RMF was intentionally designed as a layer on top of the NIST AI RMF. Every one of the 230 control objectives traces back to a specific NIST AI RMF subcategory. Here is a practical methodology for performing the mapping in your organization.

Understanding the NIST AI RMF Hierarchy

The NIST AI RMF is structured as:

  • Function (4): Govern, Map, Measure, Manage
  • Category (19): e.g., GV-1, GV-2, MP-1, MS-2, MG-3
  • Subcategory (72): e.g., GV-1.1, GV-1.2, MP-2.3, MS-2.6

The FS AI RMF’s 230 controls map to these 72 subcategories, with multiple FS AI RMF controls often mapping to a single NIST subcategory. This is because the NIST subcategories are broad and principles-based, while the FS AI RMF controls are prescriptive and banking-specific.

The Mapping Methodology

Step 1: Start with the NIST subcategory, not the FS AI RMF control.

For each of the 72 NIST subcategories, ask: “Which FS AI RMF controls operationalize this principle for our institution?”

Example:

| NIST Subcategory | NIST Description | FS AI RMF Controls | Banking Context |
|---|---|---|---|
| GV-1.1 | Legal and regulatory requirements are understood | GV-BOARD-01, GV-POL-05, GV-REG-02 | Board approves AI risk appetite; AI policy addresses regulatory obligations; regulatory mapping is maintained |
| MP-2.3 | Scientific integrity and TEVV are prioritized | MP-DATA-03, MP-DATA-05, MP-VAL-01 | Data lineage is documented; training data quality is validated; model validation is independent |
| MS-2.6 | AI systems are evaluated for adversarial resilience | MS-ADV-01, MS-ADV-02, ML-VULN-01 | Continuous red-teaming; penetration testing of AI endpoints; vulnerability monitoring |
| MG-3.1 | Risks are responded to based on impact | MG-HITL-01, MG-PATCH-02, MG-AIRGAP-01 | Human review for high-risk decisions; risk-tiered patching SLAs; air-gapping for unsecurable systems |

Step 2: Identify one-to-many and many-to-one relationships.

Not every mapping is 1:1. The relationships fall into three patterns:

  • One NIST → Many FS AI RMF: A broad NIST principle (e.g., GV-1.3 “Processes and procedures are in place”) maps to multiple specific banking controls (AI policy, change management, model approval workflows, documentation standards)
  • Many NIST → One FS AI RMF: A comprehensive banking control like ML-VULN-01 (continuous vulnerability monitoring) satisfies elements of MS-2.5 (drift), MS-2.6 (adversarial), and MS-2.11 (security)
  • Direct 1:1: Some controls map cleanly, e.g., MG-DECOM-01 (model decommissioning) maps directly to MG-4.1 (manage model retirement)
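
These three relationship patterns can be captured as a forward lookup (NIST subcategory to FS controls) plus its inversion. The sketch below uses only the example controls from this section; the full framework spans 72 subcategories and 230 controls:

```python
from collections import defaultdict

# Forward map: NIST subcategory -> FS AI RMF controls (one-to-many).
# Entries are the examples from this section, not the full framework.
nist_to_fs = {
    "MS-2.5":  ["MS-DRIFT-01", "ML-VULN-01"],
    "MS-2.6":  ["MS-ADV-01", "MS-ADV-02", "ML-VULN-01"],
    "MS-2.11": ["ML-VULN-01"],
    "MG-4.1":  ["MG-DECOM-01"],   # direct 1:1
}

# Inverted view: FS control -> NIST subcategories (exposes many-to-one).
fs_to_nist = defaultdict(list)
for nist, controls in nist_to_fs.items():
    for control in controls:
        fs_to_nist[control].append(nist)
```

Inverting the map makes the many-to-one cases visible: a single comprehensive control like ML-VULN-01 surfaces with all the NIST subcategories it helps satisfy.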

Step 3: Build a cross-reference matrix.

Create a spreadsheet with the following columns:

| Column | Purpose |
|---|---|
| NIST Function | Govern / Map / Measure / Manage |
| NIST Category | e.g., GV-1, MP-2 |
| NIST Subcategory | e.g., GV-1.1, MS-2.6 |
| FS AI RMF Control ID | e.g., GV-BOARD-01, ML-VULN-01 |
| FS AI RMF Control Description | Plain-language description |
| Current State | Implemented / Partial / Not Implemented / N/A |
| Gap Description | What is missing |
| Risk Tier | Critical / High / Medium / Low |
| Remediation Owner | Name and role |
| Target Date | Completion deadline |

This matrix becomes your single source of truth for compliance tracking, audit readiness, and board reporting.
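
Generating the matrix as CSV keeps it tool-agnostic (spreadsheet, GRC platform, or version control). A sketch using only the standard library, with the columns defined above and one hypothetical placeholder row:

```python
import csv
import io

# Column headers from the cross-reference matrix defined above.
COLUMNS = ["NIST Function", "NIST Category", "NIST Subcategory",
           "FS AI RMF Control ID", "FS AI RMF Control Description",
           "Current State", "Gap Description", "Risk Tier",
           "Remediation Owner", "Target Date"]

# One row per (NIST subcategory, FS control) pair; values are illustrative.
rows = [
    ["Measure", "MS-2", "MS-2.6", "MS-ADV-01",
     "Continuous adversarial testing of high-risk AI systems",
     "Partial", "No recurring red-team schedule", "Critical",
     "Head of AI Security", "2026-06-30"],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(COLUMNS)
writer.writerows(rows)
matrix_csv = buf.getvalue()  # write to a file in real use
```

Keeping the matrix in version control also gives auditors a change history of the gap assessment over time.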

Step 4: Validate with existing regulatory mappings.

Many of the 230 controls overlap with requirements you may already satisfy under other regulations. Cross-reference your matrix against:

| Existing Regulation | Overlap Areas |
|---|---|
| NY DFS 23 NYCRR 500 | Risk assessment (500.9), access controls (500.7), incident response (500.16), third-party risk (500.11) |
| EU AI Act | Risk classification (Art. 6), data governance (Art. 10), human oversight (Art. 14), transparency (Art. 13) |
| DORA | ICT risk management (Art. 5–16), incident reporting (Art. 17–23), third-party risk (Art. 28–44) |
| SEC Cybersecurity Rules | Incident disclosure, board oversight, risk management policies |
| ISO 42001 | AI management system, risk treatment, performance evaluation |

Controls already implemented for these regulations can be marked as “Implemented” or “Partially Implemented” in your matrix, significantly reducing the remediation effort.

Step 5: Prioritize gaps using the Mythos threat model.

Not all 230 controls carry equal urgency. Use the Mythos scenario as a threat model to prioritize:

  • Immediate (0–30 days): Controls related to vulnerability monitoring (ML-VULN-01), AI inventory (MP-INV-01), and incident response (MG-INC-01)
  • Near-term (30–90 days): Controls related to adversarial testing (MS-ADV-01), connection point mapping (MP-CONN-04), and patch management (MG-PATCH-02)
  • Medium-term (90–180 days): Controls related to board governance (GV-BOARD-01), third-party risk (GV-TPR-04), and explainability (MS-EXPL-03)
  • Ongoing: Controls related to drift detection (MS-DRIFT-01), continuous assurance, and model lifecycle management

5. Integrating into the Risk Assessment Process

Step 1: AI System Inventory and Classification

Before applying any controls, you need a complete picture. Conduct an enterprise-wide AI inventory and classify each system:

  • Critical: Directly impacts financial transactions, regulatory reporting, or customer data
  • High: Supports decision-making in lending, underwriting, or fraud detection
  • Medium: Internal analytics, operational efficiency tools
  • Low: Non-production, experimental, or sandbox environments
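
One way to make this tiering repeatable is a small rule-based helper. The boolean flags are assumptions about what metadata the inventory captures; a real classification will need richer criteria and human review:

```python
def classify_ai_system(*, handles_transactions_or_customer_data: bool,
                       supports_lending_or_fraud_decisions: bool,
                       in_production: bool) -> str:
    """Rule-of-thumb tiering per the four categories above (illustrative)."""
    if not in_production:
        return "Low"        # experimental or sandbox environments
    if handles_transactions_or_customer_data:
        return "Critical"   # transactions, regulatory reporting, customer data
    if supports_lending_or_fraud_decisions:
        return "High"       # lending, underwriting, fraud detection
    return "Medium"         # internal analytics, efficiency tools
```

Running every inventory entry through the same rules prevents tier assignments from drifting between business units.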

Step 2: Gap Assessment Against the 230 Controls

Map your current controls to the FS AI RMF objectives. For each control:

  1. Implemented — Control exists and is operating effectively
  2. Partially Implemented — Control exists but has gaps
  3. Not Implemented — Control does not exist
  4. Not Applicable — Control does not apply to your AI portfolio
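
The four states above can be tracked with an enum and tallied for gap reporting. The assessment entries in this sketch are hypothetical:

```python
from collections import Counter
from enum import Enum

class ControlState(Enum):
    IMPLEMENTED = "Implemented"
    PARTIAL = "Partially Implemented"
    NOT_IMPLEMENTED = "Not Implemented"
    NOT_APPLICABLE = "Not Applicable"

# Hypothetical assessment results for a handful of controls.
assessment = {
    "MP-INV-01": ControlState.PARTIAL,
    "GV-BOARD-01": ControlState.NOT_IMPLEMENTED,
    "MG-INC-01": ControlState.IMPLEMENTED,
    "MS-ADV-01": ControlState.NOT_IMPLEMENTED,
}

summary = Counter(assessment.values())          # counts per state
open_gaps = [cid for cid, state in assessment.items()
             if state in (ControlState.PARTIAL, ControlState.NOT_IMPLEMENTED)]
```

The `open_gaps` list feeds directly into the risk scoring and remediation roadmap in the next steps.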

Step 3: Risk Scoring and Prioritization

Use a risk-based approach to prioritize remediation:

  • Likelihood — How probable is the risk given your current environment? (The Mythos scenario elevated the likelihood of AI-driven zero-day exploitation from “low” to “high” overnight.)
  • Impact — What is the financial, operational, and reputational impact?
  • Control Effectiveness — How well do your existing controls mitigate the risk?
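
A common way to combine these three factors (a widely used convention, not something the FS AI RMF prescribes) is inherent risk = likelihood × impact on 1–5 scales, discounted by control effectiveness in [0, 1]:

```python
def residual_risk(likelihood: int, impact: int, effectiveness: float) -> float:
    """Residual risk score: (likelihood x impact) x (1 - control effectiveness).
    Likelihood and impact on 1-5 scales; effectiveness in [0, 1]."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5
            and 0.0 <= effectiveness <= 1.0):
        raise ValueError("likelihood/impact in 1..5, effectiveness in 0..1")
    return likelihood * impact * (1.0 - effectiveness)

# Mythos elevated AI-driven zero-day likelihood from "low" to "high"
# (here: 2 -> 5 on a 1-5 scale), with controls unchanged.
before = residual_risk(2, 5, 0.5)   # 5.0
after = residual_risk(5, 5, 0.5)    # 12.5
```

The jump in score with no change in controls is exactly why likelihood reassessments after events like Mythos should automatically reorder the remediation queue.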

Step 4: Remediation Roadmap

Build a phased remediation plan:

  • Phase 1 (0–90 days): Address critical gaps — AI inventory, board reporting, incident response
  • Phase 2 (90–180 days): Implement continuous monitoring, adversarial testing, HITL workflows
  • Phase 3 (180–365 days): Full compliance — third-party governance, explainability, model decommissioning

Step 5: Continuous Assurance

The FS AI RMF explicitly requires ongoing monitoring, not annual checkboxes:

  • Quarterly adversarial testing
  • Monthly drift detection reviews
  • Annual board-level AI risk report
  • Continuous AI inventory updates as models are deployed or retired

6. Conclusion

The FS AI RMF’s 230 control objectives represent the most comprehensive AI governance standard for financial services to date. In the wake of the Mythos crisis, these controls are not aspirational — they are the baseline expectation from regulators.

For organizations beginning this journey, start with the AI inventory (MP-INV-01), establish board accountability (GV-BOARD-01), and implement continuous vulnerability monitoring (ML-VULN-01). These three controls alone address the most immediate risks exposed by the Mythos scenario and provide the foundation for full framework adoption.

The FS AI RMF is not a compliance exercise — it is a resilience strategy. Organizations that treat it as such will be better positioned to withstand the next Mythos-level event.
