As the UAE prepares for Dubai AI Week 2026 (6–9 April), artificial intelligence (AI) in financial services is moving decisively from strategy and policy to supervisory scrutiny. What was once articulated at policy level – including through the UAE National Strategy for Artificial Intelligence 2031 – is now being translated into concrete regulatory expectations.
In February 2026, the Central Bank of the United Arab Emirates (CBUAE) issued its Guidance Note on the Consumer Protection and Responsible Adoption and Use of Artificial Intelligence and Machine Learning by Licensed Financial Institutions in the U.A.E. (the Guidance Note). While not legally binding, the Guidance Note clearly signals the direction of supervisory expectations and materially reshapes how boards and senior management should approach AI governance, risk and compliance.
This briefing summarises the key elements of the Guidance Note and outlines the practical implications for banks, insurers and other licensed financial institutions (LFIs) in the UAE.
From Strategy to Supervisory Framework
The UAE has long positioned itself as a global leader in AI. However, until recently, financial institutions operated without sector-specific AI guidance. The February 2026 Guidance Note represents a notable shift in approach.
The Guidance Note applies to all LFIs supervised by the CBUAE and, although framed through a consumer protection lens, extends into governance, model risk management, outsourcing and operational resilience. It supplements – rather than replaces – existing frameworks, including the CBUAE’s Model Management Standards (2022), Consumer Protection Regulation and outsourcing requirements.
In practical terms, while not legally binding, institutions should expect the Guidance Note to form part of supervisory dialogue and regulatory assessments going forward.
Scope: Broad Application and “High-Impact Decisions”
The Guidance Note adopts a broad definition of AI, capturing machine learning systems and generative AI tools, including large language models.
A central concept introduced is that of the “high-impact decision” – namely, any AI-driven determination that materially affects a customer’s access to financial products or services, such as credit approvals, pricing decisions or insurance claims outcomes.
The higher the potential impact on consumers, the greater the expectation for governance, documentation, oversight and human involvement. This risk-based framing is likely to shape supervisory engagement.
Governance & Accountability: A Board-Level Issue
One of the most significant developments is the explicit allocation of accountability to boards and senior management. Institutions are expected to take ownership of the outcomes generated by AI systems, the adequacy of governance and oversight structures, resourcing and internal capability, and alignment with the institution’s risk appetite and legal obligations.
AI risk must be embedded within enterprise-wide risk management frameworks. Control functions – including risk, compliance, internal audit and IT – are expected to possess sufficient technical understanding to challenge AI-driven processes effectively.
In practical terms, institutions should be considering whether they have:
- A documented AI governance framework proportionate to their AI usage;
- Regular board reporting on AI performance, bias testing, model drift, complaints and incidents; and
- A comprehensive AI inventory capturing model purpose, risk classification and key metadata.
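The Guidance Note does not prescribe a format for the AI inventory. Purely as an illustration, and with all field names and risk-tier labels being our own assumptions rather than CBUAE terminology, a minimal inventory entry might be sketched as follows:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk classification; tier names are assumptions, not CBUAE terms."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH_IMPACT = "high-impact"  # materially affects customer access to products/services

@dataclass
class AIModelRecord:
    """One entry in a hypothetical AI inventory."""
    model_id: str
    purpose: str           # business purpose of the model
    risk_tier: RiskTier    # drives the depth of governance and oversight
    owner: str             # accountable business owner
    last_bias_test: str    # date of most recent bias test (ISO 8601)
    human_oversight: str   # e.g. "human-in-the-loop", "human-on-the-loop"

# Example: a credit-approval model classified as high-impact
inventory = [
    AIModelRecord(
        model_id="credit-scoring-v3",
        purpose="Retail credit approval",
        risk_tier=RiskTier.HIGH_IMPACT,
        owner="Head of Retail Credit",
        last_bias_test="2026-01-15",
        human_oversight="human-in-the-loop",
    )
]

# Board reporting could then filter on high-impact models
high_impact = [m for m in inventory if m.risk_tier is RiskTier.HIGH_IMPACT]
```

In practice the inventory would typically live in a governance, risk and compliance system rather than code; the point is that each model carries a documented purpose, risk classification and accountable owner.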
AI governance is now expected to be treated with the same rigour as credit, market or operational risk oversight.
Fairness, Bias & Ethical Use
The Guidance Note places strong emphasis on fairness and non-discrimination. AI systems must not result in discriminatory or manipulative outcomes, whether direct or indirect.
Institutions are expected to ensure:
- Use of accurate, relevant and representative training data;
- Periodic bias testing (at least annually and following material model changes); and
- Alignment with obligations to act honestly, fairly and in customers’ best interests.
Emerging areas of risk include proxy discrimination in credit scoring, automated product targeting that may result in unsuitable sales, and the use of generative AI in customer-facing communications. Where AI affects access to financial products, independent validation and careful documentation of testing methodologies will be particularly important in managing regulatory and litigation risk.
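The Guidance Note does not mandate a particular bias-testing metric. One widely used (but jurisdiction-specific) heuristic is the disparate impact ratio – the approval rate of the least-favoured group divided by that of the most-favoured group – sketched below with entirely hypothetical data:

```python
def disparate_impact_ratio(outcomes):
    """Approval-rate ratio between the least- and most-favoured groups.

    `outcomes` maps a group label to a list of decisions (1 = approved,
    0 = declined). A common heuristic flags ratios below 0.8, though any
    threshold should follow the institution's own validated methodology.
    """
    rates = {group: sum(d) / len(d) for group, d in outcomes.items() if d}
    return min(rates.values()) / max(rates.values())

# Hypothetical approval outcomes for two customer segments
sample = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% approval
    "group_b": [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],  # 50% approval
}
ratio = disparate_impact_ratio(sample)  # 0.5 / 0.8 = 0.625, below the 0.8 heuristic
```

A single ratio is only a starting point: proxy discrimination, intersectional effects and statistical significance all require more sophisticated analysis, and the methodology itself should be documented for supervisory review.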
Transparency & Explainability
Transparency is a recurring theme, particularly in high-impact contexts. Customers should be informed when they are interacting with AI systems and, where decisions are AI-driven, provided with meaningful explanations. Institutions are expected to implement mechanisms that allow customers to seek clarification, challenge outcomes and request human review. Where third-party or “black-box” models are used, firms must ensure they have sufficient contractual access to documentation and model logic to meet these transparency expectations.
Data Governance, Privacy & Operational Resilience
AI deployment must comply with the UAE Personal Data Protection Law (Federal Decree-Law No. 45 of 2021) and incorporate privacy-by-design and security-by-design principles.
Beyond data protection, the Guidance Note links AI governance to broader operational resilience obligations. This includes:
- Robust model validation and stress testing;
- Ongoing monitoring for model drift or unintended consequences; and
- Contingency planning and fallback arrangements.
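Drift monitoring can take many forms; the Guidance Note does not prescribe one. As an illustration only, a population stability index (PSI) comparison between a model's validation-time score distribution and its current production distribution is a common approach, with hypothetical bins and an illustrative (not regulatory) escalation threshold:

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned distributions (fractions summing to ~1).

    A common rule of thumb treats PSI > 0.25 as material drift warranting
    review or fallback; the threshold here is illustrative, not regulatory.
    """
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

# Score distribution at validation time vs. in production (hypothetical bins)
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.05, 0.15, 0.30, 0.50]
psi = population_stability_index(baseline, current)

# Contingency planning: route decisions to a fallback/human-review process
escalate = psi > 0.25
```

The fallback arrangement itself – who reviews, on what timescale, and how customers are affected – is the part supervisors are likely to probe, not the metric.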
AI risk therefore intersects directly with cybersecurity, data governance and operational risk management frameworks.
Human Oversight: Limits on Full Automation
The CBUAE distinguishes between varying degrees of human involvement – from “human-in-the-loop” systems requiring active approval, to “human-on-the-loop” monitoring models and fully autonomous “human-out-of-the-loop” systems.
Fully autonomous AI is expected to be limited to lower-risk processes. In high-impact decision-making, meaningful human oversight is required and customers must have access to review mechanisms. Fully automated credit or insurance decisions without the possibility of human intervention are unlikely to meet supervisory expectations.
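The three oversight modes described above can be thought of as a routing decision based on impact classification. The mapping below is our own reading of the Guidance Note's distinctions, not a prescribed scheme, and the labels are hypothetical:

```python
def route_decision(impact: str, model_decision: str) -> str:
    """Illustrative routing of an AI-driven decision by impact level.

    The impact-to-oversight mapping is an assumption based on the Guidance
    Note's human-in/on/out-of-the-loop distinction, not CBUAE-prescribed.
    """
    if impact == "high":
        # Human-in-the-loop: active human approval required before the
        # decision takes effect (e.g. credit or insurance outcomes)
        return "pending_human_approval"
    if impact == "medium":
        # Human-on-the-loop: decision takes effect but is monitored
        # and remains open to human review on challenge
        return f"{model_decision}_monitored"
    # Lower-risk processes may run human-out-of-the-loop
    return model_decision

status = route_decision("high", "decline")  # held for human approval
```

Whatever the routing logic, customers facing high-impact outcomes must retain a route to human review, which is an operational capability as much as a technical one.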
Outsourcing & Third-Party AI Risk
As institutions increasingly rely on external AI vendors and cloud providers, outsourcing risk becomes central. The Guidance Note makes it clear that regulatory responsibility cannot be outsourced.
Institutions remain accountable for third-party AI systems and are expected to:
- Conduct thorough due diligence;
- Secure audit and information rights contractually;
- Maintain inventories of third-party models;
- Perform independent cybersecurity assessments; and
- Retain the ability to suspend or terminate systems if required.
Many LFIs may need to review and, where necessary, renegotiate existing AI and cloud contracts to ensure alignment with these expectations.
Integration with Existing Risk Frameworks
A central message of the Guidance Note is that AI risk should not sit in isolation: AI-related risks – particularly those affecting customers – should be incorporated into conduct risk, credit risk, operational risk and cybersecurity frameworks.
In practical terms, regulators are likely to assess AI through existing supervisory frameworks rather than establishing a wholly separate AI compliance regime.
Market Context: Growing Adoption
The Dubai Financial Services Authority (DFSA) has reported a marked increase in AI adoption across financial firms, including rapid expansion in generative AI use. As AI becomes embedded in core operational processes, supervisory expectations are evolving accordingly.
Institutions that view AI solely as an innovation initiative may find themselves exposed to governance and compliance gaps as regulatory scrutiny intensifies.
Recommended Next Steps for LFIs
In light of the Guidance Note, LFIs should consider undertaking a structured review of their AI frameworks.
Priority actions may include:
- Conducting an AI governance gap analysis;
- Enhancing board-level engagement and reporting on AI risks;
- Formalising AI inventories and risk classification methodologies;
- Reviewing vendor and outsourcing arrangements;
- Strengthening bias testing, validation and documentation processes; and
- Ensuring operational capability for meaningful human review of high-impact decisions.
Early engagement and proactive remediation will place institutions in a stronger position during supervisory reviews.
AI as a Regulated Operational Capability
The February 2026 Guidance Note marks a clear evolution in the UAE regulatory landscape. AI is no longer viewed purely as an innovation tool; it is increasingly treated as a regulated operational capability requiring structured governance, oversight and consumer safeguards.
Although formally non-binding, the Guidance Note establishes a clear supervisory trajectory. Institutions that proactively align AI strategy with enterprise risk management and governance frameworks will be better positioned to manage regulatory scrutiny, mitigate legal exposure and maintain customer trust in an increasingly AI-enabled financial sector.
Please contact our Banking team for advice on this or other Financial Services or Regulatory matters.
This article is intended for general informational purposes only and does not constitute legal advice. Readers should seek independent legal counsel in relation to their specific circumstances.