Artificial Intelligence

AI governance: Navigating EU compliance standards

To ensure compliance with Articles 1-5 of the EU AI Act, which took effect on February 2, 2025, obliged entities need to focus on several key areas. These articles set out the Act's subject matter, scope, and definitions, establish AI literacy obligations, and prohibit AI practices that pose an unacceptable risk.

Key focus areas for compliance

Understanding risk categories

Entities should categorize their AI systems according to the risk levels defined in the AI Act: unacceptable, high, limited, or minimal risk. This classification determines the specific obligations that apply. For instance, systems classified as unacceptable risk, such as those used for social scoring or real-time remote biometric identification in publicly accessible spaces (subject to narrow exceptions), are prohibited outright.
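The tiered logic above can be sketched as a simple lookup. The four tier names follow the Act, but the one-line obligation summaries are this article's shorthand, not legal text:

```python
# Purely illustrative mapping of the AI Act's four risk tiers to the broad
# obligations described in this article. Summaries are shorthand, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring)",
    "high": "full duties: documentation, conformity assessment, registration",
    "limited": "transparency obligations (e.g. disclosing AI interaction)",
    "minimal": "no specific obligations beyond existing law",
}

def obligations_for(tier: str) -> str:
    """Look up the obligation summary for a classified system."""
    return RISK_TIERS[tier]
```

In practice, classification is a legal assessment of the system's intended purpose, not a dictionary lookup; the sketch only shows how the tier drives the applicable obligations.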
 

Technical documentation and compliance

Providers of high-risk AI systems must prepare comprehensive technical documentation demonstrating compliance with the requirements outlined in Articles 8 to 15. This documentation should include details on testing, conformity assessments, and risk management measures. Key elements include:

  • Quality management system: Establish a robust quality management system that aligns with regulatory standards. 
  • EU declaration of conformity: Draft and maintain an EU declaration of conformity for each high-risk AI system, which must be updated regularly.
     

Risk management

Obliged entities need to implement a continuous, iterative risk management system that identifies and mitigates risks across the AI system's lifecycle. In addition, deployers that are public bodies or provide certain public or essential services must conduct a Fundamental Rights Impact Assessment (FRIA) before putting a high-risk system into use.
 

Data governance

Entities must ensure high-quality data governance practices are in place. This involves using relevant and unbiased training datasets and maintaining traceability throughout the AI system's lifecycle. Specific requirements include:

  • Keeping records of data sources and ensuring datasets are representative and appropriately scrutinized.
  • Facilitating automatic event logging to support post-market surveillance.
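A minimal sketch of what automatic event logging for traceability might look like. The record schema (timestamp, event type, free-form details) is an illustrative assumption; the AI Act requires automatic recording of events but does not prescribe a format:

```python
import json
import time

def log_event(sink: list, event_type: str, **details) -> dict:
    """Append a timestamped, structured record of a system event.

    The schema here is an illustrative assumption, not a format
    mandated by the AI Act.
    """
    record = {
        "timestamp": time.time(),
        "event_type": event_type,
        "details": details,
    }
    sink.append(json.dumps(record))
    return record

# Example: record an inference so it can be reviewed during
# post-market surveillance.
audit_log: list = []
log_event(audit_log, "inference", model_version="1.2", input_id="req-001")
```

In a real deployment the sink would be durable, append-only storage with retention controls rather than an in-memory list.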
     

Human oversight and transparency

The implementation of AI systems needs to allow for effective human oversight. This includes providing clear user instructions and ensuring operators can understand and interpret the system's outputs. Explainability and transparency are crucial for fostering trust and accountability.
 

Registration and CE marking

Before placing AI systems categorized as high risk on the market, entities must register these systems in the EU database as specified in Article 49 of the AI Act. Additionally, affixing a CE marking indicates compliance with EU legal standards.
 

Monitoring and reporting obligations

Entities need to establish continuous monitoring mechanisms to assess the performance of deployed AI systems, and they must report serious incidents or non-compliance to national authorities as required. By focusing on these areas, entities can more effectively navigate the requirements set out by the AI Act, ensuring their AI systems operate within legal frameworks while promoting safety and ethical standards in AI deployment.
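A continuous-monitoring check could be as simple as comparing a deployed system's observed performance against its documented baseline and flagging drift for escalation. The metric, threshold, and escalation rule below are assumptions for illustration, not requirements prescribed by the Act:

```python
def needs_escalation(observed_accuracy: float,
                     baseline_accuracy: float,
                     tolerance: float = 0.05) -> bool:
    """Flag a deployed system whose performance has drifted beyond a
    tolerance, so it can be reviewed and, if necessary, reported.

    The 5-point tolerance is an illustrative assumption; real thresholds
    belong in the provider's documented risk management system.
    """
    return (baseline_accuracy - observed_accuracy) > tolerance
```

A flagged result would trigger the provider's internal review process, which then determines whether a report to the national authority is required.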
 

Guidelines on prohibited AI practices

In February 2025, the EU Commission published guidelines on the EU AI Act's prohibited AI practices, providing additional guidance on “dos and don’ts.” While non-binding, these guidelines help obliged entities understand more clearly what they need to do under the AI Act.

Key points of the guidelines on prohibited AI practices

The AI Act prohibits AI systems that pose unacceptable risks, including those that manipulate or exploit individuals, conduct social scoring, or infer emotions in workplaces or educational settings. The ban applies both to providers and to deployers of such AI systems.

Examples of prohibited practices include AI systems that:

  • Use subliminal, manipulative, or deceptive techniques to materially distort behavior by impairing a person's ability to make informed decisions. 
  • Exploit vulnerabilities related to age, disability, or socio-economic status with the aim of materially distorting a person's behavior. 
  • Use social scoring to evaluate or classify people based on social behavior or personal characteristics over a period of time. 
  • Rely solely on profiling or personality-trait assessment to predict the risk of a person committing a criminal offence. 
  • Create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. 
  • Infer emotions in the workplace or educational settings.


Responsibilities for AI providers

Providers of AI systems are responsible for ensuring their systems are not "reasonably likely" to be used for prohibited purposes and should adopt safeguards to prevent misuse. This includes technical safeguards, user controls, and use restrictions. They should also explicitly exclude prohibited practices in their terms of use and provide clear instructions on appropriate human oversight and on which uses are prohibited.
 

Compliance

AI providers should ensure continuous monitoring and updates to AI systems they've placed on the market. If misuse is detected, they are expected to take appropriate measures as prescribed in the AI Act.
 

Enforcement

While the provisions banning certain AI practices took effect on February 2, 2025, penalties for breaches of Article 5 will only come into force on August 2, 2025. Fines may reach the higher of €35 million or 7% of total worldwide annual turnover.
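The "higher of" rule works out as follows. The turnover figure in the example is hypothetical, chosen only to show when the percentage-based cap overtakes the fixed floor:

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for an Article 5 breach: the higher of EUR 35 million
    or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical firm with EUR 1 billion turnover: 7% is EUR 70 million,
# which exceeds the EUR 35 million floor, so the 7% figure applies.
```

For any firm with worldwide annual turnover above EUR 500 million, the 7% figure is the binding cap; below that, the EUR 35 million floor applies.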
 

Goals of the AI Act

The EU AI Act aims to create uniform regulations for the development, placement, and use of AI technology within the EU. AI systems are categorized into four risk levels: unacceptable, high, limited, and minimal risk. This classification helps ensure safe introduction, increase consumer trust, and promote acceptance of new technologies.
 

Scope and application

The AI Act applies to entities within the EU and to those outside the EU if their activities or outputs impact the EU. Non-EU providers must appoint an authorized representative established in the Union. Obligations for general-purpose AI models apply from August 2, 2025.
 

Financial services sector impact

Financial institutions using AI tools for credit scoring and insurance risk assessment must comply with stringent requirements regarding data quality, documentation, transparency, human oversight, and cybersecurity. Financial institutions must also consider how the AI Act interacts with other regulatory frameworks, such as anti-money laundering directives and anti-fraud programs.
 

Implications for fraud prevention and AML

The AI Act's focus on promoting trustworthy AI and adherence to fundamental rights and ethical principles will impact AI technology used for risk management, fraud prevention, and anti-money laundering. Organizations must continue monitoring regulatory developments and consider how these laws interact with other frameworks targeted at financial crime prevention.
 

Overall impact

The AI Act is intended to represent a significant step in promoting responsible AI use. By embracing trustworthy AI principles and ensuring compliance, businesses can adapt to changes and harness AI ethically and effectively to drive efficiencies in risk screening, fraud prevention, anti-money laundering, and more.


Learn more

Leverage AI for risk and compliance

For more information on how Moody’s can support your risk and compliance processes, including automated screening that leverages AI, please get in touch – we would love to hear from you.