
Generative AI in KYC workflows



Generative AI, or GenAI, is a trending global topic. Its success is largely due to its ease of use across a multitude of cases. Four of the most common are: content creation; translation; writing and debugging code; and good old-fashioned learning.

Although GenAI has taken the world by storm, it’s actually not a new concept. There have been models developed by OpenAI and others able to perform similar tasks for some time, so why does it have so much traction now?
One of the reasons for its popularity is that the latest GenAI tools, like ChatGPT, can readily understand natural language. Understanding and evolving responses based on user direction is something to behold. GenAI can determine the meaning as well as the intent of its users, and that is a game-changer.

ChatGPT, for instance, can follow an entire conversation and maintain context, so a user could use pronouns, refer to various segments of the conversation, ask it to summarize the conversation or update the output to sound funny or poetic, and it will be able to follow and respond accordingly.
GenAI has demonstrated the effectiveness of a conversation-based paradigm for human learning and understanding in assisted writing and coding, creating diet plans, booking travel itineraries, and so on. And know your customer (KYC) processes can similarly benefit from GenAI chat-based workflows – providing a natural, human-friendly user experience.

With this kind of chat-based AI in a KYC workflow, interactive investigations and intelligent screening of entities become possible. This approach plays to the strengths of human cognition – people are better suited to asking the right questions and forming rational answers than to memorizing facts and code syntax.




Limitations of GenAI in KYC

Although GenAI tools such as ChatGPT are exciting technologies that understand natural language, there are limitations:

  • No recent search results: Large Language Models (LLMs) only know the data they were trained on, so they may require regular updates.

  • No fact-checking: GenAI does not verify its own output, so where possible, accurate and verified KYC data should be included in the assessment process.

  • Weak logical support for claims: Sometimes the evidence and arguments provided by a GenAI tool are general and don’t justify the claims being made.

  • Compulsive text generator: GenAI will always generate a response, regardless of how much it knows about a subject. LLMs may also be prone to hallucinations, which are false or inaccurate statements that may sound plausible – this is where having human judgment to review content is especially important.

  • Fragmented regulatory landscape: Regulations for AI differ from region to region, which may complicate the adoption of AI processes in KYC across jurisdictions. Most countries globally are seeking to enact laws or regulation surrounding AI usage, albeit in various stages of progress. Currently, the European Union’s AI Act is the most comprehensive regulation globally that establishes a risk-based approach for compliance for EU member states. Italy is the first EU country to pass a national AI Law, which came into effect October 10, 2025; Brazil’s Senate approved a draft of its AI Bill in December 2024; and Singapore, Saudi Arabia, and the United Kingdom are currently prioritizing sector-led frameworks for AI governance.

  • Navigating ethical concerns: The usage of AI for due diligence may also raise certain ethical considerations, such as the risk of bias when training AI models that could potentially undermine the integrity of decision-making processes.


Moody’s believes that harnessing GenAI technology and integrating it with existing proprietary datasets and workflow technology adds value, supporting the development of smarter KYC workflows with embedded natural-language solutions.




Smarter, GenAI-enabled KYC

Public GenAI tools, like ChatGPT, only respond with publicly available information – for example, ChatGPT’s training data drawn from the web. These instances of GenAI therefore don’t know about, and can’t include, information from KYC databases such as Moody’s Grid and Orbis, or our official registry datasets from Kompany. These datasets are Moody’s proprietary information. Integrating trusted external datasets into GenAI-enabled KYC workflows can help provide compliance teams with a more holistic picture of risk.

Ideal responses to KYC queries could include trusted, verified, and updated data, presented to risk and compliance professionals to support a well-judged and fair outcome. To achieve this, an AI chat interface within a KYC platform would need to be established to support and inform decision making.
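As a minimal sketch of how such a chat interface could ground answers in trusted data, the snippet below retrieves verified facts for an entity and folds them into the prompt sent to a model. The record store, field names, and prompt format are all illustrative assumptions, not Moody’s actual schema or API:

```python
# Illustrative only: a toy record store standing in for proprietary,
# verified KYC data. Field names and values are made up for this sketch.
RECORDS = {
    "Acme Trading Ltd": {
        "registry_status": "active",
        "sanctions_hits": 0,
        "adverse_media_hits": 2,
    },
    "Globex Corp": {
        "registry_status": "dissolved",
        "sanctions_hits": 1,
        "adverse_media_hits": 5,
    },
}

def retrieve(entity_name: str) -> dict:
    """Look up verified data for an entity; empty dict if unknown."""
    return RECORDS.get(entity_name, {})

def build_prompt(question: str, entity_name: str) -> str:
    """Pair the user's question with verified facts, so the model
    answers from trusted data instead of guessing."""
    facts = retrieve(entity_name)
    if not facts:
        context = "No verified records found for this entity."
    else:
        context = "; ".join(f"{k}={v}" for k, v in facts.items())
    return (
        "Answer using only the verified facts below.\n"
        f"Verified facts for {entity_name}: {context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("Is this entity safe to onboard?", "Globex Corp")
print(prompt)
```

In a production system, the prompt would then be passed to an LLM, and the retrieval step would query live, verified datasets rather than a hard-coded dictionary.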

Currently, a suite of Moody’s KYC products works to provide a structured input and output, with LLMs acting behind the scenes in various pipelines to support intelligent screening. Moody’s Agentic Solutions will offer companies the ability to leverage automated compliance workflows for customer due diligence and KYC checks. Powered by AI, these systems comprise coordinated agents working together to automate processes to inform decision-making.

In the future, customer experience improvements will range from natural-language querying and generative responses to agentic AI capabilities powered by Moody’s industry-leading data and KYC workflow automation.




AI, machine learning, and GenAI revolutionizing KYC

Using AI technology to enhance KYC processes includes GenAI, of course, but it encompasses much more besides.

Effective KYC processes are essential for compliance and third-party risk management in a range of sectors, from financial services to corporates to fintechs and beyond.

Manual KYC methods, which can be error-prone, time-consuming, and labor-intensive, have been phased out in many industries. Through digital transformation and the introduction of RegTech, organizations can carry out automated risk assessments at onboarding and throughout a customer lifecycle.

KYC platforms, like Moody's, have revolutionized areas of anti-money laundering compliance, rules-based risk management, and transaction monitoring – by offering workflow automation, integration with global data, and full case management for human oversight.

Advancements in AI and machine learning (ML) are continuing this expansion of digital KYC – offering more solutions for automated identity verification, security enhancements such as liveness tests, and intelligent screening that leads to more efficient name matching and entity verification.

AI and ML technologies can help authenticate identities; screen government IDs and bank statements; and help identify patterns that could potentially indicate the presence of fraud. These technologies also enable continuous monitoring for suspicious activities or changes in risk profiles, flagging high-risk entities within a counterparty network who can then become subject to enhanced due diligence (EDD).




The current state of AI in risk-related compliance

Our latest study on AI in risk-related compliance revealed that AI is being implemented or considered for various uses, with the top few being KYC and screening, technology implementation, risk analytics, and data management.
Of the 600 professionals in risk and compliance surveyed globally, 84% stated they see significant benefits to using AI, and 62% expect AI will be widely adopted in the next three years. However, safeguards will need to be in place to mitigate the technology, privacy, accuracy, and security concerns of using AI in KYC. Firms will also need to address the challenges associated with adopting AI at scale: regulatory concerns, lack of internal expertise or skills, and difficulties in integrating with existing systems.

As the risk and compliance profession adapts to the changing AI landscape, it’s clear most participants believe AI will change their roles: 61% believe they will take on more strategic or advisory responsibilities, and 54% think they will collaborate more with technology teams to develop AI tools. Having a human in the loop to make judgments and to identify and mitigate biases and mistakes remains essential.

As the technology progresses, we expect to see more developments, particularly around agentic AI, that could enhance compliance and KYC.




The future of KYC with AI: Combining proprietary data and workflows for smart screening

KYC processes are designed to help organizations understand who they are doing business with. They help prevent financial crime, like money laundering and fraud. And there are typically three steps in a KYC process, which form part of the first line of defense:

  1. Entity verification confirms that the entity of interest is a legitimate legal entity (individual or organization)
  2. Entity profiling creates and updates the entity risk profile using events associated with the entity
  3. Entity screening helps navigate business decisions regarding the entity based on its risk profile, government sanctions, and entity ownership and control
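
The three steps above could be sketched as a simple pipeline. All entity data, field names, and decision rules here are illustrative placeholders, not a real verification service:

```python
# Toy pipeline for the three KYC steps: verify, profile, screen.
# Every rule and field below is a placeholder for illustration.

def verify(entity):
    """Step 1: entity verification - confirm it is a legitimate legal entity."""
    return entity.get("registry_number") is not None

def profile(entity, events):
    """Step 2: entity profiling - build a risk profile from associated events."""
    entity["risk_events"] = [e for e in events if e["entity"] == entity["name"]]
    return entity

def screen(entity, sanctions_list):
    """Step 3: entity screening - decide based on risk profile and sanctions."""
    if entity["name"] in sanctions_list:
        return "reject"
    return "escalate" if entity["risk_events"] else "approve"

entity = {"name": "Acme Ltd", "registry_number": "12345678"}
events = [{"entity": "Acme Ltd", "type": "adverse_media"}]
if verify(entity):
    decision = screen(profile(entity, events), sanctions_list={"Globex Corp"})
    print(decision)  # escalated: risk events found, but not sanctioned
```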

When it comes to due diligence, KYC professionals can utilize automation to process data differently, carrying out intelligent screening that detects patterns and analyzes behaviors indicating risk, rather than relying on manual questioning. Applications of AI and ML in the first line include:

  • Automated verification: AI systems can potentially verify customer identities by cross-referencing information from diverse sources, analyzing document authenticity, and matching biometric data, which speeds up customer onboarding while minimizing errors. And the process can be adjusted to each organization’s risk appetite.

  • Enhanced risk assessment: AI can enhance KYC by analyzing vast amounts of data to potentially detect suspicious connections, assigning risk scores to a profile, and identifying unusual patterns of behavior. This can support a more dynamic, tailored approach to KYC.


The question-and-answer format provided by GenAI chat could then help compliance teams deal with KYC alerts that require escalation to the second line of defense, which is where investigations happen, and practitioners have to interrogate data and risk profiles at a more in-depth level.

AI and ML in KYC workflows could support processing the vast amounts of data needed to understand a world of risk and generate profiles that enable better decision making. To supplement these workflows, GenAI’s chat-based model could be used as an interactive and transparent investigation and research tool surrounding an entity being assessed in an EDD process.

AI's role in KYC is set to expand, offering organizations significant benefits such as reduced costs, heightened accuracy, faster time-to-decision, and an optimized customer experience. By leveraging AI for identity verification and risk assessment, compliance teams can allocate resources more effectively, focusing on areas of highest risk and importance.




FAQs about AI and ML technologies

  1. What is the difference between AI and ML?

    AI and ML are closely related but distinct concepts within the field of computer science. AI refers to the broader discipline of developing intelligent machines capable of simulating human intelligence. AI aims to mimic or perform a task that would normally require human engagement to make decisions or take actions. ML, on the other hand, is a subset of AI that focuses on using large sets of data and the nuanced patterns within the data to generate software models used to solve problems and derive insights.

    ML algorithms can be updated to improve their performance over time through the addition of new data, allowing systems to recognize patterns, make predictions, or perform tasks without explicit instructions. While ML is a powerful tool in AI, it is just one component of the broader field, which also includes areas like natural language processing (NLP), computer vision, and robotics.

  2. How can AI/ML technology be adopted into a KYC process?

    Moody's AI Review technology can create an AI-powered Alert Score for each entity that an organization, such as a bank or corporation, wants to screen. This 0-1 score represents the confidence of a match for the screened name, where 0.00 indicates no match and 1.00 indicates a match.

    Moody’s Grid dataset feeds AI Review. More than 12.1 million rows of data were used to train our AI Review global model. Organizations can choose from pre-configured screening alerts or customize alerts according to their risk policies, using AI Review to reduce false positives and irrelevant hits sent to the compliance team for analysis.

    Setting an alert threshold to adhere to a firm’s unique needs
    The Alert Score can also be used to filter results. For example, firms can choose to further analyze only results with a score greater than 0.25 to help reduce false positives. This tunable screening, defined by each company, can be configured to drive efficiency while maintaining control over the screening process.

    As mentioned, alerts are scored by the model on a scale of 0-1. An alert with a score closer to 1 signals that the screened name could present a higher risk, and can then be sent to level 2 analysts for additional investigation. By setting a threshold, scores at the lower end of the scale, which signify a lower likelihood of a true match, can be filtered automatically. This means the ML technology provides repeatable alert scoring that helps sort out false positives, so compliance teams can devote more time to the important work of investigating higher-risk alerts.
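
The threshold-based triage described above can be sketched as follows. The threshold value and alert structure are examples for illustration, not the AI Review product’s actual interface:

```python
# Illustrative sketch of threshold-based alert triage using a 0-1 score.
# The threshold and alert records below are examples, not product defaults.

ALERT_THRESHOLD = 0.25  # example threshold; each firm sets its own

alerts = [
    {"name": "John Smith", "score": 0.12},
    {"name": "Jon Smyth", "score": 0.31},
    {"name": "J. Smith", "score": 0.87},
]

def triage(alerts, threshold=ALERT_THRESHOLD):
    """Split alerts: scores at or below the threshold are auto-filtered
    as likely false positives; the rest go to analysts for review."""
    escalate = [a for a in alerts if a["score"] > threshold]
    auto_filtered = [a for a in alerts if a["score"] <= threshold]
    # Highest scores first, so analysts see the riskiest matches first.
    escalate.sort(key=lambda a: a["score"], reverse=True)
    return escalate, auto_filtered

to_review, filtered = triage(alerts)
print([a["name"] for a in to_review])  # riskiest names first
```

Raising or lowering the threshold is how a firm trades off analyst workload against the risk of filtering out a true match.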

    Monitoring the ML model to implement proper governance
    It is important that any company deploying AI/ML technology understands the inner workings of how it is being used within the organization’s processes and procedures, and to have governance around its design and implementation.

    Our approach is to deploy models that are static, meaning the model does not learn from customer decisions in real-time without human intervention. This is purposeful and by design. Otherwise, the continuous, automated training that the ML model does could present a significant oversight burden for regulated financial institutions and multi-national companies conducting KYC/AML processes. Instead, we monitor our models for any drift that may indicate the model is no longer fit for purpose.
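
As one illustration of how drift monitoring can work, the sketch below uses the Population Stability Index (PSI), a common measure for comparing a model’s recent score distribution against the distribution it was validated on. The bins, distributions, and thresholds are examples, not Moody’s actual monitoring setup:

```python
import math

# Illustrative drift check using the Population Stability Index (PSI).
# PSI compares two distributions over the same score bins; higher values
# mean more shift between validation-time and production behavior.

def psi(expected_props, actual_props, eps=1e-6):
    """PSI across score bins, given each bin's share of scores."""
    total = 0.0
    for e, a in zip(expected_props, actual_props):
        e, a = max(e, eps), max(a, eps)  # avoid log(0) on empty bins
        total += (a - e) * math.log(a / e)
    return total

# Example: share of alert scores in four bins (0-0.25, ..., 0.75-1).
baseline = [0.55, 0.25, 0.15, 0.05]  # distribution at validation time
recent = [0.40, 0.25, 0.20, 0.15]    # distribution in production

value = psi(baseline, recent)
# A common rule of thumb: PSI above roughly 0.25 suggests the model may
# no longer be fit for purpose and warrants human review.
print(f"PSI = {value:.3f}")
```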

  3. What are the common challenges for adoption of AI in KYC workflows?

    The adoption of AI into KYC processes is influenced by several barriers. Here are some common challenges:
  • Implementation costs and integration: Deploying AI systems can involve significant costs, including technology infrastructure, data management, model development, and ongoing maintenance. Organizations need to invest in skilled resources, robust IT infrastructure, and seamless integration with existing systems and processes.

  • Data quality and availability: AI systems heavily rely on high-quality and comprehensive data for effective training and decision-making. However, AML data can be fragmented, incomplete, or of varying quality.

  • Regulatory compliance: Implementing AI solutions while adhering to regulatory frameworks and guidelines can be complex. Organizations must ensure that AI systems are transparent, auditable, and compliant with regulatory standards such as explainability, fairness, and non-discrimination.

  • Interpretability and explainability: AI models, particularly complex deep learning models, are often considered "black boxes" as they lack interpretability. Understanding how an AI system reaches its decisions or predictions is crucial in the AML industry to justify outcomes and provide explanations for regulatory compliance.



Get in touch

Using innovative technology while keeping humans in the loop is a great way to help you answer three key questions:

  1. Who am I doing business with?
  2. What are the risks of working with them?
  3. How can I address this at scale?

Moody's is creating solutions so you can better understand risks and make decisions with confidence. Get in touch any time to talk to us about how we can help with your KYC workflows – we would love to hear from you.