Synthetic identities and why they are important in today’s digital landscape



Synthetic identities are created by combining real and fictitious information to generate a new persona, intentionally leaving a false digital footprint across social media and public records. Bad actors then use these identities for misdirection, fraud, unauthorized access, and other illegal acts.

Unlike traditional identity theft, which involves stealing an existing person’s identity, synthetic identity fraud creates entirely new identities that may not be linked to any real individual, even if some of the biographical or demographic information is genuine.

Fraudsters go to considerable lengths to make synthetic IDs appear authentic and legitimate, fabricating substantial historical detail. In a technique known as “backstopping,” for instance, thieves and bad actors develop credible backstories - going well beyond an SSN and a credit history - blending actual and fictional information so the fake identity appears plausible and convincing, while also becoming harder to detect.

Synthetic IDs can be leveraged in numerous ways to defraud businesses and individuals, and they are being used with increasing frequency. While it is unclear exactly how much is lost to synthetic identity fraud, a 2023 report from Thomson Reuters estimated losses in the range of “$20 to 40 billion and growing.”

Here are four examples of fraud using synthetic IDs that have been observed:

  1. Fraudulent access: Synthetic identities used to open bank accounts, apply for loans, and commit credit card fraud, causing significant financial losses to institutions and individuals.
  2. Regulatory evasion: Bad actors using synthetic identities to evade sanctions and regulatory scrutiny, making it difficult for authorities to track and prevent illegal acts.
  3. False qualifications: Exploiting medical records and creating fake qualifications to obtain fraudulent claims for disabilities and illnesses, furthering a variety of healthcare frauds.
  4. Influence campaigns: Synthetic identities can be leveraged to manipulate public opinion and spread misinformation, perhaps for political or economic gain.

There are key sectors that are potentially more prone to the risk of being targeted by criminals using synthetic IDs. Technology (hardware), financial services, shipping and transportation are among the most exposed sectors to synthetic ID fraud, as customer and supplier interactions are typically established digitally rather than in person. The complex nature of third-party supplier networks in certain sectors, comprising multiple layers, can further exacerbate vulnerabilities.




Synthetic identity fraud: Obfuscation of beneficial ownership and corporate structuring

Synthetic IDs can be leveraged in a business-to-business or corporate setting to hide beneficial ownership - for example, to evade sanctions. A specially designated national (SDN) - otherwise known as a sanctioned individual - may use a synthetic ID to obscure their involvement in a business during onboarding or monitoring. Alternatively, an SDN may attribute partial or full ownership of an entity to a fake ID to show no association, or a reduced controlling stake.

Equally, those who have been found guilty of a serious crime, such as money laundering or fraud, can use a synthetic ID to hide their past when setting up a new business enterprise. This could help them gain access to the legitimate financial system or form a new business with their history undetected.




The evolving threat of synthetic identity fraud in a digital age

Challenges created by synthetic identities present a growing and shifting threat in the digital age, as technological innovation brings new opportunities that criminals can exploit. As far back as six years ago, the Federal Reserve published an online report flagging the rise of synthetic identity fraud as the fastest-growing type of fraud in the US. And in November 2024, FinCEN issued an alert on fraud schemes using deepfake media to target financial institutions.

The advent of deepfake video and audio, along with the introduction of Generative AI (GenAI), has made it easier to create convincing synthetic identities - which means anti-fraud strategies need to adapt to improve detection and prevent new threats.

This is also a supranational problem: criminals can exploit gaps between national systems, databases, and government agencies to carry out their schemes, making synthetic ID fraud a truly global issue.

Organizations can reduce risk exposure by sourcing methods of identifying and blocking synthetic identities before they are used for fraudulent purposes. This may be easier said than done, however, as both automated solutions and human analysts can struggle to separate fact from fiction.




Unified, global dataset for a holistic picture of risk

AI-led systems can help detect “tells” - indicators left behind in digital footprints - and inconsistencies or outliers can be a particular trigger for determining whether an identity is real or synthetic. It is therefore important to amass as much information as possible, so the sample is large enough for these signals to be meaningful.
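As an illustration, the kind of inconsistency checks such systems automate can be sketched in a few lines. The profile fields, thresholds, and rules below are hypothetical, simplified stand-ins for the far richer signals a production system would use:

```python
from datetime import date

# Hypothetical profile fields - real systems draw on far richer data.
profile = {
    "date_of_birth": date(1990, 5, 1),
    "ssn_first_seen": date(2022, 3, 1),       # first appearance in bureau data
    "oldest_credit_account": date(2022, 4, 15),
    "social_media_accounts": 1,
    "address_history_years": 0.5,
}

def inconsistency_flags(p, as_of):
    """Return human-readable 'tells' found in a profile as of a given date."""
    flags = []
    age = (as_of - p["date_of_birth"]).days / 365.25
    ssn_age = (as_of - p["ssn_first_seen"]).days / 365.25
    # An established adult whose SSN surfaced only recently is a classic signal.
    if age > 25 and ssn_age < 3:
        flags.append("SSN first seen only recently for an established adult")
    # A thin, newly created digital footprint is another outlier.
    if p["social_media_accounts"] <= 1 and p["address_history_years"] < 1:
        flags.append("minimal digital footprint and short address history")
    # Credit history should not predate the SSN's first appearance.
    if p["oldest_credit_account"] < p["ssn_first_seen"]:
        flags.append("credit account predates SSN first appearance")
    return flags

print(inconsistency_flags(profile, as_of=date(2025, 1, 1)))
```

No single flag is conclusive; the value comes from aggregating many weak signals across a broad dataset, which is why breadth of data matters.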

While identity verification in Know Your Customer (KYC), enhanced due diligence (EDD), and onboarding/monitoring processes has adapted to automation, the advent of synthetic identities means a human/machine hybrid approach may serve organizations better, with each providing a backstop for the other. Organizations can combine everything automation has to offer - data, online documentation, screening, and workflow checks - with AI-powered filtering, leaving human analysts to conduct EDD and make the final judgement call.

  • Diverse datasets can be leveraged to create an overall risk profile of an individual or entity, as well as the level of risk they pose based on an organization’s risk appetite
  • Machine learning tools can help filter potential profiles to present a “true match” against an individual name
  • Organizations can apply pre-calculated risk criteria to determine whether an individual looks high, medium, or low risk
  • Automated EDD can request additional documentation, information, or data checks
  • Technology using facial recognition for “liveness” tests can support authentication
  • New validation technologies can be combined with existing ones - for example, voice recognition tele-assistants may instruct people not just to “read a phrase” but to answer a security question on file in their true voice - providing two forms of identification in one
  • Specialist partners can be engaged to analyze video and audio for signs of deepfakes
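Two of the steps above - filtering screening hits to likely “true matches” and applying pre-calculated risk criteria - can be sketched with a toy example. The match threshold, risk factors, and weights here are invented for illustration; real screening engines use phonetic matching, aliases, transliteration, and far more nuanced scoring:

```python
from difflib import SequenceMatcher

def name_match_score(candidate, target):
    """Crude fuzzy-match score in [0, 1] between two names."""
    return SequenceMatcher(None, candidate.lower(), target.lower()).ratio()

# Hypothetical pre-calculated criteria; weights would reflect an
# organization's own risk appetite.
RISK_WEIGHTS = {
    "sanctions_hit": 50,
    "adverse_media": 20,
    "thin_credit_file": 15,
    "new_digital_footprint": 15,
}

def risk_tier(factors):
    """Map a set of risk factors to a high/medium/low tier."""
    score = sum(RISK_WEIGHTS[f] for f in factors)
    if score >= 50:
        return "high"
    if score >= 25:
        return "medium"
    return "low"

# Filter raw screening hits down to likely "true matches" before tiering.
hits = ["John A. Smyth", "Jon Smith", "Maria Gonzales"]
matches = [h for h in hits if name_match_score(h, "John Smith") >= 0.8]
print(matches)
print(risk_tier(["adverse_media", "thin_credit_file"]))  # -> medium
```

The point of the filter step is to shrink the pool a human analyst must review; tuning the threshold trades false positives against missed matches.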

For higher-risk individuals who appear problematic for a combination of reasons, human analysts can be brought in to conduct the final investigation and review, making the ultimate determination as to whether an entity can be trusted.




Regulation, data privacy, and ethical concerns

Legislation governing synthetic ID fraud is currently limited; it sits within existing laws for identity theft and fraud prevention - separate and broader risk typologies that businesses also need to manage and consider within their control frameworks.

Compliance requirements related to onboarding, due diligence, and monitoring don’t make specific provision for synthetic ID fraud, although under laws such as the UK’s Economic Crime and Corporate Transparency Act, failure to prevent fraud is now an offence.

Rather than being driven by laws and regulation, understanding and addressing synthetic ID fraud is driven by the need to manage and mitigate the risk of financial loss, malpractice, and reputational damage.




Combating synthetic identity fraud with advanced technologies

Synthetic identities, enhanced by sophisticated backstopping and innovative use of technologies to blend real and fictitious information, pose a significant challenge in today's digital risk and compliance landscape. These identities are increasingly employed by bad actors for the purposes of financial fraud, regulatory evasion, healthcare fraud, influence campaigns, and more.

The rise of synthetic identity fraud, highlighted by the Federal Reserve as the fastest-growing type of fraud in the US, has been accelerated by the abuse of deepfakes and GenAI.

To mitigate risks, organizations can consider adopting these same technologies to tackle the problem when onboarding or conducting ongoing due diligence on customers and suppliers.

Combining these in a hybrid approach plays to the strengths of automated systems paired with human oversight. Advanced analytics and AI tools can bring together diverse datasets, then help detect inconsistencies and outliers in an individual’s or entity’s profile. Machine learning can then filter screening results and enhance identity verification processes, while human analysts play a crucial role in conducting enhanced due diligence for high-risk cases.




Get in touch

For more information about how Moody’s AI-led solutions can integrate leading datasets, automate due diligence workflows, and support intelligent screening as part of preventing fraudulent activity, please get in touch – we would love to hear from you.