Rise of Deepfake Scams: A Beginner’s Guide

The Coalition of Cyber Investigators examines how deepfake scams operate, the technology that underpins them, detection and prevention efforts, and the measures people can take to avoid this threat.

Paul Wright, Neal Ysart & Claudia Tietze

5/30/2025 · 15 min read

ADVANCED TECHNIQUES TO IDENTIFY AI-GENERATED FAKES

Identifying AI fakes requires deep-level analysis rather than surface verification. This means employing an end-to-end approach that cross-verifies metadata, verifies sources, and checks behavioural context. AI fakes are often betrayed not by obvious visual differences but by subtle inconsistencies that make the content feel wrong.

Digital fingerprinting and watermarking technologies are taking centre stage in this effort. They can detect compression, colour-gradient, or metadata differences and verify source integrity using blockchain-derived digital signatures. Complementing these efforts is AI-driven forensic analysis, which has matured into a reliable measure: it scans for patterns that reveal artefacts created by artificial sources and applies adversarial training to recognise GAN-generated content.
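
As an illustration of the fingerprinting idea, the sketch below compares perceptual hashes of two images using the open-source Pillow and ImageHash Python libraries; the file names and the distance threshold are placeholder assumptions, not features of any specific product mentioned above.

```python
# Minimal sketch: perceptual-hash fingerprinting to flag a possibly
# manipulated copy of a known-original image.
# Assumes: pip install Pillow ImageHash; file names are placeholders.
from PIL import Image
import imagehash

def fingerprint_distance(original_path: str, suspect_path: str) -> int:
    """Return the Hamming distance between perceptual hashes (0 = identical)."""
    original_hash = imagehash.phash(Image.open(original_path))
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    return original_hash - suspect_hash  # ImageHash overloads '-' as Hamming distance

if __name__ == "__main__":
    distance = fingerprint_distance("press_photo_original.jpg", "circulating_copy.jpg")
    # Small distances suggest the same source image; larger ones suggest
    # re-rendering or manipulation. The threshold below is illustrative only.
    print("match" if distance <= 8 else f"possible manipulation (distance={distance})")
```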

Behavioural analysis is another critical step: it searches for inconsistencies in what is being done or said and examines whether the content logically fits with other known information. Hardware-based forensic techniques extract distinctive noise patterns from images and examine inconsistencies in file creation and modification dates, providing hints about the authenticity of the content. Image-centric forensic tests round out the list, checking EXIF data[37] for discrepancies, searching for pixel-level inconsistencies, and performing reverse image searches to confirm the origin of images.
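
For example, a first-pass EXIF check of the kind described above can be scripted in a few lines with the Pillow library; the file name is a placeholder, and the absence of EXIF is only a weak signal to investigate further, not proof of synthesis.

```python
# Minimal sketch: inspect EXIF metadata with Pillow.
# AI-generated images frequently ship with no camera EXIF at all, so an
# empty result here is a (weak) signal worth investigating further.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    """Return EXIF tags as a {name: value} dict (empty if none present)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = read_exif("suspect_image.jpg")  # placeholder file name
    if not tags:
        print("No EXIF data: consistent with synthetic or stripped images.")
    else:
        for key in ("Make", "Model", "DateTime", "Software"):
            print(f"{key}: {tags.get(key, '<absent>')}")
```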

The Coalition for Content Provenance and Authenticity (C2PA) has developed technical standards to certify the source and history of media content, aiming to combat the spread of misleading information online. In 2024, OpenAI integrated C2PA media provenance metadata into images generated by DALL·E 3 through ChatGPT and its API, enhancing transparency and trust in digital media. This development helps distinguish between AI-generated and human-created content, making it easier to verify the authenticity of online media[38].
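
A deliberately crude sketch of checking for embedded C2PA provenance data follows; it only suggests that a C2PA/JUMBF marker is present, and real verification of signatures and manifest history should be done with official C2PA tooling. The file name is a placeholder.

```python
# Deliberately crude sketch: check whether a C2PA/JUMBF marker appears to be
# embedded in an image file. Presence of the marker only suggests provenance
# metadata EXISTS; verifying signatures and manifest history requires official
# C2PA tooling (e.g. the open-source c2patool). File name is a placeholder.
def has_c2pa_marker(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # C2PA manifest stores live in JUMBF boxes; "jumb"/"jumd" are the JUMBF
    # box types and "c2pa" is the manifest-store label.
    return b"c2pa" in data and (b"jumb" in data or b"jumd" in data)

if __name__ == "__main__":
    print(has_c2pa_marker("dalle_output.png"))  # placeholder file name
```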

Error-level analysis (ELA) is another effective forensic technique for exposing areas of tampering in an image. It works by re-saving the image at reduced quality and quantifying the difference between the re-saved and original versions. ELA can identify inconsistencies in facial expressions, lighting, and shadowing in deepfake images or videos; reveal regions where text or signatures have been digitally added or altered in synthetic documents; and expose persistent noise patterns or other artefacts in artificially created photographs.
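
A minimal ELA implementation along these lines, using the Pillow library, might look like the following; the quality setting and file names are illustrative assumptions.

```python
# Minimal sketch of Error-Level Analysis (ELA) with Pillow: re-save the
# image as JPEG at reduced quality and amplify the per-pixel differences.
# Regions edited after the last save often re-compress differently and
# show up brighter in the result. File names are placeholders.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("_ela_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_ela_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Scale the (usually faint) differences so they become visible.
    max_diff = max(max(channel) for channel in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

if __name__ == "__main__":
    error_level_analysis("suspect_document.jpg").save("suspect_document_ela.png")
```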

Noise Level Analysis (NLA) complements ELA by examining the noise level in different regions of an image. NLA may reveal textures incompatible with the rest of the image, detect inconsistencies between background noise and the subject, and even expose noise patterns atypical of natural images, all symptoms of potential AI creation.
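
A simple NLA-style noise map can be sketched with NumPy and SciPy as below; the smoothing and block sizes are illustrative choices, and interpreting the map still requires a trained examiner.

```python
# Minimal sketch of Noise-Level Analysis (NLA): estimate local noise by
# measuring residual variance in small blocks after light smoothing.
# Natural photographs show broadly consistent sensor noise; sharply
# different noise levels between regions can indicate compositing.
# Assumes: pip install numpy scipy Pillow. File name is a placeholder.
import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def local_noise_map(path: str, block: int = 8) -> np.ndarray:
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    residual = gray - uniform_filter(gray, size=3)   # remove low-frequency content
    local_mean_sq = uniform_filter(residual**2, size=block)
    return np.sqrt(local_mean_sq)                    # per-pixel noise estimate

if __name__ == "__main__":
    noise = local_noise_map("suspect_photo.jpg")
    print(f"noise std spread: min={noise.min():.2f}, max={noise.max():.2f}")
```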

With such methods, forensic investigators and examiners can better separate genuine content from highly sophisticated AI-generated fakes, maintaining the trust and integrity of digital content[39].

STRENGTHENING DEFENCES AGAINST DEEPFAKES

With deepfake threats growing daily, organisations must implement substantial defence systems to protect their business and data. This begins with training staff to identify potential attacks and take the necessary actions against them. Organisations should also keep practising mock attack drills that replicate their actual incident-response procedures to strengthen their response systems. Highly publicised deepfake attacks enable companies to base their simulation exercises on real scenarios, providing insight into the resilience of their controls against similar incidents.

Multi-factor authentication and conditional access policies provide stronger authentication controls that protect against unauthorised access using compromised credentials. Organisations must also adopt a defence-in-depth approach, with multiple security controls and alerting mechanisms to contain a suspected breach if an initial defence is compromised. Regular security audits and third-party penetration testing provide independent assurance that the security posture and defensive measures keep pace with evolving threats[40].
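
As one concrete building block, the sketch below shows time-based one-time-password (TOTP) verification with the open-source pyotp library; in a deepfake-aware workflow, even a payment instruction received by video call would still require an out-of-band code of this kind before execution.

```python
# Minimal sketch of one MFA building block: TOTP verification using the
# open-source pyotp library (pip install pyotp).
import pyotp

# In practice the secret is provisioned once per user and stored securely;
# generating it inline here is for illustration only.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("current code:", totp.now())            # what the user's authenticator app displays
print("verified:", totp.verify(totp.now()))   # server-side check -> True
print("stale code:", totp.verify("000000"))   # wrong/expired code -> False
```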

Furthermore, there are tools designed to protect images from being used as AI training data, which would also prevent them from being used to create deepfakes. While not every instance of an individual's likeness on the internet can be controlled in this manner, the low-hanging fruit of corporate websites, personal and professional posts, and official releases can be[41][42].

CONCLUSION

Deepfake fraud is a rapidly escalating cybersecurity threat that requires an end-to-end, multi-layered detection and mitigation solution. Using OSINT, Cyber Threat Intelligence, and Digital Forensics, investigators can proactively track potential deepfake fraud content, identify impending scams at the development stage, and respond before extensive damage is done. These efforts must be aligned with a risk-aware corporate culture, ensuring that employees receive sufficient training and are encouraged to behave security-consciously.

However, as technologies improve, scammers will become increasingly skilled at maintaining their anonymity. Therefore, it is crucial to continuously enhance forensic methods, invest in high-quality AI-powered detection tools, and improve public awareness education programmes. Law enforcement agencies, cybersecurity firms, and regulatory bodies must collaborate to establish guidelines, develop legislative policies, and enhance coordination to combat deepfake-based scams.

Ultimately, only a combination of technological innovation, robust policy frameworks, and widespread digital literacy initiatives can help individuals and organisations recognise and resist synthetic media manipulation. Fostering a culture of greater awareness combined with security-conscious behaviours will help make life more difficult for deepfake fraudsters and reduce the opportunities for exploitation. The more companies invest in educating their employees, the better equipped society becomes to protect itself against the rapidly developing threat of synthetic media.

Authored by: The Coalition of Cyber Investigators with contributions from guest author Claudia Tietze, Community Manager at Valinor Intelligence and Senior Managing Director at Farallon LLC.

© 2025 The Coalition of Cyber Investigators. All rights reserved.

The Coalition of Cyber Investigators is a collaboration between

Paul Wright (United Kingdom) - Experienced Cybercrime, Intelligence (OSINT & HUMINT) and Digital Forensics Investigator; and

Neal Ysart (Philippines) - Elite Investigator & Strategic Risk Advisor, Ex-Big 4 Forensic Leader.

With over 80 years of combined hands-on experience, Paul and Neal remain actively engaged in their field.

They established the Coalition to provide a platform to collaborate and share their expertise and analysis of topical issues in the converging domains of investigations, digital forensics and OSINT. This convergence has created grey areas around critical topics, including the admissibility of evidence, process integrity, ethics, contextual analysis and validation. The Coalition is Paul and Neal's way of contributing to a discussion that is essential if the unresolved issues around OSINT-derived evidence are to be addressed effectively. Please feel free to share this article and contribute your views.

The Coalition of Cyber Investigators, with decades of hands-on experience in investigations and OSINT, is uniquely positioned to support organisations targeted by deepfake fraudsters. Our team’s expertise is not just theoretical—it’s built on years of real-world investigations, a deep understanding of the dynamic nature of digital intelligence, and a commitment to the highest evidential standards.

[1] Charlwood, R. (2025, February 12). iProov study reveals deepfake blindspot: Only 0.1% of people can accurately detect AI-generated deepfakes. iProov. https://www.iproov.com/press/study-reveals-deepfake-blindspot-detect-ai-generated-content (Accessed 30 May 2025)

[2] Interfax-Ukraine. (2025, February 19). Ukrainian Red Cross warns of fake regarding cash payments from organization. Interfax-Ukraine. https://en.interfax.com.ua/news/general/1049123.html (Accessed 30 May 2025)

[3] Cloud & More. (2024, December 9). Deepfakes and Phishing scams: How AI Is changing cyber security. https://cloudandmore.co.uk/deepfakes-phishing-scams-ai-cyber-security/ (Accessed 23 February 2025)

[4] King, J., & Croucher, S. (2025, May 30). White House responds to attempts to impersonate Trump advisor Susie Wiles. Newsweek. https://www.newsweek.com/white-house-susie-wiles-trump-impersonate-fbi-2078802 (Accessed 30 May 2025)

[5] Accelerated Capability Environment. (2025, February 5). Innovating to detect deepfakes and protect the public. GOV.UK. https://www.gov.uk/government/case-studies/innovating-to-detect-deepfakes-and-protect-the-public (Accessed 30 May 2025)

[6] What is a Deepfake Attack? | CrowdStrike. (n.d.). https://www.crowdstrike.com/en-us/cybersecurity-101/social-engineering/deepfake-attack/ (Accessed 30 May 2025)

[7] World Economic Forum. (2025, February 4). ‘This happens more frequently than people realize’: Arup chief on the lessons learned from a $25m deepfake crime. https://www.weforum.org/stories/2025/02/deepfake-ai-cybercrime-arup/ (Accessed 30 May 2025)

[8] Cloud & More. (2024b, December 9). Deepfakes and Phishing scams: How AI Is changing cyber security. https://cloudandmore.co.uk/deepfakes-phishing-scams-ai-cyber-security/ (Accessed 30 May 2025)

[9] Deepfake colleagues trick HK clerk into paying HK$200m - RTHK. (n.d.). https://news.rthk.hk/rthk/en/component/k2/1739119-20240204.htm (Accessed 30 May 2025)

[10] Richter, A. (2025). How bad actors exploit weak fraud prevention measures. Enformion. https://www.enformion.com/blog/how-bad-actors-exploit-weak-fraud-prevention-measures/ (Accessed 30 May 2025)

[11] Vousinas, G. L. (2019). Advancing theory of fraud: The S.C.O.R.E. model. Journal of Financial Crime, 26(1), 372–381.

[12] EY. (n.d.). Preventing and detecting fraud: How to strengthen the roles of companies, auditors, and regulators. https://www.ey.com/en_ao/insights/assurance/preventing-and-detecting-fraud-how-to-strengthen-the-roles-of-companies-auditors-and-regulators (Accessed 30 May 2025)

[13] FBI. (2022, July 1). FBI warns that scammers are using deepfakes to apply for sensitive jobs. WilmerHale Privacy and Cybersecurity Law Blog. https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20220701-fbi-warns-that-scammers-are-using-deepfakes-to-apply-for-sensitive-jobs (Accessed 30 May 2025)

[14] Wright, P. & Ysart, N., The Coalition of Cyber Investigators (2024, September 30). Black OSINT vs. white OSINT: The dual-use dilemma. https://www.linkedin.com/pulse/black-osint-vs-white-dual-use-dilemma-f2onc (Accessed 30 May 2025)

[15] Deepfakes: The latest weapon in the cyber security arms race | beazley. (2024, October 4). Beazley Insurance. https://www.beazley.com/en-US/news-and-events/deepfakes-the-latest-weapon-in-the-cyber-security-arms-race/ (Accessed 30 May 2025)

[16] Godwin, C. (2021, May 18). Elon Musk impersonators earn millions from crypto-scams. BBC News. https://www.bbc.co.uk/news/technology-57152924 (Accessed 30 May 2025)

[17] Election Commission of India. (2024). Social media manipulation during elections. https://elections24.eci.gov.in/docs/2eJLyv9x2w.pdf (Accessed 30 May 2025)

[18] McCrank, J. (2023, October 5). AI is making bank scams worse—and it’s just the beginning. Fortune. https://fortune.com/article/ai-makes-bank-scams-worse/ (Accessed 30 May 2025)

[19] Harwell, D. (2023, June 9). A viral hoax about the Pentagon spread. Then Elon Musk tweeted about it. The Washington Post. https://www.washingtonpost.com/technology/2023/06/09/viral-hoax-pentagon-twitter/ (Accessed 18 May 2025)

[20] Kaur, A. (2023, April 27). Jerome Powell was pranked by Russian comedians posing as Ukraine’s president. CNN Business. https://edition.cnn.com/2023/04/27/business/jerome-powell-prank/index.html (Accessed 18 May 2025)

[21] Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang, M., & Ferrer, C. C. (2020). The DeepFake Detection Challenge (DFDC) Dataset. arXiv preprint arXiv:2006.07397. https://arxiv.org/abs/2006.07397 (Accessed 30 May 2025)

[22] DeepFaceLab - The most popular deepfake creation tool. Available at: https://github.com/iperov/DeepFaceLab

[23] FaceSwap - Open-source deepfake software. Available at: https://github.com/deepfakes/faceswap

[24] First Order Motion Model - AI-based motion transfer model. Available at: https://github.com/AliaksandrSiarohin/first-order-model

[25] Real-Time Voice Cloning (RTVC) - Open-source AI for synthetic voice generation. Available at: https://github.com/CorentinJ/Real-Time-Voice-Cloning

[26] FakeYou - Deepfake text-to-speech service. Available at: https://fakeyou.com

[27] ElevenLabs - Advanced AI-based voice cloning. Available at: https://elevenlabs.io

[28] Matsakis, L. (2023, May 17). I cloned myself with AI. She fooled my bank and my family. The Wall Street Journal. https://www.wsj.com/articles/i-cloned-myself-with-ai-she-fooled-my-bank-and-my-family-356bd1a3 (Accessed 18 May 2025)

[29] RunwayML - AI-driven video generation tool. Available at: https://runwayml.com

[30] Kapwing - Online video editor used for deepfake manipulation. Available at: https://www.kapwing.com

[31] Stable Diffusion Video - AI-generated video synthesis. Available at: https://stablediffusionweb.com

[32] Burt, T., & Horvitz, E. (2020, September 1). New steps to combat disinformation. Microsoft on the Issues. https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/ (Accessed 30 May 2025)

[33] Law enforcement - Sensity AI. (n.d.). Sensity. https://sensity.ai/use-cases/law-enforcement/ (Accessed 30 May 2025)

[34] Hurler, K. (2022, November 17). Intel says its deepfake detector has 96% accuracy. Gizmodo. https://gizmodo.com/intel-deep-fake-ai-1849795542 (Accessed 30 May 2025)

[35] Amped Software. (n.d.). Amped Authenticate: Photo and video analysis and tampering detection. https://ampedsoftware.com/authenticate (Accessed 30 May 2025)

[36] InVID Project. (n.d.). InVID verification plugin. https://www.invid-project.eu/tools-and-services/invid-verification-plugin (Accessed 18 May 2025)

[37] EXIF stands for Exchangeable Image File Format. It's a standard way to store information about digital images, including the camera and lens used, the date and time, and the shooting settings.

[38] Center for News, Technology & Innovation. (2024, December 3). Watermarks are Just One of Many Tools Needed for Effective Use of AI in News - Center for News, Technology & Innovation. https://innovating.news/article/watermarks-are-just-one-of-many-tools-needed-for-effective-use-of-ai-in-news/ (Accessed 30 May 2025)

[39] Williams, J. (2025, February 7). Tackling AI threats: Advanced DFIR methods and tools for deepfake detection. Pen Test Partners. https://www.pentestpartners.com/security-blog/tackling-ai-threats-advanced-dfir-methods-and-tools-for-deepfake-detection/ (Accessed 30 May 2025)

[40] Law Society of Scotland. (n.d.). A Deep Dive into Deepfakes. https://www.lawscot.org.uk/news-and-events/blogs-opinions/a-deep-dive-into-deepfakes/ (Accessed 30 May 2025)

[41] OpenDataScience. (2023, August 8). 3 tools to safeguard images from AI scraping. https://opendatascience.com/3-tools-to-safeguard-images-from-ai-scraping/ (Accessed 18 May 2025)

[42] Hernandez, D. (2023, August 1). These new tools could help protect our pictures from AI. MIT Technology Review. https://www.technologyreview.com/2023/08/01/1077072/these-new-tools-could-help-protect-our-pictures-from-ai/ (Accessed 18 May 2025)


INTRODUCTION

Deepfake technology has emerged as a powerful tool for cybercrime. It allows criminals to create highly sophisticated fake content that manipulates individuals and organisations into taking actions they would not usually take.

The essence of these scams lies in the large-scale exploitation of public trust and emotions. They target those who are most vulnerable and least likely to possess the resources to combat the fallout. The problem will only worsen as Artificial Intelligence (AI) becomes more sophisticated, realistic, and accessible. Scams have progressed from basic manipulations and fake media content to more advanced forms, utilising deepfake technology and creating unprecedented challenges for cybersecurity experts and law enforcement worldwide.

Deepfakes are successful because our understanding has not yet caught up to this new reality. We tend to trust video content more than other forms of dissemination and have higher confidence in our ability to identify fake content. A report by Australia’s Security Brief, based on research from iProov, states that only 0.1% of people can identify a deepfake[1]. To complicate matters further, older individuals tend to be less aware of deepfake technology than their younger counterparts.

These scams are not always for financial gain. They can also target information, intellectual property, or, as in the case of the Ukrainian Red Cross Society, sow seeds of distrust, create chaos, and set the groundwork for future fracture points. In the Ukrainian Red Cross scam, vulnerable people were targeted by deepfake technology and the promise of a UAH 2,500 payment. Fraudsters called on these people to file applications with their local branch or contact a hotline. The result is an artificial strain on Ukrainian Red Cross resources and a sense of betrayal among the population that can later be exploited through further disinformation techniques, such as the evolution of conspiracy theories[2].

HOW DEEPFAKE SCAMS WORK

The emergence of deepfake scams can be explained by combining old fraud schemes with the latest advancements in AI, “Old Crimes – New Tools”. While institutions have had decades to shore up traditional security models, individuals, even those educated and aware of various scams, remain vulnerable to deepfakes, which bypass these defences with a more convincing sense of authenticity. Criminals can now expertly craft hybrids of traditional crime and modern technology capable of deceiving even the most competent professionals. Understanding how these scams work, their mechanics, and the available countermeasures is the first step in tackling the problem[3].

Technical Implementation

Deepfake scams typically follow a structured process. Fraudsters gather video and audio samples of their target, often public figures[4], executives, or humanitarian workers, from social media, press conferences, and news reports[5]. These samples are then used to train AI models, employing Generative Adversarial Networks (GANs) and other AI frameworks to replicate the target's voice, facial expressions, and mannerisms[6]. Once trained, the AI generates synthetic videos where the impersonated individual delivers tailored messages designed to elicit trust and urgency. Finally, fraudsters distribute these deepfake videos through fake websites, phishing emails, and social media accounts to maximise their reach[7].
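
To make the GAN training dynamic concrete, here is a toy adversarial loop in PyTorch on one-dimensional data; real deepfake systems use far larger image and video networks, but the generator-versus-discriminator principle is the same.

```python
# Toy sketch of the adversarial training loop behind GANs, using PyTorch
# on 1-D data. The generator learns to produce samples the discriminator
# cannot tell apart from "real" data.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # noise -> sample
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # stand-in "real" data: N(3, 0.5)
    fake = gen(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(disc(real), torch.ones(64, 1)) + \
             loss_fn(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(disc(gen(torch.randn(64, 8))), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

with torch.no_grad():
    print("mean of generated samples:", gen(torch.randn(1000, 8)).mean().item())  # ~3.0
```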

Social Engineering Tactics

Fraudsters' psychological manipulation techniques amplify the effectiveness of deepfake scams. By impersonating trusted figures, scammers exploit the principle of authority, making their fraudulent messages appear legitimate. These scams often coincide with crises, such as natural disasters, pandemics, or economic instability, when victims are particularly vulnerable[8].

Additionally, fraudsters create artificial urgency, pressuring targets to act quickly under the pretence of limited-time offers or deadlines, which prevents them from verifying the authenticity of the claims. Many deepfake scams also involve directing victims to fraudulent websites, where they are prompted to enter personal data and banking details or upload identity documents under the guise of verification. In more advanced operations, victims are manipulated into downloading seemingly harmless applications or forms that contain malware designed to steal sensitive information or deploy ransomware.

Social engineering presents fewer risks to the criminal than earlier approaches that relied on hacking, such as accessing a CEO’s emails. Hacking leaves trails that can be traced, while social engineering often leaves far less evidence for investigators to follow.

In 2024, scammers emailed a money-transfer request to a branch employee of a Hong Kong financial institution. When the employee questioned the email, a deepfake video conference call was staged featuring the company's CFO and other employees. This false sense of authenticity led the employee to make 15 transfers totalling $25 million to the scammers. Before deepfake technology, such a stunning heist would have required access to multiple technology systems; in this case, it required only an email and a Zoom call[9].

Background or Inside Knowledge

Social engineering tactics can also be deployed to obtain what might be called the “deepfake scammer's secret weapon”: detailed background or inside knowledge of the company or individual being targeted[10]. If a scammer knows when billing cycles occur, who the beneficiaries are, and how payments are justified, the message can appear so realistic that targets may already be expecting that type of instruction.

Many other methods exist for criminals to obtain this type of information. In addition to social engineering, these include:

  • Insider collusion, such as disgruntled employees or those offered inducements, threatened or otherwise coerced staff, or friends or associates of the scammers[11].

  • Former employees with sufficient retained knowledge to make the premise of a deepfake scam highly realistic.

  • Cybersecurity breaches that enable scammers to gather relevant background information.

  • Exploiting weaknesses or breaches in third parties, such as service providers or other commercial partners[12].

  • Surveillance, including eavesdropping in public places and technical exploits, to gather sensitive information.

  • Job applications and interviews, which deepfake scammers use to gather helpful background information[13].

  • OSINT, or what The Coalition of Cyber Investigators refers to as “Black OSINT”[14], can provide a treasure trove of information that could be helpful to a scammer from sources as varied as family social media or identification of clubs and hobbies of targets.

In summary, background or inside knowledge can significantly increase the success rates of deepfake scammers and is one of the significant indicators that investigators look for when responding to fraud.

Financial Exploitation

Deepfake scammers employ multiple methods to extract money and financial information. Some scams require victims to make upfront payments, disguised as processing fees, before receiving a larger promised benefit. Others use deepfake videos to recruit individuals into unknowingly serving as money mules, laundering illicit funds for cybercriminal networks. Cryptocurrency scams have also surged, with deepfake-generated endorsements from high-profile figures, such as Elon Musk, convincing victims to invest in fraudulent digital assets. Personal data harvested through deepfake scams is also frequently sold on dark web marketplaces, where it can be used for identity theft or targeted phishing attacks[15].

CASE STUDIES: HIGH-PROFILE DEEPFAKE SCAMS

Recent incidents demonstrate the evolving sophistication and impact of deepfake fraud:

1. UK Crypto Scam (2023): A deepfake video of Elon Musk was used to promote a fake cryptocurrency investment scheme, resulting in millions of dollars in losses for unsuspecting investors. The operation involved creating dedicated websites that resembled legitimate financial platforms and using targeted social media advertising to reach potential victims[16].

2. Indian Election Disinformation (2024): Deepfake videos of politicians were circulated to spread false campaign promises and misinformation during the elections. Investigation revealed a network of inauthentic social media accounts with coordinated posting patterns designed to maximise the reach of the synthetic content[17].

3. Banking Verification Scam (2023): Criminals created deepfake customer service representatives from major banks and conducted video calls with customers to "verify account details" following purported security breaches. The operation harvested banking credentials and authentication codes from thousands of victims across multiple countries[18].

4. Image of Pentagon Explosion (2023): An AI-generated image circulated on Twitter (now X) by a pro-Russian account showed an explosion at the Pentagon, reportedly causing a significant drop in the Dow Jones; the index fell 85 points in four minutes[19].

5. US Federal Reserve Chair (2023): Jerome Powell, the US Federal Reserve Chairman, had a video conversation about politics and the global economy with a deepfake of Ukrainian President Volodymyr Zelensky. The Russian pranksters behind it, who had previously targeted the head of the International Monetary Fund and Angela Merkel, recorded the discussion and released it in pieces[20].

TOOLS USED IN DEEPFAKE CREATION AND DETECTION

The technological arms race between deepfake creators and defenders continues to escalate, with both sides leveraging increasingly sophisticated tools and techniques. Understanding the capabilities and limitations of these tools is essential for developing effective detection and prevention strategies[21].

Deepfake Creation Tools

To understand and detect deepfake scams, it is crucial to recognise the tools fraudsters use. DeepFaceLab[22], one of the most widely used open-source tools, enables face-swapping with minimal technical expertise but leaves detectable artefacts, such as inconsistencies around the jawline and eye region. FaceSwap[23], another deepfake creation tool, utilises multiple GAN architectures and often produces distinct blending patterns around facial borders, making it possible for forensic experts to identify manipulated content. The First Order Motion Model[24], a deepfake tool that animates still images, is frequently exploited to create "talking head" videos but leaves behind warping artefacts that forensic analysts can use to verify authenticity.

Voice synthesis technology is also commonly used in deepfake scams. Real-Time Voice Cloning (RTVC)[25] can generate synthetic speech from just a few minutes of audio. However, it typically struggles with natural prosody and emotional inflexion, making it detectable through acoustic analysis. FakeYou[26], a commercial deepfake text-to-speech service, specialises in celebrity and fictional character voice cloning, though it often exhibits spectral irregularities that can be identified using forensic techniques. More advanced tools, such as ElevenLabs[27], produce highly realistic synthetic speech but incorporate subtle watermarking that investigators can detect through acoustic analysis. In 2023, a Wall Street Journal reporter cloned her voice and fooled her bank’s identity verification system, demonstrating that the voice-cloning tools often combined with video to create deepfakes can also be used on their own by criminals[28].
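
One simple acoustic measure examiners can compute is spectral flatness; the NumPy sketch below calculates it per frame for a mono, 16-bit WAV file (both assumptions, as is the file name) and is a crude illustrative cue, not a production detector.

```python
# Illustrative sketch: per-frame spectral flatness of an audio clip.
# Flatness near 1 means noise-like spectra, near 0 means tonal; unnaturally
# uniform flatness across frames is one crude cue for synthetic speech.
# Assumes a mono, 16-bit WAV file; name is a placeholder.
import wave
import numpy as np

def spectral_flatness(path: str, frame: int = 2048) -> np.ndarray:
    with wave.open(path, "rb") as w:
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    samples = samples.astype(np.float64)
    flatness = []
    for i in range(len(samples) // frame):
        spectrum = np.abs(np.fft.rfft(samples[i * frame:(i + 1) * frame])) + 1e-12
        # Spectral flatness = geometric mean / arithmetic mean of the spectrum.
        flatness.append(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))
    return np.array(flatness)

if __name__ == "__main__":
    f = spectral_flatness("suspect_voice.wav")
    print(f"flatness mean={f.mean():.3f}, std={f.std():.3f}")  # low std can be a cue
```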

Deepfake video manipulation is facilitated by tools such as RunwayML[29], which enables motion synthesis and video generation. While its outputs are visually convincing, forensic examiners can detect synthetic content by analysing temporal inconsistencies in background elements. Kapwing[30], a web-based video editor, is frequently used to blend deepfake elements with legitimate footage; however, forensic timeline analysis can reveal composition artefacts at edit points. Additionally, Stable Diffusion Video[31] enables the creation of entirely synthetic video content that mimics real-life scenes, although it produces characteristic frame transition patterns that forensic investigators can analyse.
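
As a rough illustration of the temporal-consistency analysis mentioned above, the OpenCV sketch below profiles frame-to-frame change in a video; the file name and spike threshold are placeholder assumptions.

```python
# Illustrative sketch: measure frame-to-frame change with OpenCV to surface
# unusual temporal patterns (e.g., a background that is abnormally static
# while a synthesised face moves). Assumes: pip install opencv-python numpy;
# the video file name is a placeholder.
import cv2
import numpy as np

def frame_change_profile(path: str) -> np.ndarray:
    cap = cv2.VideoCapture(path)
    changes, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            changes.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    return np.array(changes)

if __name__ == "__main__":
    profile = frame_change_profile("suspect_clip.mp4")
    spikes = int((profile > 3 * profile.mean()).sum())  # illustrative threshold
    print(f"mean change={profile.mean():.2f}, spikes={spikes}")
```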

Deepfake Detection Tools

Law enforcement and cybersecurity professionals use advanced detection tools to counteract deepfake scams. Microsoft Video Authenticator[32] scans videos for subtle blending artefacts that indicate manipulation. Sensity AI[33], a leading deepfake detection platform, specialises in identifying face swaps and voice cloning, making it a valuable tool for law enforcement. Intel FakeCatcher claims a 96% accuracy rate in deepfake detection by analysing blood flow patterns in video footage, a method currently available to select security partners[34].

In addition to automated detection, forensic analysis platforms play a crucial role in identifying deepfakes. Amped Authenticate[35] is professional forensic software that detects inconsistencies in lighting and reflections, often telltale signs of manipulated content.

The InVID browser plugin helps journalists quickly fact-check content, with a heavy emphasis on identifying manipulated and AI-generated images and videos. It includes filters that create a heat map of potential manipulation and artefacts, and lets users extract keyframes and magnify areas of interest[36].
