Fifty Years of Deception: From Forged Letters to Synthetic Identities

To mark their 50th published article, The Coalition of Cyber Investigators look back at the evolution of deception over the last 50 years and the role that OSINT now plays in unmasking it.

Paul Wright & Neal Ysart

9/21/2025 · 9 min read

INTRODUCTION

This is the fiftieth article published by The Coalition of Cyber Investigators, and it feels right that this milestone is used not just to share further open-source intelligence (OSINT) and investigation techniques and guidance but also to pause and reflect.

Fifty is a number with gravitas. Our co-founders are well into their fifth decade of active work in intelligence and investigations. They have spent years pursuing criminals and watching deception evolve from “no-tech” to “hi-tech.” This fiftieth paper therefore provides the perfect occasion to look back, look forward, and consider what lessons can be learned.

So instead of tracking a single case, we’ve chosen deception itself as the subject. Not one scam or threat, but the historical lineage of the con. How did deception look fifty years ago? How has it changed in the decades since? And what is its significance to the world we live in now, where we see deceptive threats such as AI-generated profiles, cloned voices, synthetic news anchors, deepfake videos, or scam press releases published on official channels? To what extent do these threats resemble old-school forged letters, doctored photographs, and fake company prospectuses from 50 years ago?

These parallels matter to investigators working with OSINT because open sources provide many clues that can link old deceptions to their modern incarnations.

This article, therefore, marks our fiftieth publication with fifty years of context.

Deception has never been static. It shifts its surface constantly, but the underlying mechanics remain familiar: manipulating trust, exploiting fear, and exploiting the gap between appearance and reality.

COLD WAR DECEPTION AND THE AGE OF FORGERIES

During the Cold War, deception as a weapon belonged primarily to nation-states. It was strategic, slow-moving, and often aimed at shaping international opinion. The Soviet bloc ran what it called active measures (in Russian, “aktivnye meropriyatiya / активные мероприятия”), covert campaigns ranging from the planting of false stories to complete document forgeries. One famous case, launched in 1983, was “Operation Denver” (better known in the West as Operation INFEKTION), in which Soviet services planted the claim that HIV/AIDS had been deliberately created in a US military laboratory. The story appeared first in a small Indian newspaper before rippling globally, amplified through sympathetic outlets and activist groups.

The British ran their own deceptions. Declassified files at the UK National Archives describe black propaganda operations in Africa, designed to undermine Cold War enemies by encouraging anti-communist ideals and inciting racial violence and tensions. These operations relied on the authority of print: if a forged letter appeared in a respected newspaper, it took on a legitimacy that was hard to contest. In much the same way, a fake press release published through a respected channel, such as the London Stock Exchange’s Regulatory News Service, looks genuine to the victim of a boiler room investment scam – a topic we will explore in more detail in an upcoming paper from The Coalition of Cyber Investigators analysing the amplification of fraud.

The hallmarks of this older deception, however, were patience and significant resources: time, funding, skilled forgers, and access to the right printing machinery – compare that with the almost instantaneous results that online services provide today. Forging a letter meant finding the right typewriter, ink and seals. Placing stories required bribed journalists or front organisations. Mass deception was complex and expensive, and individuals could not easily mount it alone; it required collusion. Yet even then, traces of these operations surfaced publicly – newspaper cuttings, questionable pamphlets, even simple rumours – the kind of fragments that in today’s terms would be classed as OSINT.

THE SCAMMERS OF THE 80S AND 90S

By the 1980s and 90s, mass deception began to drift out of the hands of governments and into the marketplace. Fax technologies, mobile phones and the rise of cheap print allowed fraudsters to run investment scams at scale.

Boiler room scams became infamous in London, Hong Kong, New York and beyond. Victims received slick brochures, signed letters and follow-up calls from “brokers” offering exclusive deals. Regulators began publishing lists of scam firms that used cloned company identities and falsified registration numbers – warning lists that several international financial services regulators still maintain on their websites today.

The scammers' underlying technique mirrored Cold War deception: mimic authority, mix the false with the real, apply pressure and pull psychological levers such as the fear of missing out (“FOMO”). But the environment had changed. Instead of a forged document drifting across embassies, the weapon was now a glossy prospectus landing in a letterbox, followed by a telephone call. Distribution became the lever of power.

In parallel, investigators increasingly relied on open source records - company filings, regulatory warnings or returns, and news reports - to help untangle these scams long before OSINT became the discipline it is today.

The arrival and mass adoption of email in the mid-1990s supercharged this. The so-called “Nigerian 419” or “advance fee fraud” letters are a cliché now, but they were ground-breaking. Mass deception could be automated for the first time, reaching hundreds of thousands of potential victims at minimal cost. The scams barely needed to be convincing. The operation made its money as long as a small fraction of recipients fell prey.

ONLINE COMMUNITIES AND HIJACKING TRUST

The early Internet of the late 1990s and early 2000s brought forums, chatrooms, and blogs. Here, deception took on a communal quality. Misinformation spread not only because state actors placed it but also because online groups and activist communities had started to generate, share, and maintain it.

Studies of early online discussions showed how conspiracy narratives could snowball. Cass Sunstein’s work on the “cascade” effect of conspiracy theories describes how groups start with minor confusion, then reinforce each other’s doubts until a complete theory is built, regardless of evidence. It’s almost as if the internet were tailor-made for misinformation campaigns.

This was a different texture of deception: less polished, more participatory. Nobody needed to print a forged government memo or a counterfeit private letter between a politician and their secret lover. A community, convinced of its own logic, could generate mistruths from misinterpretations of grainy video or selective statistics. The issue was no longer spotting a forged document but discerning when thousands of people had persuaded each other into false beliefs. For open-source investigators, this meant learning to watch patterns: monitoring who reposted a particular viewpoint, and tracking when a forum’s conspiratorial narrative first appeared and who interacted with it. Those habits became the early building blocks of modern OSINT practice.
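The habit of tracking when a narrative first surfaced and who seeded it can be illustrated in a few lines of code. This is a minimal sketch: the post records, field names and phrases below are entirely invented for illustration and do not come from any real forum data.

```python
from datetime import datetime

# Hypothetical forum posts: (timestamp, author, text) — invented for illustration.
posts = [
    ("2004-03-02 09:15", "user_a", "the grainy video proves it"),
    ("2004-03-01 22:40", "user_b", "the grainy video proves it"),
    ("2004-03-03 11:05", "user_c", "statistics show a cover-up"),
    ("2004-03-02 14:30", "user_d", "the grainy video proves it"),
]

def first_appearance(posts, phrase):
    """Return the earliest (timestamp, author) for posts containing the phrase."""
    matches = [
        (datetime.strptime(ts, "%Y-%m-%d %H:%M"), author)
        for ts, author, text in posts
        if phrase in text
    ]
    return min(matches) if matches else None

# The earliest match here comes from user_b on 2004-03-01 at 22:40.
origin = first_appearance(posts, "grainy video")
print(origin)
```

The same pattern scales: swap the toy list for scraped or archived posts, and the earliest match points the investigator at the likely origin of a narrative.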

SOCIAL MEDIA AS AN AMPLIFIER

From the mid-2000s onward, social media reshaped everything. Facebook, Twitter and YouTube gave deception a turbo-charged amplifier. Bot accounts inflated narratives and hijacked hashtags, and memes fused visual punch with ideological claims.

The most studied case remains Russian influence operations run by the Internet Research Agency, headquartered in St Petersburg. The US Senate Intelligence Committee’s 2019 report describes how trolls and automated accounts seeded stories during the 2016 US presidential election. Entire Facebook pages and advocacy groups were fabricated, and millions of genuine users later shared their posts.

European governments faced similar episodes. Researchers traced bot-driven disinformation during the Brexit referendum campaign, where networks of automated and semi-automated accounts amplified division. The techniques echoed the past, but at a speed unthinkable during the Cold War: an operation could now scale from a single account to millions of impressions overnight. Without monitoring, these networks would remain invisible. But OSINT techniques – mapping account behaviour, analysing event chronology, linking connections and tracking amplification routes – gave investigators a way to start building a picture of the machinery behind the noise.
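One simple behavioural signal investigators look for is near-simultaneous posting of identical content by multiple accounts. The sketch below shows the idea with hypothetical data – the account names, messages and thresholds are invented for illustration, and real coordinated-behaviour analysis is far more nuanced.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical account activity: (account, timestamp, message) — invented for illustration.
activity = [
    ("acct_01", "2016-10-05 12:00", "Candidate X hates you"),
    ("acct_02", "2016-10-05 12:01", "Candidate X hates you"),
    ("acct_03", "2016-10-05 12:01", "Candidate X hates you"),
    ("acct_04", "2016-10-07 18:30", "Candidate X hates you"),
    ("acct_05", "2016-10-05 12:30", "unrelated chatter"),
]

def coordinated_clusters(activity, window_minutes=5, min_accounts=3):
    """Group identical messages and flag those posted by several accounts
    within a short time window — a crude coordination signal."""
    by_message = defaultdict(list)
    for account, ts, message in activity:
        by_message[message].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), account))
    flagged = {}
    for message, events in by_message.items():
        events.sort()  # chronological order
        for start_time, _ in events:
            # Collect every account that posted this message inside the window.
            burst = [a for t, a in events
                     if 0 <= (t - start_time).total_seconds() <= window_minutes * 60]
            if len(burst) >= min_accounts:
                flagged[message] = sorted(set(burst))
                break
    return flagged

# Three accounts posting the same line within a minute get flagged;
# the lone straggler two days later and the unrelated post do not.
print(coordinated_clusters(activity))
```

In practice this kind of timestamp clustering is only a starting point – investigators combine it with account-creation dates, shared profile artefacts and link-sharing patterns before drawing conclusions.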

SYNTHETIC MEDIA AND THE DECEPTIVE POLISH OF AI

The last five years have added yet another layer. Generative AI has made deception both shinier and cheaper. Where the boiler room scams of the 1990s struggled with broken English and blurry photographs, today’s fraudsters can produce flawless prose, glossy websites and realistic synthetic personas, complete with profile pictures, at almost no cost.

The Coalition of Cyber Investigators' work has already highlighted the dangers that criminal use of AI presents. Boiler room operators use AI to draft grammatically correct, persuasive pitches, generate authentic-looking company websites, and create convincing LinkedIn profiles furnished with AI-generated portraits. Once, an awkward comma or a pixelated headshot might have been an obvious red flag; now the fronts look polished enough to pass at first glance, and awkward grammar is a thing of the past. These scams are not new in method, but they are massively improved in presentation. What can limit their success, however, is OSINT: from satellite imagery and street views to reverse image checks, from domain and technical configurations to metadata analysis and cross-platform verification, these techniques and more can dull the shine that AI lends to the scammers who exploit it.

The criminal use of AI is so concerning that government agencies have openly warned about it. For example, Europol published a 2023 report on the criminal use of Large Language Models (LLMs) such as ChatGPT, particularly for scam automation, persona synthesis, and powering misinformation campaigns. The US Federal Trade Commission (FTC) has also sounded the alarm over AI-driven voice cloning used in exploits such as “grandparent scams”, where criminals ring targets using voices cloned from real relatives to demand urgent fund transfers. The depth of the FTC’s concern is demonstrated by its Voice Cloning Challenge – a competition designed to encourage innovative ways to tackle AI-enabled voice fraud and protect consumers.

Academic research has also started mapping these risks. A 2019 study showed that human participants already found it difficult to distinguish AI-generated text from genuine journalism, especially under time pressure. Detection technologies exist in 2025, but they lag behind criminals, who face no deterrent strong enough to persuade them to moderate their output.

CONCLUSION - REVIEWING THE LAST 50 YEARS

Across fifty years of case studies, the continuity is striking. Deception does not need technological brilliance. It thrives when it can hang on to the coattails of authority, anxiety or the hunger for belonging.

The printed copy of a 1980s forged investment prospectus and the slick AI-generated websites of a 2025 boiler room investment fraud operation rely on the same psychological architecture: exploit every weakness, induce urgency and pressure, and cloak the true intentions in borrowed trust. The only difference is that technology, especially AI exploitation, is used as an accelerant to be poured on a familiar fire.

The forged prospectus from the 1980s did not need to stand up to close forensic scrutiny – it only needed to pull the FOMO lever; the fake brokers on the end of the phone would do the rest. Today, a bogus “broker” from a boiler-room fraud operation does not need to be universally believed. They just need a plausible enough backstory and supporting visuals to draw their victim into a dialogue where they can begin pulling further scam levers.

Reflecting on five decades makes it clear that we are not facing novelty so much as repetition dressed in new clothes. Today's deceptions carry familiar DNA but are faster and more convincing.

Fifty articles in, The Coalition of Cyber Investigators has seen deceptions of every shape – from forged Cold War documents to AI-polished frauds dressed up as legitimate businesses. What stands out across those cases is not how clever the lies are, but how persistent the traces they leave behind. This is where OSINT proves its worth. Open sources – whether a scrap of metadata, a recycled image, or synchronised posting across a network – allow investigators to see connections others miss. The methods have changed across fifty years, but the principle holds: deceptions always leak signals into the open.

The investigator's task is to know where to look, test appearances, and read those signals with care. OSINT is not just another tool in that effort - it is the field where many of the most telling indicators of deception first appear.

Authored by: The Coalition of Cyber Investigators

Paul Wright (United Kingdom) & Neal Ysart (Philippines)

©2025 The Coalition of Cyber Investigators. All rights reserved.

The Coalition of Cyber Investigators is a collaboration between

Paul Wright (United Kingdom) - Experienced Cybercrime, Intelligence (OSINT & HUMINT) and Digital Forensics Investigator;

Neal Ysart (Philippines) - Elite Investigator & Strategic Risk Advisor, Ex-Big 4 Forensic Leader; and

Lajos Antal (Hungary) - Highly Experienced Cyber Forensics, Investigations and Cybercrime Expert.

The Coalition unites leading experts to deliver cutting-edge research, OSINT, Investigations, & Cybercrime Advisory Services worldwide.

Our co-founders, Paul Wright and Neal Ysart, offer over 80 years of combined professional experience. Their careers span law enforcement, cyber investigations, open source intelligence, risk management, and strategic advisory roles across multiple continents.

They have been instrumental in setting formative legal precedents and stated cases in cybercrime investigations, as well as contributing to the development of globally accepted guidance and standards for handling digital evidence.

Their leadership and expertise form the foundation of the Coalition’s commitment to excellence and ethical practice.

Alongside them, Lajos Antal, a founding member of our Boiler Room Investment Fraud Practice, brings deep expertise in cybercrime investigations, digital forensics and cyber response, further strengthening our team’s capabilities and reach.

If you've been affected by an investment fraud scheme and need assistance, The Coalition of Cyber Investigators specialise in investigating boiler room investment fraud. With decades of hands-on experience in investigations and OSINT, we are uniquely positioned to help.

We offer investigations, preparation of investigative reports for law enforcement, regulators and insurers, and pre-investment validation services to help you avoid scams in the first place.