
In 2024, a security researcher at a major technology company accepted a LinkedIn connection request from a woman named "Alicia Chen." Her profile was polished. Stanford MBA. Five years at a well-regarded consulting firm. Mutual connections in the right places. Her messages were thoughtful, professionally calibrated, and patient. Over several weeks she expressed genuine interest in his work. She asked smart questions. She remembered details from previous conversations.
She did not exist.
"Alicia Chen" was a synthetic persona — an AI-generated identity with a fabricated photograph, a constructed professional history, and a conversation engine capable of maintaining a credible relationship across weeks of contact. The goal was access. Not to his credentials. To his judgment. To the kind of informal professional knowledge that never appears in a threat model but moves freely in trusted relationships.
The honey trap is not new. The scale is.
The Original Doctrine
The honey trap — in Russian, medovaya lovushka — was a standard tool of Soviet and Eastern Bloc intelligence services throughout the Cold War. The operational model was straightforward: cultivate a target through manufactured personal or professional attraction, use the relationship to extract information or compromise the target's judgment, and maintain the operation for as long as it remained productive.
The technique worked because it exploited something no security control can fully address. Human beings extend trust through relationships. The colleague who asks for a favor. The recruiter who seems unusually interested in your work. The industry contact who keeps showing up at the right conferences. We are wired to respond to sustained attention and apparent understanding. Intelligence services knew this and built an operational doctrine around it.
The KGB's Department 8 within the Second Chief Directorate ran honey trap operations as a systematic program — not as improvised manipulation but as a structured methodology with trained operatives, scripted escalation protocols, and defined extraction objectives. The relationship was the weapon. The person inside it rarely knew they were a target until it was too late, if they ever knew at all.
What has changed in 2026 is not the doctrine. It is the cost of deploying it.
The Industrial Version
Running a honey trap operation in 1975 required a trained human operative, a credible cover identity, physical presence in the target environment, and months of patient relationship development. The operational overhead was significant. The scalability was limited. A single operation consumed substantial resources.
Running a synthetic persona operation in 2026 requires a laptop, a generative AI subscription, and an afternoon.
The photograph is generated by a model that produces faces indistinguishable from real people. The professional history is constructed from scraped LinkedIn data that mirrors the target's own network. The conversation is maintained by a language model capable of remembering context across weeks, mirroring communication style, and escalating trust at a pace calibrated to avoid triggering suspicion. Mutual connections are manufactured through a network of linked synthetic profiles that have been accumulating social credibility for months before the operation begins.
This is not theoretical. It is documented. In 2023 and 2024, researchers at the Stanford Internet Observatory, the Atlantic Council's Digital Forensic Research Lab, and multiple private threat intelligence firms documented coordinated networks of AI-generated LinkedIn profiles targeting employees at defense contractors, semiconductor companies, and financial institutions. The profiles were sophisticated enough to pass casual scrutiny. The operations were patient enough to develop real relationships before making any ask.
The ask, when it came, was rarely dramatic. A request for an introduction. A question about a project that had been mentioned in passing. An expression of interest in a role that happened to require a reference from someone with access. Small things. The kind of things people do for contacts they have come to trust.
Three Modern Parallels
The Recruiting Channel
LinkedIn is the primary attack surface. It is also the environment where professional trust-building is most normalized. Accepting a connection request, responding to a recruiter, engaging with someone who comments thoughtfully on your post — these are routine professional behaviors that also happen to be the exact behaviors a synthetic persona operation is designed to trigger.
Your employees are not naive. They are operating in an environment where the social norms around professional networking were established before synthetic personas existed at scale. The training you gave them about phishing emails was designed for a different threat model. It does not address a six-week relationship that feels real because, from the employee's perspective, it was real.
The detection problem here is not technical. It is behavioral. You are asking people to maintain skepticism inside what feels like a genuine professional relationship. That is a significantly harder ask than "don't click suspicious links."
The Executive and Board Surface
Senior leaders are higher-value targets and, in many cases, lower-awareness ones. They receive connection requests from industry contacts regularly. They are accustomed to being approached. A synthetic persona targeting a CFO does not need to extract financial data directly. It needs to cultivate enough of a relationship that the CFO will mention the target company's acquisition timeline, regulatory exposure, or strategic direction in what feels like a casual professional conversation.
Board members present a specific risk. They often operate with less institutional security support than executives inside the organization. They may use personal devices and personal email for some board-related communication. They have high-value access to non-public information and lower exposure to the security culture of the organizations they govern.
A synthetic persona that spends three months cultivating a board member through LinkedIn, industry events, and email before asking a single substantive question is running a honey trap. It just looks like networking.
The Vendor and Partner Channel
The third surface is the one most organizations are least prepared for. Synthetic personas are not only targeting your employees. They are entering your vendor ecosystem.
A fabricated identity that establishes a consulting practice, builds a credible online presence, and approaches your organization as a potential vendor or partner has legitimate reasons to receive sensitive information during the procurement process. Capability statements. Security questionnaires that reveal your architecture. Pricing conversations that expose your budget parameters. Reference calls with existing vendors that map your current relationships.
The vendor qualification process was designed to assess capability and financial stability. It was not designed to detect synthetic identities. The documents you receive from a fabricated vendor look identical to those from a legitimate one. The calls sound the same. The relationship progresses through your normal procurement pipeline without triggering any control.
What "Alicia Chen" Actually Exploited
The synthetic persona does not exploit a technology vulnerability. It exploits the assumption that sustained, contextually aware, professionally calibrated attention is evidence of genuine human interest.
That assumption was reasonable for most of human history. It is no longer reliable.
This is the sharpest edge of the AI governance challenge for security leaders. The threat is not that AI will hack your systems. It is that AI will build relationships with your people that feel indistinguishable from real ones — and use those relationships to extract the information, influence, and access that your technical controls were never designed to protect.
The Human Element of Technology cuts both ways. The humans who built your organization's trust culture built it for a world where relationships required humans on both sides. That world ended quietly sometime in the last three years.
The Detection Framework
The countermeasure is not suspicion. Blanket suspicion of professional relationships destroys the networking and collaboration that organizations depend on. The countermeasure is a verification architecture.
Three practices worth implementing now:
First, establish a lightweight identity verification protocol for any external contact requesting access to sensitive information, whether technical, informational, or relational. This does not require elaborate systems. A video call requirement before substantive information sharing is a significant barrier to synthetic persona operations. Deepfake videos exist, but they remain operationally detectable with modest training.
Second, train your highest-risk employees — executives, board members, anyone with access to material non-public information — on the specific behavioral signatures of synthetic persona operations. Unusually rapid trust escalation. Conversations that return repeatedly to specific topics. Connections who seem perfectly calibrated to the target's professional interests. Requests that feel small but require sharing information beyond the normal professional exchange.
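For security teams that already track external contact metadata, the signatures above can be approximated as a simple triage heuristic. The following Python sketch is purely illustrative: the `Message` fields, the 14-day escalation window, the topic-recurrence threshold, and the ask-density cutoff are all assumptions for demonstration, not a validated detection model.

```python
# Illustrative sketch: scoring an external contact's message history for the
# behavioral signatures described above. Thresholds are assumptions, not a
# validated model; real deployments would tune them against known cases.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Message:
    day: int              # days since first contact
    topics: list          # coarse topic tags assigned to the message
    contains_ask: bool    # did the contact request information or a favor?


def risk_score(history: list) -> int:
    """Return a 0-3 score; each point is one behavioral signature firing."""
    score = 0
    # Signature 1: unusually rapid trust escalation -- an ask within the
    # first 14 days of contact (assumed window).
    if any(m.contains_ask and m.day < 14 for m in history):
        score += 1
    # Signature 2: conversations returning repeatedly to one specific topic
    # (assumed threshold: the same topic three or more times).
    topic_counts = Counter(t for m in history for t in m.topics)
    if topic_counts and topic_counts.most_common(1)[0][1] >= 3:
        score += 1
    # Signature 3: high ask density across the relationship (assumed cutoff:
    # more than a quarter of messages contain a request).
    asks = sum(m.contains_ask for m in history)
    if len(history) >= 4 and asks / len(history) > 0.25:
        score += 1
    return score


history = [
    Message(day=2, topics=["chip fabrication"], contains_ask=False),
    Message(day=5, topics=["chip fabrication"], contains_ask=False),
    Message(day=9, topics=["chip fabrication", "hiring"], contains_ask=True),
    Message(day=12, topics=["export controls"], contains_ask=True),
]
print(risk_score(history))  # all three signatures fire -> 3
```

A score like this is a triage signal for a human review, not a verdict; a patient operator can stay under any fixed threshold, which is why the structural controls (video verification, vendor identity checks) still carry the weight.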
Third, audit your vendor qualification process for gaps in identity verification. The question is not whether a vendor's capabilities are real. It is whether the people representing those capabilities are.
The Relationship Was the Weapon
In the Cold War version, the honey trap required a human being willing to spend months of their life maintaining a false relationship for an intelligence objective. That constraint limited the scale.
The 2026 version has no such constraint. A single operator can run dozens of synthetic persona operations simultaneously. The relationships are maintained by models that do not get tired, do not break character, and do not need to be debriefed after the operation concludes.
The doctrine is unchanged. The cost structure is not.
Your organization's trust culture was built for human-scale relationship development. It is now operating in an environment where that assumption no longer holds.
The honey trap did not get more sophisticated. It got cheaper.
Monday Morning Takeaway: Identify your three highest-risk employees for synthetic persona targeting — typically executives with public profiles, board members, and anyone with access to material non-public information. Before end of week, implement one structural change: require a live video call before any external contact receives substantive access to sensitive information, regardless of how credible the relationship feels. Deepfakes exist, but a video requirement still eliminates the majority of synthetic persona operations, which rely on text-based relationship development. The bar does not need to be perfect. It needs to be higher than it is now.
Timothy E. Reed, CPP is the founder of Reed Group Consulting LLC and the author of Signals in the Noise. Northern Signal covers the intersection of security, intelligence tradecraft, and AI governance for practitioners and board-level audiences.
This is part of an ongoing series on intelligence doctrine and its modern organizational parallels. Previous: The Semyorka. The Canary Trap.
