In 2018, I sat across from a CISO who had just received a polished threat intelligence report. Forty pages. Well-sourced. Professionally formatted. He thanked the team and moved to his next meeting.
The report was wrong about the most important thing in it. Not fabricated — just wrong. But it looked finished, so it was treated as finished.
That gap between looking right and being right is not a technology problem. It is a judgment problem. And it is about to get significantly worse.
The Polished Output Trap
Anthropic studied 9,830 conversations on Claude.ai and found something that should stop every security leader in their tracks. When Claude produced a polished artifact — a document, a report, a structured analysis — users were 5.2 percentage points less likely to identify missing context, 3.7 percentage points less likely to fact-check, and 3.1 percentage points less likely to question the reasoning.
The output looked finished. So people treated it as finished.
This is not a user behavior anomaly. It is a predictable cognitive response to aesthetic completeness. And in the security and intelligence space, where the deliverable is always a polished artifact — a threat assessment, an executive briefing, an incident report — it is a structural vulnerability.
When everything looks like expert output, nothing gets scrutinized like expert output.
Your board is not going to stop trusting polished analysis because you told them to. They are going to trust the analysis that carries a signature they recognize. That signature is not your title. It is not your certification. It is the accumulated, documented, operationally grounded pattern recognition that only you have built across the years you have been doing this work.
Perfection Is Cheap. Provenance Is Not.
AI-generated analysis already looks like expert human analysis. The formatting is correct. The citations are present. The structure follows best practices. At the surface level, it is indistinguishable.
That means the surface level no longer differentiates you. What differentiates you is what lives underneath it.
The scar tissue from a breach investigation that went sideways before it went right. The pattern you learned to recognize after seeing it fail three times across three different organizations. The disagreement you had with a CISO who turned out to be wrong, and why you were right, and how that shaped your model. None of that is in any training dataset. None of it can be.
The provenance of an insight matters as much as the insight itself.
Think of it this way: a well-prompted model can produce a threat briefing that reads like yours. It cannot produce the specific reasoning chain that generated yours, because that chain was built from operational experience that has never been written down anywhere. The model has no access to what you actually saw, what you got wrong, what you corrected, and what you learned from the correction.
That is the moat. It is not your output. It is your origin.
The Credibility Battlefield
The sequence plays out the same way every time a new information technology matures. First, the new format looks suspicious. Then it looks acceptable. Then it looks identical to the real thing. Then the market shifts — and credibility becomes the only remaining differentiator.
We are entering that third phase with AI-generated analysis right now.
Security leaders who understand this early will make a specific move: they will stop competing on output and start competing on provenance. They will document their pattern library — the specific cases, observations, and judgment calls that have shaped how they analyze risk. They will publish their reasoning, not just their conclusions. They will create friction deliberately, requiring AI to explain its logic and propose alternatives rather than treating it as a place to offload judgment.
AI can produce your conclusions. It cannot produce your reasoning process. The analysis that shows how you got there is worth more than the one that only shows where you landed.
The leaders who do this will carry a signature that cannot be forged. The ones who do not will look identical to a well-prompted model. That is not a sustainable position in an economy where judgment is the scarce resource.
What 26 Years Leaves Behind
Signals in the Noise took 26 years to write, even though the writing itself took far less time. Every pattern in it came from operational reality — from investigations that did not go the way the textbook said they would, from protective intelligence assessments where the answer required judgment that no rubric could have produced, from the specific friction of working inside organizations that had every incentive to ignore what the data was showing them.
A model trained on all publicly available security writing could produce a book that structurally resembles it. It could not produce the specific pattern recognition inside it. That source material does not exist anywhere outside the work.
That is not a marketing claim. That is the point of the whole argument.
Can you create something only you could create? That is the question. Your perspective. Your story. Your voice. That is the answer.
Monday Morning Takeaway
Audit one piece of analysis you produced this month. Ask: could a well-prompted model have written this? If the answer is yes, something is missing. The missing element is you — the specific operational experience, the pattern that took years to build, the disagreement that sharpened your judgment. Put that back in. Not as decoration. As the argument.
