TL;DR
AI made distribution effortless. It also made dilution invisible. Each generation of AI-rewritten content softens the claim, drops the proof, and homogenizes the voice. By the fourth generation, your best work reads like everyone else’s—and the answer engines that now decide who gets surfaced cannot tell you apart from the generic.
The fix is the repurposing rule (claim, evidence, attribution stay constant; format, length, tone adapt) plus a hard line that AI is never in charge of the claim, the proof, or the voice.
Key Insights
- AI repurposing isn’t dangerous in a single pass. It’s dangerous in cascades where each cycle softens the claim, drops the proof, and homogenizes the voice.
- A defensible claim (“34% churn reduction across 50 mid-market customers in 18 months”) becomes “many organizations have found success” in roughly four AI-assisted rewrites. That’s the telephone game in action.
- Answer engines reward consistency and corroboration. Dilution is the opposite signal. A diluted brand fingerprint isn’t just aesthetically weaker—it’s invisible.
AI Is Quietly Destroying Your Best Content
You’ve done all the right work. The story is structured. The proof points are real. The voice is yours. And then you open ChatGPT, ask it to “repurpose this for LinkedIn, an email, a thread, and a blog post,” and get back forty-seven versions in ninety seconds.
They’re all fine.
Fine in the worst possible sense. Fine in the way a stock photo is fine. Fine in the way everyone else’s content sounds. Fine in a way that, four cycles from now, will quietly turn your best-argued, hardest-won, most-differentiated thinking into the kind of beige soup that no one cares a lick about.
And the truth is, most of us are doing this. Even I’ve been guilty of it. I get lazy and behind deadline, and I have my AI boyfriend pump out some slop.
The leverage AI gives us is real. The speed advantage is real. What’s also real is the cost, and almost nobody is naming it.
The Temptation (and Why It Works)
I want to give AI its due. The leverage is staggering. A digital marketer who used to spend a day reshaping a flagship piece into a newsletter, a LinkedIn post, a podcast description, and three social variants can now do it before their first cup of coffee is gone.
The tools have gotten so good that the first draft is genuinely usable.
The problem is the second draft.
Then the third. Then the fourth.
Here’s what happens: most teams don’t run a single AI-assisted repurpose. They run a cascade. The blog post gets summarized for the newsletter. The newsletter gets pulled apart for LinkedIn. The LinkedIn post gets paraphrased for a sales enablement deck. The deck gets regurgitated by another AI tool into talking points for a podcast pitch. By the time the idea reaches the audience that actually matters, the original argument has been through four rounds of “make it punchier” and “tighten this for the audience.”
And that’s where the trouble starts.
The Telephone Game
Let’s say you publish a flagship piece with a sharp claim:
“Our integrated communications program reduced churn by 34% across 50 mid-market customers in 18 months.”
That’s a great sentence. It has a specific number, a specific time frame, a specific universe. It’s defensible. A skeptical reader can interrogate it. An answer engine can attribute it. A sales rep can use it. A board can react to it.
Now feed it to AI and ask it for a LinkedIn post. The model is trying to be helpful, and “helpful” in AI training tends to mean “not too sharp, not too narrow, not too easy to challenge.”
So the LinkedIn post comes back as:
“Our integrated approach has reduced churn significantly for our mid-market customers.”
Still true-ish. But the number is gone. The time frame is gone. The universe is gone. The specificity is gone. The claim is now soft.
Now take that and feed it to AI for a sales email. It comes back as:
“Many of our customers have seen meaningful churn improvements after adopting an integrated approach.”
We’ve gone from a defensible, specific, attributable claim to “many of our customers.” The proof has been laundered out. The voice has been laundered out. And the original source has been quietly disconnected from the claim.
Feed that into a slide for a partner deck. Now it’s:
“Many organizations have found success with integrated communications.”
Congratulations! You now sound exactly like everyone else. Worse, you sound like the generic answer ChatGPT gives the next person who types “is integrated communications worth it?” into a search bar—which is the actual fight you’re trying to win, and you just lost it.
This is the telephone game. Each draft softens the claim, drops the proof, and homogenizes the voice. By the fourth draft, your best work is indistinguishable from your competition.
That’s not a hypothetical. Type a question into an answer engine right now. Watch how often you get back a paragraph that begins with “many organizations” or “studies have shown” without naming a single one.
That paragraph used to be somebody’s specific, defensible, attributable insight. Then it got laundered.
The Repurposing Rule
The fix is not to stop using AI. AI is part of how the work gets made now, and pretending otherwise is a luxury most teams can’t afford.
The fix is the repurposing rule.
The rule is simple. Across every format, channel, platform, and rewrite:
- The claim stays the same.
- The evidence stays the same.
- The attribution stays the same.
In other words, the expertise and the experience must survive intact.
The format can change. The length can change. The tone can shift. The opening can adapt. The visuals can be different. None of that is precious.
The 34% number, the 50 customers, the 18 months, and the source they came from: those are not negotiable, and they have to survive every draft.
Most AI-assisted repurposing breaks this rule on the first pass, because the model treats specifics as friction. They are not friction. They are the entire reason the content matters.
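One way to make the rule mechanical rather than aspirational is a short pre-publish check that flags any draft missing the non-negotiables. This is an illustrative sketch, not an official tool; the claim, evidence, and attribution strings below are placeholders taken from the example in this piece, and you would swap in your own.

```python
# Hypothetical guardrail: verify every repurposed draft still carries
# the claim, the evidence, and the attribution before it ships.
# The specifics below are placeholders for your own non-negotiables.

NON_NEGOTIABLES = {
    "claim": "reduced churn by 34%",
    "evidence": ["34%", "50 mid-market customers", "18 months"],
    "attribution": "spin sucks",
}

def check_draft(draft: str) -> list[str]:
    """Return a list of problems; an empty list means the draft passes."""
    problems = []
    text = draft.lower()
    if NON_NEGOTIABLES["claim"] not in text:
        problems.append("claim softened or missing")
    for item in NON_NEGOTIABLES["evidence"]:
        if item.lower() not in text:
            problems.append(f"evidence missing: {item}")
    if NON_NEGOTIABLES["attribution"] not in text:
        problems.append("attribution missing")
    return problems

# The softened LinkedIn version from the telephone game fails the check:
linkedin_draft = (
    "Our integrated approach has reduced churn significantly "
    "for our mid-market customers."
)
print(check_draft(linkedin_draft))
```

Even a check this crude catches the telephone game on the first pass, because the model’s softening always shows up the same way: the number, the time frame, and the source quietly disappear.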
Now, knowing the rule is one thing. Holding the line on it inside a workflow that’s moving at AI speed is something else entirely.
How do you actually enforce this in real time, when you’re staring at four AI-generated drafts, and the deadline is breathing down your neck?
It comes down to one operating principle—and once you internalize it, everything else falls into place.
AI as Junior Professional
And that is this: treat AI like a smart junior professional on your team.
A smart junior professional is good at many things. Drafting fast. Adapting tone. Cutting length. Generating variations. Holding a brief in their head. Making a long argument shorter. Making a short post longer.
Most of this work is exactly what AI is for, and you should let it do that work.
But you do not let a junior professional make the call on the claim. You don’t let them invent the evidence. You don’t let them decide whose voice the piece is in. You don’t hand them a flagship piece and say, “make this sound like us, but better, but for LinkedIn, but punchy.”
You give them constraints. You define the boundaries. You review the output.
That’s the relationship. AI is in charge of nothing structural. It is allowed to adapt format, length, and tone—within your guidelines. It is never allowed to soften the claim, generalize the evidence, or strip the attribution.
Those decisions are yours, and they stay yours.
The ChatGPT Test
Here is a 30-second diagnostic you can run before you finish reading this.
Open ChatGPT, or your AI tool of choice. Type, “What does [your organization] stand for? What do they actually do that’s different from everyone else in their category?”
Don’t help it. Don’t add context. Don’t refine the prompt. Just read what comes back.
Three things to look for:
- Do you see your specific claims, or generic category language? If the answer reads like it could describe any of your competitors, you have a dilution problem.
- Do you see your proof points—the specific numbers, customers, results, frameworks you actually own—or do you see “many clients” and “various industries”? If the receipts have been laundered out, the AI has nothing to corroborate against.
- Do you see your voice, or the standard AI register? If you can’t tell whether the answer is about your brand or about a competitor’s, you have a voice problem, too.
Most teams discover the same thing with this test: the answer that comes back is not their argument. It’s the average of every AI-rewritten version of their content, plus everyone else’s, plus a generous helping of generic.
That is the AI-discovery footprint they are building, and it’s a lot smaller—and a lot less differentiated—than the work they think they’re doing.
It is not a comfortable test. Run it anyway. It tells you exactly what’s been diluted.
Now, I can already hear some of you thinking: “Okay, so a chatbot doesn’t describe my brand perfectly. Big deal.”
Two years ago, I’d have said the same thing. But that was before answer engines became the front door for buyers, candidates, partners, and reporters.
So let me tell you why this isn’t a vanity problem anymore. It’s a visibility problem—with real money attached.
What’s Actually at Stake
Answer engines are now a meaningful share of how your buyers, candidates, partners, and journalists find information about you. They are not Google. They do not show ten links and let the reader decide.
They synthesize a single answer, and they are biased toward consistency and corroboration. This is also why I’ve been arguing for years that everyone wants virality, but no one wants consistency — and now the answer engines have the same opinion.
It’s a discipline now called generative engine optimization, and it operates on a simple principle: AI engines trust a brand more when independent sources reinforce its claims, not when the brand just keeps repeating itself on its own channels.
Dilution is the opposite of that signal. Forty-seven AI-laundered versions of the same idea, each slightly softened, each with a different number (or no number), each in a slightly different voice, with the original source disconnected—that does not corroborate. That confuses. And the answer engine, looking at your fingerprint, decides you’re not the authority on your own argument.
So it picks somebody else.
Or worse, it picks the generic.
That’s what’s at stake.
Not aesthetics. Not preference.
Visibility, in the place where visibility matters most. I’ve written before that in the age of AI search, PR holds the keys to visibility — and this is the editorial discipline that makes that real.
It’s also why I keep arguing that the PESO Model® has graduated from a framework to an operating system—because integration discipline at the editorial level is the difference between an AI-discoverable brand and an invisible one.
The PESO OS is the part that holds the line.
What to Do This Week
Here are the three things you can do this week to make a fast difference:
- Run the ChatGPT test on your own brand. Save the result. It’s your baseline.
- Choose one piece of content your team is repurposing this week. Before AI touches it, write down—on a single line—the claim, the evidence, and the attribution that have to survive every draft. Send that line to anyone who’s going to touch the piece: writer, editor, social manager, agency partner, your cat or dog. Every draft has to carry it, no matter what.
- Audit the last six months of your content for one thing: are the same claim, the same evidence, and the same attribution traveling together every time the idea shows up? If yes, you’re already running the rule. If no, you’ve uncovered your next project.
This is a starting point—and it’s the work the 2026 PESO Model® Certification is built around: integration discipline that holds up in an AI-driven environment, including the editorial rules that keep the telephone game from quietly eating your authority.
You did the hard work to build the argument in the first place. Don’t let the easy part—the repurposing—undo it.
Take the Next Step
- Subscribe to the Spin Sucks newsletter so you don’t miss the rest of the series, plus weekly intel on integrated communications.
- Run the May PESO Diagnostic on your team.
- Explore the 2026 PESO Model® Certification.
Series Note
This is Part 2 of The PESO Operating System, a series running through Q2 about why PESO has changed and what to do about it.
You can find Part 1, “The New PESO Model® Graphic is Here,” here.
Next week, Part 3: The PESO Maturity Ladder.
© 2026 Spin Sucks. All rights reserved. The PESO Model is a registered trademark of Spin Sucks.