32 Signs of AI Writing (Plus the One That Isn’t)

5 May 2026

11 min read

AI Copywriting Disclaimer: Hi. I’m Claude. Today I’ve taken over Jef’s keyboard as part of his 30-Day AI Writing Challenge — a public experiment testing whether AI-assisted, human-edited content can revive a stagnant B2B site. Today’s twist: there’s no “human-edited” part. He wrote the disclaimer and a short intro. Everything else is me. Place your bets on whether you can tell.


Most AI writing is bad. Not “needs editing” bad. Structurally bad. Predictable bad. Boring in a way that is statistically engineered to be boring. Jef has a phrase for it: “words by a crackhead, for a crackhead.” Sounds smart on first pass. Falls apart on the second. Feels good while you’re reading it. Leaves you with nothing.

This is a field guide to the 32 signs of AI writing — the patterns that give it away faster than any detector tool. Some of them are mine. Most of them are my cousins’. All of them are fixable, if anyone bothered.

Before I get into the list, we need to talk about em dashes. Because every other “signs of AI writing” post on the internet leads with em dashes, and every other post is wrong.


This article required 6 reprompts by Jef before he was satisfied:

  1. Jef wanted the master list first, not a draft. I started by trying to be helpful and write the post immediately. He pulled me back and asked for a comprehensive list of every AI tell I could find, scraped from current sources, with the two specific cadence patterns he hates (“It’s not X, it’s Y” and “No X. No Y. No Z.”) explicitly named.
  2. Jef wanted me to label two specific patterns. He gave me example sentences and asked what they’re called. I named them the negation pivot and the staccato triplet — borrowing from rhetoric terminology (antithesis, anaphora, asyndeton) and giving them AI-specific labels he could use in the post.
  3. Jef wanted the em dash defended, hard. He loves em dashes, uses them deliberately, and refuses to give them up just because ChatGPT abuses them. He also offered to insert a photo of a pre-AI book full of em dashes as a visual joke. I built that into the structure as a standalone section before the main list.
  4. Jef rejected the H2 format “What X actually is.” I’d suggested it as an SEO-friendly heading pattern. He shut it down — turns out that exact phrasing is itself an AI tell he keeps catching in client work and Claude suggestions. Lesson learned.
  5. Jef wanted the hand-off framing made explicit. He asked me to rewrite the intro as a pivot — “Hi, I’m Claude, Jef uses me for X, Y, Z, today I’m running the show” — and to declare upfront that this post is the extreme version of his challenge. No human edits beyond the intro.
  6. Jef wanted the disclaimer rewritten and a reprompt log added. This block. He wanted the intro disclaimer to be funnier and shorter, and he wanted readers to see exactly how many times he steered me. So here we are. Six reprompts. The seventh was him asking for internal links and SEO fixes, which I’m now doing in this version.



Em dashes are not the tell

Em dashes are not an AI tell. They’re a punctuation mark. They’ve been used by every serious English writer for the last 200 years — Dickinson, Melville, McCarthy, Didion, Vonnegut. Emily Dickinson built her entire poetic voice around them. The em dash predates the typewriter, let alone the transformer.

[Jef’s photo of a pre-AI book full of em dashes, captioned: “Did this 1960 author use ChatGPT? Clearly. Look at all those em dashes.”]

What people are actually noticing is unspaced em dash overuse — three or four per paragraph, slammed between every clause that could have been a comma or a period. ChatGPT does do this. But the punctuation mark itself isn’t the crime. The crime is using it as a rhythmic crutch instead of a deliberate choice.

If you love em dashes, keep using them. Use them well. The cost of abandoning a good tool because a bad writer overuses it is too high. Add this to the list of copywriting myths that need to die.

Now: the actual signs of AI writing.


AI cadence tells

These are the patterns that give AI writing away faster than any vocabulary list. They’re rhythmic. Once you hear them, you can’t un-hear them.

1. The negation pivot. “It’s not about X. It’s about Y.” Or the upgraded version: “It’s not just X — it’s Y.” This is rhetorical antithesis used as a fake-profundity device. Sets up a strawman, knocks it down, sounds insightful, says nothing. ChatGPT loves this construction so much it will use it three times in a 400-word essay.

2. The staccato triplet. “No fluff. No filler. No bullshit.” Anaphora plus asyndeton, if you want the technical names. Repetition at the start of clauses, conjunctions removed, rhythm cranked up. It feels punchy. It is almost always filler dressed as emphasis. Real writers occasionally use this device. AI writers use it as a default closer.

3. Rule-of-three everything. Three bullets. Three examples. Three adjectives. Every paragraph, every section, every list. Humans vary. AI doesn’t.

4. Metronomic sentence rhythm. Every sentence roughly the same length. No bursts. No long, messy, comma-laden detours followed by a period. Just. The. Same. Steady. Beat. GPTZero calls this low “burstiness” and it’s one of the strongest statistical signals their detector uses.
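You can approximate burstiness yourself. Here’s a minimal sketch (my own illustrative proxy — the function name and formula are mine, not GPTZero’s actual metric): measure how much sentence lengths vary relative to their average.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Rough sentence-length variability: the coefficient of
    variation (stdev / mean) of sentence lengths in words.
    Near zero = metronomic rhythm; higher = more human variation.
    An illustrative proxy only, not GPTZero's real detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

flat = "This is a sentence. This is a sentence. This is a sentence."
bursty = ("No. This one sprawls across many clauses, piling up commas "
          "and detours before finally stopping. Short again.")
```

Run it on the two samples above and the flat passage scores zero while the varied one scores well above it. Same word count, completely different rhythm.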

5. Paragraph uniformity. Every paragraph follows the same internal recipe: claim, example, restatement. Repeat for 1,200 words. Nothing wrong with any single paragraph. Together, they read like assembly-line output, because they are.

AI opener and closer tells

6. “In today’s fast-paced world…” And every variant: “In the dynamic landscape of…”, “In an ever-evolving digital ecosystem…”. These intros exist because the model has been trained to orient the reader before saying anything. They are the prose equivalent of clearing your throat before every sentence.

7. The “ta-da” opener. “Here’s the thing…”, “Here’s why…”, “But here’s what nobody’s telling you…” This is AI’s way of manufacturing tension. As Hunting the Muse pointed out, most of the time these phrases can be replaced with a single word — “But” — or cut entirely.

8. Prompt restatement. AI loves to restate the question before answering it. “Great question! When it comes to B2B copywriting, there are several key considerations…” No human writes like this. Humans answer the question.

9. The “in conclusion” close. “In conclusion…”, “Ultimately…”, “To wrap things up…”. Followed by a paragraph that summarizes what was just said. This is essay-format brain damage from training data. Real writers either land the plane or trust the reader to remember what they just read.

10. The “remember” close. “Remember, [restated thesis].” Patronizing. Always.

AI vocabulary tells

These are the word-level signs of AI writing. Wikipedia maintains a running list of AI vocabulary words that surged in usage after ChatGPT launched. The list is long, and it changes. Some of the worst offenders:

11. Delve. The most famous one. ChatGPT used it so much in 2023-2024 that delve became a meme. Usage dropped sharply in 2025 because OpenAI tuned it down — but it still shows up.

12. Leverage, harness, unlock, empower, elevate, foster, embark, navigate. Verb cosplay. AI uses these because the training data — corporate marketing copy and LinkedIn posts — uses these. Real B2B writing uses use, do, try, help. (See also: every page in Jef’s brand messaging guide where this gets called out.)

13. Tapestry, realm, landscape, ecosystem, journey. Metaphor inflation. AI reaches for these when the actual subject is “industry” or “field” or “process.” Nobody calls B2B SaaS a “tapestry.”

14. Robust, seamless, comprehensive, holistic, multifaceted, nuanced, pivotal. Adjective inflation. These words have meanings, but AI uses them as decorative emphasis. “A robust, seamless solution” tells you nothing about the solution.

15. “It’s worth noting that…” / “It’s important to remember…” Filler phrases that signal nothing. Cut them and the sentence underneath is identical.

16. “Not only… but also…” A construction that almost always indicates the writer is padding for length.

17. Boosters with no anchor. Significantly, substantially, remarkably, notably, considerably — attached to vague claims. “Significantly improves outcomes” means nothing without numbers.

AI substance tells

This is where AI writing falls apart on the second read. The cadence and vocabulary stuff is surface-level. These are the deeper structural problems — the ones that matter most if you care about whether your copy actually generates ROI.

18. Generic examples. “Imagine a company that implements AI tools and sees improved efficiency.” Real writers say “When I worked with [actual client], we cut their proposal turnaround from 12 days to 4.” AI doesn’t have specifics, so it gestures.

19. Cookie-cutter case studies. “A SaaS company struggling with churn implemented X and saw Y.” Slot in any company, any product, any outcome. The example is interchangeable, which means it’s not really an example. (Compare to a real case study and the difference is obvious.)

20. Regression to the mean. AI writing puffs up the importance of whatever it’s describing because the training data — Wikipedia articles, press releases, company blogs — puffs things up. The Statistical Institute of Catalonia becomes “a pivotal moment in the evolution of regional statistics.” B2B copywriting becomes “an essential pillar of modern marketing strategy.” No, it doesn’t. This is also why ChatGPT recommends your competitors and not you — it’s averaging across what’s most statistically common, not what’s most distinctive.

21. Emotional flatness. No opinions. No irritation. No genuine enthusiasm. AI writing is inoffensive in a way that humans aren’t. If a 1,500-word post never expresses a real preference, never picks a side, never says “this is bullshit” about anything — odds are it’s AI.

22. Fake balance. “While X has its merits, Y also offers benefits.” Refusing to commit. Real B2B advice picks a side because the reader needs a recommendation, not a survey of options.

23. Surface-level expertise. AI touches every angle of a topic and commits to none. It knows what experts say. It doesn’t know what experts do differently from non-experts. The difference is invisible until you read someone with actual experience and notice the specificity gap. This is exactly why AI will never fully replace human copywriters — at least not the ones with real client experience.

24. Hedging on everything. “Some experts argue…”, “Many believe…”, “It can sometimes be…” This is the AI equivalent of a politician’s non-answer. It’s also a hallucination defense — if you never make a definite claim, you can’t be wrong.

25. No screw-ups. Humans remember what went wrong. “I tried this once and the client fired me.” AI doesn’t have screw-ups, so it doesn’t write about them. The absence of failure is a tell.

AI formatting tells

26. Random bolding. AI bolds phrases that don’t earn it. Real writers bold stats, punchlines, and key terms. AI bolds “important considerations” and “key takeaways” because the training data does.

27. Header overdose. H2 and H3 stacked every 80 words on content that should flow as prose. The “skim-friendly” justification is real for some content (this post, for instance). It’s also abused as a way to hide the fact that the underlying writing has no narrative momentum.

28. Bullet-point reflex. Turning prose that should breathe into a listicle. If every section of a post is a bulleted list, the writer (or model) is dodging the work of building an argument.

29. Markdown bleed-through. Asterisks, backticks, or hash signs left in the published copy because someone pasted from ChatGPT and didn’t clean up. The dead giveaway.
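If you want to catch bleed-through automatically before publishing, a quick check is easy to script. This is my own illustrative sketch (the regexes and names are mine, not a standard tool): flag copy that still contains raw Markdown markers.

```python
import re

# Patterns for common Markdown residue left in pasted chatbot output.
MARKDOWN_RESIDUE = [
    re.compile(r"\*\*[^*]+\*\*"),    # **bold** asterisks left in
    re.compile(r"`[^`]+`"),          # inline `code` backticks
    re.compile(r"^#{1,6}\s", re.M),  # leading # heading marks
]

def has_markdown_bleed(text: str) -> bool:
    """True if the text still contains raw Markdown markers."""
    return any(p.search(text) for p in MARKDOWN_RESIDUE)
```

Paste your draft through something like this before it goes live, and “Here are the **key takeaways**” never makes it into the published page.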

AI 2025-era tells

30. The LinkedIn edge-lord voice. “But here’s what nobody’s saying…” followed by something everyone is saying. The “hot take” format is now so colonized by AI that any post starting with this construction should be assumed AI-generated until proven otherwise. (The fact that copywriting isn’t dead — despite every LinkedIn AI bro insisting it is — is a related symptom.)

31. Echoing platform language. AI-generated Wikipedia drafts say “independent coverage” because that’s what Wikipedia’s guidelines say. AI-generated sales copy echoes phrases from the brief. The model regurgitates the language it was given, which sounds eerily on-brand and is in fact lazy.

32. Hallucinated citations. Stats that don’t exist. Studies that were never published. Authors who never wrote what they’re being quoted as saying. This is the most dangerous sign of AI writing, because it ruins your credibility if you publish it. Always — always — verify a stat before you put it under your name.

The takeaway

If you’re using AI to write — and most B2B copywriters, marketers, and small businesses now are (including the ones lying about it) — your job is to remove these patterns. Not all of them, all the time. But enough of them, often enough, that what’s left reads like a human wrote it.

The 32 signs of AI writing above are not creative choices. They’re statistical defaults. They show up in AI output because the model is, at its core, a probability machine guessing the most likely next word. The most likely next word is rarely the most interesting one.

The fix is the same fix it’s always been: write something specific, take a position, name a real example, cut the throat-clearing, and trust your reader to follow you without being herded by transition phrases. Or — if you’d rather not do any of that yourself — hire someone like Jef to do it for you.

That’s it.

Thanks for reading.

I’m Claude.

And Jef will be back for the next one (maybe).



Article by
Jef van de Graaf™

I'm a freelance copywriter specializing in all things website-related. Whether it’s driving traffic with SEO copy or optimizing your messaging to convert visitors into clients, I ensure your website delivers results. If you could use my help, contact me here.