Antonym: The AI Writing Edition
Can you spot AI copy? Do you know what periphrasis is? Me neither!
Can you tell if an AI has written an article? The answer is yes, if it is done artlessly on an older AI model. Otherwise probably not. Sadly, a great deal of writing is done artlessly, and an increasing amount on free or cheap generative AI apps.
Recently, an author told me that they can “just tell” when AI has written an article. “I can’t put my finger on it, but I know immediately,” they said. “It’s just a bit… crap.”
I knew what they meant, but I wanted to put my finger on exactly what it was that was bad, and to take a look at the latest clutch of advanced AI models and see if they were still making the same mistakes.
Our writerly jargon of the week is prefatory phrases, which refers to the little wordy run-up to a sentence that you sometimes see, such as “Notably…”, “It could be said that…”, or “That is to say”. It’s one of the ways you can spot that someone has leant a little too much on ChatGPT to knock out their copy for them.
You’ve probably heard of the RIRO rule (rubbish in, rubbish out). If you put in nothing more than “write me an article about x”, then you will get the most average and dull bit of writing about x that is possible.
Results improve the more a writer brings to the prompt party. A sophisticated prompt using the role-task-format (RTF) or similar approach? Much better. Some research uploaded with the prompt and a bit of conversation with the AI to refine it? Even better. The best results in my experience come from using AI to help with the hard bits – structuring a messy first draft or heap of notes, giving critical feedback about what might work better, even imagining what a favourite author or editor might say (see my trusty, free OrwellBot for a critique of your prose).
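For the more technically minded, here is a rough sketch of what an RTF-style prompt might look like if you were calling a model from code rather than the chat window. It uses the OpenAI Python library; the model name, the role and the brief are all placeholders of my own, so swap in whatever you actually use.

# A minimal sketch of a role-task-format (RTF) prompt sent through the
# OpenAI Python library. The model name, role and notes are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from your environment

rtf_prompt = (
    "Role: You are a sub-editor at a British business magazine.\n"
    "Task: Turn the notes below into a 300-word news item in British "
    "English, with no cliches or prefatory phrases.\n"
    "Format: A one-line standfirst followed by three short paragraphs.\n\n"
    "Notes:\n- launch moved to June\n- pricing unchanged\n"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder: use whichever model you have access to
    messages=[{"role": "user", "content": rtf_prompt}],
)
print(response.choices[0].message.content)

The point isn’t the code – it’s that the role, the task and the format are all stated explicitly rather than leaving the model to guess.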
The markers: how we know it is AI writing crap and not humans
We can’t reliably spot AI copy, even if we think we can, and so far apps that claim to be able to do it are useless and dangerous (false positives have caused a lot of pain to students).
My hypothesis: The lazier the writer, the easier it is to spot the hand of AI.
So I had ChatGPT 3.5 write an essay about bad writing online and went through it (add your own comments if you like).
Reading closely, I picked out the bits that were snagging my credulity, the “tells”:
American spelling (if you’re not American): A “specialize” or “itemize”, “color” or “armor” will show that you not only used AI but didn’t even bother to proof the thing before you published it. Even the advanced AI models ignore pleas for British English spellings sometimes. It’s not proof that it is AI generated, but as soon as I spot one I’m on my guard.
Clashing adjectives: For instance, a “glaring characteristic” just doesn't work. Errors glare, characteristics are common or defining. The adjectives it chooses are ill-fitting, just slightly wrong.
Clichés galore:
harness [why does all tech have to be put in a harness?]
honing [skills are always honed]
impactful [even saying it out loud feels like chewing chalk]
“a rare gem often overshadowed” [mixed metaphor]
The aforementioned prefatory phrases… the annoying, usually redundant bits at the start of sentences, such as:
“We must remember that”
“At its core”
“In the vast expanse of the internet”
“At first glance”
“In the digital era”
There’s also a certain amount of periphrasis – what a word! – where six ill-chosen words take the place of one that would do the job. “The realm of the…” is a common – and redundant – phrase in ChatGPT prose.
But the new tools are a different story
Generating bad writing was a lot easier using free AI tools like ChatGPT 3.5 (the one you get without paying for ChatGPT Plus). The GPT-4-and-above class of LLMs (large language models), which includes Google Gemini Advanced/Pro/Whatevz, Claude 3 Opus and Mistral Large, was really good even with a basic prompt.
Down at the Brilliant Noise testing lab, I ran the same simple essay prompt through six other models. The results were surprising. I’ll share more elsewhere, but in short Claude 3 Opus (which came out a couple of weeks ago and beat ChatGPT 4 on some tests) was the clear winner – producing credible, readable and useful prose in one attempt. The others beat the ChatGPT 3.5 example I analysed first, with only Microsoft Co-Pilot struggling.
The conclusion: it’s easy to spot people using AI for writing if they use the free versions of AI tools and are lazy and don’t edit. Otherwise, you’d be hard pushed to see the difference.
Usable and useful methods of writing with AI
Creating your double
This is something that a lot of writers have been doing. Scott Galloway has put all his books into a bot and consults it during his writing process and when preparing for podcasts.
Using the custom chatbot feature in ChatGPT Plus (annoyingly called GPTs), I loaded some books I’d written, as well as 18 months of daily letters to my company that I wrote during the pandemic, and christened my little bot Mayphex Twin. I can submit notes or rough copy to it and have it write in my style. In fact, I’ve had to ask the bot to dial it down – literally, to about 75% strength – otherwise I just find my doppelgänger’s prose obnoxious (yes, I am reflecting on what this might mean).
Critical friend
Some people like AI to create a first draft for them, but that doesn’t work so well. Creating a structure, or just treating an AI like a writing coach and talking about what you’re working on, works very well, however. There’s a specific writing app called LEX that behaves a lot like Google Docs but with a lot more AI firepower than even the Gemini-enabled version. The feature I really do like on LEX is the writing coach. It’s brutal, which, if you’re writing alone, is really valuable for getting a piece into shape.
Here it is giving an early draft of this newsletter a kicking:
Research everything
It’s the prep that AI can be excellent at…
Perplexity – more people are talking about Perplexity now and rightly so. It makes incredible use of generative AI to deliver you answers to questions like little personalised Wikipedia articles that you can have a chat with. It also links to all of its sources so you’re less likely to be led astray by “hallucinations”.
ChatGPT Plus – the Consensus GPT. If you have premium ChatGPT – and you really should, it will pay you back the £20 a month in no time at all – then search for the Consensus GPT, which lets you ask questions about scientific research and brings up relevant papers. Can’t read academic writing easily? Not a problem – ask ChatGPT to explain it to you at a reading age of 16. If it still doesn’t make sense, lower the reading age until you get it.
Notion, Obsidian and other note-taking apps are getting AI add-ons. Some of these mean another subscription for extra services, but if you have a large collection of notes, bookmarks and Kindle highlights, it can be magical to do research based on your own previous reading.
That’s it for this week
If you’d like more on writing with AI, take a look at the 10 Things newsletter, for which I wrote an article about prompting this week. And sign up while you’re there – it’s a fantastic read.
Thank you for reading, and if you liked it, let us know!
Antony
PS No time for TV recommendations this week, but I will say that Shogun on Disney+ is excellent.