Antonym: The Not Brain Rot Edition
Your brain isn't rotting. But your information diet might need work.
Dear Reader
This week, I am recharged and full of revolutionary fervour, so brace yourselves for:
Instructions on how to re-mix your content for fun and profit
The end of the advertising world as we know it
Consultants cutting prices
How not to brain rot
But first, a bouche non amusée (the opposite of an amuse-bouche?), in the form of an ad for an American gambling site. Watch it, then we’ll talk:
"The world's gone mad, trade it."
Translated into British English ad-speak.
"End of the world? Have a flutter."
This is so Douglas Adams that it makes me laugh, but also so desperately dystopian that I cringe from its unintentional satire.
Ad giants no longer fighting on the beaches
Maybe not the end of the world. But the end of advertising as we know it? Again? For real this time?
Three years ago my creative director colleague at Brilliant Noise shaved a day off her process by using Midjourney to create mood boards of images. Today it's hard not to create the whole pitch and a production-ready campaign in a day or so. Sometimes.
And it's not just creative. Back from the inspiration-or-misery pit of the Festival of Advertising Creativity in Cannes, the verdict was clear for many: the war is over. Tech won.
The advertising agencies are in retreat (literally: they used to rent sections of the beach; now they are relegated to hotel suites, while Google, TikTok and especially Meta expand their waterside sales rooms).
And it's not just the creative that's suffering. Last December we started telling clients about the prospect of marketing services companies being "Sherlocked" by Meta (agency business models becoming free features of big tech adtech platforms). Six months later and it's moving fast. Meta is set on launching ad creative+planning+optimisation with AI-driven processes by next year.
On an episode of Pivot this week Scott Galloway (also just back from Cannes):
[Meta has] already gone after the big guys. They've already gutted WPP, Omnicom and, to a lesser extent, Publicis [...].
They're going to go after regional agencies now. They're going to go after small producers that produce these little commercials for Toyota of Northern California. They're going to say, using our AI, you don't need to go anywhere else. You don't need creative, you don't need media planning [...]
The tech stack they're offering to marketers right now? Good luck competing with that. The only people that can compete with it are Alphabet. Pretty soon, Meta's beach is going to be Saint-Tropez.
So what?
So what can agencies do? Procurement, budget and re-pitch cycles usually slow down clients' moves between service providers. But if the savings and performance gains are high enough, organisations will find ways to extract themselves from multi-year deals and bring their tech in house.
Get big? The strategy of the large holding companies has been to get bigger, but that's doomed: it makes them slower, and a giant in marketing services is a dwarf next to big tech. Meta has 10-12 times the revenue of the largest ad group (Publicis) and more than double its profit margin (40% vs 18%). Publicis is spending €100 million a year on its AI transformation programme. Meta probably just paid a $100 million signing bonus for a single AI expert (an OpenAI scientist who helped build the o1 reasoning model).
Niche up? For mid-size and smaller agencies one strategy has been to niche – find a sector and be the best at servicing it. It offers some protection. That won't work for much longer as Meta's AI self-serve ad platforms come online, because the clients are also sector experts and will be able to plug what they know directly into the system.
Productise? Marketing services and communications consultancies are launching products at an unprecedented rate. It's a smart move – licence fees, lock-in and differentiation are the pay-offs if they get it right. But that's a big "if". Agencies don't have a product mindset or experience; even if they see the attraction of breaking away from time and materials, they need to come to terms with the risks and processes of developing software, managing technical debt, and a thousand other details. Nonetheless, becoming something more like product companies is likely a route to survival, and perhaps even thriving, for some agencies.
Consultants cutting prices
PwC just admitted something most firms won't: AI is making them so efficient, they're cutting prices.
"Clients would hear us talking about using AI and say, 'We want our fair share of those efficiencies,'" PwC's Chief AI Officer Dan Priest told Bloomberg. Translation: we can do the work faster, so why should you pay the same?
It's a rare moment of honesty in an industry built on billable hours.
The Efficiency Trap
Here's the thing about productivity gains: they're hard to hide. When your team can complete a three-week project in five days using AI, clients notice. They start asking uncomfortable questions about why they're still paying three-week prices.
PwC found themselves in this exact position. Their AI tools were saving significant time on routine tasks—document analysis, report generation, data processing. The kind of work that traditionally ate up junior consultant hours.
But clients aren't stupid. They could see the same presentations getting delivered faster. The same insights arriving ahead of schedule. Eventually, someone was going to ask: "If this takes you half the time, shouldn't it cost half as much?"
What This Signals
PwC's price cuts aren't just about one consulting firm. They're a preview of what happens when AI productivity gains become impossible to ignore.
Sam Altman has predicted that AI will drive down prices across many sectors. Not because companies want to be generous, but because competitive pressure will force them to pass savings along.
We're seeing the early stages of this dynamic. When your competitor can deliver the same quality work for less—thanks to AI—you either match their efficiency or lose the business.
The Broader Picture
This puts traditional service businesses in an interesting bind. The billable hour model assumes time equals value. But AI breaks that equation. Value increasingly comes from insight, strategy, and judgment—not the hours spent generating it.
Smart firms will use this transition to focus on higher-value work. Instead of junior consultants spending weeks on PowerPoint decks, they can tackle more strategic challenges. The humans do what humans do best; the AI handles the grunt work.
But it requires rethinking how you price and position your services. PwC is getting ahead of this curve by acknowledging the efficiency gains upfront.
What's Next
Expect more firms to follow PwC's lead. Not because they want to cut prices, but because clients will demand it. When productivity gains are obvious, hiding them becomes impossible.
The firms that thrive will be those that use AI to deliver better outcomes, not just faster ones. Speed is a commodity. Insight isn't.
The consulting industry just got its first taste of AI-driven deflation. It won't be the last sector to experience this pressure.
Your clients can see the efficiency gains too. The question is whether you'll acknowledge them before they ask.
Practical: Content cookery
Most marketing content teams use AI like a faster typewriter. They're missing the opportunity: it undersells both the team and the quality of the content they are working with. It’s a task-level way of thinking about AI that can be upgraded by thinking of the work through the frame of Data-Process-Output.
Last Sunday’s Antonym was about the need for urgent action on AI adoption and acceleration by leaders. BCG also published a thorough but traditional report about "AI-first" companies, which gratifyingly makes the same case to chief executives as we did (TL;DR: get on with it!).
I’d summarise it here, but let’s have some fun with it instead.
Like many trad reports it is written in PowerPoint format but meant to be read as a document, which results in strange design choices. So, to take up a culinary metaphor, let's not treat it as a dish to be eaten as is – let's think of it as data, an interesting ingredient to work with.
The BCG brains behind the report have done good work gathering data and offering sensible advice to their readers. What can we do to bring those textures and flavours to life and make the whole thing a delicious read?
In the process we might learn a thing or two about how to process any useful bit of information.
Here’s the video version:
Full instructions including suggested prompts on how to remix whitepapers using everyday tools can be found on this website made with Gamma:
But essentially:
Get your report turned into plain text (Gemini is great for this).
Use Gamma and Claude / ChatGPT / Gemini to remake the content for different audiences and platforms
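The tools do the heavy lifting, but the Data-Process-Output idea is simple enough to sketch in code. Here's a minimal, hypothetical Python helper showing one "process" step: taking a report's plain text (the data) and packing it into carousel-sized chunks (one possible output). The function name and the character limit are my illustrative assumptions, not part of Gamma, Claude or any tool mentioned above.

```python
def chunk_for_carousel(text: str, max_chars: int = 280) -> list[str]:
    """Split plain report text into slide-sized chunks for a carousel.

    Splits on blank-line paragraph boundaries, packing paragraphs
    together until the next one would push a slide past max_chars
    (an illustrative limit; a single over-long paragraph still
    becomes its own slide).
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    slides: list[str] = []
    current = ""
    for para in paragraphs:
        if current and len(current) + len(para) + 1 > max_chars:
            # Current slide is full: start a new one.
            slides.append(current)
            current = para
        else:
            current = f"{current}\n{para}" if current else para
    if current:
        slides.append(current)
    return slides


# Example: three short paragraphs become two slides of <= 30 characters.
slides = chunk_for_carousel(
    "First point about AI.\n\nSecond point.\n\nThird.", max_chars=30
)
```

The same data could just as easily be fed into a different "process" for a different output: a deck outline, a script for a video, a one-page summary. That's the frame at work.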
And the results?
Bite-size: A carousel for LinkedIn
Click on the image to view this Gif-fest:
A presentation remix with Gamma:
Or hand the raw report over to Claude and have it create an instant website:
Brain rot. Not.
A friend sent me a viral post about an MIT study supposedly finding "terrifying brain rot" from ChatGPT. It's bollocks. Not linking to it. But you've probably seen it (or similar). "MIT scanned people's brains… shocking results!" followed by claims about cognitive collapse, brain atrophy, and the death of human thinking. Shared thousands of times. Quoted in meetings. Used to ban AI tools in schools.
Here's the problem: it's a masterclass in how to take legitimate research and twist it into clickbait fear-mongering.
As the report’s authors say (to no avail):
What Actually Happened
MIT researchers did scan brains while people wrote essays. They compared three groups: no AI help, search engines, and ChatGPT users. The study found interesting patterns about memory and neural activity.
But "terrifying results"? Not quite.
The viral post claims 83% of ChatGPT users couldn't quote their own essays in a test. That figure is real. But it ignores that this gap narrowed dramatically over time. By the third session, most could quote their work fine.
It's like testing someone's ability to drive a manual car on their first lesson, then declaring they'll never master it.
The post screams about "47% brain connectivity collapse." Sounds alarming. Except that figure doesn't appear in the paper. It's likely a misinterpretation of one data point from one frequency band in one session.
The actual EEG results were complex and variable. Some sessions showed recovery. Others showed adaptation. The researchers themselves describe this as "cognitive offloading" – not brain rot.
Meanwhile, the productivity paradox claim mixes findings from the MIT study with different research, without attribution. It's like blending ingredients from different recipes and calling it a single dish.
What the Study Really Shows
The MIT research suggests three things worth noting:
Early AI use correlates with lower memory recall. When people first use ChatGPT for writing, they remember less about their own work. Not shocking—if a tool does the heavy lifting, you're less engaged with the process.
Search engines sit in the middle. Better memory than ChatGPT users, not as strong as writing unaided. This gradient matters more than the binary framing suggests.
Strategic use might work best. The group that started without AI, then switched to it, showed promising results. High recall, good essays, engaged brains.
None of this screams "cognitive apocalypse." It suggests we need better AI literacy and smarter integration strategies.
The viral post commits three sins that plague AI discourse:
Sensationalism over substance. "Collapsed connectivity" grabs more attention than "complex patterns requiring further study."
Binary thinking. AI good or AI bad. No middle ground. No nuance. No recognition that tools can be used well or poorly.
Cherry-picking panic. Take the scariest data points, ignore the context, skip the limitations, and present it as definitive truth.
This isn't just sloppy science communication. It's actively harmful. Schools are banning AI tools based on misunderstood research. Teams are making policy decisions based on viral misinformation.
Ask better questions. Not "Is AI bad for brains?" but "How can we use AI tools to enhance rather than replace thinking?"
The MIT study is valuable because it's nuanced. It suggests we need strategic approaches to AI in learning and work. That's useful. That's actionable.
Fear-mongering about brain rot? That's noise.
AI tools change how we think and work. That's worth studying carefully. But viral posts that strip research of context, add inflammatory language, and ignore inconvenient findings aren't helping anyone make better decisions.
The real cognitive threat isn't ChatGPT. It's the willingness to share scary-sounding claims without checking if they're true.
Your brain isn't rotting. But your information diet might need work.
Claude apps
Try sharing a ChatGPT custom GPT with someone who doesn't have a premium account. They'll hit a wall of warnings about "limited features" and upgrade prompts. It's like inviting someone to dinner, then making them stand outside the restaurant explaining why they can't afford the full menu.
Claude's approach? Here's the link. It works. Done.
Here’s a Morning Mind Coach for daily reflection. I lifted the prompts straight out of my ChatGPT custom bot version and it is much nicer (and easier to access).
Or try the NOT OFFICIAL website graphic version remix of the BCG report (I asked Claude to sort their brand out while building it). Again, completely accessible.
The best AI tools are useless if people can't access them. Claude's artifacts remove the friction that kills adoption. No premium subscriptions blocking the door. No feature comparisons making people feel excluded.
It's democratisation through design. And it makes sharing AI-powered tools as easy as sharing a Google Doc.
Other nice things
🔭 Rubin Observatory: "First Look" Images Reveal Stunning Details of Trifid and Lagoon Nebulae
The NSF-DOE Vera C. Rubin Observatory has released new images showing extraordinary detail of the Lagoon Nebula region, including the Trifid Nebula and several star clusters. These "First Look" images showcase the observatory's capabilities ahead of its full Legacy Survey of Space and Time.
📊 Financial Times: Researchers Face Mounting Challenges in Understanding How AI Models Actually Work
As AI systems become increasingly complex and widely deployed, scientists and engineers struggle to develop effective methods for interpreting and explaining the internal workings of large language models, raising questions about transparency and accountability.
🎬 LinkedIn: Tianyu Xu Introduces CASCADE Framework for More Effective VEO Video Prompts
A new prompt structure called CASCADE is helping creators improve their success rate with VEO 3, the AI video generation tool. The framework breaks down prompts into seven key elements: Camera, Ambiance, Subject, Context, Action, Dialogue, and Emotion, with special emphasis on camera work, subject definition, and dialogue for vlog-style content.
That’s all for this week
Thank you for reading. There are more than 800 subscribers to this newsletter now and I couldn’t be prouder to share some thinking with all of you every week.
As ever, if you enjoyed it a “like” or a comment is very much appreciated.
Antony