Dear Reader
I’ve got an amazing demo for you, plus how to clear your 1,000-email backlog, some more detail about how Klarna is reinventing itself with AI, AND…
It’s launch season at Brilliant Noise. There’s newness aplenty – we’ve got new products and services, and I’m publishing a new paper on AI literacy in the next few weeks. As part of launch season, I’ve got an invite to an online briefing on October 2nd and an opportunity to take part in a beta test of a new service (with some free advice).
A low f***-up ratio
We’ll start with this account of Kamala Harris’s social media strategy. In the late 2010s, Trump emerged as the unexpected master of Twitter. But in 2024, Kamala Harris rules on TikTok. According to the Washington Post, contrary to boomer instincts, letting young people loose with your presidential candidate’s reputation has not been harmful (yet).
In 2016, a single Hillary Clinton tweet might have required 12 staffers and 10 drafts; today, many of Harris’s TikTok videos are conceived, created and posted in about half an hour.
It’s all about trusting the TikTok natives: all the staffers are under 25, have short turnaround times on content and have a lot of latitude.
Deputy campaign manager Rob Flaherty, who has described them as a pack of “feral 25-year-olds,” said the campaign started developing the strategy last year, worried voters had forgotten who Trump was and that the campaign needed “a voice that was more aggressive and hard-hitting” to remind them.
The team faces minimal content-approval checks and “barring objection, we’re gonna go. Everything goes on a five-minute warning,” Flaherty said. “You just gotta trust your people. Our f---up ratio [is as low] as if there were 19 layers of approval.”
Sam Ogborn, a marketing professor, has excellent commentary on this:
The ultimate back-to-school AI tool (which also makes podcasts)
This is incredible. You have to try it. (I know I sound like a giddy fool high on hype, but seriously.)
Google has been developing an AI-powered notebook, NotebookLM, for a good year or so. Now it’s out there for us all to try, and it has some tricks that will trigger major AI vertigo moments for many.
In NotebookLM you create “notebooks” of links, notes, PDFs, images and whatever you collect and it will help you order them, make sense of them, turn them into study guides and…
Podcasts.
I don’t mean it will read you the notes. That’s now a commonplace feature (iPhones will now read any web page to you if you have the latest updates). No, it generates a conversation between two upbeat American podcast hosts. The effect is beyond uncanny.
In the video here, NotebookLM has taken last week’s Antonym and made it into a podcast. To be clear, I didn’t do any production here. No choosing voice actors, no script writing, no video editing, nothing. It did it all for me.
The implications of this demo are worth dwelling on for a moment:
It’s another example of the fluidity of formats that is coming to content – writing can be instantly re-written for a different audience or turned into audio, audio can be turned into video and back into words, and so on. “Re-publishing” is the technical term in media, but it’s so much more than that. We’re used to content being a thing that attention is spent on, but something fundamental about that dynamic is changing.
Like all AI apps, this is the worst version of it that we will ever see – it’s not hard to imagine video or animated and illustrated versions of this making content out of notes.
The format is surprisingly useful. I listened to a 20-minute podcast it generated from a hefty research report I had written for a client. It was an easy way to recap the insights and ideas in that fairly dense document – a way back into a subject, a way to revise, or just to do your meeting prep without having to sit down with a stack of PDFs.
For learning and research this kind of app is very useful. It does a lot of the things that make working with AI so powerful – connecting lots of different documents, summarising, suggesting, coaching – all in a calm, useful interface.
Call for volunteers to test an AI power hour
At Brilliant Noise, we’ve developed a new way to help leaders overcome hesitancy and delay in using and understanding AI. The sessions will be called “AI Accelerator Hour” and the format is simple: pick a knotty task – a big report, pitch, plan, literature review etc. – get on a video call and work on it together for an hour using generative AI.
How far into the task or problem can we get working and thinking with AI tools like ChatGPT? It’s like a fun game, where the prize is getting your time back.
We’re going to start officially offering these sessions soon, but first want to test the process between now and October. I’ve three beta test sessions left next week – which will be free and confidential – if you want one, reply to this email.
This is the WhatsApp message from a very senior marketing executive who got a whole afternoon of work cleared after the AI Accelerator Hour:
Klarna’s AI lessons
After last week’s special I came across this interview with the Klarna CEO, Sebastian Siemiatkowski, on a video series from Sequoia, a venture capital firm. It’s worth a watch for the account of a company going all in on discovering how generative AI can help it. Five things I noted:
Just get on with it: Don’t wait for the "perfect" AI solution or for someone to tell you to get started. Siemiatkowski's interest in AI was sparked by trying ChatGPT after seeing it discussed on Twitter. This simple act of curiosity led to a decisive partnership with OpenAI.
Start exploring AI tools now. Familiarise yourself with platforms like ChatGPT and experiment with their capabilities. You’ll learn faster by getting hands-on experience.
Prioritise clarity and data quality: Siemiatkowski emphasises that clear instructions and high-quality data are paramount for both human and AI success. Just like onboarding a new employee, AI systems need well-defined processes and accurate information to perform effectively. Experiments are fine, but once you begin to scale or share AI solutions across companies you need clear documentation, established workflows, and access to relevant, high-quality data.
View AI as an accelerator, not a job terminator: AI shouldn't be feared as a job destroyer but embraced as a powerful tool to enhance human capabilities. Klarna discovered that AI and human agents can coexist and improve each other. While AI handles routine tasks more efficiently, human agents provide nuanced support and complex problem-solving. (Quite how Klarna is reducing headcount is murky – but they are emphasising turnover and reskilling.)
Address societal impacts with empathy and foresight: Siemiatkowski acknowledges the potential for job displacement due to AI and stresses the need for societal support in the form of retraining and adaptation initiatives. He also discusses the importance of ethical considerations early on, such as establishing a global electronic identification system to combat AI-generated fakes. As leaders, it’s crucial to proactively address the societal implications of AI, advocating for responsible development, ethical guidelines, and support systems for those whose roles might be impacted.
Think beyond immediate applications and experiment relentlessly: Don't limit your thinking to AI's current capabilities. Siemiatkowski envisions a future where AI powers personalised financial assistants and even generates bespoke products on demand. Klarna actively tests AI applications in various departments, from customer support to marketing, fostering a culture of continuous learning and discovery.
Embrace experimentation and think expansively about AI's potential. Encourage your team to explore AI tools, test new ideas, and discover innovative ways to integrate AI into your organisation.
Dream big, experiment often: Klarna’s not stopping at customer service. They’re dreaming up AI financial advisors and custom product generators. The key? Constant experimentation.
The lesson seems to be: embrace AI with open arms (and a healthy dose of common sense). It’s not about replacing humans; it’s about supercharging what we can do. The reward can be a huge amount of time and money saved.
Post-holiday inbox zero: a practical solution
Ah, the elusive Inbox Zero. It’s like the Loch Ness monster of productivity - often talked about, rarely seen. But productivity writer Oliver Burkeman has some refreshingly sane advice on the FT Working It podcast:
Quarantine the Old Stuff: Create an “Old Email” folder and dump everything in there. Start fresh. It’s like decluttering, but for your digital life.
New Emails Are VIPs: Focus on the fresh stuff. It’s like triage for your inbox.
Embrace Imperfection: Not every email needs a response. Some will solve themselves. It's an email, not a moral obligation.
Remember, the goal isn’t an empty inbox - it’s a manageable one. Your sanity will thank you.
Speaking of limits, Burkeman dropped this gem: “There’s something about knowledge work that I think makes it really hard to keep our limits in sight.” Ain’t that the truth? It’s like our brains are all-you-can-eat buffets, but we forget we have finite plates. Maybe it’s time we embrace our limits instead of constantly trying to “hack” our way past them. After all, even AI has its limits… for now.
One more ad…
“Prepared Minds” briefing: live session
Join us as we share insights from our new paper about AI literacy, “Prepared Minds: How to Start Your Team’s AI Revolution”.
AI as a Language – Learning to use AI effectively is compared to learning a new language, requiring practice and immersion rather than simple technical training.
AI Literacy Boosts Innovation – AI-literate teams don’t just automate tasks; they invent entirely new processes and products, transforming routine activities into creative, high-value work.
Emotional Resistance to AI – Despite clear productivity gains, many people feel emotionally resistant to using AI, fearing its impact on jobs and organisational dynamics.
Date: Wednesday, 2nd October
Time: 11:00 am BST
Platform: YouTube Live
Can't make it live? Reply to this email and I’ll send you the paper upon release.
What I’m Watching
The “competence porn” of Shogun
I recently watched Shogun (Disney+) for the second time and loved it all over again, so it was great to see the series pick up so many Emmys this week. I also enjoyed this Substack by Matt Alt about why the series was so popular in America (hat tip to Helen Lewis):
Hiroyuki Sanada, who took home the Lead Actor Emmy on Sunday evening, plays would-be Shogun Yoshii Toranaga with such flair that it borders on competence porn: the thrill of watching a talented statesman-warrior navigate an even worse time period than our own.
Succession again
There’s little grabbing my attention on streaming right now, so I’m working through Succession and VEEP from beginning to end, again. This is no chore. Not at all. The writing in both is brilliant, although after reading the above about Shogun I expect that they both fall into the new genre of “incompetence porn”. I know they share some writer DNA and connections, but both are really all about people who think they are brilliant but are repeatedly forced to confront their own lack of competence.
That’s all for this week…
Remember, in the face of AI revolutions and inbox avalanches, a little curiosity and a lot of systems thinking go a long way. Keep experimenting, keep learning, and for goodness’ sake, don’t let your email rule your life!
If you enjoyed this, give us a “Like” or a share. Ah, g’wan. Makes my day.
Antony