Antonym: Tsunami of Lies Edition
Five minutes will get you to the end of this newsletter. If you tread on a link you may be in it for the rest of the day.
Dear Reader
This week’s reflection collection includes a bit about being wrong, a surprise history of balloon bombs, a very expensive wrench and more from the rapidly moving front-line of here-and-now AI and the information wars.
Speaking of AI, I tried to get an imitation of Stephen King’s take on the metaverse via GPT-3. I suspect it’s on point but it started to try and censor itself:
Pandora’s box of disinformation
In previous weeks, I’ve mentioned the wonders of the GPT-3 artificial intelligence (AI) project and its sibling-programme, the DALL-E 2 image generator. Two important and deeply unsettling things to note this week.
While GPT-3 use is with the permission and oversight (kind of) of the company that developed it, OpenAI, a rival “large language model” platform was released by Meta (formerly Facebook):
Meta is making its model, called Open Pretrained Transformer (OPT), available for non-commercial use. It is also releasing its code and a logbook that documents the training process. The logbook contains daily updates from members of the team about the training data: how it was added to the model and when, what worked and what didn’t.
The thinking behind making the platform available to all comers is partly about learning faster and partly about transparency. One downside of the availability of such technology to everyone is that disinformation efforts will get a huge boost.
In a September 2020 essay in The Atlantic, Renée DiResta was writing about earlier versions of GPT and could already see the exponentially expanding mass of writing on the wall, hence the title of the piece: “The Supply of Disinformation Will Soon Be Infinite”. Basically, if you can use these tools to generate writing and images that seem to come from real humans, the job of disinformation peddlers gets that much easier. The danger is that masses more fake news will make it harder to trust anything.
For the moment, at least, it seems unlikely that generative media will be as effective as traditional media at promoting political messages. However, that’s not to say that it couldn’t be. What it will do is muddy the waters, making it much harder to tell what’s real and what’s not.
Remember, the point of disinformation is not to fool you, necessarily, but to bore, confuse and exhaust you so that you miss the information that’s really important, or just give up trying to understand the world.
What a tool!
If you suspect you’re not spending enough on torque wrenches, then the self-satirising “How To Spend It” supplement to the current FT Weekend has Jony Ive’s pick: a $450 tool by Snap-on. Buying Jony’s 12 “tools of the trade” will set you back a few grand more than a MacBook Pro.
How to be wrong
Adam Grant explored the difficulty of changing your mind in Think Again. We often think of what we know as part of our identity, which makes us more likely to defend beliefs than reconsider them in light of new evidence.
I may have a long-form bit of writing brewing about “how to be wrong”. This week a couple of passages caught in my field of focus and gave me the structure of the argument.
1. We don’t know anything for sure…
The future is not simply unknown, which suggests a lack of information, but unknowable – what will be is uncertain. There is no information. — Designing Organisations, by Naomi Stanford
2. But we think we do…
The most pervasive obstacle to good thinking is [confirmation bias], which refers to the human tendency to search only for evidence that confirms our preferred beliefs. Even before the advent of social media, search engines were [supercharging] confirmation bias, making it far easier for people to find evidence for absurd beliefs and conspiracy theories, such as that the Earth is flat and that the U.S. government staged the 9/11 attacks. But social media made things much worse. — “Why the Past 10 Years of American Life Have Been Uniquely Stupid”, by Jonathan Haidt
As the physicist Richard Feynman said: “The principle is that you must not fool yourself, and you are the easiest person to fool.” His recommended technique for spotting when we are fooling ourselves was to write out an explanation of the thing you think you understand, then mark where you struggled to explain something. Those are the things you only think you understand.
3. We need people to disagree with us…
People who think differently and are willing to speak up if they disagree with you make you smarter, almost as if they are extensions of your own brain. People who try to silence or intimidate their critics make themselves stupider, almost as if they are shooting darts into their own brain. — “Why the Past 10 Years of American Life Have Been Uniquely Stupid”, by Jonathan Haidt
My colleague, Gareth James, a veteran creative director, is very clear on the importance of being able to “stand side by side and look at the work on the table”. According to Gareth, the best work comes from a process of collaboration, coming up with different ideas and then figuring out which ones are worth developing. The process falls apart as soon as someone starts to identify too much with a particular concept. Someone asking him to defend a creative idea is a warning sign that things are off-course.
This insistence on getting some distance between yourself and an idea or a belief or a piece of knowledge was echoed in a technique the Centre for Teams uses, that they call the team eye.
Other bits…
Writing
Currently in progress:
How to be wrong. Got to expand on the notes above and get this thing out of my head.
Metaversus reality. I’ve been commissioned to give a talk on the concept of the Metaverse, and how not to be fooled by the hype, so this really will get written now.
Global-Local brand marketing. Due this week, and looking at how test-and-learn methods can improve operational performance.
Books
It’s been a quiet week for book-reading: How to Be Animal (deep thinking about what it is to be human), Designing Organisations (the best and most brilliant book about org design) and Razorblade Tears (post-Trumpian revenge thriller) are all doing good duty.
Inside the Trump White House
“This was Trump pulling a Putin” is a brilliant in-depth piece from the New York Times, talking to US foreign policy advisor Fiona Hill. She served in the Trump administration and testified at his impeachment. A principled, highly intelligent professional, her accounts of Trump’s behaviour in the White House are incredible and fascinating. Most chilling of all, though, was a comment from another of Trump’s advisors, John Bolton:
When I asked whether he believed Trump could be viewed as an authoritarian, Bolton replied, “He’s not smart enough to be an authoritarian.” But had Donald Trump won in 2020, Bolton told me, in his second term he might well have inflicted “damage that might not be reparable.” I asked whether his same concerns would apply if Trump were to gain another term in 2024, and Bolton answered with one word: “Yes.”
Trump is currently odds-on to win another presidency.
Meanwhile in advertising…
The S4 Capital results finally came out on Friday, a month after they were due. Sir Martin was embarrassed by the delay.
The FT said the issues behind the delay were “control weaknesses, staff turnover and a lack of documentation, particularly relating to revenue and cost of sales recognition”. Poor admin, then?
The results showed growth, but not profitability:
Losses before tax swung to £56mn, compared with a profit of £3mn the year before, and the company stated that it would have liked to improve its operational earnings before interest, taxes, depreciation and amortisation margin, “which was impacted by the significant investment required to bed down our growth”.
Fu-Gos there?
In random history news, I chanced upon a World War II story I’d never heard of before this week: the Japanese Fu-Go balloon bombing campaign. An audacious military failure, the campaign sent incendiary and high-explosive bombs across the Pacific on paper balloons riding the jet stream, with the intention of spreading panic and starting fires in forests and crops. The only fatalities came after the campaign had finished, when a bomb which had landed intact in a forest in Oregon killed a woman and several children out for a picnic. A very weird detail is that in March 1945 one of the balloons knocked out power to the Hanford site in Washington, briefly interrupting the reactors producing plutonium for the atomic bombs that would be used on Japan.
Watching
Outer Range (Prime), now into the final quarter of its first season, feels like it is teetering on the edge of the massive hole at the centre of the plot (no, there literally is a massive hole) and could be amazing, or could lose itself in the last two episodes due out this week.
The French Dispatch (Disney+) is an episodic bit of Wes Anderson whimsy. It’s no Grand Budapest Hotel, but it is joyful and diverting and I’ll happily watch it again.
Paddington (Disney+) was watched as an antidote to the massively stressful (but brilliant) Boiling Point (Netflix). Stephen Graham hits the peak of his gift for radiating intense rage and frustration in the latter.
Clark (Netflix) is a kind of Flashman-meets-Scandi-crime caper. After a riotous and giddy first episode, I’m looking forward to seeing where it goes.
Listening
Podcast of the week is Kermode and Mayo’s Take. The BBC film review show it replaces was one of the first podcasts I ever listened to, during some long commutes in the mid-2000s, and I’ve not stopped since. Free of the shackles of broadcasting standards and time-limits, they’re now taking in TV series. But the show’s as much about the bickering and the relationship as anything.
Quote of the week
The future is not simply unknown, which suggests a lack of information, but unknowable – what will be is uncertain. There is no information. — Designing Organisations, by Naomi Stanford.
Thank you for reading. I hope there was something here that piqued your curiosity.
Antony