Dear Reader,
“Why aren’t there more books about AI?” I wondered while assembling my 2024 books list.
I realised they were all connected by one theme: coming to terms, philosophically and practically, with living through the beginning of a revolution in human thought. Where fiction helped me work through ideas and feelings about the sense of self and the unsettling of certainties about society, many of my non-fiction books were about the practicalities, the what-must-be-dones of it all.
Artificial intelligence is unsettling not just as a force of economic and cultural change, but as a technology that holds a mirror up to cognition. It is a frightening process that starts with us asking hard questions about whether a machine system can think, be self-aware or create, and ends with us shifting awkwardly in our seats as the questions left hanging in the air start to feel like accusations against our own assumptions and narratives about the world.
An Immense World, by Ed Yong
Do you remember reading an amazing book full of facts when you were a child and excitedly telling your parents or siblings about what you’d learned? The sheer thrill of the idea that a T. rex had 60 razor-sharp teeth, each the size of a banana! The jaw-dropping revelation that Krakatoa’s eruption was the loudest noise ever made on Earth as it exploded with the power of 15,000 nuclear bombs!
Recapturing that feeling is the first reason to read An Immense World. Ed Yong’s mission is to have us understand the vast range of ways of experiencing the world through senses, and how we and every other creature are trapped in ways of seeing the world that are peculiar to us.
The book takes us through the incredible science of how different animals experience the world. Ultimately we learn that all creatures, humans included, live in a world defined and made by what they are able to perceive. We are led to imagine what an electric eel can “touch” through its charged field, how a spider “sees” its prey through vibrations in a web, or how a dolphin using its echolocation abilities perceives our skeleton and internal organs as well as our external shape.
When we have conjured these strange ways of seeing as best we can, it becomes much easier to understand the previously unnoticed limits and biases of our own senses and how they skew our understanding of the world.
Early on, he introduces the wonderful concept of an umwelt, the world as a specific species experiences it:
Our umwelt is still limited; it just doesn’t feel that way. To us, it feels all-encompassing. It is all that we know, and so we easily mistake it for all there is to know. This is an illusion, and one that every animal shares.
Doppelganger, by Naomi Klein
Where An Immense World challenges our frames through joyful explorations of the world, Naomi Klein’s Doppelganger is a true crime investigation into an unfolding dystopia. It begins with the titular doppelgänger of Naomi Wolf, the feminist author turned conspiracy theorist often confused with Klein. Examining identity, conspiracy theories and polarisation, the book is a long and winding train of thought through problems of our time.
While I didn’t always agree with her conclusions, I welcomed every challenge. Often I was shocked by her descriptions of previously unnoticed and unnamed things. For instance, the Klein-coined “mirror-world,” where far-right commentators accuse their opponents of their own crimes (e.g., white people claiming to be the real victims of racism, or men claiming to be the ones denied equal rights).
The power of labeling and challenging the reader to confront new realities makes this a rewarding but not easy read. Worth it, though.
Once you see how easily online identities can be warped—yours included—everything you thought was fixed becomes permeable and uncertain.
Co-Intelligence, by Ethan Mollick
At my company, we were ready for this book’s release, having read Ethan Mollick’s One Useful Thing newsletter for the past couple of years. We’ve made it recommended reading for anyone we work with.
Its value lies in its practical guidance for navigating AI’s transformative power. While some parts felt entry-level for us in the field, they are vital for non-technical audiences. Mollick aptly describes generative AI as a “co-intelligence” that “augments, or potentially replaces, human thinking to dramatic results.” His exploration of hybrid models like “Centaurs” and “Cyborgs,” emphasising collaboration between humans and AI, is insightful.
Mollick argues that organisations must prioritise experimentation and adaptability: “The way to be useful in the world, be an expert human.” This book encouraged our AI work, validating our approach while offering fresh perspectives. For anyone, it’s the ideal starting point to embrace generative AI’s potential responsibly and practically.
Here are five useful tips from Co-Intelligence:
1. Treat AI as a Thinking Partner, Not a Tool
• Insight: AI excels at generating ideas, identifying patterns, and offering creative solutions, not just executing tasks.
• Action: Engage with AI by iterating on its outputs, treating its suggestions as a springboard for refinement rather than final answers.
2. Focus on Prompt Engineering
• Insight: The quality of input (prompts) determines the quality of AI’s output.
• Action: Experiment with structured and specific prompts, refining them to improve AI’s responses. Use step-by-step instructions or ask the AI to “think aloud” in its reasoning.
3. Combine Human Expertise with AI Strengths
• Insight: Humans excel at intuition and contextual understanding, while AI is superior at processing vast amounts of information quickly.
• Action: Use AI to generate first drafts, automate repetitive tasks, or explore unfamiliar domains, while humans focus on evaluating and editing outputs.
4. Embrace Iteration and Experimentation
• Insight: The collaboration between humans and AI thrives through cycles of experimentation and refinement.
• Action: Test different approaches with AI, iterate based on results, and leverage the feedback loop to enhance creativity and problem-solving.
5. Maintain Ethical Oversight
• Insight: AI lacks ethical judgment and contextual understanding, so human guidance is essential to ensure responsible use.
• Action: Establish boundaries for AI use, verify outputs critically, and be mindful of biases or errors in AI-generated content.
Right Kind of Wrong, by Amy Edmondson
Right Kind of Wrong is the first book to make two of my annual best books lists, as I included it in my 2023 business books recommendations, the year it won the FT Business Book of the Year.
I’m including it again because I reread it and have cited it more than any other book in my AI client work. While Mollick concludes that experimentation and discovery are the best ways to understand AI’s potential, most organisations are culturally conservative – even when they say they are not – and find it hard to commit to work that must sometimes fail in order to learn, and that demands real daring.
Here are five things Edmondson recommends:
1. Differentiate Types of Failure
• Insight: Not all failures are created equal. They can be categorised into three types: preventable failures (avoidable errors), complex failures (from uncertainty or coordination challenges), and intelligent failures (from thoughtful experiments).
• Action: Focus on encouraging intelligent failures while minimising preventable ones. Use this distinction to build a culture that balances learning and accountability.
2. Create Psychological Safety
• Insight: People need to feel safe to share failures and mistakes without fear of punishment or ridicule.
• Action: Leaders should actively model openness by discussing their own mistakes and inviting honest conversations about what went wrong.
3. Embrace Failures as Learning Opportunities
• Insight: Failure is inevitable in innovation and experimentation. What matters is learning from it.
• Action: After a failure, conduct structured debriefs to analyse what happened and extract actionable insights for improvement.
4. Encourage Experimentation
• Insight: Intelligent failures are a natural part of experimentation, particularly when exploring new ideas or uncertain territory.
• Action: Design experiments with clear hypotheses and metrics for success, ensuring the scope of failure is manageable and the outcomes lead to meaningful learning.
5. Normalise Failure through Systemic Practices
• Insight: To “fail well,” organisations need processes that make failure an expected and accepted part of progress.
• Action: Build processes like after-action reviews, innovation retrospectives, and “failure forums” to embed a culture of reflection and improvement.
The Siege, by Ben Macintyre
The story of the 1980 Iranian embassy siege, brilliantly researched and told by Ben Macintyre. I remember the siege from childhood: it dominated the news so completely for its duration, and I read about it for years afterwards, a modern myth made and returned to with increasing pomp and decreasing clarity.
This isn’t an SAS hagiography, though; it’s a meticulous telling of how the crisis played out, through the perspectives of the police, politicians, hostages and terrorists. The details are fascinating, from the comical – such as the coverage of Nibbles the hamster in the Montessori school next door – to the post-traumatic stress disorder (PTSD) suffered by the survivors. The endurance of the hostages stuck with me. And as for the famous denouement: the ability of the soldiers to improvise and keep momentum even as myriad things went wrong was more impressive than the fantasy of flawless special-forces superheroes that plays out in movies.
That’s all for this year, folks…
Antonym’s abnormal programming will resume soon. If you enjoyed this please subscribe or leave a like below.
Happy New Year’s Reading!
Antony