Dear Readers
Strange but true: it can be harder for an Olympic gymnast to explain exactly what they do during a routine than it is for them to actually do it. The reasons for this give us clues about how to develop better ways of working with AI.
The most popular AI app in workshops we run – the one that everyone notes down as soon as they see it – is Goblin.Tools. It is actually a little bundle of apps, a digital Swiss Army knife designed to make life a little easier for people living with autism, ADD, dyspraxia and other forms of neurodivergence (the app prefers “neurospicy” as a descriptor).
The show-stealer is called Magic To-Do List, and bears the cheeky strapline: “breaking things down so you don’t”. Put in any task, pick a level of “spiciness” to indicate how much detail you want, and it breaks your task down into a list of steps. Here’s a quick video overview from an assistive technology reviewer:
https://youtube.com/shorts/hNQvV_bm_cI?si=vRrbkbda-ct5VKl7
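If you’re curious about the general idea behind a tool like this, here is a minimal sketch of a task-breakdown prompt. Goblin.Tools doesn’t publish its internals, so this is purely illustrative: it assumes the OpenAI Python client as a stand-in model, and the break_down function and its “spiciness” knob are my own inventions.

```python
# Illustrative only: a task-breakdown prompt in the spirit of Magic To-Do.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the "spiciness" knob is a made-up stand-in for the app's
# level-of-detail control.
from openai import OpenAI

client = OpenAI()

def break_down(task: str, spiciness: int = 2) -> list[str]:
    """Ask the model for a step-by-step breakdown of a single task."""
    prompt = (
        f"Break the task below into a numbered list of concrete steps. "
        f"Detail level: {spiciness} out of 5, where 5 is very granular. "
        f"Return only the steps, one per line.\n\n"
        f"Task: {task}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

print(break_down("Process 15 expense claims", spiciness=4))
```

The point is less the code than the shape of the request: one task in, a concrete checklist out, with the level of detail under your control.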
So simple. So why do so many people, neurodiverse and neurotypical alike, love it?
Because if you’re experienced at something and well-rested and calm you just do it. But most of us have to do things we’re only sort-of good at, or that are outside our comfort zone. Ask me to write 500 words on a subject and I’ll be back in a jiffy. In the same amount of time, I will struggle to process 15 expense claims (especially if some of them are subscriptions for software which have hidden their invoices in the app equivalent of a locked, dusty broom cupboard near the boiler room in the basement).
This – please believe me, Steve, my long-suffering finance colleague – is not laziness, but the very specific strengths and weaknesses of my brain, some due to nurture (I’ve worked a lot on writing and much less on spreadsheets and accounting) and some to nature (I was born with a brain that finds it difficult to do repetitive, focused work where details matter at each individual step).
Added to this, we all have a maximum number of things we can hold in our minds at once, and it is surprisingly low: just three to four chunks of information. So if a task requires us to remember more things than this, we need a checklist or some other reminder to keep us on track, unless we practise it to the point where we don’t even have to think about it. At that point our brains have formed new structures, neural pathways that let us go into autopilot and still get it right every time.
Zoomies and task breakdowns: How to mix AI with our talents
Understanding how we perform a task is difficult, and even more difficult if we’re really good at it. In a recent newsletter, Dr Chantel Prat describes the process of learning how to do something using the example of the inspirational super-athlete, Simone Biles.
When Ms Biles got the “twisties” at the 2021 Olympics, we might have thought of it as her losing control in some way, but in fact it was the exact opposite. When she withdrew from the competition, it was deep self-awareness and self-control that helped her avoid further injury and rebuild her abilities. She was 24, and her pre-frontal cortex (PFC) – the part of our brain that makes conscious decisions – was approaching maturity (the PFC becomes fully developed between the ages of 25 and 30). Dr Prat uses the analogy of a horse and rider – the horse is our powerful, instinctive brain, which can perform complex feats it has learned; the rider is our PFC, the rational decision-maker and sometimes over-thinker:
the role that her developing frontal lobes played in the “twisties” that led her to withdrawal from the 2021 Olympics at the age of 24. You see, as coordinated as the process of flinging her body through the air in nearly impossible ways is, it’s probably not a thing that the “controlling” part of her brain likes to do. Imagine what it might feel like if the “should do” part of your brain were to wake up in the middle of said exercise, and try to grapple for control with the horse, in the middle of a highly practiced routine. “What the hell are we doing?” the rider yells as the horse executes a jump that defies physics.
As we noted with Goblin.Tools, describing how we do something that we’re good at can be really difficult, even for experts – perhaps especially for experts. Dr Prat explains:
In a series of publications with evocative titles like When High-Powered People Fail, psychologists Sian Beilock and Tom Carr discuss the tradeoff between the ability to control your behaviors and to execute a skilled performance. A repeated theme that jumps out of their research is that “choking under pressure” does not result from a lack of strength, but rather from an abundance of control.
Those who can, do. Those who can teach are doubly talented.
Coaches and teachers are special professionals who are able to combine domain knowledge with pedagogy: knowing how to help people learn and improve their skills.
This ability to teach may also play a role for high performers in the age of working with machines as partners – as in the concept of co-intelligence, or of machines as parts of our thinking systems.
At the edge of what’s possible with humans and machines, task decomposition is something we need to be able to do. It isn’t something high performers can necessarily do while performing a task. Dr Prat notes:
For example, if you ask a professional golfer to describe what they are doing (a task that requires the control of the rider) while executing a skilled putt (a task that requires the experience of the horse), they are much more likely to miss the shot than if they can take their shot and then talk about it afterward. And as the title of their paper suggests, the stronger your frontal lobe function is, the more likely you are to get in your own way when it comes to having the opportunity to show what you know.
There are a couple of ways to do this.
Describe. We talk through the steps of what we do. Sometimes this creates an idealised version of the way we do things.
Close observation. My colleague Jason Ryan describes how, when working with the V&A, his team tried to document how a master craftsperson performed their craft. The craftsperson could not tell them what they did; Jason and his team had to watch and meticulously note the steps.
Perhaps we will need to do a bit of both. Some basic operations with generative AI systems can help us – we can use Goblin.Tools or prompts with ChatGPT to get suggestions for how to break down our tasks, and then edit them to reflect what actually happens, or should happen.
In the longer term this can help us improve those processes, but just not while we’re in the middle of doing them.
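To make that describe-and-observe loop concrete, here is one hypothetical way of keeping the machine’s idealised breakdown and the steps you actually performed side by side. Nothing here comes from Goblin.Tools or Dr Prat; it is just a small illustration of editing a suggested breakdown against reality.

```python
# A hypothetical structure for the describe-and-observe loop: keep the
# suggested (idealised) steps next to the steps actually performed, so the
# gaps between the two are easy to see and edit.
from dataclasses import dataclass, field

@dataclass
class TaskBreakdown:
    task: str
    suggested: list[str]                                 # from Goblin.Tools or an LLM prompt
    observed: list[str] = field(default_factory=list)    # noted while watching the work happen

    def gaps(self) -> list[str]:
        """Suggested steps that never showed up in practice."""
        return [step for step in self.suggested if step not in self.observed]

expenses = TaskBreakdown(
    task="Process 15 expense claims",
    suggested=["Collect receipts", "Match receipts to claims", "Enter amounts", "Submit"],
)
expenses.observed = [
    "Collect receipts",
    "Hunt for invoices hidden inside software subscriptions",
    "Enter amounts",
    "Submit",
]
print(expenses.gaps())  # ['Match receipts to claims'] – the idealised step that got skipped
```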
Note: Chantel Prat’s The Neuroscience of You is a brilliant read on how our brains work and, more surprisingly, how differently different brains work.
Do bad delegators make bad AI users?
The self-defeating habit of “delegation dodging”, where people do not delegate work to others because “it’s faster to do it myself”, is caused by a number of thinking traps – among them a lack of trust in colleagues and a fear of failing at the more demanding work they should be doing instead – but the difficulty of actually explaining how to do something you just know how to do may be another.
I’ve observed a very similar dynamic when people are learning to work with AI, usually in the spaces between training sessions, when they need to do the hard work of thinking about how to integrate generative AI into their work.
It is better to think of creating a system of thinking between you and the AI system – a co-intelligence, as Ethan Mollick’s book puts it – learning how to work with the system rather than how to “use” it correctly.
In fact there is no “correct use” yet, as it is still early days in exploring what Large Language Models (LLMs) can do – even if there are a lot of “incorrect uses”.
Perhaps (oh gods, another Olympics metaphor coming up here) it will be like the advances in running shoes that favour some athletes’ natural gait more than others. If we’re naturally able to incorporate the technology into how we work, we may want to understand why; if we have to adapt either the technology or our techniques, we have to decide whether the gains are worth the investment we will need to make.
So AI literacy will be about understanding this process of mixing AI systems with our own cognition. It’s a skill we can practise on smaller, lower-stakes or lower-investment tasks – like drafting emails and outlining reports – before we apply it to larger projects or higher-stakes work like strategic decision-making or the execution of creative tasks.
Hype cycles only work 20% of the time
“Is AI already in the trough of despond?” — That’s the weird question I’ve been asked a lot lately. Weird, because it’s hard to tell what’s really being asked, or what response is expected. Sometimes there’s a bit of glee or relief in the questioner, the subtext perhaps: “you said this tech is great but now everyone’s a bit doubtful” or “hurrah – it’s a fad, I don’t need to worry about it”.
There are two big problems with this strain of hopeful pessimism about AI froth and bubble bursting:
If we are buying into the Gartner hype cycle model, then entering the “trough of disillusionment” means we’re moving rapidly toward the phase where we work out the proper uses of a technology (“the slope of enlightenment”).
Technologies that follow the exact path of the hype cycle are rarer than most of us realise.
A study by The Economist found that about one in five technologies go through the hype cycle stages – most skip steps or disappear.
We find, in short, that the cycle is a rarity. Tracing breakthrough technologies over time, only a small share—perhaps a fifth—move from innovation to excitement to despondency to widespread adoption. Lots of tech becomes widely used without such a rollercoaster ride. Others go from boom to bust, but do not come back. We estimate that of all the forms of tech which fall into the trough of disillusionment, six in ten do not rise again. Our conclusions are similar to those of Mr Mullany: “An alarming number of technology trends are flashes in the pan.”
AI literacy working definition v1.1
Last week I offered a working definition of AI literacy, but I think I left out the most important part:
AI literacy is the ability to understand, evaluate and use artificial intelligence systems and tools in a responsible, ethical and effective way.
It is an evolving set of skills, including critical thinking, knowledge of the limitations of AI systems, and the ability to evaluate their output in relation to work in a field and to judge where they can be applied: in decision-making and data analysis, for example, knowing when systems are accurate or prone to error; in creative processes, knowing how systems can complement human cognition.
Working with generative AI creates a human-machine hybrid system. Our thinking is accelerated, extended and improved by working with a generative AI system like an LLM. The AI is also improved by our interactions – how we shape it for our use, and how our use refines its performance. Thinking about thinking – sometimes called metacognition – will be a crucial element in discovering the best uses of generative artificial intelligence in every field where human thought now plays a part.
Needs work. That’s why we call it a working definition, though.
This week I have been…
Reading
Dawn, by Octavia Butler. Butler, who died in 2006, was part of a science-fiction movement called Afrofuturism, and her writing challenges – in the most eloquent and intriguing ways – the way we think about race, sexuality, and the strange business of having a perceived self. Dawn starts with the protagonist, Lilith, waking up 250 years after a nuclear war in the care of a (very) alien species, who want her to help re-settle a now decontaminated Earth.
Antarctica, by Claire Keegan. Next-level short story genius. Each one leaves you reeling for a day or so and then you start thinking about reading it again.
And last, a wilfully tangential Zoomies connection – a brilliant TikTok of a Zoom meeting performed as Evensong. Added wild fact: the creator credits my step-uncle, Malcom Pearce, for writing the original Evensong that he adapted this from.
That’s all for this week. The holiday continues… see you in September.
Antony