Antonym: The Brain Boom Edition
By the time we work out how to predict the value of generative AI, it will be a history lesson.
Dear Reader
“What’s happened this week in AI?”, someone asked me on Thursday.
The question stalled my train of thought for a moment. Not a good look in a conversation where I was presenting some measure of expertise about things generative, intelligent and artificial. The problem wasn’t that it was a quiet news week. The problem was where to begin.
This newsletter is effectively me reviewing the week that I experienced, but there’s no chance of catching everything that will be relevant to everyone.
How much value will AI growth create and when?*
Analysts agree that AI will create economic and productivity growth, but disagree about whether the effects will be felt immediately or in 20 years. Soumaya Keynes wrote an FT column on the big gap in estimates.
There have been several attempts to estimate the effects of generative AI on annual productivity growth, with pretty varied results. Last year, Goldman Sachs estimated that in rich countries it could contribute around 1.5 percentage points over a decade.
Soon after that, McKinsey predicted that it could deliver between 0.1 and 0.6 percentage points between 2023 and 2040. And most recently Daron Acemoglu of MIT calculated a boost over the next decade of at most 0.2 percentage points.
Arguments focus on how long productivity gains generally take to be realised after technology revolutions become apparent. Precedents are hard to apply to artificial intelligence, especially generative AI. Simply put: never in history has a technology so powerful been instantly available to so many people at once, and never before has a technology been capable of accelerating human thought. Uncharted territory. A new world. Yep.
Take scientific research as an example of a complex-to-measure field of value creation. Science is being both undermined and supercharged by generative AI on different fronts. The academic system of citations and peer review is threatened both by people submitting sloppy research with poorly AI-generated copy in it and by people using generative AI (poorly) to review the papers. This week Semafor reported that academics were being warned not to use AI to write peer reviews. Perplexity summarises it as:
The use of AI-generated text in peer reviews has been rising, according to a study led by Stanford University. Researchers are increasingly turning to AI due to the fast-paced nature of research and pressure to meet deadlines on top of their demanding workloads. While some argue that AI performs well and can make academics more productive, there are concerns that relying on AI to replace human expertise could degrade the research process and weaken public trust in academia.
This week Google DeepMind announced its AlphaFold 3 AI model, a system that can simulate how proteins interact with RNA and small molecules such as drugs, which will speed up the discovery and testing of new medicines. The previous versions of this tech saved – and I had to double-check my notes three times before writing this – one billion hours of research time by discovering 200 million protein structures, each of which would previously have taken a PhD student around five years to determine.
The economic effects of applying AI smartly to such complex problems will not be known until economic historians analyse what happened to AI and society from the 2020s onwards. I imagine that they will be using forms of AI to understand the complexity of the question.
The only way to find out is to experiment. And then to commit fully to the logic of what those experiments tell you.
Apple cancelled its $10bn car project earlier this year to divert resources to generative AI, fearing that falling behind would make the iPhone seem like a “dumb brick”. That’s a huge call, and by a company with more cash and resources than almost any other.
The decision was made after two senior executives spent a couple of weeks trying out ChatGPT in early 2023. A New York Times report goes deep into what the company is doing about AI, with some fascinating insights, including its secrecy being a turn-off for the computer scientists it wants to employ (they want to publish their research).
Bots will make docs better
In this week’s Antonym sibling email, BN Edition, we turned three reports into custom AI chatbots** that you can talk to for the information you need. We could call it creating doc-bots, I suppose.
Making docs into doc-bots does a few things:
Most simply, it’s easier to get a sense of what’s in a document and pull out things that are useful to you. In the case of something like the Andrew McAfee report this will help you decide whether reading the whole thing is useful, and for something like the nearly 600-page AI Index, it’s possible to find data and stats that are most relevant or useful to you.
You can submit your own data or content and ask for connections or perspectives. For instance, when writing a plan and wondering what data would help support or challenge it you can attach it and ask for supplementary information or perspectives that are relevant to it.
It means – and this may be a bit mind-blowing – that you can create systems of documents that can contribute to a conversation. In the paid-for versions of ChatGPT this can be achieved by putting “@[name of other GPT]” in the chat box. The other bot will come in and give its contribution to the conversation.
(The usual caveats apply – watch out for inaccuracies, responses are generally only as good as the questions, information may not be secure.)
Bonus Bot!
I’ve made another for you, dear Readers, after seeing that the DLD group and their partners Burda in Munich had published an AI magazine as a PDF. It’s a nicely designed newspaper-style publication, but it lacks a digital version beyond the PDF. With generative AI this is not a problem – I quickly created a chatbot with Poe, which you can view here:
DLD holds excellent conferences about the intersection of technology, design and society, as regular readers will know (the September conference I attended was a heady mix of inspiration and insights – we also created a guide to that event in a public Notion page here).
That’s all for this week…
Thank you for reading. If you liked it, why not celebrate by liking or sharing? This newsletter is solely funded by occasional dopamine hits from fairly basic feedback loops like these.
I hope you enjoyed the sunshine and aurora borealis displays if either of those celestial phenomena were available to you.
Antony
Footnotes
* Remember: Nobody knows anything.
Or why you should never do trends forecasts for a living (but please contact me at hello [at] brilliantnoise [dot] com for a trends analysis of your sector).
All questions relating to generative AI especially can be answered with William Goldman’s memorable line about Hollywood producers predicting hit movies: “Nobody knows anything.” But just like producers, everyone has to have an opinion. Understanding this truth about the future of AI is by turns liberating, calming and inspiring. Don’t wait for an expert to tell you the definitive answer to what comes next – start exploring and experimenting yourself. Also, if nobody knows anything for sure, then you aren’t as behind the curve as you might feel.
** The custom chatbots were:
Original: Generally Faster: The Economic Impact of Generative AI, by Andrew McAfee
Chatbot version: Doc-GenerallyFaster
Original: Stanford’s 2024 AI Index
Bot version: AIIndex.2024
Original: AI and Decision-Making, from the Alan Turing Institute
Bot: GenAI.Security