Dear Reader,
Every tech wave brings new jobs, new areas of expertise. Search engines brought us junk copywriting content factories, search engine optimisation agencies and paid search advertising fraud specialists, for instance. All of these are now in danger of massive disruption from the generative AI revolution – but not quite yet.
We’re back on the AI bullet-train this week (as we have been for the past year, all you johnny-come-lately newsletters).
The first $100BN victim of “hallucinated facts”
Fact-checking is the sort of thing that respected newspapers once did with dedicated departments, and now do less of, or are considering cutting further. It’s something any editor will do at least to the extent of asking for the source of an unattributed fact or statistic.
Fact-checking is something you do automatically when getting ready to publish even something as modest as this newsletter or a company blog. This week I added another check for articles at my company that had used AI at any stage: fiction-checking.
We don’t write articles with AI wholesale, but we’ve been experimenting with structuring outlines, summarising research articles and – much as more primitively learned machines have spell-checked and grammar-checked work for decades now – proofing and copyediting. Hearing my request for fiction-checking gave a couple of people pause, but it made sense. As we’ve talked about in previous Antonym editions, generative AI like ChatGPT will often hallucinate facts.
This week we saw the first $100BN cost of not making fiction-checking a priority. Shares in Google’s parent company, Alphabet, fell 9% after it showed a video demonstrating its answer to ChatGPT: a search assistant AI called Bard. The video showed Bard making up a fact.
Experts pointed out that promotional material for Bard, Google’s competitor to Microsoft-backed ChatGPT, contained an error in the response by the chatbot to: “What new discoveries from the James Webb space telescope (JWST) can I tell my nine-year old about?”
Bard’s response – in a video demo posted online – includes an answer suggesting the JWST was used to take the very first pictures of a planet outside the Earth’s solar system – an exoplanet.
The error was picked up by experts including Grant Tremblay, an astrophysicist at the US Center for Astrophysics, who tweeted: “Not to be a ~well, actually~ jerk, and I’m sure Bard will be impressive, but for the record: JWST did not take ‘the very first image of a planet outside our solar system’”.
Perhaps the markets thought “well, they’re panicking” or “this tech isn’t much good”, but the mistake could as easily have been made by any other chatbot AI.
Please adjust your AI analogy
Because of 50 years of popular culture priming us to expect omnipotent supercomputers like HAL9000, even super-smart, super-rich tech companies can fool themselves into making silly errors. So should we all avoid AI for a while?
No. It’s incredibly useful and we need to learn how it works. I suggest changing your analogy. Instead of thinking of a monolithic, all-knowing robot, think of AI as more like:
R2-D2 from Star Wars: Ask R2 to fetch, carry data, interact with big systems, help you fix technical problems. You don’t ask it to write your company prospectus, legal defence or wedding speech. Also, it may have a hidden agenda or side-mission of its own – so watch out if it starts acting strangely.
Marvin the Paranoid Android: In The Hitchhiker’s Guide to the Galaxy, Marvin was the capable but utterly morose machine that often took a lot of persuading to do anything useful at all. As Marvin might have said:
Here I am, brain the size of a planet, and they ask me what the weather is like today and to write some banal marketing copy for a website that only other melancholic bots will ever read.
Interns: This is a very useful video (start at 4:38 for the interns analogy, but the whole thing is an interesting watch) about working with AI, which compares using ChatGPT to having access to 1,000 interns. Interns have a lot of energy and want to come back with useful information or good-quality work, but their work needs checking and they can’t help making a few mistakes. If you’re patient and understand how they work, you can get really good support for your own work. Working with AI in this way requires you to know what your process is and to take time and care in briefing (writing the prompts you put into the AI).
And if all those analogies aren’t helping, an article from The Singularity Hub offers plainspoken suggestions:
Understand that ChatGPT is not a fact-finding service and does not try to write sentences that are true.
Use ChatGPT to generate plausible sentences rather than factual ones.
Ask colleagues for input to help identify factually untrue statements produced by ChatGPT.
Recognize that ChatGPT is a data retrieval system, not a fact-finding service.
Utilize existing fact-finding services, such as Google, for factual information.
Lastly, The Practical Guide To Using AI To Do Stuff, an article from Wharton professor Ethan Mollick advising his students on how to write with GPT-3, is great. It’s clear that Mollick isn’t taking part in the “but all the students are plagiarising” moral panic:
My classes now require AI (and if I didn’t require AI use, it wouldn’t matter, everyone is using AI anyway).
AI lessons from history
The Economist does this very well – looking back at past tech revolutions and noting that even the most powerful new technologies take time to change an economy, and that capital constraints and the need for intangible capital can slow deployment.
First up, the mass unemployment general panic is usually wrong:
despite epochal technological and economic change, fears of mass technological unemployment have never before been realised.
But things will change:
The sustained economic growth which accompanied the steam revolution, and the further acceleration which came along with electrification and other later innovations, were themselves unprecedented. They prompted a tremendous scramble to invent new ideas and institutions, to make sure that radical economic change translated into broad-based prosperity rather than chaos. It may soon be time to scramble once again.
Reverse Brexit?
Respected foreign policy expert Gideon Rachman of the FT says reversing the terrible economic harm of leaving the EU is more possible than you might think, although the UK would not get the sweet deal it had last time and would probably have to join the Euro.
European opposition to a British return certainly exists, but can be overstated. Michel Barnier, who led the EU’s Brexit negotiating team, says the door is open for Britain to rejoin the EU “any time”. Guy Verhofstadt, who was head of the European parliament’s Brexit committee, tweeted last week: “I have a dream. Ukraine and Britain joining the EU in the next five years.”
When I rang Philippe Lamberts, co-chair of the Green group in the European parliament, and asked him about Britain rejoining, he replied: “That would be my dream scenario.” Lamberts thinks the five main political groupings in the parliament would all favour British re-entry.
And breathe…
And if the pace of everything is a little too much, don’t forget to breathe. Learning about breathwork – controlled breathing exercises that change your body’s stress responses, among other things – has been really helpful to me. Reach for YouTube, your favourite relaxation app or this infographic from the Sloww Sunday newsletter as a quick reference to the main types.
Watching…
Give RRR (Netflix) a look if you haven’t seen it. It’s up for best song at the Oscars, but really this is the most bewilderingly brilliant and opulent action thriller I’ve ever seen. It’s nuts in the best possible way. My jaw was on the floor for the first fifteen minutes. Warning: the British are very much the bad guys, which shouldn’t come as much of a shock as it might.
And… that’s all we have time for this week, folks.
See you very soon. Thanks for reading, liking, subscribing and sharing. It’s a pleasure to write this.
Antony