Dear Reader,
History rarely announces itself as it unfolds. But this week, it did – forcefully and undeniably.
In the past week or so…
The Assad regime fell in Syria, ending one of the longest, cruellest civil wars.
Nine Nobel Prizes were awarded. Four of the nine went to British citizens, three for achievements in artificial intelligence.
A PhD-level expert in any field can now be hired for £200 a month. The new ChatGPT-o1 Pro tier brings PhD-level reasoning for a £200 monthly subscription.
Your phone can see, and talk to you about, what you see. A new iPhone feature, powered by OpenAI, lets you share what’s on your screen or your camera with the AI. I used it while walking around Bond St station this week and was giddily impressed.
10 septillion years’ worth of calculations were done in five minutes. The prize for the most existential-crisis-inducing press release of the century (so far) goes to Google for its Willow quantum computing chip announcement.
Google announced AI-agent building abilities for everyone. Gemini 2.0 was shown off, alongside a slew of feature and product releases for Google’s AI tools.
Facebook seeks atomic power. Meta invited companies to express interest in contracts to build nuclear power stations capable of delivering 4 gigawatts of energy to power its data centres.
Test your organisation’s AI literacy with one question
The answer to “What can we use o1 for?” can tell us a lot about the level of AI literacy a team or individual has.
A reminder: our definition of AI literacy is:
…an evolving set of skills, including critical thinking, knowing the limitations of AI systems, the ability to assess their outputs and understanding where they can complement or enhance human cognition and expertise in a given field.
It's the ability to understand, evaluate and use artificial intelligence systems and tools in a responsible, ethical and effective way.
OpenAI opened the bidding for the attention of investors and customers in a frenzy of announcements from AI companies – on Wednesday Apple, Meta, OpenAI and Google all came out with product news – with its ChatGPT-o1 model, including a £200/month Pro version with more power. What’s become clear since we discussed this in last week’s Antonym is how differently people perceive the new service.
Here’s our take:
ChatGPT-o1 is fantastically useful for complex reasoning tasks. But you need to bring your own data and domain expertise to the table, and some AI literacy.
Data: Don’t use it as a Google or Perplexity replacement. Bring your own data and context to increase accuracy and focus in outputs (and fact-check them, as standard).
Domain knowledge: If you are putting a PhD on a problem, you need to understand enough about the area to know when its findings and processes are useful, brilliant or slightly off-course.
AI literacy: Understanding how generative AI systems work, and how to get the best out of them.
Reactions to ChatGPT-o1’s launch follow a familiar pattern of scoffing, bemusement and massive enthusiasm. Like the original ChatGPT launch two years ago, those who understand it aren’t too bothered by the cynics and sceptics – they’re likely heads-down, getting on with deploying the model on their hardest problems.
I’ve found that you can guess the level of AI literacy a team or individual has by their view on ChatGPT. A reminder of the AI literacy framework we use at Brilliant Noise (download the full paper here).
Here are some examples of use cases we have found some success with:
Developing return on investment analysis that goes beyond first-order effects of projects.
Playing out the execution plan for strategy across a complex organisation.
Critiquing and analysing a corporate strategy.
Everything that is true about how to get the best from working with AI seems to be the case for ChatGPT-o1, but more so.
Perplexity killer? Google Gemini 1.5 Deep Research
One of Google’s product announcements this week that is (sort of) available immediately was a new feature for Google Gemini 1.5 (a model which was only released a few weeks ago). It’s another expensive premium feature [1], so here’s a walkthrough so you can see what it does.
I’ve been researching AI in the marketing services sector this week, which I mainly did with Perplexity and ChatGPT. So I’ll run the process again in Deep Research here.
I put in my prompt and Deep Research suggested a research plan – an excellent example of an AI doing what we’ve called “chain of thought” prompting for most of this year, which is now becoming part of how gen AI systems “reason” to get better results.
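For readers curious what “chain of thought” prompting looks like in practice, here is a minimal, hypothetical sketch – not Deep Research’s actual prompt, just the general shape of the technique: ask the model to draft a plan before it answers, then work through that plan step by step.

```python
def research_prompt(question: str) -> str:
    """Wrap a question in a simple chain-of-thought style research prompt.

    A hypothetical sketch of the technique: the instruction asks the model
    to produce a research plan first, then reason through it step by step
    before answering.
    """
    return (
        "Before answering, write a short research plan: list the "
        "sub-questions you need to answer and the kinds of sources you "
        "would consult. Then work through the plan step by step, showing "
        "your reasoning, before giving a final summary.\n\n"
        f"Question: {question}"
    )


prompt = research_prompt(
    "How is generative AI changing the marketing services sector?"
)
print(prompt)
```

The point of the design is simply that asking for the plan first tends to produce more structured, checkable answers – which is what the newer “reasoning” models now do automatically.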
It returns the results along with the option to create a Google Doc, which is handy (exporting from ChatGPT and Perplexity can be distractingly awkward sometimes).
The result is – for the relatively low quality of my prompt – comprehensive. It listed about 47 sources, which are credible and mostly up-to-date. This should be – and doubtless is – concerning for Perplexity, as Deep Research feels more comprehensive, with more initial analysis, than Perplexity’s results. I’d need to test more to be sure.
It isn’t complete though. Some of this research would enhance the deeper analysis and insights from the report I prepared this week.
This also illustrates another lesson of using AI for these tasks. Even as quality and capability rise, the outputs of the AI are only as good as the user’s domain knowledge plus their AI literacy. Someone who hadn’t done the research I’d done, knew the industry less well, and knew less about working with AI might be tempted to put that report through as-is. It would fulfil the output requirement of a request to do some research, but it lacks the value that comes with critical thinking and iteration.
Continuing with this test, I added my original report to the Gemini Deep Research conversation and asked it to fact-check my work and re-work the report.
Bonus usefulness: While experimenting with Deep Research I found it makes an excellent fact-checker. When you’ve been doing research with AI tools, it’s always worth double-checking the sources.
The money is on the move
This year’s money was all-in on the infrastructure to create and run generative AI: the energy (see Meta’s nuclear investment), the chips (NVIDIA) and the know-how to put together the large language models (LLMs) that power the tech.
Most of the money is in the GPUs. NVIDIA, which makes the GPUs that have dominated the market for AI chips so far, is the second most valuable company in the world at a US$3.29 trillion market capitalisation (at the time of writing).
Over the next couple of years the value will be created increasingly by the models (ChatGPT, Claude, Gemini etc.) and by the business innovation they enable.
This view is supported by an article in the FT this week by Jennifer Hughes which outlines four phases of AI investment:
1. Nvidia and the infrastructure providers
2. AI infrastructure
3. AI-enabled revenues
4. Industries transformed by AI.
Currently, the focus is shifting towards the third phase, where companies that generate revenue from AI-enabled products are expected to emerge as potential winners. Hughes quotes Goldman Sachs’ chief US equity strategist, David Kostin:
“Our thesis is in calendar 2025, we’re going to see a transition from the beneficiaries . . . of the infrastructure spending to the AI spending,” Kostin adds. Potential winners in this phase include software and IT services companies that can generate revenue from their AI-enabled products. Companies highlighted by Kostin’s team recently include Datadog, MongoDB and Snowflake, which help companies manage cloud-based data and infrastructure. Microsoft also made its list.
Phase four, should it happen, covers the industries that would be transformed by AI, as personal computers and the internet have previously revolutionised the way we operate.
How can a company create AI-enabled revenues? Maybe ask ChatGPT-o1.
Modelsturmfrustration
It was late evening in the office on Wednesday, feeling all the later for the early sunset of December and the motion-detecting lights assuming the room was empty and turning themselves off. I was preparing a presentation to a client’s senior team the next day about generative AI. The problem wasn’t finding interesting or useful things to say, it was the unending series of announcements of new AI models and features.
I wanted to scream. Instead I worked with ChatGPT to develop some German compound-words to describe the situation, and put my favourite in as the opening slide for the presentation:
Modelsturmfrustration: The exquisite frustration of trying to write a report on gen AI when the big tech companies are releasing new things by the hour.
Talk to your AI podcast
AI companies are trying to drown one another in feature releases. Some of the news seems petty and weirdly timed, but there’s gold in them there announcements.
Google is releasing a paid version of NotebookLM, called NotebookLM Plus. One of its features will be that you can join the conversation.
NotebookLM introduces a new interface and audio interactivity, alongside a premium version called NotebookLM Plus for enhanced user and team experiences. Users can now engage directly with AI hosts in Audio Overviews and manage content more intuitively across three redesigned areas. NotebookLM Plus offers subscribers increased limits and features, including privacy and security for organisations, available through Google Workspace or Google Cloud.
A few weeks ago, someone probably showed you Google NotebookLM’s ability to create a realistic-sounding podcast from a few PDF documents, and you were amazed. The two hosts sound like they are having a real conversation.
Now you can join that conversation.
Treats
There are a lot of lovely links in my notes, so I thought I’d share my favourites here…
“How to read a book using o1”, by Tyler Cowen. The joy of reading with an AI that can answer all your questions.
Big tiny photos. The annual Nikon photomicrography competition is always awe-inspiring. Scroll through the 20 winners for a treat. Nautilus magazine highlighted this incredible image by Allison Pollack of a parasitic wasp beginning to hatch from a moth egg on a leaf, created from a stack of 200 shots focused on slightly different points.
Better Living Through Algorithms: A short story about an app that makes people happy. I loved this. And then thought about building it, which may not have been the point.
Page 94 on cryptocurrencies. After celebrating its role in the recent downfall of the Archbishop of Canterbury, the Private Eye podcast talks Bitcoin and memecoins, and does a stand-up job of explaining what’s going on.
Great everyday AI advice: Ethan Mollick tells us 15 times to use AI, and 5 times not to, in one of his practical, thoughtful posts. Really, the “not to” items were the most useful to me, especially the warning about learning:
> When you need to learn and synthesize new ideas or information. Asking for a summary is not the same as reading for yourself. Asking AI to solve a problem for you is not an effective way to learn, even if it feels like it should be. To learn something new, you are going to have to do the reading and thinking yourself, though you may still find an AI helpful for parts of the learning process.
AI can make learning richer and easier to access, but we still have to do the work of getting our brains into learning mode, creating the frames of reference and then the hard wiring of putting down new knowledge in the brain.
That’s all for this week!
Thank you so much for reading. I hope there was something you found useful. If so, leave us a 🤍 by clicking below.
See you next week!
Antony
[1] To get access you need a personal Google One AI Premium Plan for £18.99. You can’t get it as part of your £30+ Google Workspace AI premium yet (boo!).
By Perplexity: To access Google Gemini Advanced in the UK, follow these steps:
1. Go to gemini.google.com or open the Google app on your Android or iOS device.
2. Sign in with your Google account. You must be over 18 to access Gemini Advanced.
3. Look for the option to "Upgrade to Gemini Advanced" in the menu or settings.
4. Subscribe to the Google One AI Premium plan, which includes Gemini Advanced.
5. During the subscription process, you'll need to provide a payment method.
6. Once subscribed, you can start using Gemini Advanced on the web or mobile app.
If you encounter issues accessing Gemini Advanced:
- Ensure your app is updated to the latest version.
- Make sure you have a stable internet connection.
- If using the mobile app, try switching to Gemini Advanced in the settings or restart the app.
Note that Gemini Advanced is part of the Google One AI Premium plan, which also includes 2 TB of storage and other benefits.