Trigger-warning: it is November 2024 and you are alive and awake. If you can make your peace with that, read on. If not, you really shouldn’t be reading newsletters for another week or two. If you want to skip election matters, we have dark pattern design in AI, why ontology is a word we need to learn, and why efficiency is a trap in AI.
Dear Reader,
This is the bit where everyone who went to the same movie as you is standing outside the cinema slack-jawed and trying to explain what it was that you all just saw.
Suddenly all the power-moves, psychodramas and plot twists in tech & geopolitics look different in retrospect. Even Elon’s tummy-reveal jump on stage with Trump looks less like lunacy and more like someone celebrating in advance.
“Take over, Elon,” says Trump at the beginning of that clip…
As Scott Galloway said, the $130 million that Musk spent on the Trump campaign was “the trade of the year”, as the win added $20BN to the market capitalisation of his companies by the end of Tuesday.
And a lot more…
Ukraine’s president, Volodymyr Zelensky, has already spoken to Mr Trump as president-elect; Elon Musk joined the call, Axios reported.
— The Economist, news in brief
“The Tech Bro Coup”
There is some smart, arresting analysis on Bloomberg UK’s Merryn Talks Money podcast from Pippa Malmgren, economist and former adviser to George W. Bush, and Helen Thomas, founder and chief executive of BlondeMoney. They were both in the minority who predicted a Trump win, and here they set out what may happen under what they call a Musk-Thiel presidency, referencing the two PayPal billionaires who seem to have bought themselves the leadership of FKA The Free World.
Some headlines:
Globally, states are weighed down by debt after the pandemic. Everyone needs economic growth and sees imminent breakthroughs in AI as a solution and a prize for those that get there first.
Ukraine and Taiwan will be abandoned: With the U.S. stepping back, Russia and China are positioned to expand their influence in these regions.
AI will transform U.S. Government: They plan to adopt AI widely, aiming to boost efficiency and improve decision-making.
Renewable energy to fuel economic growth: The Trump administration is prioritising renewable energy alongside traditional sources, setting the stage for lasting economic gains.
Silicon Valley will get its hands on government operations: Tech firms are working closely with Washington to update government systems and make operations more efficient.
AI and energy will drive new economic boom: Abundant domestic energy and cutting-edge AI could power a fresh wave of U.S. economic growth.
U.K. needs fresh fiscal ideas for growth: The country is more aligned with the U.S. than the rest of Europe and could benefit from this U.S. growth, but not with its current policies.
U.S. and Mexico build a stronger supply chain: The headlines are about tariffs, but Malmgren and Thomas see trade with Mexico as being strengthened.
Cryptocurrency: Favourable policies from the Trump administration will bring Bitcoin and other cryptocurrencies into the mainstream.
New synthetic materials: AI-driven breakthroughs in synthetic materials could reduce demand for resources like copper and iron, reshaping markets.
What about Palestine, women’s reproductive rights, immigrants, democracy and humanity? Apparently out of scope for this podcast. I should add that these commentators are not pro-Trump, they are reading the geopolitical and economic context.
Efficiency is not the only fruit
Harry Brignull is the expert on dark patterns – deceptive user interface mechanisms designed to fool people into spending money, e.g. the mechanisms that make it insanely difficult to cancel a subscription.
On LinkedIn, he highlighted a paper showing that when AI is used to design websites, it will often include these kinds of unethical practices.
My comment from the post:
Unintended consequences of AI adoption’s over-emphasis on efficiency, no. 94,941: sharp practices like dark pattern design find their way into AI-assisted web design […]
Without guidance, AI defaults to the average from its training. What many UX designers decry is best practice for others. Being solely data-driven in design (e.g. A/B tests) will produce the same results. Higher retention? Great! Higher because you confused the customer, though…?
There are two warnings here for everyone integrating AI into how they work:
AI has learned from what’s out there already. What’s out there might not always be what’s best.
Rapidly reducing the human inputs into processes in the name of efficiency is high risk.
You should read this but…
AI: Large Language Models (LLMs) are revolutionising AI capabilities, but organisations face challenges in reducing inaccuracies and protecting valuable data. Knowledge Graphs offer a solution, helping improve LLM accuracy and safeguard organisational data assets. Learn how implementing Knowledge Graphs can address these critical issues and maintain competitiveness in the AI landscape.
Don’t be put off by big words like ontology and new terms like knowledge graph. Learn what they mean, because they are going to become very important to everyone in the next few years of the AI age.
Put simply, an ontology is a map of what things mean to each other. These maps exist in the fabric of an organisation’s culture, in how things get done. But if you are a newcomer, you have to learn the ways of your new company through observation and experience. An AI needs it written down, explicitly plotted on a map.
For instance, we think that everyone knows what we mean when we use a word like “problem”, but it changes subtly from group to group, context to context. Even within the same organisation, some words will change their meaning as you move between different tribes of colleagues.
In an engineering group, ‘problem’ might be a desirable thing, something to pin down and examine in a problem statement, something to solve for. In a leadership team, ‘problem’ could mean threat, and trigger thoughts of contingency plans and countermeasures.
So when a representative from engineering is asked to speak in the boardroom and enthusiastically explains they have found a rich seam of problems, they won’t even realise that, while they’re talking about opportunity, they’re giving the executives a jump-scare.
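If an AI has to work inside that organisation, those shifting meanings need writing down somewhere it can consult. As a toy illustration (mine, not taken from any paper, with the groups and definitions invented), even a crude lookup like the one below makes the ambiguity explicit rather than leaving the machine to guess:

```python
# A toy "ontology" in plain Python: the same word pinned to an explicit
# meaning per group, so software (or an AI) doesn't have to guess.
ORG_VOCABULARY = {
    ("engineering", "problem"): "a well-defined gap to examine in a problem statement and solve for",
    ("leadership", "problem"): "a threat that triggers contingency plans and countermeasures",
}

def meaning_of(term: str, group: str) -> str:
    """Look up what a term means to a particular group, or admit we don't know."""
    return ORG_VOCABULARY.get(
        (group, term),
        f"no agreed meaning of '{term}' recorded for {group} -- ask before assuming",
    )

print(meaning_of("problem", "engineering"))
print(meaning_of("problem", "leadership"))
```

Real ontologies are far richer than a dictionary of phrases, of course; the point is only that the meanings have been made explicit.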
All of this is a pre-explanation for why everyone should read this excellent paper from Superlinked, hat tip to the brilliant Tony Seale, one of the authors.
It is a simple guide to knowledge graphs and ontologies, but still requires some commitment from the lay-reader to tune in to what is being explained.
Essentially this:
Gen AI’s large language models (LLMs) like ChatGPT et al are great, but they organise information in a fuzzy way, so sometimes you get errors and confusion in the results.
Knowledge graphs are crisp and clear ways of organising data, and can help gen AI be more accurate and come up with better results (there’s a toy sketch just after this list).
Ontologies explain how language is specifically used in an organisation and how all the bits of knowledge relate to one another.
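To make that second point a little more concrete, here is a sketch of my own (it is not from the Superlinked paper, and the company and facts in it are invented for illustration): a knowledge graph is, at its simplest, a set of facts stored as crisp subject-predicate-object triples that an LLM-based system can look up instead of relying on fuzzy recall.

```python
# A minimal knowledge graph: facts stored as (subject, predicate, object) triples.
# The entities and facts below are invented purely for illustration.
TRIPLES = [
    ("Acme Ltd", "headquartered_in", "Leeds"),
    ("Acme Ltd", "sells", "industrial sensors"),
    ("industrial sensors", "category_of", "hardware"),
]

def facts_about(entity: str) -> list[tuple[str, str, str]]:
    """Return every recorded fact that mentions the entity."""
    return [t for t in TRIPLES if entity in (t[0], t[2])]

# Before answering "What does Acme Ltd sell?", a retrieval step could pull
# these triples and hand them to the LLM as explicit, checkable context.
print(facts_about("Acme Ltd"))
```

The code itself is trivial; the crispness is the point. Every fact is either in the graph or it isn’t, which is exactly the kind of ground truth a fuzzy language model benefits from being handed.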
Quote of the week
The absurd and hilarious comedian Richard Herring has the last word in this week’s Antonym, because some confused laughter has got to be good for us. In his new Substack, Herring talks about his lack of imposter syndrome:
“I don't think I have imposter syndrome because if you claim to have it then you understand that you may feel like an imposter, but are not one. It's just a syndrome. I am an imposter. A brilliant one too. Don't know how I've wangled it. I am actually superior as an imposter to the non-imposters because they have actual talent, so it's easy for them. The artifice to be as good an imposter as me is actually a greater skill than being good. Anyone can make a living with talent. I've done it with nothing.”
That’s all for this week…
Thank you for reading. I hope Antonym was a help, a distraction or a useful provocation. If you liked it, there’s a 🤍 button below that needs a click.
See you next week.
Antony