The Claude "Plugin" market scare
And how to avoid compliance-friendly death spirals
“A revolution is when the world turns upside-down.”
Too many businesses are worried only about AI going wrong. They are missing a greater risk: being on the wrong side of AI going right. Of AI systems getting rapidly better at making their competitors faster and more competitive.
I spent too much time last week trying to persuade a firm not to “invest” in a clunky-but-compliant AI chatbot for their team. Better to invest, I said, in tools and AI literacy that would allow rapid learning and innovation among their people - many of whom were already showing brilliant prototypes and ideas. Nothing wrong with the incumbent’s chatbot for meeting notes and report writing, but it would slow learning and choke innovation.
A board meeting made the final call, they told me; being a long-term user of a particular software ecosystem and staying on track with their compliance swung it. They would roll out the “free” chatbot from the incumbent to all their teams. Fascinating as the potential of some of these new systems was, they would wait and see how the likes of Anthropic and OpenAI evolved.
Meanwhile the revolution was rolling and this week the markets moved.
Bloomberg reported:
[…] a wave of disappointing earnings reports, some improvements in AI models, and the release of a seemingly innocuous add-on from AI startup Anthropic to suddenly wake up investors en masse to the threat. The result has been the biggest stock selloff driven by the fear of AI displacement that markets have seen.
A lot of the attention was on SaaS (software-as-a-service) stock plunges, but professional services, information-based businesses and marketing & PR groups were down 5 - 20% by Thursday.
Here’s a screenshot of the “innocuous add-on”. It’s called “Plugins” and works in a desktop app, Claude Cowork.
“Plugins” did what years of white papers and panel discussions hadn’t – it made the risk visible, measurable, and immediate. The markets felt it, and thus, the board execs felt it.
A few days before the markets wobbled, I was being interviewed for a research piece on large language model (LLM) risk and compliance concerns. The questions kept circling familiar ground: manipulation, “poisoning” attacks, compliance gaps. At one point, I was asked to rate my level of concern on a scale of one to ten. I declined to answer. That kind of question produces data that looks meaningful but isn’t. “How worried should we be?” doesn’t have a number. It’s like asking how worried we should be about a geopolitical conflict. Three? Seven? The scale reassures the questioner while obscuring the reality.
What struck me was the timing. While the researchers seemed to be interested in building frameworks to measure AI risk, the actual risk was playing out in share prices – and in precisely the opposite direction to what most people expected.
The bubble that didn’t pop
The business-as-usual narrative is neat and comforting. AI company valuations were inflated. A correction was inevitable. Sensible organisations would wait for the dust to settle before committing too much, too soon.
Instead, the dust is settling on everyone else – in thick, uncomfortable, choking layers. We’re realising AI is an eruption event, not a passing rumble of disturbance.
Legal tech companies. Analytics platforms. Document processing businesses. As Nomura’s Charlie McElligott put it, the “second-order impacts of the AI rollout” are being felt not by AI companies, but by the companies AI is beginning to replace.
Maybe the bubble isn’t in AI valuations as much as in complacency. Believing an existing business model had more time. That the risk of data leaks mattered more than the risk of rapid obsolescence.
The risk no one is measuring
Back in the interview, I kept trying to reframe the discussion. The researcher wanted to talk about external threats: integrity manipulation, adversarial attacks, systems compromised from outside.
I kept returning to a different vulnerability: organisations that believe they are safe because they are compliant with current governance frameworks.
I’ve seen this repeatedly. A large organisation with a well-written AI policy. Microsoft Copilot mandated inside the corporate environment. ChatGPT permitted only via personal accounts – no corporate data allowed.
This sounds reasonable, looks compliant and is completely delusional.
Here’s the open secret: all you have to do is look at the network traffic. When Copilot’s interface is clunky or constrained, where does the work actually go? Outside GDPR. Outside corporate risk and compliance controls. Without training worth checking a box for. Without oversight.
These are organisations that follow the letter of the law while ignoring the evidence that their policies are pushing real work onto unsecured networks. Client data exposed. Corporate knowledge scattered across personal accounts.
That isn’t a serious risk management approach. It’s compliance theatre.
But even that isn’t the biggest problem. The larger risk is that while risk and compliance teams construct increasingly elaborate governance frameworks for 2024-shaped risks, the competitive landscape is shifting underneath their business.
Literacy as the real control
The researcher asked where the soft underbelly is in enterprises using AI. I pointed to undertrained, undersupported people. How do we manage that, they asked?
We hire people for their curiosity. It would be strange if they weren’t trying new tools. But curiosity without literacy is where risk creeps in: not dramatic breaches, but accidental misuse, poor judgement, and unintended exposure.
This is where the risk calculus flips again. Organisations investing in AI literacy aren’t just reducing risk and compliance exposure. They’re building the capability to adapt to the kind of disruption that hit legal and analytics firms last week.
The companies getting hit hardest aren’t the ones that moved too fast. They’re the ones that waited. Or got generous with tool budgets and miserly with learning and development budgets.
The parallel universe problem
One of Anthropic’s founders recently described working seriously with AI as “living in a parallel universe”. You’re in the same world as everyone else, but you’re speaking a different language. Experiencing work very differently.
You can see that gap opening in boardrooms. Decisions about AI are being made based on advice that is already out of date, by leaders who don’t yet have the literacy to evaluate what they’re being told.
They turn to their technology teams – but even now, I regularly meet IT directors who are dismissive of large language models, or uncertain whether they matter at all. Three years into this phase of the shift.
Meanwhile, the organisations that have built real capability aren’t debating how worried they should be on a scale of one to ten. They’re watching competitors scramble to catch up.
The question that matters
The platform shift under way now is comparable to the web. Record shops. Telephones fixed to walls. Argos catalogues. All swept away, gradually, then suddenly – replaced by warehouses on motorways and taxis that arrive as if by magic.
Each step felt exciting or mundane at the time. This shift is harder to grasp because it’s about cognition itself. But the pattern is familiar.
So here’s the question boards should be asking – not next year, not after another framework review, but now:
Which risk are you actually measuring?
The risk of AI systems being manipulated or misused?
Or the risk of discovering – via your share price – that you were on the wrong side of AI going right?
Remember – this is a revolution. Revolutions are when the world turns upside-down. Time to start looking at what risks look like from that perspective.
Thanks for reading
If you’re interested in using or making your own Claude plug-ins, hit me up at Brilliant Noise or reply to this email.
If you don’t think risk and compliance policies would let you even try them out, my advice is to check your share price and make your first AI learning project how to find a new role.
Antony






Also - totally agree with the quote about “living in a parallel universe”. I've spent many, many hours over the last few weeks buried in Claude CoWork - it is a totally different experience and approach compared with a normal AI chatbot. As with getting the best out of Claude Code, the more thought and planning you put in up front, the better the results. Figuring out the best way to brief CoWork and which skills, agents, plug-ins etc. should be assembled to go and do the work (or just getting Claude to assist you with making those decisions). Also, just pointing it at the right folders and telling it that the background it needs is all there (you can also now add existing Projects as additional context).
Someone asking MS CoPilot to summarise an email for them has no idea what someone with expertise and experience, armed with access to something like Claude CoWork, can do - and it has only been out for a few weeks - wait till the Windows version launches, which can't be long now. This really does feel like a significant “moment”…
Fabulous observations, Antony. You articulated something that I've been unable to put straight in my own mind.
This idea of restricted opportunity through IT compliance is tantamount to compliance for self-preservation rather than innovation. It reminds me of the stories of market leaders who are scared to disrupt their business model or even their product range with anything dramatically innovative or different because the cash cow keeps producing the cash.
Then, all of a sudden, another competitor comes along with something radical or different in their space and everybody flocks towards it.
Apply that to AI and you have companies that are restricted by their compliance to an enterprise version of AI, and that are unquestionably at risk of being attacked or even isolated by an AI-native startup that can do exactly the same thing, or better, with a fraction of the resources and a fraction of the people.
I look at the project I was involved in at my last role at Microsoft. We were trying to find a simple and easy way of summarising and capturing a plethora of regulatory changes such that our lawyers could quickly make the appropriate judgments, and our engineers implement the appropriate specification changes to ensure that the product was compliant.
A simple plug-in or an update from Claude has managed to replace an entire team. While the team was long gone for many different reasons before the Claude plugin came out, it’s probably the best example of how an AI-native startup can look at the same problems as the big businesses and run roughshod over the larger organization.
I would not be surprised if, within the next year to eighteen months, AI-native startups are producing tools that are not only innovative, cost-effective, and compliant, but that meaningfully impact major industries where compliance is almost considered protectionism.