Everyone in an organisation needs to learn from everyone else. Most use of AI is siloed when it should be shared.

1. Speed
Time has compressed. What used to take a quarter now takes a month. What took a month now takes a week. A day is a week. A week is a month. A month is a quarter.
So we moved to monthly OKRs. A structural change, designed to test how fast we can actually go.
The effect has been revelatory. Curriculum database: built. Messaging development: shipped. New internal systems: running. Every time we assumed something would take longer, we found we could move faster than we thought.
And then faster again the next month — because last month’s work made this month’s work quicker. The tools compound. The agent I built last month used about 150 tests to make sure it worked properly. This month I’m not building apps so much as making processes come to life. The vocabulary has shifted.
The competitive advantage lives in your organisational tempo. How fast you cycle through learn–build–test–adjust. The speed of the loop matters more than any single output.
2. Reframing
Constantly learning and relearning as things change is non-negotiable now. It begins with leadership and must spread through the entire organisation.
This isn’t a project. It’s a permanent posture.
When we shifted to monthly OKRs, we had to reframe almost everything. What “done” looks like. What “good enough to ship” means. What “testing” actually requires. Fundamental shifts in how a team relates to its own work.
Here’s one I didn’t expect: software is different now. We had a concept for Helix — our internal methodology tool. In the old world, that’s a six-month development project with a scoping phase and a vendor. In this world, it was a working tool the whole team uses within weeks. The old question was “can we do this?” The new question is “how fast can we do this?”
That shift — from possibility to velocity — is the reframe that matters most right now.
3. Recursivity
Everything you do creates data. Every piece of data is valuable. The decisions. The conversations. The working-out. All of it — the deliverables too, obviously, but those are the least interesting part.
If you capture it, clean it, structure it — it feeds current systems and future ones.
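To make "capture, clean, structure" concrete, here is a minimal sketch of a decision record in Python. Every name and field is an illustrative assumption, not a prescribed schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical minimal schema for capturing a decision as structured data.
# Field names are illustrative assumptions, not a standard.
@dataclass
class DecisionRecord:
    when: str          # ISO date the decision was made
    question: str      # what was being decided
    decision: str      # what was chosen
    reasoning: str     # the working-out, kept alongside the outcome
    tags: list[str]    # makes the record findable later

record = DecisionRecord(
    when=date(2024, 5, 1).isoformat(),
    question="Quarterly or monthly OKRs?",
    decision="Monthly",
    reasoning="Shorter loops compound learning faster.",
    tags=["cadence", "okrs"],
)

# One JSON line per decision: trivially appendable now, trivially queryable later.
line = json.dumps(asdict(record))
print(line)
```

The same shape works for conversations and working-out; appending one JSON line per record keeps the capture step cheap enough to actually happen.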
This is where consulting transforms. Every client engagement becomes R&D for your platform. Every internal process becomes data that compounds.
Linear work (consulting, services, time-for-money) depletes. Recursive work (systems that learn from themselves) compounds. The difference between the two is the difference between selling hours and building an engine.
The Thought Experiments
I’ve been running these as provocations in my own thinking. They’re worth sitting with.
Speed: What happens when your decision cycle is 3x faster than your competitors’? If you’re running monthly OKRs and they’re running quarterly, you get three learning cycles for every one of theirs. You test, fail, adjust, and ship while they’re still in planning. Even monthly might be slow — but it’s a start.
Reframing: What if AI strategy isn’t a project but a permanent learning posture? Most organisations treat AI as a tool to acquire. Something with a start and end date. But what changes when leadership models continuous relearning — when the question shifts from “how do we implement AI?” to “how do we stay current with what’s possible this month?”
And here’s the tricky bit: how do you test whether you’re moving fast enough without lying to yourself? Because it’s remarkably easy to trick yourself into thinking everything is just fine. The meetings are happening. The slides look good. Someone’s been on a course. But the gap is still opening.
Recursivity: What if every client project generated a reusable system? Imagine five years of captured decisions, conversations, and working-out — cleaned and structured. A queryable knowledge base. Training data for your own agent systems. Proprietary sector expertise that compounds with every engagement.
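To make "queryable" concrete, here is a minimal sketch, assuming (purely for illustration) that records have been captured as JSON lines with a "tags" field:

```python
import json

# Hypothetical capture file: one JSON object per line. The records and
# field names here are illustrative assumptions, not real data.
captured = [
    '{"question": "Quarterly or monthly OKRs?", "decision": "Monthly", "tags": ["cadence"]}',
    '{"question": "Build Helix in-house?", "decision": "Yes", "tags": ["tooling"]}',
]

def query(lines, tag):
    """Return every captured record carrying the given tag."""
    return [r for r in (json.loads(l) for l in lines) if tag in r.get("tags", [])]

print(query(captured, "cadence"))
```

Even this naive filter shows the point: once decisions exist as structured records rather than slide decks, five years of them become an asset you can interrogate, and later, training data for your own systems.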
How does consulting change when it’s not just delivery — it’s R&D for your platform?
What’s Actually Scarce Now
This is the meta-question, and I think it’s the one that matters most.
What happens when strategy can be generated in minutes instead of weeks? If everyone has access to the same AI, what remains scarce?
AI handles strategy documents. Speed of execution is commoditised. Domain expertise? AI has read everything you have, and more. The old model was simple: knowledge work equals time multiplied by expertise. What you know, sold by grains of sand through an hourglass.
The new model is different: knowledge work equals systems multiplied by data, multiplied by curiosity.
What’s scarce now is curated data — proprietary, structured, compounding. System design — the 80/20 flip where you spend your time on process, not output. Judgement — what to build, when to ship, what actually matters. And tempo — how fast you learn and adjust.
One role for an AI consultant now — and I’m thinking about what Brilliant Noise is becoming here — is to tell you what it’s actually like on the other side. Not a skunkworks. More like a scout. Someone who’s been there, done the loops, felt the vertigo, and can say: here’s what the terrain looks like from a month ahead.
I keep thinking about Feynman’s twelve questions — the problems he carried around in his head at all times, so that whenever he encountered a new piece of information, he could test it against his open questions. What are your twelve questions right now? What are the must-carry questions for your organisation? Because the new questions that are now possible might be the most valuable thing you can identify.
So What Do You Actually Do?
Here’s what we’re testing. What we’re building towards.
Compress your timeline. Move to monthly OKRs — or whatever cadence forces faster learning. Test “how fast can we actually go?” Ship in a month. Learn in public. Adjust. Those extra ten-minute investments — the quick experiment, the half-built prototype, the conversation captured as structured data — they compound. Don’t leave them on the cutting room floor.
Make relearning non-negotiable. Leadership models continuous learning, starting at the top. Ask “how fast?” not “can we?” Turn ideas into tangible tools quickly. Software is the reframing accelerator now.
Capture everything. Decisions, conversations, working-out — especially the messy stuff. Every engagement generates reusable knowledge. Linear work depletes; recursive work compounds. Build the engine, not the treadmill.
The Structural Shift
I want to be precise about this, because the distinction matters and I think it’s the thing most people gloss over.
AI-assisted work is humans thinking, AI executing. You have the idea, the model helps you produce it faster. Useful. That’s where most organisations are, or are trying to get to.
AI-native work is a different operating model entirely. Systems continuously thinking. Humans directing and judging. A faster horse versus an engine.
We saw this in our own shift. Month one of monthly OKRs was AI-assisted: we used tools to move faster. Month two was something else. The systems we’d built started surfacing things we hadn’t asked for. The recursive loops began generating their own value. The difference is architectural.
This isn’t about catching up to where AI is now. It’s about building systems that learn and compound while you sleep.
You’re probably not moving fast enough. But you can move towards the curve very, very quickly — if you stop treating this as a project and start treating it as the way you work.
The question is: will you?
That’s all for this week. The gravity well is real, but escape velocity might be closer than you think.