How I actually think about problems.
Mental models I've developed from building things that failed, building things that worked, and trying to understand why.
Correlation is not understanding. It's not even close.
Most of what passes for data-driven decision making is pattern-matching on historical accidents. You see that A and B move together, so you assume changing A will change B. Then you intervene, nothing happens, and everyone blames execution. The problem wasn't execution. The problem was you never understood the causal structure. I've watched companies make million-dollar decisions based on correlations that evaporated the moment they acted on them. This is why I work on causal AI. Not because it's intellectually interesting, though it is. Because the alternative is expensive guesswork dressed up as analysis.
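The confounding trap above can be shown in a few lines. This is a minimal, hypothetical sketch: a hidden factor C drives both A and B, so A and B correlate strongly in the data, yet forcing A to a new value does nothing to B. All the variable names and numbers here are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical toy model: a hidden confounder C drives both A and B.
# A and B correlate strongly, but A has no causal effect on B.
n = 10_000
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 0.3) for c in C]   # A <- C
B = [c + random.gauss(0, 0.3) for c in C]   # B <- C (not A -> B)

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(f"observational correlation: {corr(A, B):.2f}")  # strong (~0.9)

# Intervention: set A yourself, i.e. do(A = a). Because the causal
# arrow runs C -> B rather than A -> B, B does not respond at all.
A_forced = [random.gauss(2, 0.3) for _ in range(n)]  # "improve" A
print(f"mean of B after intervening on A: {sum(B) / n:.3f}")  # still ~0
```

The observational analyst sees a 0.9 correlation and predicts that raising A will raise B; the intervention shows it buys nothing. That gap is the causal structure the paragraph is about.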
You cannot optimise a system by optimising its parts.
This sounds obvious until you watch smart people do exactly this, repeatedly. They improve the sales team without understanding how it affects engineering capacity. They optimise for one metric while destroying three others. They fix the symptom and amplify the underlying disease. The hardest thing about systems thinking isn't the theory. It's resisting the pressure to solve the visible problem instead of the actual problem. Organisations are structurally biased toward local fixes. Fighting this is most of the work.
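The sales-versus-engineering example reduces to a bottleneck argument, sketched below with hypothetical numbers: system output is capped by the slowest stage, so "improving" the stage upstream of the bottleneck changes nothing globally except the backlog between them.

```python
# Hypothetical two-stage pipeline: sales feeds engineering.
# Optimising stage 1 past the stage 2 bottleneck improves nothing
# system-wide; it only grows the queue between the stages.

def throughput(sales_rate: float, eng_capacity: float) -> float:
    """System output per week is capped by the slowest stage."""
    return min(sales_rate, eng_capacity)

def backlog_growth(sales_rate: float, eng_capacity: float) -> float:
    """Work piling up between the stages, per week."""
    return max(0.0, sales_rate - eng_capacity)

# Engineering can absorb 10 deals/week.
before = throughput(sales_rate=10, eng_capacity=10)  # 10 deals/week
after = throughput(sales_rate=15, eng_capacity=10)   # still 10

print(before, after)            # local win, no global gain
print(backlog_growth(15, 10))   # 5 deals/week of new backlog
```

The local metric (sales closed) improved 50%; the system metric (deals delivered) did not move, and a third metric (backlog) got worse. That is the "optimise one metric while destroying three others" pattern in miniature.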
Every model is wrong. The question is whether it's useful enough to act on.
I spent years in academia where the goal was to be precisely right. In business, the goal is to be approximately right fast enough to matter. A decent model you can act on beats a perfect model that arrives too late. The skill is knowing how wrong your model can be before it becomes dangerous. Most people either trust their models too much or abandon modelling entirely. Both are failure modes.
Systems fail organisationally before they fail technically.
The best technology in the world dies inside a company with misaligned incentives. I've seen it happen dozens of times. The AI works perfectly in the demo, then gets deployed into an organisation where no one's bonus depends on it succeeding, where the people threatened by it control its rollout, where the executives who bought it aren't the ones who have to use it. If you don't understand incentive structures, you don't understand why things happen. This is true in markets, in companies, and in your own head.
Confidence without uncertainty quantification is theatre.
When someone tells me "this will increase revenue by 15%", my first question is: what's the confidence interval? If they can't answer, they're not doing analysis. They're doing storytelling. Real decisions require understanding the distribution of outcomes, not just the point estimate. This is uncomfortable because humans want certainty, and providing uncertainty feels like weakness. It's not. It's honesty. The most dangerous people in any organisation are the ones who are confidently wrong.
Ideas are cheap. Execution against ambiguity is expensive.
Everyone has ideas. The thing that's actually scarce is the ability to make progress when you don't know what you're doing, when the feedback loops are long, when you're wrong more often than you're right, and when you have to keep going anyway. Most startup advice focuses on having the right idea, the right market, the right timing. Those matter. But the variance in outcomes is mostly explained by the ability to persist intelligently through the long middle where nothing is clear.
Most people optimise for narratives. I try to optimise for outcomes.
Narratives are seductive because they're coherent. Reality is messy, contradictory, and doesn't respect your story. The skill I've tried to develop is noticing when I'm believing something because it's true versus because it's a good story. I'm not always successful. Consultants sell narratives. Operators discover reality. The gap between these is where most value is created and destroyed.
"Understanding how things actually work, not how people claim they work."