Most product failures aren't caused by bad execution. They're caused by failing to understand the system you're operating in.

I learned this the hard way on a project where we spent three months optimising onboarding flows, only to watch churn remain stubbornly high. The problem wasn't onboarding. The problem was a mismatch between what we promised users during acquisition and what they actually got. We were solving a local optimum in a system with a much bigger problem elsewhere.

That experience pushed me to read Donella Meadows' Thinking in Systems, and it fundamentally changed how I approach product work.

What is systems thinking?

A system is a set of interconnected elements organised to achieve a goal. The key word is interconnected: change one element, and you create ripple effects you didn't anticipate.

In product terms:

  • Your users are elements
  • Their behaviours are flows
  • The feedback loops between what you build and how they respond are what make a product healthy or toxic

The three traps product managers fall into

1. Fixing the wrong thing

We're trained to look for proximate causes: the thing that's visibly broken right now. But systems failures often have distant causes. A spike in support tickets might trace back to a confusing bit of copy written six months ago, not the recent feature launch you're blaming.

The fix: before you act, spend 20 minutes mapping the system. Draw the connections. Ask yourself: "What else is affected if I change this?"
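That 20-minute map can even be held as plain data. A minimal sketch, assuming a made-up "affects" graph (every node name here is illustrative, loosely echoing the support-ticket example above): a breadth-first walk answers "what else is affected if I change this?"

```python
from collections import deque

# Hypothetical system map: each key "affects" the nodes it points to.
affects = {
    "signup copy": ["expectations set"],
    "expectations set": ["onboarding drop-off", "support tickets"],
    "onboarding drop-off": ["churn"],
    "support tickets": ["support cost"],
}

def downstream(graph, change):
    """Everything reachable from `change` — the ripple effects to check."""
    seen, queue = set(), deque([change])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(downstream(affects, "signup copy"))  # includes "churn" and "support cost"
```

Changing the signup copy reaches all the way to churn, which is exactly the six-months-later connection that proximate-cause thinking misses.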

2. Optimising locally, hurting globally

A marketing team that pushes aggressive campaigns to hit acquisition targets while the product team watches churn rise is a classic example. Each team is winning locally. The business is losing globally.

The fix: define shared metrics that live at the system level, not the team level. Retention, LTV, and NPS cross team boundaries in a way that monthly actives and conversion rates don't.
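The local-win, global-loss dynamic is easy to see with a toy model. The numbers below are purely illustrative, and the simple monthly model (constant signups, constant churn) is an assumption, not a claim about any real product:

```python
# Illustrative only: doubling signups can still shrink the user base a year
# out, if the aggressive push also raises monthly churn.
def users_after(months, start, signups_per_month, monthly_churn):
    """Users remaining after `months` of constant signups and churn."""
    users = start
    for _ in range(months):
        users = users * (1 - monthly_churn) + signups_per_month
    return round(users)

# Baseline: modest acquisition, low churn.
steady = users_after(12, start=10_000, signups_per_month=500, monthly_churn=0.03)
# "Local win": signups doubled, but the over-promising doubles churn and more.
pushed = users_after(12, start=10_000, signups_per_month=1000, monthly_churn=0.08)
print(steady, pushed)  # the "winning" scenario ends the year with fewer users
```

The acquisition dashboard looks great in the second scenario; the system-level metric does not. That's the argument for measuring at the level where the trade-off is visible.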

3. Ignoring delays

Systems have delays: there's often a gap between an action and its consequence. Engineers know this. PMs forget it.

If you launch a feature today, the impact on retention might not show up for 90 days. That means a lot of bad decisions get made because they seemed to work in the short term.

The fix: when measuring the success of a change, build in a wait period that accounts for realistic usage patterns. Don't optimise on week-one data if your users typically take 30 days to form a habit.
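One way to make the wait period hard to skip is to compute it explicitly. A small sketch (the function name and 30-day default are assumptions for illustration):

```python
from datetime import date, timedelta

def evaluation_date(launch: date, habit_window_days: int = 30) -> date:
    """Earliest date it makes sense to judge a change, given how long
    users typically take to form the habit the change targets."""
    return launch + timedelta(days=habit_window_days)

launch = date(2024, 3, 1)
print(evaluation_date(launch, habit_window_days=30))  # 2024-03-31
```

Anything measured before that date is week-one noise, not a verdict on the change.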

A practical exercise

Before your next sprint planning session, take 10 minutes to draw a causal loop diagram. Just boxes and arrows.

  • Put your core user action in the centre (e.g. "user completes task")
  • Draw arrows to what that action affects (satisfaction, retention, word-of-mouth)
  • Draw arrows from what affects that action (ease of use, value delivered, time required)
  • Now look for the feedback loops: where do outputs become inputs?

You won't draw a perfect map. That's not the point. The act of drawing it forces you to think systemically before you act locally.
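The same boxes-and-arrows map can be written down as data, and the loop-hunting step automated. A hedged sketch: the node names below are illustrative (loosely following the bullets above), and the depth-first search dedupes cycles by their node set, which is crude but fine for a 10-minute diagram:

```python
# Each key "affects" the nodes it points to. Names are made up for illustration.
diagram = {
    "user completes task": ["satisfaction", "retention", "word-of-mouth"],
    "satisfaction": ["retention"],
    "retention": [],
    "word-of-mouth": ["new users"],
    "new users": ["user completes task"],  # an output feeding back as an input
    "ease of use": ["user completes task"],
    "value delivered": ["user completes task"],
}

def find_loops(graph):
    """Depth-first search for cycles — the feedback loops in the diagram.
    Deduplicates by node set, so distinct loops over the same nodes merge."""
    seen, loops = set(), []
    def dfs(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:
                cycle = path[path.index(nxt):]
                key = frozenset(cycle)
                if key not in seen:
                    seen.add(key)
                    loops.append(cycle + [nxt])
            else:
                dfs(nxt, path + [nxt])
    for start in graph:
        dfs(start, [start])
    return loops

print(find_loops(diagram))
# one loop: task completion -> word-of-mouth -> new users -> task completion
```

Here the only feedback loop is the reinforcing one through word-of-mouth, which is exactly the kind of structure worth noticing before sprint planning: it's the loop you'd want to strengthen, and the loop a bad change would quietly weaken.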


The best product managers I know aren't the fastest executors. They're the ones who slow down long enough to understand what they're changing and why that change might matter, in a system where everything is connected to everything else.

That's the mental model. Once you have it, you can't unsee it.