
Clear Thinking

How to go from confusion to clarity. Frameworks for spotting problems, finding root causes, and reaching the right answer.


TL;DR

  • Define the problem first — State it in one sentence, no solutions allowed
  • Strip to first principles — List assumptions, challenge each one, keep only facts
  • Find the root cause — Ask "why?" 5 times. Symptoms recur; root causes don't
  • Invert the problem — Ask "how would this fail?" instead of "how do I succeed?"
  • Think in systems — Fix the process that produced the bug, not just the bug
  • Separate facts from opinions — Label every input: fact, opinion, or assumption
  • Simplify — Prefer the simplest solution that works. Complexity is a cost
  • Know what you don't know — False confidence kills more projects than ignorance

1. Define the Problem Before Solving It

Most wrong answers come from solving the wrong problem. State the problem in one sentence before doing anything.

Bad:

"We need to rewrite the backend."

That's a solution disguised as a problem. What's actually broken?

Good:

"Checkout takes 8 seconds. Users abandon at 40%. We need it under 2 seconds."

Bad:

"We need to hire 3 more engineers."

Good:

"We're missing deadlines because 2 people are context-switching across 4 projects."

The test: Can you state the problem without mentioning a solution? If your "problem statement" contains a verb like build, rewrite, migrate, add, or hire — it's a solution, not a problem.


2. First Principles: Strip to Fundamentals

When stuck or when everyone "knows" the answer, decompose. What do we actually know to be true? What are we assuming?

Framework:

  1. List your assumptions
  2. Challenge each one — is this a fact or a belief?
  3. Keep only what's verifiable

Bad:

"Batteries cost $600/kWh, that's just how it is."

Good:

"Batteries are made of cobalt, nickel, aluminum... raw materials cost $80/kWh. Why does assembly cost $520?"

Bad:

"We need Kubernetes because everyone uses it."

Good:

"We have 3 services and 2 engineers. What problem would Kubernetes solve that our current deploy doesn't?"

The "explain it to a 12-year-old" test: If you can't explain your reasoning without jargon, you're hiding behind assumptions you haven't examined.


3. Find the Root Cause, Not the Symptom

Symptoms recur. Root causes don't (once fixed). Most people stop at the first layer.

5 Whys

Keep asking "why?" until you hit something you can fix.

"The page is slow" → Why? The API takes 5 seconds → Why? It queries all users → Why? No pagination → Why? Original author didn't expect >100 users → Fix: add pagination

  • Stopped at the first "why": "add a loading spinner" — treats the symptom
  • Followed the chain to the end: "add pagination" — fixes the root cause (sketch below)
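A minimal sketch of the fix at the end of that chain, using sqlite3 from the standard library for illustration; the users table and its columns are hypothetical stand-ins for the real schema:

```python
import sqlite3

def get_users(conn: sqlite3.Connection, page: int = 1, per_page: int = 50):
    """Return one page of users instead of the whole table.

    The root-cause fix: bound every query so response time stays flat
    as the table grows past the original author's assumptions.
    """
    offset = (page - 1) * per_page
    return conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (per_page, offset),
    ).fetchall()

# Before: SELECT id, name FROM users  -- fine at 100 rows, 5 seconds at 1M.
# After: every request returns a bounded page, regardless of table size.
```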

Bad:

"Users are complaining about the app" → add more support staff

Good:

"Users are complaining about the app" → 80% of tickets are about the same flow → the flow has a UX bug → fix the bug → tickets drop

The test: If the same problem comes back after your fix, you fixed a symptom.


4. Invert the Problem

Instead of asking "how do I succeed?", ask "how would I fail?" and avoid that. Failure modes are easier to spot than success conditions.

Vague:

"How do we build a great product?"

Concrete (inverted):

"What would make users hate this product?" → Slow, buggy, confusing, expensive. Now fix those.

Vague:

"How do we retain engineers?"

Concrete (inverted):

"What would make engineers quit?" → No growth, bad manager, below-market pay, no ownership. Now address those.

Charlie Munger: "Tell me where I'm going to die, and I'll never go there."

Use it before launches, hiring decisions, and strategy changes: list the ways the plan fails, then check that you've addressed each one.


5. Think in Systems, Not Events

Events are symptoms. Systems produce them. Fix the system, not the event.

Event: "We shipped a bug to production." System: "We have no staging environment, no automated tests, and deploy on Friday afternoons."

  • Fix the event: revert the deploy
  • Fix the system: add staging + tests + no Friday deploys

Second-order effects

Every solution creates new conditions. Ask "then what?" at least once.

  • "We'll add caching" → What happens when the underlying data changes? Stale data.
  • "We'll add a retry" → What if the first request actually succeeded? Duplicate processing.
  • "We'll add a rate limiter" → What happens to legitimate traffic during a spike? Dropped requests.

6. Separate Facts From Opinions

In any discussion, most of what's said is opinion or assumption dressed as fact. Label each input before acting on it.

Type | Example | How to verify
--- | --- | ---
Fact | "Page load time is 4.2 seconds" | Measurable, verifiable
Opinion | "Users think it's slow" | Unverified — survey or test needed
Assumption | "Faster load time will increase conversion" | Plausible, unproven — A/B test needed

Before making decisions: Label each input. Act on facts. Verify assumptions. Weigh opinions.

  • "I feel like the API is slow" → Measure it
  • "I think users want dark mode" → Check the data
  • "Everyone knows microservices are better" → For whom? At what scale? Based on what evidence?

7. Simplify: Occam's Razor

Given competing explanations or solutions, prefer the simplest one that works. Complexity is a cost, not a feature.

The deploy failed → Check the most obvious causes first (expired credentials, full disk, a typo in the config) before assuming a distributed-systems failure.

Receipt button doesn't work → The URL is wrong → fix the URL. Don't redesign the URL routing system.

"We need a machine learning model to predict churn" → Have you tried asking churned users why they left?

The test: Can you explain your solution in one sentence? If not, it's probably too complex.


8. Know What You Don't Know

False confidence is worse than admitting ignorance. Intellectual honesty about uncertainty prevents bad decisions.

Three buckets

  1. Things you know — act on these
  2. Things you know you don't know — research these
  3. Things you don't know you don't know — the dangerous ones

Quantify confidence

"I'm 90% sure this will work" is more useful than "it'll definitely work."

When you don't know

Say so, then say when you'll have the answer.

Premortem

Before launching, ask: "It's 6 months from now and this failed — why?" The team will surface risks nobody raised during optimistic planning.

Bad:

"This migration will go smoothly."

Good:

"This migration has 3 risks: [schema mismatch on legacy tables, downtime during index rebuild, rollback takes 4 hours]. Here's how we mitigate each."


Anti-Patterns Quick Reference

Anti-pattern | What it looks like | Fix
--- | --- | ---
The Symptom Treater | Adds loading spinner to a slow API | Ask "why?" 5 times. Fix the root cause
The Assumption Follower | "We need Kubernetes because everyone uses it" | List assumptions. Challenge each one
The Solution-First Thinker | "We need to rewrite the backend" | State the problem first. No solutions in the problem statement
The Complexity Addict | Builds ML pipeline when a SQL query works | Start with the simplest thing. Add complexity only when it breaks
The Confident Guesser | "This will definitely work" with no data | Quantify uncertainty. "80% confident because [evidence]"
The Event Fixer | Reverts the deploy but doesn't fix the system | Ask: will this happen again? If yes, fix the system
The Single-Layer Thinker | "Add caching" without considering stale data | Ask "then what?" after every proposed fix

Checklist

Before deciding or solving:

  • [ ] I can state the problem in one sentence — without mentioning a solution
  • [ ] I've asked "why?" enough times to reach the root cause
  • [ ] I've separated facts from opinions and assumptions
  • [ ] I've inverted: "how would this fail?"
  • [ ] I've asked "then what?" at least once
  • [ ] I've chosen the simplest option that works
  • [ ] I've named what I don't know
