
Product Management Onboarding Guide

How we work as PMs. Universal principles that apply across all products.


TL;DR

Mindset:

  • Problems first - Understand the problem before proposing solutions
  • Validate before building - Every feature request is a hypothesis
  • Scope ruthlessly - Best PMs are known for what they say no to

Communication:

  • Write things down - If it's not written, it doesn't exist
  • Proactive updates - Don't make people ask
  • Explain the "why" - Decisions need rationale

Discovery:

  • Four risks - Value, Usability, Feasibility, Viability
  • Qualitative + Quantitative - Interviews tell you what to build, data tells you if it's working
  • Gut feeling is not validation - "I think users want this" is a hypothesis to test

Execution:

  • Outcomes over outputs - "Reduce support tickets by 30%" not "Launch feature X"
  • Ship 80% solutions - Shipped beats perfect every time
  • Postmortems - Shipped is not done, learning is the goal

Part 1: How We Work

Problems First, Solutions Later

Before writing any spec, be crystal clear about:

  • What problem are we solving? Not what feature are we building.
  • Who is affected? Which users? How many? How often?
  • What's the impact? Quantify if possible (revenue, retention, support load, time saved).
  • What are we NOT solving? Scope boundaries matter.

Bad: "We need to add a reschedule feature"

Good: "15% of patients cancel and rebook. This creates 3 support tickets/day and we lose 8% who abandon the rebooking flow."

The solution might be reschedule. Or better reminders. Or easier cancellation. Or nothing. Let the problem guide you.

Validate Before You Build

Every feature request is a hypothesis. Treat it as such.

Before committing engineering time:

  1. Do we have evidence the problem exists? (Not just one user complaint)
  2. Do we understand why current solutions don't work?
  3. Have we talked to actual users experiencing this?
  4. Is this problem worth solving given our priorities?

Sources of validation:

  • User interviews (qualitative)
  • Analytics data (quantitative)
  • Support ticket patterns
  • Session recordings
  • Competitor research
  • Sales/CS feedback

Gut feeling is not validation. "I think users want this" is a hypothesis to test, not a reason to build.

Scope Ruthlessly

The best PMs are known for what they say no to, not what they say yes to.

Scope creep happens when:

  • Requirements are vague ("make it better")
  • Success criteria are undefined
  • Edge cases aren't discussed upfront
  • Stakeholders add "just one more thing"

Prevent it by:

  • Writing explicit out-of-scope sections
  • Defining MVP vs future iterations
  • Getting sign-off before development starts
  • Pushing back on additions mid-sprint

A shipped 80% solution beats a perfect solution stuck in development.

Write Things Down

If it's not written, it doesn't exist.

Why documentation matters:

  • Async communication (timezones, schedules)
  • Onboarding future team members
  • Avoiding "I thought we agreed..." debates
  • Creating institutional memory

What to document:

  • Problem statements and context
  • Decisions and their rationale
  • Requirements and acceptance criteria
  • Meeting outcomes and action items
  • Learnings from launches

Don't trust memory. Write it down.


Part 2: Discovery

The Discovery Process

Discovery is about reducing risk before committing resources.

Four risks to address:

  1. Value risk - Will users want this?
  2. Usability risk - Can users figure it out?
  3. Feasibility risk - Can we build it?
  4. Viability risk - Does it work for the business?

Discovery outputs:

  • Validated problem statement
  • User research findings
  • Proposed solution with rationale
  • Success metrics and targets
  • Known risks and mitigations

User Research

Qualitative research (the "why"):

  • User interviews: 5-8 are often enough to reveal patterns
  • Contextual inquiry: Watch users in their environment
  • Usability testing: Can they complete the task?

Interview tips:

  • Ask about past behavior, not hypotheticals
  • "Tell me about the last time you..." > "Would you use...?"
  • Follow up with "Why?" at least 3 times
  • Listen more than you talk (80/20 rule)
  • Record and transcribe (with permission)

Quantitative research (the "what"):

  • Analytics: What are users actually doing?
  • Surveys: Structured feedback at scale
  • A/B tests: Which option performs better?

Qualitative tells you what to build. Quantitative tells you if it's working.

Competitive Analysis

What to research:

  • How do competitors solve similar problems?
  • What's table stakes in the market?
  • Where can we differentiate?
  • What can we learn from their mistakes?

How to document:

  • Feature comparison matrix
  • UX teardowns with screenshots
  • Pricing and positioning analysis
  • User reviews and complaints

Don't copy competitors. Understand the landscape and make informed choices.

Opportunity Assessment

Before prioritizing, assess each opportunity:

| Question | Why It Matters |
| --- | --- |
| How many users are affected? | Reach |
| How often do they face this problem? | Frequency |
| How painful is the problem? | Intensity |
| What's the business impact? | Value |
| How confident are we in the above? | Risk |

Use frameworks (RICE, ICE) as tools, not rules. The goal is structured thinking, not false precision.
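
For illustration, RICE scores an opportunity as (Reach × Impact × Confidence) / Effort. A minimal sketch with hypothetical opportunities and made-up numbers:

```python
# Minimal RICE scoring sketch. The opportunities, scales, and numbers
# below are illustrative placeholders, not real product data.
from dataclasses import dataclass


@dataclass
class Opportunity:
    name: str
    reach: float        # users affected per quarter
    impact: float       # e.g. 0.25 (minimal) to 3 (massive)
    confidence: float   # 0.0 - 1.0
    effort: float       # person-weeks

    @property
    def rice(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort


opportunities = [
    Opportunity("Reschedule flow", reach=1200, impact=2, confidence=0.8, effort=6),
    Opportunity("Better reminders", reach=4000, impact=1, confidence=0.5, effort=3),
]

for opp in sorted(opportunities, key=lambda o: o.rice, reverse=True):
    print(f"{opp.name}: RICE = {opp.rice:.0f}")
```

The ranking, not the absolute score, is the useful output. Treat it as an input to the discussion, not the decision itself.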


Part 3: Prioritization

Prioritization Principles

Prioritize outcomes, not outputs:

  • Output: "Launch feature X"
  • Outcome: "Reduce support tickets by 30%"

Outcomes let you find better solutions. Outputs lock you into one path.

Prioritize learning when uncertain:

  • High confidence → Build the full solution
  • Low confidence → Build the smallest thing that teaches you something

Prioritize reversible over irreversible:

  • Reversible decisions (UI changes, copy) → Move fast
  • Irreversible decisions (data models, pricing) → Be thorough

Saying No

Most ideas are good ideas. Most good ideas shouldn't be built right now.

How to say no constructively:

  1. Acknowledge the merit ("This is a real problem")
  2. Explain the trade-off ("Building this means not building X")
  3. Provide context ("Here's what we're prioritizing and why")
  4. Offer alternatives ("Could we solve this with Y instead?")
  5. Leave the door open ("Let's revisit next quarter")

Document rejected ideas and why. Context changes. Today's no might be tomorrow's yes.

Stakeholder Management

Identify stakeholders early:

  • Who needs to approve this?
  • Who will be affected by this?
  • Who has context you need?
  • Who can block this?

Keep stakeholders aligned:

  • Share updates proactively (don't make them ask)
  • Flag risks and changes early
  • Explain trade-offs and get input on decisions
  • Close the loop on their feedback

Manage conflicting priorities:

  • Understand each stakeholder's goals
  • Find common ground
  • Escalate when needed with clear options
  • Document decisions and rationale

Your job is to make the right trade-offs, not to make everyone happy.


Part 4: Specification

Writing Requirements

Requirements should be clear enough that someone else could build the right thing from them.

User Stories:

As a [persona]
I want [action]
So that [benefit]

Keep stories small. If it takes more than a sprint, break it down.

Acceptance Criteria:

Given [context]
When [action]
Then [expected result]

Cover:

  • Happy path (it works as expected)
  • Edge cases (unusual but valid scenarios)
  • Error states (what happens when things go wrong)
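
For example, two hypothetical criteria for the reschedule scenario from Part 1 (a happy path and an error state):

Given a patient with an upcoming appointment
When they choose "Reschedule" and select an available slot
Then the original booking is released and the new booking is confirmed

Given a patient trying to reschedule
When no alternative slots are available
Then they see an explanation and an option to contact support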

Out of Scope: Explicitly list what you're NOT building. This prevents:

  • Scope creep during development
  • Misaligned expectations
  • "I thought we were including X" conversations

Success Metrics

Every initiative needs:

  1. Primary metric - The one thing that matters most
  2. Secondary metrics - Guardrails to ensure no negative impact
  3. Targets - Specific numbers with timeframes
  4. Measurement plan - How you'll actually track this

Good metrics are:

  • Measurable (you can actually track it)
  • Attributable (you can connect it to your change)
  • Timely (you'll see signal soon enough to act)
  • Aligned (with user and business value)

Watch out for:

  • Vanity metrics (look good, mean nothing)
  • Gaming (people optimizing for the metric, not the goal)
  • Lagging indicators (signal comes too late)

Design Collaboration

Before design:

  • Share problem context and research findings
  • Align on constraints (technical, business, timeline)
  • Discuss success criteria

During design:

  • Review early and often (don't wait for "final")
  • Focus feedback on whether it solves the problem
  • Separate "I don't like it" from "This won't work because..."

After design:

  • Ensure edge cases and error states are covered
  • Verify accessibility requirements
  • Confirm designs are feasible with engineering

Design is a partner, not a service. Collaborate, don't hand off.

Engineering Collaboration

Before development:

  • Walk through requirements together
  • Ask for feasibility and effort input
  • Discuss trade-offs and alternatives
  • Agree on scope and approach

During development:

  • Be available for questions
  • Make decisions quickly when blockers arise
  • Don't change requirements mid-sprint without good reason
  • Review work in progress, not just finished work

After development:

  • Test thoroughly before release
  • Don't treat engineers as QA
  • Celebrate launches together
  • Share results and learnings

Respect engineering expertise. They often see solutions you don't.


Part 5: Execution

Sprint Participation

Before sprint:

  • Backlog is groomed and prioritized
  • Stories have clear acceptance criteria
  • Dependencies are identified
  • Designs are ready

During sprint:

  • Attend standups (be engaged, not just present)
  • Unblock the team quickly
  • Resist adding scope mid-sprint
  • Review completed work promptly

After sprint:

  • Participate in retrospectives
  • Act on feedback about PM performance
  • Prep for next sprint

Launch Planning

Pre-launch checklist:

- [ ] Feature complete and tested
- [ ] Success metrics tracking verified
- [ ] Documentation updated (help docs, FAQs)
- [ ] Support team briefed
- [ ] Rollback plan documented
- [ ] Communication plan ready
- [ ] Stakeholders informed

Launch communication:

  • Who needs to know? (Internal and external)
  • What do they need to know?
  • When should they know it?
  • How will they find out?

Rollout strategy:

  • Big bang vs phased rollout
  • Feature flags for gradual enablement
  • Monitoring plan for first 24-48 hours
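
To make "feature flags for gradual enablement" concrete, here is a minimal, hypothetical sketch of a percentage-based rollout (most teams use an off-the-shelf feature-flag service; the flag name and numbers are made up):

```python
# Hypothetical sketch of a deterministic percentage rollout with a feature flag.
# Real products typically use a feature-flag service; this only illustrates
# the idea of gradual, sticky enablement per user.
import hashlib


def is_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Enable the flag for roughly `rollout_percent`% of users, deterministically,
    so each user keeps the same experience across sessions."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket 0-99 per (flag, user) pair
    return bucket < rollout_percent


# Example: roll the hypothetical "reschedule-flow" flag out to 10% of users.
print(is_enabled("reschedule-flow", "user-42", rollout_percent=10))
```

Ramping up is then a configuration change (10% → 50% → 100%), and rolling back means setting the percentage to zero.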

Post-Launch

First week:

  • Monitor metrics daily
  • Watch error rates and support tickets
  • Gather qualitative feedback
  • Be ready to roll back or hotfix

First month:

  • Analyze against success criteria
  • Document learnings
  • Decide: iterate, expand, or move on
  • Close the loop with stakeholders

Retrospective:

  • What went well?
  • What didn't?
  • What will we do differently next time?

Shipped is not done. Learning is the goal.


Part 6: Communication

Status Updates

Proactive updates build trust. Reactive updates create anxiety.

What to communicate:

  • Progress against goals
  • Blockers and risks
  • Changes in scope or timeline
  • Decisions made and rationale

When to communicate:

  • Regular cadence (weekly updates)
  • When things change (don't wait for the next update)
  • When you need input (give context for the decision)

How to communicate:

  • Lead with the headline
  • Provide context, not just data
  • Be specific about asks
  • Use the right channel for the urgency

Meetings

Before scheduling a meeting, ask:

  • Can this be async? (Document, Slack, Loom)
  • Who actually needs to be there?
  • What decision or outcome do we need?

Running effective meetings:

  • Share agenda in advance
  • Start with the goal
  • Timebox discussions
  • Capture decisions and action items
  • End with next steps

After meetings:

  • Send summary within 24 hours
  • Assign owners and deadlines
  • Follow up on action items

Meetings are expensive. Treat them that way.

Documentation

Write for the reader:

  • Assume no prior context
  • Use simple language
  • Structure for skimmability (headers, bullets, tables)
  • Link to details rather than including everything

Keep it current:

  • Update when things change
  • Delete outdated docs (wrong docs are worse than no docs)
  • Date your documents

Single source of truth:

  • One place for each type of information
  • Link, don't duplicate
  • Make it easy to find

Part 7: Working with Data

Analytics Fundamentals

Understand your data:

  • What events are tracked?
  • What's the data quality?
  • What are the known gaps?
  • How long is data retained?

Asking good questions:

  • Start with the decision you need to make
  • Work backwards to the data you need
  • Consider what would change your mind
  • Account for confounding factors

Common pitfalls:

  • Correlation ≠ causation
  • Small sample sizes
  • Selection bias
  • Survivorship bias
  • Vanity metrics

Building Dashboards

Good dashboards:

  • Answer specific questions
  • Update automatically
  • Are easy to interpret
  • Lead to action

Dashboard hygiene:

  • Remove unused dashboards
  • Document what each metric means
  • Set up alerts for anomalies
  • Review regularly for relevance
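
As one way to approach "set up alerts for anomalies", a simple trailing-window check is often enough to start with; the metric values and threshold below are illustrative:

```python
# Simple anomaly check: flag today's value if it falls more than
# `threshold` standard deviations from the trailing mean.
# The daily values below are made up for illustration.
from statistics import mean, stdev


def is_anomalous(history: list[float], today: float, threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) > threshold * sigma


daily_signups = [118, 122, 130, 125, 121, 119, 127]  # last 7 days (fake data)
print(is_anomalous(daily_signups, today=61))  # True -> worth an alert
```

Start crude and tune the threshold once you see how noisy the metric actually is.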

Experimentation

When to A/B test:

  • Change is reversible
  • You have enough traffic for statistical significance
  • The decision matters enough to wait

When NOT to A/B test:

  • Obvious improvements (just ship it)
  • Not enough traffic (you'll never reach significance)
  • Irreversible changes (can't A/B test a data model)

Running good experiments:

  • Clear hypothesis before starting
  • Single variable when possible
  • Adequate sample size (see the sketch below)
  • Sufficient runtime
  • Pre-registered success criteria

Don't p-hack. Accept null results. Learn from failures.
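
To put numbers behind "adequate sample size" and "enough traffic", here is a minimal sketch of the standard normal-approximation formula for comparing two conversion rates (the baseline rate, lift, and defaults are illustrative, not targets):

```python
# Rough per-variant sample size for an A/B test on a conversion rate,
# using the standard normal-approximation formula. The baseline rate
# and expected lift below are illustrative only.
from math import ceil
from statistics import NormalDist


def sample_size_per_variant(p_baseline: float, p_variant: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # desired statistical power
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_variant) ** 2
    return ceil(n)


# e.g. detecting a lift from 10% to 12% conversion needs ~3,800+ users per variant
print(sample_size_per_variant(0.10, 0.12))
```

If your traffic is far below that, the test will take too long to be worth it; rely on qualitative signal instead.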


Quick Reference

Before Starting Work

  • [ ] Do I understand the problem (not just the solution)?
  • [ ] Do I have evidence the problem exists?
  • [ ] Do I know who is affected and how often?
  • [ ] What's the impact if this succeeds?
  • [ ] What are the risks?
  • [ ] How will I measure success?

Before Handing to Engineering

  • [ ] Problem is validated with data
  • [ ] User stories have clear acceptance criteria
  • [ ] Out of scope is documented
  • [ ] Designs are complete and approved
  • [ ] Success metrics are defined
  • [ ] Dependencies are identified
  • [ ] Stakeholders have signed off

Before Launching

  • [ ] Feature is tested and working
  • [ ] Metrics tracking is verified
  • [ ] Documentation is updated
  • [ ] Support team is briefed
  • [ ] Rollback plan is ready
  • [ ] Communication plan is ready

After Launching

  • [ ] Monitor metrics daily (first week)
  • [ ] Watch support tickets and errors
  • [ ] Gather qualitative feedback
  • [ ] Document learnings
  • [ ] Decide on next steps

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Proposing solutions before understanding problems | Ask "what problem are we solving?" first |
| Building without validation | Treat every feature request as a hypothesis |
| Vague requirements | Write explicit acceptance criteria and out-of-scope |
| Gut feeling as validation | "I think users want this" needs data |
| Changing scope mid-sprint | Push back or defer to next sprint |
| Silent until asked | Proactive updates build trust |
| Outputs over outcomes | "Reduce tickets by 30%" not "Launch feature X" |
| Skipping postmortems | Shipped is not done, learning is the goal |
