Performance Reviews

How we run performance cycles: quarterly (unofficial, you own it) and annual (formal, the company runs it).


TL;DR

  • One doc per quarter - Your Quarterly 1:1 doc is where goals are set, tracked, and graded. Don't scatter.
  • Win log at the bottom - Same doc. 5 minutes every Friday. One-liner per win with a number.
  • Quarterly goals ≠ annual goals - Quarterly: "Ship X, move Y metric." Annual: "I went from A to B."
  • Grade yourself before your manager does - End of quarter: grade each goal Hit/Partial/Miss with evidence.
  • Annual self-review = 4 quarterly summaries stitched into a narrative - If you ran the quarterly cycle, annual review writes itself.

Part 1: The Quarterly Cycle

You run this yourself. Your manager doesn't ask for it. That's the point — when review time comes, you have 12 months of receipts instead of 2 weeks of scrambling.

Where Everything Lives

One doc per quarter: your Quarterly 1:1 doc. Goals, actions, and wins all in the same place.

Quarterly 1:1 - Q2 2026 - Alex Chen
├── Goals (set at start of quarter)
├── Actions This Quarter (broken down by month/week)
├── Career Development (capability you're unlocking)
├── Support Needed
└── Win Log (appended weekly, lives at the bottom)

Don't use a separate Notion page for wins and a Google Doc for goals and a Slack message for updates. One doc. Everything in it.

Setting Quarterly Goals

Use the Quarterly 1:1 template. 3 goals max.

What makes a good quarterly goal:

A quarterly goal answers: "What will exist at the end of this quarter that doesn't exist now?"

| Good | Why | Bad | Why |
|---|---|---|---|
| "Ship notification service to prod, 0 P0 bugs for 2 weeks" | Concrete deliverable, measurable bar | "Work on notifications" | No end state, no bar |
| "Reduce API latency from 800ms to <200ms p95" | Before/after with specific number | "Improve performance" | No baseline, no target |
| "Run 20 user interviews, validate 2 product bets on roadmap" | Output + outcome connected | "Do more user research" | Activity, no outcome |
| "Reduce spec revision rounds from 4+ to 1-2 across 10 specs" | Measured over enough reps to be real | "Write better specs" | No measure of "better" |
| "Mentor Jamie to ship features solo (currently needs pairing)" | Transformation in someone else, observable | "Help junior engineers" | Vague, no observable change |

The test: Read your goal to a teammate. If they can't tell whether you hit it at end of quarter, rewrite it.

Goal anatomy:

Goal: [Verb] + [noun] + [measurable outcome]
Why it matters: [Business or team impact in one sentence]
Success looks like: [Number, shipped thing, or observable change]

Example:

Goal: Reduce API latency from 800ms to <200ms p95
Why it matters: Current 800ms causes 15% cart abandonment, costing ~$300K/year
Success looks like: p95 < 200ms sustained for 2+ weeks across all endpoints

Not:

Goal: Improve API performance
Why it matters: It's slow
Success looks like: Faster

For how to think about the KIND of goal (stretch goals that unlock capabilities vs task lists), see Goal Setting.

The Win Log

Bottom of your Quarterly 1:1 doc. Every Friday, 5 minutes.

Format:

## Win Log

### Week of Feb 3
- Shipped notification service to production → 0 P0 bugs, 12K notifications in first week
- Debugged payment timeout for 300 users → Connection pool exhaustion, fixed in 2 hours
- Led design review for checkout → Caught 3 edge cases, prevented 2-week rework

### Week of Feb 10
- Onboarded Jamie to codebase → First PR shipped Thursday (usually takes 2 weeks)
- Proposed caching strategy in arch review → Adopted by platform team

Rules:

  • One line per win. [What you did] → [Result with number]
  • Every win needs a number: users affected, time saved, revenue, error rate, weeks prevented.
  • Include decisions and influence. Killing a bad idea = win. Unblocking someone = win. Saying no with data = win.
  • Skip empty weeks. That's signal too — investigate why.

What counts:

| Category | Example |
|---|---|
| Shipped | "Launched checkout redesign → conversion 8% to 12%" |
| Fixed | "Resolved payment timeout for 300 users → error rate 5% to 0.1%" |
| Prevented | "Caught data model flaw in review → prevented 3-week rework" |
| Unblocked | "Built shared component → unblocked 3 teams" |
| Influenced | "Proposed caching strategy → adopted by platform team, saved $4K/month" |
| Grew others | "Mentored Jamie → shipping solo in 6 weeks (was fully paired)" |

End-of-Quarter Review

Last week of the quarter. 60 minutes. Three steps.

Step 1: Grade each goal

Go through each goal in your Quarterly 1:1 doc:

Goal: Ship notification service end-to-end without senior engineer pairing
Result: Shipped on time. 0 P0 bugs. 12K notifications/week.
Grade: Hit
Why: Broke it into 6 small PRs, got fast feedback loops

Goal: Reduce PR revision rounds from 4+ to 1-2
Result: 4.2 average → 2.1 average (measured over 18 PRs)
Grade: Partial (target was 1-2, landed at 2.1)
Why: Checklist helped for logic, still getting caught on naming conventions

Goal: Debug production issues without escalating
Result: Handled 3 of 5 solo. Escalated 2 (both required infra access I don't have)
Grade: Partial
Why: Need infra access and runbooks for the remaining issue types

Be honest. "Partial" is useful data. "Hit" on everything means you aimed too low.

Step 2: Write quarter summary

One paragraph. Pull the 3-5 biggest wins from your win log.

Q1 2026: Shipped notification service (12K notifications/week, 0 P0 bugs), reduced PR revision rounds from 4.2 to 2.1, handled 3 production incidents solo. Biggest gap: still escalating infra-related incidents. Next quarter focus: infra access + runbooks.

This paragraph is raw material for your annual review. Four of these = your year.

Step 3: Set next quarter's goals

Your "Partial" and "Miss" grades tell you where to aim. Create the next quarter's Quarterly 1:1 doc and repeat.

The pattern: Q1's partial goals become Q2's targets. Each quarter compounds.


Part 2: The Annual Performance Review

The formal one. Your company runs it. Rating, comp, and promotion depend on it.

How Annual Goals Differ from Quarterly Goals

Quarterly goals are concrete deliverables: "Ship X. Move Y metric. Reduce Z from A to B."

Annual goals are narratives: "I went from engineer who needs pairing to engineer who owns systems end-to-end." They describe a transformation arc, not a task list.

| | Quarterly Goal | Annual Goal |
|---|---|---|
| Timeframe | 90 days | 12 months |
| Format | "Ship X, move Y metric" | "Transition from A to B" |
| Granularity | Specific deliverable with number | Capability or scope change |
| Example (Eng) | "Reduce API latency from 800ms to <200ms" | "Went from executing assigned features to owning systems end-to-end and mentoring 2 engineers" |
| Example (PM) | "Run 20 interviews, validate 2 bets" | "Went from shipping features to driving outcomes — research practice built, 3 projects killed with data, $1.6M ARR from checkout redesign" |
| Measured by | Hit/Partial/Miss on specific number | Narrative backed by quarterly evidence |

Your annual goal is the story your 4 quarterly goals tell when you zoom out. You don't set it separately — it emerges from consistent quarterly execution.

Bad annual framing: "Worked on payments, notifications, and checkout. Also did some mentoring."

Good annual framing: "Transitioned from executing assigned features to owning systems independently. Shipped payments v2 ($2M ARR), notification service (12K/week), and checkout redesign (+4% conversion). Mentored 2 junior engineers from fully-paired to shipping solo."

The bad version lists activities. The good version shows a trajectory.

Writing the Self-Review

Start 4 weeks before the deadline. Not 4 days.

| When | What |
|---|---|
| 4 weeks before | Pull quarterly summaries and win logs. Start drafting. |
| 3 weeks before | First draft done. Let it sit 2 days. |
| 2 weeks before | Revise. Ask a trusted peer to read it. |
| 1 week before | Submit final version. Prep talking points. |

Step 1: Gather your material

You already have it:

  • 4 quarterly summaries (one paragraph each)
  • 4 graded goal sheets
  • ~50 weeks of win log entries
  • Peer feedback from the year

If you didn't run the quarterly cycle, reconstruct from calendar, PR history, Slack, and project tracker. Then start the quarterly cycle so this never happens again.

Step 2: Map to the rubric

Find your company's evaluation criteria. Map your wins to each dimension.

| Dimension | What they want to see | Your evidence |
|---|---|---|
| Impact | Metrics moved, outcomes shipped | Quarterly summaries, win log |
| Craft | Quality decisions, solid execution | PR feedback, architecture decisions, spec quality |
| Collaboration | Cross-team work, communication | Unblocked wins, design reviews, spec approvals |
| Leadership | Influenced direction, grew others | Mentoring wins, proposals adopted, decisions made |
| Growth | Took on harder problems than last year | Quarterly goal progression (Q1 → Q4 arc) |

If a dimension has no evidence, that's a gap you need to own in the review.

Step 3: Write impact statements

For each major accomplishment:

[What you did] → [Measurable result] → [Why it mattered]

Bad:

  • "Worked on the payments project"
  • "Helped improve system performance"
  • "Did user research"

Good:

  • "Led payments v2 migration (Stripe, 3 payment methods) → 100% transactions on new system in 6 weeks → Unlocked enterprise tier, $2M ARR"
  • "Fixed N+1 query in account aggregation → API latency 3.2s to 0.8s → Cart abandonment down 15%"
  • "Ran 22 user interviews, killed dashboard filter project → Saved 3 weeks eng time → Reallocated to checkout (higher ROI)"

Step 4: Build the narrative

Structure:

  1. Opening (2-3 sentences) - Your transformation arc. What changed about you this year.
  2. Top 3-5 impact statements - Biggest first. Numbers in every one.
  3. Growth - What you can do now that you couldn't 12 months ago. Name the capabilities.
  4. Development areas - 1-2 real gaps. What you're already doing about them.
  5. Next year - What you want to take on. Shows direction.

Example opening:

This year I went from executing assigned features to owning systems end-to-end. Shipped notification service independently (12K/week, 0 P0 bugs), led payments v2 migration ($2M ARR), and mentored 2 junior engineers from fully-paired to shipping solo. Biggest growth: production debugging — from escalating every incident to handling 80% independently.

Example development area:

I default to building before validating. On the dashboard project, I spent 3 weeks on filters that analytics showed 4% of users touch. Fix: no feature over 1 week of eng time ships without 5 user interviews or analytics check first. Applied this to 3 projects in Q4 — killed 1, scoped down 2.

The Conversation

Before:

  • Re-read your self-review
  • Prepare 3 points you need to land
  • Know what you want: promotion, raise, new scope, specific growth areas

During:

  • Listen first. Let your manager share their assessment.
  • If their view differs: "Can you give me a specific example?"
  • Don't argue ratings. Ask: "What would it take to get [next level] next cycle?"
  • Take notes. You'll forget 50% within a week.

After:

  • Email summary: feedback received, agreed goals, next steps
  • If targeting promotion: "What 3 things do I need to demonstrate by next review?"

Part 3: For Managers

Your Two Jobs

  1. No surprises - Nothing in the annual review is new. If they're hearing it for the first time, you failed during the year.
  2. Make their case - You advocate to leadership. The clearer their self-review, the easier your job.

Running the Quarterly Cadence

| When | What you do |
|---|---|
| Start of quarter | Set goals together in the Quarterly 1:1. 3 goals max. Specific success criteria. |
| Mid-quarter | Check progress in Biweekly 1:1s. Course-correct early. "You're behind on Goal 2 — what's blocking you?" |
| End of quarter | Grade goals together. Honest. "Goal 1: Hit. Goal 2: Partial — here's what I saw." |

Teaching the System

Don't assume your reports track their work.

  1. Start of year: Share the win log format. Explain: "This protects you at review time."
  2. Every biweekly 1:1: "What wins from the last 2 weeks?" If they blank, they're not logging.
  3. End of quarter: "Bring your top 5 wins." If they struggle, help reconstruct — then reinforce the habit.

Giving Ratings

Rate on demonstrated impact, not effort or tenure.

| Mistake | Fix |
|---|---|
| Rating everyone the same | Differentiate. Equal ratings = top performers leave. |
| Rating on effort ("worked so hard") | Rate on outcomes. Hard work that doesn't ship ≠ top performance. |
| Recency bias | Use their win log and all 4 quarterly reviews. |
| Holding to criteria you never shared | Share the rubric at the start of the year. |
| Sandbagging to "leave growth room" | If they hit the bar, say so. Withholding earned ratings kills trust. |

The Review Conversation

  1. Start: "How do you think the year went?"
  2. Share your assessment with specific examples for every point
  3. If gap between views: address directly with evidence, not opinions
  4. End with clear goals for next cycle
  5. Promotion candidates: "Here's what you need to demonstrate in the next 2 quarters"

Common Mistakes

| Mistake | Fix |
|---|---|
| Goals scattered across 5 tools | One doc per quarter: the Quarterly 1:1. Goals, actions, wins — all in one place. |
| Quarterly goals with no number | "Improve performance" → "Reduce latency from 800ms to <200ms p95" |
| Annual review = list of tasks | Annual review = transformation narrative. "I went from A to B." Not "I worked on X, Y, Z." |
| Writing self-review the night before | Start 4 weeks out. You're compiling 12 months into 1-2 pages. |
| No win log, reconstructing from memory | 5 minutes every Friday. If you don't write it down, it didn't happen. |
| "Partial" grades feel like failure | "Partial" is data. It tells you where to aim next quarter. "Hit" on everything = aimed too low. |
| Development areas left blank | Include 1-2 real gaps with what you're doing about them. Shows self-awareness, not weakness. |
