Pull Request Titles and Descriptions
Write PRs that explain impact, not implementation.
TL;DR
- Self-review first - Review your own PR before requesting human review
- Address AI feedback - Fix critical/high severity issues from AI code assistants
- Don't explain what files changed - reviewers can see the diff
- Do explain the before/after state - what changed for users/systems
- Use risks table - Clear format for risk/impact/mitigation
- Include metrics - Performance gains, error reduction, before/after numbers
- Title format: `type: concise description`
The Problem
Bad PR descriptions waste reviewer time.
Common mistakes:
- "Updated UserController.ts to add validation"
- "Changed the login flow in 3 files"
- "Fixed bug in payment processing"
What's wrong:
- Reviewers can see which files changed
- Doesn't explain what actually changed from user/system perspective
- No context on risks or deployment needs
- No self-review or AI feedback addressed
Reviewers need to know:
- What behavior changed?
- What are the risks?
- What do I need to deploy/configure?
- Was this already reviewed (by you and AI assistants)?
PR Creation Process
Follow these steps before requesting human review:
1. Self-Review
Review your own PR critically first:
- Check for code quality issues you might have missed
- Verify all tests are passing
- Look for potential improvements or refactoring opportunities
- Document any known limitations or future improvements needed
2. Address AI Code Assistant Feedback
If using AI code review tools (Gemini Code Assist, etc.):
- Carefully review all feedback from AI assistants
- Prioritize CRITICAL and HIGH severity issues
- Address all valid concerns before marking PR as ready for human review
- Document why feedback was not addressed if intentionally skipped
3. Iterate
After addressing feedback:
- Re-review your changes to ensure fixes are correct
- Run all tests and checks again
- Update PR description if scope changed significantly
- Check AI assistant feedback again to confirm all issues resolved
Then and only then, request human review.
PR Title Format
Structure
`type: concise description`

Types:
- `feat:` - New functionality
- `fix:` - Bug fix
- `refactor:` - Code improvement, no behavior change
- `perf:` - Performance improvement
- `infra:` - Infrastructure/tooling change
- `docs:` - Documentation only
- `test:` - Test additions or changes
- `chore:` - Maintenance tasks
Examples
Bad:
- "Update payment controller"
- "Fix bug"
- "Refactor user service"
Good:
- `feat: add Stripe payment retry with exponential backoff`
- `fix: prevent duplicate orders when user double-clicks checkout`
- `refactor: split 2000-line UserService into 5 domain services`
- `perf: reduce API response time from 3s to 200ms via database indexing`
- `infra: add staging environment with production-like data`
Why good titles work:
- Reviewer knows impact before opening PR
- Easy to find in git history later
- Clear what changed from system/user perspective
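A title format this regular is easy to enforce mechanically. As a sketch (a hypothetical CI step, not an existing tool — the length floor is an assumption meant to reject one-word descriptions), a check could look like:

```typescript
// Hypothetical CI check for the `type: concise description` title format.
// Allowed types mirror the list above; the 10-character minimum is an
// assumption meant to reject vague titles like "fix: bug".
const ALLOWED_TYPES = ["feat", "fix", "refactor", "perf", "infra", "docs", "test", "chore"];

function isValidPrTitle(title: string): boolean {
  const match = title.match(/^([a-z]+): (.+)$/);
  if (!match) return false; // no `type: ` prefix at all
  const type = match[1];
  const description = match[2];
  return ALLOWED_TYPES.includes(type) && description.trim().length >= 10;
}

console.log(isValidPrTitle("feat: add Stripe payment retry with exponential backoff")); // true
console.log(isValidPrTitle("Update payment controller")); // false
console.log(isValidPrTitle("fix: bug")); // false
```

Run as a required status check, this rejects vague titles before a human ever sees them.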
PR Description Template
Use this structure for every PR:
## What Changed (one-liner)
[Brief summary of the change]
## Why (one-liner)
[Business/technical reason for change]
## Before
- [Current behavior/state]
- [Metrics if applicable]
## After
- [New behavior/state]
- [Improved metrics if applicable]
## Risks & Mitigation
| Risk | Impact | Mitigation |
|------|--------|------------|
| [What could break] | High/Medium/Low | [How we prevent it] |
| [What could break] | High/Medium/Low | [How we prevent it] |
## Testing
- ✅ Unit tests added/passing
- ✅ Integration tests added/passing
- ✅ Manual testing: [Describe what you tested]
## Deployment Notes
**Feature Flags:**
- `ENABLE_PAYMENT_RETRY` - Set to `false` initially, enable after monitoring
**Environment Variables:**
- `STRIPE_WEBHOOK_SECRET` - Add to production .env
**Database Changes:**
- Migration: `20250104_add_payment_attempts_table.sql` - Run before deploying
**Config Changes:**
- None / [Describe]
## Rollback Plan (if high risk)
- Revert commit: `git revert <hash>`
- Feature flag: `ENABLE_PAYMENT_RETRY=false`
## Related
- Issue: #123
- Slack thread: [link]
- Design doc: [link]

Examples
Example 1: Feature
Title:
`feat: add Stripe payment retry with exponential backoff`

Description:
## What Changed (one-liner)
Added automatic retry for failed Stripe payments with exponential backoff
## Why (one-liner)
5-10% of payment failures are transient - automatic retry recovers 80% of these without user friction
## Before
- Payment fails → User sees error, must retry manually
- No automatic retry for transient failures (network timeouts, rate limits)
- Lost ~5% of payments due to transient issues
- Payment success rate: 92%
## After
- Payment fails → System retries automatically 3 times
- Exponential backoff: 1s, 5s, 15s delays
- User sees loading spinner during retries
- Only shows error after all retries exhausted
- Expected payment success rate: 96-97% (up 4-5 percentage points)
## Risks & Mitigation
| Risk | Impact | Mitigation |
|------|--------|------------|
| Duplicate charges if retry logic has bug | High | Added idempotency key to all Stripe calls + Database constraint prevents duplicate `order_id` + `stripe_payment_id` |
| User waits too long (up to 21 seconds) | Medium | Show progress: "Payment processing... (Retry 2/3)" + User can cancel during retry |
| Increased Stripe API usage (3x calls on failures) | Low | Only retry on specific errors (network, rate limit, timeout) + Don't retry on card declined, insufficient funds |
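The retry-plus-idempotency scheme in the table above could be sketched roughly like this (hedged: `chargeOnce` stands in for the real Stripe call, and the helper names and key scheme are illustrative assumptions, not the actual implementation):

```typescript
// Sketch of retry-with-backoff plus an idempotency key, as described above.
const RETRY_DELAYS_MS = [1_000, 5_000, 15_000]; // 1s, 5s, 15s between attempts
const RETRYABLE = new Set(["network_error", "rate_limited", "timeout"]);

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function chargeWithRetry(
  orderId: string,
  chargeOnce: (idempotencyKey: string) => Promise<string>, // resolves to a payment id
  delaysMs: number[] = RETRY_DELAYS_MS,
): Promise<string> {
  // One key per order: the provider deduplicates repeat calls carrying the same
  // key, so even a buggy extra retry cannot create a second charge.
  const idempotencyKey = `charge-${orderId}`;
  for (let attempt = 0; ; attempt++) {
    try {
      return await chargeOnce(idempotencyKey);
    } catch (err) {
      const code = err instanceof Error ? err.message : String(err);
      // Give up on hard failures (card declined, insufficient funds) and after
      // the final retry; the caller then shows the error to the user.
      if (!RETRYABLE.has(code) || attempt >= delaysMs.length) throw err;
      await sleep(delaysMs[attempt]);
    }
  }
}
```

Note the loop reuses one key for all attempts — rotating keys per attempt would silently defeat the duplicate-charge protection.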
## Testing
- ✅ Unit tests: Payment retry logic with mocked Stripe
- ✅ Integration tests: End-to-end payment with retry simulation
- ✅ Manual testing: Tested with Stripe test mode rate limit errors
- ✅ Load testing: 100 concurrent payments with 10% failure rate
## Deployment Notes
**Feature Flags:**
- `ENABLE_PAYMENT_RETRY` - Set to `false` initially, enable after 24h monitoring
**Environment Variables:**
- None
**Database Changes:**
- Migration: `20250104_add_payment_attempts.sql`
  - Adds `payment_attempts` table to track retry history
  - Run before deploying code
**Config Changes:**
- None
## Rollback Plan
- Feature flag: `ENABLE_PAYMENT_RETRY=false` (instant rollback)
- Revert commit: `git revert <hash>`
## Related
- Issue: #456 "Improve payment success rate"
- Stripe docs: https://stripe.com/docs/error-handling

Example 2: Bug Fix
Title:
`fix: prevent duplicate orders when user double-clicks checkout`

Description:
## What Changed (one-liner)
Disabled "Place Order" button on first click to prevent duplicate orders
## Why (one-liner)
Users on slow connections double-click thinking button didn't register, creating duplicate charges
## Before
- User double-clicks "Place Order" button → 2 orders created
- User charged twice
- ~20 duplicate orders/week (0.5% of orders)
- Manual refunds required for each
## After
- First click → Button disabled immediately
- Second click within 5 seconds → Ignored
- Only 1 order created per checkout session
- Expected: 0 duplicate orders
## Risks & Mitigation
| Risk | Impact | Mitigation |
|------|--------|------------|
| Button stays disabled if API call fails | Medium | Re-enable button after 30s timeout + Show error message so user knows to retry |
| Legitimate retry blocked (user navigates back, tries again) | Low | 5-second lock only (not permanent) + Lock is session-based, cleared on page refresh |
## Testing
- ✅ Unit tests: Double-click simulation
- ✅ Manual testing: Clicked button 10 times rapidly, verified 1 order
- ✅ Tested timeout: Killed API server, verified button re-enables after 30s
## Deployment Notes
**Feature Flags:**
- None (low-risk change, deploy directly)
**Environment Variables:**
- None
**Database Changes:**
- None
**Config Changes:**
- None
## Rollback Plan
- Revert commit: `git revert <hash>` (if issues arise)
## Related
- Issue: #789 "Users reporting duplicate charges"
- Zendesk tickets: 45 duplicate order reports last month

Example 3: Performance
Title:
`perf: reduce user search API from 3s to 200ms via database indexing`

Description:
## What Changed (one-liner)
Added database indexes to user search, reducing response time from 3-5s to 100-250ms
## Why (one-liner)
40% of users abandon search after 2+ second wait - fast search improves UX and engagement
## Before
- User search query: 3-5 seconds response time
- Full table scan on 500K user records
- 40% abandonment rate on search feature
- 95th percentile: 4.2s
## After
- User search query: 100-250ms response time
- Added indexes on `users.email`, `users.name`, `users.company`
- 15x faster
- 95th percentile: 250ms
## Risks & Mitigation
| Risk | Impact | Mitigation |
|------|--------|------------|
| Index creation locks table during migration | High | Using `CREATE INDEX CONCURRENTLY` (no lock) + Run during low-traffic window (Sunday 3 AM PST) |
| Indexes slow down writes (INSERT/UPDATE) | Low | Tested: Write performance impact <5ms per operation + Read/write ratio is 100:1 (worth the tradeoff) |
| Disk space increase | Low | Indexes add 2GB (database has 500GB free) |
## Testing
- ✅ Load tested: 100 concurrent searches, all <300ms
- ✅ Verified EXPLAIN plan: Uses index, no seq scan
- ✅ Tested on staging (50K users): Index creation took 2 minutes
## Deployment Notes
**Feature Flags:**
- None
**Environment Variables:**
- None
**Database Changes:**
- Migration: `20250104_add_user_search_indexes.sql`
  - Creates indexes on `email`, `name`, `company`
  - Run with `CONCURRENTLY` to avoid locks
  - Estimated time: 15 minutes on production (500K rows)
  - **Run during maintenance window: Sunday 3 AM PST**
**Config Changes:**
- None
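For reference, the migration described above might look roughly like this (a hypothetical sketch; the index names mirror the rollback plan below):

```sql
-- Hypothetical sketch of 20250104_add_user_search_indexes.sql.
-- CONCURRENTLY builds each index without blocking writes, but it cannot run
-- inside a transaction block, so each statement must commit on its own.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email   ON users (email);
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_name    ON users (name);
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_company ON users (company);
```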
## Rollback Plan
- Drop indexes one at a time (`DROP INDEX CONCURRENTLY` accepts only a single index per statement): `DROP INDEX CONCURRENTLY idx_users_email;` then `idx_users_name`, then `idx_users_company`
- No code revert needed (queries work without indexes, just slower)
## Related
- Issue: #234 "Search is too slow"
- Performance dashboard: 95th percentile was 4.2s, now 250ms

Example 4: Refactor
Title:
`refactor: split 2000-line UserService into 5 domain services`

Description:
## What Changed (one-liner)
Split monolithic UserService into 5 focused services for better testability and maintainability
## Why (one-liner)
UserService became a dumping ground (2000 lines) - splitting improves testability, readability, and onboarding
## Before
- `UserService.ts`: 2000 lines, handles auth, profile, settings, notifications, billing
- Hard to test, hard to find bugs, hard to onboard new engineers
- Average PR review time for UserService changes: 45 minutes
## After
- Split into: `AuthService` (login, logout, token refresh), `ProfileService` (name, avatar, bio), `SettingsService` (preferences, privacy), `NotificationService` (email, push, in-app), `BillingService` (subscription, invoices)
- Each service <400 lines
- Clear separation of concerns
- Average PR review time for changes: 15 minutes (3x faster)
## Risks & Mitigation
| Risk | Impact | Mitigation |
|------|--------|------------|
| Logic moved incorrectly, behavior changes | High | Zero behavior change - pure code movement + All 450 existing tests still pass + Added regression tests for critical flows |
| Import paths break across codebase | Medium | Used IDE refactor tool (Find & Replace All) + TypeScript compiler catches import errors + All unit tests pass |
## Testing
- ✅ All 450 existing unit tests pass
- ✅ All 120 integration tests pass
- ✅ Manual smoke test: Login, update profile, change settings, receive notification, view billing
## Deployment Notes
**Feature Flags:**
- None (no behavior change)
**Environment Variables:**
- None
**Database Changes:**
- None
**Config Changes:**
- None
## Rollback Plan
- Revert commit: `git revert <hash>` (low risk, all tests pass)
## Related
- Issue: #890 "UserService is too large"
- Tech debt doc: "Service splitting guidelines"

Common Mistakes
Mistake 1: Listing files changed
Bad:
Updated:
- UserController.ts
- UserService.ts
- user.routes.ts

Why bad: Reviewers can see this in the diff
Good:
## What Changed
**Before:** Users could update their email without verification
**After:** Email change requires verification link (prevents account takeover)

Mistake 2: No before/after
Bad:
Added email verification to user update flow

Why bad: Doesn't explain the current state or why the change matters
Good:
## What Changed
**Before:**
- User updates email → Immediately active
- Risk: Attacker changes victim's email, takes over account
**After:**
- User updates email → Verification email sent to NEW address
- Email only changes after clicking verification link
- Old email notified of change attempt

Mistake 3: Missing risks
Bad:
Added retry logic for payment processing

Why bad: No mention of duplicate charge risk
Good:
## Risks
- **Risk:** Duplicate charges if idempotency fails
- Mitigation: Stripe idempotency keys on all API calls
- Mitigation: Database unique constraint on `order_id` + `payment_id`

Mistake 4: No deployment notes
Bad:
Added feature flag for new dashboard

Why bad: Doesn't say how to enable it
Good:
## Deployment Notes
**Feature Flags:**
- `ENABLE_NEW_DASHBOARD` - Set to `false` in production initially
- After 24h monitoring, enable for 10% of users
- If no errors, enable for 100%
**Rollback:**
- Set flag to `false` - instant rollback, no code deploy needed

Mistake 5: Vague title
Bad:
Fix bug in payment processing

Why bad: Which bug? What changed?
Good:
fix: prevent duplicate charges when Stripe webhook delayed >30s

How to Review PRs
When reviewing, check for:
1. Clear before/after
- [ ] Can I understand what changed without reading code?
- [ ] Do I know the impact on users/systems?
2. Risk assessment
- [ ] Are risks identified?
- [ ] Are mitigations reasonable?
- [ ] Should we add more safeguards?
3. Deployment plan
- [ ] Do I know what to configure before/after deploy?
- [ ] Are database migrations documented?
- [ ] Is there a rollback plan?
4. Testing coverage
- [ ] Are critical paths tested?
- [ ] Was manual testing done?
- [ ] Do we need QA review?
If any of these are missing, ask for clarification before reviewing code.
When to Skip This Template
Simple changes can be shorter:
Example: Typo fix
docs: fix typo in README: "installtion" → "installation"

Example: Dependency update
chore: update lodash from 4.17.20 to 4.17.21 (security patch)

Rule of thumb:
- Code change <20 lines, no behavior change → Short description OK
- Behavior change, new feature, bug fix → Use full template
- Infrastructure, database, config change → Use full template
Action: Write Your Next PR
Use this checklist:
Before opening PR:
- [ ] Self-review completed (checked code quality, ran tests, looked for improvements)
- [ ] AI code assistant feedback addressed (CRITICAL and HIGH severity issues fixed)
- [ ] Title follows `type: concise description` format
- [ ] Description has "What Changed (one-liner)" and "Why (one-liner)"
- [ ] Before/After sections filled with metrics where applicable
- [ ] Risks table completed (Risk | Impact | Mitigation)
- [ ] Testing section filled (unit, integration, manual)
- [ ] Deployment notes (flags, env vars, migrations, config)
- [ ] Rollback plan included (for high-risk changes)
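Parts of this checklist can be automated. As a sketch (a hypothetical pre-submit script, not an existing tool), a check that the required template sections are present might look like:

```typescript
// Hypothetical pre-submit check: flag template sections missing from a PR body.
// Section headings mirror the description template earlier in this guide.
const REQUIRED_SECTIONS = [
  "## What Changed",
  "## Why",
  "## Before",
  "## After",
  "## Risks & Mitigation",
  "## Testing",
  "## Deployment Notes",
];

function missingSections(prBody: string): string[] {
  return REQUIRED_SECTIONS.filter((heading) => !prBody.includes(heading));
}

const draft = "## What Changed (one-liner)\nAdded retry\n## Why (one-liner)\nRecover transient failures";
console.log(missingSections(draft));
// → [ '## Before', '## After', '## Risks & Mitigation', '## Testing', '## Deployment Notes' ]
```

Wiring this into CI turns the checklist from a convention into a gate: the PR can't be marked ready for review while sections are missing.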
Time investment:
- Self-review: 5-10 minutes
- AI feedback: 5-15 minutes
- Writing description: 5-10 minutes
- Total: 15-35 minutes
Payoff:
- Faster reviews (reviewer understands context immediately)
- Fewer questions in comments
- Safer deploys (risks documented upfront, rollback plan ready)
- Better git history (easy to find why changes were made)
- Higher quality code (self-review and AI feedback catches issues early)
Related
- Written Communication - How to write clearly and concisely
- Software Engineer Levels - PR quality expectations by level
Remember: Code review is asynchronous. A good PR description saves 30 minutes of back-and-forth in comments.