The skill: Deciding what level of quality a feature needs before it ships, then verifying it yourself before involving anyone else. Not writing test plans. Calibrating quality investment to risk.
Not everything needs the same quality bar
A billing change needs near-zero defects. A new tooltip can ship with rough edges. The skill is matching testing effort to the cost of failure. Ask: "If this breaks in production, what happens?" If the answer is "users lose money" or "users lose data," test exhaustively. If the answer is "a button looks weird on Safari," ship it.
The PM smoke test
Before you hand anything to QA or users, walk the happy path yourself. Then try it on mobile. Then try it as a brand new user with no context. You'll catch 80% of the embarrassing stuff in 15 minutes.
Write bug reports that save engineering time
Steps to reproduce, expected behavior, actual behavior, severity. Include a screenshot or screen recording. "It's broken" with no context wastes an engineer's afternoon. A good bug report gets fixed in an hour.
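A minimal template makes that concrete. Everything below is an illustrative example, not a real ticket; the field names are a common convention, not a standard:

```
Title: Save button does nothing on mobile Safari

Steps to reproduce:
1. Open /settings on iPhone Safari
2. Change display name
3. Tap Save

Expected: name updates, confirmation toast appears
Actual: nothing happens; network tab shows a 422 error
Severity: medium — blocks profile edits; workaround exists on desktop
Attachment: screen-recording.mp4
```

Five lines of structure is the difference between "can't reproduce" and a same-day fix.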
The "good enough" decision
When you find a bug before launch, run it through three filters: severity (how bad?), frequency (how many users hit it?), workaround (can they get past it?). A cosmetic bug affecting 2% of users with an obvious workaround ships. A data loss bug affecting 0.1% of users blocks the release.
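The three filters can be sketched as a small decision function. This is a sketch of the reasoning, not a policy engine; the severity labels and the 5% threshold are illustrative assumptions:

```python
# Ship/block triage: severity, frequency, workaround.
# Labels and thresholds are illustrative, not a standard.

def blocks_release(severity: str, pct_users_affected: float, has_workaround: bool) -> bool:
    """Return True if the bug should block the release."""
    if severity == "data_loss":
        return True   # data loss blocks, regardless of how rare it is
    if severity == "cosmetic":
        return False  # cosmetic bugs ship
    # Functional bugs: block only when many users hit it and can't get past it
    return pct_users_affected >= 5.0 and not has_workaround

# The two examples from the text:
assert blocks_release("cosmetic", 2.0, True) is False   # ships
assert blocks_release("data_loss", 0.1, False) is True  # blocks
```

The point of writing it down this way is that the decision stops being a gut call made at 11pm before launch and becomes a rule the whole team can argue with in advance.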
Regression is the silent killer
Fixing one thing often breaks another. When a change touches a core flow (auth, payments, onboarding), test the adjacent flows too, not just the thing that changed. Ask engineering: "What else could this affect?"
When to push for automated tests
Manual QA doesn't scale. If a flow is critical, touched frequently, or has broken before, it needs automated test coverage. You don't write the tests, but you should know which flows are covered and which aren't. Ask: "If someone changes this code in six months, will a test catch if it breaks?" If the answer is no and the flow matters, that's a gap worth flagging.
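"Will a test catch it in six months?" translates directly into a regression test pinned to the critical flow. A minimal sketch, assuming a hypothetical `login` function standing in for your real auth flow:

```python
# Hypothetical regression tests for a critical auth flow.
# `login` is an illustrative stand-in so the example runs; it is not a real API.

def login(email: str, password: str) -> bool:
    """Stand-in implementation of the flow under test."""
    return email == "user@example.com" and password == "correct-horse"

def test_login_happy_path():
    # The flow that matters: valid credentials get in
    assert login("user@example.com", "correct-horse")

def test_login_rejects_bad_password():
    # The failure mode that has burned teams before
    assert not login("user@example.com", "wrong")

test_login_happy_path()
test_login_rejects_bad_password()
```

In practice these would live in a test runner like pytest and run on every change to the auth code, so the answer to "will a test catch it?" becomes yes by construction.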
Edge cases that actually matter
Empty states (no data yet), error states (network fails, API returns garbage), permission states (what does a viewer see vs. an admin?), and boundary states (what happens at 0, at 1, and at 10,000?). Skip the theoretical edge cases. Test the ones real users will hit.
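The boundary states in particular are cheap to pin down in code. A sketch using a hypothetical `page_count` function, checking behavior at 0, at 1, and at 10,000:

```python
# Boundary-state checks for a hypothetical pagination helper.
# `page_count` is an illustrative stand-in, not a real library function.

def page_count(total_items: int, page_size: int = 50) -> int:
    """Number of pages needed to show total_items."""
    if total_items <= 0:
        return 0                          # empty state: no data yet
    return -(-total_items // page_size)   # ceiling division

assert page_count(0) == 0          # empty state
assert page_count(1) == 1          # smallest non-empty state
assert page_count(10_000) == 200   # large-but-realistic boundary
```

Three asserts cover the three boundaries real users will actually hit; the theoretical cases (negative counts, maxint) can wait until someone finds a way to produce them.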
QA is not a phase at the end
If you're only testing after development is "done," you've already failed. Review designs for testability. Ask during implementation: "How will we know this works?" Catch the assumptions early, not the bugs late.
Do's and Don'ts
Written with ❤️ by a human (still)