A checklist for pull requests

Nov 9, 2024

Why implement a PR checklist?

As a developer, it’s important to take the time to carefully review and test your code before submitting it for review. A pull request checklist is a useful tool to help ensure you have done everything necessary to make your code ready for review, and it embeds this as a repeatable, consistent process that minimizes bugs and incidents and improves overall quality.

It improves quality control

One of the main benefits of having a pull request checklist is that it helps to ensure that your code is of high quality. By following a checklist, you can make sure that you have thoroughly tested your changes and that they are working as intended. This can help to reduce the risk of introducing bugs or other issues into the codebase, which can save time and effort in the long run.

It makes the review more efficient

In addition to improving the quality of your code, a pull request checklist can also help to improve the efficiency of the review process. When you submit a pull request, it’s important to provide as much information as possible to help reviewers understand your changes. A checklist can help you to remember to include important details like updated documentation and a clear description of what your changes do. This can make it easier for reviewers to understand your code and to provide helpful feedback.

It encourages proactive communication

A pull request checklist can also help to encourage collaboration and communication within a team. By following a clear and structured process for submitting changes, you can help to ensure that your code is reviewed and tested in a consistent way. This can help to build trust and foster a sense of community among team members.

What to include in your PR checklist

Here is a copy of our PR checklist, which we implement inside GitHub as a pull request template. You can read more about how to do this here.
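As a concrete sketch, GitHub pre-fills new pull requests from a markdown file named `pull_request_template.md`, which it discovers in the repository root, in `docs/`, or in a `.github/` folder. The abbreviated template body below is illustrative; the full checklist follows in the next section:

```shell
# Create a pull request template that GitHub will pre-fill into new PRs.
# GitHub looks for pull_request_template.md in the repo root, docs/, or .github/.
mkdir -p .github
cat > .github/pull_request_template.md <<'EOF'
### Summary

<_Provide context of the change for the reviewer_>

### Risk Assessment

- [ ] This change is high risk
- [ ] This change is low risk
EOF
```

Once the file is committed to the default branch, every new pull request opened against the repository starts from this template.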

---

### Summary

<_Provide context of the change for the reviewer - just as if you were talking to them_>

### What's the worst thing that could happen?

- [ ] Compromise Customer Security (P0 Bug)

- [ ] Data Loss/Corruption (P0 Bug)

- [ ] Cause a system outage (P0 Bug)

- [ ] Break Existing Workflows (P0 Bug)

- [ ] Cause a deployment failure (P0 Bug)

- [ ] Cause a P1 Bug

- [ ] Cause a P2 Bug

- [ ] Cause a P3 Bug

- [ ] This is impossible for users to notice

- [ ] I don't know

### How well covered by automated tests is this change?

- [ ] Well Unit Tested

- [ ] Well Integration Tested

- [ ] Dev E2E Tests Have Passed

- [ ] Needs Manual Testing

- [ ] I don't know

### Risk Assessment

- [ ] 🔴 This change is high risk (and will be deployed with the on-call) 🔴

- [ ] This change is low risk (and I will deploy it myself once approved)

### Test Plan

<_Given all the above, describe the tests that are needed: what has already been performed in dev and what will be done for the prod release - either manual or automated_>

### Monitoring Plan

<_Describe what logs etc. you'll monitor at deploy time_>

### PR Readiness

- [ ] Jira Issue in PR Name

- [ ] I've thoroughly self-reviewed the code changes

- [ ] The code is in a pretty good state

- [ ] The code isn't in a great state but I've improved it

- [ ] The code is scary and I want to change as little as possible

### Reviewer Checklist

- [ ] Risk Assessment seems right

- [ ] Test & Monitoring Plans look appropriate

- [ ] Code quality looks good

- [ ] Automated Test coverage is appropriate

---

Here is a detailed explanation of each part of this checklist that we provide to our own developers.

Summary

The summary section is your moment to have a conversation with your reviewer through the PR. Imagine you're sitting at their desk, explaining what you've been working on - that's the tone you want to capture here. Your summary should paint a clear picture of the problem you're solving and why it matters. Share the context that led to these changes, explain any significant technical decisions you made along the way, and highlight specific areas where you'd particularly value their attention. This context helps reviewers understand not just what you changed, but why you changed it in this particular way.

What's the Worst Thing That Could Happen?

This section might seem pessimistic at first glance, but it serves a crucial purpose: forcing us to think through potential failure modes before they occur. By explicitly considering worst-case scenarios, we can better assess risks and put appropriate safeguards in place.

The checklist breaks down potential impacts by severity, starting with our P0 (Critical) concerns. A security compromise could expose customer data or create vulnerabilities in our system. Data loss or corruption could permanently impact our customers' work. System outages could bring down critical services, while broken workflows might disrupt customer processes. Deployment failures could impact our ability to ship other critical changes.

Moving down in severity, P1 bugs represent serious issues that significantly impact functionality but don't quite reach P0 status. P2 bugs cause moderate issues but usually have workarounds available. P3 bugs create minor inconveniences without blocking core functionality. Sometimes changes might be impossible for users to notice, which is its own important category to acknowledge. Finally, "I don't know" is a valid and important option - acknowledging uncertainty is better than making assumptions.

Test Coverage Assessment

Understanding the testing landscape for a change helps reviewers gauge its reliability and identify potential gaps. Well-unit-tested code indicates comprehensive coverage of core logic and edge cases at the function and class level. Integration testing shows how the changes work in concert with other system components. When development end-to-end tests have passed, it provides confidence that complete workflows remain functional. Sometimes manual testing is necessary, particularly for user interface changes or complex scenarios that are difficult to automate. Again, acknowledging when you don't know the testing status is valuable - it opens the door for important discussions about testing strategy.

Risk Assessment

Our risk assessment boils down to a critical decision that affects how we handle deployment. High-risk changes require careful coordination with our on-call teams. These typically include database schema modifications, API contract changes, critical path updates, or complex data migrations. These changes often have broader system implications and need extra safeguards during deployment.

Low-risk changes, on the other hand, can be safely deployed by the developer after approval. These might include UI enhancements, documentation updates, well-contained feature additions, or bug fixes with clearly defined scope. The key is being honest about which category your change falls into - underestimating risk can lead to preventable incidents.

Test Plan

Your test plan should directly address the risks you've identified above. Instead of generic testing notes, provide specific scenarios that need verification. Explain how each scenario will be tested, whether through automated means or manual verification. If special environments or conditions are needed, detail those requirements. Include steps for verifying rollback procedures work correctly - this is especially crucial for high-risk changes. Finally, outline any post-deployment validation needed to ensure the change is working as intended in production.

Monitoring Plan

An effective monitoring plan is your early warning system during and after deployment. Specify exactly which metrics you'll watch, including the queries you'll use to detect potential issues in logs. Define the alert thresholds that would indicate problems and identify which dashboards will help you maintain visibility into the system's health. Also consider how long you'll need to maintain enhanced monitoring - some changes require extended observation periods to catch edge cases or periodic issues.
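A monitoring plan can be as simple as a scripted check run during the deploy window. The sketch below is hypothetical: the log path, error patterns, and alert threshold are assumptions you would replace with your own log sources and dashboards:

```shell
# Hypothetical deploy-time log check; LOG_FILE, PATTERNS, and THRESHOLD
# are placeholder assumptions - substitute your own sources and limits.
LOG_FILE="/var/log/app/service.log"
PATTERNS="ERROR|Exception| 5[0-9][0-9] "
THRESHOLD=10

# Count error-level lines in the most recent log output; a missing
# file simply yields a count of zero.
count=$(tail -n 1000 "$LOG_FILE" 2>/dev/null | grep -cE "$PATTERNS")

if [ "$count" -gt "$THRESHOLD" ]; then
  echo "ALERT: $count error lines seen - consider rolling back"
else
  echo "OK: $count error lines seen"
fi
```

The same thresholds belong in your alerting system; a script like this is only a stopgap for the enhanced observation period right after deploy.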

PR Readiness

The PR readiness section helps set clear expectations about code quality and state. Including a Jira issue in the PR name isn't just about tracking - it provides crucial context about the change's purpose and scope. When you indicate you've thoroughly self-reviewed the code, you're communicating that you've taken time to catch issues before involving others. The state of the code matters too - whether it's in good shape, improved but not perfect, or minimally changed due to complexity. This honesty helps reviewers understand what they're getting into and focus their attention appropriately.

Reviewer Checklist

Reviewers have their own critical role in this process. They need to verify that the risk assessment aligns with the actual changes being made. They should evaluate whether the test and monitoring plans provide sufficient coverage given the identified risks. Code quality needs to meet our established standards, and test coverage should be appropriate for the scope of changes. This isn't about ticking boxes - it's about applying experienced judgment to ensure we're shipping changes responsibly.

What Makes a Good PR Review?

A truly effective PR review starts with completeness - taking time to thoughtfully consider and fill out each section of the checklist. Honesty about risks and limitations is crucial; trying to minimize concerns often leads to problems later. Clarity in your explanations helps reviewers focus their attention where it's most needed. Your test and monitoring plans should be specific and actionable, not vague gestures at potential approaches. Most importantly, provide enough context for reviewers to make informed decisions about your changes.

Common Pitfalls to Avoid

Experience has taught us to watch out for several common issues in PR reviews. Developers sometimes downplay risks in hopes of speeding up the review process, but this usually backfires by either causing incidents or requiring multiple review rounds. Test plans that don't specifically address identified risks leave too much uncertainty. When dealing with high-risk changes, vague monitoring details make it harder to catch problems early. Rushing through self-review often means pushing that work onto reviewers, which slows down the overall process. Perhaps most importantly, failing to consider edge cases in risk assessment can lead to unexpected problems in production.

Remember that a PR checklist isn't just another layer of bureaucracy - it's a powerful tool that helps us ship changes confidently while protecting our systems and customers. Taking time to fill it out thoughtfully often saves time in the long run by preventing issues and facilitating faster, more focused reviews. It represents lessons learned from past incidents and helps ensure we don't repeat those mistakes. We think of it as a conversation with your future self and your colleagues about what could go wrong and how we're preventing it.