Improve Code Review Participation and Responsiveness

Use case: share this guide with any roles that have an interest in code review participation and responsiveness.

Improving participation and responsiveness in reviews requires more than simply reducing merge time—it involves understanding whether reviews are happening, who is involved, and how thorough the process is.

This guide helps you identify teams with review bottlenecks or poor engagement, distinguish between healthy and unhealthy review speeds, and take action based on real, observable data.


Step 1: Start from the Insights Page

➡️ Where: Insights → Engineering Performance

Begin by adding the Average Time to Merge from Create metric. This gives you a top-level view of how long it takes a PR to go from creation to merge.

➡️ What to look for:

  • Teams or projects with significantly higher or lower average times than the company average.

📘

These are your outliers—either too slow (potential bottleneck) or too fast (potentially lacking proper review).
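
If you also export raw PR data (for example, as a CSV or through your Git provider's API), you can sanity-check this outlier view offline. The sketch below is illustrative only: the prs records, their field names, and the 2x threshold are assumptions, not the product's own calculation.

```python
from datetime import datetime
from statistics import mean

# Hypothetical exported PR records: team, creation time, merge time.
prs = [
    {"team": "payments", "created_at": "2024-05-01T09:00", "merged_at": "2024-05-03T15:00"},
    {"team": "payments", "created_at": "2024-05-02T10:00", "merged_at": "2024-05-06T11:00"},
    {"team": "platform", "created_at": "2024-05-01T08:00", "merged_at": "2024-05-01T09:30"},
    {"team": "platform", "created_at": "2024-05-02T13:00", "merged_at": "2024-05-02T13:20"},
    {"team": "search",   "created_at": "2024-05-01T09:00", "merged_at": "2024-05-02T05:00"},
    {"team": "search",   "created_at": "2024-05-03T09:00", "merged_at": "2024-05-04T13:00"},
]

def hours_to_merge(pr):
    created = datetime.fromisoformat(pr["created_at"])
    merged = datetime.fromisoformat(pr["merged_at"])
    return (merged - created).total_seconds() / 3600

# Average Time to Merge from Create, per team and company-wide.
by_team = {}
for pr in prs:
    by_team.setdefault(pr["team"], []).append(hours_to_merge(pr))
company_avg = mean(hours_to_merge(pr) for pr in prs)

# Flag teams far from the company average in either direction (the 2x factor is arbitrary).
for team, hours in by_team.items():
    team_avg = mean(hours)
    if team_avg > 2 * company_avg:
        status = "outlier: slow (possible bottleneck)"
    elif team_avg < company_avg / 2:
        status = "outlier: fast (check review depth)"
    else:
        status = "within the normal range"
    print(f"{team}: {team_avg:.1f}h vs company {company_avg:.1f}h -> {status}")
```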


Step 2: Drill Down by Team or Project

➡️ Average Time to Merge from Create can be split into two phases by adding new metrics in Insights:

  • Average time to first review (Pickup Time)
  • Average time to merge from review (Review Time)

➡️ Use the drill-down feature to investigate abnormal values. Focus on the teams with either:

  • Long times to merge (possible delays)
  • Very short times (possible skipped reviews)

➡️ This helps isolate where the delay is happening (a code sketch after the list below shows one way to compute the split from raw PR timestamps).

  • If the delay is in Average time to merge from review, it might be due to:
    • Waiting on external approvals (e.g., QA)
    • Repository automations (branch protections, CI/CD gates)
  • If the delay is in Average time to first review, the team may be slow to respond to review requests or not prioritizing them.
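
To make the diagnosis concrete, here is a minimal sketch of how the two phases could be derived from raw PR timestamps. It assumes a hypothetical record per PR with created_at, first_review_at, and merged_at fields (for example, exported from your Git provider); it is not the tool's internal calculation.

```python
from datetime import datetime

def hours_between(start_iso, end_iso):
    """Elapsed hours between two ISO-8601 timestamps."""
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    return (end - start).total_seconds() / 3600

def split_merge_timeline(pr):
    """Split a PR's lifetime into its two phases.

    pickup_time: creation -> first review (review responsiveness)
    review_time: first review -> merge (approval workflow, gates, rework)
    """
    pickup = hours_between(pr["created_at"], pr["first_review_at"])
    review = hours_between(pr["first_review_at"], pr["merged_at"])
    return pickup, review

# Hypothetical PR: first reviewed two days after creation, merged an hour later.
pr = {
    "created_at": "2024-05-01T09:00",
    "first_review_at": "2024-05-03T10:00",
    "merged_at": "2024-05-03T11:00",
}

pickup, review = split_merge_timeline(pr)
dominant = "pickup (slow to start reviewing)" if pickup > review else "review (slow to approve and merge)"
print(f"Pickup: {pickup:.1f}h, Review: {review:.1f}h -> delay is mostly in {dominant}")
```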

Step 3: Check Review Collaboration and KSI

➡️ Where: PR Insights → Collaboration Tab

Use this view to understand how reviews are distributed among team members.

➡️ What to look for:

  • Knowledge Sharing Index (KSI): High KSI reflects strong team-wide participation.
  • Submitter-reviewer graph: A healthy team will show many cross-connections, not just one or two people doing all the reviewing.

📘

Poor collaboration can explain delays, while strong collaboration often aligns with steady review flow.
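
If you want to inspect the same pattern from raw review data, a simple submitter-reviewer graph can be built outside the tool. The sketch below is a rough illustration: the reviews records and the concentration check are assumptions, not the formula behind KSI.

```python
from collections import Counter, defaultdict

# Hypothetical review events: who submitted the PR and who reviewed it.
reviews = [
    {"submitter": "ana", "reviewer": "ben"},
    {"submitter": "ana", "reviewer": "carla"},
    {"submitter": "ben", "reviewer": "ana"},
    {"submitter": "carla", "reviewer": "ben"},
    {"submitter": "dave", "reviewer": "ben"},
]

# Submitter -> set of distinct reviewers (the cross-connections in the graph).
graph = defaultdict(set)
for r in reviews:
    graph[r["submitter"]].add(r["reviewer"])

print("Submitter -> reviewers:")
for submitter, reviewers in graph.items():
    print(f"  {submitter}: {sorted(reviewers)}")

# Rough concentration check: if one person handles most reviews,
# knowledge sharing is likely low even when reviews are fast.
review_load = Counter(r["reviewer"] for r in reviews)
top_reviewer, top_count = review_load.most_common(1)[0]
share = top_count / len(reviews)
print(f"{top_reviewer} performs {share:.0%} of all reviews"
      + (" (review load is concentrated)" if share > 0.5 else ""))
```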


Step 4: Validate Review Thoroughness

➡️ Where: Merge Quality

Fast merges can be misleading if no reviews are happening at all.

➡️ Check for the following (a code sketch at the end of this step shows one way to flag each from exported PR data):

  • Unreviewed PRs (merged without any review)
  • Rubber-stamped approvals (approvals given with no comments and almost no delay)
  • Follow-on commits after approval, indicating unresolved feedback or missed issues

🚩

If a team has fast merge times and high rates of rubber stamping or unreviewed PRs, that’s a red flag.

👍

On the other hand, teams with longer merge times but no rubber stamping and high review participation likely have a deliberate, thorough process—which can be a strength, not a weakness.
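
As referenced above, these signals can be approximated from exported PR data. The sketch below uses hypothetical per-PR fields and an arbitrary 10-minute threshold for rubber stamping; the product's own Merge Quality definitions may differ.

```python
# Hypothetical merge-quality records, one per merged PR.
merged_prs = [
    {"id": 101, "review_count": 0, "approvals": 0, "approval_comments": 0,
     "minutes_to_approval": None, "commits_after_approval": 0},
    {"id": 102, "review_count": 1, "approvals": 1, "approval_comments": 0,
     "minutes_to_approval": 3, "commits_after_approval": 0},
    {"id": 103, "review_count": 2, "approvals": 1, "approval_comments": 4,
     "minutes_to_approval": 180, "commits_after_approval": 2},
]

RUBBER_STAMP_MINUTES = 10  # arbitrary: approved with no comments, almost immediately

for pr in merged_prs:
    flags = []
    if pr["review_count"] == 0:
        flags.append("unreviewed PR")
    elif (pr["approvals"] > 0
          and pr["approval_comments"] == 0
          and pr["minutes_to_approval"] is not None
          and pr["minutes_to_approval"] <= RUBBER_STAMP_MINUTES):
        flags.append("possible rubber-stamp approval")
    if pr["commits_after_approval"] > 0:
        flags.append("follow-on commits after approval")
    print(f"PR #{pr['id']}: {', '.join(flags) if flags else 'looks healthy'}")
```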


Step 5: Monitor Both Ends of the Spectrum

Teams at both ends of the time-to-merge spectrum need evaluation:

  • For high-time teams: Are they being blocked or just slow to review?
  • For low-time teams: Are they skipping reviews to move faster?

You want to find the balance between speed and depth of code reviews.


Step 6: Plan for Alerting and Proactive Monitoring

➡️ To improve responsiveness:

  • Consider adding alerts when a PR is open without review for X hours (a sketch of such a check appears below).
  • Until Slack, Teams, and Webex alerts are available, surface these insights in the Project Health at a Glimpse feature.

These nudges help developers stay on top of pending reviews without needing to check dashboards manually.
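
As a stopgap, the check behind such an alert can run outside the product. The sketch below is a hedged example against the GitHub REST API; the repository name, the GITHUB_TOKEN environment variable, and the 24-hour threshold are placeholders for the "X hours" above. It prints PRs that have waited too long with no review; a scheduled job would replace the print with whatever notification channel you use.

```python
from datetime import datetime, timezone
import os
import requests

OWNER, REPO = "your-org", "your-repo"   # placeholders
THRESHOLD_HOURS = 24                    # the "X hours" from the step above
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
API = "https://api.github.com"

def hours_since(iso_ts):
    created = datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - created).total_seconds() / 3600

# List open PRs (pagination omitted for brevity).
open_prs = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls",
                        params={"state": "open"}, headers=HEADERS).json()

for pr in open_prs:
    reviews = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls/{pr['number']}/reviews",
                           headers=HEADERS).json()
    waiting = hours_since(pr["created_at"])
    if not reviews and waiting > THRESHOLD_HOURS:
        # Swap print for a chat webhook or ticket once alert channels are available.
        print(f"PR #{pr['number']} '{pr['title']}' has waited {waiting:.0f}h with no review: {pr['html_url']}")
```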


Key Takeaways

  • Use Time to Merge from Create as a gateway metric, then drill down to find context.
  • Split the merge timeline to detect whether the problem is in review responsiveness or approval workflow.
  • Validate with KSI, review distribution, and merge quality metrics like unreviewed PRs or rubber stamping.
  • Review participation is as important as review speed—optimize for both.