Pull Request Insights

Identify the bottlenecks in your Pull Request cycles over the course of the sprint. Find outliers, and visualize high-level team dynamics and the underlying activities that can contribute to those dynamics.

Overview

The Overview offers a holistic view of all the PRs in the specified time-frame. It enables contributors and leaders to identify long-running Pull Requests, unreviewed Pull Requests that have been merged, and closed Pull Requests that have not been merged.

The Cycle Time indicates an organization's development velocity. It is the sum of four metrics: CODING, PICKUP, REVIEW, and DEPLOY. More information can be found here: Cycle Time
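As a rough illustration of how the four phases add up, here is a minimal sketch; the phase boundaries in the comments are common conventions, and the field names are hypothetical, not the product's API:

```python
from datetime import timedelta

# Hypothetical phase durations for a single PR. The boundary definitions
# in the comments are common conventions, not necessarily the exact ones
# used by the product.
phases = {
    "CODING": timedelta(hours=10),  # first commit -> PR opened
    "PICKUP": timedelta(hours=4),   # PR opened -> first review activity
    "REVIEW": timedelta(hours=6),   # first review activity -> merge
    "DEPLOY": timedelta(hours=2),   # merge -> release
}

cycle_time = sum(phases.values(), timedelta())
print(cycle_time)  # 22:00:00 -> a total Cycle Time of 22 hours
```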

🚧

The warning symbol near the PR number appears for PRs that have not been reviewed yet, PRs closed without a review, PRs merged without a review, or PRs that the user has ignored (either indefinitely or for a specific time-frame).

The bars on the Review Workflow graph represent how long each PR took to close (or, for still-open PRs, how long it has been open). They are color-coded as follows:

  • Open w/ review
  • Open w/o review
  • Merged w/ review
  • Merged w/o review
  • Closed
  • Ignored
  • A red dotted circle additionally marks High Risk PRs

📘

The bubbles inside the bars indicate follow-on commits, while the half bars indicate comments.


The PR Modal

Clicking on a bar opens a comprehensive modal with details about that particular PR.

The modal contains:

  • On the left, all PRs from that category, grouped by repository.
  • Branch for the PR.
  • Status of the PR.
  • Time to first comment.
  • Open/Closed dates.
  • Work level - bars get filled based on difficulty.
  • The Cycle Time of the PR.
  • Risk: High/Medium/Low.
  • Description of the PR - follow the best practices.
  • Commits: the commits linked to the PR, with an overview and a link to each commit.
  • Reviews: the reviewers of the PR are listed here.
  • Comments: all the comments for the PR.
  • Tickets: the tickets linked to the PR.
  • Deploys.

Collaboration

Sharing Index

Sharing Index Calculation: A Deeper Dive into Collaboration Measurement

The Sharing Index is a metric designed to assess how well your team collaborates on code reviews. It quantifies the distribution of reviews across your team members and considers factors that promote fair and active participation. A higher Sharing Index indicates a more collaborative environment.

How It Works:

I. Gathering Data:

  • We start by collecting information on all pull requests (PRs) merged within a specific time period.
  • For each PR, we record who submitted it (submitter) and who reviewed it (reviewers).

II. Calculating the Base Sharing Index:

  • We count how many reviews each submitter received.
  • Using these review counts, we calculate the Gini coefficient, a statistical measure of inequality. In this context, it tells us how evenly the reviews are distributed among submitters.
  • We subtract the Gini coefficient from 1 to get the base sharing index, a value between 0 (unequal distribution) and 1 (perfectly equal distribution).
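A minimal sketch of this base calculation, assuming review counts have already been collected per submitter (illustrative pure Python, not the product's implementation):

```python
def gini(counts):
    """Gini coefficient of non-negative counts:
    0 = perfectly equal distribution, values near 1 = highly unequal."""
    x = sorted(counts)
    n, total = len(x), sum(x)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form over the sorted values
    weighted = sum(i * v for i, v in enumerate(x, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Reviews received per submitter, e.g. alice: 6, bob: 5, carol: 1
base_sharing_index = 1.0 - gini([6, 5, 1])  # higher = more even
```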

III. Adjusting for Fairness:

  • Reviewer Participation: We want to encourage everyone who can review code to participate actively. We calculate the ratio of active reviewers (those who have done at least one review) to the total available reviewers. This ratio is then adjusted to a value between 0 (no participation) and 1 (full participation). This adjustment encourages a higher sharing index when more team members are actively reviewing.
  • Submitter Distribution: We aim for a balance where each submitter's code is reviewed by various team members. We analyze two aspects:
    1. Variety of Reviewers: We calculate the Gini coefficient based on the number of different reviewers each submitter had. A lower Gini coefficient means a more diverse set of reviewers for each submitter.
    2. Number of Reviews: We calculate the Gini coefficient based on the total number of reviews each submitter received. This ensures no submitter is overwhelmed with a disproportionate amount of reviews.
  • We average these two Gini coefficients and normalize the average to a value between 0 (uneven distribution) and 1 (even distribution): the averaged coefficient is subtracted from 1 and normalized by the total number of pull requests, and the result is capped between 0 and 1. This adjustment promotes a higher sharing index when reviewers are spread out more evenly across submitters.

📘

Why Normalize by Total Pull Requests?

This normalization step helps put the inequality in perspective. If a team has many PRs, a slight imbalance in reviewer distribution is less concerning than if they had only a few PRs. By dividing by the total number of PRs, we account for the scale of the review process.

IV. Final Sharing Index:

We multiply the base sharing index by both adjustment factors. This gives us the final Sharing Index, a value between 0 and 1.
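Putting steps I-IV together, here is a hedged sketch of the full calculation, reusing the gini() helper above. The data model and the exact normalization in step III are assumptions read off the description, not the product's code:

```python
from collections import Counter, defaultdict

def sharing_index(prs, total_reviewers):
    """prs: list of {"submitter": str, "reviewers": set} for merged PRs.
    total_reviewers: number of people available to review (assumed input)."""
    reviews_per_submitter = Counter()
    reviewer_variety = defaultdict(set)  # submitter -> distinct reviewers
    active_reviewers = set()

    for pr in prs:  # I. Gathering data
        s = pr["submitter"]
        reviews_per_submitter[s] += len(pr["reviewers"])
        reviewer_variety[s] |= pr["reviewers"]
        active_reviewers |= pr["reviewers"]

    # II. Base sharing index: 1 - Gini of review counts per submitter
    base = 1.0 - gini(reviews_per_submitter.values())

    # III-a. Reviewer participation ratio, 0 (none) to 1 (everyone)
    participation = (len(active_reviewers) / total_reviewers
                     if total_reviewers else 0.0)

    # III-b. Submitter distribution: average the two Ginis, soften the
    # inequality by the PR count, invert, and cap to [0, 1]. (One plausible
    # reading of the normalization described above.)
    avg_gini = (gini([len(v) for v in reviewer_variety.values()])
                + gini(reviews_per_submitter.values())) / 2
    distribution = min(max(1.0 - avg_gini / max(len(prs), 1), 0.0), 1.0)

    # IV. Final Sharing Index
    return base * participation * distribution
```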

What the Sharing Index Means:

  • Closer to 1: Your team has a highly collaborative code review process. Reviews are well-distributed, with active participation from all available reviewers, and a good balance of different reviewers for each submitter.
  • Closer to 0: Your team's code review process might need some improvement. Perhaps a few people are doing most of the reviews, or some submitters consistently get feedback from the same small group of reviewers.

The Goal:

The Sharing Index is a tool to help you understand and improve your team's code review practices. By striving for a higher Sharing Index, you can foster a culture of collaboration, knowledge sharing, and high-quality code.


Reviews

Here you can see the overall trend of PR reviews for the selected time period.


Submitter & Reviewer Metrics

Here you can view information regarding contributors who submit pull requests and those who are assigned to review PRs.

The Submitter Metrics report has 4 views you can switch between:

  • Responsiveness: the average number of hours it takes to respond to a comment with either another comment or a code revision.
  • Comments addressed: the percentage of reviewer comments that were responded to with a comment or a code revision.
  • Receptiveness: the ratio of follow-on commits. [NOTE: Receptiveness is a "Goldilocks" metric - you would never expect it to reach 100%, and if it did, that would indicate a fairly unhealthy dynamic where every single comment led to a change.]
  • Unreviewed PRs: the percentage of PRs submitted that had no comments (see the sketch after this list).
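As an illustration, the two ratio-style views reduce to simple arithmetic. This is a minimal sketch assuming a hypothetical per-PR record of comment and follow-on-commit counts, and interpreting Receptiveness as follow-on commits per reviewer comment, which is an assumption:

```python
def unreviewed_pr_pct(prs):
    """Percentage of submitted PRs that received no comments."""
    if not prs:
        return 0.0
    return 100.0 * sum(1 for pr in prs if pr["comment_count"] == 0) / len(prs)

def receptiveness(prs):
    """Follow-on commits per reviewer comment (assumed interpretation).
    A 'Goldilocks' metric: neither 0 nor 1 is the target."""
    comments = sum(pr["comment_count"] for pr in prs)
    follow_ons = sum(pr["follow_on_commits"] for pr in prs)
    return follow_ons / comments if comments else 0.0
```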

The Reviewer Metrics report also has 4 views you can switch between:

  • Reaction Time: the average number of hours it took to respond to a comment.
  • Involvement: the percentage of PRs a reviewer participated in. [NOTE: Involvement is a highly context-dependent metric. "Higher" is not necessarily better, as it can point to people being overly involved in the review process, but there are certain situations where you would expect this metric to be very high.]
  • Influence: the ratio of follow-on commits to comments made in PRs.
  • Review Coverage: the percentage of PRs reviewed (see the sketch after this list).
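Involvement and Review Coverage are likewise simple percentages over the PR set; a sketch under the same hypothetical schema:

```python
def involvement_pct(prs, reviewer):
    """Percentage of PRs in which the given reviewer participated."""
    if not prs:
        return 0.0
    return 100.0 * sum(1 for pr in prs if reviewer in pr["reviewers"]) / len(prs)

def review_coverage_pct(prs):
    """Percentage of PRs that received at least one review."""
    if not prs:
        return 0.0
    return 100.0 * sum(1 for pr in prs if pr["reviewers"]) / len(prs)
```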

Collaboration Map

The Collaboration Map shows a map of code collaboration, indicating which contributors reviewed whose pull requests.

  • If you hover over a contributor's name in the left column, you will see who reviewed their PRs.
  • If you hover over a contributor's name in the right column, you will see whose pull requests they reviewed.
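Conceptually, the map is a submitter-to-reviewer adjacency; a small illustrative example (the names and data are made up):

```python
from collections import defaultdict

prs = [  # hypothetical merged-PR records
    {"submitter": "alice", "reviewers": {"bob", "dana"}},
    {"submitter": "bob",   "reviewers": {"alice"}},
    {"submitter": "carol", "reviewers": {"dana"}},
]

# submitter -> reviewer -> number of that submitter's PRs they reviewed
collab = defaultdict(lambda: defaultdict(int))
for pr in prs:
    for reviewer in pr["reviewers"]:
        collab[pr["submitter"]][reviewer] += 1

# Left-column hover: who reviewed alice's PRs?
print(dict(collab["alice"]))  # {'bob': 1, 'dana': 1}
# Right-column hover: whose PRs did dana review?
print({s: m["dana"] for s, m in collab.items() if "dana" in m})  # {'alice': 1, 'carol': 1}
```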

Resolution

The Pull Request Resolution feature can help engineering leaders identify bottlenecks in the Pull Request cycle and optimize the review process.

Closed PRs

Every circle in the Closed PRs graph represents a Pull Request, and its position indicates how much time it took to be resolved.

Once you click on a circle, a modal will appear, containing more details about the specific Pull Request.


Close Metrics

The Close Metrics heat-map shows the number of PRs considered (x-axis) for each of the metrics (y-axis).

Each metric's description can be found by hovering over the "?" symbol next to it.

The Close Metrics feature focuses on six core Pull Request cycle metrics to identify bottlenecks over the course of the sprint:

  • Time to Resolve: A distribution of how many hours it takes to resolve a Pull Request.
  • Time to First Comment: A distribution of the number of hours between when a Pull Request is opened and when the first Reviewer comments.
  • Follow-on Commits: A distribution of the follow-on commits made on Pull Requests once they are ready for review.
  • Reviewers: A distribution of the number of unique reviewers per Pull Request.
  • Reviewer Comments: A distribution of the number of reviewer comments per Pull Request.
  • Comments: A distribution of the number of comments per unique reviewer.
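Each row of the heat-map is essentially a histogram of one of these quantities; here is a minimal sketch with illustrative bucket edges and made-up data:

```python
def bucketize(values, edges):
    """Count how many values fall into each bucket; `edges` are the upper
    bounds of every bucket except the last, which is open-ended."""
    counts = [0] * (len(edges) + 1)
    for v in values:
        for i, edge in enumerate(edges):
            if v <= edge:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # beyond the last edge
    return counts

# Time to Resolve, in hours, for a set of closed PRs (made-up data)
hours_to_resolve = [2, 5, 9, 30, 30, 75, 200]
print(bucketize(hours_to_resolve, edges=[8, 24, 72]))  # [2, 1, 2, 2]
```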

Clicking on a metric square from the heat-map will open a modal with all the PRs included in that specific metric. Here you can view all the details related to the PRs.