Impact measures the size of code changes in a more nuanced way than raw lines of code. Impact attempts to answer the question: "Roughly how much cognitive load did the engineer carry when implementing these changes?"

Impact is a measure of work size that takes the following into account (an illustrative sketch follows the list):

  • What percentage of the work is edits to old code
  • The surface area of the change (think 'number of edit locations')
  • The number of files affected
  • The severity of changes when old code is modified
  • How this change compares to others from the project history
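
Waydev's actual formula is proprietary, but a minimal sketch can show how signals like these might be combined into a single score. Everything below (field names, weights, thresholds) is invented for illustration and is not Waydev's algorithm:

```python
from dataclasses import dataclass

@dataclass
class ChangeStats:
    """Hypothetical per-change statistics; the field names are illustrative only."""
    edited_old_lines: int       # lines that modify pre-existing code
    new_lines: int              # brand-new lines
    edit_locations: int         # distinct hunks touched (surface area)
    files_affected: int
    severity: float             # 0..1, how deep the edits to old code are
    project_median_size: float  # typical change size in this project's history

def toy_impact(c: ChangeStats) -> float:
    """Invented scoring function combining the factors listed above."""
    total = c.edited_old_lines + c.new_lines
    if total == 0:
        return 0.0
    old_ratio = c.edited_old_lines / total                    # share of work editing old code
    spread = c.edit_locations + c.files_affected              # surface area of the change
    relative_size = total / max(c.project_median_size, 1.0)   # compared to project history
    return relative_size * (1 + old_ratio * c.severity) * (1 + 0.1 * spread)

print(round(toy_impact(ChangeStats(120, 80, 9, 4, 0.6, 150.0)), 2))  # ~4.17
```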

The proprietary algorithm behind Impact is similar in spirit to Google's PageRank algorithm. It combines multiple data points, which we refine on a monthly basis, to provide a metric that translates engineers' output into both business value and cognitive load.

New Work is brand-new code that does not replace other code.

Legacy Refactor is the process of paying down "technical debt", and it is traditionally very difficult to see. New feature development often involves reworking old code, so these activities are not as clear-cut as they might seem in Scrum meetings. As code bases age, some percentage of developer attention is required to maintain the code and keep things current.

The challenge is that team leads need to properly balance this kind of work with creating new features: it’s bad to have high technical debt, but it’s even worse to have a stagnant product. This balancing act is not something that should be done in the dark, particularly when it’s vital to the success of the whole company.

Objectively tracking the percentage of time engineers spend on new features vs. application maintenance helps maintain a proper balance of forward progress and long-term code-base stability.

Help Others describes how much an engineer is replacing another engineer's recent code (less than three weeks old).

Churn is when a developer rewrites their own code shortly after it has been checked in (less than three weeks old). A certain amount of Churn should be expected from every developer.

Unusual spikes in Churn can be an indicator that an engineer is stuck. High Churn may also be an indication of another problem, such as inadequate specs. Knowing immediately when your team experiences churn spikes helps you have timely conversations to surface any potential problems.
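
The four work types above (New Work, Legacy Refactor, Help Others, Churn) differ only in whose code is being replaced and how old it is. The toy classifier below applies the three-week threshold from these definitions; it is a sketch of the idea, not Waydev's detection logic:

```python
from datetime import datetime, timedelta
from typing import Optional

RECENT = timedelta(weeks=3)  # threshold used in the definitions above

def classify_work(author: str,
                  replaced_author: Optional[str],
                  replaced_committed_at: Optional[datetime],
                  now: datetime) -> str:
    """Toy classifier for a changed block of code (illustrative only)."""
    if replaced_author is None:
        return "New Work"                 # brand-new code that replaces nothing
    if now - replaced_committed_at >= RECENT:
        return "Legacy Refactor"          # reworking code older than three weeks
    if replaced_author == author:
        return "Churn"                    # rewriting your own recent code
    return "Help Others"                  # rewriting a teammate's recent code

now = datetime(2024, 6, 1)
print(classify_work("ana", None, None, now))                       # New Work
print(classify_work("ana", "ana", now - timedelta(days=5), now))   # Churn
print(classify_work("ana", "bob", now - timedelta(days=10), now))  # Help Others
print(classify_work("ana", "bob", now - timedelta(days=60), now))  # Legacy Refactor
```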

Risk is a measure of how likely it is that a particular commit will cause problems. Think of this as a pattern-matching engine, where Waydev is looking for anomalies that might cause problems.

Here are some of the questions we ask when looking at risk (a toy heuristic follows the list):

  • How big is this commit? 
  • Are the changes tightly grouped or spread throughout the code base? 
  • How serious are the edits being made — are they trivial edits or deeper, more severe changes to existing code?
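
Waydev's risk model itself is proprietary. As a rough illustration of how those three questions could drive a score, here is an invented heuristic (all thresholds and weights are made up):

```python
def toy_commit_risk(lines_changed: int, files_touched: int, severity: float,
                    typical_commit_size: float = 100.0) -> float:
    """Invented heuristic answering the three questions above.

    severity is assumed to range from 0 (trivial edits) to 1 (deep rewrites
    of existing code). This is not Waydev's actual model.
    """
    size_factor = lines_changed / typical_commit_size      # how big is the commit?
    spread_factor = 1 + 0.2 * max(files_touched - 1, 0)    # tightly grouped vs. spread out
    depth_factor = 1 + severity                            # trivial vs. severe edits
    return size_factor * spread_factor * depth_factor

print(round(toy_commit_risk(40, 1, 0.1), 2))    # small, focused, trivial -> low risk
print(round(toy_commit_risk(900, 12, 0.8), 2))  # large, scattered, deep -> high risk
```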

An Active Day is any day on which an engineer contributed code to the project.

Throughput represents the total amount of contributed code: new work, churn, help others, and refactored code.

Productive Throughput represents the portion of contributed code that is not churn.

Efficiency is the percentage of an engineer's contributed code that is productive, which generally involves balancing coding output against the code's longevity. Efficiency is independent of the amount of code written. The higher the efficiency rate, the longer that code is providing business value; a high churn rate reduces it.
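
Taken together, these definitions imply some simple arithmetic. The numbers below are hypothetical, but the relationships between Throughput, Productive Throughput, and Efficiency follow directly from the definitions above:

```python
# Hypothetical line counts for one engineer over a sprint.
new_work = 600
legacy_refactor = 250
help_others = 100
churn = 150

throughput = new_work + legacy_refactor + help_others + churn   # all contributed code
productive_throughput = throughput - churn                      # code without churn
efficiency = productive_throughput / throughput                 # share that is productive

print(throughput)             # 1100
print(productive_throughput)  # 950
print(f"{efficiency:.0%}")    # 86%
```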

Technical Debt is the amount of code refactoring done by the developer.

Commits represents the number of commits made by the developer.

Work Type represents the type of work an engineer is most focused on (New Work, Legacy Refactor, Help Others, and Churn).

tt100 is the time it takes for an engineer to create 100 productive lines of code (code without churn).
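
A worked example, assuming tt100 is derived from productive output per unit of coding time (the figures are hypothetical):

```python
# Hypothetical figures: 950 productive lines contributed over 38 hours of coding time.
productive_lines = 950
coding_hours = 38.0

tt100 = 100 * coding_hours / productive_lines  # time to produce 100 productive lines
print(f"{tt100:.1f} hours")                    # 4.0 hours
```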

PRs is the number of pull requests created by an engineer.

PRs Open is the number of open pull requests of an engineer.

PRs Closed is the number of closed pull requests of an engineer.

PRs Merged is the number of merged pull requests of an engineer.

PRs Merged Without Review is the number of pull requests merged without review by an engineer.

PR Comments Addressed is the number of comments addressed by an engineer in all pull requests.

PR Reviews Addressed is the number of reviews addressed by an engineer in all pull requests.

Code Review metrics

Submitter Metrics quantify how submitters respond to comments, engage in discussion, and incorporate suggestions (a computation sketch follows the list). Submitter metrics are:

  • Responsiveness is the average time it takes to respond to a comment with either another comment or a code revision;
  • Comments addressed is the percentage of Reviewer comments that were responded to with a comment or a code revision;
  • Receptiveness is the ratio of follow-on commits to comments. It's important to remember that Receptiveness is a 'goldilocks' metric: you'd never expect it to reach 100%, and if it did, that would indicate a fairly unhealthy dynamic where every single comment led to a change;
  • Unreviewed PRs is the percentage of PRs submitted that had no comments.
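
As a rough sketch, the submitter metrics above could be computed from a PR's comment and commit timeline along these lines (the event structure is hypothetical, not Waydev's data model):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class ReviewerComment:
    created_at: datetime
    responded_at: Optional[datetime]  # first follow-up comment or code revision, if any

def responsiveness(comments: List[ReviewerComment]) -> Optional[float]:
    """Average hours from a reviewer comment to the submitter's response."""
    deltas = [(c.responded_at - c.created_at).total_seconds() / 3600
              for c in comments if c.responded_at is not None]
    return sum(deltas) / len(deltas) if deltas else None

def comments_addressed(comments: List[ReviewerComment]) -> float:
    """Percentage of reviewer comments answered with a comment or code revision."""
    if not comments:
        return 0.0
    return 100 * sum(c.responded_at is not None for c in comments) / len(comments)

def receptiveness(follow_on_commits: int, comment_count: int) -> float:
    """Ratio of follow-on commits to comments (a 'goldilocks' metric)."""
    return follow_on_commits / comment_count if comment_count else 0.0

def unreviewed_prs(comment_counts_per_pr: List[int]) -> float:
    """Percentage of submitted PRs that received no comments."""
    if not comment_counts_per_pr:
        return 0.0
    return 100 * sum(n == 0 for n in comment_counts_per_pr) / len(comment_counts_per_pr)
```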

Reviewer Metrics provide a gauge for whether reviewers are providing thoughtful, timely feedback (a short example follows the list). Reviewer metrics are:

  • Reaction time is the average time it took to respond to a comment;
  • Involvement is the percentage of PRs a reviewer participated in. This is a highly context-dependent metric: at an individual or team level, "higher" is not necessarily better, as it can point to people being overly involved in the review process, but there are certain situations where you'd expect Involvement to be very high, sometimes from a particular person on the team and other times from a group that's working on a specific project;
  • Influence is the ratio of follow-on commits to comments made in PRs;
  • Review coverage represents the percentage of PRs reviewed.
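
For example, Involvement and Review coverage are both simple percentages over the PRs in a period; Reaction time and Influence are the reviewer-side analogues of Responsiveness and Receptiveness. A small, hypothetical illustration:

```python
# Hypothetical data: PR id -> set of reviewers who participated in it.
pr_reviewers = {
    101: {"ana", "bogdan"},
    102: {"ana"},
    103: set(),            # unreviewed PR
    104: {"bogdan"},
}

total_prs = len(pr_reviewers)
reviewed_prs = sum(1 for reviewers in pr_reviewers.values() if reviewers)

review_coverage = 100 * reviewed_prs / total_prs                               # % of PRs reviewed
ana_involvement = 100 * sum("ana" in r for r in pr_reviewers.values()) / total_prs

print(f"coverage={review_coverage:.0f}%, ana involvement={ana_involvement:.0f}%")
# coverage=75%, ana involvement=50%
```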

Comment metrics classify comments by length (a classification sketch follows the list):

  • Robust comments are comments that have a length over 200 characters;
  • Regular comments are comments that are between 100 and 200 characters;
  • Trivial comments are comments that have under 100 characters.
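
In code, this is a straightforward bucketing by character count (how the exact 100- and 200-character boundaries are assigned is an assumption here):

```python
def classify_comment(text: str) -> str:
    """Bucket a review comment by length, using the thresholds above."""
    length = len(text)
    if length > 200:
        return "Robust"
    if length >= 100:
        return "Regular"
    return "Trivial"

print(classify_comment("LGTM"))     # Trivial
print(classify_comment("x" * 150))  # Regular
print(classify_comment("x" * 250))  # Robust
```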

Sharing Index metrics are:

  • PRs is the total number of PRs that were reviewed;
  • Sharing Index measures how broadly information is being shared amongst a team by looking at who is reviewing whose PRs;
  • Active Reviewers is the count of active users who actually reviewed a PR in the selected time period;
  • Submitters is the total number of users who submitted a PR in the selected time period.

The PR Resolution report comprises six metrics (a computation sketch follows the list):

  • Time to Resolve is the time it takes to close a Pull Request;
  • Time to First Comment is the time between when a Pull Request is opened and the time the first engineer comments;
  • Follow-on Commits is the number of code revisions once a Pull Request is opened for review;
  • Reviewers is the number of reviewers per Pull Request;
  • Reviewer Comments is the number of reviewer comments per Pull Request;
  • Avg. Comments per Reviewer is the average number of comments per Reviewer.
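
For a single Pull Request, these six values could be derived from its timeline roughly as follows (the PullRequest structure below is hypothetical, not Waydev's data model):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class PullRequest:
    """Hypothetical PR timeline."""
    opened_at: datetime
    closed_at: Optional[datetime]
    commit_times: List[datetime]          # commits pushed on this PR
    comments: List[Tuple[str, datetime]]  # (reviewer, commented_at)

def resolution_metrics(pr: PullRequest) -> dict:
    reviewers = {who for who, _ in pr.comments}
    first_comment = min((when for _, when in pr.comments), default=None)
    return {
        "time_to_resolve": pr.closed_at - pr.opened_at if pr.closed_at else None,
        "time_to_first_comment": first_comment - pr.opened_at if first_comment else None,
        "follow_on_commits": sum(t > pr.opened_at for t in pr.commit_times),
        "reviewers": len(reviewers),
        "reviewer_comments": len(pr.comments),
        "avg_comments_per_reviewer": len(pr.comments) / len(reviewers) if reviewers else 0.0,
    }
```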

Pull Request Risk is a measure of how likely it is that a particular pull request will cause problems. Think of this as a pattern-matching engine, where Waydev is looking for anomalies that might cause problems.

Some of the data points for the Pull Request Risk include:

  • The number of commits;
  • The size of the commits;
  • The spread of the changes;
  • The depth of the changes.

PR Stats

  • Avg. Time to First Comment is the average time between when a Pull Request is opened and the first time an engineer comments;
  • Avg. Time to First Review is the average time between when a Pull Request is opened and the first time an engineer reviews the pull request;
  • Avg. Time Merge from Create is the average time from a Pull Request's creation to when it is merged;
  • Avg. Time Merge from First Commit is the average time from a Pull Request's first commit to when it is merged;
  • Avg. Time Merge from First Comment is the average time from a Pull Request's first comment to when it is merged;
  • Avg. Time Merge from First Review is the average time from a Pull Request's first review to when it is merged;
  • Avg. Time to Issue PR from First Commit is the average time from the first commit to creating a PR;
  • Merged Without Rebase is the total number of Pull Requests merged without a rebase;
  • Merged Without Review is the total number of Pull Requests merged without a review;
  • No. of Reviews Left is the total number of reviews across all Pull Requests;
  • No. of Comments Left is the total number of comments across all Pull Requests;
  • Without Comments is the total number of Pull Requests that have no comments;
  • Without Reviews is the total number of Pull Requests that have no reviews;
  • No. of Comments is the total number of comments in all Pull Requests;
  • No. of Reviews is the total number of reviews in all Pull Requests;
  • Merged is the total number of Merged Pull Requests;
  • Closed is the total number of Closed Pull Requests;
  • Open is the total number of Open Pull Requests;
  • Total is the total number of Pull Requests.
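
Most of these stats are counts or averages over the Pull Requests in the selected period. A small, hypothetical example for a few of them:

```python
from datetime import timedelta

# Hypothetical per-PR records: (state, time_to_first_comment, review_count)
prs = [
    ("merged", timedelta(hours=2), 1),
    ("merged", None, 0),               # merged without review or comments
    ("open",   timedelta(hours=5), 2),
    ("closed", timedelta(hours=1), 1),
]

merged = sum(1 for state, _, _ in prs if state == "merged")
open_prs = sum(1 for state, _, _ in prs if state == "open")
closed = sum(1 for state, _, _ in prs if state == "closed")
merged_without_review = sum(1 for state, _, reviews in prs
                            if state == "merged" and reviews == 0)
without_comments = sum(1 for _, delay, _ in prs if delay is None)

delays = [delay for _, delay, _ in prs if delay is not None]
avg_time_to_first_comment = sum(delays, timedelta()) / len(delays)

print(merged, open_prs, closed, merged_without_review, without_comments)  # 2 1 1 1 1
print(avg_time_to_first_comment)                                          # 2:40:00
```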