Impact

Impact measures the magnitude of code changes in a way that goes beyond raw lines of code. It attempts to answer the question: "Roughly how much cognitive load did the engineer carry when implementing these changes?"

Impact is a measure of work size that accounts for the following:

  • What percentage of the work is edits to old code
  • The surface area of the change (think 'number of edit locations')
  • The number of files affected
  • The severity of changes when old code is modified
  • How this change compares to others from the project history

The proprietary algorithm behind Impact is similar in spirit to Google’s PageRank algorithm. It combines multiple data points, which we refine on a monthly basis, to provide a metric that translates engineers’ output into both business value and cognitive load.
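
The exact algorithm is proprietary, but a minimal sketch of how such a score might combine the factors above could look like the following. All weights, field names, and the baseline scaling here are illustrative assumptions, not Waydev’s actual formula:

    # Hypothetical Impact-style score. Weights and factors are
    # illustrative assumptions only, not Waydev's proprietary algorithm.
    from dataclasses import dataclass

    @dataclass
    class ChangeStats:
        edited_old_lines: int   # lines that modify existing code
        new_lines: int          # brand-new lines
        edit_locations: int     # distinct edit locations (surface area)
        files_affected: int     # number of files touched
        severity: float         # 0..1, depth of the edits to old code

    def impact_score(c: ChangeStats, baseline: float) -> float:
        """Combine the factors into one score, scaled against the
        project's history (e.g. the median score of past commits)."""
        total = c.edited_old_lines + c.new_lines
        old_ratio = c.edited_old_lines / total if total else 0.0
        raw = (
            total
            + 2.0 * c.edit_locations
            + 1.5 * c.files_affected
            + 3.0 * c.severity * old_ratio * total
        )
        return raw / baseline if baseline else raw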

New Work

New Work is brand-new code that does not replace other code.

Legacy Refactor

Legacy Refactor is the process of paying down “technical debt”, which is traditionally very difficult to see. New feature development often involves reworking old code, so these activities are not as clear-cut as they might seem in Scrum meetings. As codebases age, some percentage of developer attention is required to maintain the code and keep things current.

The challenge is that team leads need to properly balance this kind of work with creating new features: it’s bad to have high technical debt, but it’s even worse to have a stagnant product. This balancing act is not something that should be done in the dark, particularly when it’s vital to the success of the whole company.

Objectively tracking the percentage of time engineers spend on new features vs. application maintenance helps maintain a proper balance between forward progress and long-term codebase stability.

Help Others

Help Others describes how much an engineer is replacing another engineer's recent code (less than 3 weeks old).

Churn

Churn is when a developer rewrites their own code shortly after it has been checked in (less than 3 weeks old). A certain amount of Churn should be expected from every developer.

Unusual spikes in Churn can be an indicator that an engineer is stuck. High Churn may also be an indication of another problem, such as inadequate specs. Knowing immediately when your team experiences Churn spikes helps you have timely conversations to surface any potential problems.
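
Taken together, the four work types above can be read as a per-change classification rule. Here is a simplified sketch; the 3-week threshold comes from the definitions above, while the function shape and field names are assumptions, not Waydev’s implementation:

    from datetime import timedelta

    RECENT = timedelta(weeks=3)  # the 3-week threshold from the definitions above

    def classify_change(replaces_old_code: bool, old_author: str,
                        current_author: str, old_commit_age: timedelta) -> str:
        """Classify one changed line into a work type. The `old_*` arguments
        describe the code being replaced (ignored for brand-new code)."""
        if not replaces_old_code:
            return "New Work"          # brand-new code, replaces nothing
        if old_commit_age >= RECENT:
            return "Legacy Refactor"   # reworking older code
        if old_author == current_author:
            return "Churn"             # rewriting one's own recent code
        return "Help Others"           # replacing a teammate's recent code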

Risk

Risk is a measure of how likely it is that a particular commit will cause problems. Think of this as a pattern-matching engine, where Waydev looks for anomalies that might signal trouble.

Here are some of the questions we ask when looking at risk:

  • How big is this commit? 
  • Are the changes tightly grouped or spread throughout the code base? 
  • How serious are the edits being made — are they trivial edits or deeper, more severe changes to existing code?
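
Waydev’s actual model is not public, but a toy heuristic built from those three questions might look like this (the thresholds and weights are illustrative assumptions):

    def risk_score(lines_changed: int, files_touched: int,
                   edit_locations: int, severity: float) -> float:
        """Higher score = more likely to cause problems (0..1)."""
        size = min(lines_changed / 500, 1.0)                      # how big?
        spread = min((files_touched + edit_locations) / 40, 1.0)  # how scattered?
        return 0.4 * size + 0.3 * spread + 0.3 * severity         # how severe?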

Active Day

An Active Day is any day on which an engineer contributed code to the project.

Throughput 

Throughput represents the total amount of code contributed across all four work types: New Work, Churn, Help Others, and Legacy Refactor.

Productive Throughput

Productive Throughput represents the proportion of Throughput that excludes Churn.

Efficiency 

Efficiency is the percentage of an engineer’s contributed code that’s productive, which generally involves balancing coding output against the code’s longevity. Efficiency is independent of the amount of code written. The higher the Efficiency rate, the longer that code provides business value. A high Churn rate reduces it.
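
A small worked example ties these three metrics together, assuming Productive Throughput is Throughput minus Churn and Efficiency is the productive share of Throughput (the line counts are made up):

    # Made-up line counts for one engineer over a period
    new_work, churn, help_others, refactor = 600, 100, 150, 150

    throughput = new_work + churn + help_others + refactor  # 1000 lines
    productive_throughput = throughput - churn              # 900 lines
    efficiency = productive_throughput / throughput         # 0.90 -> 90%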

Technical Debt

Technical Debt represents the amount of refactoring work done by the developer.

Commits 

Commits represents the number of commits made by the developer.

Work Type

Work Type represents the predominant type of work an engineer is focused on (New Work, Legacy Refactor, Help Others, or Churn).

Code Review metrics

Submitter Metrics quantify how submitters are responding to comments, engaging in discussion, and incorporating suggestions. Submitter metrics are: 

  • Responsiveness is the average time it takes to respond to a comment with either another comment or a code revision;
  • Comments addressed is the percentage of Reviewer comments that were responded to with a comment or a code revision;
  • Receptiveness is the ratio of follow-on commits to comments. It’s important to remember that Receptiveness is a ‘goldilocks’ metric: you’d never expect this metric to reach 100%, and if it did, that would be indicative of a fairly unhealthy dynamic where every single comment led to a change;
  • Unreviewed PRs is the percentage of PRs submitted that had no comments.
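
As a rough sketch of how two of these could be computed from PR events (the data shapes and function names are assumptions for illustration):

    from datetime import datetime

    def responsiveness(pairs: list[tuple[datetime, datetime]]) -> float:
        """Average hours from a reviewer comment to the submitter's next
        comment or code revision; `pairs` is (comment_time, response_time)."""
        hours = [(resp - com).total_seconds() / 3600 for com, resp in pairs]
        return sum(hours) / len(hours) if hours else 0.0

    def receptiveness(follow_on_commits: int, reviewer_comments: int) -> float:
        """Ratio of follow-on commits to reviewer comments; 100% would mean
        every single comment led to a change (the 'goldilocks' caveat above)."""
        return follow_on_commits / reviewer_comments if reviewer_comments else 0.0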

Reviewer Metrics provide a gauge for whether reviewers are providing thoughtful, timely feedback. Reviewer metrics are:

  • Reaction time is the average time it took to respond to a comment;
  • Involvement is the percentage of PRs a reviewer participated in. Note that this is a highly context-dependent metric: at an individual or team level, “higher” is not necessarily better, as it can point to people being overly involved in the review process. That said, there are certain situations where you’d expect Involvement to be very high, sometimes from a particular person on the team and other times from a group that’s working on a specific project;
  • Influence is the ratio of follow-on commits to comments made in PRs;
  • Review coverage represents the percentage of PRs reviewed.

Comment metrics are:

  • Robust comments are comments that have a length over 200 characters;
  • Regular comments are comments between 100 and 200 characters long;
  • Trivial comments are comments that have under 100 characters.
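
These thresholds translate directly into a simple classification rule, sketched here:

    def classify_comment(text: str) -> str:
        n = len(text)
        if n > 200:
            return "Robust"
        if n >= 100:
            return "Regular"   # 100-200 characters
        return "Trivial"       # under 100 characters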

Sharing Index metrics are:

  • PRs is the total number of PRs that were reviewed;
  • Sharing Index measures how broadly information is being shared amongst a team by looking at who is reviewing whose PRs;
  • Active Reviewers is the count of active users who actually reviewed a PR in the selected time period;
  • Submitters is the total number of users who submitted a PR in the selected time period.
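
Waydev doesn’t publish the Sharing Index formula; one plausible sketch measures how many of the possible reviewer/submitter pairs actually occurred in the period (purely an assumption for illustration):

    def sharing_index(review_pairs: set[tuple[str, str]],
                      reviewers: set[str], submitters: set[str]) -> float:
        """Fraction of possible reviewer->submitter pairs actually seen."""
        possible = {(r, s) for r in reviewers for s in submitters if r != s}
        return len(review_pairs & possible) / len(possible) if possible else 0.0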

There are six metrics that comprise the PR Resolution report:

  • Time to Resolve is the average time it takes to close a Pull Request;
  • Time to First Comment is the average time between when a Pull Request is opened and the time the first Reviewer comments;
  • Follow-on Commits is the average number of code revisions once a Pull Request is opened for review;
  • Reviewers is the average number of reviewers per Pull Request;
  • Reviewer Comments is the average number of reviewer comments per Pull Request;
  • Avg. Comments per Reviewer is the average number of comments per Reviewer.
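
Each of these is a simple average over the PRs in the selected period. For example, Time to Resolve might be computed like this (the PR data shape is an assumption):

    from datetime import datetime

    def time_to_resolve(prs: list[tuple[datetime, datetime]]) -> float:
        """Average hours from open to close; `prs` is (opened_at, closed_at)."""
        hours = [(closed - opened).total_seconds() / 3600
                 for opened, closed in prs]
        return sum(hours) / len(hours) if hours else 0.0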
