Software Engineering KPIs (Key Performance Indicators) are measurable values that indicate how engineering teams are progressing toward business objectives. Therefore, they need to be consistent, broad enough to account for everyone’s effort, and, most importantly, measurable.
Since they should represent the team’s or area’s work, it’s crucial to pick the right metrics to measure. Otherwise, the metrics are useless.
It’s a common mistake I’ve seen in the industry to measure productivity by the number of lines of code (LOC), the number of commits, or even the number of deploys. Don’t get me wrong, measuring the number of deploys may be a good fit, but it should fulfill a purpose (which may not relate to productivity).
Blog posts about KPIs usually pile up lots of metrics, but few correlate those metrics with real objectives. So, in this article, I tried something different: I selected 5 Engineering KPI metrics and then listed candidate objectives for each.
Time from Commit to Deploy
Finding the elapsed time from an engineer’s first commit in a branch to the moment that very same commit reaches production is the easiest way to measure the whole development flow. It’s easy because it can be automated by looking at the git commit history.
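As a rough sketch of the idea: once you have the two timestamps, the calculation is trivial. The function name below is illustrative; in practice the commit timestamp would come from `git log` and the deploy timestamp from your CI/CD pipeline.

```python
from datetime import datetime, timezone

def commit_to_deploy_hours(first_commit_at: datetime, deployed_at: datetime) -> float:
    """Elapsed hours between a branch's first commit and its deploy."""
    return (deployed_at - first_commit_at).total_seconds() / 3600

# Illustrative values; real timestamps would come from
# `git log --format=%cI` and the deploy pipeline's logs.
first_commit = datetime(2024, 5, 6, 9, 30, tzinfo=timezone.utc)
deploy = datetime(2024, 5, 8, 15, 30, tzinfo=timezone.utc)
print(commit_to_deploy_hours(first_commit, deploy))  # 54.0
```

Aggregating this number over many branches (median or percentiles, rather than mean) gives a picture of the development flow over time.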
Many managers look at User Story Lead Time instead, which measures the time from when a card is added to the “backlog” to when it reaches “done.”
That metric covers much more than engineering work, though. Just because a card is in the “backlog” column of a Kanban board doesn’t mean it’s ready for development. It may be missing requirements, layout specifications, or test instructions, for instance.
That’s why Lead Time gives you an overview of the product’s flow. As engineers, however, we are usually more concerned with improving the development flow, which starts in “doing” and ends in “done” (deployed to production).
So, this metric can relate to many objectives; here are a few:
- Reduce the time to market of new features
- Reduce waste and rework
- Reduce the Cost of Delay
- Maximize engineering efficiency
Deploy Frequency
The deployment step sits at the end of the flow, where the customer (be it an internal or an external one) perceives value. So Deploy Frequency is indeed related to productivity, as each release concludes a job.
However, the number of delivered features doesn’t map precisely to business objectives. The goals established for a quarter are usually more abstract, for instance, reducing the time to market of new features.
That said, instead of using Deploy Frequency to measure the team’s productivity, use it to help measure objectives such as:
- Reduce the time to market of new features
- Improve the responsiveness to failures and outages due to bugs
- Mitigate security issues
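For illustration, here is a minimal sketch of counting deploys per ISO week from a list of deploy dates. The data is made up; a real implementation would pull these dates from your deployment pipeline.

```python
from collections import Counter
from datetime import date

def deploys_per_week(deploy_dates: list[date]) -> dict[tuple[int, int], int]:
    """Count deploys per ISO (year, week) pair."""
    return dict(Counter(d.isocalendar()[:2] for d in deploy_dates))

# Hypothetical deploy dates: two in ISO week 19, one in week 20.
deploys = [date(2024, 5, 6), date(2024, 5, 7), date(2024, 5, 13)]
print(deploys_per_week(deploys))  # {(2024, 19): 2, (2024, 20): 1}
```

Grouping by ISO week avoids the ambiguity of months with partial weeks and makes quarter-over-quarter comparisons easier.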
Code Coverage
Code Coverage measures the portion of the codebase exercised by automated tests. The higher the coverage, the better.
It is an excellent indicator of the code’s quality. It can also report progress toward the following objectives:
- Increase the product/platform stability
- Reduce churn (in case there is evidence churn relates to a buggy product)
- Scale the technology area
- Reduce the time to market (as performing manual tests takes more time)
- Reduce costs in the long run
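Tools like coverage.py, SimpleCov, or JaCoCo report this metric for you, but conceptually it boils down to a simple ratio. The sketch below assumes line coverage:

```python
def coverage_percent(covered_lines: int, total_lines: int) -> float:
    """Line coverage: share of executable lines hit by the test suite."""
    if total_lines == 0:
        return 100.0  # an empty codebase is vacuously covered
    return round(100 * covered_lines / total_lines, 1)

print(coverage_percent(432, 540))  # 80.0
```

Note that the denominator matters: most tools count only executable lines, so two tools can report different percentages for the same suite.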
Pull Request metrics
Collaborators review their peers’ code before it gets merged into the main branch. The conversation ignited by this practice provides vital information for measuring collaboration and engagement.
Pull Request metrics are a set of measurements extracted from pull requests. Below I picked a few of them:
- Time to Review: how much time passes between opening and merging a pull request?
- Time to First Comment: how much time do pull requests take to receive the first comment?
- Number of Comments: how many comments do pull requests receive?
- Cross Team Collaboration: do teams review other teams’ pull requests?
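As an illustration, Time to Review can be computed from pull request records exported from your hosting service’s API. The dictionary shape below is a made-up example, not any provider’s actual payload:

```python
from datetime import datetime
from statistics import mean

def time_to_review_hours(prs: list[dict]) -> float:
    """Average hours between opening and merging, over merged PRs only."""
    deltas = [
        (pr["merged_at"] - pr["opened_at"]).total_seconds() / 3600
        for pr in prs
        if pr.get("merged_at") is not None  # skip open/closed-unmerged PRs
    ]
    return mean(deltas)

# Hypothetical PR records: reviewed in 6 and 10 hours respectively.
prs = [
    {"opened_at": datetime(2024, 5, 6, 9), "merged_at": datetime(2024, 5, 6, 15)},
    {"opened_at": datetime(2024, 5, 7, 10), "merged_at": datetime(2024, 5, 7, 20)},
]
print(time_to_review_hours(prs))  # 8.0
```

Time to First Comment and Number of Comments follow the same pattern, just swapping which fields of the record are compared.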
And here is a list of objectives pull request metrics can assist in measuring:
- Spread knowledge through the team
- Introduce a Code Ownership Culture
- Lessen the ramp-up curve for junior engineers
- Create a harmonious and ever-learning environment
Focus on urgent bugs
I’ve seen many companies implementing the objective of achieving “zero bugs” for the next quarter. The KPI would be the number of bugs found in production (by the team or by the end-user).
This approach is flawed. First, it’s nearly impossible to eliminate all bugs: we’re humans, and bugs happen. Second, we can’t compare a cosmetic bug with a bug preventing the user from paying for an order. Third, if you only measure the number of bugs, people tend not to register them; no bugs filed means goal achieved.
Measuring whether the focus is on critical bugs can prevent such misunderstandings. For that purpose, you can combine metrics: the number of urgent bugs (those more severe and impactful than an agreed level), the time the team took to fix them, and whether the fix was definitive.
Of course, you can add more metrics. I strongly suggest having complementary metrics to ensure that the team takes no shortcuts to mask metrics values.
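A sketch of combining these metrics, assuming each bug record carries a severity score, a fix time, and a reopened flag (all field names and the severity threshold are hypothetical):

```python
from statistics import mean

# Hypothetical bug records; severity at or above the agreed
# threshold marks a bug as urgent.
bugs = [
    {"severity": 4, "hours_to_fix": 3, "reopened": False},
    {"severity": 1, "hours_to_fix": 40, "reopened": False},  # cosmetic, ignored
    {"severity": 5, "hours_to_fix": 8, "reopened": True},
]

URGENT_THRESHOLD = 3  # agreed with the team beforehand

urgent = [b for b in bugs if b["severity"] >= URGENT_THRESHOLD]
mean_fix = mean(b["hours_to_fix"] for b in urgent)
definitive_rate = sum(not b["reopened"] for b in urgent) / len(urgent)
print(mean_fix, definitive_rate)  # 5.5 0.5
```

The reopened flag is the complementary metric here: it discourages rushing out shallow fixes just to keep the time-to-fix number low.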
Here are some objectives that a focus on urgent bugs can benefit:
- Improve the product value perceived by the end user
- Reduce cost by focusing on adding more features (instead of fixing cosmetic or banal bugs)
- Foster a harmonious and effective culture
- Foster a collaborative culture between engineers and Quality Assurance personnel
There are plenty of articles out there that list relevant Engineering KPIs with metrics. However, it’s common to see managers picking the wrong engineering metrics, which ends up driving undesired behavior in the team.
That’s why Engineering KPIs must align with business goals. KPIs give the team direction, and people will optimize for whatever is measured. So managers must wisely choose which KPI measurements make sense for the context of their team.
In practice, managers define KPIs for already-established objectives. In this article, I exercised the opposite direction: I listed possible objectives for each KPI metric.
The idea behind the article is to help you check whether a KPI measurement is a good fit or not.
This practice is particularly useful while reading an article that lists lots of Engineering KPI metrics. It helps me a lot, and I hope it can help you as well.