Feature Announcement: More Repository Metrics!

September 17, 2019 in Announcements, Code Quality

A few weeks ago, we announced a new feature: Organization-wide metrics. We made three charts available that can help you and your team raise questions and understand the impact of process changes across all your repositories.

Today, we are glad to announce that those three metrics are available for each repository!

You can find this page by clicking on the Stats link. It sits in the header of each repository page, right next to Hotspots and Quick Fixes.

We are excited to deliver more repository-specific metrics because we understand that contribution behavior may vary from repo to repo. The ability to drill down helps you and your team see how each repository impacts the company as a whole.

Development dynamics differ among repositories for many reasons, including their importance, size or age, programming language, and development phase. For instance, pull requests in a legacy project are usually not as dynamic as in a recently started one. They also tend to be larger and more complex, which directly impacts lead time.

Pull request size may also depend on the repository’s role. Purely front-end or back-end repositories tend to have smaller pull requests that are faster to approve than those of monolithic full-stack ones. If you have multiple applications in the same repository, check out our subapps feature.

That’s why those graphs matter. They’re crucial for understanding context-specific data and help you improve your process and code quality. You can find them via the “Stats” link in the header of any repository page.

In case you are not sure what the charts are useful for, I recommend reading our previous announcement. However, if you are in a hurry, here is a summary:

  • Pull Requests: this chart shows the dispersion of Lead-Time for pull requests. It’s a great tool for understanding how many days pull requests are taking to be closed or merged.
  • Lead Time Histogram: Lead-Time is the number of days a pull request remains open, regardless of whether it was eventually closed or merged.
  • Merged Pull Requests: this chart shows how many pull requests were closed or merged on a specific day. It is especially useful for understanding the impact of high lead times. As you decrease your lead time, more pull requests are closed or merged per day.

Feedback appreciated

We are constantly polishing our product, so you can expect more insights about your productivity, code quality, and process health in the future.

For now, I’d like to ask: are these charts useful? Feedback is always appreciated. Don’t hesitate to tell us your thoughts!

Start a 14-day trial now and check how your repositories and subapps are doing!

Feature Announcement: We support Swiftlint!

September 10, 2019 in Announcements, Swift

Many companies are adopting Swift, migrating their codebases or starting new projects in the language. As with development for any other platform, it is essential to pay attention to good practices and style conventions. That’s why we are glad to announce that SourceLevel now runs Swiftlint as one of our engines!

It is crucial to keep your code consistent and maintainable, whether you are a lone programmer or part of a team. If you care about your code, Swiftlint can really help you out.

Folks from Realm maintain this linter, and it is open-source, just like the other linters we run. You can learn about its options and parameters in Swiftlint’s README.md and configure them in your already familiar .sourcelevel.yml file.

Feedback appreciated

Feedback is always appreciated. If you have anything to share, don’t hesitate to get in touch!

Are you using Swift? Start a 14-day trial now and get Swiftlint feedback in your pull request comments!

Accounts are now Organizations and they got a dedicated page!

September 6, 2019 in Announcements

It’s part of our culture to be always polishing our product. As the product evolves, some names become obsolete. That happened to the term Account, which no longer communicated well what it had become.

We renamed it to Organization, which describes much more clearly what it is. Organizations group repositories together and are also where you manage your GitHub integration and billing information.

In the process, we took the opportunity to create a dedicated page for each Organization. There you now find a list of the 5 most recently reviewed repositories below a merged pull requests chart. You can still list all of an Organization’s repositories by clicking on “Repositories” in the left menu. It makes your life easier and tidier!

If you belong to multiple organizations, you can switch between them using the Organization Jumper. When you log in, SourceLevel redirects you to the last-selected Organization’s page.

If you want to navigate among your repositories, you don’t need to look for them on the Repositories listing page. You can quickly change your working repository using the Repository Jumper, also located in the header. This Jumper already existed; the only difference is that it is now scoped to the Organization.

We love feedback!

Did you like it? We want to hear from you! Share your thoughts with us!

Are you not using SourceLevel yet? Start using it now! It’s free for 14 days and permanently free for open-source repositories.

Feature Announcement: Organization-wide Metrics

August 28, 2019 in Announcements

We’re very excited to announce a significant feature: organization-wide metrics.

SourceLevel measures the lead time of pull requests across your entire organization (that is, pull requests from all your repositories) and plots three charts. Each chart shows a different aspect of your process.

They are available by clicking on the Metrics and Stats link below your organization name.

Metrics and Stats link in the organization page

Below, I explain each chart in a bit more detail and describe how it may help.

Lead-time of your team

This chart shows the dispersion of Lead-Time for pull requests of all repositories. The gray points are closed or merged pull requests, and the blue ones are pull requests that remain open.

Example of organization-wide chart of open/closed pull requests


It’s a great tool to understand and improve your process. Using the above chart as an example, here are some questions your team could raise:

  • Why is that pull request taking more than 14 days to be closed or merged?
  • What happened from 11 Aug to 15 Aug that pull requests took more than average to be closed or merged?
  • Why are there so many open pull requests? Should we limit our WIP?

Lead Time Histogram

Lead-Time is the number of days a pull request remains open, regardless of whether it was eventually closed or merged.

Example shows most pull requests closed in at most 4 days


Now you can tell just by looking at the chart how many days most changes take to be merged or closed. In the example above, most pull requests are closed or merged within 4 days.
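To make the definition concrete, here’s a minimal sketch (with hypothetical data; it assumes only that each pull request records the date it was opened and the date it was closed or merged) of computing lead times and bucketing them into a histogram:

```ruby
require "date"

# Lead time: the number of days a pull request remained open,
# no matter whether it ended up closed or merged.
def lead_time_in_days(opened_on, finished_on)
  (finished_on - opened_on).to_i
end

# Histogram buckets: lead time in days => number of pull requests.
def lead_time_histogram(lead_times)
  lead_times.tally.sort.to_h
end

lead_times = [
  lead_time_in_days(Date.new(2019, 8, 1), Date.new(2019, 8, 3)), # 2 days
  lead_time_in_days(Date.new(2019, 8, 1), Date.new(2019, 8, 5)), # 4 days
  lead_time_in_days(Date.new(2019, 8, 2), Date.new(2019, 8, 4))  # 2 days
]
lead_time_histogram(lead_times) # => {2=>2, 4=>1}
```

With real data, each bar of the histogram is simply one entry of that hash.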

Merged Pull Requests

Walk day by day and see how many pull requests were closed or merged on that date. It is an essential chart if you need to understand how external events impact productivity.

Chart shows the number of pull requests closed each day


This chart is especially useful for understanding the impact of high lead times. As you decrease your lead time, more pull requests are closed or merged per day. That means code is shipped to production faster, that is, a shorter time-to-market.

Expect more metrics soon

We’re working hard on finding useful metrics for you and your team. More repository-specific metrics are coming soon, so keep an eye on our Twitter (@sourcelevelhq) and blog!

We love feedback!

Have you got any suggestions or improvements? We want to hear from you so that we can boost and polish our product together! Don’t hesitate to drop us a line.

If you are curious about how your company is doing, start our 14-day trial now and find out!

What my team and I learned when improving code quality in an existing codebase

July 11, 2019 in Code Quality, Teams

In this article, I present a list of learnings and a general process for achieving better code quality. The ideas described here are based on my experience and don’t represent the only way to achieve good, healthy code. If you don’t know where to start, I hope this blog post helps you.

Learnings from code quality improvement

I’ve worked with many companies in different business domains, and I’ve noticed that some premises held true in all of them. Here I present them as learnings. I take these points very seriously when defining an improvement process; otherwise, the chances of succeeding in improving and maintaining the quality of your code are meager.

  • It’s not a single-person job. For real. Everyone on your team must be committed to improving existing code, and more importantly, to writing quality code for new features.
  • Freezing new features while refactoring won’t work. When your team doesn’t deliver anything for a while, business or product people tend to turn refactoring into a curse word, just like Abracadabra or Hocus pocus.
  • It’s not the time for a “move fast, break things” mindset. Big refactorings and architectural changes already carry significant potential for breaking things; going fast only increases the odds. Let improvement proceed at its own pace, and rest assured that the more automated tests you have, the more comfortable with changes you become. Changing production code without unit tests, acceptance tests, and integration tests is very risky. Remember, introducing too many bugs during refactoring may get this word cursed as well.
  • Trace a strategy and keep your team aligned. That’s the best way to continually improve your codebase quality. From time to time, stop what you’re doing, check the metrics, change strategy (if needed), align your team, and keep working. Learn from your mistakes and improve this process as well.
  • Make quality improvements something usual. It is a never-ending job. If you want to create this culture and make it last, I strongly recommend integrating practices into your development flow.
  • Each team has its own problems and pace. It is a bad practice to compare “improvement scores.” The environment gets tense, and quality-improvement efforts soon turn into efforts to game a metric and stay on top of a leaderboard. Different teams have different views of quality and may be in different contexts. Compare your team only with itself. “Is the team making progress?” is the question that matters.
  • Look at both the short and the long run. When choosing a strategy, don’t pick only big work items; that kills motivation, as progress may take a long time to show. Think in the short run (low-hanging fruit and small fixes) as well as the long run.

With them in mind, here’s a suggestion of how to deal with messy code and how to take it to a better level.

Dealing with poor quality code

1. Measuring your current code quality

You’re telling me your code quality is poor. But is it really? You won’t know until you measure it. Measuring also lets you see the progress of your changes. Are they improving code quality or just adding more complexity and code smells?

If you don’t know what to measure, in the last blog post I suggested 3 metrics that you can collect automatically with linters. You may run them on your own or use a product like SourceLevel to do it for you. It’s up to you, as long as you measure.

2. Tracing an improvement strategy

When each member of your team works on their own, you lose track of improvement progress and are left with the feeling of getting nowhere. That’s why you need a good strategy: it gives the team focus.

A good strategy may vary depending on the business and on the phase of the product. If you work at a startup and haven’t launched your product yet, you need to keep developing new features and fixing bugs to hit that time-to-market date. This phase of the product usually introduces a lot of smells and design problems. That’s OK, because you don’t know your product very well yet. You’re learning the domain, and it takes time to get the big picture of your business. In this phase, I think the best strategy is to introduce as few issues as possible and keep track of the existing ones.

If your product is already launched and running, you can spot all those code smells, calculate the cyclomatic complexity of methods and classes, find duplicated code, and so on. Then you need to merge those improvements into the product evolution plan.

As I said, one thing I’ve learned is: don’t stop delivering. Refactor what really matters, what impacts your near-future work and business demands. It works just like any other technical debt and must be included in, or diluted into, the roadmap.

When I say you should dilute it into the roadmap, I mean: don’t feed more than one board. Prioritize your board according to your business needs. Slip low-hanging fruit and small changes in among the cards selected for your sprint. Refactorings and significant changes may warrant an independent card, as long as it makes sense for your business.

Be transparent with your team and prioritize issues that make future work easier and faster to change. If solving an issue brings no significant gain for the next couple of weeks, leave it there. Remember, you don’t merely want to decrease the number of quality issues; you want to improve your quality continually.

Talk to your team and show them the value of improvements. Keep your product and business people engaged and aware of the successes and failures. Show how it impacts them by using their language: delivering faster, fewer bugs, better performance, and money saved or gained.

3. From time to time, share the results with your team

As I said, this is not a single-person job. Sharing the progress of your codebase improvement helps keep your team engaged and motivated. Visibility is also essential for other stakeholders, like product or business people, or even the CTO.

It doesn’t matter if you’re a manager, a tech lead, or a developer. Gather everyone and celebrate small wins and, most importantly, the big ones. Make sure everyone is involved with the process.

You may ask, when and how should I share the progress?

Each team may find its own opportunity, but here are some suggestions: a 10-minute walkthrough of the metrics during a review meeting (for those who run Scrum), a weekly 3-minute follow-up during a stand-up meeting, or even a separate monthly meeting just for communicating the numbers and aligning next steps. The interval between meetings depends on how fast the changes are happening. The faster you can see progress, the shorter the time between them.

As for how to share the progress, I suggest plotting charts with metric results and their history. Charts are the most effective way to tell stories with numbers. Just by looking at a chart, you can tell whether your team has done an excellent job or not. They also help a lot when asking questions about your process.

4. From time to time, review your strategy

As your team members keep working on the codebase, they add new issues and solve old ones. It is essential to keep your metrics aligned with a good strategy. If metrics change too much, it’s time to review your strategy.

Your business may change too. If your competitor launches a new feature and your product must catch up, you’ll probably have to work harder on that. You may have to suspend a significant refactoring (in case the feature can’t wait for it), or even debate the trade-offs of adding more poor-quality code versus having more time to improve it. That’s the perfect time to stop and draw a new plan. Don’t abandon improvements, even if you don’t work on them for a short while.

5. From time to time, review your metrics

Another essential aspect of healthy code is defining what quality is. Once you know that, measuring does the job of telling you how good your code is. To achieve this, you need to measure the right things.

The problem is that we won’t know for sure what the right things to measure are until we start measuring. That’s why collecting new metrics and dropping unnecessary ones is a job to do regularly.


Having a healthy codebase takes time, and quality practices must be integrated into the development flow. Here I presented some learnings (so you don’t have to fail for the same reasons) and a general process you can implement at your company or team.

Share your thoughts in the comments below.

If you don’t know what to measure or don’t want to collect the metrics manually, check SourceLevel’s supported linters and sign up for a 14-day trial.

Ebert is now SourceLevel

June 24, 2019 in Announcements

As we begin a new phase for our startup, we decided to rename it to SourceLevel.

We think it better reflects what we’re doing: bringing team metrics and automation for code reviews to improve your team.
We’ll be rolling out this change in the following days. Our new home is https://sourcelevel.io.

We’ll update our website, documentation, and our GitHub app.

You don’t need to change anything on your side. Even the .ebert.yml file will continue to work, although you may want to change it to .sourcelevel.yml. 🙂

Team communication

June 21, 2019 in Teams

Communication has been a hot topic in the software development industry for some time now and its importance is only going to rise as the industry grows.

That being said, it is not easy to communicate well, since people within a company may have different personalities and even cultures, making it hard to convey information in a way that reaches everyone. What can we do to make it easier and more efficient, then?

Good communication at the company level is something most software developers don’t have to deal with in the early stages of their careers, so I will focus on communication inside a team and what helps me in my day-to-day work. I consider that small companies behave like a team most of the time, so this knowledge should still apply in those cases.

If the following topics sound obvious to you, that is a good sign.

Reviewing code

The first thing a team should do is create a standard for asking for code reviews. It is easy to lose precious time arguing about the best syntax, or reviewing code that is still in progress only to discover that most of it changed and your comments are now irrelevant. To solve this, it is important that a team:

  • Decides beforehand on the expected syntax (avoiding bikeshedding)
  • Makes it clear when the code is ready for review

An example for the last bullet point would be writing [WIP] in the title of a pull request to mark it as a work in progress, and removing the [WIP] from the title when the code is ready. Labels are also helpful for tagging a pull request as WIP.
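As a tiny illustration (the helper name and label value here are hypothetical, not any product’s API), the convention can even be checked mechanically:

```ruby
# Sketch of the [WIP] convention above: a pull request is ready for review
# only when its title no longer starts with "[WIP]" and it carries no WIP label.
def ready_for_review?(title, labels = [])
  !title.strip.start_with?("[WIP]") && !labels.include?("WIP")
end

ready_for_review?("[WIP] Add billing page")    # => false
ready_for_review?("Add billing page")          # => true
ready_for_review?("Add billing page", ["WIP"]) # => false
```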

Writing code

A much bigger problem in the long run is not giving the right amount of context to the rest of the team when writing or changing code.

Sayings like “good code doesn’t require comments” only exacerbate the problem, in my opinion, since the code can’t possibly express the context in which it was written, and that knowledge is lost as soon as team members leave the team.

One idea to address this issue is to answer questions like “Why is this being done?”, “What are we trying to achieve?”, and “What could go wrong?” when asking for review. These questions will serve as a guideline for the review and will be extremely valuable during a future refactor of the code. Commit messages are a good medium for conveying some of this information.

If your team works with pull requests, it is valuable to create a standard template. The idea is to explain to reviewers what is going on so they don’t have to go to the trouble of figuring it out by themselves and potentially asking questions that have already been answered somewhere else.

Making decisions

It is easy to make decisions; the hard part is remembering why the team decided on them after a few months away from the problem. I have already mentioned the idea of answering questions when writing code, but some things are better done in a “formal way.” Creating documents with a proper explanation of the situation, and having an easily accessible place to store them, is of utmost importance. In these documents we can detail the situation, add diagrams, and so on.


If you are discussing something with a coworker and can’t seem to reach a consensus, don’t extend the discussion over Slack, pull requests, email, and so on. Nothing beats face-to-face communication. Most of the time the situation will be resolved and the stress level kept to a minimum. A video call is the second-best option when people are not in the same physical place.


It’s all about context and setting expectations. By getting these topics right, the team will work efficiently and avoid useless discussions that, over time, contribute to the burnout of team members.

How to properly mirror a git repository

June 21, 2019 in Development Tools, Git

When people talk about mirroring a git repository, usually we have a simple answer in mind:

Just git clone the repo and you’re set!!

However, what we want with mirroring is to replicate the state of an origin repository (or upstream repository). By state, we mean all the branches (including master) and all the tags as well.

You’ll need to do this when migrating your upstream repository to a new “home,” such as when switching between hosting services like GitHub.

As with most tools, there are many ways to accomplish that, but I’ll focus on two of them. The difference lies in whether you already have a working copy of the repository or not.

Mirroring a git repository without a local copy

If you haven’t cloned the repository before, you can mirror it to a new home by

$ git clone --mirror git@example.com:upstream-repository.git
$ cd upstream-repository.git
$ git push --mirror git@example.com:new-location.git

This will get all the branches and tags that are available in the upstream repository and will replicate those into the new location.


Don’t use git push --mirror in repositories that weren’t cloned by --mirror as well. It’ll overwrite the remote repository with your local references (and your local branches). This is not what we want. Read the next section to discover what to do in these cases.

Also, git clone --mirror is preferred over git clone --bare because the former also clones git notes and some other attributes.

Mirroring a git repository if you already have a local working copy

By working copy, we mean a “normal” repository, in which you have the files that are being tracked by git and where you perform commands like git add and so on.

In this case, you may have a lot of local branches and tags that you don’t want to copy to the new location. But you do have references to the remote branches. You can view them with git branch -r. If you pay attention to that list, though, you may notice a lot of branches that were already deleted in the upstream repository. Why?

Cleaning old references to remote branches

By default, when you do a git fetch or git pull, git will not delete the references to branches that were deleted in the upstream repository (you may view them in your .git/refs/remotes dir). We need to clean those old references before mirroring them to a new location.

To do so, run

$ git fetch --prune

This will update your references to the origin repository and also clean the stale branches reported by git branch -r.

Finally, mirroring the repository to a new location

Now we’re ready to send those updated references back to the origin repository:

$ git push --prune git@example.com:/new-location.git +refs/remotes/origin/*:refs/heads/* +refs/tags/*:refs/tags/*

Ok, what just happened here?!

We want the references inside .git/refs/remotes/origin to become the LOCAL references in the new location. Local references are stored in the refs/heads dir. The same thing happens to tags.

The + sign indicates that we want to overwrite any reference that may already exist.

--prune means we want to delete from the new location any reference that doesn’t exist in our refs/remotes/origin/* (and tags) references.


Git is certainly not an easy tool to learn, but once you do, it turns into a very powerful and flexible tool.

If you want to learn more about it, please see the excellent book written by Scott Chacon and available for free.

What about you? Have any tips on git you want to share?

Measuring Code Quality: 3 Suggestions On How To Start And Why They Are Important

June 17, 2019 in Code Review

If I asked you how much quality your code has, how would you answer? There is no single correct answer, and responses vary depending on each person’s seniority and past experience. As there is no consensus on what quality is, here’s a list of aspects I think your code must have to be considered good (not necessarily in this order):

  • It must work. Whatever the feature is, it should only do what it was designed to do. No more, no less.
  • There must be automated tests to ensure it is working and to describe how it works.
  • Code is well documented (at least as much as it needs).
  • The code is clear and shows its intention by having well-chosen variables, methods, and class names.
  • It is easy to maintain, or in other words, it is easy to change.
  • It has no security breaches.
  • It is fast (at least fast enough for its usage).

Different attributes may be added depending on the kind of software you’re building. You may need your code to meet some legal requirements if you work at a bank or an insurance firm, for instance.

OK, we have the attributes of good code. Why should I measure them?

In my opinion, measuring is the best way of seeing the situation as it is. It is important to listen to the feelings of your developers, especially the senior ones. They may have a very accurate sense of the codebase’s health. However, you can’t rely on that alone.

Developers are humans, and as humans, they are influenced by emotions, by egos (have you ever heard lines like “my code is beautiful, I’m an artist”?) and also by their experience.

This last aspect, experience, is the most crucial one. Depending on how much messy code a developer has dealt with, their definition of quality changes. That’s why measuring is so important.

You get a standardized view of your code health.

Another great thing about metrics is looking at their history to ask questions. For instance, if your number of code smells is 33 in September and goes down to 22 in October, you could tell your team, “What have we done? Let’s keep doing it, because it improved the overall code quality!” Alternatively, in the worst scenario, asking your team “Why are code smells growing so much this month?” may surface important issues, like your team being pressured by the product team to deliver faster and faster without regard for quality.

These numbers allow you to ask questions, understand your process and the impacts of your changes.

OK, I agree that measuring is essential. What should I measure?

Many companies look at the number of bugs as an indicator of quality, but in fact it relates more to the quality of your process than to the health of your codebase. Of course, bad code leads to more bugs; however, not every bug is due to bad code.

Well, for me, it is. Moreover, it tends to happen when there’s no one looking out for quality on the team. We don’t usually think about quality before writing a new feature, nor do we think about quality when we’re done. We only think about quality during the development of a feature, when we find a lot of… let’s say, “what the fucks.”

That’s the reason the most common metric for software development is “WTF/minute,” as illustrated in the cartoon below by Thom Holwerda.

WTF/minutes metric

Despite being funny, it raises the question: how should I measure quality, then? There isn’t one right answer, but here are some ideas.

Here is a list of useful metrics you may measure. They are simple to calculate, and there are automated tools to calculate all of them (and plenty more if you google for it).

Measuring code complexity

How easy code is to change is directly related to its quality. So the first health metric should relate to complexity. Calculating complexity by hand can be hard, but there are a lot of open-source tools available for this purpose.

It consists of counting the number of linearly independent paths through a given piece of code (or through your entire codebase).

The formula is Cyclomatic Complexity = Edges - Nodes + (Nodes with exit points * 2). Let’s see it in practice. Given the following code, written in Ruby using the ActiveModel API:

post.save!
post_changes = post.changes || {}

if post_changes.present?
  ChangeHistory.create!(entity: 'Post', entity_id: post.id, changes: post_changes)

  if post_changes[:slug].present?
    # hypothetical Redirect model that stores the redirect rule
    Redirect.create!(
      path: post_changes[:slug].first,
      redirect_to: post_changes[:slug].last
    )
  end
end

The following image shows the code as nodes and edges of a graph representing the paths of its execution. Notice that it ignores the fact that bang methods (methods ending with an exclamation mark) may raise an exception, which would stop the execution.

In this case, the complexity would be:

Number of edges (E) = 9
Number of nodes (N) = 8
Nodes with exit points (P) = 3

Applying the formula: E - N + (P * 2)
Cyclomatic Complexity = 9 - 8 + (3 * 2)
Cyclomatic Complexity = 7
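The arithmetic is simple enough to script. A minimal sketch of the formula, applied to the numbers above:

```ruby
# Cyclomatic complexity = E - N + (P * 2), where E is the number of edges,
# N the number of nodes, and P the number of nodes with exit points.
def cyclomatic_complexity(edges, nodes, exit_nodes)
  edges - nodes + (exit_nodes * 2)
end

cyclomatic_complexity(9, 8, 3) # => 7
```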

OK. Now, let’s try to reduce this number by refactoring the code. A good strategy is to reduce the number of nested ifs, and we could achieve this by the following piece of code:

post.save!
post_changes = post.changes || {}

if post_changes.present?
  ChangeHistory.create!(entity: 'Post', entity_id: post.id, changes: post_changes)
end

if post_changes[:slug].present?
  # hypothetical Redirect model that stores the redirect rule
  Redirect.create!(
    path: post_changes[:slug].first,
    redirect_to: post_changes[:slug].last
  )
end

Just by looking at this code, one could say it is easier to understand and more comfortable to change. Its intention is much more straightforward: save the post, create a ChangeHistory record if the post’s attributes changed, and then, if the slug changed, create a redirect rule. Here’s the graph:

Now let’s do the math:

Number of edges (E) = 8
Number of nodes (N) = 8
Nodes with exit points (P) = 2

Applying the formula: E - N + (P * 2)
Cyclomatic Complexity = 8 - 8 + (2 * 2)
Cyclomatic Complexity = 4

As we can see, the second piece of code, although it does the same job, is less complicated than the first one. That’s how measurement can help make your code easier to understand: by showing where the most complicated points are, so you and your team can work on them until they are acceptable by your rules. Yes, acceptable is the word, since there is no magic number. To determine it, you first need to understand your context.

A good practice is: no matter the number, keep working on lowering it.

Measuring code smells

Back in history, when we went to the woods to collect food and there was no expiration date printed on it, we had to use our senses. If you didn’t want to get food poisoning, you would smell the food first.

The same analogy applies to code. A smell doesn’t necessarily point to a bug or a rotten piece of code. It just signals that something may not be good: an anti-pattern, duplicated code, a bad practice, anything that could be an alert for developers.

Static analysis can find those smells in an automated way, and there are open-source linters that do an excellent job looking for them. The metric here is the absolute number of smells found in your code. Very easy to calculate.
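For instance, here’s a sketch of collecting that number, assuming RuboCop as the linter and the shape of its `--format json` report (a `summary.offense_count` field); other linters expose similar totals:

```ruby
require "json"

# Extract the total number of offenses (smells) from a linter's JSON report.
# Assumes RuboCop's `--format json` shape: {"summary": {"offense_count": N}}.
def offense_count(report_json)
  JSON.parse(report_json).dig("summary", "offense_count")
end

# In a real setup you would feed it the linter's actual output, e.g.:
#   offense_count(`rubocop --format json`)
sample_report = '{"summary":{"offense_count":22}}'
offense_count(sample_report) # => 22
```

Track that single number over time and you have the history the previous section recommends charting.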

Measuring security issues

Good code is secure. That’s why it is crucial to run an automated tool that looks for SQL injection, data leaks, and other potential security issues that are common in the daily development routine.

Automated tools are not perfect, and sometimes they trigger false-positive alerts. However, that is no excuse for not running them. If your codebase raises too many false positives, tune the tool’s configuration instead of dismissing every alert.

Static analysis software looks for known patterns (like using eval or not sanitizing data before inserting it into the database) and sums them up. That’s the metric. As simple as that.
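To make the idea concrete, here is a toy, pattern-based checker in that spirit (illustrative only; real security tools are far more thorough, and these two regexes are my own simplification):

```ruby
# Naive patterns for two classic issues:
INSECURE_PATTERNS = [
  /\beval\b/,          # arbitrary code execution via eval
  /where\(["'].*\#\{/  # SQL condition built via string interpolation
].freeze

def count_security_issues(source)
  source.lines.count { |line| INSECURE_PATTERNS.any? { |p| line.match?(p) } }
end

snippet = <<~'RUBY'
  eval(params[:expr])
  User.where("name = '#{params[:name]}'")
  User.where(name: params[:name])
RUBY

puts count_security_issues(snippet) # => 2
```

The metric is just that sum of matches; real tools add data-flow analysis on top of pattern matching.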


The concept of quality varies by company, by product stage, or even by personal view. To keep or improve the quality of your code, you need to define what quality means for your team and then measure it. There are many other metrics available: keep your quality definition in mind and look for metrics that ensure your code matches it. Many tools are already designed to maximize and engage discussions about quality. Look for them!

In this blog post, I suggested three metrics, all of which (and many more) are present in our solution. SourceLevel runs automatically for every pull request opened and keeps the history of the master branch, so you can raise questions about your process by looking at data, not opinions.

Sign up for free and enjoy our trial for 14 days!

What is a linter and why your team should use it?

May 31, 2019 in Code Review

More and more teams have adopted linters and other static tools in their development process. Some have integrated them into the IDE of their preference, others have automated them as an additional step in their CI, and there are those who run them both ways.

What’s a linter, then?

According to Wikipedia, a linter

is a tool that analyzes source code to flag programming errors, bugs, stylistic errors, and suspicious constructs.

The first linter was written by Stephen C. Johnson in 1978 while working on the Unix operating system at Bell Labs. Since then, many other linters have been written for different purposes and languages, not only C.

The first linters checked source code and found potential optimizations for compilers. But over the years, many other checks and analyses have been incorporated into the process of linting.

The usage of linters has also helped many developers write better code in interpreted (non-compiled) programming languages. Since there are no compile-time errors, catching typos, syntax errors, uses of undeclared variables, and calls to undefined or deprecated functions early helps developers fix bugs faster, before execution.

Linters have evolved

Linters started with those simple checks but have grown more and more sophisticated. They perform Static Analysis, enforce configuration flags, check for compliance with a given style guide or security rules, and a lot more.

Let’s explore some of these checks and how they can be useful for you.

Static Analysis

Static Analysis means that an automated piece of software runs through your source code without executing it. It statically checks for potential bugs, memory leaks, and anything else that may be useful to know.

If you’re a Python developer, you may already know Radon. It can count source lines of code (SLOC), comment lines, blank lines, and other raw metrics, but it can also calculate a “Maintainability Index”, which may be very important in some projects.

That’s just an example. There are plenty of other linters that perform Static Analysis checks.

Code standardizing

From https://xkcd.com/1285/

Standardizing your code is a great way to move the conversation to a more productive level. Having a guideline and running linters against the codebase prevents purely cosmetic changes in your pull requests, like replacing tabs with spaces, re-indenting a variable assignment, or breaking lines after a given number of characters.
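Such a guideline can be encoded directly in a linter configuration. A minimal sketch for RuboCop (cop names come from RuboCop’s default set; the exact values are just examples):

```yaml
# .rubocop.yml
Layout/IndentationStyle:
  EnforcedStyle: spaces   # the tabs-vs-spaces debate, settled once
Layout/LineLength:
  Max: 100                # break lines after a given number of characters
Style/StringLiterals:
  EnforcedStyle: single_quotes
```

Once this file is in the repository, the style discussion happens once, in the config, instead of repeatedly in code review.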

Maximizing meaningful changes will take your discussion to topics that really matter, like architectural decisions, security issues, or potential bugs.

By the way, security issues and potential bugs can also be avoided by linters!

Security Issues

If you’re into Rails, you have probably heard about Brakeman. It’s a Static Analysis Security Tool, and it’s very handy for finding potential security issues. For instance, it runs checks looking for SQL injections when using ActiveRecord’s #find_or_create_by and friends. It also adds checks for XSS, config options, and much more.

Ruby is not the only language with this kind of engine. Ebert supports more than 40 engines for different languages. Brakeman included.

Potential bugs

JavaScript has some tricks. One of the most famous is the difference between == and ===. Always using === is good practice and avoids a lot of debugging time. If you enable ESLint to check for that, for instance, it can tell you which parts of your code use == and even replace them for you.
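A minimal sketch of how that check might be enabled in an ESLint configuration (the `eqeqeq` rule is ESLint’s built-in check for strict equality):

```json
{
  "rules": {
    "eqeqeq": ["error", "always"]
  }
}
```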

Performance improvements

Every experienced developer knows not only the importance of performant software but also a lot of tricks that improve performance. The problem is: what about newcomers? How can you pass this knowledge forward? Even senior programmers can miss a technique or two. So why not let automated software do it for you?

Did you know that in CSS the universal selector (*) may slow down page load time? Or that unqualified attribute selectors have the same performance characteristics as the universal selector? Avoiding them is good practice.

Many linters include performance checks. They can suggest different kinds of performance improvements to experienced developers and newcomers alike. CSSLint is just one example.
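For instance, the two CSS cases above map to CSSLint rules (rule names taken from CSSLint’s rule list; the declarations are just examples a linter would flag):

```css
/* Flagged by CSSLint's "universal-selector" rule */
* { margin: 0; }

/* Flagged by its "unqualified-attributes" rule */
[type="text"] { border-color: #ccc; }
```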

And many other aspects

To infinity and beyond! There are lots and lots of linters for different programming languages, configuration files, and even for software integrations. Any check that matters and can be automated may turn into a linter.

If you work in a very specific scenario, you may have to write your own, but that’s not too likely. Checks for HTML accessibility features, internationalization errors, grammar errors, and many others are already there, open-sourced, waiting for you to download, configure, and start using.

Benefits of linting

According to Ferit T., linting improves readability and removes silly errors before execution and code review. But, as mentioned, linting can do more complex jobs, like detecting code smells or performing static analysis of your codebase.

But, in practice, what are the advantages of linting?

It improves code review discussion level

If your pull request has no typos or unused variables and complies with the style guide, the conversation tends to focus on architecture. That’s it. A pull request is a great place to point out performance issues and security vulnerabilities, or to suggest better abstractions. You don’t need to get into the single-versus-double quotes or tabs-versus-spaces discussion.

This is a productivity gain for sure.

Makes code look as if written by a single person

In the medium and long term, having a solid codebase that looks as if written by a single person is good. Maintainability and evolution become easier because everyone tends to understand what’s written faster and more clearly. It prevents bugs, makes the job more enjoyable for developers, and accelerates the time to market of new features.

Gives visibility of your codebase health

Is your code healthy? You won’t know until you measure it. A good way of doing so is to add a step to your CI/CD pipeline that measures the evolution of your code’s health. Better yet, you can take action as soon as you see its health decaying. Such actions may include creating technical-debt cards on your board or raising the issue during your agile retrospective or architecture committee meeting.

Spreads awareness and ownership over code quality

Experienced developers can look at a couple of files and tell how easy a piece of software is to change. They probably know where to look: are variable names descriptive? How many lines does a method take? Is there any superclass?

Well, not every employee has this kind of eye for the code, especially newcomers. Having a linter tell developers where the code smells are is a good way to spread this knowledge and make the whole team responsible for the changes.


Linters will help you be more productive and will save you time and money. They will drive your team to better decisions (those oriented by data) and to shared ownership of quality.

SourceLevel allows you to configure almost 30 different linters to run automatically for each pull request opened by your team. It also provides a nice dashboard with charts showing the results throughout your product’s lifecycle.

Interested? Check all supported linters and give it a try for 14 days!