When we launched SourceLevel, we focused on Automated Code Review powered by more than 30 open source linters (with support for Ruby, JavaScript, Elixir, Golang, PHP, Python, and more). During product validation, we introduced use-case metrics such as Work In Progress, Engagement, and Deliveries.
Last year, we released a new product completely focused on Engineering Metrics. We've decided to sunset the Automated Code Review feature and invest in our Analytics product, bringing visibility into every corner of the delivery pipeline with a Data & Analytics solution for Engineering Teams.
We still consider Code Quality metrics relevant, but we don't think we should keep running the linters ourselves: plenty of free alternatives can be plugged into your Continuous Integration pipeline, such as GitHub Actions.
In the future, we are considering importing issues identified by these runners through our API. We recommend adding the linters you use to your GitHub Actions pipeline, or installing Danger.systems (which supports Ruby, Python, Swift, Kotlin, and JavaScript) for your organization, as in the sketch below.
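To illustrate what a Danger setup looks like (this is not part of SourceLevel), here is a minimal sketch using the JavaScript/TypeScript flavor of Danger; the 500-line threshold and the `src/` and `CHANGELOG.md` conventions are hypothetical examples, not rules of our product.

```typescript
// dangerfile.ts — a minimal sketch, assuming the JavaScript/TypeScript
// flavor of Danger (danger.systems/js) running in your CI.
import { danger, warn } from "danger";

// Flag very large pull requests (the 500-line threshold is arbitrary).
const changedLines = danger.github.pr.additions + danger.github.pr.deletions;
if (changedLines > 500) {
  warn(`This PR changes ${changedLines} lines; consider splitting it up.`);
}

// Remind authors to update the changelog when source files change
// ("src/" and "CHANGELOG.md" are hypothetical project conventions).
const touchedSource = danger.git.modified_files.some((f) => f.startsWith("src/"));
const touchedChangelog = danger.git.modified_files.includes("CHANGELOG.md");
if (touchedSource && !touchedChangelog) {
  warn("Source files changed but CHANGELOG.md was not updated.");
}
```

In a GitHub Actions job, a Dangerfile like this would typically run via `npx danger ci` after installing the `danger` package, with the linter of your choice executed as a separate step.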
We'll start contacting existing customers to notify them about the Automated Code Review sunset and to schedule the turn-off on a customer-by-customer basis.
Currently, Analytics provides metrics for Software Engineering Management. To get a glimpse of what we offer, visit our Engineering Metrics page.
Co-founder and CTO at SourceLevel.
JRPG fan who enjoys solving functional programming challenges.