This question has come up many times over my career, and the very fact that it keeps being asked suggests there is no really good answer to it.

There are per-programmer metrics on the code: how many lines were written, and so on. By themselves they mean nothing, but a change in them may indicate either a change in the nature of the work (for example, helping on other projects, more communication with customers, etc.) or a change in the programmer's motivation (loss of interest, working through illness without sick leave, etc.). So this is a tool in the manager's hands, not anything absolute.
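As a sketch of how such per-programmer line counts can be gathered: parse the output of `git log --format='--%an' --numstat`. The author names and file paths below are invented for illustration; a real run would pipe actual `git log` output in.

```python
from collections import defaultdict

# Sample output of: git log --format='--%an' --numstat
# (authors and paths are hypothetical, for illustration only)
sample = """\
--alice
10\t2\tsrc/Main.kt
3\t0\tsrc/Util.kt
--bob
0\t120\tsrc/Legacy.kt
"""

def lines_per_author(log_text):
    """Sum added/deleted lines per author from numstat output."""
    totals = defaultdict(lambda: [0, 0])  # author -> [added, deleted]
    author = None
    for line in log_text.splitlines():
        if line.startswith("--"):
            author = line[2:]
        elif line.strip():
            added, deleted, _path = line.split("\t")
            if added != "-":  # binary files show "-"; skip them in this sketch
                totals[author][0] += int(added)
                totals[author][1] += int(deleted)
    return dict(totals)

print(lines_per_author(sample))
# {'alice': [13, 2], 'bob': [0, 120]}
```

The same caveat applies: a sudden drop for one author is a prompt to ask why, not a verdict.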

The same applies to metrics of working hours.

Metrics for the project as a whole (without reference to individual programmers) include:

  • code quality metrics (something like SonarQube)
  • coding-style compliance metrics (for example, ktlint)

These are useful if they are checked on every MR/PR. They are fairly objective indicators that can and should be made mandatory.
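As an illustration of making such a check mandatory on every MR/PR, here is a hypothetical GitHub Actions workflow (the workflow name and the Gradle task are assumptions; adjust to your build setup):

```yaml
# Hypothetical workflow: fail the PR if ktlint finds style violations
name: lint
on: [pull_request]
jobs:
  ktlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run ktlint via Gradle
        run: ./gradlew ktlintCheck   # assumes the ktlint Gradle plugin is applied
```

A non-zero exit code from the linter fails the job, which blocks the merge if the check is marked required.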

From git, without tying the data to any specific programmer, you can extract the frequency of class/method changes (code churn rate). This highlights the hottest code: it is worth examining it and considering refactoring and additional tests. That said, I have not seen good enough utilities working at the level of methods/functions/classes for Kotlin; most operate at the file level, for example https://github.com/garybernhardt/dotfiles/blob/main/bin/git-churn .
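A file-level churn count, similar in spirit to git-churn, is just a frequency count over `git log --format= --name-only` output. A minimal Python sketch (the file paths are invented for illustration):

```python
from collections import Counter

# Sample output of: git log --format= --name-only
# (file paths are hypothetical)
sample = """\
src/Main.kt
src/Util.kt

src/Main.kt

src/Main.kt
src/Legacy.kt
"""

def churn(log_text):
    """Count how many commits touched each file, hottest first."""
    paths = [line for line in log_text.splitlines() if line.strip()]
    return Counter(paths).most_common()

print(churn(sample))
# [('src/Main.kt', 3), ('src/Util.kt', 1), ('src/Legacy.kt', 1)]
```

The top of this list is where refactoring and extra tests pay off most.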

The number of lines of code (SLOC) is also an important metric: it shows how much logic the project contains (and how hard it is to get your head around it). But it is a metric of the project rather than of a specific developer. This is particularly well integrated into Rails, where `rake stats` prints:

```
+----------------------+-------+-------+---------+---------+-----+-------+
| Name                 | Lines |   LOC | Classes | Methods | M/C | LOC/M |
+----------------------+-------+-------+---------+---------+-----+-------+
| Controllers          |   176 |   149 |      10 |      18 |   1 |     6 |
| Helpers              |    38 |    35 |       0 |       4 |   0 |     6 |
| Models               |   183 |   147 |       5 |      20 |   4 |     5 |
| Libraries            |     0 |     0 |       0 |       0 |   0 |     0 |
| Integration tests    |     0 |     0 |       0 |       0 |   0 |     0 |
| Functional tests     |   855 |   686 |       9 |       3 |   0 |   226 |
| Unit tests           |   684 |   568 |       7 |       0 |   0 |     0 |
+----------------------+-------+-------+---------+---------+-----+-------+
| Total                |  1936 |  1585 |      31 |      45 |   1 |    33 |
+----------------------+-------+-------+---------+---------+-----+-------+
  Code LOC: 331     Test LOC: 1254     Code to Test Ratio: 1:3.8
```

– even from these figures alone you can immediately say something about the project.

SLOC is a good measure of maintenance effort, while Hits-of-Code (https://hitsofcode.com) measures development effort: it counts not only the current code but also the deleted code.
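The difference between the two is easy to see on per-commit numstat data: SLOC grows only by the net change, while Hits-of-Code accumulates every added and deleted line. A minimal sketch (the commit numbers are invented):

```python
# Each tuple: (lines added, lines deleted) in one commit — invented numbers
commits = [(100, 0), (50, 40), (5, 80)]

# SLOC grows by the net change; Hits-of-Code counts every touched line
sloc = sum(added - deleted for added, deleted in commits)
hits_of_code = sum(added + deleted for added, deleted in commits)

print(sloc, hits_of_code)  # 35 275
```

A heavily reworked module can thus have a small SLOC but a large Hits-of-Code, which is exactly the development effort SLOC hides.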

Among the relatively recent arrivals, the DORA metrics (DevOps Research and Assessment) are good:

  1. Lead time for changes – the time from code landing in main to reaching clients
  2. Change failure rate – the percentage of client-facing releases that require an immediate fix
  3. Deployment frequency – how often new versions are released to customers
  4. Mean time to recovery (MTTR) – the average time to recover from a failure

(In this note we are, of course, interested in technical rather than product/marketing metrics.)
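As a sketch of how these four numbers fall out of deployment records (the field names and sample data below are invented for illustration; a real pipeline would pull them from CI/CD and incident tooling):

```python
from datetime import datetime, timedelta

# Invented deployment records: when the change hit main, when it reached
# clients, whether the release failed, and when service was restored
deploys = [
    {"merged": datetime(2024, 1, 1, 10), "deployed": datetime(2024, 1, 1, 14),
     "failed": False, "recovered": None},
    {"merged": datetime(2024, 1, 3, 9),  "deployed": datetime(2024, 1, 3, 17),
     "failed": True,  "recovered": datetime(2024, 1, 3, 19)},
]

# 1. Lead time for changes: average of (deployed - merged)
lead_times = [d["deployed"] - d["merged"] for d in deploys]
lead_time = sum(lead_times, timedelta()) / len(lead_times)

# 2. Change failure rate: share of releases that failed
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# 3. Deployment frequency: deploys per day over the observed period
period_days = (deploys[-1]["deployed"] - deploys[0]["deployed"]).days or 1
deploy_frequency = len(deploys) / period_days

# 4. MTTR: average time from a failed deploy to recovery
failures = [d for d in deploys if d["failed"]]
mttr = sum((d["recovered"] - d["deployed"] for d in failures), timedelta()) / len(failures)

print(lead_time, change_failure_rate, deploy_frequency, mttr)
# 6:00:00 0.5 1.0 2:00:00
```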

In the end, there are several classes of metrics, and it makes sense to use them all. At the same time, no metric by itself will tell you whom to reward or whom to fire.