Managers always need ways to evaluate the cost and the value of things: among them, programmer productivity, the value of a piece of software, and the maintainability of a piece of software. And some managers make the serious error of evaluating programmer productivity, or the value of a piece of software, by counting its source lines of code (SLOC, for short). Of course, comments and empty lines are excluded from this count; nevertheless, nowadays the count is nearly meaningless.
It used to have some meaning in the ’60s and ’70s, for programs written in COBOL or FORTRAN, or for big programs written in assembly language, for the following reasons.
Those languages have lines of code of almost uniform length and complexity. Therefore, for such languages, the number of lines of code is a good measure of the complexity of the software. But in structured and, even more, in object-oriented programming languages, a single line of code may range from trivial to extremely complex. In addition, modern languages allow lines of virtually any length: a single instruction can be split across several lines, and several instructions can be joined on a single line. Therefore, a programmer, in an attempt to look more productive, could split single instructions across many lines.
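As a sketch of this line-splitting trick (in Python, used here only as an illustration; the variable names are made up), the very same statement can be written so that it counts as one line of code or as four:

```python
price, quantity, shipping, discount = 10.0, 3, 5.0, 2.0

# One statement on one physical line: counts as 1 SLOC.
total_compact = price * quantity + shipping - discount

# The same statement spread over several physical lines:
# counts as 4 SLOC, yet it has exactly the same behavior and value.
total_padded = (price * quantity
                + shipping
                - discount)
```

A SLOC-based evaluation would rate the second version as four times more "productive" than the first, although they are identical in every meaningful respect.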
Those languages do not encourage code reuse by individual programmers. It is possible to write subroutines in those languages, but usually a master designer decided exactly which subroutines should be written, and a humble programmer was not supposed to take the responsibility of designing a subroutine. Therefore, the possible code reuse was decided by the designer, not by the programmer. In addition, a typical COBOL, FORTRAN, or assembly subroutine is several hundred lines long. Modern languages, instead, encourage programmers to write many small subroutines, even as short as a single line, so that many common behaviors can be factored out. Again, a programmer, in an attempt to look more productive, could avoid factoring out common behavior and simply duplicate it every time it is needed.
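A minimal sketch of this duplication trick (again in Python, with invented names): the same clamping logic written once as a small reusable subroutine, and then again pasted inline at every use site, as a SLOC-maximizing programmer might do.

```python
# Factored version: one small helper, reused everywhere (few SLOC).
def clamp(value, low, high):
    """Keep value within [low, high]."""
    return max(low, min(value, high))

speeds = [clamp(s, 0, 120) for s in (-10, 60, 300)]

# Duplicated version: the same logic pasted at the use site.
# Repeating this block at every call site inflates the SLOC count
# without adding any value, and makes later changes error-prone.
speeds_dup = []
for s in (-10, 60, 300):
    if s < 0:
        s = 0
    elif s > 120:
        s = 120
    speeds_dup.append(s)
```

Both versions compute the same result; only the first remains cheap to change when the clamping rule evolves.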
Therefore, if programmers are evaluated according to the number of lines they write, the resulting code becomes much worse.
The same reasoning holds if the SLOC count is used to evaluate the value of an existing piece of software: the owner of that software could split its lines in an attempt to make it appear more valuable.
Nowadays, the only legitimate use of the SLOC count is in evaluating the maintainability of a piece of software: the more SLOCs there are, the harder it is to understand them all.
Actually, a better metric is the number of tokens, that is, symbols, because a line like the following one
OperatingValueAtEnd = OperatingValueAtStart + ValueOfCurrentOperation
which contains only 5 tokens, is definitely easier to understand than the following one
a = f(b, c + 3) * d
which contains 12 tokens, even though the former is much longer. And of course, splitting a line across several physical lines does not change its token count, and therefore does not change its measured maintainability.
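A token counter along these lines can be sketched with Python's standard-library `tokenize` module (chosen here only as a convenient tokenizer; the document does not prescribe a language). It confirms the counts above, and shows that splitting a line leaves the count unchanged:

```python
import io
import tokenize

def count_tokens(source: str) -> int:
    """Count the language tokens in a snippet, ignoring pure layout
    tokens (newlines, indentation, the end-of-input marker)."""
    layout = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
              tokenize.DEDENT, tokenize.ENDMARKER}
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return sum(1 for tok in tokens if tok.type not in layout)

long_line = ("OperatingValueAtEnd = "
             "OperatingValueAtStart + ValueOfCurrentOperation")
short_line = "a = f(b, c + 3) * d"
split_line = "a = f(b,\n      c + 3) * d"  # same code, two physical lines
```

Here `count_tokens(long_line)` yields 5 and `count_tokens(short_line)` yields 12, matching the counts in the text, while `count_tokens(split_line)` still yields 12: the token metric, unlike SLOC, cannot be gamed by reformatting.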