In a recent software quality audit, our static analysis tool Teamscale found that the comment completeness ratio of the application under study was 96%. But when we examined these comments manually, we found that the majority of them were generated automatically and therefore of limited use.
This example illustrates that a combination of software tools and human expertise is necessary to get a holistic picture of software quality and to actually ensure it. To argue for this position in more detail, this blog post sketches software quality tasks that should be performed by software tools, tasks that should be performed by human experts, and tasks that should be performed jointly.
Software quality is an important aspect of a software application, especially for long-living software that is used and evolved over decades. To ensure software quality, much effort is put into quality activities like testing, bug fixing, code reviews, (static) quality analysis, and refactoring. Furthermore, many tools have been developed in recent years to automatically assess software quality and to support developers in improving the quality of their applications.
Often, however, a software tool is introduced, induces a short-term improvement of software quality, and yet has a negligible positive effect on software quality in the long run. We argue that to achieve a long-term positive effect, a combination of software tools and human expertise is necessary.
The example of a recent software quality audit demonstrates this interplay of software tool and human expert: When analyzing the Java source code of the application under study, our static analysis tool Teamscale found that the comment completeness ratio was 96%, i.e., 96% of all public types, attributes, and methods were commented. This appears to be a very good value and would rank the application very high in our benchmark of industry software.
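To make the metric concrete, the following listing is a minimal sketch of how such a comment completeness ratio could be computed. The member representation is an assumption made for illustration; it is not Teamscale's actual implementation.

import java.util.List;

public class CommentCompleteness {

    /** A public type, attribute, or method together with its (possibly missing) comment. */
    record Member(String name, String comment) {
        boolean isCommented() {
            return comment != null && !comment.isBlank();
        }
    }

    /** Returns the percentage of members that carry a comment. */
    static double completenessRatio(List<Member> members) {
        if (members.isEmpty()) {
            return 100.0;
        }
        long commented = members.stream().filter(Member::isCommented).count();
        return 100.0 * commented / members.size();
    }

    public static void main(String[] args) {
        List<Member> members = List.of(
                new Member("OrganizationalUnit", "Instantiates a new organizational unit."),
                new Member("mapping", "the mapping"),
                new Member("process", null));
        // Prints "Comment completeness: 67%" for the sample above.
        System.out.printf("Comment completeness: %.0f%%%n", completenessRatio(members));
    }
}

Note that such a metric counts a comment as soon as one exists; it says nothing about whether the comment carries any information.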
But when we manually analyzed a subset of these comments, we found that many of them were trivial comments generated from the identifiers in the code in order to adhere to the company policy »More than 95% of types, attributes and methods have to be commented«. The following code listing shows an example of such a trivial comment:
/**
 * Instantiates a new organizational unit.
 *
 * @param mapping
 *            the mapping
 * @param form
 *            the form
 * @param request
 *            the request
 * @param response
 *            the response
 * @throws SecurityException
 *             the security exception
 * @throws IllegalArgumentException
 *             the illegal argument exception
 */
public OrganizationalUnit(final ActionMapping mapping, final ActionForm form,
        final HttpServletRequest request, final HttpServletResponse response)
        throws SecurityException, IllegalArgumentException {
    // Constructor logic ...
}
When further investigating the matter, we found that about three-fourths of all existing comments were generated and therefore useless. Counting only the remaining quarter of the commented members brings the comment completeness ratio down from 96% to 24%.
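How could such generated comments be detected automatically? The following sketch shows one simple heuristic, based on our own assumptions rather than the analysis used in the audit: a comment is flagged as trivial if all of its words are either common filler words or already contained in the identifier it documents.

import java.util.Arrays;
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

public class TrivialCommentDetector {

    // Words that generated comments typically add without carrying information.
    private static final Set<String> FILLERS =
            Set.of("a", "an", "the", "new", "instantiates", "gets", "sets", "returns");

    /**
     * Flags a comment as trivial if all of its words are either filler words
     * or already occur in the identifier it documents.
     */
    static boolean isTrivial(String identifier, String comment) {
        // Split camel case: "OrganizationalUnit" -> {organizational, unit}
        Set<String> identifierWords = Arrays.stream(identifier.split("(?<=[a-z])(?=[A-Z])"))
                .map(word -> word.toLowerCase(Locale.ROOT))
                .collect(Collectors.toSet());

        for (String word : comment.toLowerCase(Locale.ROOT).split("\\W+")) {
            if (!word.isEmpty() && !FILLERS.contains(word) && !identifierWords.contains(word)) {
                return false; // The comment adds at least one word of its own.
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isTrivial("OrganizationalUnit",
                "Instantiates a new organizational unit.")); // true: generated
        System.out.println(isTrivial("OrganizationalUnit",
                "Groups employees for payroll reporting.")); // false: informative
    }
}

A real analysis would of course need more than word overlap, but even this crude heuristic catches comments like the one in the listing above. More importantly, deciding that such comments are useless, and what to do about the underlying policy, remains a judgment call for a human expert.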
Warning: This blog post describes my experiences from half a year of work as a software quality consultant (performed together with colleagues who have been working in this field for more than ten years, borrowing from their experience). Hence, the blog post is highly subjective and incomplete.
The following non-exhaustive list sketches tasks of software tools in software quality activities:

- Aggregate information (e.g., metrics across large code bases)
- Analyze software quality (e.g., continuously, for large code bases or frequent changes)
- Perform tests (e.g., execute automated test suites)
The following non-exhaustive list sketches tasks of human experts in software quality activities:

- Define relevant quality aspects/ scope of analysis/ quality goals
- Configure/ customize/ administrate software tools
- Analyze software quality (e.g., interpret and assess the findings of the tools)
- Perform tests (e.g., design meaningful test cases)
There are several tasks that can be performed only by software tools and not by human experts (for example ‘Aggregate information’ or ‘Analyze software quality’ for large code bases or frequent changes). Similarly, there are several tasks that can only be performed by human experts and not by software tools (such as ‘Define relevant quality aspects/ scope of analysis/ quality goals’ or ‘Configure/ customize/ administrate software tools’).
Furthermore, there are several tasks that have to be completed jointly by software tools and human experts because each contributes a subpart of the overall task (for example ‘Analyze software quality’ or ‘Perform tests’). Hence, we conclude that a combination of software tools and human expertise should be used in software quality activities (‘man and tool’ instead of ‘man vs. tool’). Only the combination of both gives a holistic picture of software quality, and only human commitment ensures that this quality is maintained and improved.