

You can't control what you can't measure, and this axiom certainly applies to the construction phase of software development projects, which unfortunately fail at an alarming rate. Therefore, to obtain a clear picture of the state of your software and be in a position to manage its complexity, you need continuous feedback from the code review process.

With industrial-strength tools and a knowledge of metric theory, you should be able to extract from the source code just the information you need and present it in graphical form to quickly identify problem areas and spot trends. Other, more technical displays are intended to help developers isolate problems at the subsystem, package, class, method, and statement level. Every identified problem should carry a concise description and a suggested solution to reduce the time required to make corrections.

Code reviews should occur in parallel with other development activities to ensure that each incremental release is built upon the quality of the previous release. Performing a big-bang code review for a completed project is possible, but it typically yields a large number of coding violations and may also expose some questionable design decisions that are difficult or impossible to refactor.

Whether applied incrementally or at the end of a major release, for the review process to succeed it is essential that managers and developers not use this activity to measure individuals or assign blame.

Use an agile code review process. Feedback reports should be concise, focused, and understandable enough that meetings are seldom required. The rigor applied in any review process is determined by the nature of the application under development. In any case it should involve the following activities:

  • Audits: This activity performs static checks of the source code to compare the actual usage of the language against recommended best practices. Audits can check for everything from poor exception handling techniques to the inappropriate use of naming, operators and types. The primary aim here is to find and resolve issues related to readability, maintainability, robustness, and defects.
  • Metrics: This activity measures the use of design concepts such as cohesion, coupling, encapsulation, inheritance, polymorphism, and complexity. For example, deep inheritance hierarchies can lead to fragile code and unpredictable behavior. Another metric measures the amount of coupling between objects; excessive interaction between an object of one class and many objects of other classes may be detrimental to modularity, maintenance, and testing. Other measurements include heuristics such as the ratio of comments to code.
  • Business logic: Validating business logic for correctness requires unit testing classes that probe components at the interface level to determine whether actual results equal expected results. If unit tests are not available, you are limited to manual inspection and testing through the user interface every time the code changes.
  • Performance profiling: This activity profiles a running application to detect things like performance bottlenecks and memory leaks. The objective is to eliminate problems caused by inefficient algorithms and memory usage, excessive object creation, I/O blockage, excessive network traffic, and objects that are not releasing resources.
  • Thread analysis: This activity profiles a running application to uncover concurrency problems such as potential deadlocks, thread stalls, and race conditions that can cause data corruption.
  • Source coverage: This activity profiles a running application in search of code that has not been invoked by either unit tests or the application itself. Code untouched by the program counter usually exposes a weakness in the test plan, an unsophisticated development environment, or the need to create more unit tests. It may also reveal the presence of dead code, which will never be invoked under any circumstances but is nevertheless expensive in terms of maintenance costs. Source coverage measurements can also identify hot spots, regions of code that are under heavy load; hot spots can reveal load-balancing problems.
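To make the audit activity concrete, here is a minimal static check built on Python's standard `ast` module. It is only a sketch of the idea, not a substitute for an industrial-strength tool: it flags one classic audit finding, the bare `except:` clause that silently swallows every error. The `SOURCE` snippet and the function name `audit_bare_except` are illustrative inventions.

```python
import ast

# A hypothetical snippet containing a classic audit violation.
SOURCE = '''
def load_config(path):
    try:
        return open(path).read()
    except:          # bare except: swallows every error, even KeyboardInterrupt
        return None
'''

def audit_bare_except(source: str) -> list[int]:
    """Return line numbers of bare `except:` clauses (a common audit finding)."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

print(audit_bare_except(SOURCE))  # → [5]
```

Real audit tools apply hundreds of such rules, covering naming, operators, types, and exception handling, but each rule reduces to a structural pattern match like this one.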
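The inheritance-depth metric mentioned above can be computed mechanically. This sketch, using an invented `Component`/`Widget`/`Button` hierarchy as the subject under measurement, walks each class's base classes; a metrics tool would report classes whose depth exceeds a configured threshold.

```python
# A hypothetical hierarchy four levels deep, the kind a metrics tool would flag.
class Component: ...
class Widget(Component): ...
class Button(Widget): ...
class IconButton(Button): ...

def inheritance_depth(cls: type) -> int:
    """Depth of inheritance tree: longest path from cls up to object."""
    if cls is object:
        return 0
    return 1 + max(inheritance_depth(base) for base in cls.__bases__)

print(inheritance_depth(IconButton))  # → 4
```

Coupling between objects and the comment-to-code ratio are gathered the same way: a tool counts references from one class to others, or comment lines versus statement lines, and compares the counts against agreed limits.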
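The business-logic activity is the easiest to illustrate. This sketch uses Python's standard `unittest` framework to probe a component at its interface; the `apply_discount` rule is a hypothetical example, not taken from any real system.

```python
import unittest

def apply_discount(total: float, loyalty_years: int) -> float:
    """Hypothetical business rule: 2% discount per loyalty year, capped at 10%."""
    rate = min(0.02 * loyalty_years, 0.10)
    return round(total * (1 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Probe the component at the interface level: actual vs. expected results."""

    def test_no_loyalty_means_no_discount(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_discount_is_capped_at_ten_percent(self):
        self.assertEqual(apply_discount(100.0, 8), 90.0)

if __name__ == "__main__":
    unittest.main(argv=["review"], exit=False, verbosity=2)
```

Because the tests exercise the interface rather than the user interface, they can be rerun automatically on every change, which is exactly what the manual-inspection alternative cannot do.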
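Performance profiling can be sketched with Python's built-in `cProfile` and `pstats` modules. The deliberately quadratic `build_report` function stands in for the kind of inefficient algorithm a profiler surfaces; the name is an invention for this example.

```python
import cProfile
import io
import pstats

def build_report(n: int) -> int:
    # Deliberately inefficient: string concatenation in a loop is a
    # classic bottleneck that shows up at the top of a profile.
    s = ""
    for i in range(n):
        s += str(i)
    return len(s)

profiler = cProfile.Profile()
profiler.enable()
build_report(20_000)
profiler.disable()

# Summarize the five most expensive calls by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

Memory leaks, I/O blockage, and unreleased resources need more specialized tooling, but the workflow is the same: run the application under instrumentation, then rank the findings by cost.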
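The data races a thread analyzer hunts for come from unsynchronized read-modify-write sequences. This sketch shows the standard fix, guarding a shared counter with a lock; remove the `with lock:` line and the final count becomes unpredictable on most interpreters, which is precisely the corruption described above.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # without this, the read-modify-write of counter races
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 400000
```

Deadlocks and thread stalls follow a similar pattern, typically two locks acquired in opposite orders, which is why analyzers report lock-acquisition ordering along with race candidates.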
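Finally, the coverage activity can be approximated with Python's standard `trace` module. The `grade` function and its one unreached branch are invented for illustration: two calls exercise the "A" and "B" paths, leaving the "C" branch as the kind of gap a coverage report would flag.

```python
import trace

def grade(score: int) -> str:
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"  # never reached by the two calls below: a coverage gap

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(grade, 95)   # exercises the "A" branch
tracer.runfunc(grade, 85)   # exercises the "B" branch

# counts maps (filename, lineno) -> execution count for every line hit.
counts = tracer.results().counts
executed = sorted(lineno for (_filename, lineno) in counts)
print(executed)
```

A dedicated coverage tool adds the missing half of the picture, the lines that were *not* hit, and aggregates the per-line hit counts that make hot-spot analysis possible.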

Two important artifacts produced by the iterative development process on the way to working code are analysis and design models. These models should also be subject to review since they represent the behavior of the system and serve as structural blueprints for programmers, testers, and system integrators.

Source code is generally valued over models and documentation because it is the end product, and in many organizations it is deemed the only artifact worth reviewing. However, even a professional source code review cannot reveal whether you are inadvertently building the wrong system.



"The development process must provide continuous feedback from static analysis and metrics tools plus a formal feedback report from the manual peer review."


"Every reported problem should provide one or more suggested solutions."


"Code reviews prevent the accumulation of technical debt."


"Each code review is assigned a grade to determine if the software can be released."
