Peer Reviews in Software: A Practical Guide

by Karl E. Wiegers, Addison-Wesley, Boston, MA, 2002. 232 pp., $39.99 (paper). ISBN 0-201-73485-0.

Reviewed by Charles Ashbacher


Few programmers dispute the premise that having fellow programmers rigorously examine your code is one of the best ways to find defects. However, it is also potentially more dangerous than traditional testing techniques. Recent revelations in the area of open source software have pointed out a serious flaw in the “many eyeballs” strategy of creating programs. The main argument open source advocates make is that with so many people examining the code, the likelihood of errors being overlooked is substantially reduced. That argument has been proven false: recent examinations of open source software versus commercial equivalents have shown more errors in the open source products than in their commercial counterparts.

The second major problem with peer reviews of software is their great potential to become a political and social disaster. Putting your work forward for critical examination is not for the faint of heart, and the reactions of the examiners can go wrong in either of two ways. The first is that the comments become too negative, leading to hard feelings and a sense of confrontation. In the second case, the comments are too gentle, with reviewers holding back for fear of hurting feelings and of inviting reprisals when their own code comes up for examination.

With all of these potential problems, many software shops make the rational decision to avoid formal peer reviews, relying instead on informal contacts and a separate testing group. Others adopt the Extreme Programming development technique, where programmers work in pairs, essentially creating a trusted peer who is less likely to drift to either extreme of the criticism scale.

However, all of these potential failures can be avoided if peer reviews are done correctly. Wiegers sets down very detailed plans for the rigorous examination of software and clearly explains the limits that must be set. He is emphatic about the need for preparation and about capping the length of individual sessions. He is equally specific in laying down explicit and rigid political rules for how sessions should be conducted and for limiting the role of managers in them. There must also be rules, clearly stated up front, concerning the use of review data for employment purposes. The rule here is simple: data collected in the peer review process is never to be used in any employee evaluation. Unfortunately, the world is not an ideal place, and some managers will treat peer review data as an easy way to evaluate employees.

My experience as a developer makes me very skeptical that formal reviews can be cost effective. At a previous place of employment, one of our first review sessions degenerated so fast that within fifteen minutes three of the participants were in tears. We eventually managed to pull it together, but after that formal reviews were conducted with velvet gloves, and there were still hard feelings weeks later. The costs almost certainly exceeded any advantage the reviews provided. Nevertheless, despite this skepticism, I believe in Wiegers' approach, as long as it is followed.

In the intense and dangerous game of software development, there are so many ways to fail and so few ways to succeed. Any additional weapon in our fight against our virtual arachnid enemies is to be welcomed, and Wiegers has described a powerful one. Be careful, though: like some bugs, this technique packs a potent venom that can do a great deal more harm than good if mishandled.


Cite this book review as follows: Charles Ashbacher: Review of "Peer Reviews in Software: A Practical Guide", in Journal of Object Technology, vol. 2, no. 1, January-February 2003, pages 121-122. http://www.jot.fm/books/review6