Tuesday, September 23, 2008

automated QA vs. manual QA

Introduction
For this assignment the class revisited its code from CodeRuler and applied Ant, Checkstyle, FindBugs, JUnit, and PMD to the CodeRuler program. The goal of the exercise was to get an understanding of human versus machine quality assurance.

The task
I was partnered with Tyler Wolff; he worked on his stack assignment while I got to work on MyRuler.java. After getting a feel for invoking the various builds with Checkstyle, FindBugs, JUnit, and PMD on the stack assignment, performing quality assurance on MyRuler.java went a lot more smoothly. Invoking the tools on MyRuler.java revealed that the creators of CodeRuler did not follow established coding conventions: Checkstyle returned numerous tab errors from the classes built by the CodeRuler makers. For MyRuler.java itself, Checkstyle returned six errors, ranging from a missing period in the Javadocs to the use of an asterisk in import statements. PMD returned four errors, ranging from a suggestion to change to an ArrayList to a missing label on a switch statement. FindBugs did not find any errors in MyRuler.java.
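
To give a feel for the kinds of things the tools complained about, here is a small made-up snippet (not the actual MyRuler.java; the class name and helper methods are invented for illustration) that would trigger the same sorts of warnings:

import java.util.*;                     // Checkstyle: avoid the '.*' form in import statements

public class RulerExample {

    /** The peasants we plan to move on this turn */     // Checkstyle: Javadoc should end with a period
    private Vector peasants = new Vector();               // PMD: consider ArrayList instead of Vector

    /**
     * Decides what a single knight should do this turn.
     *
     * @param order the order code for the knight
     */
    public void handleOrder(int order) {
        switch (order) {                // PMD: switch statements should have a default label
            case 0:
                attackNearestPeasant();
                break;
            case 1:
                defendCastle();
                break;
        }
    }

    private void attackNearestPeasant() { /* hypothetical helper */ }

    private void defendCastle() { /* hypothetical helper */ }
}

Notice that nothing in that snippet affects how the ruler actually plays; every complaint is about form, which is exactly the point.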

In this assignment, comparing human error checking to machine error checking revealed how differently the two look at code. First, the computer does not consider the overall strategy of MyRuler.java; it could not care less whether the knights attack the peasants first. When the quality assurance tools run through the code, the only bits and pieces they care about are whether the curly braces are in the right place and whether the Javadoc sentences end with periods. When a human looks at the code, they do care about the knights attacking peasants and how the strategy is implemented. A human also cares about the curly braces being in the right place, because such nuances go to readability; if a human cannot read the code, how can they maintain the application? But when a co-worker or classmate examines code that spans over a thousand lines, they might miss a missing @return statement. Having both a human and a machine perform quality assurance on a program can only lead to readable, functional applications: the human can catch strategy flaws while the computer catches style and coding defects.
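
As an example of the kind of detail a tired human reviewer skims right past but Checkstyle does not, here is a made-up method (again, not from MyRuler.java; the Peasant class is just a stand-in for this example) whose Javadoc is missing its @return tag:

import java.util.List;

public class JavadocExample {

    /** A minimal stand-in for a CodeRuler peasant, just for this example. */
    static class Peasant {
        private boolean alive = true;

        boolean isAlive() {
            return alive;
        }
    }

    /**
     * Counts how many of our peasants are still alive.
     */                                 // Checkstyle: expected an @return tag for this method
    public int countLivingPeasants(List<Peasant> peasants) {
        int count = 0;
        for (Peasant p : peasants) {
            if (p.isAlive()) {
                count++;
            }
        }
        return count;
    }
}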

Conclusion
A person is really good at catching overall design deficiencies; however, they are mediocre at best when it comes to picking out coding errors, i.e., bugs, improper class usage, improper declarations, etc. Machines are better at finding improper declarations and suggesting how something should be declared.