Review Guidelines

Introduction

Purpose

This document recommends reviews and review approaches that can be used to further improve the predictability and repeatability of projects within an enterprise. The scope includes reviews from project inception through delivery, as well as reviews that improve communication between organizations.

These guidelines are intended for use at companies where IT is not the main product of the business, but they could be used by pure IT companies as well; there, the business interests would be represented by Sales, Marketing, Customer Support, and, ultimately, customer interaction.

Definitions and Acronyms

Requirements - Requirements describe the desired system in both functional and non-functional terms: what a system should do, how it should work, what operating tolerances it should allow, what operating systems it should run on, and other details. Requirements are used to design and test the system, and as such each requirement should be testable, concise, complete, and unambiguous. Effort should also be made to ensure that individual requirements can be changed over time as new information is found (are modifiable) and that requirement history and relationships can be maintained over time (traceability).
SRS - Software Requirements Specification: a deliverable that describes all data, functional, and behavioral requirements, all constraints, and all validation requirements for software. Generally, the document form of a collection of requirements.
Use Case - An implementation-neutral but detailed description of a single activity in a business process that identifies data inputs and outputs, performance/timing requirements, the handling of error conditions, and interfaces with external applications. A use case describes something that a user may want to do. Use cases appear in one of the diagram types of the Unified Modeling Language (UML), a modeling standard that provides a common notation for describing system designs and functionality. Use cases reflect, and many times are used to uncover, requirements.
Test Case - A specific test to be performed, including the specific steps necessary for completing the test. Test cases typically describe actions or steps and the expected results of each action or step. Completed test cases also record the results of the test, which may be a simple pass/fail or the reporting of a particular result value.
Test Plan - An artifact that describes the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. A test plan comprises many test cases and coordinates them: it specifies how many test cases must pass before a piece of software is released into production, what kinds of tests are included, what tools are used, and under what circumstances regression testing must be done.

Review Basics (for all review types)

Roles

Moderator - The moderator handles the review logistics and ensures that the review materials and reviewers are prepared before allowing the inspection to proceed. The moderator keeps the review focused, records the defects found by the reviewers, and enforces proper review etiquette.
Author - The author notifies the moderator that a review should be scheduled and distributes copies of the material to be reviewed. The author answers the reviewers' questions during the review and afterward documents how each identified defect was corrected or addressed. It is the author's responsibility to partition larger projects into segments that can each be reviewed within the time constraints of a single review.
Reviewer - Reviewers must be technically competent and should have no managerial role over the author. They review the material in advance, though it is best to limit preparation to roughly 2 hours per review meeting. (If more time is needed, it is usually a sign that the review material should have been partitioned into smaller parts.) During the meeting, reviewers should stick to technical issues and practice proper review etiquette. At least two reviewers should be used, but the number should be kept small enough to allow for useful discussion.

Rules

  • All material should be submitted for review no fewer than 2 business days prior to the scheduled review, preferably 3 or more days.
  • Reviewers should review the material prior to attending the review meeting. Reviewers who have not reviewed the material prior to the meeting should not attend.
  • The meeting will not cover the material line by line or item by item, but will proceed by exception. In other words, reviewers should come prepared with a marked-up copy of the material; the moderator will ask where the first exception or issue exists, and discussion will ensue. The meeting will then proceed to the next exception or issue, repeating until the end of the material.
  • The moderator will assume that reviewers who do not attend, do not send an alternate, and do not provide input to the review in some other way (via e-mail, etc.) have implicitly approved the material under review. However, meetings should not be intentionally scheduled at times when reviewers cannot attend in order to avoid comments.
  • Reviewers who have no comments, or who request only small changes, may submit those changes via e-mail to the group. If a reviewer does not take issue with any of the material, the reviewer should respond to the group that the material is approved.
  • If all reviewers approve the document (or provide only minor feedback), the material is considered approved and the meeting may (and should) be cancelled.
  • If significant changes are required as a result of the review, another review is planned and conducted to cover the material requiring alteration. The same rules apply and the same reviewers should be used.

Review Types

This section discusses the various types of reviews that IT could leverage to further improve the quality and reliability of technology-related products. Each subsection covering an individual review provides information on the type of review, the general audience, the recommended frequency, and any guidelines specific to that review type.

Monthly Project / Technology Review

The monthly project / technology review should include representatives from within IT who have active projects, as well as representatives from IT teams that provide services across the business, such as the Architect team, the DBA Group, the BI / DB Architect group, and Infrastructure. The purpose of this meeting is to ensure that the represented technology groups are aware of ongoing efforts within the Business and to provide a recurring forum to inform all groups about upcoming work and the expectations related to that work. The topics of these meetings could be one or more ongoing projects or a discussion of technology currently in use or under evaluation by one of the represented groups. Some of these meetings could be combined with lunch-and-learn sessions targeted at a wider group of people.

As the name implies, the frequency of this review is once a month. Groups other than those mentioned above should also be included when there are known technology overlaps between Business Units.

Requirements Review

This is not a deeply technical meeting; rather, it provides an opportunity to review the requirements, vision, scope, and goals of the project prior to the selection of any technology. In essence, it is a presentation and discussion of the business request. Requirements represent the foundation on which the project is built and against which success is measured. This review also allows groups within the company to determine whether there is any overlap with existing projects or services.

The purpose of this review is to ensure that requirements are testable, concise, complete, and unambiguous, and that they have been communicated to all the parties that may play a role in delivering or supporting them. For example, "the system must respond quickly" is ambiguous, while "search results must be returned within two seconds" is testable. The reviewers should also look for a clear prioritization of requirements. Participants should include the author(s) of the requirements, representatives from Infrastructure and Architecture, representatives from or for the customer (such as a Business Analyst), representatives from the DBA or Data Architecture groups, and Subject Matter Experts or representatives of other groups that have implemented, or are implementing, like software. The participants in this meeting may be able to point the project team toward similar requirements delivered with other projects, potentially decreasing the amount of work associated with the project under review.

Each project should have at least one requirements review. Over time, additional requirements reviews may be necessary as priorities and scope change through successive iterations of the project. These requirements changes should be clearly communicated and controlled via a requirements change control process, which is beyond the scope of this document and is covered in another IT Process Recommendation Document.

Technology Selection and Architectural Design Review

Due diligence should be conducted prior to selecting a technology or vendor for a project need. Once initial research has been completed and a collection of vendors or technologies has been identified, this information should be presented to a technical body that can review the recommendations made as part of the proposed technology selection. This review should be conducted by an Architectural Review Committee (ARC) and should include several representatives from the Business lines and IT, including those people responsible for the technical delivery of the project. The purpose of the review is to ensure that the technology selection is based on sound, reliable information and good requirements, and that it fits within the current technology stack.

Once the vendor or technology has been selected, the project should enter a phase of architectural design (requirements gathering should have been done prior to technology or vendor selection). Design artifacts generated as part of the architectural design process should then be reviewed by an internal body composed of at least one architect, several other technically competent reviewers, and the author(s) of the artifact. Once the artifacts pass the project review, they should be passed to the ARC for review, where at least one representative from the project team should attend to present the material.

This collection of reviews should happen several times through the life of a project as technology or vendor selections are required and as architectural artifacts are completed.

Design Review

Design reviews should be conducted on any major components or subsystems that are designed during high-level design efforts for any IT project. The purpose of these reviews is to make sure that the design fully meets the requirements for the component or subsystem, aligns with the architectural design, and provides healthy interfaces (low coupling, high cohesion) both between its own components and to other components as required.
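
As an illustration of the coupling and cohesion criterion, the following sketch shows a component that depends on a narrow interface rather than on another subsystem's concrete implementation. It is a minimal Java example; the names (CustomerStore, ReportGenerator) are hypothetical and not tied to any particular project.

    import java.util.List;

    // The reporting component depends only on this narrow interface
    // (low coupling), not on the database subsystem behind it.
    interface CustomerStore {
        List<String> customerNamesInRegion(String region);
    }

    // The component does one job (high cohesion) and can be tested or
    // reused with any CustomerStore implementation.
    class ReportGenerator {
        private final CustomerStore store;

        ReportGenerator(CustomerStore store) {
            this.store = store;
        }

        String regionReport(String region) {
            return String.join("\n", store.customerNamesInRegion(region));
        }
    }

Because ReportGenerator never names a concrete data source, a database-backed store can be swapped for an in-memory stub during testing without touching the reporting code.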

The reviewer roles should be filled by an architect, a person representing QA concerns, and other technically competent parties from the team (who may be designing different modules).

These reviews should occur once the design artifacts for a system or component have been generated and will likely occur many times over the project lifecycle. Any artifacts that require significant changes to pass review need to be reviewed again. It is important to note that design reviews run much more smoothly if the components to be reviewed are presented in digestible chunks. In other words, it is not recommended to perform a design review for all the components of a project at one time; rather, those pieces should be reviewed as they are completed.

Design Review Minimum Criteria

Requirements Tracing

  • Do the design artifacts map to a required feature, a utility required to support a feature, or a bug fix that is listed in the requirements or change request?
  • Are all aspects of the requirement set accounted for in the design? (see the traceability sketch below)
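
One lightweight way to support this kind of tracing is to keep an explicit mapping from requirement identifiers to the design artifacts that address them, so unmapped requirements stand out during the review. The sketch below is a minimal, hypothetical Java illustration; the requirement IDs and artifact names are invented.

    import java.util.List;
    import java.util.Map;

    class TraceabilityCheck {
        public static void main(String[] args) {
            // Hypothetical trace: requirement ID -> design artifacts addressing it.
            Map<String, List<String>> trace = Map.of(
                "REQ-001", List.of("UC-Login", "SEQ-LoginFlow"),
                "REQ-002", List.of("UC-Search"),
                "REQ-003", List.of());

            // Any requirement with no artifact is a gap to raise in the review.
            trace.forEach((req, artifacts) -> {
                if (artifacts.isEmpty()) {
                    System.out.println(req + " has no design artifact: review gap");
                }
            });
        }
    }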

Documentation

  • Do the design artifacts include use cases that fully represent the requirement or set of requirements?
  • Do the design artifacts include other UML diagrams and descriptions with enough detail such that an independent developer could use them to create or modify the system?
  • Is the design complete?

Standards

  • Does the design take standards (either internal or external) into account?
  • Does the design take existing systems or technologies into account? (reuse of existing assets)
  • Has the component / component set been designed with high cohesion and low coupling in mind?

Other relevant issues that should be tracked but may not be part of the design review

  • Are there test plans that cover the areas covered in design? (Question for QA person)

Code and Configuration Reviews

Any code or configuration changes should be reviewed by a technical group that includes peers and individuals who represent QA / Testing interests, after unit testing has occurred but prior to system / module testing. QA should be involved in any technical reviews, as they must test the end results. Significant changes resulting from a code review should be made and re-presented prior to system / module testing. Not every piece of code need be reviewed individually; changes can be grouped into work packages. Unit testing should always be conducted prior to a code or configuration review, and those results should be included with the review package. Code and configuration changes that do not pass unit testing should not be presented in a review.
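
To illustrate the kind of unit-test evidence that might accompany a review package, here is a minimal sketch assuming JUnit 5; the class under test and its behavior are hypothetical.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    // Hypothetical unit under test.
    class OrderTotal {
        static double withTax(double subtotal, double rate) {
            if (subtotal < 0) throw new IllegalArgumentException("negative subtotal");
            return subtotal * (1 + rate);
        }
    }

    class OrderTotalTest {
        @Test
        void appliesTaxRateToSubtotal() {
            assertEquals(107.0, OrderTotal.withTax(100.0, 0.07), 0.0001);
        }

        @Test
        void rejectsNegativeSubtotal() {
            assertThrows(IllegalArgumentException.class,
                         () -> OrderTotal.withTax(-1.0, 0.07));
        }
    }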

The purpose of a code or configuration change review is not to spend time on code formatting, which can be addressed with any number of code beautifiers. The review is conducted to make sure that the code or configuration changes implement the approved design, are accurate and understandable, and, in the case of code, follow good coding techniques. Good reviews catch problems prior to QA testing, resulting in smoother testing periods, and produce code and configuration changes that are easier to maintain in the future, since the review ensures that more than one person can interpret the deliverables.

Code and configuration reviews should happen at least once per project and may occur at any time a code or configuration work package is delivered.

Code and Configuration Change Review Minimum Criteria

Requirements Tracing

  • Do all changes implement a required feature, provide utilities to support a feature, or fix a bug that is listed in the requirements or change request?

Design

  • Does the implementation match the design?
  • Are all functions in the design coded?

Maintainability

  • Are there accurate and sufficient comments for the code?
  • Is the reviewer able to understand the code, even without the comments?
  • Are variable names spelled correctly and consistently?
  • Are variables documented, where it is not obvious, with units of measure, bounds, and legal values? (see the sketch below)
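
The variable-documentation criterion can be illustrated with a short, hypothetical Java sketch:

    class PumpController {
        // Flow rate in liters per minute; legal range 0.0 to 450.0.
        private double flowRateLpm;

        // Sensor read retries; legal values 0 through 5.
        private int sensorRetries;
    }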

Documentation

  • Are command-line arguments and environment variables, if any exist, documented?

Coding Standards

  • Does the code adhere to the Code Guidelines?
  • Are named constants used in place of literal "magic numbers" inserted directly into the code? (see the sketch below)
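
To illustrate the constants criterion, the sketch below contrasts a literal "magic number" with a named constant; the example and its values are hypothetical.

    class SessionPolicy {
        // Avoid: a bare literal gives the reviewer no context.
        //   if (idleSeconds > 1800) { logOut(); }

        // Prefer: a named constant documents intent in one place.
        private static final int IDLE_TIMEOUT_SECONDS = 30 * 60;

        boolean shouldLogOut(int idleSeconds) {
            return idleSeconds > IDLE_TIMEOUT_SECONDS;
        }
    }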

Other relevant issues that should be tracked but may not be part of the code review

  • Is user-visible functionality described in the user documentation? (Question for documentation person)
  • Does the implementation match the documentation? (Question for developer and documentation person)
  • Are there test plans / test cases for code changes? (Question for QA person)

Test Plan Reviews

Test plans are an integral part of any successfully delivered product, as they provide information about the scope, approach, resources, and timeline for testing. A Test Plan is much more than a project plan schedule showing timelines and tasks; it provides an approach and framework for the testing effort. The QA representative should review any test plans with development staff as well as any available representatives from the business side (or representing the business), such as a functional analyst.

The purpose of the Test Plan review is to ensure that a testing approach has been considered and documented and that the appropriate resources will be available to participate. The Test Plan provides the framework within which Test Cases are built; as such, the Test Plan Review should seek to determine whether that framework has been considered and planned.

The Test Plan Review should occur once per project, once the Test Plan has been created. The Test Plan is different enough from the overall project plan that a specific review to cover it is required. Once the initial Test Plan has been approved, the schedule portion of the plan can be included and covered as part of the larger project plan. Another Test Plan Review would only be needed if the approach or general framework requires changes; resource and timing issues can be worked out without a new review.

Test Plan Review Minimum Criteria

Planning

  • Are testing tools going to be used? Are licenses available?
  • Are resources identified and available?
  • Are all appropriate testing types included?
    • Unit
    • Integration
    • Scalability
    • Black Box
    • Functional
    • Regression
    • Load
    • Performance
  • Are QA resources involved throughout the life of the project? (Design, etc)
  • Is testing iterative, as work packages become available? (So not all testing occurs at the end)
  • Are Test Case Reviews built into the schedule?

Test Case Reviews

Test Case Reviews are more detailed than Test Plan Reviews, as they cover the specific test cases within a test plan. The QA representative should review any test cases with development staff as well as any available representatives from the business side (or representing the business), such as a functional analyst. All requirements should be accounted for in a complete set of test cases.

Test Case Review Minimum Criteria

Requirements Tracing

  • Are there test cases for each feature or bug in a requirement set?
  • Are there test cases for any utilities that were provided to support a feature?
  • Do the test cases ensure that the feature implementation matches any user documentation for user facing features?
  • Are the expected test case results provided? Are they clear and concise? (see the sketch below)
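
One simple way to make the requirement-to-test-case link visible is to carry the requirement identifier in the test case itself. The sketch below assumes JUnit 5; the requirement ID, names, and behavior are invented for illustration.

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    class SearchTest {
        @Test
        @DisplayName("REQ-042: search finds a document by its exact title")
        void exactTitleMatchReturnsResult() {
            // Steps: index one document, then search for its exact title.
            List<String> index = new ArrayList<>();
            index.add("Annual Report 2023");

            // Expected result: the document appears in the result list.
            assertTrue(index.contains("Annual Report 2023"));
        }
    }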

Maintainability

  • Is there accurate and sufficient documentation for the test case?
  • Is the reviewer able to understand the test case?
  • Are variables documented with (where it's not obvious): units of measure, bounds, and legal values?

Integration Reviews

Integration Reviews are intended to increase the visibility, and the success, of touch points between products or services implemented within the enterprise. Lack of attention to integration details, or a lack of awareness of integrations, frequently causes project cost and time overruns and often leads to a failure to meet customer expectations. Integration Reviews place a spotlight on this very important area, providing a mechanism to discover gaps and overlaps between products or services, and a forum for gathering the integration details and expectations of all parties involved in a potential integration scenario.

These reviews include representatives from all parties involved in the particular integration effort, as the integration documentation constitutes the contract between the integrating parties. The attendees should also include someone from (or representing) the integration team and an architect. An Integration Review should happen at least once for any integration, or collection of like integrations, on a project. The complexity of the integration may require successive meetings to resolve all issues. A successful Integration Review (or collection of Integration Reviews) results in a contract that all parties must follow as agreed in the integration artifacts. Any change to the contract made by any party must be documented and agreed to by all parties before implementation.
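
The contract produced by an Integration Review is often captured as an interface specification that both sides code against. Below is a minimal, hypothetical sketch of such a contract artifact expressed as a Java interface; the service and its operations are invented for illustration.

    // Hypothetical integration contract between an ordering system and a
    // fulfillment service. Any change to this interface must be documented
    // and agreed to by both parties before implementation.
    public interface FulfillmentService {
        /**
         * Submits an order for fulfillment.
         *
         * @param orderId  unique order identifier; must not be null
         * @param sku      stock-keeping unit; must not be null
         * @param quantity must be greater than zero
         * @return a tracking identifier for the resulting shipment
         */
        String submitOrder(String orderId, String sku, int quantity);
    }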

Recommendations on how to Proceed

Each of the review types discussed above is an important component in delivering products successfully into the Enterprise. However, instead of introducing and requiring all of these reviews at once, it makes sense to focus on a few at a time and to require additional reviews as the IT organization matures and becomes more familiar with the review process.

It is recommended that all projects be required to demonstrate they have conducted each of the required reviews as they are introduced in order to be considered for a production rollout. The production rollout team should have the right (and be expected) to refuse rollout of any project that has not provided documentation that indicates successful completion of these reviews. Projects that are currently in progress should receive special dispensation to bypass certain reviews. For instance, if all the design and coding is complete, and the project is currently in pre-production testing, then Design Reviews and Integration Reviews would not be required. Teams may leverage any or all of the reviews at any time, but would only be held accountable to perform those that are required at the time.

The set of reviews that can be positioned for an initial rollout includes:

  • Monthly Project / Technology Reviews
  • Design Reviews
  • Integration Reviews

These types of reviews can be rolled out quickly, without requiring the addition of any new tools to the enterprise. They are easy to start and are already in use within the department, though not yet required or standardized.

The next phases of review rollouts require that other tools, processes, or teams are in place for successful implementation. These reviews are indicated below in the order they should be implemented, along with the steps or dependencies that need to be in place before the new reviews can be implemented.

Technology Selection and Architectural Design Review - Requires the selection and formation of an Architectural Review Committee (ARC); requires the kickoff of the corporate ARC.
Test Plan Reviews - Requires identification of QA representatives; requires rollout of testing tools (in progress); requires education in testing methods.
Requirements Reviews - Requires rollout of requirements gathering tools (in progress); requires education in requirements gathering; requires a Requirements Change Management Process; requires a requirements prioritization process / methodology.
Code and Configuration Reviews - No dependencies; implementation of this review is simply prioritized against the other review types.
Test Case Reviews - No dependencies; implementation of this review is simply prioritized against the other review types.

It is important to note that the Code and Configuration Review and the Test Case Review are the last two review types to be implemented. Those two review types are very detailed and require more time from the implementers (coders / configuration staff). The other review types involve some time from the implementers, but also involve more time from analysts, architects, supervisors, and directors. This spreads the review load across a wider group of people, decreasing the initial burden on any single group. As the organization gains more experience with efficient and productive reviews, the additional reviews can be added without causing additional strain on resources.

Requiring the completion of these reviews prior to deployment to production will increase confidence that the delivered solution meets the requirements of the customer, including performance and functionality, and will decrease the number of solutions that fail at rollout. These processes will also serve to ensure that work is planned and performed accurately, decreasing the rework cycles and "emergency" hours that resources are currently required to put in at the end of a project. Ultimately, taking time to perform the reviews during the project lifecycle will result in predictable, repeatable results and in products that meet user requirements and enter production smoothly.
