Software Testing Plan

for

UTAS Reconnaissance Mission Debrief System

Version 1.0 (to be verified)

Prepared by Matthew Alioto, Jason Buoni, Harry Chris, Joshua Gagne

05FEB2013


Revision History

1 Overview

2 Testing Strategy

2.1 Acceptance Testing

2.2 Integration Testing

2.3 Unit Testing

3 Entry and Exit Criteria

3.1 Acceptance Test

3.1.1 Shipping/Live Release Entry Criteria

3.1.2 Shipping/Live Release Exit Criteria

3.2 Integration Test

3.2.1 Integration Test Entry Criteria

3.2.2 Integration Test Exit Criteria

3.3 Unit Testing

4 Defect Tracking

5 Appendix

5.1 V model diagram


Revision History

Name     Date        Reason For Changes                                           Version
Team     13DEC2012   Initial Setup                                                1.0
Team     13DEC2012   Draft S1 - S3                                                1.1
Jason    19JAN2013   Edit S2 - S3                                                 1.2
Matt     23JAN2013   Integration test data                                        1.21
Josh     04FEB2013   Formatting & flesh out                                       1.3
Matt     05FEB2013   Revised testing methodology & added integration test info    1.4
Josh     14FEB2013   Defect tracking plans                                        1.41


1 Overview

RMDS will be tested using the “V” model[1], with distinct unit, integration, and acceptance testing phases. This model of testing works well with the incremental waterfall process.

Acceptance tests will be tied to the finalized Software Requirements Specification. Integration tests will be tied to the finalized product architecture. Test coverage metrics and other aspects of specific unit testing policy will be tied to the detailed design of architectural modules. A test plan will be maintained through creation of the SRS, Architecture, and Design documents to facilitate the generation of useful test cases.

During the requirements gathering phase, the team will develop use cases along with the acceptance criteria for each requirement. Use cases and formal requirements may be used as impromptu acceptance tests until a separate acceptance test document is developed. Every functional requirement will have an associated acceptance test case.

Integration testing will be critical during the implementation phase. Areas where significant testing will be needed are identified during architectural design and are documented below. It is critical to update the test plan as changes are made to the architecture. Integration tests will be conducted after the unit tests for a module have passed.

Unit testing will be the first form of testing implemented. Unit tests will be written with JUnit and djUnit by the developer responsible for a given module. Unit test cases will partially be developed based on detailed class design, and will partially be developed by individual programmers as aids for the completion of their assigned modules.


2 Testing Strategy

Unit and Integration testing will be performed continuously, with Acceptance testing performed in a distinct phase at the end of a given increment. Code reviews will be conducted at the beginning of the testing phase. Unit and Integration tests should consume most of the time dedicated to testing, and will be logged under “Development Testing” on developer timesheets. Unit tests will be re-run as regression tests both prior to Integration testing and as a matter of course. Team members will be responsible for committing clean, tested code.

2.1 Acceptance Testing

There will be a distinct Acceptance testing phase before the release of each increment. Since Acceptance testing will not be performed until the conclusion of the first increment in week 2 of the second quarter, the SRS document will serve as an informal Acceptance test guideline until formal tests are developed and documented.

Tests will be based on functional requirements. System features with the highest priority will be tested first. Pass/fail criteria for each requirement will be determined by the use case generated from that requirement. Traceability will be documented in the Acceptance test document; requirements and use cases will be formally tied to a given test, with 100% coverage in each direction.
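For illustration only, the traceability mapping would take roughly the following form, with every requirement reaching at least one Acceptance test and every test tracing back to at least one requirement. All identifiers below are hypothetical placeholders, not actual RMDS artifacts:

    Requirement   Use Case(s)      Acceptance Test
    REQ-01        UC-01            AT-01
    REQ-02        UC-02, UC-03     AT-02
    REQ-03        UC-03            AT-02, AT-03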

Defects detected during acceptance testing will be given the highest priority among bug fixes. Priority between acceptance defects will be based on the priority of the associated formal requirement.

Due to the sensitivity of the data, testing will be limited to members of the team along with members of UTAS chosen by the sponsor. Realistically, GoodShipAces will have sole responsibility for performing Acceptance tests before passing RMDS to the sponsor.

2.2 Integration Testing

Integration testing will be continuous and represents the primary testing effort for this project. As modules are completed, Integration testing will ensure that they function correctly as parts of the RMDS as a whole. Testing will be conducted using the “sandwich” model, in that integration will be neither exclusively bottom-up nor top-down. Integration tests will be developed using the same technology as Unit tests, i.e. JUnit and djUnit.

There will be two levels of Integration tests. One level concerns testing of various problem points in the RMDS architecture which involve concurrency or other complex interactions. The second level concerns testing of completed Panels and their integration with the RMDS as a whole.

2.2.1 High Priority Interactions

Integration test modules will be directly tied to the project Architecture. Test modules are specified for areas which involve complex interactions or concurrency and will be a focus of special attention above and beyond ordinary Unit tests.

These Integration “pain points” all represent a high priority. An illustrative test sketch for the first interaction appears after the list.

1. Test: Extraction of Mission Data Extract to Panel Data
   Scope: Per Panel
   Modules: PanelDataExtractor, Context

2. Test: Loading of Panel Data into PanelView
   Scope: Per Panel
   Modules: PanelDataParser, DataAccessor, PanelController, PanelUpdateRunnable, PanelView

3. Test: Marshalling Panel Data into WorldWind domain objects
   Scope: Per Panel requiring WorldWind
   Modules: PanelDataParser, DataAccessor

4. Test: Updating PanelView from Clock
   Scope: Per Panel
   Modules: Clock, Context, PanelView, PanelUpdateRunnable, PanelController

5. Test: Updating Clock from Timeline
   Scope: System
   Modules: Clock, Context, TimelinePanelController

6. Test: Synchronizing Panel Data access from PanelExtractor and PanelDataParser
   Scope: System
   Modules: PanelExtractor, PanelDataParser, DataAccessor
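As an illustration only, the sketch below shows how the first interaction (extraction of Mission Data Extract to Panel Data) might be exercised with JUnit. The module names PanelDataExtractor and Context come from the architecture; every method name shown (loadMissionData, extractPanelData, getPanelData) is a hypothetical placeholder until the detailed design fixes the real signatures.

    import static org.junit.Assert.assertNotNull;

    import org.junit.Before;
    import org.junit.Test;

    // Illustrative Integration test for interaction 1: extraction of Mission Data
    // Extract to Panel Data. PanelDataExtractor and Context are architecture
    // modules; loadMissionData, extractPanelData, and getPanelData are
    // hypothetical method names used only for this sketch.
    public class PanelDataExtractionIntegrationTest {

        private Context context;
        private PanelDataExtractor extractor;

        @Before
        public void setUp() {
            context = new Context();
            // Hypothetical fixture: a small, non-sensitive sample mission extract.
            context.loadMissionData("testdata/sample_mission_extract.xml");
            extractor = new PanelDataExtractor(context);
        }

        @Test
        public void panelDataIsAvailableInContextAfterExtraction() {
            extractor.extractPanelData();
            assertNotNull("Panel Data should be present in the Context after extraction",
                    context.getPanelData("TimelinePanel"));
        }
    }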

2.2.2 Panel Integration Testing

Once the overall system architecture is functioning with a test Panel, implementation of real Panels will begin. Individual Panels will need to be tested beyond Unit tests of their various components, so Integration tests will be developed to discover any issues with adding Panels to the application. Panel Integration tests will be completed before the Acceptance test phase. The difference is that Panel Integration tests attempt to discover programming errors and ensure that the Panel functions with RMDS as a whole, while Acceptance tests cover how well the Panel meets its requirements.

2.3 Unit Testing

Unit tests will be written on a per-developer, per-module basis. There is no requirement for every method to be unit tested; the team will generally not focus on testing trivial methods, such as getters and setters. Unit testing will be done throughout the development phase of each increment. Unit tests will be developed in JUnit and djUnit and will be completed alongside development.

Unit testing pass/fail criteria are determined by the number of tested methods that pass. The team will monitor the percentage of code covered by unit tests, but coverage will not be a determining factor in unit testing pass/fail criteria. Regression testing should reveal nothing but green before modules or Panels are sent up for Integration testing.
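As a minimal illustration of the per-module Unit tests described above, the sketch below tests a single hypothetical helper; MissionTimeParser and its parse method are placeholders invented for this example and are not part of the RMDS design.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Minimal per-module unit test sketch. MissionTimeParser and parse(String)
    // are hypothetical examples, not classes from the RMDS design.
    public class MissionTimeParserTest {

        @Test
        public void parsesZuluTimestampIntoSecondsPastMidnight() {
            MissionTimeParser parser = new MissionTimeParser();
            // 01:02:03 Zulu is 3723 seconds past midnight.
            assertEquals(3723, parser.parse("01:02:03Z"));
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsMalformedTimestamp() {
            new MissionTimeParser().parse("not-a-time");
        }
    }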

Priority of bugs detected during Unit testing will be based on the number of feature requirements they impede. Typically, Unit testing bugs will be given higher priority than Integration testing bugs. Defects that affect the system as a whole will be given higher priority than regressions affecting a single Panel.

3 Entry and Exit Criteria

3.1 Acceptance Test

Informal test cases will be created as a byproduct of formal requirement and use case development in the requirements gathering phase. Formal Acceptance test cases will be defined prior to the Integration test stage of a given increment.

3.1.1 Shipping/Live Release Entry Criteria

Before Acceptance testing can begin, the following criteria must be met:

3.1.2 Shipping/Live Release Exit Criteria

To complete the Acceptance testing phase, the following criteria must be met:


3.2 Integration Test

3.2.1 Integration Test Entry Criteria

Before Integration testing can begin, the following criteria must be met:

3.2.2 Integration Test Exit Criteria

To complete the Integration testing phase, the following criteria must be met:

3.3 Unit Testing

Unit tests will be written using JUnit and analyzed using djUnit. Unit tests will not be formally specified in the same sense as Integration tests; they are the responsibility of individual developers. All tests performed on a given module must pass in order for implementation to be considered complete. In addition, Unit tests will reveal broken or low-quality code, which should not be committed.

The full Unit test suite for a module must be run successfully before submitting a module for Integration testing. Note that there is no formal “Unit testing phase”.
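One lightweight way to make the full suite for a module easy to run is a JUnit 4 suite class per module, sketched below as an assumption rather than a mandated convention; the listed test classes are hypothetical examples.

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Sketch of a per-module suite bundling that module's unit tests so the
    // whole set can be run in one step before the module is submitted for
    // Integration testing. The listed test classes are hypothetical examples.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
            MissionTimeParserTest.class,
            PanelDataParserTest.class
    })
    public class TimelinePanelUnitTestSuite {
        // No body needed; the annotations define the suite.
    }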

3.3.1 Unit Test Entry Criteria

Before Unit testing can begin, the following criteria must be met:

3.3.2 Unit Test Exit Criteria

For Unit testing to be considered successful:


4 Defect Tracking

Defects will be tracked using a third-party bug tracking tool. The actual tool is TBV; FreeBugBase is the current candidate. The tool would be used primarily for clarity, with each entry containing a placeholder description, the criticality, and the steps to reproduce the bug. It will let the team collect bugs and later submit them on request.

Newly added bugs will be given a priority based on the number of feature requirements the bug impedes. Bugs that are regressions will be marked as such. The team will estimate the effort required to repair and retest the bug.

The team will discuss which bugs are in and out of scope during each Tuesday meeting. The team will determine the scope of each bug on a case-by-case basis. Factors for each decision include the priority of the bug along with the effort required to fix it. Regression bugs will get special consideration, as they may have detrimental effects on requirements that are not a focus of the current iteration. The team will explain to the sponsor why any bugs were moved out of scope. The sponsor will have the ability to agree or disagree with the team's decision on each bug, but the team retains the right to refuse to bring into scope any items that they see as detrimental to the project schedule, or items whose risks are deemed too high.


5 Appendix

5.1 V model diagram


[1] See the V model diagram in Appendix 5.1