ASKSG
Name | Date | Reason For Changes | Version
Christopher Wood | 2/15/13 | Stubbed out the document with the required sections | 0.1
Christopher Wood, Ryan Knapp | 3/5/13 | Added content in preparation for phase 3 | 0.2
ASKSG will be tested at three different levels of granularity. Unit testing will be done to check the correctness of individual modules in both the backend and frontend of the system, though the increased complexity and large number of dependencies in the backend make it difficult to unit test anything but the analytics portion of the system. Much of the work performed by the data collection component centers on managing integration with the external services, and relies on loading configuration data from the database, initializing external clients (proxies), and persisting response data correctly. For this reason, integration testing will be conducted among services in the backend to ensure they are implemented correctly. The last tier of testing will be usability testing, conducted during the third and final phase of the project. This will be done to ensure our system is easy for SG representatives to use and operate, thus supporting fast turnaround times for student inquiries.
In addition to the tests automatically generated by Spring Roo, we will also write unit tests for the Analytics module. These unit tests will determine whether the semantic data is correctly calculated, using small samples of conversation data items and semantic scores returned from Chatterbox.
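As a rough illustration, the sketch below shows the shape such a test might take, using JUnit and Mockito. SemanticAnalyzer, ChatterboxClient, and scoreFor are hypothetical names standing in for our actual analytics classes, and the mean-score calculation is only an assumed example of a semantic computation.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.Arrays;
    import org.junit.Test;

    // Hypothetical sketch: SemanticAnalyzer and ChatterboxClient stand in
    // for our real analytics classes; the mean-score behavior is assumed.
    public class SemanticAnalyzerTest {

        @Test
        public void computesScoreFromSmallConversationSample() {
            ChatterboxClient chatterbox = mock(ChatterboxClient.class);
            when(chatterbox.scoreFor("great service")).thenReturn(0.9);
            when(chatterbox.scoreFor("slow response")).thenReturn(0.2);

            SemanticAnalyzer analyzer = new SemanticAnalyzer(chatterbox);
            double score = analyzer.scoreConversation(
                    Arrays.asList("great service", "slow response"));

            // Expect the mean of the two mocked Chatterbox scores.
            assertEquals(0.55, score, 0.001);
        }
    }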
In a similar vein, we will write unit tests for the JavaScript code responsible for parsing the semantic analysis data and generating the appropriate figures. These tests will be written against the Angular.js framework, and to support this testability we will follow the controller and model development guidelines set out by the framework maintainers [1]. In short, this means keeping the DOM manipulation used to generate the figures with D3.js well separated from the controller logic that drives it, and leveraging Angular directives for DOM modification, which makes the code more amenable to unit testing.
As previously stated, the services module in the backend is not a suitable candidate for unit testing: the services depend on far too many inter-module and external resources to operate in isolation. We will therefore rely solely on integration tests to fill this gap.
Integration testing will be performed for the following major backend tasks:
Service creation, operation, and persistence
We will write integration tests that validate our ability to call into the external services that serve as data sources for our system. These tests will exercise client construction, configuration hydration, service and configuration selection, and response persistence, and will also validate the service response where appropriate.
To facilitate service integration testing, we will take advantage of Mockito and Awaitility. Mockito will allow us to isolate external components during integration testing, while Awaitility will let us wait cleanly for asynchronous events to finish processing, an important concern since many of our service calls run in an asynchronous context.
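A minimal sketch of how the two libraries combine is shown below. ServiceProxy, DataPullService, MessageRepository, and TestData are hypothetical stand-ins for our service, persistence, and fixture classes, and the ten-second timeout is an arbitrary example.

    import static com.jayway.awaitility.Awaitility.await;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.concurrent.Callable;
    import java.util.concurrent.TimeUnit;
    import org.junit.Test;

    // Hypothetical sketch: ServiceProxy, DataPullService, and
    // MessageRepository are stand-ins for our real classes.
    public class ServicePullIntegrationTest {

        @Test
        public void persistsResponsesFromAnAsynchronousPull() {
            ServiceProxy proxy = mock(ServiceProxy.class);
            when(proxy.fetchNewMessages()).thenReturn(TestData.sampleResponse());

            final MessageRepository repository = new MessageRepository();
            DataPullService puller = new DataPullService(proxy, repository);
            puller.startAsyncPull(); // returns immediately; work happens on another thread

            // Awaitility polls until the asynchronous pull has persisted
            // something, failing the test if nothing arrives in ten seconds.
            await().atMost(10, TimeUnit.SECONDS).until(new Callable<Boolean>() {
                public Boolean call() {
                    return repository.count() > 0;
                }
            });
        }
    }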
We will also test failure scenarios for our services, covering merging, availability, backoffs, and failure to persist response data. Merging is discussed in more detail below. Availability testing will involve mocking out client objects and simulating failure responses to ensure that retries are performed correctly. Backoff and availability concerns are already partially mitigated by many of the provided clients; nevertheless, we will verify where appropriate that we back off correctly on service errors and fail gracefully after an appropriate amount of time. We will additionally write tests to ensure that we resume connection attempts for the same set of data after an appropriate interval has passed. If a message repeatedly fails to parse or persist, we must ensure that it is skipped after a set number of retries. Finally, we will test the scenario where we fail to persist a message that was sent out through our service, and ensure that we pick it up again when merging in new content.
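As an example of the availability scenario, the sketch below simulates two failed calls followed by a success and verifies the retry count. ServiceUnavailableException and pullWithRetries are hypothetical names; the real failure types and retry entry points depend on the individual clients.

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.times;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    // Hypothetical sketch: the class and method names are stand-ins.
    public class ServiceBackoffTest {

        @Test
        public void retriesAfterFailuresAndEventuallySucceeds() {
            ServiceProxy proxy = mock(ServiceProxy.class);
            // Fail twice, then succeed, simulating a temporarily
            // unavailable external service. The exception is assumed
            // to be a RuntimeException here.
            when(proxy.fetchNewMessages())
                    .thenThrow(new ServiceUnavailableException())
                    .thenThrow(new ServiceUnavailableException())
                    .thenReturn(TestData.sampleResponse());

            DataPullService puller = new DataPullService(proxy);
            puller.pullWithRetries(3);

            // Two failures followed by the successful third attempt.
            verify(proxy, times(3)).fetchNewMessages();
        }
    }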
Spring interceptor channels
Spring interceptor channels are configured for each of the Spring Social API endpoints. Similar to the model used internally (the union of service and provider configuration identity used as an identifier), Spring uses the handle for a specific service bound to a specific channel as a unique endpoint. The interceptor channels also pick up new emails over IMAP, and when new messages are sent through the service, they travel back out over the defined interceptor channels. Testing in this area is related to the configuration of service objects: it is critical that we maintain an up-to-date set of endpoints, both to avoid making requests against deleted service configurations and to ensure we pick up newly added ones. Our testing here will entail tearing down and adding services, verifying that the changes are picked up by the data pull process, and verifying that open message channels are added and removed accordingly.
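The sketch below captures the shape of such a teardown-and-addition test. ServiceConfigRepository, ChannelRegistry, and ServiceConfig are hypothetical names for the pieces that map persisted service configurations onto open channels.

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Hypothetical sketch: the repository and registry classes are
    // stand-ins for whatever maps service configurations to channels.
    public class EndpointConfigurationTest {

        @Test
        public void channelsTrackAddedAndRemovedConfigurations() {
            ServiceConfigRepository configs = new ServiceConfigRepository();
            ChannelRegistry registry = new ChannelRegistry(configs);

            // A newly added configuration must be picked up by the pull process.
            configs.add(new ServiceConfig("twitter", "asksg-handle"));
            registry.refresh();
            assertTrue(registry.hasOpenChannelFor("twitter", "asksg-handle"));

            // Deleting a configuration must tear its channel down so we never
            // issue requests against a removed endpoint.
            configs.remove("twitter", "asksg-handle");
            registry.refresh();
            assertFalse(registry.hasOpenChannelFor("twitter", "asksg-handle"));
        }
    }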
Spring interceptor chains
The analytics portion of the application will employ interceptor classes, arranged in a chain-of-responsibility pattern, to pick up messages added by the data consumption components. Each element of this chain will process messages in order, primarily in an event-driven, annotation-based style, and will be unit tested individually. Testing of the chain itself will primarily be arranged around its dependent components, since we lack some of the stronger guarantees provided by a more comprehensive architectural pattern. Additional testing will ensure that messages dropped by one element are still passed on to the next element of the chain, and reprocessed as appropriate in the event of local or system-wide failures.
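A minimal sketch of the intended wiring follows, assuming hypothetical MessageInterceptor and Message types; the real chain elements will be Spring-managed beans driven by event annotations rather than a hand-rolled base class.

    // Hypothetical sketch of the chain-of-responsibility wiring.
    public abstract class MessageInterceptor {

        private MessageInterceptor next;

        public MessageInterceptor setNext(MessageInterceptor next) {
            this.next = next;
            return next;
        }

        public final void intercept(Message message) {
            try {
                process(message);
            } catch (RuntimeException e) {
                // A local failure must not break the chain; the message is
                // still handed to the next element and can be reprocessed
                // later by a system-wide recovery pass.
            }
            if (next != null) {
                next.intercept(message);
            }
        }

        protected abstract void process(Message message);
    }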
Merging conversation data
Merging occurs when it is necessary to sync new updates into an individual conversation; these may include out-of-band updates from the account owner, which we need to take particular care to identify as such. For more complex services such as Reddit, which allow multiple paths or threads of conversation, we must accommodate the insertion of messages that arrive out of chronological or contiguous order. We will construct a variety of tests focused on unparsable messages, linear and nonlinear updates, and updates with some information removed (as if a message were deleted in the external source).
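The sketch below illustrates one nonlinear case: a late reply to an early message in a threaded, Reddit-style conversation. ConversationMerger, Conversation, Message, and the TestMessages fixture helpers are hypothetical names for our merging code and test data.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Hypothetical sketch: merger, conversation, and fixture names are
    // stand-ins for our real merging code and test data.
    public class ConversationMergeTest {

        @Test
        public void insertsOutOfOrderReplyIntoTheCorrectThread() {
            Conversation existing = TestMessages.threadedConversation(3);

            // A reply to the first message arrives after later messages
            // have already been persisted, i.e. out of chronological order.
            Message lateReply = TestMessages.replyTo(existing.messageAt(0));
            Conversation merged = new ConversationMerger().merge(existing, lateReply);

            assertEquals(4, merged.messageCount());
            // The late reply attaches beneath its parent message rather
            // than being appended to the end of the conversation.
            assertTrue(merged.messageAt(0).replies().contains(lateReply));
        }
    }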
Usability testing will be partitioned into two major tasks. The first task, or set of tests, will be conducted over the span of four weeks using volunteers from the RIT Student Government. The purpose of this first set of tests is to ensure the usability of the main dashboard interface, both for managing conversation data (i.e., viewing and replying to messages) and for generating semantic analysis reports.
The second task will be conducted over a shorter span of two weeks and will focus on the public-facing website available to the RIT community. While this website exposes little functionality, the testing will be useful from an aesthetic perspective: it will tell us whether the system is appealing to new users in the RIT community.
Each of these tasks will follow a similar iterative process. On a weekly basis, we will select a random group of volunteers for the test. For the first task, we will then select a subset of the product features outlined in the Requirements Specification Document. For each of these features, we will present the volunteer with a brief description of the task they need to accomplish, and then prompt them to complete it. We will record the total time taken and the number of mistakes made while completing the task, and log any comments the volunteers verbalize during the test. To gauge recall, we will repeat the tests with the same users across multiple test sessions and compare the timing and accuracy results between sessions. All of this information will be used to identify difficult or confusing system features.
In addition, we will distribute a survey to all volunteers at the end of each test session. The questions on this survey will be entirely qualitative and tailored to address the quality of the system with regard to the specific tasks selected for the session. We will also ask the following general questions:
The test sessions for the second task will be slightly different. In particular, we will distribute a survey containing qualitative questions about the look and feel of the ASKSG website and respondents' personal opinions about using it. We will use this information to iteratively refine our UI toward a more aesthetically pleasing design for the entire RIT community. Since this type of usability testing is conducted primarily through a survey, we will not formally monitor each respondent as they use the public-facing website.
[1] "AngularJS — Superheroic JavaScript MVW Framework." 2010. 10 Mar. 2013 <http://angularjs.org/>