Planning to Test

Randomly flicking through screens and reports, 'giving them a go', is not an effective way to find bugs or confirm functionality. User Acceptance Testing should be structured and documented. The first part of User Acceptance Testing is the creation of the 'test plan' document.

The test plan lists all of the functionality that you intend to check, how you will check it, and the results that will confirm that the test was successful.

The test plan typically contains the following columns:

  • Test number (just a sequential number for reference)
  • Description (a short description of the functionality being tested)
  • Process (a description of what you are going to do to perform the test)
  • Result (expected output or effect)
  • Date (the date this test was last performed)
  • Who (who performed this test last)
  • Pass (Yes or No)
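
To make the structure concrete, a hypothetical test plan entry might look like the following (the details, including screen names, are illustrative only and not taken from your specification):

  • Test number: 12
  • Description: Create a new Councillor
  • Process: Open the Councillor screen, add a new Councillor with a name and division, and save
  • Result: The new Councillor is saved and can be selected as an author on a report
  • Date: (blank until the test is performed)
  • Who: (blank until the test is performed)
  • Pass: (blank until the test is performed)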

An easy way to start building a list of tests in the test plan is to look at each element of functionality in the specification. For example, if you know you are meant to be able to create a 'Councillor', add as many tests around this process as you can think of. You can also create tests by thinking of end-to-end processes. An example of the thought process might be "To create a report I know I need to create authors, divisions and meeting dates". This produces a number of individual tests leading up to creating the report.

For every item of functionality you identify for testing, try to think of multiple 'positive' and 'negative' test plan items. A positive test is one where you expect the software to accept all the input in a 'typical world' and produce a good result. An example would be "Can I create a log entry?" A negative test is one where you expect the software to be challenged by the input or action; you are testing whether the function handles a bad situation gracefully. An example would be "What happens if I delete a log entry that is related to another log?".
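
As an illustration only, a set of positive and negative tests around the 'Councillor' example above might include: "Can I create a Councillor with a name and division?", "Does the new Councillor appear as a selectable author on a report?", "What happens if I try to save a Councillor with no name?" and "What happens if I delete a Councillor who is the author of an existing report?". The exact tests will depend on your specification.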

Planning to Record Issues

To allow Infocouncil to effectively understand, reproduce and correct an issue (i.e. a bug or malfunction), we need a certain level of information. To collect this information in a structured way we suggest creating an 'issue register'. Items in the issue register can be created as User Acceptance Testing is performed.

Again, the issue register can be a simple table with the following columns:

  • Issue number (just a sequential number for reference)
  • Test number (reference back to the test plan)
  • Description of what went wrong or what is required
  • Reproduction steps (how to reproduce the error or understand the requirement)
  • Specification reference (if this is an item from the specification, indicate where to find it)
  • Urgency level from 1 (correction is urgent) to 3 (correction would be good but can wait)
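
For example, a hypothetical issue register entry (the details are illustrative only) might read:

  • Issue number: 5
  • Test number: 12
  • Description: Saving a new Councillor without a division produces an unhelpful error message
  • Reproduction steps: Open the Councillor screen, enter a name but no division, and save
  • Specification reference: (not applicable)
  • Urgency level: 2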