Teaching to the Test, or Validating the Application?

When it comes to standardized testing, the results invite a familiar accusation. Are teachers “teaching to the test”, or does the test validate the teacher’s performance? Similar questions arise when it comes to software test scripts.

• Do test steps follow the order of operations users would follow, or the path that is simplest for the test developer?
• Are test scripts written to test every function, or those that are easiest to test?
• Are test plans written so that the number of steps is minimized, or written to cover every major contingency?
• Is the test written to cover the “happy path” and get great results, or to generate the situations where the software is likely to fail?
• Is the test rigorous enough to separate the applications that are ready for the real world from those that aren’t?
• Are test plans written to be easy to follow, or are they difficult to understand?
• Are you testing what matters to users, administrators or both?
• Are you updating the test plan with each version, or reusing it until a problem slips through because it was never tested before a software version release?
• Are you writing test plans to meet IT standards or testing requirements, or both?
• Are you writing test plans to meet IT standards that may not even be necessary?
• Does the software test plan take IT security into account?
• Are metrics based on meaningful measures like the number of defects found or the percentage of functionality tested, or on something easily manipulated, like the number of test plan steps?
• Does the test reflect today’s criteria? Are you reusing test plans written for installed software even though you’ve migrated to a Software as a Service model?
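The happy-path question above is the easiest to illustrate in code. The sketch below, a hypothetical example with an invented `divide` function, contrasts a happy-path test with the error-path test that a “teach to the test” script tends to omit:

```python
def divide(a, b):
    """Divide a by b, raising ValueError on a zero divisor."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def test_happy_path():
    # The easy case: valid inputs, expected result. This is the test
    # that always passes and makes the metrics look good.
    assert divide(10, 2) == 5

def test_error_path():
    # The case a happy-path-only script skips: invalid input that the
    # software is supposed to reject cleanly rather than crash on.
    try:
        divide(10, 0)
    except ValueError:
        pass  # expected: the error was raised and handled
    else:
        raise AssertionError("expected ValueError for zero divisor")

test_happy_path()
test_error_path()
```

A test plan that stops at `test_happy_path` validates nothing about how the application behaves when the real world hands it bad input; the second test is where readiness is actually proven.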
