Unfortunately, in the real world you will never see a project perfectly follow any of the development models. You will never be given a thoroughly detailed specification that perfectly meets the customer’s needs, and you will never have enough time to do all the testing you need to do. It just doesn’t happen. This chapter will help you understand that software testing doesn’t always go perfectly, and help you prepare for that eventuality.
So I RTFA, and OK, it’s a chapter that occurs quite early in the book. I’ll cut it some slack… but not much.
What I find most disappointing is the graphic that maps the intersection of two curves, with ‘under-testing’ on the left and ‘over-testing’ on the right.
With unit testing techniques (JUnit springs to mind, but there’s more to UTT than JUnit alone) and test suites that run on a regular (at least weekly, preferably daily) basis, it should be possible to find many more bugs than that graph implies, especially when you have full-time testing staff.
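For anyone who hasn’t used it, a regularly-run JUnit test doesn’t have to be elaborate. Here’s a minimal sketch; the PriceCalculator class and its 10% bulk-discount rule are made up purely for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical order-pricing code plus its tests, kept in one file
    // so the sketch is self-contained.
    public class PriceCalculatorTest {

        // Class under test. The discount rule is an assumption for this example.
        static class PriceCalculator {
            double total(int units, double unitPrice) {
                double total = units * unitPrice;
                // Assumed rule: orders of 100 units or more get 10% off.
                return units >= 100 ? total * 0.9 : total;
            }
        }

        @Test
        public void discountIsAppliedToLargeOrders() {
            assertEquals(900.0, new PriceCalculator().total(100, 10.0), 0.001);
        }

        @Test
        public void smallOrdersPayFullPrice() {
            assertEquals(50.0, new PriceCalculator().total(5, 10.0), 0.001);
        }
    }

Wire a suite like that into a nightly build and regressions show up the morning after they’re introduced, not weeks later.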
The best book I have read on software testing is “How to Break Software” by James Whittaker. It’s a must-read for developers too. More information at http://www.howtobreaksoftware.com/how_to_break_software_book_detail…
One issue that limits the effectiveness of test cases is code coverage. Most test cases cover less than half the code they test, and usually with a severely limited set of execution contexts for each code block. Determining code coverage for a test case is often very difficult, and developing a tool to measure it is harder still. A team I worked with managed to write a code coverage tool for kernel and userspace code. Without compiler support for coverage statistics (GCC has it; most other compilers don’t), it proved to be much harder than anticipated. Most of the difficulty was in measuring code coverage in the kernel.
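To make the coverage problem concrete, here’s a minimal JUnit sketch (the classify method and its error-code semantics are invented for illustration): the lone test exercises only one of three branches, which is exactly the kind of gap a coverage tool such as gcov for C or JaCoCo for Java would report.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CoverageGapExample {

        // Hypothetical function under test with three branches.
        static String classify(int errorCode) {
            if (errorCode == 0) {
                return "ok";
            } else if (errorCode < 0) {
                return "driver fault";
            } else {
                return "user error";
            }
        }

        // This suite only ever exercises the first branch, so a branch-coverage
        // report would show roughly one third of the branches executed. The same
        // pattern, at subsystem scale, is what produces numbers like those below.
        @Test
        public void reportsOkForZero() {
            assertEquals("ok", classify(0));
        }
    }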
What we found was that our highest-priority subsystems (from a test-case standpoint) had about 50-60% coverage at best. The conclusion is that synthetic dynamic testing is expensive, difficult, and often far from comprehensive. For software development on a budget, the best bet is static analysis and a good beta-testing community. A bunch of “idiots” running your code is a much better test than anything a team of developers can write.