When discussing whether to use automated or manual testing, the question is which tool or method fits your purpose, not which approach is best in all circumstances. Ultimately your choice should depend on context: the architecture of the system under test (SUT), whether low-level design specs are available, how your iterative process is organized (Agile or waterfall), the competency of your automation engineers, how many times the automation will be run, and whether well-designed test cases already exist. It’s not a zero-sum game: you can mix automated and manual testing; in fact, you can use automation to aid manual testing.


Not everything we call automation is worth the investment. Automating 500 tests doesn’t mean you’re achieving 10 times the coverage that automating 50 tests would provide. Those 50 tests could be core go/no-go quality-gate triggers, while the 500 could boil down to 2 meaningful tests plus 498 equivalence variations of them. Too many of us believe that numbers and metrics carry intrinsic meaning; they may have meaning, but only in context.
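
To make the equivalence point concrete, here is a minimal sketch in Python with pytest; the discount function and its 100-unit threshold are hypothetical, invented purely for illustration. Five hundred generated cases look impressive on a dashboard, but they exercise only two behaviors:

```python
# A minimal, hypothetical example (invented for illustration): 500 generated
# test cases that really exercise only two equivalence classes.
import pytest


def apply_discount(order_total: float) -> float:
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total


# "500 automated tests," generated from a range of order totals...
@pytest.mark.parametrize("total", [float(t) for t in range(1, 501)])
def test_discount_500_variations(total):
    expected = total * 0.9 if total >= 100 else total
    assert apply_discount(total) == pytest.approx(expected)


# ...but only two classes of behavior actually matter: below the threshold
# and at/above it. These two, sitting on the boundary, are the real quality gate.
@pytest.mark.parametrize("total, expected", [(99.0, 99.0), (100.0, 90.0)])
def test_discount_equivalence_classes(total, expected):
    assert apply_discount(total) == pytest.approx(expected)
```

Both files report “passing tests” in a metrics dashboard; only the second tells you anything about whether the gate should open.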


You need to prioritize what you cover, and throw out any tests whose failure would not be regarded as significant enough to pay an $85/hr developer to fix. Ask, too, whether the code has been developed with ease of testing in mind. Just as some tests cannot be done without automation, other tests are too difficult, and therefore too expensive, to automate for the automation to pay off.


Manual testing is – or can be – much more than the default approach for limited budgets. Manual testing is all about what you know, and what you can do with that knowledge. Some of the best testers I’ve had on my teams had production-level development skills, but they applied that knowledge to figuring out where vulnerabilities were likely to be found in end-to-end scenarios. Many years ago, when Microsoft began taking secure code seriously, their best tester (he found more bugs than anyone else, with the highest percentage of high-severity bugs) did all of his testing manually: he walked through the code to spot likely vulnerabilities, worked out which tests would prove the case, and then executed those tests by hand.


For some kinds of automation – again, context is everything – the very process of developing the automation is extremely effective at teaching testers how the SUT operates. I managed a team that included an engineer who, in the ‘90s, was instrumental in defining the limits of monkey testing. His ability to find bugs manually was sharpened by working out the best time in the SDLC to use monkey testing. Guessing won’t advance your cause. When are monkey testing’s results “actionable”? For example, when will monkey testing generate data that can help developers improve error handling? Once you’ve figured that out, your approach to other kinds of testing evolves toward a higher level of abstraction.
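
As a rough illustration of what “actionable” can mean here (this is my own sketch, not that engineer’s harness, and the handle_event entry point is hypothetical), a monkey test earns its keep when it records the seed, the input, and the failure, so a developer can reproduce the crash and improve the error handling:

```python
# A rough sketch of a monkey-test harness. handle_event is a hypothetical
# stand-in for the system under test; the point is that a run is only
# actionable if the seed, input, and failure are recorded for reproduction.
import random
import string
import traceback


def handle_event(event: dict) -> None:
    """Hypothetical SUT entry point; raises on an input it doesn't handle."""
    if event["type"] == "text" and not event["payload"]:
        raise ValueError("empty text payload not handled")


def random_event(rng: random.Random) -> dict:
    """Generate a random UI-style event from a seeded RNG."""
    return {
        "type": rng.choice(["click", "text", "scroll"]),
        "payload": "".join(rng.choices(string.ascii_letters, k=rng.randint(0, 8))),
    }


def monkey_run(seed: int, iterations: int = 1000) -> list:
    """Fire random events at the SUT and collect reproducible failure reports."""
    rng = random.Random(seed)
    failures = []
    for i in range(iterations):
        event = random_event(rng)
        try:
            handle_event(event)
        except Exception:
            # Everything a developer needs to reproduce and fix the failure.
            failures.append({
                "seed": seed,
                "iteration": i,
                "event": event,
                "trace": traceback.format_exc(),
            })
    return failures


if __name__ == "__main__":
    for failure in monkey_run(seed=42)[:3]:
        print(failure["iteration"], failure["event"], failure["trace"].splitlines()[-1])
```

Running with a fixed seed makes every failure reproducible, which is what turns random poking into data developers can act on.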


On the negative side, I’ve seen people who focus entirely on automation lose the ability to identify with the end user. That may not matter when building API-based automation, but someone needs to monitor the interface between the UI and the middleware – and I would much rather that engineer be as interested in the user experience as in the efficiency of the algorithms.