In the first article of this series, Software Testing: A Time-Wasting Headache or a Necessity?, we examined the importance of testing. Now we will explore the effectiveness of two basic testing methods: manual and automated.
As we stated in the first article, testing ensures that a process, system or software does what it is supposed to do, does what it needs to do, and does what your customer expects it to do.
At the core of all testing is the test case. A test case is a step, or combination of steps, that, when followed correctly, exercises a system or process. Testing determines whether the process works when the steps are followed correctly, and it also verifies that an incorrect step results in a failure.
A good example is the familiar website login screen, with two fields (username and password) and two actions (login and cancel). Testing this screen is not as simple as it may seem: there are 26 possible combinations of these four items, and testing must ensure that every one of them works correctly. Don't believe me? Here is one obscure test case: a user enters a correct username but no password and presses "Login" (or "OK"); the result should be a login failure. This test case confirms that the system checks the password and is not letting anyone log in without one.
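That obscure test case can even be written down as an executable check. Here is a minimal sketch in Python, where `attempt_login` and the `VALID_USERS` table are hypothetical stand-ins for whatever actually drives the real login screen:

```python
# Hypothetical stand-in for the real login logic: succeeds only when
# both a known username and its matching password are supplied.
VALID_USERS = {"alice": "s3cret"}

def attempt_login(username, password):
    return VALID_USERS.get(username) == password

# The "obscure" test case: correct username, no password -> must fail.
assert attempt_login("alice", "") is False

# The happy path must still work.
assert attempt_login("alice", "s3cret") is True
```

If the first assertion ever fails, the system is letting users in without checking the password, which is exactly the defect this test case exists to catch.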
What type of testing should you use? Read on.
Manual Testing: To Err is Human
“Manual” testing means just that: a person conducts each step of a test case and observes the results.
In the above example, manual testing can determine whether the system checks the user's password with a simple execution: enter a username, then click "Login." However, the testing does not stop there. The human tester must try every combination: correct username/no password, incorrect username/correct password, correct username/incorrect password, and so forth. Moreover, all 26 possible combinations in the test case must be run in multiple browsers on multiple platforms.
The risk of human error in manual testing is higher than in automated testing. Manual testers can experience what we call "snow blindness": losing track of where they are in the testing process. The constant repetition of nearly identical steps makes them lackadaisical about carefully observing the outcome. After numerous repetitions, human nature leads testers to anticipate a result, and performing the same sequence over and over reduces their focus on identifying problems. As a result, fewer bugs are caught during testing, and the defect rate of the delivered product goes up.
For this reason, testers are often rotated and replicate each other's testing to ensure consistency and improve accuracy. Obviously, this adds time and cost, especially for complex processes. However, depending on what you are testing and how many test cycles you are running, manual testing may still be the most cost-effective solution; more on this later.
The Cupcake Case
Some situations are suitable for manual testing, and some, especially those requiring judgment, require it. Consider the cupcake business from the previous testing article. They need a human tester to make sure each batch tastes good, looks great, and has uniform quality in every way. Initial tests would include:
- Look test – Does it meet the visual appearance requirement?
- Taste test – Does it taste right?
- Nonconformance testing – For example, was the correct sweetener used? Go to the kitchen and check the sweetener type and brand name to make sure they meet the requirements. Then check the amount used, and so on.
The bottom line: because manual testers are human, there will always be a possibility of error. This has been demonstrated in many studies, the best known of which is Raymond R. Panko's work on human error rates at the University of Hawaii (2008), which shows error rates ranging from roughly 0.5% for simple actions to 5% or higher for more complex tasks.
Automated Testing: Avoids Human Error, Saves Time and Money
Unlike a human being, software doesn’t get tired or inattentive. When testing requirements reach a certain complexity or scale, it is more effective and efficient for a person to perform system analysis and program automated test cases rather than to manually conduct the testing.
If correctly programmed, automated testing will almost always find more defects than manual testing in less execution time, increasing the quality of the final product. In many cases, the tests can be scheduled to run overnight, allowing a tester to return in the morning, confirm the results, and work with the team to remediate any defects.
Wright has found that, because of the efficiency and effectiveness automated testing brings to an organization, companies start seeing cost savings over manual testing after the third execution of the automated tests. This is key: we have observed that the break-even point for the cost of manual versus automated testing falls during the third test cycle (S. Wright, personal communication, September 2014).
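The break-even claim is easy to illustrate with back-of-the-envelope arithmetic. The dollar figures below are invented purely for illustration; the shape of the comparison is what matters: automation has a high one-time setup cost, then a much lower cost per run, so its cumulative cost line eventually crosses below the manual one.

```python
# Illustrative (invented) numbers: manual testing costs the same every
# cycle; automation costs a lot once to build, then little per run.
MANUAL_COST_PER_CYCLE = 1000      # assumed cost of one manual test cycle
AUTOMATION_BUILD_COST = 2500      # assumed one-time scripting effort
AUTOMATED_COST_PER_CYCLE = 100    # assumed cost to run/maintain per cycle

for cycle in range(1, 6):
    manual_total = MANUAL_COST_PER_CYCLE * cycle
    automated_total = AUTOMATION_BUILD_COST + AUTOMATED_COST_PER_CYCLE * cycle
    marker = " <- automation cheaper" if automated_total < manual_total else ""
    print(f"cycle {cycle}: manual={manual_total}, automated={automated_total}{marker}")
```

With these assumed numbers the crossover lands on the third cycle, matching the break-even point described above; your own figures will shift exactly where the lines cross, but not the overall pattern.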
So, Manual Testing vs. Automated Testing?
Testing is a balancing act: ensuring the best level of quality given the time and budget available. Both manual and automated testing have a place. We cannot say manual testing is without value, nor can we say automation is always the best approach. It really depends on your needs and what you are trying to test.
Thus, we recommend evaluating the pros and cons of both manual and automated testing for your product, and then selecting the option that makes sense for you and your organization.
Where Do We Go From Here?
We have just explored the basics of manual and automated testing. As you can see, testing is complicated and needs to be tailored to the organization and the product being tested. Our next testing article will move beyond manual and automated testing and examine performance and security testing.
Do you have questions about software testing? Post a comment or email us; we will be glad to correspond with you.
- Panko, R. (2008). Human Error Website. Retrieved 2014, from http://panko.shidler.hawaii.edu/HumanErr/
- Software Testing. (2014). Retrieved 2014, from http://en.wikipedia.org/wiki/Software_testing
- Software Testing Help. (n.d.). Types of Software Testing. Retrieved 2014, from http://www.softwaretestinghelp.com/types-of-software-testing/
- Wright, S. (Presenter). (2012). Justifying the Cost of Quality [PowerPoint presentation].
- Borysowich, C. (2010). Deliverable: Quality Management Plan. Retrieved 2014, from http://it.toolbox.com/blogs/enterprise-solutions/deliverable-quality-management-plan-30210