"Loading..."

What To Automate

Everywhere I turn these days, it seems like people are automating simply to say they automate. In a previous article titled Do You Really Need Automation, we explored the fact that many organizations aren't gaining any tangible benefit from automation, but we didn't explore how to target automation to make sure it adds value. We will do that here.

What You Should Automate

Good automated test cases should serve three purposes: save time, increase quality by reducing tester error in difficult-to-test areas, and increase the repeatability/consistency of testing. The best cases cover all three. The following are areas where automation fits well the majority of the time.

Data Generation

Your test case can only be as good as the data you feed it. Generating data manually is inefficient and time-consuming. Most software companies, even pure black-box manual shops, could save significant amounts of time just by automating the generation of test data. Strangely, though, data generation is almost completely overlooked as a valuable form of automation and is often relegated to a setup step in a UI automation tool. Even something as simple as reusable INSERT SQL queries shared amongst the QA team can save massive amounts of time. In fact, I consider SQL data generation so fundamental to QA that I have taught every single QA person I have worked with, from n00bs to seniors, how to generate and check data via SQL and other database technologies, assuming the projects we are working on use them. Why? The database and data structure don't change very often, and when they do, they rarely change radically. Also, damn near every test requires at least SOME data to run, so the payoff in time savings is almost immediate, unlike with other types of automation.

Need to test usernames and passwords? Create a script that generates some users with normal and funky characters! Need to test an analytics system that requires years' worth of financial data but has to ship next week? Write a looping script that generates the data you need! Though it's time-consuming, I have even written apps with simple web interfaces that the least technical QA and technical sales team members can use to add specific types of data to various systems; the benefits have almost always outweighed the cost in the long run. Data generated for manual test cases can also be leveraged later as setup steps feeding automated test cases, once time allows for automated test case creation.
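To make the username example concrete, here is a minimal Python sketch that spits out reusable INSERT statements for a hypothetical users table. The table name, columns, and the particular "funky" characters are all assumptions; adapt them to your own schema.

```python
# Minimal sketch: emit INSERT statements for a hypothetical "users" table.
# Table name, columns, and character picks are assumptions, not a real schema.
import itertools

NORMAL = ["alice", "bob123", "test_user"]
FUNKY = ["o'brien", 'quote"user', "unicode_Ωßç", "trailing_space ", "semi;colon"]

def insert_user(username, password):
    # Escape single quotes the SQL way, by doubling them.
    safe_user = username.replace("'", "''")
    safe_pass = password.replace("'", "''")
    return (f"INSERT INTO users (username, password) "
            f"VALUES ('{safe_user}', '{safe_pass}');")

if __name__ == "__main__":
    for i, name in enumerate(itertools.chain(NORMAL, FUNKY)):
        print(insert_user(name, f"P@ssw0rd{i}"))
```

Pipe the output into your database client, or hand the script itself to the team so everyone generates the same baseline data.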

Unit Tests

While this one may seem like a no-brainer on the surface, not all software shops write unit tests, and QA rarely gets involved in testing close to the code even though unit testing can provide huge benefits. But wait a minute! Isn't unit testing the domain of the developers? Yes and no. Ultimately, quality is the responsibility of the entire team, not just QA, but QA should be the champion of good quality software deliverables. Even if the QA team members themselves can't read code, knowing what the developer covered in the unit tests can provide valuable insight into what may or may not turn up as issues in the software. With good rapport, open communication, and a little trust, the QA team could even work directly with the developers to translate QA-created component test cases into unit test cases, saving valuable QA time later in the project. Any time software can be tested early and tested often, everyone wins.

Alternatively, the QA team could take a more direct role in unit testing. While the original coder should write the unit tests, a QA team member with a working knowledge of code could clone or extend the developers' unit tests to cover additional component test cases. Dev and QA aren't usually interchangeable roles, but with a few technical QA team members, rapidly creating a test-driven development framework in collaboration with the developers may actually be possible. At the very minimum, QA could begin testing before interfaces have been created and, by helping out, could actually cut dev time AND QA time simultaneously. There is even a newer class of tools, Behavior Driven Development (BDD) automation tools, where the dev team writes test harnesses into the code using the tool's framework and the QA team executes tests using a plain-ish domain-specific language.
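To illustrate the clone-or-extend idea, here is a hedged Python sketch in which QA subclasses a developer-written unittest class to add boundary and error cases from the component test plan. The apply_discount function and both test classes are hypothetical stand-ins, not any particular project's code.

```python
# Hedged sketch of QA extending a developer's unit tests. apply_discount and
# both test classes are hypothetical stand-ins for real project code.
import unittest

def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DevDiscountTests(unittest.TestCase):
    # The happy path the developer already covered.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

class QADiscountTests(DevDiscountTests):
    # QA extends the suite with boundary and error cases pulled from the
    # component test plan; the inherited happy path still runs too.
    def test_zero_percent_is_noop(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_full_discount_is_free(self):
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_out_of_range_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, -5)

if __name__ == "__main__":
    unittest.main()
```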

Frequently Run Tests

Have you ever had a test that you end up running every single time a build is kicked off? It's the same test EVERY time, but the area of the system is so fundamental that NOT running the case would feel like heresy. Even worse, the development team seems to inexplicably break that particular critical function from time to time even though they swear they "didn't touch it". High-frequency tests that remain relatively static are great candidates for automation. If the test needs to be run with every single build, even better. As with any automated test case, the cost of maintenance has to be factored in. Even if a test case must be run frequently, if the area of the system is undergoing active change or will be redesigned imminently, it may be better to wait until at least the architecture is stable before considering automation. Once a case becomes regression, however, having a decent suite of automated cases that can be run with every build acts as an early warning system for build quality and helps attain one of the higher planes of QA goals: test early, test often.

Anecdotally, I worked on a project for a small startup developing a highly secure application where, in every build, at least one of the developers managed to break the login retry counter due to a highly specialized security token method. It didn't seem to matter how many times the bug got fixed; the functionality broke again even though no one had touched the login function. One day a user would lock out after 2 tries, the next after 8, and the day after that, 5, regardless of how the user's retry value was set. To make matters worse, authentication checks were scattered across multiple areas of the code base. This one case would have been roughly 2 hours' worth of manual testing per build, and in the early phase of the project, builds occurred as often as 3 times per day. It was the textbook example of a high-frequency automation candidate and became the first major test case in what later became our automated smoke test.
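For flavor, here is roughly what that case could look like automated, sketched in Python. The client object and its result fields (succeeded, locked_out) are hypothetical stand-ins for a real login API, not the actual code from that project.

```python
# Hedged sketch of the retry-counter check. The client interface and the
# result.succeeded / result.locked_out fields are hypothetical stand-ins.
def check_lockout_threshold(client, username, bad_password, expected_retries):
    for attempt in range(1, expected_retries + 1):
        result = client.login(username, bad_password)
        assert not result.succeeded, "a bad password must never log in"
        # The account should lock on exactly the Nth failed attempt.
        should_be_locked = attempt >= expected_retries
        assert result.locked_out == should_be_locked, (
            f"attempt {attempt}: locked_out={result.locked_out}, "
            f"expected {should_be_locked}")
```

Two hours of manual clicking per build collapses into a few seconds per configured retry value.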

Areas Difficult or Impossible to Test Manually

Some tests just can't be run manually. The usual reasons are long test execution times or the lack of a human interface. Though few and far between in most software jobs, occasionally we run into scenarios where a test must be run over a period of days, weeks, or even months. You could hire someone with insomnia and OCD like me to stare at your software 24/7 for weeks on end and respond to system requests for user input, or you could create an automated test to get the job done. My vote is on the latter. Of course, some software has no user interface whatsoever. Two great examples are middleware and APIs. Unlike true client/server apps where you could, in theory, test services or underlying middle-tier code by running manual cases against the UI or an API, in middleware systems data goes in one side, the software does something magical with it, and transformed data comes out the other side, ready to be sent to or consumed by the target system waiting there. Even if you wanted to test this type of software manually, you really couldn't; the interface is software to software. I GUESS you could create a big red button that sends sample data through every time a user clicks it, or dump files into an inbound folder if the system uses one, but for only a little bit of extra time you could create a half-way decent simple automation framework.
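As a hedged illustration of that "half-way decent simple automation framework", here is a Python sketch that drops a sample record into an assumed inbound folder, polls for the transformed output, and asserts on it. The folder paths and the transformation rules are invented for illustration; swap in your real drop folders and mapping spec.

```python
# Hedged sketch of a software-to-software middleware test. Folder locations
# and the transformation rules asserted below are assumptions.
import json
import time
from pathlib import Path

INBOUND = Path("/tmp/middleware/inbound")    # assumed inbound drop folder
OUTBOUND = Path("/tmp/middleware/outbound")  # assumed outbound folder

def test_record_is_transformed():
    INBOUND.mkdir(parents=True, exist_ok=True)
    sample = {"id": 42, "name": "ada lovelace", "dob": "1815-12-10"}
    (INBOUND / "sample_42.json").write_text(json.dumps(sample))

    # Poll for the transformed file instead of sleeping a fixed amount.
    out_file = OUTBOUND / "sample_42.json"
    deadline = time.time() + 30
    while not out_file.exists():
        assert time.time() < deadline, "middleware never produced output"
        time.sleep(0.5)

    result = json.loads(out_file.read_text())
    # Assumed mapping: names uppercased, dates flipped to DD/MM/YYYY.
    assert result["name"] == "ADA LOVELACE"
    assert result["dob"] == "10/12/1815"
```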

Another example of "difficult to test" is tests that are prone to human error. Humans aren't the most precise creatures on the planet and aren't great at tasks that require high degrees of precision, like timing something in your head. You can try this yourself: start counting off seconds in your head for one minute while a friend starts a timer as soon as you begin. Chances are, you will be off by a significant margin. Test cases that require timing over a long period, counting a large number of items, inputting long strings without a defined pattern, or measuring weight are generally considered error prone and need automation to be tested properly. Think of a critical system like a pacemaker designed to speed up a slow heart rhythm. You could pull out a stopwatch, hit start, and count the number of heartbeats per minute and the average duration between beats, but chances are your internal clock isn't up to the task and your reaction time on the start/stop button isn't precise enough. Releasing pacemaker software tested in this manner will kill someone and bankrupt your employer. When human error is likely, automate.
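To show what automating the timing could look like, here is a Python sketch that measures beat intervals with a monotonic clock instead of a human with a stopwatch. The pulse_source event stream and the 70 bpm tolerance are assumptions for illustration only.

```python
# Hedged sketch: measure pulse intervals with a monotonic clock.
# pulse_source is a hypothetical blocking iterator of pulse events
# from the device under test.
import time
from statistics import mean

def measure_rate(pulse_source, beats=60):
    timestamps = []
    for _ in range(beats + 1):
        next(pulse_source)                   # block until the next pulse
        timestamps.append(time.perf_counter())
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return 60.0 / mean(intervals), intervals

# Example check for a pacer configured at 70 bpm (tolerance is an assumption):
# bpm, intervals = measure_rate(device.pulses())
# assert abs(bpm - 70) < 1, f"measured {bpm:.2f} bpm"
```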

Load/Performance Tests

Almost any client/server application could benefit from at least some degree of load testing. If nothing else, identifying the upper limit of what your system or server will handle is useful as a proactive monitoring baseline for if and when usage of your software increases toward the identified danger zone where you know it will explode, sending virtual shrapnel in all directions. Better yet, identifying slow areas could actually help the development team improve user satisfaction by proactively fixing slow functions, leading to wider adoption and happier stakeholders.

Barring a massive test team spread across a large geographic area, load testing by nature has to be done through automation. Even if you intend to use only a single node or client to blast a single service, you still need to automate. You could try to recruit everyone in the office, but you still aren't going to be able to generate enough data, traffic, requests, or whatever unit of load you are using to place even a basic system under sufficient load. Unless you recruit Chuck Norris or the intended user is a pack of wild sloths, you just can't click fast enough. Of course, I could just spam all of my Facebook friends to see if they will help out in exchange for pizza and beer, but I work in software, so I only have 10 friends, and that's counting my mom, my wife, and the profile I made for my dog. Kidding of course... I think?
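For a taste of single-node load generation, here is a minimal Python sketch that hammers one assumed endpoint from a thread pool and reports throughput and failures. The URL and worker counts are placeholders, and a real load test would normally use a purpose-built tool, but the principle is the same.

```python
# Minimal single-node load sketch: one assumed endpoint, a thread pool, and
# a throughput/failure report. URL and counts are placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # assumed endpoint under test
WORKERS, REQUESTS = 50, 1000

def hit(_):
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

if __name__ == "__main__":
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(hit, range(REQUESTS)))
    elapsed = time.perf_counter() - start
    print(f"{REQUESTS} requests in {elapsed:.1f}s "
          f"({REQUESTS / elapsed:.0f} req/s), {results.count(False)} failed")
```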

Tests That Require Multiple Configurations or Data Sets

Configuration, data-intensive, and cross-browser testing can be a massive pain in the butt. Basically, you are running the same test cases over and over and over and over, but with a different configuration or data set each time. Boooring!!! Unless you have a cocky intern in desperate need of a reality check on what it really means to be QA, and can force said intern to spend days running this stuff without hanging themselves with a keyboard cord or network cable, it's very unlikely the more senior members of the team are going to want to run these tests, let alone run them cost-effectively. As long as your chosen automation tool has good support for data-driven and configuration testing, the computer isn't nearly as likely to complain about repetitive testing and can do it far better and faster than a human could.
This is also one of the few areas where UI automation can actually be useful, as long as the necessary precautions are taken (see What NOT to Automate below). With Google releasing new versions of Chrome every 3 seconds and Microsoft somehow managing to find new and increasingly creative ways to make IE, now Edge, less compatible with the internet, applications aimed at the "general internet population" can make good use of cross-browser UI automation. Automation should still be limited to key areas that are unlikely to change any time soon and that do not dynamically generate UI identifiers; UI tests against dynamically generated identifiers just aren't worth the hassle, even though they can be done. Similarly, applications that consume a wide variety of data sets, like enterprise analytics and statistical analysis systems, pretty much require automation to keep testers sane and to launch projects in a reasonable amount of time, as in the data-driven sketch below.
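On the data-driven side, here is a hedged pytest sketch where one test body runs against many data sets via parametrize. parse_amount is a hypothetical stand-in for whatever function or endpoint you are actually feeding.

```python
# Hedged data-driven sketch with pytest: one test body, many data sets.
# parse_amount is a hypothetical stand-in for the function under test.
import pytest

def parse_amount(text):
    return float(text.replace(",", "").replace("$", ""))

@pytest.mark.parametrize("raw,expected", [
    ("$1,234.56", 1234.56),
    ("0", 0.0),
    ("$0.01", 0.01),
    ("999999", 999999.0),
])
def test_parse_amount(raw, expected):
    assert parse_amount(raw) == expected
```

Adding a new configuration or data set becomes one more row in the table instead of another bored afternoon for the intern.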

Summing Up What To Automate and How to Note Your Identified Cases

Regardless of a case's qualification as a good automation candidate, automation will have a higher initial time investment than manual testing. On projects with a tight deadline, automation usually won't shorten the QA cycle for that project but could yield benefits later (e.g., in a regression suite). The trick is making sure the ROI nets positive over time rather than focusing on short-term needs. One technique I have used throughout the years is to run through the ROI assessment quickly in my head as I write test cases and mark those that would be good automation candidates. Is this test reusable later? Does it meet the criteria outlined in this article? A number of test case management suites have fields for this by default. By leveraging your test case writing process to identify automation candidates, you get testable cases now without losing time to task switching, and you leave a record for yourself and other QA team members if time runs out before you get to automation or you have to prioritize which test cases you can and can't run. As you add a test case to your automation suite, check it off in the manual test case repository and reference the automation identifier (name, number, etc.) so the team knows the case is covered via automation.

What NOT to Automate

Knowing what NOT to automate is often just as useful as knowing what to automate. I feel it bears repeating: good automated test cases should serve three purposes: save time, increase quality by reducing tester error in difficult-to-test areas, and increase repeatability. If an automated test fails to achieve at least two of the three, it is likely not a good candidate for automation. This can't be stressed enough. Automating the wrong things will achieve exactly the opposite of what you want: huge amounts of wasted time, decreased quality, and stale test cases that are out of date almost as soon as they are written. Automation should not be viewed as a silver bullet but rather as another tool in a tester's toolbox, to be used where appropriate. Even companies that invest heavily in automation still do a good degree of manual testing; in many cases I am talking 50% or more manual, even with a "fully realized automation suite".

The UI

I'll just come out and say it: UI automation rarely saves time, nor does it increase quality. The UI is the layer furthest from the underlying code, is subject to the most frequent changes, and is the least testable via automated tools, since someone will still have to manually run most UI tests to find the usability issues automation can't catch. Throw in a bad UI object model and you have a maintenance nightmare. Seriously, I can't tell you how many times I've seen UI test suites devolve into an unmaintainable pile of useless tests, or seen teams spend more time maintaining automation than actually testing. Counterintuitively, the UI is the most frequently automated part of the system (from a QA perspective, anyway), with the majority of automation tools concentrating on this area. Test closer to the core system!

Areas of the System Subject to Frequent Change

Why dig a hole just to fill it in again tomorrow and dig it somewhere else? Time is valuable, and there is no sense automating something you know is going to change unless there is no other way to test the functionality. When identifying test cases that are good candidates for automation, consider the change risk of the area under test. With the current popularity of Agile, this issue is almost guaranteed to come up: Agile provides a feedback loop for stakeholders to see working software and suggest changes to the implementation in a short period of time. Since change is almost a given in Agile (and in older processes, for that matter), waiting to write automated test cases until the team has received and implemented stakeholder feedback could prevent days of wasted effort and allow the test team to better spend their time elsewhere. For other methodologies, the project backlog or roadmap is a good place to look to identify areas at risk of imminent change. Are there multiple iterations coming up that will affect an area of the system? It might be best to hold off, or to dig into the specifics to see which test cases are most exposed to change risk and which are likely to remain unaffected for the foreseeable future. When in doubt, talk to the product team and the project's stakeholders. Communication is key, and they can likely provide valuable insight about system expectations as a whole, not just automation.

Single Use Disposable Functionality

Aside from test cases that cannot be run manually, regression testing and repeated runs over a period of time are the primary reasons QA teams decide to automate. If the software will be used for a single release iteration and thrown away after a short period of time, don't bother with automation. Consider this case: a company wants to gather user feedback on its website, so it has a web developer code up a simple feedback form. The team does not intend to use the form after the feedback has been gathered. Even though the user inputs, data, and likely the service underlying the form (if one exists) could otherwise be candidates for automation, the team doesn't intend to maintain the code and will throw it away as soon as the data is gathered, so there is no sense wasting time writing automated test cases when an entry-level QA person could knock out 1 or 2 quick manual test passes.
