Emotional Test Automation

In April I hosted a session at the Software Testing and Performance conference entitled Why Test Automation Fails. I tried out a delivery style that was new for me, similar to a town hall format, in which I attempted to elicit participation from the attendees. Unfortunately, it didn’t work out as well as I had expected, but at one point a consultant attending the session offered his opinion of what he thought automation can’t do.

He stated automation can’t empathize, project, anticipate, recognize, judge, predict, evaluate, assess, become resigned, get frustrated, invent, model, resource, collaborate, decide, work around, strategize, charter, teach, learn, appreciate, question, refine, investigate, speculate, suggest, contextualize, explain, elaborate, reframe, refocus, troubleshoot, and THINK.

To this I exclaimed, “I am not sure I want my automation to get frustrated!” At the time I was thinking that frustration is typically not a productive emotional trait because it usually manifests itself as “a deep chronic sense or state of insecurity and dissatisfaction arising from unresolved problems or unfulfilled needs” (Merriam-Webster Medical Dictionary). But while flying home to Seattle that evening I pondered how an emotional condition such as ‘frustration’ might be applied to a tool such as test automation. (Make no mistake about it; to me test automation is a tool, similar to a screwdriver in a toolbox or a global positioning system on a boat. These tools are designed to be very effective for their purpose, and when test automation is designed and used correctly it also serves a very valuable purpose for those who use it to increase their productivity.)

So, I sent an email to the person, asked him for the list, and restated what I said at the conference. I wrote, “Like I countered, I am not sure that I want my automation to get frustrated (and honestly I am not quite sure of the point you are trying to make there)…” He responded, “When a tester gets frustrated with some aspect of an application's behaviour (It's taking too long to respond! I have to click on this annoying confirmation dialog! This sequence of steps doesn't fit with my workflow! This set of options confuses me!), I can infer that an end-user will too. An infinitely patient human tester would be ill-equipped to identify problems in the application that would annoy or irritate a normal human being. Automation is infinitely patient (unless we've programmed it not to be, via a timeout); if it has to wait, it will wait emotionlessly. We cannot expect the same of our customers. It's not that I want automation to get frustrated; I don't know how to make it do that. But I do want to get information on how the product feels to humans--so I use human testers for that.”

Setting aside the fact that this person misquoted me in his lightning talk at StarEast 2007, he also acknowledged in his email, “It’s not that I want automation to get frustrated.” It is usually not a productive use of one's time to discuss issues with individuals who argue against their own points. It's sort of like someone saying, "Elephants can't fly. It's not that I want elephants to fly, but they can't...so there!" But I see in his response that he uses other emotions such as ‘annoyance’ and ‘irritation’ (to rouse to impatience or anger; annoy – American Heritage Dictionary) to describe ‘frustration.’ Now, annoyance and irritation are things I can relate to because I sometimes do get annoyed and irritated. So, let’s examine the emotional condition of irritation (because frustration generally implies a negative, and often counter-productive, emotion) as described in the quoted email above and determine whether or not 'irritation' can be simulated by a computer program.

When I design an automated test I never let the test run “infinitely” because I know that a ‘hang’ or an inordinate delay may cause an unexpected reaction in my test execution. (For example, if I am trying to ‘click a button’ on a dialog I must first make sure the dialog with the button instantiates and has focus, and that the button is not ‘grayed.’) A best practice for designing automated tests is to use polling with a finite timeout. Simply put, polling is looping on a conditional statement until either the condition occurs or the maximum poll count is reached. If the maximum poll count is reached before the condition occurs, the automated test ‘decides’ the application is not in the ‘predicted’ and expected state and a timeout occurs, implying that “It’s taking too long to respond” based on some predetermined value to simulate irritation (or ‘frustration’).  If the automation becomes ‘irritated’ then I can cause the automation to ‘become resigned’ by designing the automation to log an unexpected error (rather than a failure), and to collect information about the machine state and the automation state and log that information for closer examination by a tester.  (I don’t really think about this as programming an artificial emotional response; I think about it from the perspective of good automated test design.  But, in retrospect, I guess it is simulating ‘irritation.’)
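
(As a rough illustration of the polling pattern described above, here is a minimal sketch in C#. The WaitForCondition method, the Condition delegate, and the maxPollCount and pollIntervalInMilliseconds parameters are names I made up for this example rather than part of any particular framework.)

// A delegate representing the condition the automation is polling for.
public delegate bool Condition();

// Generic polling with a finite timeout: loop on the condition until it
// returns true or the maximum poll count is reached.
public static bool WaitForCondition(
    Condition condition,
    int maxPollCount,
    int pollIntervalInMilliseconds)
{
    for (int pollCount = 0; pollCount < maxPollCount; pollCount++)
    {
        if (condition())
        {
            // The application reached the 'predicted' and expected state.
            return true;
        }
        System.Threading.Thread.Sleep(pollIntervalInMilliseconds);
    }
    // Maximum poll count reached first: the simulated 'irritation' (a timeout).
    // The caller can now 'become resigned' -- log an unexpected error and
    // collect machine and automation state for closer examination by a tester.
    return false;
}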

For example, if I think an application is taking too long to shut down, rather than writing a bug report stating something like “The application takes too long to close,” I can write a method similar to the following example that actually measures the amount of time it takes for the application under test (AUT) to close (when the method returns true).

// Requires System.Diagnostics for the Process class.
public static bool CloseAutProcess(
    Process autProcess,
    int maxPatience,
    out int levelOfIrritation)
{
    levelOfIrritation = 0;

    // Ask the AUT to close by sending a close message to its main window.
    autProcess.CloseMainWindow();

    // Poll until the process exits or the maximum patience (in milliseconds)
    // is exhausted. Each iteration waits about one millisecond, so the
    // levelOfIrritation counter approximates the elapsed shutdown time.
    while (!autProcess.HasExited && levelOfIrritation < maxPatience)
    {
        autProcess.WaitForExit(1);
        levelOfIrritation++;
    }

    // True if the AUT shut down within the allotted patience; false if the
    // automation 'became irritated' and gave up waiting.
    return autProcess.HasExited;
}

In this example, the maxPatience parameter sets the maximum amount of time (in milliseconds) I am going to wait for the AUT to shut down before becoming ‘irritated.’ The levelOfIrritation variable approximates that time in milliseconds (each poll waits about one millisecond, so loop overhead adds a small amount). Now, if the shutdown time exceeds reasonable expectations (based on a comparative analysis of similar programs, or on stated or implied requirements) I can write an objective bug report and allow management to decide whether the 'level of irritation' needs to be adjusted, based on solid factual information rather than some ambiguous personal patience threshold, or lack thereof. Because at the end of the day my managers may value my opinion, and sometimes opinions are important deciding factors; but as a software testing professional my role is to provide reliable, accurate, and detailed information to my teammates and other stakeholders.
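
(To show how I might use that measurement and ‘become resigned’ on a timeout, here is a hypothetical calling sketch. The autProcess variable is assumed to be the AUT’s Process object, and LogUnexpectedError and CollectMachineState are stand-in names for whatever logging and diagnostic routines a particular automation harness provides.)

// Hypothetical usage: give the AUT ten seconds of 'patience' and become
// resigned -- log an unexpected error and gather diagnostics -- on timeout.
int shutdownTime;
if (CloseAutProcess(autProcess, 10000, out shutdownTime))
{
    Console.WriteLine("AUT closed in approximately {0} ms.", shutdownTime);
}
else
{
    // Simulated resignation: record the condition for a tester to examine later.
    LogUnexpectedError("AUT did not close within 10000 ms.");
    CollectMachineState();
}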

(BTW…I can use event handlers to identify the frequency and number of ‘annoying confirmation dialogs’ and deal with them. And, with regard to the other things described as ‘frustrating’ above (“This sequence of steps doesn't fit with my workflow! This set of options confuses me!”), I am thinking these would most certainly be discovered while designing the automated tests, because test automation doesn’t design and develop itself.)
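
(For the dialog case, one possible approach -- a minimal sketch, assuming the AUT is a Windows application that can be observed with the .NET UI Automation API in System.Windows.Automation -- is to subscribe to window-opened events and count how often confirmation dialogs interrupt the flow.)

// Count how many times a new window (for example, a confirmation dialog)
// opens anywhere on the desktop while the automation runs.
// Assumes references to UIAutomationClient and UIAutomationTypes, and
// using System.Windows.Automation;
private static int dialogCount = 0;

public static void WatchForConfirmationDialogs()
{
    Automation.AddAutomationEventHandler(
        WindowPattern.WindowOpenedEvent,
        AutomationElement.RootElement,
        TreeScope.Subtree,
        delegate(object sender, AutomationEventArgs e)
        {
            // Each window-opened event bumps the count; the handler could also
            // inspect the window's name and dismiss an 'annoying' dialog here.
            dialogCount++;
        });
}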

So, I am still of the mindset that I don’t want my automation to develop a deep chronic sense or state of insecurity and dissatisfaction (def. - frustration) and I don't want my automation to have 'needs.'  And even though I can simulate some degree of artificial ‘emotional’ conditioning in test automation, I don’t want my automation to have emotions any more than I want a hammer or saw or other tool to develop human emotions. Test automation is simply a tool, and it is a tool developed and used by humans (who are emotional) who constantly utilize cognitive processes (including emotional responses) while solving difficult problems or working towards achieving specific goals. The power of a tool is limited only by the skills, ability, and knowledge of the persons designing, developing, and/or using that tool.

I agree that automation can’t THINK (although I am not overly convinced thinking is an emotion). That is why highly skilled people design and develop great test automation, and they don’t simply rely on record/playback or simple rote scripts for automated testing.

Comments

  • Anonymous
    June 11, 2007
    Hi Anutthara, It would be an interesting post on your blog to discuss the OGF measure, how it's determined and how it is used. Thanks for the nitpick...I corrected the misspelling.

  • Anonymous
    August 01, 2007
    I am fascinated with the advances in computing, and have always approached computing from the perspective