Emoting software: more thoughts on simulating emotions...

I am fascinated with the advances in computing, and have always approached computing from the perspective of what this tool can do to make my life easier. As a professional tester I have a lot more work to do than I can reasonably accomplish in the limited timeframe allotted for most projects. So, well-designed test automation is a great tool that frees up some of my cycles performing mundane tasks that need to be accomplished.

A few months ago I had an email exchange with an industry consultant regarding test automation and emotions, which I blogged about and he later talked about at a STAR conference. In that post I also tried to illustrate a simple technique called polling for simulating the irritation (or frustration) of an automated test when a task is taking too long to complete.
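To make that idea concrete, here is a minimal sketch of the polling technique in Python. The helper names (wait_until, window_exists) and the timeout values are hypothetical illustrations, not code from the original post:

```python
import time

def wait_until(condition, timeout_s=30.0, poll_interval_s=0.5):
    # Poll a zero-argument callable until it returns True, or until the
    # test "loses patience" (the timeout elapses) and reports failure.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval_s)
    return False  # the simulated irritation threshold was reached

# Usage: fail fast with diagnostics instead of hanging forever, e.g.:
#   if not wait_until(lambda: window_exists("Save As")):
#       raise TimeoutError("'Save As' dialog never appeared")
```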

The consultant I had the email exchange with wrote, "I would want my automation to feel frustration, to recognize it, and to act on those feelings in some way that provides valuable information to the product.  But until we've got not only artificial intelligence, but also artificial emotional intelligence, that ain't gonna happen."

When I design an automated test I often think of the various ways to achieve exactly what it is I am trying to prove or disprove with the test. Then I think of the things that can go wrong (such as race conditions, errant message boxes, tasks taking too long to complete, making simple decisions based on Boolean states, etc.) and design the test in such a way that it can logically deal with those situations. So, as I thought about our conversation and automated test design, I asked myself: can automation do more than simple mundane tasks? Can automation make decisions or perform tasks based on practical reasoning or simulated emotions?
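As a small illustration of that kind of defensive design, here is a Python sketch of a test step that recovers from an errant message box and retries; the step and recover callables are hypothetical stand-ins for real test actions:

```python
def run_step_with_recovery(step, recover, max_attempts=3):
    # Attempt a test step; if it fails (e.g. an unexpected message box
    # steals focus), run a recovery action and retry a bounded number
    # of times before surfacing the real failure.
    for attempt in range(1, max_attempts + 1):
        try:
            return step()  # step succeeded; return its result
        except Exception as exc:
            if attempt == max_attempts:
                raise  # out of patience; report the failure
            recover(exc)  # e.g. dismiss stray dialogs, reset app state
```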

It seems that some researchers in the Netherlands are unlocking doors with artificial intelligence that may eventually lead to advances in smarter test automation design. Researchers at Utrecht University are hard at work on an emotional robot (a cat, no less) that simulates 22 emotions, including "anger, hope, gratification, fear, and joy," used in complex decision-making processes. Marvin Minsky stated, "...we all have these things called emotions, and people think of them as mysterious additions to rational thinking. My view is that an emotional state is a different way of thinking."

I agree with the researchers, and I "don't believe that computers can have emotions," and also mostly agree with their statement "that emotions have a certain function in human practical reasoning." (I say mostly because I know that some emotions expressed by some people are completely irrational and result in impractical reasoning.) Perhaps AI in test automation is still a long way off, but I am always looking for ways to improve and become more effective and more efficient. I am always learning and looking for ideas to improve myself and my skills. So, based on this research I now ask myself: are there cost-effective emotional logic patterns to simulate rational reasoning, and can or should I employ them in the design of some of my automated tests to make them more robust?
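One inexpensive pattern along those lines (purely my own sketch, not something taken from the Utrecht research) is to track a simulated "mood" that escalates as slow responses accumulate and changes how the test reacts:

```python
from enum import Enum

class Mood(Enum):
    CALM = 0        # keep testing normally
    IRRITATED = 1   # tighten logging, shorten timeouts
    FRUSTRATED = 2  # capture diagnostics and abort the run

class EmotiveMonitor:
    # Escalate a simulated emotional state as slow responses pile up,
    # and calm back down when the product behaves well again.
    def __init__(self, slow_threshold_s=5.0):
        self.slow_threshold_s = slow_threshold_s
        self.slow_count = 0

    def record(self, elapsed_s):
        if elapsed_s > self.slow_threshold_s:
            self.slow_count += 1
        else:
            self.slow_count = max(0, self.slow_count - 1)

    @property
    def mood(self):
        if self.slow_count >= 3:
            return Mood.FRUSTRATED
        if self.slow_count >= 1:
            return Mood.IRRITATED
        return Mood.CALM
```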

Just a thought. Isn't technology great!

Comments

  • Anonymous
    August 10, 2007
    >>> "test automation is a great tool that frees up some of my cycles performing mundane tasks that need to be accomplished." I always wondered what types of tasks in testing would be mundane or no-brainers. Even the most simplistic test we can imagine can have multiple possible inputs and equally complex outcomes. We ASSUME certain tests/tasks are mundane according to our model of the system. Wouldn't it be wrong to term any task of testing as mundane? Yes, I can think of tasks like test data generation according to a well-documented business rule as somewhat mundane... What percentage of testing can be assumed to be mundane? Any thoughts? Shrini

  • Anonymous
    August 11, 2007
    Hi Shrini, Mundane was perhaps not the best choice of words, because I don't consider any test I design to be banal, unimaginative, or even ordinary. So, let me clarify my implied connotation of 'mundane' in this context. Tests (both manual and automated) can be micro (discretely focused on specific functionality) or macro (user scenarios or exploration), and professional testers will design a library of tests that span this spectrum.

    For example, a build verification test (BVT) is a micro-type test. Most tests in a BVT suite check for something very specific. The BVT suite is run after each build (which may occur daily in the iterative development models commonly used by many teams at Microsoft). So, the BVT suite becomes 'ordinary' in the sense that the tests are run repetitively, and in the sense that I expect my BVT suite to pass (in other words, my BVT suite establishes the baseline functionality of each new build, and I expect that new check-ins do not destabilize the build). If there is an error in the BVT suite, it is extraordinary, or unexpected. In this example, I would consider it valuable to automate these tests to free up some of my time to design (either manual or automated) a greater number of more complex tests.

    Which brings us full circle to the intent of this post. I intended to provoke readers to think about the complexity of test design, and to imagine different ways to use tools and design tests, including designing tests to emulate some human emotional traits for practical reasoning. My approach in life when confronted with a problem is to never give up; problems are only temporarily unsolvable due to limitations of current technologies, information, knowledge, or skill sets.

  • Anonymous
    August 14, 2007
    Hi Shrini, As I said in a previous post, I was responsible for the BVT test suite on international versions of Windows 95. The BVT suite was transparent to the developers, and we still found plenty of build breaks and other defects. I think it is a common misconception to assume the pesticide paradox is a bad thing (perhaps because some testers assume the role of testing is to simply find bugs). I think the pesticide paradox can be used to our advantage in testing. If developers use my automated tests, or if I can teach developers to write better unit tests to prevent defects upstream, how is this a bad thing?

    Also, I think there is a big difference between a simple scripted test and well-designed test automation that includes variability while adhering to its intended purpose. (I will have to blog about this at some point.) In our internal courses we discuss state transition testing and use exercises designed to teach people how to build abstract models of features in order to get new perspectives, check assumptions, and increase domain knowledge; a sketch of such a model appears below. I have been professing for years that testers need to develop their design and analysis skills in order to succeed in this profession. The ability to abstract out a feature set to prove or disprove a particular hypothesis is, in my opinion, a necessary design skill for testers.
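    Here is a minimal sketch of the kind of abstract state model I mean, using a hypothetical document editor as the feature (the states and actions are invented for illustration):

    ```python
    # Each (state, action) pair maps to the expected next state; pairs
    # that are absent are expected to be rejected or ignored.
    TRANSITIONS = {
        ("closed", "open"):  "opened",
        ("opened", "edit"):  "dirty",
        ("dirty",  "save"):  "opened",
        ("opened", "close"): "closed",
        ("dirty",  "close"): "prompt_save",
    }

    def expected_next_state(state, action):
        return TRANSITIONS.get((state, action))

    # Walking every state/action combination, defined and undefined,
    # surfaces hidden assumptions about how the feature should behave.
    for state in {s for s, _ in TRANSITIONS}:
        for action in {a for _, a in TRANSITIONS}:
            print(state, action, "->", expected_next_state(state, action))
    ```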