Usability Stockholm Syndrome

One of the many ways we test designs with real people is through usability testing. Although in Office 2007 we've greatly expanded the range, scope, and types of testing we've done to include everything from remote testing to extremely early deployments to longitudinal studies, we still do our share of standard usability tests.

What is a standard usability test? Well, normally it works like this: a test subject drives to Microsoft and comes into our labs. The usability engineer responsible for the test greets them, answers any questions they might have, and then sits them in front of a computer which is running the software or prototype being evaluated. The usability engineer runs the test from an adjoining room, while the designers and program managers responsible for the feature being tested either watch in person or from their offices via a video feed.

In many tests, the test subject is given a set of tasks to complete, and asked to verbalize their thoughts as they go: what their expectations are, what they're looking for, if they're getting frustrated... things like that. Other times, people are given more open-ended tasks, such as "make a document exactly like this printout" or even just "make a nice looking resume." Sometimes we have people bring their own work to Microsoft and they complete it in our testing environment.

In most cases, the test subject has filled out a screener ahead of time which helps us judge how expert they are at using the software being evaluated. The point is not to exclude anyone, but to help us analyze the results--we do test everyone from ultra-novices to super-elite power users.
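
As a rough sketch of how such a screener might feed into analysis (the questions, weights, and cutoffs below are hypothetical illustrations, not Microsoft's actual instrument), answers can be scored and mapped to expertise buckets:

```python
# Hypothetical screener scoring: bucket participants by expertise so that
# results can be analyzed per group. Weights and cutoffs are illustrative.

def score_screener(answers: dict) -> str:
    """Map screener answers (each rated 0-4) to an expertise bucket."""
    # Weight hands-on usage more heavily than self-assessment.
    score = (2 * answers.get("hours_per_week", 0)
             + answers.get("features_used", 0)
             + answers.get("self_rating", 0))
    if score <= 4:
        return "novice"
    if score <= 9:
        return "intermediate"
    return "power user"

# Example: a heavy user who rates themselves modestly still lands in the
# power-user bucket--the point of not relying on self-rating alone.
print(score_screener({"hours_per_week": 4, "features_used": 3, "self_rating": 2}))
```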

When a test is done (usually between one and two hours later), the subject is given a software gratuity as thanks for donating their time, and the cycle of improving the design begins anew.

When I was first exposed to usability testing early in my Microsoft career, I expected people to be super-critical. After all, the software is usually in a pretty rough state during these tests, and this was people's one chance to really let Microsoft have it and vent their frustration at things not working the way they expected.

But it turns out that this impulse is generally wrong. In fact, people tend to be much less critical of the software designs they're testing than they probably should be.

I think of this as a form of "Stockholm Syndrome," in which a hostage becomes sympathetic to his captors.

First, people come to our labs as guests of Microsoft. There's a little piece of human nature that says you don't go to someone's house and then insult them. They come to our place, we give them free stuff--no wonder they subconsciously want to please us a little.

Second, people have an innate tendency to blame themselves when they can't complete a task, instead of blaming the software. You hear a lot of "oh, I'm sure this is easy" and "I'm so embarrassed I can't figure this out."

Maybe this comes from taking tests in school, knowing that every question has a correct answer and that if you get one wrong, it's your fault. Maybe computers are still so complex that people feel they should have to undergo training to use them correctly, and failing a task in a usability test plays on that insecurity.

Whatever the cause, this tendency not to criticize the software is a major risk to the results of standard usability testing. Our usability engineers are well aware of this and take great pains to ask test subjects to be critical, reassuring them that "it's not a test of you, it's a test of the software."

But there's always the potential for skewed results, and this is one of the reasons we've supplemented standard testing with initiatives in which we watch the software being used in the real world--including technology for performing tests remotely on people's home computers, far away from the bright lights and cognitive din of our on-campus labs.
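
As a rough illustration of what such remote instrumentation might look like (a hypothetical sketch; the post doesn't describe Microsoft's actual tooling, and the class, participant IDs, and file paths here are invented), a tiny logger could record timestamped participant actions locally for later analysis, with no observer in the room:

```python
# Hypothetical sketch of remote-test instrumentation: record timestamped
# user actions locally so they can be uploaded and analyzed later.
import json
import time

class SessionLog:
    def __init__(self, participant_id: str):
        self.participant_id = participant_id
        self.events = []

    def record(self, action: str, target: str) -> None:
        """Append one user action with a wall-clock timestamp."""
        self.events.append({"t": time.time(),
                            "action": action,
                            "target": target})

    def save(self, path: str) -> None:
        """Write the session to disk for later upload and analysis."""
        with open(path, "w") as f:
            json.dump({"participant": self.participant_id,
                       "events": self.events}, f, indent=2)

# Example session: the log captures behavior (clicks, undos) rather than
# the participant's possibly too-polite verbal feedback.
log = SessionLog("P042")
log.record("click", "Insert > Table")
log.record("undo", "table")
log.save("session_P042.json")
```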

Interested in participating in usability research? Visit https://www.microsoft.com/usability.

Comments

  • Anonymous
    March 20, 2006
    Is this why anyone thought that the disappearing menu items in Office were a good idea? You know the ones... the menus only show the "used" menu items and then there is a double arrow at the bottom to show all menu items. Ugh.

  • Anonymous
    March 20, 2006
    "But it turns out that this impulse is generally wrong. In fact, people tend to be much less critical of the software designs they're testing than they probably should be."

    Surely you know by now that the way you can get people to be more critical is to have them post anonymously on an internet forum?  The only problem is that in addition to them being critical, you'll have them being rude and arrogant, so maybe it's not a perfect solution.

  • Anonymous
    March 20, 2006
    Maybe you should consider using an outside company for testing? In that case people would tell more about how they truly feel and not be afraid of hurting anyone's feelings in the process.

  • Anonymous
    March 20, 2006
    Slashdot trolls arrived early today, must be spring...

  • Anonymous
    March 20, 2006
    The comments to this blog have gone seriously downhill since the ignorant hordes of the Net found it.

  • Anonymous
    March 20, 2006
    Stockholm syndrome, as in the town name in Finland.

  • Anonymous
    March 20, 2006
    I think this blog needs moderated comments turned back on again. :/

  • Anonymous
    March 20, 2006
    Some comments need to be cleaned.

  • Anonymous
    March 20, 2006
    I deleted the offending and lewd comments.

    Sorry they persisted for as long as they did.

  • Anonymous
    March 20, 2006
    Jensen,

    Is there much usability testing done outside of the US?

    Always interested to learn how you do usability testing. Thanks for the continuing feed of information and sorry to hear about all the problems with the comments system.

    Max

  • Anonymous
    March 20, 2006
    Interesting way of putting it - Stockholm Syndrome.

    I see a similar reaction in some projects. Initial acceptance testers love feature A, but months down the track the same users start to hate it.

    Maybe people just love any new thing, but after a while, they get over it. Then they really start to evaluate something properly, without the self induced hype.

  • Anonymous
    March 20, 2006
    Jensen, I have to say that I completely agree with your "Stockholm Syndrome".  We've seen identical reactions in the usability testing that we do; we've just never thought of giving it a name.

    One thing that we started doing which has helped us immensely towards getting back more honest feedback is telling the participants that we (the testers) had nothing to do with the design of whatever it is that they might see (that may or may not be true).  They're not going to hurt our feelings in any which way.  

    This has had a remarkable impact in getting more "blunt" comments from our users.  A few of them will mention off hand to us "well, since you didn't have anything to do with this, lemme tell you something..."

    Incidentally, we don't call our participants "subjects" - we call them "test participants" or "users".  "Subjects" makes them sound like white lab mice that we're experimenting on.  And as you state, we're not testing them, so why call them something that's tested on?

  • Anonymous
    March 21, 2006
    Tim,

    That's very interesting!

    Incidentally, we don't really call our participants "subjects" either.  We call them "participants."

    For some reason, I thought calling them "subjects" in the article made it more clear to people what I was talking about. :)

  • Anonymous
    March 21, 2006
    A means of avoiding this syndrome (by whatever name is being used) is to do the testing in a neutral atmosphere, that is, an office site that is not specifically at a Microsoft facility.  There is no point (and it would show a lack of integrity) in hiding who is doing the testing, but I suspect you would get less of the "sympathy" factor involved if it was not at a Microsoft office.

    Another approach (which has other dividends as well) would be to have the user test both the Microsoft and a competing product (they do exist, don't they????).

  • Anonymous
    March 21, 2006
    <blockquote>"One thing that we started doing which has helped us immensely towards getting back more honest feedback is telling the participants that we (the testers) had nothing to do with the design of whatever it is that they might see (that may or may not be true)"</blockquote>

    Maybe I should be a lawyer, but what I always say is, "I didn't make all this stuff you are going to be looking at, so don't worry about hurting my feelings..." They assume that means I didn't make ANY of it, but I am just saying I didn't make ALL of it :) Anyway, I feel better about it because I am not lying to the participant, which I tell them up front I am not going to do.

  • Anonymous
    March 21, 2006
    Have you tried to do this in a less controlled environment? For example, set up a Terminal Server and let users access it for a week or so. For that time, give them a few practice tasks to accomplish with the new piece of software. A task should maybe take them a maximum of an hour to do, and you'd collect direct feedback from them afterwards via a questionnaire. After they had the chance to practice using the new piece of software, invite them to your location and do a conventional usability test. I think your results would be a lot better though, because the user wouldn't be discovering things for the first time, but rather would try to actually be somewhat efficient in doing the task.
    Personally, when I got Beta 1, I was very thrilled and loved the UI without questioning it for the first few weeks. After I had started really using it though, I started questioning how certain things worked.

  • Anonymous
    March 21, 2006
    Slightly off-topic.  Hopefully not warranting deletion.  I understand the value of honest criticism and imagine that it is well worth all the hassle and expense of recruiting users, poring over videotape, etc.  Vocal users are far more helpful than polite ones.  This being the case, I can't understand why Microsoft has made it so difficult to report issues (at least for those of us that are not part of an official usability study).

    I'm a software developer and consider it a professional courtesy to report bugs or even usability issues when one stumbles across them.  I recently encountered a minor bug in Internet Explorer and figured I'd submit it via the Microsoft bug tracking site.  Couldn't find one.  Hunted around on the web for a bit and learned that there's a 'Feedback' button in the 'Help' menu.  The trail leads to a support phone number.  Next thing I know they want to charge me for tech support!  Or "maybe you can send a physical letter," I'm told.  Am I missing something here?  Seems like you're throttling one of the most efficient conduits of product feedback.  Is it because there is just too much chaff?

  • Anonymous
    March 22, 2006
    I'm just wondering whether this is related to Orne's theory of demand characteristics.  Participants who have volunteered want to be "good participants".  Maybe they feel that being critical of a product, rather than just saying good things about it, is not what the people running the tests want them to do.  It would be interesting to do some pre/post testing interviews to try and figure out exactly what participants viewed the purpose of usability testing as.

    I do think you've got a point about testing though.  Regardless of how much people are told the test isn't about them, they always seem to think it is.

    Will
