HOWTO: Design and run an API usability study

A few people have asked me how I design and run API usability studies. I'm running an Indigo study this week, so I thought I would describe the steps that went into setting it up.

The study is a follow-up to the Indigo study we ran in October, which identified a number of usability issues, the biggest of which was the extensive use of attributes. Since then, the team redesigned parts of the API and were keen to get fresh usability data on the new design to see whether the problems had been resolved. To get the new study kicked off, I met with Steve Swartz (one of the Indigo architects) and David Palmann (a program manager on the Indigo team responsible for usability) in December 2004 so that they could describe the design changes to me.

We discussed the earliest date on which we could run the follow-up study. The implementation of the new API design wouldn't be complete until some time in February, and after the necessary testing, it looked likely that a stable build wouldn't be available until some time in March. We tentatively scheduled the study for early March.

One of the first trade-offs to make in setting up a study is deciding when to run it. You need to run the study with a version of the API that looks pretty much the way the team have designed it, but you don't want to wait until the API is completely implemented and tested, since by then there probably won't be much time left to respond to any changes suggested by the usability study. We thought we would be able to secure a reasonable build of the new Indigo API by early March, which would still give the team enough time to respond to any usability issues.

With a rough timeframe in mind, I drafted a plan for the work I would need to do to prepare and run the study. Typically, the very first thing I like to do is get hold of a build of the API so that I can spend some time coding against it, both to get a feel for it and to identify any potential usability problems. This build needn't be the exact build I will use in the study itself, just something that I can get up and running and play with. At that point (late December), a build wasn't yet available, so I had to put off playing with the API until some time in January.

This would leave me with at most four weeks to play with the API and prepare the study if we were to run it in early March. Normally this would concern me: one risk is that once I start playing with an API, I discover that it is pretty complex and that preparing the study materials is likely to take a significant amount of time. Since I am typically running or designing more than one study at a time, it can be quite difficult to find the time to prepare the materials for a complex API while other studies are going on.

In this case, though, I wasn't worried, since I already had materials ready from the October study; I just needed to update them to reflect the changes. The study materials that need to be prepared are basically an API introduction document, which each participant gets before they come into the lab, and the task list that participants work on while they are in the lab.

The API introduction is a very important document - it gives participants enough of an intro to the API that when they come into the lab they have some idea of what to expect and what they will be working on. However, the intro needs to be tuned to the needs of the study: it should give participants enough information that they feel comfortable working on a task in the usability lab, but not so much that we just tell them what to do. Very often I end up writing this document myself from scratch, based on my experience of using the API for a few weeks. I look back on that experience and try to document answers to the questions I had while getting to grips with the API - quite often things that I needed help from the API team to figure out.

We try to send the intro document to participants at least a week before they come into the lab, so if I wanted to run the study in early March, I really needed a complete draft of the document by mid-February so that it could be reviewed before being delivered to participants. I was able to reuse the document that was sent to participants for the October study and edit it to reflect the new API. In late January I spent an hour with Steve Swartz making changes to the original document.

At this point I still hadn't used the new API myself. While working on the changes Steve had suggested to the intro document, I found there were still some niggling questions that would have been easily answered had I been able to use the API and explore it for myself. So I waited until a reasonably stable build was available to play with. In mid-February David was able to provide me with a Virtual PC image containing a build of the API that I could use. This helped me complete the edits to the intro doc so that it reflected the new API.

Around the same time, other projects I was working on required some attention. There were UI reviews for Visual Studio Team System, and Joey Lawrance had just started his internship with us. He and I were working on a pretty big study that was to take place throughout February, and this took higher priority. This meant that our original plan of running the study in early March wasn't feasible, so we had to postpone it by a couple of weeks. But I now had a build of the API, so I could at least think about getting started on the task list.

In my next posting I'll talk about how I go about designing tasks for an API usability study.

Comments

  • Anonymous
    March 29, 2005
    I am particularly interested in how you can adapt an API so that attributes are not needed. For example, the design-time part of an object model is currently mostly addressed by creating separate classes inheriting from interfaces or base classes, such as IDesigner. These are then associated with the run-time classes via attributes that specify the design-time assembly and class name as a string.
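
    To make the pattern concrete, the link typically looks something like this (the type and assembly names are invented for illustration):

        using System.ComponentModel;
        using System.Windows.Forms;

        // The run-time class names its designer with a string so that the
        // design-time assembly only has to be loaded inside a design
        // environment such as Visual Studio.
        [Designer("MyControls.Design.ChartDesigner, MyControls.Design")]
        public class Chart : Control
        {
            // Run-time members only; no compile-time reference to the
            // design-time code.
        }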

    This is done so the run-time DLLs are not burdened with design-time code that is only needed in an environment such as Visual Studio .NET.

    But suppose you wish to do away with numerous attributes linking the two libraries ... how would you approach the problem?
  • Anonymous
    March 30, 2005
    Interesting question, Frank. Unfortunately it's one that I don't think I have a great answer for...

    I can talk about some of the changes that the Indigo team made to address some of the issues raised in the October study. One thing the team has done is to separate the specification of the run-time settings of a service from its implementation via the use of config files. Instead of using attributes to set properties of the service, such as the transport scheme, these can be set in the config file. In the study I've been running this week, such a separation of concerns made sense to participants and enabled them to accomplish certain tasks successfully that participants in the October study were unable to accomplish.
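
    As a rough sketch of the idea (the element and attribute names below are invented for illustration - this is not the actual Indigo config schema), the deployment settings move out of the code and into something like:

        <configuration>
          <services>
            <!-- Transport and address are deployment decisions, so they
                 live in the config file rather than in attributes on the
                 service class. -->
            <service type="OrderService">
              <endpoint address="http://localhost:8000/orders"
                        transport="http" />
            </service>
          </services>
        </configuration>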

    Using config files in this way might not be a solution to your question. But I think what might be generalisable is the advantage offered by a clear separation of concerns. I'm not claiming that this is a new insight - the AOP folks have been making this claim for a few years now. But I think it does suggest that one guiding principle with respect to attributes is to ensure that the set of attributes applied to a given file, class or member is as consistent as possible with respect to the scenarios they enable. For example, in the October study participants had to apply multiple attributes to a service class to modify details of both its implementation (for example the instance mode) and its deployment (the transport scheme used). In the recent study, separating these out seems to have helped participants be more successful.
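
    For example, here is a sketch of what the October-style design asked participants to write (the attribute types are hypothetical stand-ins, defined here only to make the sketch self-contained - they are not the real Indigo attributes):

        using System;

        enum InstanceMode { Singleton, PerCall }
        enum TransportScheme { Http, Tcp }

        [AttributeUsage(AttributeTargets.Class)]
        class ServiceImplementationAttribute : Attribute
        {
            public InstanceMode InstanceMode;
        }

        [AttributeUsage(AttributeTargets.Class)]
        class ServiceDeploymentAttribute : Attribute
        {
            public TransportScheme Transport;
        }

        // October-style design: an implementation detail (instance mode)
        // and a deployment detail (transport scheme) are mixed together
        // on the same class.
        [ServiceImplementation(InstanceMode = InstanceMode.Singleton)]
        [ServiceDeployment(Transport = TransportScheme.Http)]
        class OrderService
        {
            public void SubmitOrder(string order) { }
        }

    In the new design, only the implementation-level attribute would stay on the class; the transport setting would move into a config file like the one sketched above.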

    I still have to review the data from this week's study in more detail, though, before I can really claim to have a good understanding of the reasons for participants' success in this study. For example, one other reason could simply be the number of attributes that participants have to deal with. The IDesigner scenario you mention might not cause problems for developers, since the number of attributes used is small.

    Sorry for rambling and not really answering your question though - what is your opinion on this?
  • Anonymous
    March 31, 2005
    The config idea may be brilliant! I will not pass judgement until I have tried it.

    One advantage of the config file: you can change the design-time behavior without modifying the run-time DLLs. This is why it may be a brilliant idea, specifically for the customized designer scenarios we are looking at.

    We are working on a design-time infrastructure that is lighter-weight than the MS-designed one. It is not that we don't like that architecture - we will definitely be using it for simple things like type converters - it is simply that the Visual Studio architecture requires the creation of per-element IDesigner objects. In VG.net (http://www.vgdotnet.com) the user may be working on hundreds or thousands of elements at once, and that is too much overhead, especially for something that is usually a per-class rather than a per-instance operation, such as filtering exposed properties.

    This alternate architecture would be for people using a designer SDK we are creating - people building custom graphical designers that are not integrated into Visual Studio .NET. They need something truly scalable, which Visual Studio is not. The config idea would make the whole SDK more flexible; we could have config files for all the common design-time bindings, as sketched below.
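
    Perhaps something along these lines (the schema here is invented purely for illustration):

        <designTimeBindings>
          <!-- Map run-time element types to designer types without using
               attributes, so the bindings can change without recompiling
               the run-time DLLs. -->
          <bind element="VectorGraphics.Rectangle"
                designer="VectorGraphics.Design.RectangleDesigner, VG.Design" />
          <bind element="VectorGraphics.Ellipse"
                designer="VectorGraphics.Design.EllipseDesigner, VG.Design" />
        </designTimeBindings>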

    In fact, there are more design-time attributes than just the one for IDesigner - there are lots of them, and putting them in an external file would help quite a bit. I wonder what Brian Pepin thinks of this idea.
  • Anonymous
    April 19, 2005
    In my previous post I talked about setting up an API usability study. In this post, I'll talk about how...
  • Anonymous
    May 05, 2005
    With the task list in place and participants recruited, it's time to run the study. My experience has...
  • Anonymous
    July 11, 2005
    I just realised that I never got around to finishing off a series of posts on how to design and run an...