Wednesday, April 7, 2010

The Traditional Test

Now, if I were to conduct just one test, I would do so upon having nearly finished the project, i.e., right before going live with the site but while I still had sufficient time to make changes that could quickly and effectively alleviate any truly glaring issues. Whom would I test, and how? Well, again, in a perfect world (Skokie, IL, 1953), I'd test at least fifteen people in a rigorous fashion. In this instance, “rigorous fashion” means sitting each of the fifteen individually behind a two-way mirror but in front of an excellent computer machine, replete with an eye-tracking-enabled monitor, a high-quality microphone, and both overhead and straight-ahead cameras. I would employ Morae, a software suite developed and released by TechSmith Corporation, in order to capture each user's datastream: mouse clicks, on-screen text, keystrokes, program launches, and browser window modifications, not to mention mouse, keyboard, and tower hurls (these are the most fun, and the most destructive).

Naturally, as observeur and arbiteur extraordinaire, I'd sit on the opposite side of the two-way mirror, laughing heartily at the testers as they toiled through the ten or so questions and tasks I'd have developed for them, my hands at the ready to strike the necessary key combinations in order to input real-time notes into my Morae screencast and video. Perhaps I'd ask them questions or instruct them to complete tasks akin to the following:

1. "Could you find a picture of a woman sitting on a chess board and tell me how Kyle managed to do her hair in just twenty minutes flat?"

(This question would force users to think about which navigational button they would have to press in order to land on a page containing this information. Once there, they would have to quickly scan the photos and determine the correct image. As I ask this question now, I am already flooded with newer, infinitely more impressive, audaciously more awesome ideas, like: "Hmmmm, this makes the gallery sound like an excellent candidate for a relational database with a robust (coffee, anyone? I can't be sure if this is a “java” joke...), perhaps Boolean, search function, enabling users to search the voluminous pictures for a variety of pieces of metadata and cull out the attributes they're interested in." Or they could just suffer the unbearable burden of searching through the thumbnails in the gallery and rolling over the pictures so that they can find the picture of the lady on the chess board, read the description, and then find the picture of Kyle doing her hair. How successful one is at this task can tell the researcher [i.e., me] a lot. Why? It'll show me how clearly and articulately the writing describes the photos, as well as how natural and fluid the site's navigational features are.)
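Were I actually to build that searchable gallery, even something as small as the following sketch would illustrate the idea. This uses Python's built-in sqlite3; the schema, filenames, and descriptions are all invented for illustration, not taken from the real site:

```python
import sqlite3

# Hypothetical photo-metadata table: in-memory, throwaway.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE photos (id INTEGER PRIMARY KEY, filename TEXT, description TEXT)"
)
conn.executemany(
    "INSERT INTO photos (filename, description) VALUES (?, ?)",
    [
        ("img_001.jpg", "woman seated on a chess board, updo by Kyle"),
        ("img_002.jpg", "bridal party on the lawn"),
        ("img_003.jpg", "Kyle styling hair backstage"),
    ],
)

# Boolean-ish search: photos whose descriptions mention BOTH terms,
# instead of making the user mouse over every thumbnail.
rows = conn.execute(
    "SELECT filename FROM photos WHERE description LIKE ? AND description LIKE ?",
    ("%chess%", "%Kyle%"),
).fetchall()
print(rows)  # -> [('img_001.jpg',)]
```

A real gallery would want proper tagged metadata rather than `LIKE` matching against free-text descriptions, but the principle is the same: let users query, not scan.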

2. "Please tell me your general impression of this site. Do you like the overall aesthetic, i.e. the look and feel? Is it pleasing to your senses? Would you come back?"

(I love questions like these. Very often we can learn more about a site and how a person really feels about it simply by asking outright, "How do you feel about the pixels of light at which you are currently looking?" People like to talk [most of them] and people have interesting things to say [a few of them]. Obviously, quantitative data can tell us a lot about a site, its various functions, and how effective they actually are. But unlike, say, an anonymous poll, usability tests can be much more personal and can take into account [if you've completed an appropriate IRB review and have the person's permission] various demographic and personal details about the person. And unlike respondents to an anonymous poll or survey, the participants can't really hide anything! You can get to know a person pretty well by sitting and watching them struggle with, elate over, or otherwise stare at something you've created [and obviously think is awesome] for an hour or more! This is invaluable in determining how well one has designed one's site and what one can do to ensure it targets the appropriate user population.)

3. "Could you tell me how you might make a donation to Kyle's brother's film project? And could you, uh, go ahead and make one (thanks...)?"

(Again, this question will test the clarity of the navigational structure as well as the intuitiveness of one of the site's main features, namely, the donation form [PayPal, probably...]. I will be able to determine just how obvious the heading "Documentary" is to users, or whether it is semantically serpentine. Seemingly for certain. For some surfers. Sanguinely not for some others. [Ah, if only all humans were exactly the same!])

4. "Please sign up for news and exciting opportunities from Kyle!"

(This task is straightforward but composed of several steps. The user must first find the sign-up form, then work through it, submit it, navigate to their email, and confirm their subscription. Should be easy, right? Well, this task will show the researcher [yeah, you guessed it: Frank Stallo---err, I mean, me] how intuitive each aspect of the sign-up form is and how easily the entire sign-up process can be completed.)
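The steps above describe a classic double-opt-in flow, which can be sketched in a few lines. Everything here is hypothetical (the function names, the in-memory "database", the example address); a real site would email the token rather than return it:

```python
import secrets

pending = {}       # email -> confirmation token awaiting a click
confirmed = set()  # emails that completed the whole flow

def sign_up(email):
    """Step 1-2: user finds and submits the form; a token is generated."""
    token = secrets.token_urlsafe(8)
    pending[email] = token
    return token  # in reality, this would be sent in a confirmation email

def confirm(email, token):
    """Step 3-4: user opens the email and clicks the confirmation link."""
    if pending.get(email) == token:
        confirmed.add(email)
        del pending[email]
        return True
    return False

t = sign_up("tester@example.com")
print(confirm("tester@example.com", t))  # -> True
```

Each function above is a place a tester can stumble, which is exactly why the task is worth timing step by step.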

5. "Please share this on Facebook."

(This task could be completed in a couple of ways. Obviously, a user could copy the URL of the site and link to it on Facebook, sharing it with their friends. But I'd be looking for them to use the social media icons at the bottom of every page.)

6. "What is your name?"

(These descriptions are getting shorter...)

After all the users have completed the testing, I would inevitably pore over the collected data and determine ways to reduce clicks, rectify shoddy wordage, and refine inordinately meandering processes. It will obviously be important to have taken excellent notes over the course of the sessions, to have procured adequate background information on the users who undoubtedly "volunteered" for the study so as to determine how that may have affected their feedback and task success, and to have developed a plan for manipulating the data in such a way that I could excitedly draw incisive conclusions from statistically significant analysis. I'm not a statistician, so I'll decline to comment much (or at all) on what that might entai---- "Hey! Wait a minute!!" I interject.
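For the non-statisticians among us, the simplest version of that number-crunching is just a per-task completion rate across the fifteen testers, with a rough normal-approximation interval attached. The pass/fail data below is entirely made up for illustration:

```python
import math

# task -> one pass(1)/fail(0) outcome per participant (hypothetical data)
results = {
    "find chess-board photo": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1],
    "make a donation":        [1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1],
}

for task, outcomes in results.items():
    n = len(outcomes)
    p = sum(outcomes) / n
    # ~95% normal-approximation half-width; crude at n=15, but indicative
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"{task}: {p:.0%} success (±{half:.0%}, n={n})")
```

With samples this small, a Wilson or exact binomial interval would behave better than the normal approximation, but even the crude version separates "most people managed it" from "this task is broken."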

Obviously I've just realized that I unwittingly began discussing a framework for a traditional, fully-realized usability testing scenario. If you remember, I said that I wasn't going to do that. For shame.

Well, if I were to perform just one test at just one point of development during the creation and implementation of this site, it'd be something along those lines. But I don't want to test my site just once! And I don't want to sit in a usability lab! And I don't want to pay anybody! At least not with money... ("Hey, I'll buy you a beer and you can sit and listen to me play 'Yankee Doodle' on harmonica!")
