Wednesday, April 7, 2010

Stage 1: Time-Aware Research

I approach websites like I approach any creative pursuit. For instance, when I've written a new song or had an interesting musical idea, I allow it to percolate for a time, in an internalized, insular way. Specifically, I play it by myself for a while, roll it around in my head for a longer while, and then finally allow it to burst forth brilliantly and blindingly. This bursting forth occurs quickly and results in a rough, raw, and unrefined 'product'. By product, I mean a 'demo' or, if you will, a 'mockup' or 'wireframe': a basic outline of the song, recorded quickly and easily onto tape or computer. Once that is complete, it is ready for limited dissemination, whereby people whose opinions I respect on the matter listen to the song and provide feedback.

What would be an equivalent of this process for wireframes or mockups?

Obviously, the first thing that comes to mind is the dissemination of said mockups and wireframes, perhaps to GSLIS peers and professors, coworkers, friends, or family. The feedback received at that stage of development is invaluable.

But what are some other avenues for objective, unbiased review of initial design ideas?

Well, here is one.

This website provides a free outlet for designers and developers to share initial mockups or design drafts with others whose opinions are at once informed (maybe) and unbiased (we hope...). It also gives one an opportunity to see others' early designs and gather ideas for one's own creations. It affords a quick and easy way to see feedback from a variety of personalities and backgrounds in one place. Not a bad little tool for getting things off the ground. One can invest as much time as one likes (though the potential for time wastage is high), and it doesn't cost anything at all! Whether anyone actually bothers to look at the design is something to consider as well, but as a start, I wouldn't hesitate to at least contemplate uploading a design and gathering some feedback.

Here's another tool I might employ.

This website offers researchers the opportunity to “predict” how potential users might look at their site. The nice part about this tool is that you don't actually need a completed site; working from your wireframe or mockup, and using the “powerful algorithms” developed by Feng-Gui's "world-renowned" scientists, you can catch a glimpse of how users might scan your site well before you ever sit down to start coding it. Neat. You also don't need to bother any potential users or solicit outside help: the “heatmap” that Feng-Gui creates is generated entirely by computer simulation.
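Feng-Gui doesn't publish its algorithms, but the general family of techniques this kind of simulation draws on (visual-saliency models) can be sketched in a few lines. Everything below is my own toy invention for illustration, not Feng-Gui's method: a "page" is a grid of brightness values, and each cell is scored by how strongly it contrasts with its neighbors, on the rough theory that high-contrast spots are where a simulated eye lands first.

```python
# Toy contrast-based saliency sketch (hypothetical; the real tool is
# surely far more sophisticated). Score each cell of a brightness grid
# by its difference from the mean of its neighbors.

def saliency_map(grid):
    """Return a grid of local-contrast scores, same shape as `grid`."""
    rows, cols = len(grid), len(grid[0])
    scores = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbors = [
                grid[nr][nc]
                for nr in (r - 1, r, r + 1)
                for nc in (c - 1, c, c + 1)
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) != (r, c)
            ]
            scores[r][c] = abs(grid[r][c] - sum(neighbors) / len(neighbors))
    return scores

# A mostly uniform page with one bright element (say, a header image).
page = [
    [10, 10, 10, 10],
    [10, 90, 10, 10],
    [10, 10, 10, 10],
]
heat = saliency_map(page)
hottest = max(
    ((r, c) for r in range(len(heat)) for c in range(len(heat[0]))),
    key=lambda rc: heat[rc[0]][rc[1]],
)
print(hottest)  # (1, 1) -- the bright element dominates the map
```

The one bright cell ends up "hottest", which is the intuition behind why a heatmap like Feng-Gui's can flag an obtrusive scroll bar or header image without any human ever looking at the page.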

I tried it with a website I had created a long time ago and more or less abandoned. I've linked the image from my Simmons web space on my project pages wiki.

Sadly, I don't think it worked terribly well, but then again, I don't think it was the best website with which to give this tool a whirl. As you can see, the “users” were really focused on the scroll bar (which is really obtrusive and conspicuous, isn't it?), the center image, and, for some reason, the right-hand side of the screen just under the main header image. Why there? I don't know.

Another very useful, free (albeit only in its most limited version) tool is Chalkmark.

It takes no time to sign up for an account with this really cool online tool. Once I made an account, I was able to quickly set up a new survey. Chalkmark allows you to upload images of your site and then administer a survey of questions that pertain to each image. Users click on the location in the image that they believe best answers or suits the question or task at hand. There are many customizable options for determining how participants access your image and how the survey is administered. The free demonstration allows only one survey of up to three questions. I uploaded a test so that readers of this article can try the survey I developed for Kyle's site.
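Under the hood, scoring a first-click test like this presumably reduces to checking whether each participant's click lands inside a target region of the screenshot. A minimal sketch of that idea, with the target box and click coordinates entirely invented for illustration (Chalkmark's actual internals are not public):

```python
# Hypothetical first-click scoring: each task defines a target bounding
# box on the screenshot; a participant's click either hits it or not.

def hit_rate(clicks, target):
    """Fraction of (x, y) clicks inside a (left, top, right, bottom) box."""
    left, top, right, bottom = target
    hits = sum(
        1 for x, y in clicks
        if left <= x <= right and top <= y <= bottom
    )
    return hits / len(clicks)

# Task: "Where would you click to contact Kyle?" -- a made-up bounding
# box around a "Contact" link in the main navigation, plus sample clicks.
contact_link = (400, 60, 480, 90)
first_clicks = [(410, 75), (455, 80), (120, 300), (430, 68)]

print(hit_rate(first_clicks, contact_link))  # 0.75
```

A low hit rate on a task like this is exactly the kind of signal that the wording of a label or link is off, which is what the survey below is fishing for.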

Go here to take the very quick test. The tasks in this survey are obviously an attempt to call out any significant problems with the wording of labels or links. But I designed it so that each task or question is not relevant to the main content of the particular page, but rather forces users to focus their attention on either the functional links or the main navigation. As the artistic and aesthetic elements of the site have yet to be realized, it makes no sense to ask users about qualitative aspects of the site.

If I'm able to get in touch with Kyle (no small task), I'm at a distinct advantage. All he has to do is post the link to his Facebook status, and it is very likely that at least 300-400 of his Facebook “friends” would respond to the survey (approximately 30-40% [the usual response rate when soliciting a large group of people en masse {how many levels of asides can I go? \\so many!\\}]). Granted, there would be an element of the “lazy” or “disinterested” tester inherent in this, as there is in any testing scenario.
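As a quick sanity check on those figures (the friend count here is an assumption implied by them, not a number from Kyle):

```python
# Back-of-the-envelope estimate of survey respondents.
friends = 1000          # assumed friend count implied by the 300-400 figure
low, high = 0.30, 0.40  # assumed mass-solicitation response-rate range

expected = (int(friends * low), int(friends * high))
print(expected)  # (300, 400)
```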

But the lazy or malignant tester is not the only obstacle to obtaining useful results!

“Time-aware research” is a concept that is getting a lot more play in the user-experience and usability-testing field recently. In fact, it probably describes one of the chief causes of “disinterested” or “lazy” tester syndrome. Basically, the premise is this: usability testing is by its very nature “artificial”. We set up a day or two in the lab, give people $20 or some other small stipend so they deem the study worth their time, formulate a few random tasks (or more scientifically developed tasks, designed to really ”engross” the user in what he or she is doing), and give the user ample time to complete each task, with no pressure of a real-world scenario bearing down on them. They want to perform the tasks because they want the stipend, but they don't have a vested interest in what they are doing. They don't “own” the goals of the tasks, nor are they particularly invested in the outcome or the goals sought by the researchers.

However, a visit to a website that is borne of necessity, and the testing that follows from that use, stands a far greater chance of producing vital observational data. For this reason, there seems to be ample evidence that a “self-selecting” group of “participants” is more effective than what essentially amounts to a “bribed” group of “subjects”. These effects can no doubt be mitigated with effective research into potential participants' backgrounds and interests; however, in a situation like mine, at this point in time, such research would be impossible. Granted, I am lucky enough to have a large group of people, namely friends of Kyle, to draw upon for my data collection, but in most cases I would be relying on people whom I have never met and who would inevitably be “asked” to participate in a study in which they have no vested interest.

This is one reason why the GSLIS Usability Lab seeks out clients like EBSCO, Brigham and Women's Hospital, and Harvard Catalyst (associated with Harvard Medical School and Center in the Longwood Medical Area). EBSCO's users are actually here, at Simmons. And the hospitals can rely on the fact that nurses, doctors, and other health professionals from the LMA can be solicited to take part in studies done on campus. These potential participants can walk over to Simmons and complete some tasks in the Catalyst system on their lunch break.

The difference is analogous to sitting down friends and family to listen to a demo CD versus popping a CD in the car's player during a long ride together.
