First, Some Background Information
Before I explain how we tested and what the tests revealed, let me first share with you a bit of background on our website. In the summer of 2009, we began working on a complete redesign and rebuild of our site, which had last been re-launched in 2006. It had served us very well for more than three years, but the growth our firm had experienced over that time—both internally and externally—provided a mandate for change. This time, we decided to treat the project in the exact same way we'd treat a client project; we actually followed the planning process we talk about so often, defined a set scope of work, priced it without any kind of "hey, it's for us" discount, subjected it to our resourcing department for scheduling, and put Katie, one of our project managers, in charge. Mark and I, the "clients," approved the $60,000 budget (no joke, this kind of thing is serious business), and prepared to balance our dreaming with the occasional, realistic "no." And you know what? It worked out. We stuck to the scope, remained under budget (thanks to Katie's stellar management and everyone else's self-control), and produced a brilliant new site that did everything we hoped it would. Oh, and we're all still friends.
By January 2011, just a year after launching our redesign, we began to sense that some things about the site still needed to change. We knew there were new things we wanted to do that would require some work, but we also suspected there were minor flaws that needed to be tweaked that we didn't even know about. After all, we use the site every single day; the chances that we'd notice the problems new visitors experience are very low, if not zero.
There Are No Perfect Launches
On that note, I'd like to pause for just a moment to make an important point about the web: There are no perfect launches. It's appropriate that we refer to launching a new website as "going live"; a "live" website begins to change radically and rapidly once it starts being used, which demands ongoing maintenance: bug fixes, design tweaks, functionality changes, and the like. Though we were anticipating a focused round of work a year after launch—what we often refer to as a "Phase 2"—we had been consistently maintaining our site since its relaunch, discovering things that we just would not have been able to anticipate until it began to be used. I say all of this to answer what I can imagine might be anyone's response to the kinds of things usability tests reveal: "Why didn't you build it that way in the first place?" The answer is simpler to say than it is to experience, but I assure you that the reality is sound. You can build a website with all the intelligence, care, earnestness and good intentions possible, but you will find that some of your decisions were good guesses that proved wrong.
One of the wonderful things about the web is that it is interwoven with the people who create and use it, making it an organic entity and environment that is just as unpredictable as the rest of our lives so clearly are. Our job is to continually adapt what we do and how we do it as we learn new things by observation—it's all a work in progress. So, moving on: usability tests will reveal better ways of doing things that you've already spent time and money doing differently. It's ok; that's exactly what they're for.
A Very Simple Process
We wanted to know how we could improve our site—we were serious about that—but we didn't exactly have in mind mounting a long and expensive process to get there. Fortunately, we had an established model to follow provided by our old friend, Steve Krug, of Don't Make Me Think fame. Krug's most recent book, Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems provides a fantastic overview of how to do usability testing in a way that provides fast and valuable results. If you're interested in usability testing and quality assurance methods for web projects, I'd heartily recommend the book; it definitely belongs on every interactive designer's bookshelf. After reading it as a team, we distilled Krug's advice down to a simple three-step process that can be replicated for just about any website. Before I specify the steps, though, I'd like to offer a caveat: This is a basic procedure; Krug provides and recommends more. On its own, it will certainly provide plenty of insight, but it could easily be expanded upon to test more complex functionality.
Step 1: Customizing the Test
Sitting volunteers in front of a website and recording them as they browse is not usability testing. Your first step is to plan the test specifically around the goals of the website you want to evaluate. The framework of the test we ran was pretty basic. First, we had our volunteers spend a few minutes (one or two at most) simply orienting themselves on the homepage, feeling free to narrate their observations as they did. We asked that they not click any links away from the page, but just observe what it contains (scrolling is OK) and then explain to us what they perceived the website to be about. Many volunteers may have difficulty comprehending the purpose of an especially tightly positioned site, so the goal of the orientation is to observe how well the homepage communicates the website's purpose—even if the user doesn't understand what that purpose means. For example, if a volunteer rightly concludes that a website offers "industrial plastics extrusion equipment," you could give the site points for clarity even if the volunteer doesn't know what "industrial plastics extrusion equipment" is.
The rest of the test should be based upon 3-4 specific tasks that you assign to your volunteer. Each task should correspond to specific goals of the site and test the user's ability to complete them. These could include subscribing to content, registering for events, requesting more information, or making a purchase, as long as they target the primary purpose of the site. For example, a test for a website like ours, which relies heavily upon a written content strategy and has corresponding sign-up calls to action, should include at least one task in which the volunteer must find a specific article and sign up to receive that content.
Step 2: Administering the Test
Ideally, your volunteer should be someone who has little to no familiarity with the website you are testing. That means that the designer, developer, planner, account executive, client, and CEO need not apply. Anyone else, theoretically, is qualified. By the way, since you're only going to be asking for about 10 minutes of this person's time, volunteer is certainly still an appropriate word, though it couldn't hurt to reward them for their effort—a Starbucks gift card, lunch on you, something like that. And about those 10 minutes: try to have everything set up before your volunteer arrives so you can really keep it as short as possible. That should really amount to turning the computer on, loading the site in an uncluttered browser, and opening up whatever screen-capture software you've chosen to use to record the session—we've really enjoyed using Camtasia.
Once your volunteer is comfortable, go ahead and start recording. Take them slowly and clearly through each test question, starting with the homepage orientation, being careful not to lead them to any conclusions. Feel free to ask clarifying or expanding questions—things like, "What's happening here?" "Can you say more about that?" or "What was difficult about that?"—but nothing specific enough to transfer any suspicions or biases you already have. Remember, that's why you are not sitting where the volunteer is—you've lost the objectivity they have. It helps to have a hard copy of your test questions on which you can write notes during the test. Even though the video will capture everything your volunteer does, it may not clearly demonstrate other things you observe. If your volunteer gets stuck for too long on any of your 3-4 tasks, that's probably an indicator that something isn't working—either with the site or the test—and it's probably a good time to move on. You'll find that the briefer the test, the more productive it is.
Step 3: Evaluating the Test Results
On its own, observing your volunteers will likely produce plenty of insight into what is not working on your website. But since the process itself is controlled and systematic, it doesn't hurt to be a bit more methodical in your evaluation of the data you've gathered. Each of your testing sessions should have produced a roughly 5-minute video clip. After you've administered all the sessions (somewhere between five and ten would be wise), go through each one and assemble a list of takeaways from each task, then consolidate the points from each session into one master list. Once you've done that, I'd recommend assembling your development team—all the people I said were not qualified to be volunteers—to watch the videos, review your master list, then brainstorm on solutions together.
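If it helps to picture that consolidation step, here's a minimal sketch of merging each session's takeaway list into a ranked master list. The function name and data shape are hypothetical—this isn't from Krug's book or our own process—but the idea is simply to count how many sessions surfaced each issue so the most common problems float to the top of your brainstorm.

```javascript
// Merge per-session takeaway lists into one master list, ranked by how
// many sessions surfaced each issue. (Illustrative helper; names are
// hypothetical.)
function consolidateTakeaways(sessions) {
  const counts = new Map();
  for (const takeaways of sessions) {
    // Use a Set so an issue counts at most once per session.
    for (const issue of new Set(takeaways)) {
      counts.set(issue, (counts.get(issue) || 0) + 1);
    }
  }
  // Most frequently observed issues first.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([issue, sessionsSeen]) => ({ issue, sessionsSeen }));
}
```

Ranking by frequency keeps the team's discussion anchored to the problems most volunteers actually hit, rather than whichever clip was watched last.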
This simple process is likely to generate plenty of feedback and opinions, not to mention work you'll need to do on your website. It certainly did for us. So what I'd like to do now is share with you the tasks that made up our usability tests, an example video clip for each one, and the points of critique we pulled from these sessions. Since we have yet to make any changes to our site in response to the usability testing, our existing site, coupled with the data we gathered, should serve as a good case study for how this process can work.
A Case Study: Newfangled.com
Kaley, one of our volunteers, was extra thorough in her review of our homepage, and unlike just about every other volunteer, did notice our positioning statement (at about 00:50). But the fact that so many of our volunteers missed the positioning statement clued us in to the need to make that information more obvious. We plan to redesign it—in particular, enlarge it—and move it up on the page.
Other volunteers struggled with the main slideshow that stretches across the top of the page. Those who began reading the slides complained that they refreshed too quickly, leaving too little time to process the message. Additionally, because we're using the slideshow to promote short-term and long-term initiatives, some users were either more or less inclined to pay attention to it depending upon how familiar they already were with the site. Those who had seen it before and recognized a slide assumed that the slideshow never changes and ignored it in their session. Those who were new to the site assumed the slideshow's content was the most important thing on the page, and therefore paid closer attention to it than anything else. What we really want is something in between. Incidentally, the volunteers who did pay more attention to the slideshow were also less inclined to scroll further down on the homepage and explore the rest of its content. Not ka-blammo, as the kids say.
The key points from the Homepage Orientation (from all of the sessions we did, including Kaley's) were:
- Make our positioning more clear.
- Rethink the slideshow.
- Get more content above the fold.
Task 1: Locating and Engaging with Content
Kaley headed straight for our search tool to locate the article I asked her to find—my newsletter from last June called Storytelling is the Future of the Web—which was exciting to see in action. I generally assume that most web users tend to find content by browsing available menus, but several of our volunteers recognized and used our search tool first. While using it, Kaley also uncovered a minor bug (which she didn't actually notice herself): When a newsletter page appears in the search results, the link it provides is to the "View on Single Page" version of the article. We'll fix this so that the link directs users to the paginated, "friendly link" version, just like the one you're reading right now.
Kaley had little trouble commenting. She quickly scrolled all the way to the bottom of the page—something that several of our volunteers struggled with, especially on pages with longer comments strings. However, she did uncover another minor detail that we should fix: When she clicked into the comment form fields, the field description disappeared. Since she had clicked into each of them before filling in her information, she was briefly confused about which fields were which. I can imagine that plenty of users might forget what each field label was if they did the same thing. She had no trouble using the sidebar form to subscribe to our newsletter, though.
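The disappearing-description problem boils down to storing a field's label in its value and clearing it on focus: once Kaley clicked into every field, nothing identified any of them. The sketch below is an illustrative reconstruction of that fragile pattern—not our actual form code—modeled as a plain object so the behavior is easy to see:

```javascript
// A simplified model of the fragile "hint lives in the value" pattern:
// the field's description is stored as its value and erased on focus.
// (Illustrative reconstruction; not Newfangled's actual form code.)
function hintedField(hint) {
  let value = hint;
  return {
    focus() {
      if (value === hint) value = ''; // the description is gone for good
    },
    type(text) {
      value = text;
    },
    read() {
      return value;
    },
  };
}
// The sturdier fix is a separate, always-visible label element, so the
// field's value never has to double as its description.
```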
Finally, some users experienced confusion after submitting forms and being directed to their specific "Thank You" pages. We realized that we could simplify the process by having the forms process without forcing the page to reload. That's now on our to-do list as well.
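The no-reload approach is the standard one: intercept the form's submit event, post the data in the background, and show an inline confirmation instead of a separate "Thank You" page. Here's a minimal sketch—function names are hypothetical, and the modern fetch call stands in for whatever XMLHttpRequest wrapper you'd actually use:

```javascript
// Sketch: submit a form in the background so the page never reloads.
// serializeForm and submitWithoutReload are illustrative names, not
// functions from our codebase.
function serializeForm(fields) {
  // fields: an object of { name: value } pairs collected from the form
  return Object.keys(fields)
    .map((k) => encodeURIComponent(k) + '=' + encodeURIComponent(fields[k]))
    .join('&');
}

function submitWithoutReload(form, onDone) {
  form.addEventListener('submit', function (event) {
    event.preventDefault(); // suppress the normal page-reloading submit
    fetch(form.action, {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: serializeForm(Object.fromEntries(new FormData(form))),
    }).then(onDone); // e.g. swap the form for an inline "Thanks!" message
  });
}
```

The `onDone` callback is where the inline confirmation would go—replacing the form's markup in place rather than sending the user to a new page.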
The key points from Task 1 were:
- Fix the search link and comment field bugs.
- Relocate the comments form to the top of the comments string.
- Get rid of the "Thank You" pages and have forms submit and complete without refreshing the page.
Task 2: Locating and Registering for Webinars
While Kaley made the correct assumption that the top webinar listed on our webinars landing page is the upcoming one and that the others listed beneath have already occurred, most of our testing volunteers were confused here. Both the text indicating the date of the upcoming webinar and the "register" link are small enough to be regularly missed by users. Additionally, Kaley uncovered a bug I'd never seen before. When she added her email address to the simple form on our webinar detail page and then clicked the "Register" button, she was redirected to our basic company contact form. When I reviewed the video, I noticed she clicked the button twice, which might have something to do with it. In any case, I've never seen that happen before, so we'll have to look into it. Once she went back and hit the "Register" button again, she—along with every other volunteer—was confused by being redirected to GoToWebinar's offsite registration form. She did successfully complete a registration, but the process revealed plenty of kinks to work out.
The key points from Task 2 were:
- Redesign the webinars landing page to make the upcoming/previous distinction more clear.
- Emphasize the "register" call to action on the landing page.
- Eliminate the redundancy of the GoToWebinar registration form.
Task 3: Agency Projects? What Agency Projects?
Finally, I asked each of our volunteers to find our most recent project done in partnership with an agency and its price range. Every one of them was confused by this. Kaley put it well just seconds after hearing the task: "I wouldn't know whether to click Agency Partnership or Featured Projects." She started with our Agency Partnership page, which unfortunately, didn't give her what she was looking for. Her next logical step was door #2—the Featured Projects page—which she browsed for some time before wondering if she still might be in the wrong place. Unfortunately, she was in the right place—it's just not designed in a way that emphasizes the agency relationship.
Kaley's session, as well as those with the other volunteers, made clear that we need to do a better job interweaving the information on our Agency Partnership, Featured Projects, and Pricing pages. As each is so directly tied to our positioning, it's problematic that they are currently so confusing to use and yield so little clarity to the user.
The key points from Task 3 were:
- Functionally tie together our Agency Partnership, Featured Projects, and Pricing pages.
- Redesign the Featured Projects landing page to be more user-friendly.
- Emphasize the agency partnership on case study detail pages.
Each volunteer test session revealed so much to us that the excitement of learning something new quickly eclipsed any possible feeling of defeat we might have experienced as a result of realizing our website was not perfect.
We reconvened the same project team we used for our last redesign project in order to review the findings and discuss a plan of action, which, as you might imagine, turned out to include many other changes in addition to those I've shared with you here. Again, we're treating this work just as we would any other client work, and having already come to a consensus on the scope and cost—we've estimated it at around 10% of the initial project cost, which is on the low end of the expected range of first-year/phase-two costs for a site like ours—we are currently working out the schedule with the goal of completion by late fall 2011.