On this blog we have talked often about the importance of the “Why”. Web Analytics usually only helps us understand the “What”: clickstream data typically does not tell us why something happened. We have stressed how important it is to know the Why in order to derive actionable insights around customer behavior on the website and the outcomes from that behavior.

Essentially people do weird stuff on our websites, clickstream won’t tell us why, so we have to ask them why. That’s your Eureka moment.

Early on in this blog I talked about using surveys as a great way to have a continuous listening methodology for your website visitors. In this post we’ll cover the granddaddy of UCD (User Centric Design) methodologies: Lab Usability Testing.

Since many of us in the Web Analytics community aren’t deeply familiar with this methodology, we’ll go into some detail about how lab usability tests are conducted. Hopefully this will elevate that awareness (and we’ll appear super smart the next time someone invites us to participate in one). We will cover:

  • What is lab usability testing?
  • How to conduct a test.
  • Tips on conducting successful tests.
  • Benefits of lab usability testing.
  • Things to watch out for.

From the most uber perspective, User Research is the science of observing and monitoring how we interact with everyday things such as websites, software, or hardware, and then drawing conclusions about how to improve those things. Sometimes we do this in a lab environment (complete with one-way mirrors and cameras pointed at the participants); other times we do it in people’s native environments (offices, homes, etc.).

What is lab usability testing?

Lab usability tests measure a user’s ability to complete tasks. In a typical usability test, a user attempts to complete a task or set of tasks using a web site (or software or a product).

Each of these tasks has a specified goal, with effectiveness, efficiency, and satisfaction measured in a specified context of use.

A typical study will have eight to twelve participants. Early on during these tests, with as few as five users, patterns begin to emerge that highlight which parts of the customer experience or process are working well and which are causing problems.
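
If you are curious why so few participants surface so many issues, here is a quick back-of-the-envelope sketch based on the Nielsen / Landauer rule of thumb. The 0.31 problem-discovery probability is their published estimate, not something any particular study guarantees:

```python
# Back-of-the-envelope illustration of the "five users" pattern noted above,
# using the Nielsen/Landauer rule of thumb: if each participant uncovers a
# typical usability problem with probability p (~0.31 in their published
# data), then with n participants roughly 1 - (1 - p)^n of the problems are
# seen at least once. The 0.31 is an assumption borrowed from that research.

def share_of_problems_found(n_participants: int, p: float = 0.31) -> float:
    """Expected fraction of usability problems observed at least once."""
    return 1 - (1 - p) ** n_participants

for n in (1, 3, 5, 8, 12):
    print(f"{n:>2} participants: ~{share_of_problems_found(n):.0%} of problems surfaced")
```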

Lab tests are conducted by a User Centric Design / Human Factors expert, who is typically supported by a note taker. Key stakeholders connected to the website (or product) participate as Observers, and their job is to get a close understanding of the customer experience. Stakeholders can be business owners, engineers and developers, web analysts : ), product managers, etc., anyone who has something to do with the website or customer experience.

Tests can be conducted with a live version of the website, beta versions, on-screen HTML or PowerPoint prototypes, or even with paper printouts. These paper prototypes, sometimes called wireframes, approximate what a user might otherwise see on a computer screen, but save the development team from having to produce an on-screen product.

Usability tests are typically held in a specially designed room called a usability lab. The lab is split into two rooms that are divided by a one-way mirrored window that allows observers to watch the test without being seen by the test subject. However, you can conduct a usability test without a lab. All you need is a room with a computer in it and a promise from all test observers that they will remain silent and out of the test subject’s sight (that is, behind them) throughout the test.

As the test subjects work on their tasks, a test moderator observes. The moderator takes notes on the user’s actions and records whether the participant is able to complete the task, how long it takes, and what steps they take. While the participant is working on the task, the moderator limits their own interactions to providing initial task instructions and occasionally prompting the participant to further explain their comments.

For example, if the participant says, “that was easy,” the moderator might say “tell me more about that.” This neutral prompt encourages the participant to explain what they thought happened, and why it worked well for them. Because moderators make non-judgmental comments and do not assist, the participant is left to their own devices, as they would be at home or in their office, to complete the task.

All the while the note taker is busy capturing comments from the session and making note of the important points. Observers will do the same. Sometimes observers have the option of interacting with the moderator to ask the participant more questions or to clarify something.

Oftentimes lab usability tests are also recorded on video, both for later review and to present to a larger audience in the company.

Usability tests are best for optimizing UI designs and workflows, understanding the voice of the customer, and understanding what customers really do.

How to conduct a test:

There are four different stages to conducting a successful Lab Usability test.

# 1 Preparation:

The main steps in the preparation phase are:

  • Identify the critical tasks we are testing for. (For example, for amazon.com: How easy is it for our customers to return a product or request a replacement?)
  • For each task create scenarios for the test participant. (For example: You ordered a Sony digital camera from us. When you got the box it was missing a lens cap. You would like to contact amazon for help. What do you do next?)
  • For each scenario identify what success looks like. (For example: Found the correct page, abc.html, on the support site, followed the link to the contact amazon web page, filled out a request, and hit the submit button.) One simple way to write these down in a structured form is sketched after this list.
  • Identify who your test participants should be (new users, existing users, people who shop at competitors’ sites, etc.).
  • Identify compensation structure for the participants.
  • Contact a recruiter, in your company or outside, to recruit the right people for you.
  • Before you actually do the test with a live person, do dry runs with someone internal just to make sure your scripts, etc., work fine. You’ll find issues in these pilots that you can clean up before you do the real thing.
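
One way to keep those first few bullets honest is to write each task down in a structured form before the dry runs, so the moderator’s script, the note taker’s sheet, and the later tally all use the same definitions. A minimal sketch (the field names and layout here are just an illustration, not a required format):

```python
# Purely illustrative way to keep each task, its scenario, and its success
# criteria together. Field names are made up for this sketch.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    scenario: str          # read aloud to the participant
    success_criteria: str  # what "done" looks like, agreed before the test

tasks = [
    Task(
        name="Request a replacement",
        scenario=("You ordered a Sony digital camera from us. When you got "
                  "the box it was missing a lens cap. You would like to "
                  "contact amazon for help. What do you do next?"),
        success_criteria=("Found the correct page on the support site, "
                          "followed the link to the contact amazon web page, "
                          "filled out a request, and hit the submit button."),
    ),
]
```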

# 2 Conducting the test:

This is where the rubber hits the road: you get to see real people! : ) The main steps in this phase are:

  • Welcome your participants and orient them to the environment. “You are here at our company, there is a mirror, people are watching you, we are recording this, and you can do no wrong, so don’t worry and relax.”
  • Starting with a “think aloud” exercise is a good idea. You want to “hear” what the participants are thinking and this exercise will train them to “talk their thoughts”. The main goal is to really understand and uncover the problems they will surely have.
  • Have the participants read the tasks aloud; this makes sure they read the whole thing and understand what the task / scenario is.
  • Now all of you in the company pay attention, close attention. Watch what the participants are doing and observe carefully for the verbal and non-verbal clues about where the participants fail in their tasks, where they misunderstand what your web pages say, or where they go down the wrong path.
  • The moderator can ask the participants follow-up questions to get more clarity (but be careful not to give out answers, and absolutely watch your own verbal and non-verbal clues so that you come across as calm and reassuring as you can to the participant).
  • Thank the participant at the end and make sure to pay them right away (if you can).

# 3 Analysis of the data:

Main steps in this phase are:

  • As soon as possible hold a debrief session with all the observers so that everyone can share their thoughts and observations.
  • Take time to note down trends and patterns.
  • The moderator is responsible for tallying up successes and failures for each participant on each task (a minimal sketch of this tally follows this list).
  • Do a deep dive to identify the core root causes of the failures, based on actual observations. (For example: The FAQ answer on the website was too long. The Contact Us link was not apparent and was hidden below the fold. It was not clear that they could not contact us via phone. Bonus: almost everyone complained that their expectations were not set about when to expect a reply.)
  • Make recommendations to fix the problems identified. Usually this means creating a PowerPoint deck that collects all the scores and then, for each critical task: 1) identify the points of failure, 2) make concrete recommendations that will improve the customer experience, and 3) categorize the recommendations into Urgent, Important, and Nice To Have to help business decision makers prioritize.
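
The tally itself does not need anything fancy. Here is a minimal sketch of how the success / failure scores and time on task might be rolled up per task; the data layout and the example rows are placeholders for illustration, not a prescribed format:

```python
# Minimal sketch of the tally described above: one row per participant per
# task, then completion rate and average time on task per task.
from collections import defaultdict

# (participant, task, completed, seconds on task) -- placeholder rows
observations = [
    ("P1", "Request a replacement", True, 95),
    ("P2", "Request a replacement", False, 240),
    ("P3", "Request a replacement", True, 130),
]

by_task = defaultdict(list)
for participant, task, completed, seconds in observations:
    by_task[task].append((completed, seconds))

for task, rows in by_task.items():
    completion_rate = sum(1 for done, _ in rows if done) / len(rows)
    avg_time = sum(secs for _, secs in rows) / len(rows)
    print(f"{task}: {completion_rate:.0%} completed, {avg_time:.0f}s average time on task")
```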

# 4 Follow up, bug people until fixes are made, test again and measure success:

The traditional role of UCD experts / researchers might end at the step above, but I feel their role continues after the test results are presented: collaborate with the business owners to keep the momentum going on fixing the problems, and offer up their services and UCD expertise to partner with website developers and designers to improve the site experience.

Finally, don’t forget to measure success post-implementation. We spent all that money on testing; what was the outcome? Did we make more money? Are customers satisfied? Do we have lower abandonment rates? The only way to keep funding efforts like this is to show a consistent track record of success that improves either the bottom line or customer satisfaction.
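
For the abandonment-rate question in particular, the before / after comparison can be as simple as this sketch. The numbers are placeholders, not real results, and you would want enough traffic on both sides to trust the difference:

```python
# Minimal sketch of the post-implementation check described above: compare a
# before / after metric such as abandonment rate on the flow that was fixed.
# All numbers are placeholders, not real results.

def abandonment_rate(started: int, completed: int) -> float:
    """Share of visitors who started the flow but did not finish it."""
    return (started - completed) / started

before = abandonment_rate(started=10_000, completed=6_200)
after = abandonment_rate(started=10_000, completed=7_100)

print(f"Abandonment before: {before:.1%}, after: {after:.1%}, "
      f"relative improvement: {(before - after) / before:.0%}")
```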

Tips on Lab Usability Tests:

Some obvious / non-obvious tips on conducting a successful test:

  • Make sure you tell the participants that you are testing the website (or product, or software) and not testing them. People tend to blame themselves a lot; make sure to stress that it is not them and it is not their fault.
  • Don’t rely on what people say; focus on their behavior, because people often report experiences very differently from how they actually experienced them. It is amazing how many times I have observed a completely frustrating (or long) experience and in the end the customer rates it as a 4 out of 5. People are just nice; our job is to make up for that by observing (I know that sort of sounds silly).
  • Try not to answer their questions when the participants ask you how to do something. Try things like “tell me more” or “if this were the case at your home / office, what would you do next?”
  • This one from above bears repeating: Watch your body language to ensure that you are not giving participants any subtle clues.

Benefits of Lab Usability Tests:

  • Lab tests are really great for getting close to a customer, really observing them, and even interacting with them. I realize this sounds like going to see an animal in a zoo, but the reality is that 99% of us will complete our employment with a company never having seen a real customer (and all the while we are supposed to be solving for them). This is an amazingly eye-opening experience for everyone involved (no matter how many times you do it). Be prepared to be surprised.
  • For complex experiences, lab tests can be a great way to get customer feedback early in the process, identify big problems early on, and save time, money, energy, and sanity.
  • For existing experiences this is a great way to identify what is working and what is not, especially if you are completely stumped by your clickstream data (which happens a lot).
  • It can be a great mechanism to generate ideas to solve customer problems. Not solutions, ideas.

Things to watch out for:

  • Twelve people do not a customer base make. : ) Remember that it is only as representative a sample of your customers as you can make it, and there are things like the Hawthorne effect that can impact participant behavior. Don’t jump to definitive, world-changing opinions as a result of a lab test.
  • With the availability of sophisticated testing methodologies on the web (see this post) it is increasingly cheap and fast to put tests in the real world and measure results. So where you might once have tried five versions of a page, now with Multivariate Testing you can try fifty versions and measure success very quickly.

    Before you do a test see if you can simply throw it up on your real site and ask the million people who come to your site what they think.

  • I would caution against doing complex, all-encompassing redesigns of websites or customer experiences based purely on a lab test. You are asking too much of the lab test; it is impossible to control for all the factors that come into play on your real website.

    Besides, remember that on the web revolutions rarely work; evolution does.

  • One of the best things you can do for your company is to not leave lab usability testing just to your UCD professionals (researchers). Pair them up with your web analysts. The latter will bring the real-world data from their tools and what that data is saying, and the former will use that data to construct real tasks and create good scenarios for each task.

    Both User Researchers and Web Analysts can benefit tremendously from a close, sustained partnership (the ultimate combination of the qualitative and the quantitative).

So what do you think? Are you a User Researcher / UCD / Human Factors Specialist? Do you agree with a quant guy’s post? : ) Do you have tips for us? Share your experiences, feedback and critique via comments.
