We recently led a usability research project for a major regional health system that includes one of the top-ranked hospitals and medical schools in the U.S.

Our primary research was moderated usability testing with 7 participants. To supplement our qualitative research, we gathered data from 2 large-scale website surveys.

Website surveys pose a lot of challenges for researchers and marketers, and cause a lot of frustration for users. Only occasionally do they produce insights or drive action. In this case, ours achieved both, along with a benchmark metric, at minimal cost to the user experience. Below I share our approach and what we learned.

Step 1: Pilot, Tweak, Launch

Every website is different, as is every website audience. We started by piloting a couple of surveys we’ve found effective at both measuring UX and collecting qualitative responses on similar websites. We then tweaked the question wording and display criteria until we saw a strong response rate on this site. We settled on these 2 surveys.

“Ease” Survey

This survey starts with a simple, multiple-choice ease-of-use question.  For those who select “Difficult”, we ask an open-ended “why” follow-up.  We served this to users as a popup after they’d visited 4 pages on desktop or 3 pages on mobile.
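For illustration, here’s a minimal sketch of that display logic in TypeScript. The showEaseSurvey helper, the storage key, and the 767px mobile breakpoint are all assumptions for the example; in practice, targeting rules like these usually live in the survey tool itself rather than hand-rolled code.

```ts
// Sketch: show the ease survey after the Nth page view in a session.
const PAGE_VIEW_KEY = "ease_survey_page_views"; // hypothetical storage key

function showEaseSurvey(): void {
  // Placeholder: a real implementation would render the popup here.
  console.log("Show ease-of-use survey");
}

function maybeTriggerEaseSurvey(): void {
  const views = Number(sessionStorage.getItem(PAGE_VIEW_KEY) ?? "0") + 1;
  sessionStorage.setItem(PAGE_VIEW_KEY, String(views));

  // Crude mobile check via viewport width (an assumed breakpoint).
  const isMobile = window.matchMedia("(max-width: 767px)").matches;
  const threshold = isMobile ? 3 : 4; // 3 pages on mobile, 4 on desktop

  if (views === threshold) {
    showEaseSurvey();
  }
}

// Run once on each page load.
maybeTriggerEaseSurvey();
```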

“Looking” Survey

Here we flip things: the survey starts with an open-ended task question and follows it with a multiple-choice question. We served this to users as they were about to leave the site (an exit-intent survey).
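Again for illustration, here’s a sketch of one common desktop exit-intent heuristic: fire when the cursor leaves through the top of the viewport, as if heading for the browser chrome. The showLookingSurvey helper is hypothetical, and a survey tool may use other signals (scroll behavior, time on page) alongside or instead of this one.

```ts
// Sketch: show the "looking" survey on exit intent, at most once per page.
let lookingSurveyShown = false;

function showLookingSurvey(): void {
  // Placeholder: a real implementation would render the survey here.
  console.log("Show 'looking' survey");
}

document.addEventListener("mouseout", (event: MouseEvent) => {
  // relatedTarget is null when the cursor leaves the document entirely;
  // clientY <= 0 means it left through the top edge of the viewport.
  if (event.relatedTarget === null && event.clientY <= 0 && !lookingSurveyShown) {
    lookingSurveyShown = true;
    showLookingSurvey();
  }
});
```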

We like these surveys because:

  • They’re short — a max of 2 questions each — helping to maximize the response rate and minimize the annoyance. In both surveys, we still get useful data even if some users don’t answer the second question.
  • They give us both quantitative and qualitative data.
  • In combination, we get a good picture of both common tasks and top frustration reasons, and the overlap between the two.

Step 2: Analyze

Once we had over 600 responses in total, we turned the surveys off and started our analysis. To analyze the results (sketched in code after this list), we:

  • Gathered all survey responses in a single spreadsheet
  • Calculated some ease-of-use metrics
  • Filtered the list to responses with likely task failure or frustration
  • Manually categorized the task failure and frustration reasons
  • Counted the responses within each of those categories
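To make that pipeline concrete, here’s a minimal sketch in TypeScript. The field names, rating values, and category labels are assumptions standing in for our actual survey export and hand-coding; we did this work in a spreadsheet, not code.

```ts
// Sketch: compute ease-of-use shares and rank hand-coded frustration categories.
interface SurveyResponse {
  ease: "Easy" | "Neutral" | "Difficult" | null; // multiple-choice answer (null if skipped)
  comment: string; // open-ended answer, possibly empty
  category?: string; // failure/frustration category, assigned by hand
}

function summarize(responses: SurveyResponse[]): void {
  const rated = responses.filter((r) => r.ease !== null);
  if (rated.length === 0) return;

  // Ease-of-use metric: share of respondents choosing each rating.
  const share = (rating: SurveyResponse["ease"]): number =>
    Math.round((100 * rated.filter((r) => r.ease === rating).length) / rated.length);
  console.log(`Easy: ${share("Easy")}%, Difficult: ${share("Difficult")}%`);

  // Filter to likely failure/frustration, then count responses per category.
  const counts = new Map<string, number>();
  for (const r of responses) {
    if (r.ease === "Difficult" && r.category) {
      counts.set(r.category, (counts.get(r.category) ?? 0) + 1);
    }
  }
  // Rank categories by frequency, most common first.
  console.table([...counts.entries()].sort((a, b) => b[1] - a[1]));
}

// Hypothetical usage with two toy responses.
summarize([
  { ease: "Easy", comment: "" },
  { ease: "Difficult", comment: "Can't log in to my records", category: "Records login" },
]);
```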

Here’s what we found:

46% of the respondents found the site easy to use; 22% found it difficult to use. By itself, this is the least useful finding. But it does provide a baseline to compare the site against similar/competitor sites and against future versions. And it allows the team to set a measurable target. Can we make usability improvements that increase the “Easy” share from 46% to over 60% by next year?

The #1 task area causing failure/frustration was the medical record section for patients, in particular the task of trying to access or log into that part of the site. Example comments included: “Hard getting the activation code”, “Can’t get on [records section]”, “I was unable to retrieve or change my lost password or user I’d.”, “Multiple problems trying to log-in to my records. Total waste of time…useless even when getting help”, and “[records section] is almost impossible to get into. Once you get access it blocks you.”

The #2 task area causing failure/frustration was Search, in particular the task of searching for a doctor. Example comments included: “circle around to the same information or dead end links” and “not being able to search for a doctor and see their patients feedback.”

The #3 task area causing failure/frustration was Contact Info & Appointments. Example comments included: “can’t find faculty email address”, “cant find email addresses for physicians or their admins”, “where is email for [name]?”, “Patients should be able to communicate with Doctor through email”, and “Can I make an appointment online?”

Step 3: Share

We shared these survey findings at a stakeholder workshop, just after the health system’s team had watched 5 of the moderated usability sessions. After sharing the top frustration/failure areas, we showed quotes from each area similar to the ones above.

While the biggest insights on this project came from qualitative testing, the survey allowed the team to:

  • Establish and understand their benchmark usability score
  • Validate/invalidate/quantify their qualitative findings
  • Identify new UX issues that did not come up in moderated testing

While surveys often help us find small UX issues that qualitative research failed to surface, it’s rare that we discover major new issues. On this project, we did.

At the start of the project, the team had told us the patient records section of the site was out of scope for moderated testing. A different team owned that section of the site, and they were not planning to change it anytime soon.

After the workshop, the attendees passed the survey findings on to the sister team that owned patient records, showing that it was the #1 frustration area on the site and sharing the specific user comments. Motivated by this data, the records team is now taking action sooner than planned.


Learn more about our usability research services.