IA Research Case Study:

Tree Testing for Vacation.com Settles a Navigation Debate

Project team members disagreed on the best category labels for a new website’s navigation. Qualitative and quantitative tree testing quickly settled the debate and helped to improve content findability.

Challenge: Lack of Alignment on Navigation

A division of Travel Leaders Group (TLG) was preparing to launch a completely new version of its web property, Vacation.com. The new site would be a content hub and lead-generation vehicle focused on luxury travel. The content was meant to inform and inspire travelers who are starting to think about their next vacation and, over time, convert them into leads for the company’s large network of travel agents.

With the help of a marketing agency and a design agency, the new site’s design and content were almost ready for launch. Yet the teams were struggling to get on the same page regarding the site’s navigation.

One group of stakeholders wanted navigation labels to be short and inspiring. In line with the site’s inspirational theme, this version of the IA consisted of 1-word labels that spoke to travelers’ aspirations. For instance, a category named “CELEBRATE” would house content on family reunions and anniversaries, while “SAVOR” would draw in people seeking travel focused on food and wine.

Another group of stakeholders wanted navigation labels to be more descriptive. “It’s ok to have these inspirational words,” this group said. “But we need to follow them with some words that make it more clear what’s in this category.” Rather than just “CELEBRATE”, for instance, the label would be “CELEBRATE: Families & Milestones”.

Approach: A/B/C Navigation Tree Testing

To help get the project team on the same page, Marketade led a tree testing study.

Tree testing is our go-to IA research method for evaluating the structure and/or labels of an early-stage navigation. We love the method because it’s lightweight and because it captures performance metrics. This makes it great for apples-to-apples comparisons of different versions of an IA. Across qualitative and quantitative testing, we can divide our participant pool, show each segment a different version, and compare the performance.

For this study, we went with an A vs. B vs. C test, comparing 3 versions of the navigation:

  • A: Primary labels only (e.g. “EXCITE”)
  • B: Primary labels and sub-labels (e.g. “EXCITE: Adventures + Events”)
  • C: Sub-labels only (e.g. “Adventures + Events”)

Here are the key steps that we followed:

  • We collaborated with the content team to identify 12 representative articles, 2 from each of the 6 navigation categories.
  • We wrote a scenario for each of the 12 content articles. An example: “You and your partner are planning a trip to Costa Rica and you’re thinking about doing a surf camp while you’re there.” To ensure that we captured an accurate picture of how users would navigate the site in the real world, our scenarios avoided words contained in the navigation labels, e.g., “honeymoon”, since this would artificially lead users to the correct links.
  • We ran a handful of moderated sessions with participants that we recruited via social media. We used these for navigation-related insights as well as to identify and fix any problems with our scenarios.
  • We conducted quantitative research with 300 participants recruited through Amazon’s Mechanical Turk. Each participant worked through 6 scenarios, selected at random. After reading each scenario, the participant chose the link they would use to complete that task or find that content.
  • We assessed the navigation versions by comparing the success rate for each task, that is, the percentage of participants who picked the correct path for that task (a minimal sketch of this calculation follows this list).
  • We used the qualitative sessions to understand the “why” behind some of the patterns we saw in the quantitative data.
  • We wrote a 3-page summary report to share the findings and recommendations. The report linked to a more detailed analysis of the data and to session clips from the moderated testing.
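
To make the comparison concrete, here is a minimal sketch of the success-rate calculation. The participant records, task names, and field layout below are invented for illustration rather than taken from the study’s data, but the arithmetic is the same: for each version and task, success is the share of participants who picked the correct link.

```python
from collections import defaultdict

# Illustrative records only: participant IDs, task names, and wrong picks are
# hypothetical, not the study's raw export.
# Each record: (participant_id, version, task, chosen_label, correct_label)
responses = [
    ("p01", "A", "surf camp",   "EXCITE",                           "EXCITE"),
    ("p01", "A", "anniversary", "SAVOR",                            "CELEBRATE"),
    ("p02", "B", "surf camp",   "EXCITE: Adventures + Events",      "EXCITE: Adventures + Events"),
    ("p02", "B", "anniversary", "CELEBRATE: Families & Milestones", "CELEBRATE: Families & Milestones"),
    # ...the real study had 300 participants, each seeing 6 random scenarios
]

totals = defaultdict(int)   # answers per (version, task)
correct = defaultdict(int)  # correct answers per (version, task)
for _pid, version, task, chosen, answer in responses:
    totals[(version, task)] += 1
    if chosen == answer:
        correct[(version, task)] += 1

# Success rate = percentage of participants who picked the correct path.
for version, task in sorted(totals):
    rate = 100 * correct[(version, task)] / totals[(version, task)]
    print(f"Version {version} | {task:<12} | {rate:5.1f}% success")
```

Averaging these per-task rates within each version produced the version-level comparison described below.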

Result: Team Alignment & Findable Content

In the end, the data was clear: versions B and C far outperformed version A in terms of content findability.

Using primary labels only (e.g. “CELEBRATE”) would have been a disaster in terms of the user experience. In version A, less than half of the participants selected the correct category for a given task, on average. By contrast, over two-thirds of participants were successful on versions B and C.

This was the most important finding from the research, and it quickly settled the internal debate among project team members. Once we shared the report, support for version A disappeared.

Versions B and C were tied in terms of performance. Their success rates were the same and their median task completion times were the same. Our researchers preferred version C; if words in a site’s navigation aren’t helping, let’s cut them to reduce the cognitive load. But this wasn’t a strong recommendation, and the project team picked version B in the end.

Beyond this, we found a few opportunities to further improve the performance of version B by using clearer, more descriptive sub-labels. For instance, we recommended replacing “Families & Milestones” with “Reunions & Anniversaries”.

Finally, we recommended cataloging some articles under multiple categories. For example, an article mentioning both a country and its cuisine should be cataloged under both the “Explore: Countries + Cultures” and “Savor: Food + Wine” navigation links. As part of this recommendation, we stressed that the site should maintain a primary hierarchy and the team should not go overboard with placing content in multiple categories.
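
As a rough illustration of that content model, the sketch below gives each article a single primary category plus an optional, short list of secondary categories for cross-listing. The field names are assumptions made for the example, not the site’s actual CMS schema.

```python
# Sketch of one article record; field names are assumptions, not the actual CMS schema.
article = {
    "title": "A food lover's guide to Portugal",        # hypothetical article
    "primary_category": "Explore: Countries + Cultures",
    "secondary_categories": ["Savor: Food + Wine"],      # cross-list sparingly
}

def navigation_links_for(article):
    """Every navigation category under which this article should appear."""
    return [article["primary_category"], *article["secondary_categories"]]

print(navigation_links_for(article))
# ['Explore: Countries + Cultures', 'Savor: Food + Wine']
```

Keeping the primary category mandatory and the secondary list short reflects the caveat above: maintain one primary hierarchy and don’t go overboard with multiple placements.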

The project team quickly acted on the recommendations and, months later, launched Vacation.com with the navigation labels that came out of this lightweight study.

More Case Studies


How Tree Testing Improved Baileigh’s Product Findability by 85%

Sales reps were overwhelmed by calls from site visitors unable to find small-ticket products online. IA research helped the company overhaul its site structure and increase self-service, web sales, and sales team productivity.

Improving a UN HR Portal’s Findability through IA Research

A large United Nations agency wanted to help its 2,000+ employees find HR-related content more easily. We led a card sort to generate a new information architecture. We then tested and improved the IA through a first-click test.

About the Project

  • Industry: Travel
  • Platform: Website
  • Audience type: Consumers
  • Specific audiences: Luxury travelers
  • Methods: Tree testing (IA research)
  • Stakeholder teams: Marketing, Creative
  • Length: 1 month
  • Organization size: 4,600 employees
  • Organization locations: New York City, Minneapolis area, Washington DC area