Nonprofit & Education Case Study:
How the Smithsonian Improved Content Findability with Tree Testing
The findability of digital content is critical at the world’s largest museum, education, and research complex. Early in the redesign process for the Smithsonian Global website, we led IA research that pointed the way toward more intuitive navigation.
“As the world’s largest museum, research, and education complex, we have a responsibility to engage on the pressing global challenges of our time — from climate change and public health, to economic disparities and issues of diversity, equity, access, and inclusion — the solutions to which require an international vision and perspective.”
— Lonnie Bunch, Secretary of the Smithsonian Institution
The Smithsonian Institution’s purpose is “the increase and diffusion of knowledge.” A key activity supporting this purpose is connecting the organization’s deep expertise with global partners to address international challenges.
In over 140 countries across 7 continents, Smithsonian experts collaborate with other organizations on projects in:
- Science and conservation
- Culture and the arts
- Education and outreach
The Smithsonian Global web property showcases the organization’s international work and its impact. One of its goals is to attract and engage new global partners.
A Smithsonian team was recently working on a redesign of the website to better communicate this international work. One specific goal of the redesign was to make projects, expertise, and other content easier to find.
In partnership with the design firm Provoc, Marketade led information architecture research to support the redesign. In this case, we picked tree testing as the best method because:
- The redesign was underway and a draft of a new site map was already created
- We had a short window of time to complete this research and provide recommendations
Tree testing is the fastest, most effective way to assess a site’s information architecture. We ask representative users to find content using a simple, clickable “tree” of the site’s navigation, and we record each click. The resulting data shows which findability tasks are easy for users and which are difficult. We learn which category structures and labels make sense to users and which cause confusion.
Here are the key steps we followed for this study:
- We met with project stakeholders to reach a consensus on the target audiences for the website; while many Smithsonian properties have a broad audience, this site focuses on specific professional audiences working on or supporting global projects.
- We also met with stakeholders to understand the top user goals on the site and the common scenarios or contexts that brought users to the site.
- We reviewed the draft site map and worked with stakeholders to identify the most relevant page(s) and content for each user goal. In other words, what is the best destination for a user with this goal?
- Based on the goals and scenarios, we wrote tasks that work well in a tree test study. For each task, we picked 1 or more target pages (i.e. the correct destination).
- We used social media posts and one other source to recruit 24 participants who matched the site’s user profiles.
- We loaded the site map, the tasks, and the targets in Treejack, an online tree testing tool.
- We ran moderated and unmoderated research sessions with the participants.
- We analyzed the qualitative and quantitative data and identified key findings.
- We wrote a short report for Smithsonian and Provoc covering our findings and recommendations.
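To make the analysis step above concrete, here is a minimal sketch of how tree-test click paths can be scored into the success metrics that tools like Treejack report. This is purely illustrative, not Treejack’s actual implementation: the `score_task` function, the node labels, and the “no node visited twice” rule for direct success are all our assumptions.

```python
# Illustrative sketch only -- score_task, the node labels, and the
# direct-success rule are assumptions, not Treejack's implementation.

def score_task(paths, targets):
    """Score one tree-test task.

    paths:   one click path per participant -- a list of node labels,
             ending at the node the participant chose as their answer.
    targets: the set of labels counted as correct destinations.
    """
    direct = indirect = failed = 0
    for path in paths:
        if path and path[-1] in targets:
            # If no node appears twice, the participant never backtracked
            # up the tree: count it as a "direct" success.
            if len(set(path)) == len(path):
                direct += 1
            else:
                indirect += 1
        else:
            failed += 1
    n = len(paths)
    return {"direct": direct / n, "indirect": indirect / n, "failure": failed / n}

# Four hypothetical participants looking for a conservation project page:
paths = [
    ["Our Work", "Science & Conservation", "Projects"],   # straight there
    ["Our Work", "Culture & the Arts", "Our Work",
     "Science & Conservation", "Projects"],               # backtracked first
    ["Resources", "Reports"],                             # wrong destination
    ["Our Work", "Science & Conservation", "Projects"],   # straight there
]
print(score_task(paths, {"Projects"}))
# → {'direct': 0.5, 'indirect': 0.25, 'failure': 0.25}
```

Low direct-success rates on a task are exactly the kind of signal that flags a vague label or an overlapping category in the draft site map.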
The bad news: With the draft site map, our research showed that 50% of the site’s core tasks fell below findability benchmarks. Why? The qualitative data pointed to a few problems, including vague category labels and overlapping categories.
The good news: It was early in the redesign process. We were able to make major changes to the structure of the IA and the category names quickly and easily.
What might have taken months to fix after launch took only hours to fix at this stage.
Later in the redesign, we led wireframe testing. While that research covered more than IA and findability, we used the sessions in part to test the navigation changes the team had made after tree testing. With the new navigation, users were far more successful at finding the content they needed and starting the core tasks on the right foot.