Challenge: The Marketer-Designed IA

We’d seen it before and we knew it spelled trouble.

We even had a name for it: “the marketer-designed IA” (information architecture). As a recovering marketer, I’d created plenty of such IAs myself over the years, before UX researchers showed me how poorly they performed with real users. Now I knew what to look out for, and I saw it loud and clear in a PowerPoint deck.

***

The marketing team at a Fortune 500 financial company, with a well-recognized consumer brand, was working with their IT group to create a microsite. The site would house informational and brand-focused content, which had previously been scattered across newsletters, blogs, and other places.

They had about 125 articles ready to go, spanning a wide range of topics, and plans to create hundreds more. The microsite would have its own navigation, and the team needed a way to organize the content that made sense to users.

The problem was that they already had some strong ideas about how the information architecture should be structured and labeled. And none of those ideas had made contact with a user; they were based solely on the intuitions of analysts and writers.

Yes, they now wanted to do some user research. But they’d already invested a lot of time into coming up with their ideas, and so they 1) wanted the research completed within 1 week, and 2) were just looking for it to point to a winner among existing ideas, maybe with some tweaks.

“What if the best IA for customers looks nothing like these 3 that you’re showing us?” we asked. “If we limit ourselves to a quick round of testing and they perform poorly, we won’t have an opportunity to come up with something truly human-centered.”

We pointed out that, while we had no way to know at this point whether their proposed categories were valid, we were pretty sure that their proposed category names would perform poorly. Like many marketers, this team was proposing labels that sounded cute and clever (e.g. Live / Laugh / Learn), but were unlikely to make sense to customers — most of whom are trying to quickly find information or complete a task.

Solution: Part 1 – Card Sort Study

After some back and forth, the project team agreed to a 4-week research and design project that started with a card sort study and then put IAs through multiple rounds of tree testing, with design iterations in between.

Here we cover the card sort phase. In a future post, we’ll discuss the tree testing and rapid design iterations.

In a card sort, we present users with a collection of cards representing content or functionality on our site, ask them to group the cards in ways that make sense to them, and then have them label the groups. After watching enough users go through the process, thinking aloud as they go, we’re in a great position to generate intuitive IAs.

Card sort example with grocery items
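If it helps to picture the output, here’s a minimal sketch of how a single participant’s sort might be captured as data. The card titles and group labels below are invented for illustration; they aren’t from this project.

```python
# Hypothetical card sort data for one participant (illustrative only).
# Cards are numbered; the participant creates groups, places cards in them,
# and gives each group a label of their own choosing.

cards = {
    1: "How much should you save each month?",
    2: "What do you know about credit scores?",
    3: "Tips for first-time homebuyers",
    4: "Teaching kids about money",
}

# One participant's result: their group label -> the card IDs they placed in it
participant_sort = {
    "Saving & budgeting": [1],
    "Credit basics": [2],
    "Buying a home": [3],
    "Family finances": [4],
}

# Across a study we collect one of these per participant and look for cards
# that repeatedly land in the same group (see the analysis step later on).
all_sorts = [participant_sort]
```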

Running a card sort is tough. But it’s the best starting point for creating an IA. Here are the steps we went through, along with some of the challenges we overcame.

#1. Review content inventory.

The project team sent over a content inventory spreadsheet covering existing content and some planned future content. The spreadsheet included article titles, sub-titles, and links to the articles that already existed. We spent a good chunk of time reading them.

#2. Select representative content.

Out of 125 potential content pieces, we picked 100 for the card sort. Often we aim for 30 to 50 cards; in this case the content was relatively easy to understand and group, and there was a lot of it, so we went higher. The 25 we removed covered topics already well represented by the 100 we kept.

We made sure that all of the cards were at the same bottom “level”; e.g. we didn’t mix article pages with category pages.

#3. Get feedback on content, revise.

We shared the proposed list of articles with the project team and asked if it was a representative sample of future content. We made some revisions based on their feedback.

#4. Write card labels.

While it would have been nice to simply reuse the writers’ article titles as the card labels, we ended up rewriting nearly all 100 of them to 1) be easy for participants to understand, 2) accurately represent the content, and 3) spell out any acronyms or jargon. For most articles we used the sub-title as the starting point, since the sub-titles were more descriptive than the titles.

#5. Fix biasing labels (don’t skip this!).

It’s very easy to bias participants in a card sort study and end up with categories that are less about how users approach the concepts and more about their keyword- and pattern-matching abilities. We have to work hard to avoid the latter, which often leads to a poor IA.

We reviewed the list of cards and spotted common words used across labels, and then found synonyms for those words. For example, “quiz” appeared in 5 of the cards. If we’d left them as is, we would have seen users grouping those cards together even if the underlying content didn’t go together in users’ minds. So we replaced one “quiz” with “exam”, turned another label into a “what do you know about …?” question format, and so on.

We also looked for common patterns across the cards, e.g. “resources for homeowners” and “tips for parents”, and varied the word order of those.
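One way to catch these shared words systematically is a quick frequency pass over the draft labels before fielding the study. Here’s a rough sketch in Python; the labels and stop-word list are made up for illustration, not taken from this project.

```python
from collections import Counter
import re

# Hypothetical draft card labels (not the real ones from this project).
labels = [
    "Quiz: how well do you know credit scores?",
    "Take our retirement readiness quiz",
    "Quiz yourself on mortgage basics",
    "Tips for parents: saving for college",
    "Tips for homeowners: budgeting for repairs",
]

# Words too generic to act as grouping cues.
STOPWORDS = {"a", "an", "the", "for", "to", "of", "on", "our",
             "you", "your", "how", "do", "well", "know"}

# Count how many different labels each remaining word appears in.
word_counts = Counter()
for label in labels:
    words = set(re.findall(r"[a-z]+", label.lower())) - STOPWORDS
    word_counts.update(words)

# Flag words shared by 3 or more labels -- candidates for swapping in synonyms.
for word, count in word_counts.most_common():
    if count >= 3:
        print(f"'{word}' appears in {count} labels -- consider rewording some of them")
```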

#6. Get feedback on card labels, revise.

Just as we’d done with the content list, we shared the card labels with the project team and got their feedback. We also shared the list of cards with a couple people unfamiliar with the project and asked them to tell us what each one meant. This helped us spot some problems, in particular card labels that were vague or misleading. We made revisions and moved on.

#7. Run pilot study, revise.

We ran a pilot card sort with 2 participants. Based on what we saw, we reworded more labels to further reduce common words and needless words. We also removed 25 more cards that now seemed unnecessary, which made the sort easier to complete within our target time of 15 to 20 minutes.

After numerous iterations, we now had 75 cards that we felt good about in terms of representativeness, descriptiveness, clarity, and lack of bias. We were ready for the real study.

#8. Recruit participants.

Often this is a hard step. Here it was easy, because the target market was consumers of a widely used line of products. We lined up 55 participants.

#9. Run qualitative sessions.

We ran 5 users through a remote think-aloud card sort, one at a time. Hearing their thoughts allowed us to understand why people group certain topics together, and more generally helped us see the content from a user’s point of view.

#10. Run quantitative sessions.

We ran 50 users through a remote unmoderated card sort, one at a time. We couldn’t hear what these participants were thinking, but we now had a more robust set of data to spot patterns. Combined with the 5 think-aloud sessions, we had the right mix of qualitative and quantitative.

#11. Analyze results.

We ran the quantitative card sort through OptimalSort software, which provides great tools for analysis, in particular dendrograms and a similarity matrix. Those tools are so good that it’s tempting to walk away from them with your new IA in hand. We have to remind ourselves that, along with our qualitative insights, their outputs are inputs that help inform our designs.

A similarity matrix helps us see grouping patterns
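For the curious, the similarity matrix itself is simple to compute from raw sort data: for each pair of cards, the share of participants who placed them in the same group. Below is a minimal sketch with hypothetical data; this is our own illustration, not how OptimalSort does it internally.

```python
import numpy as np

# Hypothetical raw results: one dict per participant, mapping their group
# label to the card IDs they placed in that group.
all_sorts = [
    {"Credit": [1, 2], "Home": [3], "Family": [4]},
    {"Money basics": [1, 2, 4], "Buying a home": [3]},
    {"Learn": [2], "Plan": [1, 4], "Home": [3]},
]

card_ids = sorted({c for sort in all_sorts for group in sort.values() for c in group})
index = {card: i for i, card in enumerate(card_ids)}

# similarity[i][j] = fraction of participants who put cards i and j in the same group
similarity = np.zeros((len(card_ids), len(card_ids)))
for sort in all_sorts:
    for group in sort.values():
        for a in group:
            for b in group:
                similarity[index[a], index[b]] += 1
similarity /= len(all_sorts)

print(np.round(similarity, 2))  # 1.0 on the diagonal; high off-diagonal values mean strong pairings
```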

#12. Generate new IAs.

Armed with our analysis, we were able to suggest changes to the project team’s original IAs and to generate new ones. We ended up with 3 IAs that everyone felt good about for a first test. Now we were ready to put them in front of users.
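If you want to see how the numbers can feed (not dictate) this design step, here’s a hedged sketch of the clustering behind a dendrogram: convert the similarity matrix to distances, build an average-linkage tree with scipy, and cut it into candidate groupings. The matrix values and the cut threshold are illustrative choices, and the resulting clusters are only a starting point for drafting categories.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# A small similarity matrix of the kind computed in the previous sketch
# (rows/columns correspond to cards 1-4; values rounded).
similarity = np.array([
    [1.00, 0.67, 0.00, 0.67],
    [0.67, 1.00, 0.00, 0.33],
    [0.00, 0.00, 1.00, 0.00],
    [0.67, 0.33, 0.00, 1.00],
])

# Convert similarity to distance, then to the condensed form scipy expects.
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)

# Average-linkage clustering; this tree is what a dendrogram visualizes.
Z = linkage(condensed, method="average")

# Cut the tree at a distance threshold to get candidate card groupings.
cluster_ids = fcluster(Z, t=0.5, criterion="distance")
print(cluster_ids)  # cards sharing an ID are candidates for the same category
```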

Stay tuned for part 2 of this post, where I’ll share our process and takeaways from tree testing and iterative IA design on this project.