Card Sorting

Letting users show you how they think — not how you think they think

Card sorting is one of the most cost-effective ways to validate your information architecture before it ships. Here's how it works, and what the data actually tells you.

What Card Sorting Is

Card sorting is a research method where participants organise a set of topics — written on cards, physical or digital — into groups that make sense to them. The goal is to understand how users mentally categorise information, which directly informs {{LINK:information-architecture}}, navigation design, and content organisation.

It's one of the most cost-effective research methods available. You don't need a lab or expensive tooling, and even a sample of 15–20 participants produces usable patterns. Yet it's chronically underused, especially before navigation redesigns that end up being rebuilt six months later.

Open vs Closed vs Hybrid

Open card sorting — participants create their own group labels. Best for early-stage discovery: you want to understand how users naturally categorise content, without imposing your assumptions on them.

Closed card sorting — participants sort cards into predefined categories. Best for evaluating an existing or proposed navigation structure. You're testing whether your labels work, not discovering new ones.

Hybrid card sorting — participants sort into predefined categories but can create new groups if nothing fits. Useful when you have an existing structure but suspect it's incomplete.

Most navigation projects benefit from an open sort first, followed by a closed sort once candidate structures emerge from the analysis.

How to Run One

  1. Define your scope — pick 30–100 items that represent the content or features users need to find
  2. Write clear, jargon-free card labels — ambiguous labels produce ambiguous data
  3. Recruit 15–20 participants who match your target users
  4. Run sessions using tools like Optimal Workshop or Maze for remote, or sticky notes for in-person
  5. Collect all groupings and analyse patterns across participants

The whole exercise end-to-end — recruiting, running, and analysing — typically takes under a week for most scopes. That's a small investment before committing to a navigation structure that will affect every user.
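The raw output of step 5 is just each participant's groupings. A minimal sketch of how one session could be recorded for later analysis (the card names, group labels, and structure here are invented for illustration; remote tools like Optimal Workshop export their own formats):

```python
# Hypothetical record of one session: the groups a participant created
# and which cards they placed in each. All names are made up.
cards = ["Login", "Profile", "Billing", "FAQ", "Contact"]

session = {
    "participant": "P07",
    "groups": {
        "Account": ["Login", "Profile", "Billing"],
        "Help": ["FAQ", "Contact"],
    },
}

# Flatten the groups back into a single list of placed cards.
placed = [card for group in session["groups"].values() for card in group]

# Sanity check before analysis: every card was sorted exactly once.
assert sorted(placed) == sorted(cards)
```

A check like this per session catches incomplete sorts early, before they skew the aggregate numbers.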

What the Data Actually Tells You

The output of a card sort isn't a clean taxonomy. It's a matrix showing which cards were grouped together most often. High agreement (80%+ of participants put two cards in the same group) is a strong signal. Low agreement means users don't share {{LINK:mental-models}} of how those items relate.

The analysis typically reveals:

  • Natural groupings — which items belong together in users' minds
  • Label problems — when the same card ends up in wildly different groups, the label itself is ambiguous
  • Orphan items — content that doesn't fit cleanly anywhere, often signalling a framing or scope problem
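The agreement matrix behind these findings is straightforward to compute: for each pair of cards, count how many participants placed both in the same group. A sketch, using invented card names and a tiny made-up sample of three sorts:

```python
from itertools import combinations
from collections import Counter

# Hypothetical open-sort results: one dict per participant, mapping
# their own group labels to the cards placed in that group.
sorts = [
    {"Account": ["Login", "Profile", "Billing"], "Help": ["FAQ", "Contact"]},
    {"My stuff": ["Login", "Profile"], "Money": ["Billing"],
     "Support": ["FAQ", "Contact"]},
    {"Settings": ["Login", "Profile", "Billing", "Contact"], "Docs": ["FAQ"]},
]

def agreement_matrix(sorts):
    """Fraction of participants who grouped each card pair together."""
    pair_counts = Counter()
    for sort in sorts:
        for cards in sort.values():
            # Every pair within a group counts as one co-occurrence.
            for pair in combinations(sorted(cards), 2):
                pair_counts[pair] += 1
    return {pair: n / len(sorts) for pair, n in pair_counts.items()}

matrix = agreement_matrix(sorts)

# Pairs at or above the 80% threshold are strong grouping signals.
strong = [pair for pair, score in matrix.items() if score >= 0.8]
```

In this toy sample, only "Login" and "Profile" clear the 80% bar; "Billing" and "Contact" drift between groups, which is exactly the low-agreement signal that points at a label or framing problem.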

Common Mistakes

The most common mistake is running a card sort and then ignoring the results when they contradict the existing site structure. "Our internal team knows the product better than users do" is the rationalisation — but IA is about how users navigate, not how the company is organised.

The second biggest mistake: running the sort with internal staff instead of actual users. Internal teams have deeply ingrained mental models that rarely match how new users think about the product.

Card sorting fits naturally into any project that includes a {{INTERNAL:/services/ux-audit}} or navigation redesign. It's one of the cleaner ways to bring user data into what is otherwise an opinion-driven decision.