Mental Models

Why the gap between how users think your product works and how it actually works is the source of most UX problems

A mental model is what a user believes about how your product works — the rules, logic, and expectations they carry in before they read a single line of help documentation. When their model matches how the product actually works, the experience feels intuitive. When it doesn't, it feels broken.

The Mismatch Problem

Most usability problems aren't caused by bad design decisions — they're caused by a mismatch between what the designer thinks is obvious and what the user thinks is happening.

A user opens a complex B2B tool for the first time. They navigate to what looks like a settings area. They're actually in a configuration panel that will affect live data. They save. Something breaks. In their mental model, 'settings' meant preferences — display options, notification toggles, account details. In the product's model, 'settings' included system-level configuration with real consequences.

No one misread a label. The label was accurate. But the user's expectation of what that area would contain — their mental model of it — didn't match the product's architecture. That's the mismatch. And it's a more common source of UX failure than confusing UI patterns or poor copy.

Don Norman articulated the design implication clearly in The Design of Everyday Things: the user's mental model of a system will never be identical to the designer's conceptual model. The job of UX is to close that gap as much as possible, or at least to design for the gap so it doesn't cause harm.

Where Mental Models Come From

Users don't arrive at your product in a vacuum. They bring expectations shaped by:

  • Prior products they've used — if every project management tool they've used has a left sidebar for navigation, that's where they'll look first in yours
  • Real-world analogies — the filing cabinet metaphor for folders, the desktop metaphor for the home screen, the shopping cart for e-commerce purchase flows
  • Language and labels — the word 'archive' in email implies reversible storage; 'delete' implies permanent removal. Using one when you mean the other will break expectations every time
  • Platform conventions — iOS users expect swipe gestures to work differently than Android users do; enterprise SaaS users expect different defaults than consumer app users

The mental models users bring to your product are rarely blank. They're constructed from hundreds of hours with other software. Every convention you deviate from is a conscious decision with a cost.

Why It Hits Harder in B2B

Consumer products can afford to invest in onboarding that teaches new interaction patterns. Social apps do this constantly — a new gesture, a new sharing mechanic, and they'll walk you through it with tooltips and animation because they know users will invest the time.

B2B and SaaS products don't get that grace period. Enterprise users are working. They have fifteen other things open. If your product's logic doesn't map to their existing mental models within the first session, you're not getting a second chance to re-explain it — you're getting a support ticket or a churn conversation.

This is especially common in tools that migrate users from an older product or a spreadsheet-based workflow. The mental model they arrive with is precise and specific: rows are records, columns are attributes, filters work a certain way. When your tool's underlying model differs — even if it's objectively better — the transition is painful unless the design actively bridges the gap.

Teams often misread this as a change management problem when it's actually a design problem. The {{LINK:onboarding-ux}} is trying to teach something the product's structure should be teaching.

How to Surface Them in Research

Mental models aren't something users can easily tell you about directly. Ask 'what do you expect this screen to do?' and you'll get rationalised, performative answers. The better approaches are observational:

Card sorting — ask users to organise terms, features, or tasks into groups that make sense to them. The groupings reveal their underlying categories, not yours. Where their clusters diverge from your navigation structure is where the mismatch lives.
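Card-sort results are usually analysed by counting how often each pair of items lands in the same group across participants. A minimal sketch of that tabulation (the feature labels and groupings below are invented for illustration):

```python
# Tabulate card-sort results as pairwise co-occurrence counts.
# Each participant's sort is a list of sets: the groups they formed.
# Labels and groupings are hypothetical example data.
from itertools import combinations
from collections import Counter

sorts = [
    [{"Export", "Reports"}, {"Profile", "Notifications"}, {"Billing"}],
    [{"Export", "Billing", "Reports"}, {"Profile", "Notifications"}],
    [{"Reports", "Export"}, {"Profile", "Billing", "Notifications"}],
]

# Count how often each pair of cards appears in the same group.
pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Report co-occurrence rates; high rates suggest a shared user category.
n = len(sorts)
for (a, b), count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: {count}/{n} participants grouped these together")
```

Pairs that nearly all participants group together but your navigation separates are the concrete places where the mismatch lives.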

Think-aloud testing — watch users perform real tasks while narrating their reasoning. The moments when they say 'I'd expect this to be...' or 'wait, that's not what I thought would happen' are direct windows into their model.

Interview probes about prior tools — asking how they did this task before, and what they found intuitive or confusing about the last solution, surfaces the expectations they're carrying into your product.

What you're mapping is the difference between their model and yours. Where they align, your design can rely on existing intuition. Where they diverge, your design needs to either adapt to their model or clearly signal that something different is happening.
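One rough way to make that mapping concrete is to compare user-derived groupings against where features actually live in the product's navigation, and flag the divergences. A hypothetical sketch (all labels, rates, and section names are invented):

```python
# Flag pairs of features that users strongly co-group in research
# but the product's information architecture splits across sections.
# All data here is hypothetical example input.
user_pair_rate = {
    ("Export", "Reports"): 1.0,   # users almost always group these together
    ("Export", "Billing"): 0.7,
    ("Billing", "Profile"): 0.2,
}

product_section = {  # where each feature lives in the current navigation
    "Export": "Data",
    "Reports": "Analytics",
    "Billing": "Account",
    "Profile": "Account",
}

THRESHOLD = 0.6  # co-grouping rate above which we treat the pair as one category

mismatches = [
    (a, b)
    for (a, b), rate in user_pair_rate.items()
    if rate >= THRESHOLD and product_section[a] != product_section[b]
]

for a, b in mismatches:
    print(f"Mismatch: users group {a} with {b}, "
          f"but the product splits them into {product_section[a]} and {product_section[b]}")
```

Each flagged pair is a candidate for either restructuring the navigation or explicitly signalling why the product draws the boundary differently.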

Closing the Gap

There are two responses to a mental model mismatch, and the right one depends on how entrenched the user's model is:

Adapt your design to match the model. If your users expect a certain workflow sequence and your product requires a different one without good reason, change the product. This is the right answer more often than teams want to admit. 'We designed it differently because it's better' is often a rationalisation for ignoring research.

Guide users through the difference. If your model genuinely has to differ — because your product's underlying logic requires it — design the transition explicitly. Don't assume users will discover the new paradigm on their own. Use the first session to establish the mental model you need them to have, not to demonstrate every feature.

The worst outcome is doing neither: shipping a product that assumes users will figure out its logic, then being surprised when they don't.

A {{LINK:heuristic-evaluation}} can identify the surface-level symptoms of model mismatch — confusing labels, counterintuitive navigation, unclear affordances. But only {{LINK:ux-benchmarking}} or structured user research will tell you whether users are actually building the right model over time.

Related: {{LINK:cognitive-load}}, {{LINK:information-architecture}}, {{LINK:onboarding-ux}}