What it is
MaxDiff measures relative preference or importance by forcing respondents to choose the most and least appealing items from repeated sets.
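To make the task format concrete, here is a minimal sketch of a single best-worst question, assuming a hypothetical five-item message list and a simulated respondent whose latent preference weights drive the forced picks:

```python
import random

# Hypothetical items; any short, distinct message list works.
ITEMS = ["Free shipping", "24/7 support", "Price match",
         "Loyalty points", "Easy returns"]

# Assumed latent preference weights for one simulated respondent.
LATENT = {"Free shipping": 2.0, "24/7 support": 0.5, "Price match": 1.2,
          "Loyalty points": -0.3, "Easy returns": 0.8}

def best_worst_pick(shown, weights):
    """Forced choice: pick exactly one best and one worst item from the set."""
    best = max(shown, key=lambda item: weights[item])
    worst = min(shown, key=lambda item: weights[item])
    return best, worst

shown = random.sample(ITEMS, 4)  # a single task shows a subset of the list
best, worst = best_worst_pick(shown, LATENT)
print(f"Shown: {shown}\nBest:  {best}\nWorst: {worst}")
```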
Common use case
This method is especially helpful for message prioritization, concept or ingredient prioritization, and feature ranking, where standard ratings collapse into ties.
Decision guide
When to use it
- When you need a stronger ranking than simple ratings
- When the item list is important but full trade-off modeling is unnecessary
When not to use it
- When attribute combinations rather than standalone items matter
Inputs required
- A balanced best-worst experimental design (a toy generator is sketched after this list)
- A clear item list
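Here is a minimal sketch of one naive way to generate such a design, assuming hypothetical parameters; it balances only how often each item appears, not pairwise co-occurrence, so real studies would use dedicated design tools (including those built into Displayr/Q):

```python
import random

def balanced_maxdiff_design(items, items_per_task, appearances_per_item, seed=0):
    """Shuffle-and-chunk heuristic: repeat the item list, shuffle, slice into
    tasks, and re-draw until no task contains a duplicate item."""
    rng = random.Random(seed)
    total = len(items) * appearances_per_item
    assert total % items_per_task == 0, "item counts must divide evenly into tasks"
    while True:
        pool = items * appearances_per_item
        rng.shuffle(pool)
        tasks = [pool[i:i + items_per_task] for i in range(0, total, items_per_task)]
        if all(len(set(task)) == items_per_task for task in tasks):
            return tasks

# Six hypothetical items, three per task, each shown twice overall.
for task in balanced_maxdiff_design(list("ABCDEF"), 3, 2):
    print(task)
```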
Typical outputs
- Utility-like scores (a counting-based sketch follows this list)
- Preference ranking
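As a rough illustration of how such scores can be derived, here is a simple counting analysis over hypothetical response data: each item's score is (times picked best minus times picked worst) divided by times shown. Hierarchical Bayes logit models are the more rigorous route, but counts convey the idea:

```python
from collections import Counter

# Hypothetical responses: (items shown, best pick, worst pick) per task.
responses = [
    (("A", "B", "C"), "A", "C"),
    (("B", "D", "E"), "D", "B"),
    (("A", "C", "E"), "A", "E"),
    (("B", "C", "D"), "D", "C"),
]

shown, best, worst = Counter(), Counter(), Counter()
for items, b, w in responses:
    shown.update(items)
    best[b] += 1
    worst[w] += 1

# Best-worst score per item, normalized by exposure.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {score:+.2f}")
```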
Simple example
Rank value propositions to determine which messages deserve front-page emphasis in a campaign.
Strengths
- Reduces rating-scale inflation
- Produces clearer differentiation across items
Limitations
- Does not model multi-attribute trade-offs
Common mistakes
- Using overlapping items that respondents cannot clearly distinguish
How I use it in practice
I use MaxDiff when ratings are too flat and the real need is a defensible prioritization of messages, benefits, or features.
What it outputs
- Ranked item scores
How to interpret the output
- Focus on relative spacing and tiers rather than over-reading small score differences; a simple tiering heuristic is sketched below
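One way to make that tiering heuristic explicit, sketched with arbitrary example scores and an illustrative gap threshold: sort the items and start a new tier wherever the drop between adjacent scores is large.

```python
def tier_items(scores, gap_threshold=0.15):
    """Group items into tiers, starting a new tier at any large score gap."""
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    tiers, current = [], [ranked[0]]
    for prev, curr in zip(ranked, ranked[1:]):
        if prev[1] - curr[1] > gap_threshold:
            tiers.append(current)
            current = []
        current.append(curr)
    tiers.append(current)
    return tiers

# Arbitrary example scores on a common scale.
example = {"A": 0.62, "D": 0.55, "E": 0.10, "B": 0.05, "C": -0.40}
for n, tier in enumerate(tier_items(example), start=1):
    print(f"Tier {n}: {[item for item, _ in tier]}")
```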
How to communicate to clients
- Explain that scores are comparative, not absolute liking levels
Displayr / Q implementation notes
- Keep item wording short and distinct
Mini demo
Best-worst task placeholder
Later versions can add a small best-versus-worst exercise to show how forced choice creates separation.
This method is marked as a good candidate for a future teaching demo, but v1 keeps the site lightweight for GitHub Pages.