UX Playbook

Heuristic evaluation

A heuristic evaluation is a thorough and objective way to identify the most important usability hurdles, or to compare two or more products fairly, using a set of design principles. The benefit of this method compared with an expert review is that it is more structured, which can make it more convincing to stakeholders.

Commonly used heuristics are the 10 Usability Heuristics for User Interface Design by Jakob Nielsen, which focus on usability. The UX honeycomb by Peter Morville gives a broader perspective because it assesses the user experience as a whole.

The deliverable is a report with all observations, the evaluation, and conclusions. The evaluation is best done with two or three researchers, to make sure it is objective and to catch every observation. Two see more than one, and three are even better!

The main challenge with this kind of project is scoping: you don't want to compromise on the depth and quality of the observations, so make clear agreements on scope. Try to underpromise and overdeliver.

How it works

This step-by-step guide makes sure the evaluation is done in a structured and unbiased way.

  1. Start by familiarizing yourself with the product by clicking through it. Make some initial notes of what catches your attention.
  2. Choose a framework to base your assessment on. This structures the findings and gives the outcome credibility. You can use the 10 Usability Heuristics for User Interface Design by Jakob Nielsen, or the UX honeycomb by Peter Morville. When choosing the set of heuristics, think about what the assessment should cover (e.g. for user experience you might look at the honeycomb, for usability at Nielsen, for accessibility at the W3C standards). You can also add the company's own design principles to the mix.
  3. Align your approach with the client. Include the steps, the scope, the framework, and how you will assign scores (for example, Nielsen's severity rating scale from 0 to 4). Plan review moments with the client now, so they can make time in their agenda.

Example heuristic evaluation outline

4 websites:

Focus on the following sections:

Process

2 researchers individually execute these activities for all 4 websites:

Once these steps have been completed individually, the scores are compared to each other and the average of both scores is used as the final score, as sketched below.

Together, a conclusion is written up to compare the 4 websites.
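
To make the averaging step concrete, here is a minimal sketch in Python. It assumes two researchers score each website per heuristic on a 0 to 4 scale; all names and numbers are hypothetical placeholders, and in practice a spreadsheet works just as well.

```python
# A minimal sketch of the scoring step, assuming two researchers score each
# website per heuristic on a 0-4 scale. All names and numbers below are
# hypothetical placeholders, not data from a real project.

# scores[researcher][website][heuristic] = individual score (0-4)
scores = {
    "researcher_a": {"website_1": {"Visibility of system status": 3}},
    "researcher_b": {"website_1": {"Visibility of system status": 2}},
}

def final_scores(scores: dict) -> dict:
    """Average the individual scores per website and heuristic."""
    collected: dict = {}
    for per_researcher in scores.values():
        for website, per_heuristic in per_researcher.items():
            for heuristic, score in per_heuristic.items():
                collected.setdefault(website, {}).setdefault(heuristic, []).append(score)
    return {
        website: {heuristic: sum(s) / len(s) for heuristic, s in per_heuristic.items()}
        for website, per_heuristic in collected.items()
    }

print(final_scores(scores))
# {'website_1': {'Visibility of system status': 2.5}}
```

Each final score is simply the plain average of the individual scores for the same website and heuristic; if the individual scores diverge a lot, that is a signal to discuss the observation and agree on a final score together instead.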

Deliverables

  4. Begin with the end in mind. Think about the structure of the report and how to present observations, and set some guidelines for writing. This is the best time to create templates and other assets. Don't reinvent the wheel: get input from previous projects and reuse those assets.
Assets created in Figma to present observations
  5. Start the assessment: do it individually to make sure you're as objective as possible. Make notes of everything that catches your eye. Focus on the negative observations, but call out any outstanding positive observations as well.
  6. Merge the observations of all researchers so every researcher bases their score on the same information.
One of the slides with observations of the Marktplaats project
  7. Give scores individually and add your reasoning.
  8. Compare scores and take the average, or agree on the final score.

Example of scores for the Marktplaats project

  9. Summarize your findings and spend enough time on the summary and conclusions. It is advisable to do this with the whole team. Take a step back from the details and discuss the bigger picture: what are the most important conclusions? This is what is most interesting to the client!
  10. Present the conclusions to the client and deliver the report.

Two slides of the summary of the Marktplaats project

Tips

This Slack entry has given the team very valuable feedback to add to the assessment

Useful links