
The Difference Between a Heuristic Evaluation and an Expert Review

Summary
Heuristic evaluations and expert reviews share the same goal: to evaluate the usability of a product. While the goal of these usability evaluation methods is the same, the methods themselves differ.

It is common to hear people using these terms interchangeably. An expert review is often labelled a heuristic evaluation when, in actuality, the evaluators assessed the usability of the product by drawing on their own knowledge of what works and what does not, rather than explicitly referencing a set of heuristics.

This article explains the difference between a Heuristic Evaluation and an Expert Review and tells you when to apply which method.

What is a Heuristic Evaluation?

A heuristic evaluation is an evaluation of the usability of a product against a defined set of heuristics. Issues are found and reported, and recommendations are made, explicitly referencing that set of heuristics.
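To make "explicitly referencing a set of heuristics" concrete, here is a minimal sketch, not from the original article, of how findings might be logged so that every issue and recommendation points back to a named heuristic. It is written in Python; the heuristic names follow Nielsen's well-known set, and the two findings are purely illustrative.

```python
# Minimal sketch of a heuristic evaluation log: every issue and every
# recommendation explicitly references a named heuristic.
# Illustrative only: heuristic names follow Nielsen's set, findings are made up.
from dataclasses import dataclass

@dataclass
class Finding:
    screen: str          # where the issue was observed
    heuristic: str       # the heuristic the issue violates
    issue: str           # what the evaluator found
    severity: int        # e.g. 0 (not a problem) to 4 (usability catastrophe)
    recommendation: str  # suggested fix, framed against the same heuristic

findings = [
    Finding(
        screen="Checkout",
        heuristic="Visibility of system status",
        issue="No progress indicator while payment is being processed.",
        severity=3,
        recommendation="Show a progress indicator so users know the system is working.",
    ),
    Finding(
        screen="Sign-up form",
        heuristic="Error prevention",
        issue="Password rules are only revealed after submission fails.",
        severity=2,
        recommendation="Display the password requirements inline, before submission.",
    ),
]

# Report the most severe violations first; each line names the heuristic it cites.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity}] {f.screen} | {f.heuristic}: {f.issue}")
```

An expert review could use the same kind of record, but the heuristic field would be optional, since an expert may justify an issue from experience rather than from a listed heuristic.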

What is an Expert Review?

An expert review is an evaluation of the usability of a product by an expert in usability, and preferably also in the domain the product belongs to. The expert may or may not directly refer to a set of heuristics during the evaluation and while reporting issues and recommendations. Beyond that, the expert evaluates the product against what they have learned over their experience of working on the usability of products, whether from data they have gathered themselves or from existing research.

When to Use a Heuristic Evaluation and When to Use an Expert Review?

It is safe to say that the higher the expertise of the evaluator performing a heuristic evaluation or an expert review, the better your chances of getting useful results. When performed by experts, an expert review will also tend to yield better results than a heuristic evaluation, since it draws on domain knowledge and personal experience that, in most cases, goes beyond what a set of heuristics alone would help uncover.

However, if a usability evaluation has to be carried out by a group of evaluators with little experience in usability or in the product's domain, a heuristic evaluation is more likely to yield better results than an expert review. These evaluators will have a set of heuristics (rules of thumb) to refer to, rather than relying on their own experience to make judgments, which in this case would be very limited and could produce a considerable number of ‘false alarms’.

11 Comments

  1. Gideon Simons said on October 2, 2012 at 7:40 am |

    Very good explanation!
    I think the best of both worlds would be a hybrid evaluation process of independent professional feedback followed by feedback according to a set of heuristics.

    Gideon

  2. Abhay Rautela (Cone Trees) said on October 2, 2012 at 2:42 pm |

    Thanks for the comment Gideon, glad you like it. As I mentioned in the article, an expert review can (and should) lend itself to a set of heuristics, which is what I believe you are referring to, and which I agree with as well.

  3. Chompi said on October 9, 2012 at 12:31 am |

    How do heuristic or expert reviews compare to customer dissatisfaction reasons?
    I find customer dissatisfaction reasons (literally asking customers what drove them to give an unsatisfactory rating to the product) to be critical for identifying what is just not meeting the bar for customers.

    Any improvement in customer outcomes should address the dissatisfiers raised by customers.

    Heuristic or expert reviews are useful when it comes to hypothesizing why, or what to change, in order to address customer dissatisfiers.

    What are your thoughts?

  4. Laurie Kantner said on October 9, 2012 at 7:27 pm |

    I have used these terms interchangeably, thinking of “heuristic evaluation” as UX-insider-speak and “expert review” as more plain English. The distinction you point out has validity - thank you for the article!

  5. Sven Laqua said on October 10, 2012 at 1:50 am |

    I partly agree in that experts tend to label their ‘expert evaluations’ as ‘heuristic evaluations’ while not necessarily sticking to an explicit set of heuristics in the process.

    That said, I believe ‘experts’ are called specifically that because they have learnt the relevant heuristics and apply them implicitly in their ‘expert review’. As you point out, sticking to a fixed set of heuristics can feel restricting to an expert and moreover, it does add overhead.

    Also, the point of a ‘proper’ heuristic evaluation is to identify the most pressing issues, based on how many (non-expert) evaluators have identified the same issues. You need a consistent set of heuristics to achieve this.

    If a single expert is running an evaluation/review, that benefit of a set list of heuristics is gone, and it is left to the expert to judge which issues are more serious than others.

    Nevertheless, an expert will apply relevant heuristics implicitly :)

  6. Patricia said on October 12, 2012 at 5:57 am |

    The value of heuristics, which can often be considered a standard, is that a certain set of “measurements” is used consistently from evaluation to evaluation. For example, Jakob Nielsen’s 10 usability heuristics.

  7. Terry said on October 23, 2012 at 1:00 am |

    Can you give an example of the set of heuristics that might be used?

  8. Abhay Rautela (ConeTrees) said on October 23, 2012 at 1:45 pm |

    @Chompi: You asked, “How do heuristic or expert reviews compare to customer dissatisfaction reasons? … What are your thoughts?” Great question! I like it because it lets me explain how the usability of a product can be improved during initial development and after release.

    Usability evaluation can be broken down into three parts: inspection, testing and inquiry. In a nutshell, inspection is about usability experts/evaluators inspecting the usability of a product; testing is about having users evaluate usability; and inquiry is about evaluators asking the users of a system/application (different terminology in different but intersecting disciplines) about it through attitudinal and behavioural methods.

    Summative usability testing measures effectiveness, efficiency and subjective satisfaction. All three are reported quantitatively. In addition, qualitative data on satisfaction/dissatisfaction can be captured through post-task or post-test probing.

    You can’t gather ‘customer’ satisfaction for a product in development, but you can perform usability evaluations (ideally at every phase of the UX process) throughout the Product Development Lifecycle (PDLC) to meet the usability goals of the project and ensure it is easy to use when it is released. On the other hand, once the product is released, it’s important to measure and continually improve the user experience. This happens through studying web analytics, administering satisfaction surveys, conducting usability tests, contextual inquiries, what have you. This is where the customer dis/satisfaction you mention fits in. As you can see, different user research and usability evaluation methods are suitable for different phases of the PDLC. Hope this helps!

  9. Abhay Rautela (ConeTrees) said on October 23, 2012 at 2:20 pm |

    @Laurie: You said, “I have used these terms interchangeably, thinking of ‘heuristic evaluation’ as UX-insider-speak and ‘expert review’ as more plain English. The distinction you point out has validity - thank you for the article!”

    You’re welcome Laurie! I’m glad you found it helpful.

  10. ConeTrees said on November 12, 2012 at 5:27 pm |

    @Sven Laqua: Thanks for your comment, well said :)

  11. Abhay Rautela (Cone Trees) said on November 12, 2012 at 5:53 pm |

    @Terry: There are a number of different sets of heuristics. Besides Nielsen’s heuristics for interface design, there are some more mentioned by Jeff Sauro on his blog, Measuring Usability. Here is the URL to that post: http://www.measuringusability.com/blog/he.php

    1. Bastien and Scapin’s 18 Ergonomic Criteria
    2. Gerhardt-Powals’ 10 Cognitive Engineering Principles
    3. Connell & Hammond’s 30 Usability Principles
    4. Smith & Mosier’s 944 guidelines for the design of user interfaces (from 1986)
