The System Usability Scale (SUS), created by John Brooke in 1986, offers a quick and effective way to evaluate the usability of your products and designs. SUS is a practical and reliable tool for measuring perceived ease of use, and it can be used across a broad range of digital products and services to help UX practitioners determine whether there is an overall problem with a design solution. Unlike a usability report, SUS is not diagnostic; it provides an overall usability measurement, as defined by ISO 9241-11, which comprises the following characteristics:

  • Effectiveness – can users successfully achieve their objectives?
  • Efficiency – how much effort and resource is expended in achieving those objectives?
  • Satisfaction – was the experience satisfactory?

After each usability testing session, give the user the 10-question SUS questionnaire to complete. The questions are designed to elicit quick, unfiltered feedback from each testing session and can be answered without onerous interaction. One of the primary benefits of using SUS is that the feedback is reliable and repeatable. It can also be used to compare two different design solutions through quick A/B testing. To keep results comparable across designs, do not change the wording or order of the questions.

When distributing the questionnaire, it’s good practice to use a working title or descriptor of the project to avoid any confusion or bias.

Collect feedback from a minimum of five users for reliable data. Give users 1–2 minutes to complete the questionnaire, and collect only the ranking scores, no other feedback, to preserve the questionnaire’s integrity.

The 10 System Usability Scale questions

  1. I think that I would like to use this [project] frequently.
  2. I found the [project] unnecessarily complex.
  3. I thought the [project] was easy to use.
  4. I think that I would need the support of a technical person to be able to use this [project].
  5. I found the various functions in this [project] were well integrated.
  6. I thought there was too much inconsistency in this [project].
  7. I imagine that most people would learn to use this [project] very quickly.
  8. I found the [project] very cumbersome to use.
  9. I felt very confident using the [project].
  10. I needed to learn a lot of things before I could get going with this [project].
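Because the wording and order of the questions must stay fixed, it can help to generate the questionnaire programmatically, substituting only the project name. Here is a minimal Python sketch using the 10 questions above; the `render_questionnaire` helper and its placeholder syntax are illustrative, not part of any standard SUS tooling:

```python
# The 10 SUS questions, verbatim, with a placeholder for the project name.
SUS_QUESTIONS = [
    "I think that I would like to use this {p} frequently.",
    "I found the {p} unnecessarily complex.",
    "I thought the {p} was easy to use.",
    "I think that I would need the support of a technical person to be able to use this {p}.",
    "I found the various functions in this {p} were well integrated.",
    "I thought there was too much inconsistency in this {p}.",
    "I imagine that most people would learn to use this {p} very quickly.",
    "I found the {p} very cumbersome to use.",
    "I felt very confident using the {p}.",
    "I needed to learn a lot of things before I could get going with this {p}.",
]

def render_questionnaire(project):
    """Fill in the [project] placeholder, keeping the wording and order
    intact so results remain comparable across designs."""
    return [f"{i}. " + q.format(p=project) for i, q in enumerate(SUS_QUESTIONS, start=1)]

for line in render_questionnaire("checkout flow"):
    print(line)
```

Only the project name changes between studies; everything else stays word-for-word the same.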

You can add context to survey questions, such as a description of the type of project. Scoring is based on a 5-point Likert scale from ‘strongly disagree’ to ‘strongly agree’; however, you can also change the wording of these labels if you’d like, for example to ‘worst imaginable’, ‘awful’, ‘poor’, ‘OK’, ‘good’, ‘excellent’, or ‘best imaginable’.


Each response is assigned a point value for the SUS score calculation. The points breakdown for the responses is:

  • Strongly Disagree: 1 point
  • Disagree: 2 points
  • Neutral: 3 points
  • Agree: 4 points
  • Strongly Agree: 5 points

How to calculate a SUS score


Looking at a respondent’s answers and the corresponding number score for each response, you can tabulate the overall SUS score by using the following framework:

  • Add up the scores for all odd-numbered questions, then subtract 5 from that total to get (X).
  • Add up the scores for all even-numbered questions, then subtract that total from 25 to get (Y).
  • Add the two new values (X + Y) and multiply by 2.5.

    Example scoring:

    Odd-numbered questions (1, 3, 5, 7, 9): 4 + 5 + 3 + 4 + 3 = 19; 19 – 5 = 14 (X)
    Even-numbered questions (2, 4, 6, 8, 10): 2 + 1 + 3 + 1 + 1 = 8; 25 – 8 = 17 (Y)
    SUS score: (14 + 17) × 2.5 = 77.5
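The steps above are easy to automate. A minimal Python sketch of the calculation (the function name and input format are my own; answers are listed in question order, Q1 through Q10):

```python
def sus_score(responses):
    """Compute one respondent's SUS score from their 10 Likert answers.

    `responses` is a list of 10 integers (1-5), in question order:
    index 0 = Q1, index 1 = Q2, and so on.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected 10 answers, each scored 1-5")

    odd = sum(responses[0::2])   # Q1, Q3, Q5, Q7, Q9
    even = sum(responses[1::2])  # Q2, Q4, Q6, Q8, Q10
    x = odd - 5                  # odd items: subtract 5 from the total
    y = 25 - even                # even items: subtract the total from 25
    return (x + y) * 2.5

# The worked example from the text:
# odd answers 4, 5, 3, 4, 3 and even answers 2, 1, 3, 1, 1, interleaved.
example = [4, 2, 5, 1, 3, 3, 4, 1, 3, 1]
print(sus_score(example))  # 77.5
```

Running this on each respondent’s answers and averaging the results gives the study’s overall SUS score.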

By following the scoring tabulation methodology above, you’ll then get a SUS score out of 100 (in this case, 77.5/100). Note that this is not a percentage score but a total score out of 100. The average SUS score is 68, and scoring above or below the average will give you immediate insight into the overall usability of the design solution.

Scores below 68 point to issues with the design that need to be researched and resolved, while scores above 68 indicate the design is performing above average, though there may still be room for minor improvements.

SUS scores fall into a range of categories: best imaginable, excellent, good, OK, poor, awful, and worst imaginable. It is up to the UX designer to determine a follow-up course of action. This can include detailed testing to find the root of the problem and existing pain points, or a rethink of the design solution entirely.

SUS Acceptability Score. Image credit 10up.com.

What to do in the case of very low SUS scores

Scores below 51 signal usability issues or pain points that require immediate attention and further investigation. UX practitioners can work through a quick list of questions, particularly where an issue prevented the user from completing a specific task during testing:

  • Does the navigation lack intuitive structures or hierarchy?
  • Are labels clear and understandable?
  • Are content taxonomies intuitively categorized and discoverable? 
  • Are tasks and user flow overly complicated?
  • Does the design solution create frustration or repeated task errors?
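Using only the thresholds given in this article (68 as the average, 51 as the cutoff for urgent attention), the interpretation can be sketched as a simple banding function. The band labels here are my own shorthand, not standard SUS terminology:

```python
def sus_band(score):
    """Rough interpretation bands based on the thresholds in the text:
    68 is the average SUS score, and scores below 51 need immediate attention."""
    if not 0 <= score <= 100:
        raise ValueError("SUS scores range from 0 to 100")
    if score < 51:
        return "investigate immediately"
    if score < 68:
        return "below average - research and resolve issues"
    return "above average - minor improvements"

print(sus_band(77.5))  # above average - minor improvements
print(sus_band(45))    # investigate immediately
```

Finer-grained adjective categories (from ‘worst imaginable’ to ‘best imaginable’) require additional thresholds beyond those given here.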

What SUS scores provide

The System Usability Scale is a quick and reliable method of assessing the usability of design solutions. It will not provide insight into specific problems, but it does provide easy feedback on the overall ease of use of a site or app from a user’s perspective. Use SUS scoring to evaluate prototypes and sprint deliverables to ensure usability is being assessed with each iteration.

References:

Brooke, J. (1986). SUS: a “quick and dirty” usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, & I. L. McClelland (Eds.), Usability Evaluation in Industry. London: Taylor and Francis.

Brooke, J. (2013). SUS: a retrospective. Journal of Usability Studies, 8(2), 29–40.