System usability scale

Standard version of the system usability scale. Each item is rated on a five-point scale from 1 ("Strongly disagree") to 5 ("Strongly agree"):

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.

In systems engineering, the system usability scale (SUS) is a simple, ten-item attitude Likert scale giving a global view of subjective assessments of usability. It was developed by John Brooke[1] at Digital Equipment Corporation in the United Kingdom in 1986 as a tool to be used in usability engineering of electronic office systems.

The usability of a system, as defined by the ISO standard ISO 9241 Part 11, can be measured only by taking into account the context of use of the system—i.e., who is using the system, what they are using it for, and the environment in which they are using it. Furthermore, measurements of usability have several different aspects:

  • effectiveness (can users successfully achieve their objectives?)
  • efficiency (how much effort and how many resources are expended in achieving those objectives?)
  • satisfaction (was the experience satisfactory?)

Measures of effectiveness and efficiency are also context-specific. Effectiveness in using a system for controlling a continuous industrial process would generally be measured in very different terms from, say, effectiveness in using a text editor. Thus, it can be difficult, if not impossible, to answer the question "is system A more usable than system B?", because the measures of effectiveness and efficiency may be very different. However, it can be argued that, given a sufficiently high-level definition of subjective assessments of usability, comparisons can be made between systems.

SUS has generally been seen as providing this type of high-level subjective view of usability and is thus often used in carrying out comparisons of usability between systems. Because it yields a single score on a scale of 0–100, it can be used to compare even systems that are outwardly dissimilar. This one-dimensional aspect of the SUS is both a benefit and a drawback, because the questionnaire is necessarily quite general.
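The 0–100 score is obtained from the standard scoring rule described by Brooke[1]: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum of the ten contributions (0–40) is multiplied by 2.5. The Python sketch below illustrates this rule; the function name and input format are illustrative only and not part of any published tooling.

```python
def sus_score(responses):
    """Compute the overall SUS score (0-100) from ten item responses.

    `responses` is a list of ten integers in 1..5, in questionnaire order
    (item 1 first).
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses, each between 1 and 5")

    contributions = []
    for i, r in enumerate(responses, start=1):
        if i % 2 == 1:
            contributions.append(r - 1)   # positively worded item: score - 1
        else:
            contributions.append(5 - r)   # negatively worded item: 5 - score

    # Each contribution is 0-4; ten items give 0-40, scaled to 0-100.
    return sum(contributions) * 2.5


# Example: a fairly positive respondent.
print(sus_score([5, 2, 4, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```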

Lewis and Sauro[2] later suggested a two-factor orthogonal structure, which practitioners may use to score the SUS on independent Usability and Learnability dimensions. In an independent analysis, Borsci, Federici and Lauriola[3] confirmed the two-factor structure of the SUS, while also showing that the two factors (Usability and Learnability) are correlated.
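For readers who want to score the two factors separately, the sketch below splits the item contributions into two subscores, each rescaled to 0–100. The item allocation (items 4 and 10 forming Learnability, the remaining eight forming Usability) and the rescaling multipliers are the ones commonly attributed to Lewis and Sauro's analysis; they are stated here as an assumption rather than taken from the article text.

```python
def sus_subscales(responses):
    """Split the ten SUS item contributions into Usability and Learnability
    subscores, each rescaled to 0-100.

    Assumes items 4 and 10 form the Learnability factor and the remaining
    eight items form the Usability factor.
    """
    contributions = [
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    ]
    learnability_items = {4, 10}
    learn = sum(c for i, c in enumerate(contributions, start=1)
                if i in learnability_items)
    usab = sum(c for i, c in enumerate(contributions, start=1)
               if i not in learnability_items)
    # Rescale each subscale to 0-100: eight items * 4 = 32, two items * 4 = 8.
    return {"usability": usab * (100 / 32), "learnability": learn * (100 / 8)}


print(sus_subscales([5, 2, 4, 1, 4, 2, 5, 1, 4, 2]))
# {'usability': 84.375, 'learnability': 87.5}
```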

The SUS has been widely used in the evaluation of a range of systems. Bangor, Kortum and Miller[4] have used the scale extensively over a ten-year period and have produced normative data that allow SUS ratings to be positioned relative to other systems. They propose an extension to SUS to provide an adjective rating that correlates with a given score. Based on a review of hundreds of usability studies, Sauro and Lewis[5] proposed a curved grading scale for mean SUS scores.

References

  1. Brooke, John (1996). "SUS: a 'quick and dirty' usability scale". Usability Evaluation in Industry. London: Taylor and Francis. https://www.researchgate.net/publication/319394819_SUS_--_a_quick_and_dirty_usability_scale. 
  2. Lewis, J.R.; Sauro, J. (2009). "The factor structure of the system usability scale". Proceedings of the Human Computer Interaction International Conference (HCII 2009), San Diego, California. http://www.measuringusability.com/papers/Lewis_Sauro_HCII2009.pdf. 
  3. Borsci, Simone; Federici, Stefano; Lauriola, Marco (2009). "On the dimensionality of the System Usability Scale: a test of alternative measurement models". Cognitive Processing 10 (3): 193–197. doi:10.1007/s10339-009-0268-9. PMID 19565283. 
  4. Bangor, Aaron; Kortum, Philip T.; Miller, James T. (2008). "An Empirical Evaluation of the System Usability Scale". International Journal of Human-Computer Interaction 24 (6): 574–594. doi:10.1080/10447310802205776. https://www.tandfonline.com/doi/abs/10.1080/10447310802205776. 
  5. Sauro, J.; Lewis, J.R. (2012). Quantifying the user experience: Practical statistics for user research. Waltham, Massachusetts: Morgan Kaufmann. doi:10.1016/C2010-0-65192-3. ISBN 9780123849687. 
