Algorithm aversion

Algorithm aversion is "biased assessment of an algorithm which manifests in negative behaviours and attitudes towards the algorithm compared to a human agent."[1] The term describes the phenomenon in which humans reject advice from an algorithm that they would accept if they believed the same advice came from another human.

Algorithms, such as those employing machine learning methods or various forms of artificial intelligence, are commonly used to provide recommendations or advice to human decision-makers. For example, recommender systems are used in e-commerce to identify products a customer might like, and artificial intelligence is used in healthcare to assist with diagnosis and treatment decisions. However, humans sometimes resist or reject these algorithmic recommendations more than they would if the same recommendation came from a human. Notably, algorithms are often capable of outperforming humans, so rejecting algorithmic advice can result in poor performance or suboptimal outcomes.

This is an emerging topic, and it is not yet clear why or under what circumstances people will display algorithm aversion. In some cases, people seem more likely to take recommendations from an algorithm than from a human, a phenomenon called algorithm appreciation.[2]

Examples of algorithm aversion

Algorithm aversion has been studied in a wide variety of contexts. For example, people seem to prefer joke recommendations from a human rather than from an algorithm,[3] and would rather rely on a human than on an algorithm to predict the number of airline passengers from each US state.[4] People also seem to prefer medical recommendations from human doctors over those from an algorithm.[5]

Factors affecting algorithm aversion

Various frameworks have been proposed to explain the causes of algorithm aversion and to identify techniques or system features that might help reduce it.[1][6]

Decision control

Algorithms may be used either in an advisory role (providing advice to a human who makes the final decision) or in a delegatory role (making the decision without human supervision). A movie recommendation system that offers a list of suggestions plays an advisory role, whereas Tesla's Autopilot plays a delegatory role once the driver hands it the task of steering the car. Generally, a lack of decision control tends to increase algorithm aversion.[citation needed]
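
To make the advisory/delegatory distinction concrete, here is a minimal Python sketch; the role names, function, and example values are hypothetical illustrations, not drawn from the cited literature.

```python
# Minimal sketch (hypothetical names) contrasting the two decision-control
# regimes: advisory (human keeps the final say) vs. delegatory (algorithm acts).
from enum import Enum, auto
from typing import Optional

class Role(Enum):
    ADVISORY = auto()    # algorithm suggests; a human makes the final call
    DELEGATORY = auto()  # algorithm decides without human supervision

def decide(algorithm_suggestion: str, role: Role,
           human_choice: Optional[str] = None) -> str:
    """Return the final decision under the given decision-control regime."""
    if role is Role.DELEGATORY:
        # Delegatory: the algorithm's output is executed directly.
        return algorithm_suggestion
    # Advisory: the human sees the suggestion but may override it.
    return human_choice if human_choice is not None else algorithm_suggestion

# A movie recommender in an advisory role: the viewer can overrule it.
print(decide("The Matrix", Role.ADVISORY, human_choice="Blade Runner"))
# A steering controller in a delegatory role: its output is carried out.
print(decide("steer_left_2_degrees", Role.DELEGATORY))
```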

Perceptions about algorithm capabilities and performance

Overall, people tend to judge machines more critically than they do humans.[7] Several system characteristics or factors have been shown to influence how people evaluate algorithms.

Algorithm process and the role of system transparency

One reason people resist algorithms is a lack of understanding of how the algorithm arrives at its recommendation.[3] People also seem to have better intuition for how another human would make a recommendation. Whereas people assume that other humans will account for unique differences between situations, they sometimes perceive algorithms as incapable of considering individual differences and resist them accordingly.[8]

Decision domain

People are generally skeptical that algorithms can make accurate predictions in certain areas, particularly if the task involves a seemingly human capability such as moral judgment or empathy. Algorithm aversion tends to be higher when the task is more subjective and lower when the task is objective or quantifiable.[1]

Human characteristics

Domain expertise

Expertise in a particular field has been shown to increase algorithm aversion[2] and to reduce use of algorithmic decision rules.[9] Overconfidence may partially explain this effect; experts may feel that an algorithm is not capable of the kinds of judgments they make. Compared to non-experts, experts also have more knowledge of the field and therefore may be more critical of a recommendation. Where a non-expert might accept a recommendation ("The algorithm must know something I don't"), an expert might find specific fault with it ("This recommendation does not account for a particular factor").

Decision-making research has shown that experts in a given field tend to think about decisions differently than non-experts do.[10] Experts chunk and group information; for example, chess grandmasters see opening positions (e.g., the Queen's Gambit or the Bishop's Opening) rather than individual pieces on the board. Experts may also see a situation as a functional representation (e.g., a doctor may see a trajectory and predicted outcome for a patient rather than a list of medications and symptoms). These differences may partly account for the increased algorithm aversion seen in experts.

Culture

Different cultural norms and influences may lead people to respond to algorithmic recommendations differently, and the way recommendations are presented (e.g., language, tone) may also affect how they are received.[citation needed]

Age

Age is a commonly cited factor hypothesized to affect whether people accept algorithmic recommendations: digital natives have known technology their whole lives, while digital immigrants have not. For example, one study found that trust in an algorithmic financial advisor was lower among older people than among younger study participants.[11] However, other research has found that algorithm aversion does not vary with age.[2]

Proposed methods to overcome algorithm aversion

Because algorithms are often capable of outperforming humans or of performing tasks much more cost-effectively,[4][3] researchers have proposed several methods to reduce aversion and encourage uptake of algorithmic advice.

Human-in-the-loop

One way to reduce algorithm aversion is to give the human decision maker control over the final decision.
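
One possible interface for this, sketched below under the assumption of a numeric forecasting task, lets the human adjust the algorithm's output within a bound, so the final number always reflects human input. The function name and the bound are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: the algorithm forecasts, but the human may adjust the
# forecast within a bound, keeping final control with the decision maker.
def final_forecast(model_forecast: float,
                   human_adjustment: float,
                   max_adjustment: float = 5.0) -> float:
    """Clamp the human's adjustment to +/- max_adjustment and apply it."""
    clamped = max(-max_adjustment, min(max_adjustment, human_adjustment))
    return model_forecast + clamped

print(final_forecast(72.0, -3.0))   # 69.0: within bounds, applied as given
print(final_forecast(72.0, -20.0))  # 67.0: adjustment clamped to -5.0
```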

System transparency

Providing explanations of how an algorithm works has been shown to reduce aversion. These explanations can take a variety of forms: how the algorithm works overall, why it is making a particular recommendation in a specific case, or how confident it is in that recommendation.[1]
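
As a concrete illustration, a transparent system might return a confidence estimate and a case-specific rationale alongside the recommendation itself. The following Python sketch is hypothetical; the toy diagnostic rule, thresholds, and confidence values are invented purely for illustration, not a real clinical model.

```python
# Minimal sketch of a "transparent" recommendation: the system returns not
# just an answer, but also a local explanation and a confidence estimate.
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str
    confidence: float   # e.g., a calibrated probability in [0, 1]
    explanation: str    # why this particular case got this recommendation

def recommend(temperature_c: float, white_cell_count: float) -> Recommendation:
    # Toy rule, hypothetical thresholds: illustration only.
    if temperature_c > 38.0 and white_cell_count > 11.0:
        return Recommendation(
            label="likely bacterial infection",
            confidence=0.87,
            explanation=("fever above 38 °C combined with an elevated "
                         "white-cell count drove this recommendation"),
        )
    return Recommendation("inconclusive", 0.55, "inputs within normal ranges")

rec = recommend(38.6, 12.4)
print(f"{rec.label} (confidence {rec.confidence:.0%}): {rec.explanation}")
```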

User training

Algorithmic recommendations represent a new type of information in many fields. For example, a medical AI diagnosis of a bacterial infection is different from a lab test indicating the presence of bacteria. When decision makers face a task for the first time, they may be especially hesitant to rely on an algorithm. Learning effects achieved through repeated tasks, continual feedback, and financial incentives have been shown to reduce algorithm aversion.[12]

Algorithm appreciation

Studies do not consistently find that people are biased against algorithms; sometimes they show the opposite, with people preferring advice from an algorithm over advice from a human. This effect is called algorithm appreciation.[2]

For example, customers are more likely to express initial interest to human sales agents than to automated sales agents, but less likely to provide contact information to the human agents. This has been attributed to "lower levels of performance expectancy and effort expectancy associated with human sales agents versus automated sales agents".[13]

References

  1. Jussupow, Ekaterina; Benbasat, Izak; Heinzl, Armin (2020). "Why Are We Averse Towards Algorithms? A Comprehensive Literature Review on Algorithm Aversion". Twenty-Eighth European Conference on Information Systems (ECIS2020): 1–16. https://aisel.aisnet.org/ecis2020_rp/168/
  2. Logg, Jennifer M.; Minson, Julia A.; Moore, Don A. (2019). "Algorithm appreciation: People prefer algorithmic to human judgment". Organizational Behavior and Human Decision Processes 151: 90–103. doi:10.1016/j.obhdp.2018.12.005. ISSN 0749-5978. https://www.sciencedirect.com/science/article/abs/pii/S0749597818303388
  3. Yeomans, Michael; Shah, Anuj; Mullainathan, Sendhil; Kleinberg, Jon (2019). "Making sense of recommendations". Journal of Behavioral Decision Making 32 (4): 403–414. doi:10.1002/bdm.2118. ISSN 1099-0771. https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2118
  4. Dietvorst, Berkeley J.; Simmons, Joseph P.; Massey, Cade (2015). "Algorithm aversion: People erroneously avoid algorithms after seeing them err". Journal of Experimental Psychology: General 144 (1): 114–126. doi:10.1037/xge0000033. ISSN 1939-2222. PMID 25401381. http://doi.apa.org/getdoi.cfm?doi=10.1037/xge0000033
  5. Cabitza, Federico (2019). "Biases Affecting Human Decision Making in AI-Supported Second Opinion Settings". MDAI 2019 - International Conference on Modeling Decisions for Artificial Intelligence. Lecture Notes in Computer Science 11676. pp. 283–294. doi:10.1007/978-3-030-26773-5_25. ISBN 978-3-030-26773-5. https://link.springer.com/chapter/10.1007/978-3-030-26773-5_25
  6. Burton, Jason W.; Stein, Mari-Klara; Jensen, Tina Blegind (2020). "A systematic review of algorithm aversion in augmented decision making". Journal of Behavioral Decision Making 33 (2): 220–239. doi:10.1002/bdm.2155. ISSN 1099-0771. https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2155
  7. Hidalgo, Cesar (2021). How Humans Judge Machines. Cambridge, MA: MIT Press. ISBN 978-0-262-04552-0.
  8. Longoni, Chiara; Bonezzi, Andrea; Morewedge, Carey K. (2019). "Resistance to Medical Artificial Intelligence". Journal of Consumer Research 46 (4): 629–650. doi:10.1093/jcr/ucz013. ISSN 0093-5301.
  9. Arkes, Hal R.; Dawes, Robyn M.; Christensen, Caryn (1986). "Factors influencing the use of a decision rule in a probabilistic task". Organizational Behavior and Human Decision Processes 37 (1): 93–110. doi:10.1016/0749-5978(86)90046-4. ISSN 0749-5978. https://www.sciencedirect.com/science/article/abs/pii/0749597886900464
  10. Feltovich, Paul J.; Prietula, Michael J.; Ericsson, K. Anders (2006). "Studies of Expertise from Psychological Perspectives". In Ericsson, K. Anders; Charness, Neil; Feltovich, Paul J. et al. (eds.), The Cambridge Handbook of Expertise and Expert Performance. Cambridge Handbooks in Psychology. Cambridge: Cambridge University Press. pp. 41–68. doi:10.1017/cbo9780511816796.004. ISBN 978-1-107-81097-6. https://www.cambridge.org/core/books/cambridge-handbook-of-expertise-and-expert-performance/studies-of-expertise-from-psychological-perspectives/3A7FF4C6F3426BE751C71EDF84927741
  11. Lourenço, Carlos J.S.; Dellaert, Benedict G.C.; Donkers, Bas (2020). "Whose Algorithm Says So: The Relationships Between Type of Firm, Perceptions of Trust and Expertise, and the Acceptance of Financial Robo-Advice". Journal of Interactive Marketing 49: 107–124. doi:10.1016/j.intmar.2019.10.003. ISSN 1094-9968. https://www.sciencedirect.com/science/article/pii/S1094996819301112
  12. Filiz, Ibrahim; Judek, Jan René; Lorenz, Marco; Spiwoks, Markus (2021). "Reducing algorithm aversion through experience". Journal of Behavioral and Experimental Finance 31: 100524. doi:10.1016/j.jbef.2021.100524. ISSN 2214-6350. https://www.sciencedirect.com/science/article/pii/S221463502100068X
  13. Adam, Martin; Roethke, Konstantin; Benlian, Alexander (2022). "Human Versus Automated Sales Agents: How and Why Customer Responses Shift Across Sales Stages". Information Systems Research 34 (3): 1148–1168. doi:10.1287/isre.2022.1171. ISSN 1047-7047. https://pubsonline.informs.org/doi/full/10.1287/isre.2022.1171