Human–artificial intelligence collaboration

Human-AI collaboration is the study of how humans and artificial intelligence (AI) agents work together to accomplish a shared goal.[1] AI systems can aid humans in tasks ranging from decision making to art creation.[2] Examples of collaboration include medical decision-making aids,[3][4] hate speech detection,[5] and music generation.[6] As AI systems become capable of tackling more complex tasks, studies are exploring how different models and explanation techniques can improve human-AI collaboration.

Improving collaboration

Explainable AI

When a human uses an AI's output, they often want to understand why the model produced it.[7] While some models, like decision trees, are inherently interpretable, black-box models do not have clear explanations. Various explainable artificial intelligence methods aim to describe model outputs with post-hoc explanations[8] or visualizations,[9] but these methods can provide misleading or outright false explanations.[10] Studies have also found that explanations may not improve the performance of a human-AI team, instead simply increasing the human's reliance on the model's output.[11]
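
To make the post-hoc approach concrete, the sketch below implements a LIME-style[8] local surrogate: it perturbs a single input, queries a black-box model on the perturbations, and fits a proximity-weighted linear model whose coefficients act as feature attributions. This is a minimal illustration, not the cited method itself; the dataset, noise scale, kernel, and model choices are all assumptions:

    # Minimal LIME-style post-hoc explanation sketch (illustrative only).
    # The dataset, black-box model, noise scale, and kernel are assumptions.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    data = load_breast_cancer()
    X, y = data.data, data.target
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    def explain_instance(x, n_samples=1000, scale=0.1, seed=0):
        """Fit a proximity-weighted linear surrogate to the black box near x."""
        rng = np.random.default_rng(seed)
        # Perturb the instance with Gaussian noise scaled to each feature's spread.
        noise = rng.normal(0.0, scale * X.std(axis=0), size=(n_samples, x.size))
        samples = x + noise
        preds = black_box.predict_proba(samples)[:, 1]  # black-box probabilities
        # Weight perturbed points by proximity to x (RBF kernel on distance).
        dists = np.linalg.norm(noise, axis=1)
        weights = np.exp(-(dists / dists.std()) ** 2)
        surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
        return surrogate.coef_  # local feature attributions

    coefs = explain_instance(X[0])
    for i in np.argsort(np.abs(coefs))[::-1][:5]:
        print(f"{data.feature_names[i]}: {coefs[i]:+.4f}")

Because the surrogate is only locally faithful, its coefficients can diverge from the black box's actual behavior, which is one way such explanations can mislead.[10]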

Trust in AI

A human's trust in an AI agent is an important factor in human-AI collaboration, as it dictates whether the human follows or overrides the AI's input.[12] Various factors affect a person's trust in an AI system, including its accuracy[13] and reliability.[14]
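
One simple way to make the follow-or-override decision concrete is a confidence threshold: the human defers to the model only when its predicted probability exceeds a cutoff, and decides unaided otherwise. The sketch below is a minimal illustration under assumed choices; the dataset, model, and 0.9 threshold are placeholders, not drawn from the cited studies:

    # Minimal confidence-threshold deferral sketch (illustrative only).
    # The dataset, model, and 0.9 trust cutoff are assumed placeholders.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)

    proba = model.predict_proba(X_te)
    confidence = proba.max(axis=1)        # model's self-reported certainty
    prediction = proba.argmax(axis=1)

    THRESHOLD = 0.9                       # assumed trust cutoff
    follow = confidence >= THRESHOLD      # cases where the human defers to the AI

    print(f"model followed on {follow.mean():.0%} of cases")
    print(f"accuracy when followed: {(prediction[follow] == y_te[follow]).mean():.0%}")
    if (~follow).any():  # cases routed back to the human for an unaided decision
        low = ~follow
        print(f"model accuracy on deferred cases: "
              f"{(prediction[low] == y_te[low]).mean():.0%}")

A miscalibrated model undermines such a rule, which is one reason both accuracy[13] and the human's mental model of when the AI errs[14] matter for well-placed trust.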

Humanizing AI-generated text

Reasons commonly given for humanizing AI-generated content include:[15]

  1. Relatability: Human readers seek emotionally resonant content; AI output can lack the nuances that make writing relatable.
  2. Authenticity: Readers value a genuine human voice, so humanization keeps content from reading as robotic.
  3. Contextual understanding: AI can misinterpret nuance, so human oversight is needed for accuracy.
  4. Ethical considerations: Human review helps identify and correct biases, supporting fairness.
  5. Search engine performance: AI-generated text may not consistently meet search engine guidelines, risking penalties.
  6. Conversion improvement: Humanized content connects emotionally and supports tailored calls to action.
  7. Building trust: Humanized content adds credibility, fostering reader trust.
  8. Cultural sensitivity: Humanization helps keep content respectful of and tailored to diverse audiences.

References

  1. Sturm, Timo; Gerlach, Jin P.; Pumplun, Luisa; Mesbah, Neda; Peters, Felix; Tauchert, Christoph; Nan, Ning; Buxmann, Peter (2021). "Coordinating Human and Machine Learning for Effective Organizational Learning". MIS Quarterly 45 (3): 1581–1602. doi:10.25300/MISQ/2021/16543. https://misq.org/coordinating-human-and-machine-learning-for-effective-organizational-learning.html. 
  2. Mateja, Deborah; Heinzl, Armin (July 2021). "Towards Machine Learning as an Enabler of Computational Creativity". IEEE Transactions on Artificial Intelligence 2 (6): 460–475. doi:10.1109/TAI.2021.3100456. ISSN 2691-4581. https://ieeexplore.ieee.org/document/9500215. 
  3. Yang, Qian; Steinfeld, Aaron; Zimmerman, John (2019-05-02). "Unremarkable AI". Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI '19. Glasgow, Scotland, UK: Association for Computing Machinery. pp. 1–11. doi:10.1145/3290605.3300468. ISBN 978-1-4503-5970-2. https://doi.org/10.1145/3290605.3300468. 
  4. Patel, Bhavik N.; Rosenberg, Louis; Willcox, Gregg; Baltaxe, David; Lyons, Mimi; Irvin, Jeremy; Rajpurkar, Pranav; Amrhein, Timothy et al. (2019-11-18). "Human–machine partnership with artificial intelligence for chest radiograph diagnosis" (in en). npj Digital Medicine 2 (1): 111. doi:10.1038/s41746-019-0189-7. ISSN 2398-6352. PMID 31754637. 
  5. "Facebook's AI for Hate Speech Improves. How Much Is Unclear" (in en-us). Wired. ISSN 1059-1028. https://www.wired.com/story/facebook-ai-hate-speech-improves-unclear/. Retrieved 2021-02-08. 
  6. Roberts, Adam; Engel, Jesse; Mann, Yotam; Gillick, Jon; Kayacik, Claire; Nørly, Signe; Dinculescu, Monica; Radebaugh, Carey et al. (2019). "Magenta Studio: Augmenting Creativity with Deep Learning in Ableton Live". Proceedings of the International Workshop on Musical Metacreation (MUME). http://musicalmetacreation.org/buddydrive/file/mume_2019_paper_2/. 
  7. Samek, Wojciech; Montavon, Grégoire; Vedaldi, Andrea; Hansen, Lars Kai; Müller, Klaus-Robert (2019-09-10) (in en). Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer Nature. ISBN 978-3-030-28954-6. https://books.google.com/books?id=j5yuDwAAQBAJ&q=explainable+AI&pg=PR5. 
  8. Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos (2016-08-13). ""Why Should I Trust You?": Explaining the Predictions of Any Classifier". Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '16. San Francisco, California, USA: Association for Computing Machinery. pp. 1135–1144. doi:10.1145/2939672.2939778. ISBN 978-1-4503-4232-2. https://doi.org/10.1145/2939672.2939778. 
  9. Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. (October 2017). "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization". 2017 IEEE International Conference on Computer Vision (ICCV). pp. 618–626. doi:10.1109/ICCV.2017.74. ISBN 978-1-5386-1032-9. https://ieeexplore.ieee.org/document/8237336. 
  10. Adebayo, Julius; Gilmer, Justin; Muelly, Michael; Goodfellow, Ian; Hardt, Moritz; Kim, Been (2018-12-03). "Sanity checks for saliency maps". Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS'18 (Montréal, Canada: Curran Associates Inc.): 9525–9536. https://dl.acm.org/doi/10.5555/3327546.3327621. 
  11. Bansal, Gagan; Wu, Tongshuang; Zhou, Joyce; Fok, Raymond; Nushi, Besmira; Kamar, Ece; Ribeiro, Marco Tulio; Weld, Daniel S. (2021-01-12). "Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance". arXiv:2006.14779 [cs.AI].
  12. Glikson, Ella; Woolley, Anita Williams (2020-03-26). "Human Trust in Artificial Intelligence: Review of Empirical Research". Academy of Management Annals 14 (2): 627–660. doi:10.5465/annals.2018.0057. ISSN 1941-6520. https://journals.aom.org/doi/10.5465/annals.2018.0057. 
  13. Yin, Ming; Wortman Vaughan, Jennifer; Wallach, Hanna (2019-05-02). "Understanding the Effect of Accuracy on Trust in Machine Learning Models". Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI '19. Glasgow, Scotland, UK: Association for Computing Machinery. pp. 1–12. doi:10.1145/3290605.3300509. ISBN 978-1-4503-5970-2. https://doi.org/10.1145/3290605.3300509. 
  14. Bansal, Gagan; Nushi, Besmira; Kamar, Ece; Lasecki, Walter S.; Weld, Daniel S.; Horvitz, Eric (2019-10-28). "Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance" (in en). Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (1): 2–11. doi:10.1609/hcomp.v7i1.5285. https://ojs.aaai.org/index.php/HCOMP/article/view/5285. 
  15. "Humanize AI Text". https://www.humanizeaitext.org/.