Weak artificial intelligence
Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as narrow AI,[1][2][3] is focused on one narrow task. In John Searle's terms it “would be useful for testing hypotheses about minds, but would not be minds”.[4] Weak AI focuses on mimicking how humans perform basic actions such as remembering things, perceiving things, and solving simple problems,[5] whereas strong AI aims at systems that can think and learn on their own. Computers can use methods such as algorithms and prior knowledge to develop ways of reasoning that resemble human thinking.[5] Strong AI systems are intended to run independently of the programmers who built them, while weak AI has no mind of its own and can only imitate behavior that it can observe.[6] Weak AI is contrasted with strong AI, which is defined variously as:
- Artificial general intelligence (AGI): a machine with the ability to apply intelligence to any problem, rather than just one specific problem.
- Human-level artificial intelligence: a machine with a similar intelligence to an average human being.
- Artificial superintelligence (ASI): a machine with a vastly superior intelligence to the average human being.
- Artificial consciousness: a machine that has consciousness, sentience and mind (John Searle uses "strong AI" in this sense).
Scholars like Antonio Lieto have argued that current research on both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" vs. "narrow" AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling"[7] (as, on the other hand, the strong-AI assumption implies).
Narrow AI can be classified as being “... limited to a single, narrowly defined task. Most modern AI systems would be classified in this category.”[8] In other words, a narrow system is strictly limited to solving one problem at a time, whereas strong AI would be closer to the human brain in its generality. These distinctions are associated with the philosopher John Searle, whose notion of strong AI remains controversial. Searle argues that the Turing test (proposed by Alan Turing in 1950, originally called the imitation game, which tests whether a machine can behave as intelligently as a human) is not an accurate or appropriate test for strong AI.[9]
Terminology
“Weak AI” is sometimes called “narrow AI”, but the latter is usually interpreted as a subfield of the former. Hypothesis testing about minds, or parts of minds, is typically not part of narrow AI; instead, narrow AI implements some superficial lookalike of a mental feature. Many existing systems that claim to use “artificial intelligence” likely operate as narrow AI focused on a specific problem, and are not weak AI in the traditional sense.
Siri, Cortana, and Google Assistant are all examples of narrow AI, but they are not good examples of weak AI, as they operate within a limited pre-defined range of functions. They do not implement parts of minds; they use natural language processing together with predefined rules. In particular, they are not examples of strong AI, as there is no genuine intelligence or self-awareness. In 2010, AI researcher Ben Goertzel wrote on his blog that Siri was "VERY narrow and brittle", as evidenced by its annoying results when asked questions outside the limits of the application.[10]
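The brittleness Goertzel describes can be illustrated with a toy sketch. The rule table and responses below are hypothetical, not how Siri or any real assistant works: the point is only that a pattern-matching system answers fluently inside its predefined range and fails on everything else.

```python
# Toy rule-based "assistant": a narrow, pattern-matching system that
# handles only a fixed, pre-defined range of requests.
RULES = {
    "weather": "It is sunny today.",
    "time": "It is 3:00 PM.",
    "alarm": "Alarm set for 7:00 AM.",
}

def respond(query: str) -> str:
    # Match the first known keyword; there is no understanding here,
    # only lookup against predefined rules.
    for keyword, answer in RULES.items():
        if keyword in query.lower():
            return answer
    # Anything outside the rule set fails -- the "brittleness".
    return "Sorry, I can't help with that."

print(respond("What's the weather like?"))  # It is sunny today.
print(respond("Do you have a mind?"))       # Sorry, I can't help with that.
```

Within its rules the assistant seems competent; one step outside them, the illusion of intelligence disappears, which is exactly the narrow-AI behavior described above.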
Weak AI vs. strong AI
The differences between weak AI and strong AI are not yet widely cataloged. Weak AI is commonly associated with basic technology such as the voice-recognition software behind Siri or Alexa, as mentioned in the Terminology section, whereas strong AI has not been implemented or tested and exists mainly in movies and popular culture.[11] One likely direction for AI is an assisting role for humans: there are sets of data and numbers that humans cannot process or understand as quickly as computers can, and this is where AI can help.[12]
Impact
Some commentators think narrow AI could be dangerous because of this "brittleness", failing in unpredictable ways. Narrow AI could cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles.[1]
Examples
Examples of narrow AI include self-driving cars, robotic systems used in the medical field, and diagnostic systems. Narrow AI systems can be dangerous if unreliable: medicines could be incorrectly sorted and distributed, and faulty medical diagnoses can have serious, sometimes deadly, consequences.[13] Another current issue with narrow AI is that its behavior can become inconsistent:[14] it can be difficult for the system to grasp complex patterns and reach a solution that works reliably across different environments.
Simple artificial intelligence programs have already worked their way into society, often without being noticed: autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data sciences, to name a few.[15] As much as narrow and relatively general AI is slowly starting to help societies, it is also starting to hurt them. AI has already unfairly put people in jail, discriminated against women in hiring, taught problematic ideas to millions, and even killed people in autonomous-car accidents.[16] AI may be a powerful tool for improving our lives, but it is also a dangerous technology with the potential to get out of hand.
Social media
Facebook and other similar social media platforms have worked out how to use machine learning, a form of narrow AI, to predict how people will react to being shown certain images. Narrow AI systems can identify what users will engage with, based on what they post, by following patterns and trends.[17]
Twitter has begun deploying more advanced AI systems to detect whether accounts are bots being used for biased propaganda or potentially malicious purposes. These systems filter words and build up layers of conditions based on indicators seen in the past, and then judge whether an account is likely to be a bot.[18]
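The layered, rule-based filtering described above can be sketched in miniature. Every word list, threshold, and feature below is a made-up illustration, not Twitter's actual detection logic: the sketch only shows how stacking simple conditions yields a bot-likelihood score.

```python
# Toy bot detector: layers simple conditions into a score, echoing the
# word-filtering and stacked-condition approach described above.
SPAM_WORDS = {"free", "click", "giveaway", "crypto"}

def bot_score(account: dict) -> float:
    score = 0.0
    # Layer 1: word filtering over the account's recent posts.
    words = " ".join(account["posts"]).lower().split()
    if sum(w in SPAM_WORDS for w in words) >= 3:
        score += 0.4
    # Layer 2: posting rate (bots often post far more than humans).
    if account["posts_per_day"] > 100:
        score += 0.4
    # Layer 3: account metadata (very new accounts are more suspect).
    if account["account_age_days"] < 7:
        score += 0.2
    return score

suspect = {
    "posts": ["FREE crypto now", "click for FREE crypto"],
    "posts_per_day": 400,
    "account_age_days": 2,
}
print(bot_score(suspect))  # close to 1.0: likely a bot
```

A real system would learn such conditions from data rather than hard-code them, but the principle of filtering words and layering conditions is the same.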
TikTok uses its "For You" algorithm to determine a user's interests very quickly by analyzing patterns in the videos the user initially chooses to watch. This narrow AI system uses patterns shared between videos, including their duration, who has shared or commented on them, and the music they play, to determine which video should be shown next. TikTok's "For You" algorithm is accurate enough to work out what a user is interested in, or even loves, in less than an hour.[19]
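This kind of pattern matching between watched and candidate videos can be sketched with a simple content-based ranker. The feature tags and video names are invented for illustration and bear no relation to TikTok's actual system, which uses far richer signals.

```python
# Toy "For You"-style ranker: recommend the candidate video whose
# feature tags overlap most with the videos the user already watched.
watched = [
    {"tags": {"cooking", "short", "pop-music"}},
    {"tags": {"cooking", "baking"}},
]

candidates = {
    "video_a": {"tags": {"cooking", "baking", "short"}},
    "video_b": {"tags": {"gaming", "esports"}},
}

# Build the user's interest profile from the watched videos' tags.
profile = set().union(*(v["tags"] for v in watched))

def score(video: dict) -> int:
    # Overlap between a candidate's tags and the interest profile.
    return len(video["tags"] & profile)

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # video_a: it shares cooking/baking/short with the profile
```

Each watched video refines the profile, which is why such a system can converge on a user's interests after a relatively small amount of viewing.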
See also
- Artificial intelligence
- A.I. Rising
- Deep learning
- Expert system
- History of artificial intelligence
- Virtual assistant
- Machine learning
- Philosophy of artificial intelligence
- Artificial general intelligence
- Hardware for artificial intelligence
- Synthetic intelligence
References
- ↑ 1.0 1.1 Dvorsky, George (April 1, 2013). "How Much Longer Before Our First AI Catastrophe?". https://gizmodo.com/how-much-longer-before-our-first-ai-catastrophe-464043243.
- ↑ Muehlhauser, Luke (October 18, 2013). "Ben Goertzel on AGI as a Field". https://intelligence.org/2013/10/18/ben-goertzel/.
- ↑ Chalfen, Mike (October 15, 2015). "The Challenges Of Building AI Apps". https://techcrunch.com/2015/10/15/machine-learning-its-the-hard-problems-that-are-valuable/.
- ↑ The Cambridge handbook of artificial intelligence. Frankish, Keith., Ramsey, William M., 1960-. Cambridge, UK. 12 June 2014. pp. 342. ISBN 978-0-521-87142-6. OCLC 865297798.
- ↑ 5.0 5.1 Chandler, Daniel; Munday, Rod (2020). A Dictionary of Media and Communication. Oxford University Press. doi:10.1093/acref/9780198841838.001.0001. ISBN 978-0-19-884183-8. http://dx.doi.org/10.1093/acref/9780198841838.001.0001.
- ↑ Colman, Andrew M. (2015). A dictionary of psychology (4th ed.). Oxford. ISBN 978-0-19-965768-1. OCLC 896901441. https://www.worldcat.org/oclc/896901441.
- ↑ Lieto, Antonio (2021). Cognitive Design for Artificial Minds. London, UK: Routledge, Taylor & Francis. pp. 85. ISBN 9781138207929.
- ↑ Bartneck, Christoph; Lütge, Christoph; Wagner, Alan; Welsh, Sean (2021) (in en). An Introduction to Ethics in Robotics and AI. SpringerBriefs in Ethics. Cham: Springer International Publishing. doi:10.1007/978-3-030-51110-4. ISBN 978-3-030-51109-8. http://link.springer.com/10.1007/978-3-030-51110-4.
- ↑ Liu, Bin (2021-03-28). ""Weak AI" is Likely to Never Become "Strong AI", So What is its Greatest Value for us?". arXiv.
- ↑ Goertzel, Ben (February 6, 2010). "Siri, the new iPhone "AI personal assistant": Some useful niche applications, not so much AI". http://multiverseaccordingtoben.blogspot.com/2010/02/siri-new-iphone-personal-assistant-some.html.
- ↑ Kerns, Jeff (February 15, 2017). "What's the Difference Between Weak and Strong AI?". https://www.proquest.com/docview/1876870051.
- ↑ LaPlante, Alice; Maliha, Balala (2018). Solving Quality and Maintenance Problems with AI. O'Reilly Media, Inc.. ISBN 9781491999561. https://learning.oreilly.com/library/view/solving-quality-and/9781491999561/?ar=.
- ↑ Szocik, Konrad; Jurkowska-Gomułka, Agata (2021-12-16). "Ethical, Legal and Political Challenges of Artificial Intelligence: Law as a Response to AI-Related Threats and Hopes" (in en). World Futures: 1–17. doi:10.1080/02604027.2021.2012876. ISSN 0260-4027. https://www.tandfonline.com/doi/full/10.1080/02604027.2021.2012876.
- ↑ Kuleshov, Andrey; Prokhorov, Sergei (September 2019). "Domain Dependence of Definitions Required to Standardize and Compare Performance Characteristics of Weak AI Systems". 2019 International Conference on Artificial Intelligence: Applications and Innovations (IC-AIAI). Belgrade, Serbia: IEEE. pp. 62–623. doi:10.1109/IC-AIAI48757.2019.00020. ISBN 978-1-7281-4326-2. https://ieeexplore.ieee.org/document/9007318.
- ↑ Earley, Seth (2017). "The Problem With AI". IT Professional 19 (4): 63–67. doi:10.1109/MITP.2017.3051331. ISSN 1520-9202. https://ieeexplore.ieee.org/document/8012343.
- ↑ Anirudh, Koul; Siddha, Ganju; Meher, Kasam (2019). Practical Deep Learning for Cloud, Mobile, and Edge. O'Reilly Media. ISBN 9781492034865. https://learning.oreilly.com/library/view/practical-deep-learning/9781492034858/?ar=.
- ↑ Kaiser, Carolin; Ahuvia, Aaron; Rauschnabel, Philipp A.; Wimble, Matt (2020-09-01). "Social media monitoring: What can marketers learn from Facebook brand photos?" (in en). Journal of Business Research 117: 707–717. doi:10.1016/j.jbusres.2019.09.017. ISSN 0148-2963. https://www.sciencedirect.com/science/article/pii/S0148296319305429.
- ↑ Shukla, Rachit; Sinha, Adwitiya; Chaudhary, Ankit (28 February 2022). "TweezBot: An AI-Driven Online Media Bot Identification Algorithm for Twitter Social Networks" (in en). Electronics 11 (5): 743. doi:10.3390/electronics11050743. ISSN 2079-9292.
- ↑ Hyunjin, Kang (September 2022). "AI agency vs. human agency: understanding human-AI interactions on TikTok and their implications for user engagement". https://academic.oup.com/jcmc/article/27/5/zmac014/6670985?login=false.
Original source: https://en.wikipedia.org/wiki/Weak artificial intelligence.