Artificial consciousness

Short description: Field in cognitive science

Artificial consciousness[1] (AC), also known as machine consciousness (MC),[2][3] synthetic consciousness[4] or digital consciousness,[5] is the consciousness hypothesized to be possible in artificial intelligence.[6] It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel qualia).[7]

Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness or NCC. Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious.[8]

Philosophical views

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy divides consciousness into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.[9]

Plausibility debate

Type-identity theorists and other skeptics hold the view that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution.[10][11][12][13]

In his article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can no longer be reprogrammed, from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."[14]

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.[15]

Computational Foundation argument

One of the most explicit arguments for the plausibility of artificial sentience comes from David Chalmers. His proposal is roughly that the right kinds of computations are sufficient for the possession of a conscious mind. Chalmers proposes that a system implements a computation if "the causal structure of the system mirrors the formal structure of the computation", and that any system that implements certain computations is sentient.[16]
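To make the implementation criterion concrete, consider a toy sketch (not from Chalmers' paper): a formal two-state automaton, a hypothetical physical system whose states are voltage levels, and a candidate mapping under which the physical transitions mirror the formal ones. All names and values are invented for illustration.

```python
# Illustrative sketch (not Chalmers' formalism): a physical system
# "implements" a computation when a mapping from physical states to
# formal states makes physical transitions mirror the formal table.

# Formal computation: a two-state automaton over inputs {0, 1}.
formal_transitions = {("A", 0): "A", ("A", 1): "B",
                      ("B", 0): "B", ("B", 1): "A"}

# Hypothetical physical system: voltage levels standing in for states.
physical_transitions = {(0.2, 0): 0.2, (0.2, 1): 4.8,
                        (4.8, 0): 4.8, (4.8, 1): 0.2}

# Candidate mapping from physical states to formal states.
mapping = {0.2: "A", 4.8: "B"}

def implements(physical, formal, mapping, inputs=(0, 1)):
    """True if the causal structure mirrors the formal structure."""
    return all(
        mapping[physical[(p, i)]] == formal[(mapping[p], i)]
        for p in mapping for i in inputs
    )

print(implements(physical_transitions, formal_transitions, mapping))  # True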

The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant". Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role". Aided by previous work,[17][18] he says that "[s]ystems with the same causal topology…will share their psychological properties".

Phenomenological properties, unlike psychological properties, are not definable in terms of their causal roles. Establishing that phenomenological properties are a consequence of a causal topology, therefore, requires argument. Chalmers provides his Dancing Qualia argument for this purpose.[19]

Chalmers begins by assuming that his principle of organizational invariance is false: that agents with identical causal organizations could have different experiences. He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. The experience of the agent under transformation would change (as the parts were replaced), but there would be no change in causal topology and therefore no means whereby the agent could "notice" the shift in experience; Chalmers considers this state of affairs an implausible reductio ad absurdum establishing that his principle of organizational invariance must almost certainly be true.

Critics of artificial sentience object that Chalmers begs the question in assuming that all mental properties and external connections are sufficiently captured by abstract causal organization.

Controversies

In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the scientific community judged the chatbot's behavior to be a likely consequence of mimicry rather than machine sentience, and Lemoine's claim was widely ridiculed.[20] Philosopher Nick Bostrom said that he thinks LaMDA probably is not conscious, but asked "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, would have to understand how consciousness works, and would then have to figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain... there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."[21]

Testing

The best-known method for testing machine intelligence is the Turing test. But when interpreted as purely observational, this test contradicts the philosophy-of-science principle of the theory-dependence of observation. It has also been suggested that Alan Turing's recommendation of imitating not an adult human consciousness but a child's consciousness should be taken seriously.[22]

Qualia, or phenomenological consciousness, are an inherently first-person phenomenon. Although various systems may display various signs of behavior correlated with functional consciousness, there is no conceivable way in which third-person tests can have access to first-person phenomenological features. Because of that, and because there is no empirical definition of sentience,[23] a test for the presence of sentience in AC may be impossible.

In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments.[24] He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness: a positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
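The logic of the test, and its asymmetry, can be summarized in a short sketch; the dataclass fields and function names below are hypothetical labels for conditions that would in practice be very hard to verify.

```python
# Hypothetical encoding of Argonov's decision rule; field names invented.
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    CONSCIOUS = "conscious"
    INCONCLUSIVE = "inconclusive"   # a negative result proves nothing

@dataclass
class MachineRecord:
    produces_philosophical_judgments: bool  # e.g. on qualia, binding
    had_innate_philosophy: bool             # preloaded philosophical knowledge
    had_philosophical_discussions: bool     # during learning
    models_other_minds: bool                # informational models of other creatures

def argonov_verdict(m: MachineRecord) -> Verdict:
    uncontaminated = not (m.had_innate_philosophy
                          or m.had_philosophical_discussions
                          or m.models_other_minds)
    if m.produces_philosophical_judgments and uncontaminated:
        return Verdict.CONSCIOUS
    # Absence of judgments may just be a lack of intellect.
    return Verdict.INCONCLUSIVE

m = MachineRecord(produces_philosophical_judgments=False,
                  had_innate_philosophy=False,
                  had_philosophical_discussions=False,
                  models_other_minds=False)
print(argonov_verdict(m))   # Verdict.INCONCLUSIVE, not "not conscious"
```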

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, the status of a conscious computer that is owned and used as a tool, or as the central computer of a larger machine, is particularly ambiguous. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though the topic has often been a theme in fiction (see below).

In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering".[25]

Aspects of consciousness considered necessary

Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious.[26] The functions of consciousness suggested by Baars are: definition and context setting; adaptation and learning; editing, flagging, and debugging; recruiting and control; prioritizing and access control; decision-making or executive function; analogy-forming; metacognition and self-monitoring; and autoprogramming and self-maintenance. Igor Aleksander suggested 12 principles for artificial consciousness:[27] the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive.

Awareness

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroscanning experiments on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process, based on information received through the senses or imagined, and is also useful for making predictions. Such modeling needs a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.

There are at least three types of awareness:[28] agency awareness, goal awareness, and sensorimotor awareness, each of which may or may not be conscious. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.[29]

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval.[30] The IDA model[31] elucidates the role of consciousness in the updating of perceptual memory,[32] transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system.[33] In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.[34]
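For illustration, here is a minimal, generic sketch of Kanerva-style sparse distributed memory (not IDA's modified Java implementation); the dimensions, activation radius, and seed are arbitrary toy values.

```python
import numpy as np

class SparseDistributedMemory:
    """Toy Kanerva-style SDM; all parameters are illustrative."""
    def __init__(self, n_locations=1000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        # Hard locations: fixed random binary addresses.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        # Counters accumulate bipolar writes at each hard location.
        self.counters = np.zeros((n_locations, dim), dtype=np.int32)
        self.radius = radius

    def _activated(self, address):
        # Activate every hard location within Hamming distance `radius`.
        dists = np.count_nonzero(self.addresses != address, axis=1)
        return dists <= self.radius

    def write(self, address, data):
        # Add +1 for 1-bits and -1 for 0-bits at all activated locations.
        self.counters[self._activated(address)] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Sum counters over activated locations; threshold at zero.
        sums = self.counters[self._activated(address)].sum(axis=0)
        return (sums > 0).astype(int)

sdm = SparseDistributedMemory()
pattern = np.random.default_rng(1).integers(0, 2, 256)
sdm.write(pattern, pattern)                  # store autoassociatively
noisy = pattern.copy()
noisy[:25] ^= 1                              # corrupt 25 of 256 bits
print((sdm.read(noisy) == pattern).mean())   # fraction recovered, typically 1.0
```

Reading from a noisy cue still activates mostly the same hard locations, which is what gives the memory its content-addressable, error-correcting character.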

Learning

Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events.[26] Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".[35]

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander.[36] The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.[36] An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, so that it can demonstrate that it possesses artificial consciousness in the present and future, not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, executing them only when appropriate to simulate and control the real world.
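As a toy illustration (nothing here comes from the cited sources), the following loop shows the bare predict-compare-update structure such a component would need: the agent forecasts the next state from its world model, acts preemptively when the forecast crosses a threshold, and corrects the model from the prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)
true_dynamics = 1.05     # world grows 5% per step (unknown to the agent)
model = 1.0              # agent's learned estimate of the dynamics
state, threshold, lr = 1.0, 2.0, 0.1

for step in range(30):
    predicted = model * state            # anticipate the next state
    if predicted > threshold:            # act preemptively to avert the event
        state *= 0.5
        print(f"step {step}: preempted, state reset to {state:.2f}")
    observed = true_dynamics * state + rng.normal(0, 0.01)
    error = observed - model * state     # prediction error
    model += lr * error * state          # correct the world model
    state = observed
```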

Subjective experience

Subjective experiences or qualia are widely considered to be the hard problem of consciousness. Indeed, it is held to pose a challenge to physicalism, let alone computationalism.

Implementation proposals

Symbolic or hybrid

Intelligent Distribution Agent

Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory.[26][37] His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural language e-mail dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled.[38][39]
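The flavor of a codelet can be conveyed with a minimal sketch; this is emphatically not IDA's quarter-million lines of Java, and the workspace, sailor, and job structures below are invented stand-ins.

```python
# Invented codelet sketch: each codelet is a small, special-purpose,
# relatively independent agent running on its own thread, posting its
# contribution to a shared workspace.
import threading, queue

workspace = queue.Queue()   # stand-in for IDA's shared workspace

def skill_matcher_codelet(sailor, job):
    # Hypothetical codelet: one small, independent check per thread.
    if job["skills"] <= sailor["skills"]:
        workspace.put(f"{sailor['name']} matches {job['title']}")

sailor = {"name": "Lee", "skills": {"radar", "navigation"}}
jobs = [{"title": "radar operator", "skills": {"radar"}},
        {"title": "cook", "skills": {"cooking"}}]

threads = [threading.Thread(target=skill_matcher_codelet, args=(sailor, job))
           for job in jobs]
for t in threads:
    t.start()
for t in threads:
    t.join()
while not workspace.empty():
    print(workspace.get())   # -> Lee matches radar operator
```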

While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task." IDA has been extended to LIDA (Learning Intelligent Distribution Agent).

CLARION cognitive architecture

Main page: CLARION (cognitive architecture)

The CLARION cognitive architecture posits a two-level representation that explains the distinction between conscious and unconscious mental processes. CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill-learning tasks have been simulated using CLARION, spanning the spectrum from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task.[40] Among them, SRT, AGL, and PC are typical implicit learning tasks, highly relevant to the issue of consciousness because they operationalize the notion of consciousness in the context of psychological experiments.
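The two-level idea can be caricatured as follows; this sketch is an invented stand-in, not the actual CLARION implementation, whose levels also learn and interact in far richer ways.

```python
# Caricature of CLARION-style two-level action selection (invented):
# an explicit top level holds symbolic, verbalizable rules; an implicit
# bottom level is a subsymbolic network; their outputs are integrated.
import numpy as np

rng = np.random.default_rng(0)

# Top level: explicit rules (accessible, "conscious" knowledge).
rules = {("light", "red"): "stop"}

# Bottom level: implicit knowledge (a tiny linear net over features).
ACTIONS = ["stop", "go"]
W = rng.normal(0.0, 0.1, size=(4, len(ACTIONS)))

def implicit_level(features):
    scores = features @ W                  # subsymbolic action values
    return ACTIONS[int(np.argmax(scores))]

def select_action(symbolic_state, features, weight_top=0.7):
    # Integrate the two levels: prefer an applicable explicit rule with
    # probability `weight_top`, otherwise defer to the implicit level.
    explicit = rules.get(symbolic_state)
    if explicit is not None and rng.random() < weight_top:
        return explicit
    return implicit_level(features)

print(select_action(("light", "red"), np.array([1.0, 0.0, 0.0, 1.0])))
```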

OpenCog

Ben Goertzel has made an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics carried out at the Hong Kong Polytechnic University.

Connectionist

Haikonen's cognitive architecture

Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."[41][42]

Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many.[43][44] A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.[45][46]

Shanahan's cognitive architecture

Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").[47][2][3][48]
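A hedged sketch of that combination: specialist processes compete for a global workspace whose winning content is broadcast back to all of them, and one specialist is an internal simulator that reacts to broadcasts by "imagining" outcomes. The specialists and salience values below are invented, not Shanahan's implementation.

```python
# Invented sketch of a global workspace with an "imagination" specialist.

def perception(_broadcast):
    # Specialist: proposes (salience, content) from (pretend) sensors.
    return (0.4, "obstacle ahead")

def imagination(broadcast):
    # Specialist: internal simulation reacting to the last broadcast.
    if broadcast == "obstacle ahead":
        return (0.9, "simulated: turning left avoids collision")
    return (0.1, "nothing imagined")

specialists = [perception, imagination]
broadcast = None
for cycle in range(3):
    proposals = [s(broadcast) for s in specialists]
    salience, content = max(proposals)   # competition for the workspace
    broadcast = content                  # winner is broadcast to all specialists
    print(f"cycle {cycle}: broadcast = {broadcast!r}")
```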

Takeno's self-awareness research

Self-awareness in robots is being investigated by Junichi Takeno at Meiji University in Japan.[49] Takeno asserts that he has developed a robot capable of discriminating between its own image in a mirror and any other robot with an identical appearance.[50][51][52] Takeno asserts that he first contrived the computational module called a MoNAD, which has a self-aware function, and that he then constructed the artificial consciousness system by formulating the relationships between emotions, feelings, and reason by connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror-image cognition experiment using a robot equipped with the MoNAD system. Takeno proposed the Self-Body Theory, stating that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." In his view, the most important point in developing artificial consciousness or clarifying human consciousness is the development of a function of self-awareness, and he claims that he has demonstrated physical and mathematical evidence for this in his thesis.[53] He also demonstrated that robots can study episodes in memory where emotions were stimulated and use this experience to take predictive actions that prevent the recurrence of unpleasant emotions (Torigoe, Takeno 2009).

Impossible Minds: My Neurons, My Consciousness

Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and wrote in his 1996 book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language.[54] Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.[55]

Creativity Machine

Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),[56][57][58] or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies.[59] He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain attach dubious significance to overall cortical activity.[60][61][62] Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to a stream of consciousness.[61][63][64][65][66]
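The mechanism as described (synaptic noise perturbing a trained network so that it confabulates, with a critic filtering the output) can be sketched abstractly; the Hopfield-style network, noise level, and novelty critic below are toy stand-ins rather than Thaler's patented design.

```python
# Toy stand-in for the described mechanism: inject synaptic noise into a
# trained net so it confabulates variants of its memories, and let a
# critic keep the potentially useful ones. Not Thaler's actual system.
import numpy as np

rng = np.random.default_rng(42)
memories = np.array([[1., 1., 0., 0.],
                     [0., 0., 1., 1.]])

# "Trained" Hopfield-style autoassociative weights (Hebbian, zero diagonal).
W = sum(np.outer(m * 2 - 1, m * 2 - 1) for m in memories)
np.fill_diagonal(W, 0)

def recall(cue, noise=0.0, steps=5):
    Wn = W + rng.normal(0, noise, W.shape)   # inject synaptic noise
    x = cue.copy()
    for _ in range(steps):
        x = (Wn @ (x * 2 - 1) > 0).astype(float)
    return x

def critic(pattern):
    # Toy critic: keep patterns that are novel yet near a stored memory.
    d = np.abs(memories - pattern).sum(axis=1).min()
    return 0 < d <= 1

cue = memories[0]
for noise in (0.0, 1.5):
    out = recall(cue, noise)
    verdict = "kept" if critic(out) else "discarded"
    print(f"noise={noise}: {out} -> {verdict}")
# With no noise the net merely recalls a memory (discarded as non-novel);
# with noise it may confabulate variants that the critic deems worth keeping.
```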

Attention schema theory

Main page: Philosophy:Attention schema theory

In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema.[67] Graziano went on to publish an expanded discussion of this theory in his book Consciousness and the Social Brain.[8] This Attention Schema Theory of Consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial location of a person's body.[8] This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and that should be duplicable by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself: the brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
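On one hedged reading of this proposal, a machine would keep a simplified model (schema) of its own attentional state and answer awareness queries from the schema rather than from the raw attention process. The stimuli, softmax enhancement, and report function below are invented illustrations, not Graziano's model.

```python
# Invented sketch of an attention-schema mechanism: the agent keeps a
# coarse model of its own attentional state and reports "awareness"
# from that model, not from the raw attention process itself.
import numpy as np

stimuli = {"face": 0.9, "chair": 0.2, "clock": 0.4}   # bottom-up salience

# Raw attention process: competitive enhancement over the stimuli.
vals = np.array(list(stimuli.values()))
weights = np.exp(4 * vals) / np.exp(4 * vals).sum()

# Attention schema: a coarse, schematic model of that process.
schema = {name: ("attended" if w == weights.max() else "background")
          for name, w in zip(stimuli, weights)}

def report_awareness(item):
    # The self-report is computed from the schema (the model), which is
    # why it can be sincere yet simplified and incomplete.
    if schema[item] == "attended":
        return f"I am aware of the {item}"
    return f"the {item} is not in my awareness"

print(report_awareness("face"))    # I am aware of the face
print(report_awareness("chair"))   # the chair is not in my awareness
```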

"Self-modeling"

Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots. "Self-modeling" consists of a robot running an internal model or simulation of itself.[68][69]
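A toy sketch of the idea follows (the one-parameter "body" is invented; real self-modeling robots learn far richer models): the robot predicts outcomes with an internal simulation of itself and repairs that model when predictions diverge from sensed outcomes, for example after damage.

```python
# Invented toy of self-modeling in Lipson's sense: the robot runs an
# internal simulation of its own body and repairs the model when
# predicted and sensed outcomes diverge, e.g. after damage.
class Robot:
    def __init__(self):
        self.true_leg_length = 1.0     # the actual body (not known directly)
        self.model_leg_length = 1.0    # internal self-model

    def sense_step_distance(self, command):
        return command * self.true_leg_length          # real outcome

    def simulate_step_distance(self, command):
        return command * self.model_leg_length         # imagined outcome

    def self_model_update(self, command, lr=0.5):
        error = (self.sense_step_distance(command)
                 - self.simulate_step_distance(command))
        self.model_leg_length += lr * error / command  # adapt the self-model
        return error

robot = Robot()
robot.true_leg_length = 0.6            # simulated damage to the body
for step in range(1, 6):
    err = robot.self_model_update(command=1.0)
    print(f"step {step}: prediction error {err:+.3f}, "
          f"model leg length {robot.model_leg_length:.3f}")
```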

In fiction

Main page: Simulated consciousness in fiction
  • In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started knowing practically nothing, thus being similar to artificial consciousness.

See also


References

Citations

  1. Thaler, S. L. (1998). "The emerging intelligence and its critical look at us". Journal of Near-Death Studies 17 (1): 21–29. doi:10.1023/A:1022990118714. 
  2. Gamez 2008. 
  3. Reggia 2013. 
  4. Smith, David Harris; Schillaci, Guido (2021). "Why Build a Robot With Artificial Consciousness? How to Begin? A Cross-Disciplinary Dialogue on the Design and Implementation of a Synthetic Model of Consciousness". Frontiers in Psychology 12: 530560. doi:10.3389/fpsyg.2021.530560. ISSN 1664-1078. PMID 33967869. 
  5. Elvidge, Jim (2018) (in en). Digital Consciousness: A Transformative Vision. John Hunt Publishing Limited. ISBN 978-1-78535-760-2. https://books.google.com/books?id=kIqttQEACAAJ. 
  6. Chrisley, Ron (October 2008). "Philosophical foundations of artificial consciousness". Artificial Intelligence in Medicine 44 (2): 119–137. doi:10.1016/j.artmed.2008.07.011. PMID 18818062. https://www.sciencedirect.com/science/article/abs/pii/S0933365708001000. 
  7. Institute, Sentience. "The Terminology of Artificial Sentience" (in en). http://www.sentienceinstitute.org/blog/artificial-sentience-terminology. 
  8. Graziano 2013. 
  9. Block, Ned (2010). "On a confusion about a function of consciousness" (in en). Behavioral and Brain Sciences 18 (2): 227–247. doi:10.1017/S0140525X00038188. ISSN 1469-1825. https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/on-a-confusion-about-a-function-of-consciousness/061422BF0C50C5FF00927F9B6E879413. 
  10. Block, Ned (1978). "Troubles for Functionalism". Minnesota Studies in the Philosophy of Science: 261–325. 
  11. Bickle, John (2003) (in en). Philosophy and Neuroscience. Dordrecht: Springer Netherlands. doi:10.1007/978-94-010-0237-0. ISBN 978-1-4020-1302-7. http://link.springer.com/10.1007/978-94-010-0237-0. 
  12. Schlagel, R. H. (1999). "Why not artificial consciousness or thought?". Minds and Machines 9 (1): 3–28. doi:10.1023/a:1008374714117. 
  13. Searle, J. R. (1980). "Minds, brains, and programs". Behavioral and Brain Sciences 3 (3): 417–457. doi:10.1017/s0140525x00005756. http://cogprints.org/7150/1/10.1.1.83.5248.pdf. 
  14. Buttazzo, Giorgio (July 2001). "Artificial consciousness: Utopia or real possibility?". Computer. ISSN 0018-9162. 
  15. Putnam, Hilary (1967). The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion. University of Pittsburgh Press. 
  16. David J. Chalmers (2011). "A Computational Foundation for the Study of Cognition". Journal of Cognitive Science 12 (4): 325–359. doi:10.17791/JCS.2011.12.4.325. https://www.ida.liu.se/divisions/hcs/seminars/cogsciseminars/Papers/Chalmers_Computational_foundations.pdf. 
  17. Armstrong, D. M. (1968). Honderich, Ted. ed. A Materialist Theory of the Mind. New York: Routledge. https://philpapers.org/rec/ARMAMT-5. 
  18. Lewis, David (1972). "Psychophysical and theoretical identifications" (in en). Australasian Journal of Philosophy 50 (3): 249–258. doi:10.1080/00048407212341301. ISSN 0004-8402. http://www.tandfonline.com/doi/abs/10.1080/00048407212341301. 
  19. Chalmers, David (1995). "Absent Qualia, Fading Qualia, Dancing Qualia". http://consc.net/papers/qualia.html. 
  20. "'I am, in fact, a person': can artificial intelligence ever be sentient?" (in en). the Guardian. 14 August 2022. https://www.theguardian.com/technology/2022/aug/14/can-artificial-intelligence-ever-be-sentient-googles-new-ai-program-is-raising-questions. 
  21. Leith, Sam (7 July 2022). "Nick Bostrom: How can we be certain a machine isn't conscious?". The Spectator. https://www.spectator.co.uk/article/nick-bostrom-how-can-we-be-certain-a-machine-isnt-conscious/. 
  22. "Mapping the Landscape of Human-Level Artificial General Intelligence". http://web.eecs.utk.edu/~itamar/Papers/AI_MAG_2011.pdf. 
  23. "Consciousness". In Honderich T. The Oxford companion to philosophy. Oxford University Press. ISBN:978-0-19-926479-7
  24. Victor Argonov (2014). "Experimental Methods for Unraveling the Mind-body Problem: The Phenomenal Judgment Approach". Journal of Mind and Behavior 35: 51–70. http://philpapers.org/rec/ARGMAA-2. 
  25. Metzinger, Thomas (2021). "Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology". Journal of Artificial Intelligence and Consciousness 08: 43–66. doi:10.1142/S270507852150003X. 
  26. Baars 1995. 
  27. Aleksander, Igor (1995). "Artificial neuroconsciousness an update". in Mira, José; Sandoval, Francisco (in en). From Natural to Artificial Neural Computation. Lecture Notes in Computer Science. 930. Berlin, Heidelberg: Springer. pp. 566–583. doi:10.1007/3-540-59497-3_224. ISBN 978-3-540-49288-7. https://link.springer.com/chapter/10.1007/3-540-59497-3_224. 
  28. Joëlle Proust in Neural Correlates of Consciousness, Thomas Metzinger, 2000, MIT, pages 307–324
  29. Christof Koch, The Quest for Consciousness, 2004, page 2 footnote 2
  30. Tulving, E. 1985. Memory and consciousness. Canadian Psychology 26:1–12
  31. Franklin, Stan, et al. "The role of consciousness in memory." Brains, Minds and Media 1.1 (2005): 38.
  32. Franklin, Stan. "Perceptual memory and learning: Recognizing, categorizing, and relating." Proc. Developmental Robotics AAAI Spring Symp. 2005.
  33. Shastri, L. 2002. Episodic memory and cortico-hippocampal interactions. Trends in Cognitive Sciences
  34. Kanerva, Pentti. Sparse distributed memory. MIT press, 1988.
  35. "Implicit Learning and Consciousness: An Empirical, Philosophical and Computational Consensus in the Making" (in en). https://www.routledge.com/Implicit-Learning-and-Consciousness-An-Empirical-Philosophical-and-Computational/Cleeremans-French/p/book/9781138877412. 
  36. Aleksander 1995. 
  37. Baars, Bernard J. (2001). In the theater of consciousness: the workspace of the mind. New York Oxford: Oxford University Press. ISBN 978-0-19-510265-9. 
  38. Franklin, Stan (1998). Artificial minds. A Bradford book (3rd print ed.). Cambridge, Mass.: MIT Press. ISBN 978-0-262-06178-0. 
  39. Franklin, Stan (2003). "IDA: A Conscious Artefact". Machine Consciousness. 
  40. Sun 2002. 
  41. Haikonen, Pentti O. (2003). The cognitive approach to conscious machines. Exeter: Imprint Academic. ISBN 978-0-907845-42-3. 
  42. "Pentti Haikonen's architecture for conscious machines – Raúl Arrabales Moreno" (in en-US). 2019-09-08. https://www.conscious-robots.com/2009/12/10/pentti-haikonens-architecture-for-conscious-machines/. 
  43. Freeman, Walter J. (2000). How brains make up their minds. Maps of the mind. New York Chichester, West Sussex: Columbia University Press. ISBN 978-0-231-12008-1. 
  44. Cotterill, Rodney M J (2003). "CyberChild - A simulation test-bed for consciousness studies". Journal of Consciousness Studies 10 (4–5): 31–45. ISSN 1355-8250. https://orbit.dtu.dk/en/publications/cyberchild-a-simulation-test-bed-for-consciousness-studies. 
  45. Haikonen, Pentti O.; Haikonen, Pentti Olavi Antero (2012). Consciousness and robot sentience. Series on machine consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1. 
  46. Haikonen, Pentti O. (2019). Consciousness and robot sentience. Series on machine consciousness (2nd ed.). Singapore Hackensack, NJ London: World Scientific. ISBN 978-981-12-0504-0. 
  47. Shanahan, Murray (2006). "A cognitive architecture that combines internal simulation with a global workspace". Consciousness and Cognition 15 (2): 433–449. doi:10.1016/j.concog.2005.11.005. ISSN 1053-8100. PMID 16384715. https://pubmed.ncbi.nlm.nih.gov/16384715. 
  48. Haikonen, Pentti O.; Haikonen, Pentti Olavi Antero (2012). "chapter 20". Consciousness and robot sentience. Series on machine consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1. 
  49. "Robot". http://robonable.typepad.jp/robot/2_/index.html. 
  50. "Takeno – Archive No...". http://www.rs.cs.meiji.ac.jp/Takeno_Archive.html. 
  51. The world first self-aware robot and The success of mirror image cognition, Takeno
  52. Takeno, Inaba & Suzuki 2005.
  53. A Robot Succeeds in 100% Mirror Image Cognition, Takeno, 2008
  54. Aleksander I (1996) Impossible Minds: My Neurons, My Consciousness, Imperial College Press ISBN 1-86094-036-6
  55. Wilson, RJ (1998). "review of Impossible Minds". Journal of Consciousness Studies 5 (1): 115–6. 
  56. Thaler, S.L., "Device for the autonomous generation of useful information"
  57. Marupaka, N.; Lyer, L.; Minai, A. (2012). "Connectivity and thought: The influence of semantic network structure in a neurodynamical model of thinking". Neural Networks 32: 147–158. doi:10.1016/j.neunet.2012.02.004. PMID 22397950. http://www.ece.uc.edu/~aminai/papers/marupaka_creativity_NN12.pdf. Retrieved 2015-05-22. 
  58. Roque, R. and Barreira, A. (2011). "O Paradigma da "Máquina de Criatividade" e a Geração de Novidades em um Espaço Conceitual," 3º Seminário Interno de Cognição Artificial – SICA 2011 – FEEC – UNICAMP.
  59. Minati, Gianfranco; Vitiello, Giuseppe (2006). "Mistake Making Machines". Systemics of Emergence: Research and Development. pp. 67–78. doi:10.1007/0-387-28898-8_4. ISBN 978-0-387-28899-4. https://archive.org/details/systemicsemergen00mina. 
  60. Thaler, S. L. (2013) The Creativity Machine Paradigm, Encyclopedia of Creativity, Invention, Innovation, and Entrepreneurship, (ed.) E.G. Carayannis, Springer Science+Business Media
  61. Thaler, S. L. (2011). "The Creativity Machine: Withstanding the Argument from Consciousness," APA Newsletter on Philosophy and Computers
  62. Thaler, S. L. (2014). "Synaptic Perturbation and Consciousness". Int. J. Mach. Conscious 6 (2): 75–107. doi:10.1142/S1793843014400137. 
  63. Thaler, S. L. (1995). ""Virtual Input Phenomena" Within the Death of a Simple Pattern Associator". Neural Networks 8 (1): 55–65. doi:10.1016/0893-6080(94)00065-t. 
  64. Thaler, S. L. (1995). Death of a gedanken creature, Journal of Near-Death Studies, 13(3), Spring 1995
  65. Thaler, S. L. (1996). Is Neuronal Chaos the Source of Stream of Consciousness? In Proceedings of the World Congress on Neural Networks, (WCNN’96), Lawrence Erlbaum, Mawah, NJ.
  66. Mayer, H. A. (2004). A modular neurocontroller for creative mobile autonomous robots learning by temporal difference, Systems, Man and Cybernetics, 2004 IEEE International Conference(Volume:6 )
  67. Graziano, Michael (1 January 2011). "Human consciousness and its relationship to social neuroscience: A novel hypothesis". Cognitive Neuroscience 2 (2): 98–113. doi:10.1080/17588928.2011.565121. PMID 22121395. 
  68. Pavlus, John (11 July 2019). "Curious About Consciousness? Ask the Self-Aware Machines" (in en). https://www.quantamagazine.org/hod-lipson-is-building-self-aware-robots-20190711/. 
  69. Bongard, Josh, Victor Zykov, and Hod Lipson. "Resilient machines through continuous self-modeling." Science 314.5802 (2006): 1118–1121.

Bibliography

Further reading

External links