Machine consciousness

Igor Aleksander (2008), Scholarpedia, 3(2):4162. doi:10.4249/scholarpedia.4162

Curator: Igor Aleksander
Machine consciousness refers to attempts by those who design and analyse informational machines to apply their methods to various ways of understanding consciousness and to examine the possible role of consciousness in informational machines.

Origins of machine consciousness

As it is difficult to relate personal conscious sensations to conventional physics and chemistry, machine consciousness is an attempt to understand consciousness using the methods and laws of informational machines. Such machines range from the algorithmic, where an apparent external behaviour leads to an attribution of consciousness, to fine-grained neural systems, where the neural dynamics of conscious experience can be considered by identifying states that have sensation-like characteristics.

Thinking of consciousness as an identifiable property of a well-specified machine was exercising the minds of cyberneticians as long ago as the 1960s (summarised in Nemes, 1969) and of neural network analysts in the early 1990s (Taylor 1992, Aleksander 1992). However, an important event in the history of this topic was a meeting held in 2001 at the Cold Spring Harbor Laboratory (CSHL). Sponsored by the Swartz Foundation (which normally funds scientific meetings on brain studies), it addressed the question 'Could Machines Be Conscious?'. The organisers were neuroscientist Christof Koch, philosopher David Chalmers, computer scientist Rodney Goodman, and roboticist Owen Holland. While there was little agreement on precise definitions of consciousness among the audience of 21, made up of neuroscientists, philosophers and computer scientists, there was agreement on the following proposition: “There is no known law of nature that forbids the existence of subjective feelings in artefacts designed or evolved by humans” (http://www.swartzneuro.org/abstracts/2001_summary.asp). In the years which followed, several lines of research investigating the design of such artefacts came into being.

Early models

Several approaches already existed at the time of the CSHL meeting and these are outlined below.

Global workspace

One of the models of conscious processes that pre-dates the CSHL meeting is that of Bernard Baars (1988 and 1997). The model is based on the supposition that there are many unconscious processes, perceptual or memory-based or both, that compete with one another for entry of their output into a ‘global workspace’ (GW) area of the system. The GW broadcasts the winning entry to all the unconscious processes, influencing their subsequent states. It is this broadcast that, according to Baars, constitutes the conscious event. Entry into the GW is also influenced by the current sensory inputs to the system, so that the process most important or salient with respect to the current sensory state of the organism is likely to enter the GW. Stan Franklin of the University of Memphis (2003) made use of Baars' GW model to design a billeting system for sailors. The needs of an individual sailor and candidate billeting suggestions are among the competing processes, and the broadcast from the GW constitutes the advice given to the user. Franklin explicitly does not claim that this process encompasses an explanation of sensations. He mainly stresses that, functionally, consciousness may be attributed to his working model as, to a user, it appears to carry out a task that normally requires conscious deliberation in a human billeter. Global workspace theory has influenced current work, for example that of Shanahan (2007), who is developing working models of it using spiking neurons.
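The compete-then-broadcast cycle at the heart of the global workspace idea can be sketched in a few lines of code. Everything below – the process names, the salience values and the sensory-boost scheme – is an invented illustration, not Baars' theory or Franklin's actual system:

```python
# Toy sketch of a global workspace cycle: unconscious processes bid for
# access, the winner's content is broadcast back to every process.

class Process:
    def __init__(self, name, salience):
        self.name = name
        self.salience = salience   # how strongly this process bids
        self.received = []         # broadcasts this process has seen

    def bid(self, sensory_boost):
        # current sensory input can raise a process's effective salience
        return self.salience + sensory_boost.get(self.name, 0.0)

def workspace_cycle(processes, sensory_boost):
    # competition: the most salient process enters the global workspace
    winner = max(processes, key=lambda p: p.bid(sensory_boost))
    # broadcast: every process receives the winning content; on Baars'
    # account, it is this broadcast that constitutes the conscious event
    for p in processes:
        p.received.append(winner.name)
    return winner.name

procs = [Process("face_recogniser", 0.4),
         Process("episodic_memory", 0.3),
         Process("pain_signal", 0.2)]

# a salient stimulus lets the otherwise weak pain process win
print(workspace_cycle(procs, {"pain_signal": 0.5}))  # → pain_signal
```

The point of the sketch is the two-phase structure: a winner-take-all competition followed by a system-wide broadcast that alters the state of every competitor.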

Virtual machine functionalism

Arising largely from an article by Sloman and Chrisley (2003), this approach to machine consciousness rests on Dennett's observation that, as a way of resolving overly constraining materialist ties in the study of consciousness, mental states may be considered the result of a virtual machine running on the parallel material of the neural brain. They draw attention to Block’s (1996) contention that a functionalist model is merely a state machine; that is, a mental state inexorably leads to another pre-determined mental state, and this can only be influenced by sensory input. This is a rather restricted view of the richness of mental activity. Sloman and Chrisley point out that this impoverished view may be replaced by a richer one in which a mental state is the product of the states of several interacting machines, sub-states which not only contribute to an overall mental state but can modify each other through the interaction of the machines. Calling this ‘virtual machine functionalism’, they provide a ‘schema’ (COGAFF) for discussing the virtual processes that constitute a functional view of being conscious. They suggest that it is important to consider two principal interacting streams, one which flows from sensation to action and another which flows from reaction to deliberation. They also allow for a global form of altered control such as may arise in an emergency.
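The contrast can be made concrete with a minimal sketch: instead of one transition table driving a single chain of states, two coupled machines each take the other's sub-state into account when computing their own next state. The two machines ("reactive" and "deliberative", echoing the two streams above) and their transition rules are invented purely for illustration:

```python
# Toy contrast with the single-state-machine reading of functionalism:
# the overall 'mental state' is a product of two interacting sub-machines.

def step_coupled(reactive, deliberative):
    # each machine's next state depends on its own state AND the other's,
    # so neither trajectory is pre-determined in isolation
    next_reactive = "alert" if deliberative == "planning" else "idle"
    next_deliberative = "planning" if reactive == "alert" else "resting"
    return next_reactive, next_deliberative

state = ("idle", "planning")
trajectory = [state]
for _ in range(3):
    state = step_coupled(*state)
    trajectory.append(state)
print(trajectory)
```

Even in this tiny example, the joint trajectory emerges from the mutual modification of the two sub-states rather than from a single pre-set chain.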

Phenomenal virtual machines

A partitioned mental state, as advocated under virtual machine functionalism, also appears in Aleksander (2005). Here, some of the sub-states have a phenomenal character through being assumed to be the states of fine-grained neural networks which depict the world and the organism in it. In implementations, such depictions are displayed on computer screens to make explicit the virtual mental state of the machine. To shed some computational light on consciousness, Aleksander argues that it is a compound concept that covers many phenomena. He expresses these as five axioms, for each of which meaningful virtual states of neural computing models may be found. The first is presence, which provides mechanisms for representing the world with the organism in it by incorporating motor signals. The second is imagination, which relates to autonomous state trajectories that may be sustained in the absence of sensory input. The third is attention, which relates to mechanisms that guide the sensors of the organism during perceptual acts (exogenously) and to such guidance during imaginative acts (endogenously). The fourth and fifth axioms deal with the related concepts of planning (exploration in imaginative acts against a volitional state) and emotion as an evaluation of plans.
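The imagination axiom, in particular, has a simple computational reading: once state transitions have been internalised, the system can traverse them with no sensory argument at all. The states and the transition map below are invented for illustration and stand in for what would be learned neural dynamics:

```python
# Toy version of the 'imagination' axiom: an internal transition map
# sustains an autonomous state trajectory in the absence of sensory input.

transitions = {"kitchen": "hallway", "hallway": "garden", "garden": "kitchen"}

def imagine(start, steps):
    # note: no sensory parameter – the trajectory is driven purely by
    # the internal state and the learned transition structure
    state, trajectory = start, [start]
    for _ in range(steps):
        state = transitions[state]
        trajectory.append(state)
    return trajectory

print(imagine("kitchen", 4))
```

In a full model the transition map would itself be acquired through experience (the presence axiom), and endogenous attention would select which internal trajectory to follow.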

Consciousness in robots and systems

Artificial organisms form a useful grounding tool for machine approaches to consciousness. These are discussed next.

Conscious processes as algorithms or control structures

Designers of complex systems both in AI and control structures have pointed to aspects of such design as being helpful in modelling and understanding consciousness. For example, Benjamin Kuipers of the University of Texas (2007) draws attention to algorithms in AI which extract and track specific items of information from what he calls “the firehose of experience”. It is this very process of tracking, possibly of a multiplicity of objects and events, which endows the machine with the capacity to generate a coherent narrative of its consciousness of the world. While admittedly addressing the ‘easy problem’, Kuipers suggests a way of "sneaking up" on the ‘hard problem’ (Chalmers 1996), proposing that computational systems hold representations which correspond to world events and processes, these representations being created through algorithms that learn them. This admittedly appears to leave open the question of why any informational constructs should be felt. Kuipers suggests that whatever "feeling the information" might mean, it is likely to be derived from accurate representations of the world and the ability to act correctly, these being the direct product of appropriate informational processes. In a similar vein, but in the context of control system design, Ricardo Sanz of Madrid University (2007) points out that machinery which is expected to behave correctly in a complex world demands a design complexity that relies increasingly on adaptation and learning. He suggests that there may be something it is like “to be a model-based reflective predictive controller” of a machine with a mission, which is akin to there being something it is like to be a conscious being with a purpose in life.
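The tracking idea Kuipers appeals to can be illustrated with a toy data-association loop: each incoming observation from the "firehose" is attributed to the nearest known object, or a new object is posited, so that a persistent per-object narrative accumulates. The stream, the threshold and the nearest-neighbour rule are all invented for illustration, not Kuipers' algorithms:

```python
# Sketch of tracking as narrative-building: attribute each observation
# to a persistent object, positing a new one when nothing is close.

def assign(trackers, observation, threshold=2.0):
    if trackers:
        # find the existing tracker closest to the observation
        name, pos = min(trackers.items(), key=lambda kv: abs(kv[1] - observation))
        if abs(pos - observation) <= threshold:
            trackers[name] = observation       # update that object's track
            return name
    name = f"object_{len(trackers)}"           # otherwise posit a new object
    trackers[name] = observation
    return name

trackers = {}
stream = [1.0, 10.0, 1.5, 10.4, 2.1]           # two interleaved objects
narrative = [assign(trackers, obs) for obs in stream]
print(narrative)  # each observation attributed to a persistent object
```

The "narrative" here is just the sequence of object identities, but it shows how raw, unlabelled observations become a coherent story about enduring things.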

Consciousness for robots

Owen Holland of Essex University and his colleagues (2007) have the distinction of having been awarded the first major grant [by the Engineering and Physical Sciences Research Council in the UK] to investigate the basic constructionist question: ‘were a robot to be conscious, how would it be designed?’. Holland’s primary principle is to build a human-like skeletal structure ready to engage with the real world, so that it can build an internal virtual model of the world and of its own interaction with it as a fundamental conscious thought. Holland sees machine consciousness as coming in hard and soft varieties, where these terms refer, respectively, to making inroads into an understanding of consciousness, or to using the concept to build more competent machines. His internal model is based on Gerry Hesslow’s notion of consciousness as ‘inner simulation’, developed at the University of Lund (2007). This too has led to a concept-proving design, with Tom Ziemke, of a miniature robot called K. Part of Hesslow and Ziemke’s philosophy is not to ask whether the artefact has or has not qualia, but to note that the question can be asked of the robot with the same legitimacy as it is for humans or other animals.

Resilient and brain-based machines

Another strand has been added to the methodologies of machine consciousness by Bongard et al. (2006) through the construction of resilient robotic organisms. These model their own capacity for locomotion in the world and are able to adapt to deficits in their physical structure. As locomotion is an ancient sign of a conscious internal state, this remodelling activity may be important in the work of those who attempt to build conscious robots. For some years, Edelman and his colleagues (e.g. Krichmar and Edelman, 2003) have been building the Darwin series of robots, not so much to capture consciousness as to check the viability of hypotheses about the functioning of brain architectures. In these machines, adaptation and self-learning are the key consciousness-related features that are studied.
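The self-modelling step behind such resilience can be caricatured as model selection: the machine keeps candidate models of its own body, compares their predictions with what the body actually does, and re-plans with whichever model now fits best. The models and the 'physics' below are invented for illustration and are far simpler than Bongard et al.'s continuous self-modelling:

```python
# Sketch of resilient self-modelling: after damage, pick the self-model
# whose prediction best matches the robot's observed behaviour.

def observed_step(leg_ok, command):
    # ground truth the robot experiences: a damaged leg halves progress
    return command if leg_ok else command * 0.5

def best_model(models, command, observation):
    # self-modelling as model selection: minimise prediction error
    return min(models, key=lambda m: abs(models[m](command) - observation))

models = {"intact": lambda c: c, "damaged_leg": lambda c: c * 0.5}

obs = observed_step(leg_ok=False, command=1.0)   # the robot is damaged
print(best_model(models, 1.0, obs))              # selects 'damaged_leg'
```

Once the better-fitting self-model is selected, subsequent motor planning is done against it, which is the sense in which the machine "adapts to deficits in its physical structure".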

The externalist outlook for robots

Antonio Chella of Palermo University (2007) has designed a robot that draws on two views of consciousness: externalism (as advocated by Riccardo Manzotti of the University of Milan) and the ‘sensorimotor contingency’ expressed by Kevin O’Regan and Alva Noë. Externalism contends that consciousness of the world cannot be studied by constraining it to the brain – if a theory of consciousness is to arise, it must incorporate the ‘entanglement’ of brain and environment. In agreement with this is the sensorimotor contingency view, which maintains that there is little need for internal representation in an organism conscious of its environment – the world as it stands provides sufficient representation. The motor parts of the organism are crucial, and ‘mastery’ of the sensorimotor contingency implies the learning of appropriate action which attends to important features in the world. Consciousness is then a ‘breaking into’ this process. Chella has designed robots that are intended to act as guides to human visitors to a museum. The two theories impinge on this design because there is a tight coupling between robot and environment, the robot having mastery of a sensorimotor contingency to guide its action.

Neural mechanisms

Considerable effort in machine consciousness has been devoted to the properties of neural computational forms; this work concludes the article.

Neural models of cognition

Looking at systems other than robots, a substantial contribution to machine consciousness has been made by the Finnish engineer Pentti Haikonen. Working for the Nokia company in Helsinki, he is developing a neural-network approach that expresses important features of consciousness as signal processes in a multi-channel architecture (2003). He addresses issues such as perception, inner vision, inner speech and emotions. In common with others who approach consciousness as an engineering design process, Haikonen comes to the conclusion that, given a proper decomposition of the concept into its cognitive parts, many mysteries are clarified. In his more recent writing (2007), Haikonen confirms and deepens the notion that his neural structure leads to an informed discussion of meaning and representation. Of course, with the engineering approach the criticism can be advanced that all this only addresses the ‘easy problem’ of the necessary functioning of substrates to create cognitive representations, leaving the ‘hard problem’ of the link to experienced qualia or sensations untouched. The main counter to this argument is currently the ‘virtual’ notion that qualia and sensations may be discussed as virtual concepts that do not depend on links to physical substrates and yet can be addressed in the language of informational systems.

Theory within neurophysiology

While discovering behaviours in the brain itself is not a direct contribution to machine consciousness as discussed in this article, it still plays an important role, both as inspiration for design and in identifying what still needs to be discovered in the brain in order for convincing machine models to be developed. A typical example of the former is the work of John Taylor of King’s College London. He argues for the central role of attention in human consciousness and, consequently, that machine approaches should deal centrally with attention. In Taylor (2007) he describes a system called CODAM (Corollary Discharge of Attention Movement) and argues that ‘bridging’ links may be found between the model and phenomenal consciousness in humans, particularly in terms of transparency, presence, unity, intentionality and perspective. A deeper theoretical approach is suggested by Seth et al. (2006), drawing on the work of Tononi (2004). This introduces a measure of ‘information integration’ (Φ) which treats consciousness as a necessary capacity of a neural network rather than a process. Relevant in these assessments is a measure of causally significant connections in neural networks that can be seen as repositories of experience (Seth 2005). In sum, this informational analysis looks for a multiplicity of measurements which, when taken together, identify the level of complexity that a neural network may need in order to approach the complexity of organisms that are likely to be conscious.
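The flavour of such integration measures can be conveyed with a deliberately simplified quantity: the multi-information (the sum of the parts' entropies minus the joint entropy), which is positive only when the elements of a system are statistically interdependent. This is a toy proxy chosen for illustration, not Tononi's Φ, which additionally involves effective information and a search over system partitions:

```python
# Toy illustration of 'integration': multi-information of a small system,
# estimated from samples of its joint state.

import math
from collections import Counter

def entropy(samples):
    # Shannon entropy (bits) of the empirical distribution of the samples
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def multi_information(joint_samples):
    # joint_samples: list of tuples, one component per element of the system;
    # integration = sum of marginal entropies minus the joint entropy
    parts = list(zip(*joint_samples))
    return sum(entropy(p) for p in parts) - entropy(joint_samples)

independent = [(0, 0), (0, 1), (1, 0), (1, 1)]   # elements uncorrelated
coupled     = [(0, 0), (0, 0), (1, 1), (1, 1)]   # elements always agree

print(multi_information(independent))  # ≈ 0.0: no integration
print(multi_information(coupled))      # ≈ 1.0: one bit shared
```

The independent system carries information only in its parts, while the coupled system carries a bit that belongs to the whole and to no single element – the kind of distinction integration measures are designed to quantify.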

Perspective

Currently (at the very beginning of 2008), machine consciousness has the status of a pragmatic attempt to use methodologies known in information, neuronal and control sciences both to throw light on what it is to be conscious and to aid the design of complex autonomous systems that require a non-living form of consciousness to act correctly in a complex world. In the process of early development are the notions of both functional and phenomenological virtual machines which encourage informational discussions of consciousness in a way that is not limited by coupling to its physical substrate. Machines appear to benefit from this as the acquisition of the necessary skill to operate in highly complex informational worlds is akin to the conscious action of living organisms in similar worlds.

References

  • Aleksander, I. (1992). Capturing consciousness in neural systems. Artificial Neural Networks, 2: Proc. ICANN-92. London: North-Holland, 17-22.
  • Aleksander, I. (2005). The World in My Mind, My Mind in the World: Key Mechanisms of Consciousness in Humans, Animals and Machines. Exeter: Imprint Academic.
  • Baars, B. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
  • Baars, B. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
  • Block, N. (1996). What is functionalism? Revised version of the entry on functionalism in The Encyclopedia of Philosophy Supplement. Macmillan.
  • Bongard, J., Zykov, V. and Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314, 1118.
  • Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
  • Chella, A. (2007). Towards robot conscious perception. In Chella and Manzotti (Eds.), Artificial Consciousness. Exeter: Imprint Academic.
  • Franklin, S. (2003). IDA: a conscious artifact? Journal of Consciousness Studies, 10(4-5), 47-66.
  • Haikonen, P. O. (2003). The Cognitive Approach to Conscious Machines. Exeter: Imprint Academic.
  • Haikonen, P. O. (2007). Robot Brains: Circuits and Systems for Conscious Machines. UK: Wiley & Sons.
  • Hesslow, G. and Jirenhed, D. A. (2007). Must machines be zombies? Internal simulation as a mechanism for machine consciousness. Proc. AAAI Symp. Machine Consciousness and AI, Washington.
  • Holland, O., Knight, R. and Newcombe, R. (2007). The role of the self process in embodied machine consciousness. In Chella and Manzotti (Eds.), Artificial Consciousness. Exeter: Imprint Academic.
  • Krichmar, J. L. and Edelman, G. M. (2003). Brain-based devices: intelligent systems based on principles of the nervous system. Proc. IROS 2003, IEEE/RSJ, Vol. 1, 940-945.
  • Kuipers, B. (2007). Sneaking up on the hard problem of consciousness. Proc. AAAI Symp. Machine Consciousness and AI, Washington.
  • Nemes, T. (1969). Cybernetic Machines. Budapest: Iliffe Books.
  • Sanz, R., Lopez, I. and Bermejo-Alonso, J. (2007). A rationale and vision for machine consciousness in complex controllers. In Chella and Manzotti (Eds.), Artificial Consciousness. Exeter: Imprint Academic.
  • Seth, A. K., Izhikevich, E. M., Reeke, G. N. and Edelman, G. M. (2006). Theories and measures of consciousness: an extended framework. Proc. Natl. Acad. Sci. USA, 103(28), 10799-10804.
  • Seth, A. K. (2005). Causal connectivity analysis of evolved neural networks during behavior. Network: Computation in Neural Systems, 16(1), 35-54.
  • Shanahan, M. (2007). A spiking neuron model of cortical broadcast and competition. Consciousness and Cognition, doi:10.1016/j.concog.2006.12.005.
  • Sloman, A. and Chrisley, R. (2003). Virtual machines and consciousness. Journal of Consciousness Studies, 10(4-5), 133-172.
  • Taylor, J. G. (1992). From single neuron to cognition. Artificial Neural Networks, 2: Proc. ICANN-92. London: North-Holland, 11-15.
  • Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5:42.

Internal references

  • John G. Taylor (2007) CODAM model. Scholarpedia, 2(11):1598.
  • Olaf Sporns (2007) Complexity. Scholarpedia, 2(10):1623.
  • Gregoire Nicolis and Catherine Rouvas-Nicolis (2007) Complex systems. Scholarpedia, 2(11):1473.
  • Mark Aronoff (2007) Language. Scholarpedia, 2(5):3175.


Recommended reading

  • Aleksander, I. (2005). The World in My Mind, My Mind in the World: Key Mechanisms of Consciousness in Humans, Animals and Machines. Exeter: Imprint Academic.
  • Baars, B. J. (1997). In the Theater of Consciousness. Oxford: Oxford University Press.
  • Haikonen, P. O. (2003). The Cognitive Approach to Conscious Machines. Exeter: Imprint Academic.
  • Holland, O. (Ed.) (2003). Machine Consciousness. Special issue of the Journal of Consciousness Studies. Exeter: Imprint Academic.
  • Manzotti, R. and Chella, A. (Eds.) (2007). Artificial Consciousness. Exeter: Imprint Academic.
  • Torrance, S., Clowes, R. and Chrisley, R. (Eds.) (2007). Machine Consciousness: Embodiment and Imagination. Special issue of the Journal of Consciousness Studies. Exeter: Imprint Academic.

See also

Attention and Consciousness, Consciousness, Models of Consciousness, Neurorobotics, Robotics
