
Evolution as the Architect: Bringing the Mind Into Being


By David Puzak

(York University)


Edited by Will Coddington


I

In the early days of artificial intelligence, symbolic internal representations filled the space between perception and action. These representations relied on formal logics whose rule-based propositions were used to construct behaviour. As research progressed, however, scientists began to recognize the critical role of the body and its interaction with a dynamic environment, leading to the realization that to "build a system that is intelligent it is necessary to have its representations grounded in the physical world" (Brooks, 1990, p. 5). Intelligence is here redefined as the capacity to act autonomously in an environment rather than to be limited by one's own inner representations. Brooks' point was that intelligence conceived as a sense-think-act process is incompatible with truly adaptive behaviour, and that taking thinking out of the equation would yield a more accurate model of cognition. This insight threatened the role of internal representations in explaining what human intelligence truly consists of.

The shift away from symbolic representation was the key turning point in helping researchers realize that true intelligence may be founded not only in logical reasoning and computation but in adaptive behaviours that promote survival within a specific environment. Brooks underscores the importance of this: "mobility, acute vision and the ability to carry out survival related tasks in a dynamic environment provide a necessary basis for the development of true intelligence" (Brooks, 1991, p. 140). The very act of performing survival-related tasks in an ever-changing environment is a far more accurate basis for understanding intelligence. In redefining intelligence this way, we are in a sense mimicking the process by which evolution came to shape what we see as higher cognitive processes. By examining how evolutionary pressures shaped the brain and body through complex environmental interactions, we begin to see the emergence of the mind as an entity fit to solve problems and maximize its efficiency within a larger dynamical system. In doing so, we become better equipped to reorient our models of cognition toward more accurate accounts, aiding our construction of artificial intelligence.

It is better, then, to take a bottom-up approach to studying artificial intelligence than a top-down one. A top-down approach assumes that our cognitive processes are the products of an engine of reason; a bottom-up approach makes no such assumption. One specific bottom-up approach, drawn from Brooks' work in robotics, is subsumption architecture, which builds complex and intelligent behaviour out of layers of simpler components (sketched below). I will make the case that throughout evolution the human brain has had to solve domain-specific adaptive problems, and that the continual refinements guided by these problems have made small, cumulative improvements to the brain's architecture. Over time, these solutions have combined to give rise to the higher cognitive faculties of reasoning.
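Before proceeding, it may help to make subsumption architecture concrete. The following is a minimal sketch in Python under loose assumptions: the behaviour names (avoid, forage, wander) and the Sensors fields are hypothetical illustrations, and the fixed-priority loop simplifies Brooks' actual wiring, in which concurrently running layers suppress and inhibit one another. What it preserves is the core idea: each layer couples sensing directly to action, and complex behaviour emerges from the layering rather than from a central reasoner.

```python
# Minimal sketch of a subsumption-style layered controller.
# Behaviour names and Sensors fields are illustrative assumptions, not
# Brooks' implementation; real subsumption uses suppression/inhibition
# links between concurrently running layers, simplified here to a
# fixed-priority loop.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Sensors:
    obstacle_ahead: bool = False
    food_visible: bool = False

def avoid(s: Sensors) -> Optional[str]:
    # Layer 0: a reactive survival reflex; no world model is consulted.
    return "turn_away" if s.obstacle_ahead else None

def forage(s: Sensors) -> Optional[str]:
    # Layer 1: pursue food only when it is directly perceived.
    return "approach_food" if s.food_visible else None

def wander(s: Sensors) -> Optional[str]:
    # Layer 2: default competence that keeps the agent moving.
    return "move_forward"

# Ordered from highest to lowest precedence: the survival reflex
# subsumes (pre-empts) the layers beneath it.
LAYERS: List[Callable[[Sensors], Optional[str]]] = [avoid, forage, wander]

def act(s: Sensors) -> str:
    # Map raw sensing straight to action; no central planner mediates.
    for layer in LAYERS:
        action = layer(s)
        if action is not None:
            return action
    return "idle"

if __name__ == "__main__":
    print(act(Sensors(obstacle_ahead=True, food_visible=True)))  # turn_away
    print(act(Sensors(food_visible=True)))                       # approach_food
    print(act(Sensors()))                                        # move_forward
```

Note that no layer builds or consults a model of the world; the arbitration among simple reflexes alone determines what the agent does at each moment.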


II


Each stage of human evolution has found solutions to domain-specific problems, slowly tinkered with by selection to provide us with the cognitive apparatus we currently possess. Although this has been an extremely lengthy process, it has allowed human beings to acquire higher cognitive faculties such as reasoning. The parallels between the domain-specific adaptations we have gained from evolution and the subsumption architecture we have created in robotics are worth investigating, primarily because of the value the process of evolution can impart to our endeavour of creating accurate models of human cognition. The domain-specific adaptations discussed in this paper all relate to the development of a specific cognitive faculty, and they help explain how small improvements have gradually accumulated into complexity.

A growing body of evidence and research supports the idea that body, mind, and environment form a complex but deeply intertwined causal system. One particularly interesting account concerns how the switch from quadrupedal to bipedal locomotion in early hominids influenced the size of our inner-ear balance organs, making us sensitive to vertical movement. This strengthens the embodiment thesis by reinforcing the fact that "changes in behaviour over evolutionary time are associated with coordinated changes in both the periphery and the nervous system" (Chiel & Beer, 1997, p. 554). These peripheral changes modified the architecture of the brain, in turn creating new layers of cognitive processing that could have led to such beneficial adaptive behaviours as navigating rugged terrain, avoiding predators, enhancing visual perception, and further augmenting our problem-solving abilities. It is worth noting that an organism does not select these adaptive solutions electively; no forethought on the part of the organism plays any role in gaining new layers of cognition, which are merely a product of agent-environment interaction. This supports Brooks' assertion that "low level simple activities can instill an organism with reactions to dangerous or important changes in its environment without complex representations and the need to reason about them" (Brooks, 1991, p. 6). As a result of the changes brought on by bipedal movement, humans gained better control over their environment through an increased cognitive grasp of their surroundings.

Classical artificial intelligence research can be seen to have successfully reverse-engineered the mathematical processes of human cognition, but how can domain-specific adaptations account for their emergence? The idea that mathematics emerged purely a priori in the mind is generally held by those who assume that our cognitive processes are predominantly the products of an engine of reason. Lakoff and Núñez, however, offer an account of the development of mathematical thought founded in environmental interactions, steering us toward the idea that mathematical processes are based on representations grounded in the external environment. Some of the domain-specific problems that needed solutions would have related directly to the perception of the phenomena we were sensing.
For example, in collecting objects such as the stones needed for tools, humans would have needed some way of keeping track of sums and differences in order to economize on the materials used to manipulate their environment (Lakoff & Núñez, 2001, p. 64). This primitive behaviour preceded the more complex mathematical concepts we see today; it was one among many incremental processes, provided by evolution, that led to more complex problem-solving abilities. The emergence of mathematical thought was a process that included interactions among the human brain, body, and environment. The continuous feedback provided by domain-specific problems in the environment led to a simultaneous increase in brain size and, as a result, enhanced cognitive abilities (Chiel & Beer, 1997, p. 554).

Language, too, is commonly treated as strictly representational within cognition, but embodiment research has cast it in a new light as the product of interactions between an organism and its environment. What approaches to explaining language can we investigate to better inform our modelling of cognition? It has been a widely held assumption that language is a representational vehicle in which beliefs are carried. However, eliminativist arguments contend that no positive correlate between beliefs and neural processes has been found (Churchland, 1981). If this continues to hold and empirical evidence counts against vindicating beliefs, proponents of the mind as an engine of reason, such as Fodor with his Language of Thought hypothesis, would be wise to investigate other avenues. Andrews and Radenovic (2010) explain:

"If the antirepresentationalists are correct, there is no categorical definition of belief, that is, no definition that would tie belief to language, concepts, content, or representations. Rather, 'belief' is a cluster term that includes dispositional stereotypes and patterns of behavioural and affective responses that can be analyzed only in terms of an interaction between the organism and the environment. Further, belief is not binary; there are degrees of belief, and there is belief relative to a context." (p. 41)

Beliefs are generally seen as internal representations that cause behaviour, a view held firmly by thinkers such as Fodor. But how does this bear on our ability to recreate the linguistic capacities of humans in artificial life? Language has always been a challenge for artificial intelligence, but by approaching it from an embodied perspective and examining the behaviour of an organism in its environmental interactions, rather than reverse engineering it with a top-down approach as we have done so far, we could gain a much clearer picture of the processes that underlie it. Further, investigating the nature of dispositional stereotypes, and how linguistic ability emerged by way of domain-specific adaptive responses, might help reorient our cognitive models of language.

More evidence that human cognition, in its infancy, did not rely on inner representations to generate and guide behaviour lies in experiments on change blindness. The main premise behind change blindness is that because we do not carry rich internal representations, we are prone to miss many details of our environment. Humans as situated organisms take information from their environment as needed, a strategy the toy sketch below caricatures.
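The following sketch, with entirely hypothetical names, contrasts an agent that stores an internal snapshot of its surroundings with a situated agent that simply re-senses the world whenever it needs a fact. When the environment changes, only the latter stays correct: the world serves as its own best model.

```python
# Toy contrast (all names hypothetical) between an agent that keeps an
# internal snapshot of the world and a situated agent that re-queries
# the environment on demand instead of storing a model of it.
world = {"berries_at_ridge": True}

class ModelKeepingAgent:
    def __init__(self, snapshot: dict) -> None:
        self.model = dict(snapshot)  # internal representation, fixed at creation

    def berries_at_ridge(self) -> bool:
        return self.model["berries_at_ridge"]  # may silently go stale

class SituatedAgent:
    def berries_at_ridge(self) -> bool:
        return world["berries_at_ridge"]  # look again at the world itself

modeler = ModelKeepingAgent(world)
situated = SituatedAgent()
world["berries_at_ridge"] = False   # the environment changes after the snapshot
print(modeler.berries_at_ridge())   # True  -- the stored map is now wrong
print(situated.berries_at_ridge())  # False -- fresh sensing tracks the change
```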
It would not have suited early hominids adaptively to carry internal maps of their terrain, since such maps would not have helped them maximize their problem-solving abilities when encountering a myriad of novel stimuli. The change blindness experiments provide us with another fact: an organism can be limited by its physiology. The anatomy of the eye affords humans only a very small presentation of the environment; the fovea, the area of sharpest central vision, is tiny. To compensate, evolution has provided the human visual system with extremely fast eye movements called saccades. Saccades help humans maximize cognitive efficiency by letting them quickly parse their environment across wide horizontal and vertical planes. These saccades, themselves a domain-specific adaptation, have likely had a direct impact on the development of the brain's visual centres and have changed how humans attend to their environment (Kalat, 2009, p. 242). Recall the similar account given earlier of the relationship between bipedalism and cognitive functioning. Again, this exemplifies a pre-existing 'organ' being constantly modified in step with the environment and our cognition. The fovea is a structure that exists in its own right, but as a result of environmental pressures an adaptation had to occur for humans to maximize their cognitive efficiency.

Another field, grounded in evolutionary psychology, with potential to assist us in creating accurate bottom-up models of cognition is the neuroscience of reasoning. How can a bottom-up approach explain seemingly complex behaviour such as social reasoning? Examining the evolutionary pressures that shaped this domain-specific adaptation lets us see what purpose reasoning served in coming into being. One hypothesis for the development of social reasoning is the Machiavellian approach, which assumes it was advantageous to gain the ability to socially manipulate others in a group for individual benefit (Atkinson, 2003, p. 15). As early human social groups grew in complexity, so did the interactions within them. Studies support the claim that this pressure for social adaptation in larger groups led to marked changes in brain size: "average social group size and neo-cortex size are positively correlated across species: the bigger the social groupings, the bigger is the neo-cortex relative to the rest of the brain" (Atkinson, 2003, p. 16). This suggests that a specific pressure selected for a subsystem of the brain that increased an aspect of intelligence. When we know which subsystems underlie a process such as reasoning, it becomes easier to understand that process's place in the whole system and how all the subsystems came together to create its function. This can positively impact the creation of bottom-up models in artificial intelligence.

The views and ideas shared in the preceding paragraphs all express what is termed neuroconstructivism. Research in this field generally looks for empirical evidence in dynamical systems in an effort to move beyond the limiting burden of internal representations.
This improves the prospect of soon finding new theoretical bases for cognitive processes that, to this day, have rested on representation-heavy, top-down models of cognition.


III


The limitations of this paper revolve around whether bottom-up processes such as those used in subsumption architecture can account for more complex cognition. The answer is most likely yes: given time, and as technological limitations recede, the theories we develop in areas such as neuroscience and evolutionary psychology should become applicable as models within subsumption architecture in robotics. Throughout human evolution, humans did not have to rely on forms of internal representation; in turn, internal representation need not be a requirement in our creation of autonomous agents either. Given the chance, complexity can emerge through agent-environment interactions by way of purposefully designed domain-specific adaptations (Brooks, 1990, p. 3).

Many opt for a hybrid approach to modelling cognition, combining top-down and bottom-up methods in an attempt to capture the full picture of human cognition. Though this seems intuitive, it is counterproductive: top-down approaches are limited because reverse engineering a system does not reveal the purpose each subsystem served in creating the whole. Evolution required millions of years of tinkering; if we are going to build a cognitive system from the ground up based on subsumption architecture, we can expect some delay in our current theoretical models. Classical AI used a top-down approach only because it was more convenient and because it modelled cognitive faculties that were relatively recent evolutionary gains. For example, one top-down approach to modelling language was embodied in a software program called DECtalk. DECtalk treated language only as tokens of internal representation and fell short of capturing what the words meant within the context of a conversation (Clark, 2001, pp. 63-64). As a result, top-down approaches to language have still not succeeded in exhibiting anything near human linguistic flexibility.

Another limitation concerns how the role of memory might relate to representation. Do memories count as forms of internal representation? Even if they do, memory can be conceptualized as a link between the internal and the external. Developing memory was an adaptive benefit: highly developed spatial memory skills, for example, help us locate ourselves in a particular place and time and help us navigate terrain (Kalat, 2009, p. 384). This allows for strategic manipulation of the environment, including hunting and catching prey, remembering harmful foods, and avoiding animals that pose a threat to our survival. There is considerable evidence that memory is not really a form of representation in our minds. On the physical symbol system hypothesis, a representation is something static and unchanging, such that when formal rules are applied to it, it produces meaningful and logical output (Clark, 2001, p. 28). Memory as it occurs in humans does not follow this pattern. The change blindness studies confirm this by showing how limited we are when it comes to sensing novel input: we have no internal representational correlate that allows us to detect changes quickly; in other words, nothing we sense is compared against anything in memory.
By analogy with subsumption architecture, memory is a number of neural structures working together to create the illusion of discrete information storage. The search for memory traces in neuroscience has so far proved elusive, likely because memory is a global process relying on many different localizations of function. Evolution designed the body around a great multitude of adaptive parts, and these parts come together to form the structure from which function, the mind, ultimately emerges. Without tracing this cascade from simple to complex processes, it will not be possible to reconstruct how higher cognitive faculties emerged. Bottom-up approaches will benefit from advances in technology and research in helping explain complex behaviours, which is why work in robotics, and specifically subsumption architecture, is so valuable to the construction of artificial intelligence. It may be, for example, that reasoning requires the cooperation of many lower-level cognitive functions such as attention and memory; this underlines the importance of the cascading effect of domain-specific adaptations.

The shift away from inner symbols argued for in this paper, through the use of neuroconstructivist views, allows a new perspective on how intelligence can emerge from mindless, non-representational causes. While Brooks' work is based chiefly in robotics, research from fields like evolutionary psychology may help inform and improve progress in creating successful autonomous agents whose representations are physically grounded. One of Brooks' main contentions is that artificial intelligence researchers who still adopt a top-down representational approach disregard how early evolutionary processes shaped cognitive systems, and he reinforces the point with the poor performance of symbol-based robotics in comparison to embodied robotics (Brooks, 1990, p. 3). There is promise in the study of evolutionary processes for creating more accurate models of cognition. By gaining a clearer understanding of how domain-specific adaptations designed the mind through incremental, complex interactions between brain and body, we will better understand how subsumption architecture can account for complex and intelligent behaviours.


Works Cited

Andrews, K., & Radenovic, L. (2010). Confronting language, representation, and belief: A limited defense of mental continuity. In J. Vonk & T. K. Shackelford (Eds.), The Oxford Handbook of Comparative Evolutionary Psychology. Oxford: Oxford University Press.

Atkinson, A. P. (2003). Evolutionary psychology's grain problem and the cognitive neuroscience of reasoning. In Evolution and the Psychology of Thinking: The Debate (pp. 61-99). Hove: Psychology Press.

Chiel, H. J., & Beer, R. D. (1997). The brain has a body: Adaptive behavior emerges from interactions of nervous system, body and environment. Trends in Neurosciences, 20, 553-557.

Brooks, R. A. (1990). Elephants don't play chess. Robotics and Autonomous Systems, 6, 3-15.

Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.

Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. The Journal of Philosophy, 78, 67-90.


Clark, A. (2001). Robots and artificial life. In Mindware: An Introduction to the Philosophy of Cognitive Science (Ch. 6). Oxford: Oxford University Press.

Kalat, J. W. (2009). Biological Psychology (10th ed.). Belmont, CA: Wadsworth.

Lakoff, G., & Núñez, R. E. (2001). Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. New York: Basic Books.

Sirois, S., Spratling, M., Thomas, M. S. C., Westermann, G., Mareschal, D., & Johnson, M. H. (2007). Neuroconstructivism: How the brain constructs cognition. Behavioral and Brain Sciences, 2-33.
