Here is a twofold review by Andy Clark to appear in Mind and made available by Mirko Farina on his Facebook page. For other critical notices of Adams and Aizawa and Rupert see the Journal of Mind and Behavior (Justin Fisher’s review of A&A, Aizawa’s reply; Rupert’s review of Clark and the soon to appear review of Rupert by Colin Klein). Ever decreasing circles eh?
Combined Critical Notice
The Bounds of Cognition, by Frederick Adams and Kenneth Aizawa. Malden, MA, Blackwell Publishing, 2008. Pp. xiii + 197
Cognitive Systems and the Extended Mind, by Robert D. Rupert. Oxford University Press, New York, 2009. Pp xiii + 268
Much Ado About Cognition
Where in physical space lies the machinery of mind and cognition? Is mind all “in the head” or might parts of the enabling physical apparatus underlying at least some of our mental states and processes be spread across brain, (extra-neural) body, and even (extra-bodily) world? In these two laudably clear, rewarding, and challenging treatments, we are offered reasons to be skeptical.
The central target of each treatment is the radical-sounding claim that the local physical machinery whose whirrings and grindings make minds like ours possible is not (or not always) fully contained by the ancient metabolic boundaries of skin and skull. Such a claim, despite its radical-sounding nature, can be found (in varying degrees, and in one form or another) in a wide variety of recent treatments, spanning philosophy (e.g. Wilson (1994) (2004), Haugeland (1998), Hurley (1998), Clark and Chalmers (1998), Rowlands (1999) (2003) (2006), Noë (2004) (2009), Wheeler (2005), Menary (2007), Sutton (In Press)), psychology (e.g. Thelen and Smith (1995), Spivey et al (2004), Gray et al (2006)), robotics and dynamical systems theory (e.g. Beer (2000)), and beyond (e.g. Hutchins (1995), Malafouris (2008), Roepstorff (2008)). Philosophically, it is to be distinguished from more standard externalist claims (such as those made by Putnam (1975) and Burge (1979) (1986)) about the role of the distal environment in the fixation of content. It is to be distinguished also from any mere modal claim to the effect that the machinery of mind might – perhaps in some strange or alien beings, or even in some future incarnation (transcarnation?) of humanity – be smeared across more than the neural economy. Instead it is the claim, sometimes dubbed ‘the extended mind hypothesis’ following the coinage in Clark and Chalmers (1998), that the physical vehicles (for this usage see Hurley (1998) chapter 8) of some of our actual cognitive states and processes might include more than the usual neural suspects. 
Such cases might include the functioning (subject perhaps to various additional requirements of sufficient degrees of reliability, availability, and agentive trust) of a notebook or an iPhone, where these appear as elements in an integrated web of causal machinery whose combined action enables the guidance and modulation of an agent’s behaviour in some of the ways distinctive of cognitive or mental states. The central example given in Clark and Chalmers (1998) concerned the state of dispositionally believing such and such to be the address of a particular art gallery. That state, Clark and Chalmers maintained, might be realized either by states of the neural apparatus alone or by various combinations of neural and non-neural resources. These could include the biological system plus written entries in a constantly-carried and fluently deployed notebook or (for a more contemporary version) plus entries in the memory of a portable device such as an iPhone. The former, in extremely rough outline, is sometimes referred to as the case of ‘Otto’: a mildly (biologically) memory-impaired agent whose success depends frequently upon such notebook-based encodings.
Adams and Aizawa, in The Bounds of Cognition (henceforth BC), vigorously reject the suggestion that non-neural structures and organization might in this (or indeed any other currently available) way form part of the material base, in us, of any genuinely mental state or process: a view that strikes the authors as both ‘outrageous’ and ‘preposterous’ (BC p.vii). Rupert, in Cognitive Systems and the Extended Mind (henceforth CS), is more circumspect, but argues that “insofar as we trust cognitive science to reveal the location of the human mind and self, cognitive science offers no reason at present to think that these extend into the environment” (CS, p. 12). I shall begin by looking at BC, then use some considerations that emerge from this discussion as a springboard for examining a central theme from CS, eventually wrapping the strands together in a way that reveals (what I take to be) the most important issues that still separate us.
Adams and Aizawa, in their slim but elegant treatment BC, raise three main concerns. The first is that many (though not all) proponents of the extended mind fail to provide a positive account of what makes a process a cognitive process in the first place. Proponents of the extended mind (especially, but not only, Clark and Chalmers) are said to lack a story about what Adams and Aizawa dub the “mark of the cognitive”. The second is that, if (this is, as it turns out, rather a big ‘if’) we adopt their own favoured “mark of the cognitive”, it seems as if non-neural states and processes currently fail to make the grade. The third is that the specific arguments they find in the literature fail (they claim) to make the case, displaying one or both of two basic errors: the error of mistaking (mere) causal coupling for something more profound, and/or that of confusing the cognitive process with the cognitive system, where the latter may include parts and processes that aid and abet cognition, without themselves participating in true cognitive processing. I’ll first comment briefly on each of these argument clusters, before exploring a more fundamental issue that will lead directly into a discussion of Rupert’s rather different angle on this whole debate.
Adams and Aizawa are much exercised by the question, What is the mark of the cognitive? Without an answer to this question, they fear there can be no meaningful debate concerning the where question: the question of the location of the physical machinery that instantiates a cognitive or mental state or process. They suggest, as a plausible ‘mark of the cognitive’, the presence of “non-derived representations governed by idiosyncratic kinds of processes” (BC p.10). The kinds of inscription found in a notebook fail to make the grade on both counts. They involve derived (that is, in some sense humanly assigned) meanings. And they do not behave in the same ways as their in-the-head counterparts (for example, they fail to display various well-known psychological effects, such as the recency effect which systematically favours late entries in a list (BC p.63)).
The issue concerning non-derived representations (Adams and Aizawa also speak here of non-derived contents) is complex (see Clark (2005) (2008) for discussion) but one point to notice is that such contents/representations are indeed present (assuming the idea is coherent, as I now think it is) in the putative overall cognizing system comprising bio-stuff and further resources. So the real question here concerns the acceptability of derived representations or contents as genuine elements in a cognitive process that quite clearly involves many non-derived ones too. And here, I think, our intuitions simply do not rule this out (think of manipulating images of Venn diagrams in the head). Somewhat puzzlingly, Adams and Aizawa seem to accept that “not every component of every state of every cognitive process must bear non-derived content” (BC p.50) while marginalizing its significance, assimilating the case to something like an allowance for items that bear no contents at all, ‘derived or otherwise’ (p.50), viz mere “cognitive punctuation marks” (BC p.50, 55). Without some reason for this restriction, however, the negative impact of the general concession (that not every part of a properly cognitive process need involve all and only non-derived contents) remains (or so it seems to me) profound. Quite mundane bio-external encodings, such as those in a trusty notebook, now seem apt for participation.
What about the other element in the Adams and Aizawa proposed ‘mark of the cognitive’: the one that insists upon governance by the same ‘idiosyncratic kinds of processes’ we find in the head? A major worry is that this now threatens to beg the question (or at least to stack the deck) against the proponent of extended cognition, as (see Wheeler (2005)) it gives immediate priority to in-the-head processes as setting the benchmarks against which the rest are to be judged. This was the error that Clark and Chalmers hoped to avoid by asking not “how similar in idiosyncratic detail is this or that bio-external process to one in the head” but rather “suppose this process were found in the head, governing behaviour as it does, would we then judge it a cognitive process in good standing” (this is a re-working of the so-called, and oft-misunderstood, Parity Principle associated with Clark and Chalmers (1998)). Notice that this is a question we can perfectly well ask ourselves without first adopting any explicit ‘mark of the cognitive’.
Overall, it strikes me as conceptually unsatisfying (not to mention morally suspect) to suppose that the space of genuinely cognitive states and processes is to be limned by appeal to what are, by Adams and Aizawa’s own lights, ‘idiosyncratic features’ of human in-the-head processing. Suppose cat-brains turn out not to display some of these signature features? Should we conclude that cat-memory is not real memory? Here too, Adams and Aizawa are alert (BC p.71-73) to the worry, yet fail (in my view) to find it sufficiently significant. After reviewing some variants found within animal cognition the authors write that:
“These observations suggest a complication in the evaluation of the hypothesis of extended cognition. They suggest that we cannot refute the hypothesis of extended cognition simply on the grounds that the combination of brain, body, and environment does not form a conglomerate that is like a normal human cognitive processor. The combination could have some general, non-human, kind of cognition…that is related to human cognition in only a “family resemblance” kind of way.” (BC p.72).
I quote this passage at length because it also rather clearly illustrates the worry that the appeal to human in-the-head processing threatens to beg the question hereabouts. For the passage only makes sense if, by ‘like a normal human cognitive processor’, we already mean ‘like a normal human in-the-head mechanism’. The challenge that the theorist of extended cognition means to raise, however, is most often to this very identification. To see this, suppose (just for the sake of argument: but see Clark (1989) and Bechtel (1994) for something similar) you were to hold that much of normal human cognition depends on a hybrid cognitive architecture comprising an ‘in-the-head’ connectionist/pattern-completing system operating in fruitful conjunction with various forms of sentential external structure and encoding: ones whose properties and features, acting in concert with those commanded by the biological brain, open up whole new kinds of computational space by anchoring, warping, and morphing trajectories of search and reason (for more on this idea, see Clark (1992), Clark and Thornton (1997), Clark (In Press)). The processing within this extended conglomerate would then deeply characterize certain forms of distinctively human cognizing, rather than demanding the construction of some new, problematically related, class of ‘general cognition’ (as suggested in BC p.73-74).
This leads quite nicely to a consideration of Adams and Aizawa’s critique of the so-called ‘coupling arguments’ for cognitive extension. These are arguments that they take to display the following (problematic) basic form: extended cognitive mechanisms are argued for simply by displaying “a causal connection between the cognizing organism and its environment” (BC p.93). But in general, as the authors rightly insist, no such entailment holds. Bombarding a nucleus (BC p. 101) with neutrons causes nuclear fission. But nuclear fission itself is constituted by the large nucleus being broken into smaller ones. The bombarding neutrons cause this, but they do not help constitute it. This seems exactly right. But proponents of the extended mind, while often insisting on the importance of various forms of causal coupling, should not be committed to the view that any old coupling will do. In fact (see Clark (2008) chapter 5), it is often best to view the coupling considerations as a means not of directly establishing co-constitution so much as establishing ownership. The right kind of dense coupling (e.g. complex reciprocal causal exchange) is surely part of the explanation of why, for example, the right and the left neural hemispheres count – when the coupling takes just the right form – as two components of a larger processing system rather than as two isolated processors.
Which brings us to the final argument cluster from BC that I want to highlight, and the one that builds a direct bridge between BC and CS. This cluster concerns the difference between cognitive processes and cognitive systems. Here, Adams and Aizawa fear a slightly different – perhaps more pernicious because less straightforward – version of the coupling-constitution fallacy. In this version, friends of the extended mind engage in a form of argumentative drift, moving from demonstrations of causal coupling “to the notion of a cognitive system and from there to something like the view that cognitive processes extend beyond the boundary of the head” (p.120). This is the version of the fallacy they find, for example, in Clark and Chalmers (1998). But as they again rightly note “one cannot assume that an X-process pervades every part of an X-system” (BC p.107). Otherwise put, not every part of an X-performing system is part of what we might think of as the performance engine itself. An air-conditioning system may include ducts and duct tape, but these, though essential to its success, are not part of the air-conditioning process itself: they are not the active air-cooling mechanism. Might the various bio-external loops and wiggles be the cognitive equivalent of ducts or duct tape, perhaps helping to form extended cognitive systems in some weak sense, but not in any way extending true cognitive processing beyond the head? Indeed, as the authors also usefully remark, some of what goes on inside the head may well fall on the wrong side of such a divide:
“…it might be that some neuronal pathways in the brain merely transmit or communicate information from one region of the brain to another, without in any way transforming it in ways that count as cognitive. Such neuronal pathways would be mere buses in the electronic digital computer sense of the term. Perhaps the corpus callosum is a mere bus.” (BC p.17)
Mere information transfer, it seems, is not enough to ensure status as a cognitive processing element, as there is (apparently) no processing going on. Putting this together with our earlier discussions, we now begin to glimpse the true underlying thrust of the Adams and Aizawa critique. They start with a vision of what counts as cognitive processing that is in fact driven by what we find going on within processing areas in the human brain. Finding nothing similar (in idiosyncratic detail) going on in the mundane bio-external cases, they conclude that all such mundane bio-external activity (involving public notation systems, notebooks, iPhones etc) must provide only the cognitive equivalent of ducts or duct tape.
The best way to treat the extended mind story, however, is as a plea to explore the interesting space between brain-benchmarked processes (the ones that Adams and Aizawa accept as genuinely ‘cognitive’) and mere cognitive ducting or duct tape. Adams and Aizawa’s own example of the corpus callosum helps demonstrate the existence of such space. For it seems clear that the corpus callosum, even assuming it is acting merely as an information transfer device, does indeed help realize the cognitive architecture responsible for normal human cognizing. For the specific patterns of complex two-way information flow that it enables are crucial to normal human cognizing. The idea that this contribution is in some way cognitively inert seems wrong-headed, even if we agree that what it is doing is not the same kind of active transformation of information or representations as we find going on within areas such as V1, V2 and V4.
Commenting on this difference, Adams and Aizawa go on to suggest that “perhaps the neuronal pathways connecting the cortical visual areas V1, V2, and V4 are mere information buses” (BC p.17): a suggestion that perhaps they too find disconcerting, as they quickly add that “be this logical and nomological possibility what it may, our guess is that there are few, if any, neuronal pathways that are dedicated only to information transfer” (BC p.17). I want to linger on this a little, as I think it will turn out to be deeply revealing. What can it mean for a pathway to be dedicated ‘only to information transfer’? One thing it might mean is that the signal carried does not change as it traverses the pathway. That is, it does not change simply in virtue of traversing the pathway. The pathway itself does nothing to it, just as (for example) a can of coke does not alter in virtue of being routed through a dispensing duct. In that sense, the pathway doesn’t itself process the information (it doesn’t condition any air, to revert to Adams and Aizawa’s analogy). But we now see how very elusive this distinction really is. For it seems clear that that kind of ‘mere transfer’ of signal or information is fully compatible with deep – indeed absolutely profound – information processing significance. For the pattern of information flow that that specific transfer or bussing enforces or enables may be the very thing that allows the overall mechanism to perform the processing that it does.
Thus consider an emerging vision of the neural economy according to which (see Hohwy (2007), Friston (2005) (2009)) the power of cortical processing lies in the patterns of reciprocal connectivity between cortical areas: patterns that mediate a complex dance between bottom-up signals coming from sensory systems and top-down signals that provide predictions of the inputs to lower-level regions. On this model, the basic work of the brain is to try to minimize prediction errors (surprise) by having each area scrabble to modulate its activity (in effect, trying to explain away its own prediction errors) in the light of the activity of the other areas to which it is linked. This simple picture turns out to offer a breathtakingly comprehensive take on neural organization, as well as a unified model of perception, attention, and action. For present purposes, however, what matters is the critical role it offers for patterns of connectivity. Even if we assume (for the sake of argument) that all that some of the connections do is to transmit signals between areas, the power of the overall ensemble depends crucially on these patterns of (‘mere’) transmission. Such observations lead Hohwy to comment that “the answer to how the brain works lies in the nature of the connectivity itself. The nature of the activity in one area is not as important as the effect of that activity in other areas of the brain” (Hohwy (2007) p 319). To suggest that such connections are not proper parts of the cognitive architecture seems absurd. Yet all they are doing is enforcing or enabling causal interactions between the various areas in which the actual processing (at least on Adams and Aizawa’s model) occurs. This, I think, should give cause for concern with Adams and Aizawa’s underlying vision of cognitive processing itself.
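The basic prediction-error-minimization dynamic just described can be caricatured in a few lines of code. What follows is a deliberately toy sketch of my own, not Friston’s or Hohwy’s actual formulation: a higher “area” holds an estimate, sends a top-down prediction through a generative mapping (here, an assumed simple linear one), receives back a bottom-up prediction-error signal, and nudges its estimate to explain that error away. The point to notice is that the connections themselves only carry predictions one way and errors the other, yet that pattern of flow is what makes the whole loop settle on the right answer.

```python
# Toy two-level predictive-coding loop (illustrative sketch only; the
# generative mapping g and all parameters are invented for this example).
# A higher area holds an estimate mu; a lower area receives sensory
# input s. The higher area sends the top-down prediction g(mu); the
# lower area sends back the prediction error (s - g(mu)); the higher
# area adjusts mu by gradient descent on the squared error.

def g(mu):
    # Top-down generative mapping: how the higher area predicts the input.
    return 2.0 * mu  # assume a simple linear mapping for illustration

s = 4.0    # bottom-up sensory input
mu = 0.0   # higher-level estimate, initially ignorant
lr = 0.05  # step size for error minimization

for _ in range(200):
    error = s - g(mu)       # prediction error carried up the connection
    mu += lr * 2.0 * error  # scaled gradient step on (s - g(mu))**2

print(round(mu, 3))  # settles near 2.0, since g(2.0) = 4.0 = s
```

Nothing in either “area” is doing anything exotic; the explanatory work is done by the reciprocal pattern of connectivity, which is the moral drawn in the passage above.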
What it suggests is that the distinction between processes, and indeed parts or components of processes, that actively transform information and ones that do not is itself far from simple. I suspect this is because the notion of transformation is itself defined relative to some assumed notion of a containing system, so that circuits that from one perspective appear to merely bus information back and forth may, from another perspective, appear as critical nodes (players) in more widely distributed processes of change and mutual modulation. Such nodes may not themselves process information in the rather localist way Adams and Aizawa seem to imagine, but the patterns of information flow they enable nonetheless help constitute the cognitive architecture, empowering it in a way that seems to elude Adams and Aizawa’s net. The ‘mere’ enabling of patterns of causal commerce looks, in these cases, to be part of what literally constitutes the presence of one cognitive architecture rather than another.
It is a rather simple matter to take this picture and apply it to the wider realm of bio-external loops and connections. Such loops and connections allow inner resources and outer resources (for example, the structured physical encodings of sentences, numerals, etc supported by various external media) to enter into mutually modulatory commerce. It is quite conceivable, then, that establishing appropriate patterns of causal commerce with certain mundane bio-external structures might under some circumstances transform our own effective cognitive architecture. Some of us (Dennett (1991), Hutchins (1995), Clark (1992) (2003), and Donald (1991) spring to mind) think that this is indeed the case, and that it holds at least part of the key to understanding what is distinctive about human mind and reason. Human minds, if this view is correct, may be physically realized by essentially hybrid cognitive architectures, in that much of what is most distinctively human (for example, our abilities to create and reflect upon advanced mathematical ideas, or to build and deploy concepts that reach far beyond the realms of the visible and the directly encounterable) turns out to depend on the way information flow is enabled and maintained in systems that span brains and various (sometimes designed) portions of the bio-external world.
In the end, it is probably not worth fighting over the labels of ‘mental’ or ‘cognitive’ as applied to the processing that occurs within such external elements considered in themselves, any more than it is worthwhile trying to carve the inner neural story into active processors and ‘mere information buses’. But insofar as such a distinction makes sense, we should be willing to concede that very often there may be no processing going on within the external structures at all. A trusted notebook is not obviously doing any kind of information processing itself. It is a node in a storage and retrieval circuit, a kind of buffer and link in a processing regime defined over brain, body, and world. In those respects the notebook (though presumably not a small computer program working in your iPhone) may indeed look more like a mere passive information conduit than a processor. But even so, the provision of the right conduits (the right links and connections) is, as we saw, often what makes it the case that a certain cognitive architecture (one capable of X, incapable of Y, and so on) is present at all. So to discount circuitry that ‘merely buses’ information from A to B would be a major mistake. It is at this point that we can see, I think, just why Adams and Aizawa rightly identify their own approach more closely with the classical ‘rules and representations’ vision of mind rather than with the kind of connectionist or neural network approach that now pretty much dominates computational psychology and cognitive science. For the connectionist, unlike the classicist (though production systems are an obvious and interesting exception to this rough generalization), is used to thinking of enabled patterns of flow (and hence mutual modulation) as critical aspects of distributed processing regimes.
In exploring the space of possible human cognitive architectures we need not – and surely should not – deny that neurons have special and important properties, nor that specific neural areas (such as V1 and V4) actively process signals in distinctive ways, nor that extra-neural stuff typically, circa 2009, behaves in ways that are unlike the neural stuff along many important dimensions. But nor should we lose sight of the enormous contribution that can be made by very specific patterns of internal or (according to the extended mind story) outward looping causal flow. Confronted with the staggering potency of such contributions, the question of whether what certain connections or bio-external resources do ‘in themselves’ should count as ‘cognitive processing’ seems rather pale and uninteresting. Should we really worry whether or not “cognitive processing pervades [an] entire region of spacetime” (BC p.125)? I’m not convinced (for the reasons just scouted) that this is quite the right question to ask.
We have begun, however, to glimpse the space for a partial reconciliation. Between Adams and Aizawa’s local-brain-process defined notion of ‘cognitive processes’ and their notion of mere cognitive systems (such local processes plus utilitarian webs of ducts and ductwork) lies the interesting space of highly connectivity-sensitive cognitive architectures. This is the space of processing regimes that are hugely empowered by patterns of connectivity between processing areas, and that are thus open to powerful forms of transformation by new channels of causal commerce that loop beyond the brain and body. Such channels may run through one external resource at one time, and another at another. They are not part of the persisting (though not unaltering) set of brain-bound cognitive resources. But when up and running, they may (if the extended mind story is correct) be as deeply integrated into that set as their brain-bound cousins.
It is this important claim that turns out to be most centrally at stake in a second recent engagement with these issues. This is the comprehensive, constructive, and penetrating treatment by Robert Rupert, in his critical treatise Cognitive Systems and the Extended Mind (CS). Rupert, like Adams and Aizawa, believes that we need to start with something like a ‘mark of the cognitive’, viz. some principle of demarcation (see CS p.16) that marks the difference – and thus determines the boundary – between the cognitive system and the rest of the world. But where Adams and Aizawa are impressed by the notion of intrinsic content, and combine an appeal to that with an appeal to the (I think problematic) requirement of relatively fine-grained functional similarity to the active processing that brains do, Rupert settles for something that seems initially rather less demanding. That something is the notion of an integrated, persisting, cognitive system. Thus according to Rupert:
“A state is cognitive if and only if it consists in, or is realized by, the activation of one or more mechanisms that are elements of the integrated set members of which contribute causally and distinctively to the production of cognitive phenomena” (Rupert (2009) p.42)
Rupert thus allows, as far as I can determine, that we have some useable antecedent grip on the very notion of a cognitive phenomenon (as is required if we are, for example, to apply the Parity Principle in the way Clark and Chalmers intended it). But he argues that this alone is insufficient to determine the boundaries of an individual’s cognitive states or processes. This additional step, he suggests, is best made by careful assessments of the degree to which some state or process can be shown to form part of an integrated, persisting set of mechanisms that “contribute to the production of a wide range of cognitive phenomena, across a variety of conditions, appearing as causally relevant elements in various overlapping collections of mechanisms of similar standing” (CS p.44). It is only by appeal to this “integrated cognitive architecture”, Rupert concludes, that we can distinguish between:
“on the one hand, cognition as a natural kind and, on the other, pandemic, only sometimes cognition-related computation, information flow, and dynamical interaction” (CS p.42)
Such a view, Rupert suggests, has the not-insignificant virtue of explaining (and justifying) the massed data, theory, and experimentation of traditional cognitive psychology, while doing so in a way that still allows ample scope to explore and admire the many clever and unexpected ways in which natural cognizers structure and exploit varying environmental opportunities in the service of practical and cognitive success. It thus combines conservatism (with respect to standard cognitive psychological views) with the ability to accommodate a wealth of new and exciting results from areas such as dynamics, robotics, developmental psychology, and developmental systems theory: all the theory and data taken by some (see e.g. Clark (2008) for a partial survey) empirically to motivate talk of extended cognitive systems. In place of the ‘Hypothesis of Extended Cognition’ Rupert thus advances what he previously (Rupert (2004)) dubbed the ‘Hypothesis of Embedded Cognition’, according to which “typical cognitive processes depend, in surprising and complex ways, on the organism’s use of external resources” (CS p. 5).
The basic shape of Rupert’s worry is thus clear. Both sides can arguably accommodate all the empirical data and experiments, and so considerations of conservativeness and fit with the existing cognitive psychological tradition are argued (but see Sprevak (forthcoming) for some useful discussion of this claim) to win the day. The appeal to persisting integrated systems as our objects of study makes sense, it is claimed, of existing practice, accommodates what matters from all the newer perspectives, and thus argues in favour of the embedded conception over the extended one.
One might (not unreasonably, it seems to me) question the key move in this argument, viz, the claim that integrated persisting systems both are and should be the sole targets of cognitive psychological enquiry. It is this move that sets up the associated appeal to conservatism as a kind of trump card, given the availability of two ways to carve the experimental and theoretical cake, one of which is standard-target-preserving and one of which is not. That claim strikes me as both premature (given the emerging spread of work in contemporary cognitive science and cognitive psychology) and in any case unduly restrictive. Why should we not carve our cake in several different ways for varying experimental and theoretical purposes (see Craver (2009) for some useful discussion)? For example, supposing our goal is to understand what a person (a socially and technologically situated entity) can do, we may need to study the class of systems that includes loops through body and world. But if our goal is just to understand what the persisting biological organism alone can do (for example, by way of mathematical reasoning) we might restrict the use of all non-biological props and aids, allowing e.g. counting on fingers but not the use of notepads. Likewise, if it is to discover the stand-alone capacities of the neural apparatus, we might want to impede subjects using their fingers as counting buffers during an experiment. Reaching deeper into the machine, if our goal is to track the contribution of a specific neural sub-structure, we might use TMS (Transcranial Magnetic Stimulation) to activate or de-activate some specific neural region. The invitation is thus to let a thousand flowers bloom. Indeed, the only genuine danger is likely to be that we too easily allow the relentless march of brain science to blind us to the deep computational importance of larger socio-technological circuits.
But suppose, for the sake of argument, we accept at least that part of Rupert's story that gives a special place to the study of mechanisms that form part of the 'persisting integrated cognitive system'. Do things really fall out in quite the way Rupert imagines? I am not convinced. Consider the suggestion (central to the treatments in Clark (2007) and (2008)) that the active machinery of cognition, while being importantly organism-centred, should not be conceived or modeled as organism-bound. The paradigm cases here are processes that are initiated and maintained by neural activity, that are relatively easy to initiate and maintain in the kind of (media-rich, western, first world) environments that we have painstakingly structured, but that loop out through body and world in ways that are essential to what the extended mind theorist would wish to describe as their cognitive role. (For cases that do not depend on such media-rich settings, we could consider the social realm, in which similar loops can be run through other agents, but that raises a host of different issues that I shall not attempt to comment on here.) The case of Otto is perhaps best seen as a rather simple, slow timescale version of such a loop. Real-world cases more centrally in this ballpark include the (putative) role of ongoing sketching, writing, gesture, and body movements as contributors to thought (see Clark (2008) for more on all of these).
A useful toy case, discussed in Clark (2007) and (2008), is that of a simple robot (call it the XOR-Robot) controlled by two perceptrons. The robot is nonetheless enabled, by the devious use and monitoring of its own bodily motions (this example is taken from Paul (2004)), to solve whole open-ended classes of problems that its 'neural' controller is demonstrably incapable, in principle, of solving: ones requiring the computation of non-linearly separable functions. The self-maintained bodily loop here meets, as far as I can determine, Rupert's conditions, as it would be available as a "common element in the production of a wide variety of cognitive phenomena, across a variety of conditions, appearing as causally relevant elements in various overlapping collections of mechanisms of similar standing" (CS p.44, as quoted earlier). Notice also that in these looping cases, a constitutive role for e.g. gesture, sketching, and scribbling is fully compatible with the plausible claim that the gestures, sketches, and scribbles only do their work by both following from, and leading to, specific forms of neural activity. In the XOR-Robot, the bodily twitches both follow from and lead to 'neural' activity. But what counts is that these extra-neural activities now form part of a unified, widely-deployed processing resource: one whose reliable action has been selected (either developmentally or by evolution) for its specifically cognitive virtues.
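The morphological trick is easy to sketch. The following is an illustrative reconstruction, not Paul's (2004) actual robot: each perceptron computes only a linearly separable function (OR, AND), yet the body, by merely summing the mechanical pushes of two opposed actuators, realizes XOR, which no single perceptron can compute.

```python
# Illustrative reconstruction (not Paul's (2004) actual controller):
# neither perceptron below can compute XOR on its own, since XOR is
# not linearly separable, yet the 'body' realizes it for free.

def step(weights, x, theta):
    """A single perceptron: a linear threshold unit."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= theta else 0

def p_or(x):   # fires if either input is on (linearly separable)
    return step((1, 1), x, 1)

def p_and(x):  # fires only if both inputs are on (linearly separable)
    return step((1, 1), x, 2)

def xor_robot(x):
    """One actuator pushes the body forward, the other pushes back;
    the net displacement realizes OR(x) - AND(x) = XOR(x)."""
    displacement = p_or(x) - p_and(x)
    return 1 if displacement > 0 else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", xor_robot(x))   # -> 0, 1, 1, 0
```

The point survives the toy setting: all the non-linearly separable work is done by the bodily loop, which does nothing but add mechanical pushes, yet the system as a whole computes what its 'neural' parts alone cannot.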
Now in one sense, none of this need trouble Rupert. Rupert, unlike Adams and Aizawa with their stress on fine-grained features of the neural regime, is able to allow organismic processes quite unlike neural ones to act as cognitive elements. All such processes have to do is meet the integration conditions, and visibly subserve the production of some cognitive phenomenon. But once he allows this (as I think he must) in the case of the bodily loop in the XOR-Robot, the door is surely wide open to loops that transcend bodily bounds. Scribbling or typing while engaged in neural processing is a case in point. This, in our contemporary engineered society, is a widely available, frequently initiated and maintained loop, one that figures in the production of a vast range of apparently cognitive phenomena.
One might reply by denying that there really are many such cases, or by denying that they are of any great cognitive significance. But neither of these deflationary moves seems warranted. We humans are constantly engineering and re-engineering our worlds so as to provide for more and more of these ‘loopy opportunities’. And we go to great lengths to ensure that the right non-biological materials and circuits are mostly available whenever the relevant cognitive task is encountered. The precise impact of all this on human cognizing remains undetermined (trying harder to determine it is what all that talk of ‘extended minds’ is meant to promote). But potentially, at least, such frequently deployed loops could be helping us to realize whole new forms of cognitive device.
It is worth lining up a few final examples to drive this point home. Thus consider a simple three-layer feedforward connectionist net. Such nets are limited in the kinds of computation they can perform since they lack any form of memory, merely responding (once trained) to the same input in the same way, regardless of its position in any sequence of inputs. But now add a simple feedback loop to the system, turning it into a so-called Simple Recurrent Net (SRN) as described by Elman (1990). All the loop in the SRN does is copy back the state of the hidden units at time T alongside the next input at time T+1. The loop does nothing to the information that travels along it. In that sense, it acts as what Adams and Aizawa earlier described as a 'mere information bus'. Insofar as any information is 'processed', it is processed by e.g. the response and updating functions of the units. Yet SRNs possess all kinds of computational properties (including chaotic behaviours and complex temporal dynamics) that feedforward nets lack. Or consider what happens to certain densely locally connected complex networks when a few long-range recurrent connections are added to the mix. Even a few such additions can turn a merely complex network into a so-called small-world network, with all the special properties that implies (see e.g. Strogatz (2001)).
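Elman's copy-back loop can be made concrete in a few lines. The following minimal forward pass (random weights, illustrative only, no training) shows the loop doing no processing of its own, merely copying the hidden state back as context, while still giving the net memory: the same input, presented at two different positions in a sequence, now evokes two different hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 2, 4
W_ih = rng.normal(size=(n_hid, n_in))    # input -> hidden weights
W_ch = rng.normal(size=(n_hid, n_hid))   # context (copied hidden) -> hidden

def srn_run(seq):
    """Forward pass of a minimal Elman SRN. The recurrent loop is a
    'mere information bus': it copies the hidden state, unaltered,
    back as context; all processing is done by the units themselves."""
    context = np.zeros(n_hid)
    states = []
    for x in seq:
        hidden = np.tanh(W_ih @ x + W_ch @ context)
        context = hidden.copy()          # the copy-back loop
        states.append(hidden)
    return states

same_input = np.array([1.0, 0.0])
s = srn_run([same_input, same_input])
# With the loop in place, the same input yields different hidden
# states depending on its position in the sequence:
print(np.allclose(s[0], s[1]))   # False
```

A pure feedforward net (drop the `W_ch @ context` term) would, by contrast, map the identical inputs to identical hidden states: the transformative power lies in the loop's reliable presence, not in anything the loop does to the information it carries.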
Perhaps then, what look from the standard perspective to be simple uses of environmental structure to store or transport information are actually transformative of the effective architecture itself? Were such transformations effected by a human organism's reliably available, frequently employed practices (such as scribbling and sketching), those key biology-transcending loops would seem to meet all of Rupert's requirements. It is at this point that the complex weave of critical issues so powerfully raised by the two treatments finally converges. For each critical engagement leads us to the same broad response, centering on the possible role of complex, reliable, organism-centered (but organism-transcending) loops in the creation and maintenance of new forms of cognitive device, or even perhaps whole new forms of cognitive architecture.
Rupert will surely reply, once again, that we can explain all we need to explain here using the standard apparatus: giving the loops their due, but not as elements of the integrated cognitive system. But this reply seems increasingly unmotivated. Given the potency and ubiquitous availability of the key loops (especially their availability whenever certain kinds of problem-solving are at issue, as in long-range planning, drawing, book writing, advanced mathematics, etc.) there seems no remaining reason to treat the inner, biological parts of the loop as belonging to the cognizing engine and the external, non-biological parts as not.
All of which leaves us in a delicate but interesting place. All parties can, it seems, agree that such organism-transcending loops are potent and that their potency is (to seek some non-question-begging phrase) cognitively enabling. All parties can agree that the fine detail (indeed, even the coarse detail) of these loopy contributions is distinct from that of the neural system. All parties should, I think, allow that at least some cases of self-maintained 'loopy' processing can meet plausible versions of the requirements of systemic integration. Where the parties seem to differ, then, is in whether they then reserve the term 'cognitive processing' for only the inner (either the neural, if you are Adams/Aizawa, or the organismic, if you are Rupert) parts of the loop, or simply treat the whole loop, when it appears, as itself a cognitive device. Adams and Aizawa, as we saw, do not accept the mere bussing of information as 'cognitive processing', so they are at liberty to appeal to that to carve the cake at the neural boundaries. But in so doing, they force us to treat critical elements of the inner neural organization as 'non-cognitive' too. This seems too high a price to pay. Rupert is much better placed to accommodate the loops as potential realizers of cognitive processes, perhaps grading them according to their degree of reliable and recurrent availability and use over time as a systemic resource (one could do this using the statistical apparatus on offer in CS section 2.2.2). To preserve his skeptical conclusion he must either deny their ubiquity, their degree of genuine integration, or their importance. None of these moves seems promising. Absent some further considerations, I conclude that the case against extended cognition remains unproven.
In fact, I would go a little further. In the light of this discussion it seems ad hoc to label some parts of what all agree to be ubiquitous, self-maintained, cognition-enhancing loops as elements of 'human cognitive processing' and other parts as not. Or at least, that seems ad hoc unless you already assume (what was, after all, the point at issue) that human cognitive processing is to be defined by the neural (Adams and Aizawa) or by the organismic (Rupert) contribution in the first place. To their credit, neither Rupert nor Adams and Aizawa beg the question in this kind of way. Each offered principles of demarcation that were at least potentially independent of matters of material or location. In the former case, we should (or so I have argued) reject the principles. In the latter case, even were we to accept the principle, it does not seem readily to rule out a wide range of cases of putatively extended cognizing.
There is, of course, much more in both BC and CS than I have been able to engage with here. But even this selective sampling reveals both treatments as important, insightful, and immensely fertile contributions to the emerging literature in this area. They faithfully capture much of the debate so far, and push it significantly further, drawing our attention to important underlying issues that have yet to be properly appreciated and explored. What is the actual business of the brain, and how should we relate this to the business of the mind? Is there really a distinctive common thread that, as it happens, gathers together (in us humans) all and only the work done by the brain (Adams and Aizawa) or perhaps by the organism (Rupert), neatly carving off the contributions of all extra-neural (Adams and Aizawa) or all bio-external (Rupert) loops and processes? Even if there is such a common thread, does it really imply a perfectly matching limit on the architecture of the mind? What work, if any, does the notion of the mental or the cognitive actually do in the sciences of mind? These questions sound easy, but they are not. It is a major virtue of these state-of-the-art treatments that they bring such puzzles so firmly to the fore. Now all we need to do is solve them.
School of Philosophy, Psychology, and Language Sciences
University of Edinburgh
Bechtel, W (1994). Natural Deduction in Connectionist Systems. Synthese 101(3): 433-463
Beer, R (2000). ‘Dynamical approaches to cognitive science’. Trends in Cognitive Sciences 4(3):91-99
Burge, T (1979) “Individualism and the Mental”, in P. French, T. Uehling Jr., and H. Wettstein (eds.), Midwest Studies in Philosophy, vol.4, Metaphysics. Minneapolis, MN: University of Minnesota Press.
Burge, T (1986) “Individualism and Psychology”, Philosophical Review 95: 3-45.
Clark, A (1989) Microcognition: Philosophy, Cognitive Science and Parallel Distributed Processing (MIT Press/Bradford Books, Cambridge, MA)
Clark, A (1992) “The Presence of a Symbol” Connection Science 4(3/4): 193-205. Reprinted in J. Haugeland (ed) Mind Design II (MIT Press, 1997) p. 377-393
Clark, A (2003) Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence (Oxford University Press, NY)
Clark, A (2005) ‘Intrinsic Content, Active Memory, and the Extended Mind’. Analysis 65:1 January 2005 p.1-11
Clark, A (2007) “Curing Cognitive Hiccups: A Defense of the Extended Mind” Journal of Philosophy CIV(4): 163-192
Clark, A (2008) Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press, NY
Clark, A (In Press) “Minds in Space” in E. van der Zee (ed) Language And Space (Oxford University Press: NY)
Clark, A and Chalmers, D. (1998). “The Extended Mind.” Analysis 58(1): 7-19. Reprinted in P. Grim (ed) The Philosopher’s Annual, vol XXI, 1998. Also reprinted in D. Chalmers (ed) Philosophy of Mind: Classical and Contemporary Readings (Oxford University Press, 2002).
Clark, A and Thornton, C (1997) Trading Spaces: Computation, Representation, and the Limits of Uninformed Learning. Behavioral and Brain Sciences 20: 57-90
Craver, C (2009) Mechanisms and natural kinds. Philosophical Psychology 22(5): 575-594.
Dennett, D (1991) Consciousness Explained (Little Brown, Boston)
Donald, M (1991) Origins of the Modern Mind (Harvard University Press, Cambridge, MA)
Elman, J.L. (1990). “Finding Structure in Time”. Cognitive Science 14: p. 179–211
Friston, K (2005). “A theory of cortical responses.” Philosophical Transactions of the Royal Society B: Biological Sciences 360(1456): 815-836.
Friston, K (2009) The free-energy principle: a rough guide to the brain? Trends in Cognitive Sciences 13(7): 293-301
Gray, W. D., Sims, C. R., Fu, W.-T., & Schoelles, M. J. (2006). The soft constraints hypothesis: A rational analysis approach to resource allocation for interactive behavior. Psychological Review 113(3) 461-482.
Haugeland, J (1998) “Mind Embodied and Embedded” in J. Haugeland, Having Thought: Essays in the Metaphysics of Mind (Harvard University Press, Cambridge, MA) p. 207-240 (originally appeared in Acta Philosophica Fennica 58 (1995) p. 233-267, a special issue on Mind and Cognition edited by L. Haaparanta and S. Heinamaa).
Hohwy, J. (2007) Functional integration and the mind. Synthese 159(3): 315-328
Hurley, S. (1998) Consciousness in Action. (Cambridge, MA: Harvard)
Hutchins, E (1995) Cognition In The Wild (MIT Press: Camb. MA)
Malafouris, L (2008) “Between brains, bodies and things: tectonoetic awareness and the extended self.” Philos Trans R Soc Lond B Biol Sci 363(1499): 1993-2002
Menary, R (2007) Cognitive Integration: Attacking The Bounds of Cognition. Palgrave Macmillan
Noë, A. (2004) Action in Perception. Cambridge, MA: The MIT Press.
Noë, A (2009) Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness Farrar, Straus and Giroux: NY
Paul, C. (2004) “Morphology and Computation” in Stefan Schaal, Auke Jan Ijspeert, Aude Billard, Sethu Vijayakumar, John Hallam , and Jean-Arcady Meyer (eds) From Animals to Animats, Proceedings of the 8th International Conference on the Simulation of Adaptive Behaviour, Los Angeles, CA, USA, 2004 (MIT Press, Camb. MA) pp 33-38
Putnam, H (1975) “The Meaning of ‘Meaning’”, in K. Gunderson (ed.), Language, Mind and Knowledge. Minneapolis, MN: University of Minnesota Press. Reprinted in H. Putnam, Mind, Language, and Reality: Philosophical Papers vol. 2. New York: Cambridge University Press.
Roepstorff, A (2008) Things to think with: words and objects as material symbols. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences 363(1499): 2049-54
Rowlands, M (1999) The Body in Mind: Understanding Cognitive Processes (Cambridge University Press, Cambridge, UK)
Rowlands, M (2003) Externalism: Putting Mind and World Back Together Again, Acumen/McGill-Queen’s University Press
Rowlands, M (2006) Body Language: Representing in Action, MIT Press
Rupert, R (2004) Challenges to the Hypothesis of Extended Cognition. Journal of Philosophy 101(8): 389-428
Spivey, M., Richardson, D., & Fitneva, S. (2004). Thinking outside the brain: Spatial indices to linguistic and visual information. In J. Henderson and F. Ferreira (Eds.), The Interface of Vision Language and Action. New York: Psychology Press
Sprevak, M (forthcoming) Inference to the hypothesis of extended cognition. Studies in History and Philosophy of Science
Strogatz SH (2001) Exploring complex networks. Nature 410: 268–277
Sutton, J (in press) ‘Exograms and Interdisciplinarity: history, the extended mind, and the civilizing process’, in Richard Menary (ed), The Extended Mind (MIT Press, 2010).
Thelen, E and Smith, L (1994) A Dynamic Systems Approach To The Development Of Cognition And Action (MIT Press, Camb. MA)
Wheeler, M (2005) Reconstructing the Cognitive World (MIT Press, Camb. MA)
Wilson, R. A. (1994) “Wide Computationalism” Mind 103:351-372
Wilson, R. A. (2004) Boundaries of the Mind: The Individual in the Fragile Sciences–Cognition (Cambridge University Press, Cambridge, UK)