julie lee - neuroscience phd student

The Not-So-Empty Brain, or Lessons Against Confusing the IP Metaphor
May 21, 2016

There is a theory in philosophy called the “computational theory of mind” (CTM), which argues that the brain is literally a computer, or more specifically, a “computing system”. Separately, the brain has commonly been likened to a computer or information processor. Psychologist Robert Epstein has attacked this metaphor in a recent essay called “The Empty Brain”. However, the theory Epstein takes issue with is at times the information processing (IP) metaphor (which he names in the article), at times the CTM, and at yet other times some strange straw-chimera. Epstein directly illustrates his conflation of the IP metaphor and the CTM partway through: “The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it”. In one statement, he refers both to the metaphor of the brain as a computer and to the notion of computing on representations, the latter of which is argued by the representational theory of mind (RTM), an umbrella theory closely related to the CTM.

There is certainly value (and fun!) in debating whether or not it is valid to define the brain as a computing system. It is also worth considering the practicality of the IP metaphor. However, for Epstein to present his opponent as the IP metaphor and yet, for the most part, argue an anti-representational case against the representational (i.e. RTM) view is invalid. Further, I will argue that even if the IP metaphor were the same thing as the CTM/RTM, his arguments are flawed.

Epstein starts with one of many fallacious reductiones ad absurdum by describing the human propensity for social connections as supposedly undermining the IP metaphor. He makes several references to newborns’ “learning mechanisms”, an ironic choice of words given that the Oxford dictionary defines “mechanism” as “A system of parts working together in a machine; a piece of machinery” and “The doctrine that all natural phenomena, including life and thought, can be explained with reference to mechanical or chemical processes”. In fact, connectionism, one strand of the CTM, is used precisely to characterise “learning mechanisms”, most often in language acquisition.
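
To make “mechanism” concrete in the connectionist sense: a learning mechanism can be as simple as a rule that adjusts connection weights from error feedback. Below is a minimal, purely illustrative sketch (the task and numbers are hypothetical, not drawn from any model Epstein discusses) of such a rule, in the style of the classic perceptron update.

```python
# A minimal connectionist "learning mechanism": a single unit that adjusts
# its connection weights from error feedback. The task and data are
# hypothetical, purely to illustrate what a weight-update rule looks like.

def train(examples, lr=0.1, epochs=50):
    """examples: list of (input_vector, target) pairs, with targets 0 or 1."""
    n = len(examples[0][0])
    w = [0.0] * n               # connection weights
    b = 0.0                     # bias
    for _ in range(epochs):
        for x, target in examples:
            out = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0
            err = target - out                                # error signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # strengthen/weaken
            b += lr * err
    return w, b

# Toy "acquisition" task: learn to respond whenever the second feature is present.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 0), ([1, 1], 1)]
print(train(data))
```

Nothing here requires denying that the system learns; the point is only that “mechanism” and “learning” sit together quite comfortably.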

This immediate contradiction aside, Epstein states that computers, but not humans, are imbued with “information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers”, and “operate on symbolic representations of the world”. From these two quotes alone, it is evident that Epstein is also arguing against the CTM, since he elaborates on the apparent absurdity of considering the brain a computes-on-representations machine (indeed, he uses the word “representation” ten times in his essay). This is an entirely separate position from the IP metaphor, as stated above. With that in mind, it is still worth considering the evidence Epstein provides in favour of his anti-representational stance.

He first mentions the book “The Computer and the Brain” by John von Neumann. Von Neumann argues that neurons perform digital operations, such as the all-or-nothing principle of neural firing, as well as linear operations like summation. In addition, non-linear dynamics, such as correlations between spikes, emerge from the neural architecture. It is worth a tangent here to note that Tim Van Gelder argues that “representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.” This interpretation dodges the criticism that the brain does not traffic in symbols, even though such criticism is orthogonal to the IP metaphor in the first place. That is, brains can operate on things other than symbols (Epstein’s argument) and still support a “soft” view of the CTM.
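
The digital/linear distinction von Neumann draws can be illustrated with a toy integrate-and-fire unit; all the constants below are arbitrary, chosen only so the example fires, and this is a sketch rather than a model of any real neuron.

```python
# A toy integrate-and-fire unit: membrane potential is a (leaky) running sum
# of inputs (the linear operation), and a spike is emitted only when it
# crosses threshold (the all-or-nothing, "digital" operation).

def simulate(input_current, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v = leak * v + i            # leaky summation of input
        if v >= threshold:          # all-or-nothing firing
            spikes.append(t)
            v = 0.0                 # reset after the spike
    return spikes

print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # -> [3, 6]
```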

Epstein goes back to attacking the “faulty logic of the IP metaphor”:

It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.

Here he whittles the argument down to a logical invalidity, namely: (1) if A, then B; (2) if A, then C; (3) therefore if B, then C (invalid). This appeal to logic is a completely legitimate move. The argument presupposes the validity of (1), i.e. if [X is a computer] then [X behaves intelligently], but the bigger issue is the inference Epstein draws from the rightly characterised logical invalidity of (3). Namely, is the IP metaphor really stating that “all entities that are capable of behaving intelligently are information processors”? Is it even stating that if [capable of behaving intelligently] then [information processor]? No. The IP metaphor has been thrown around in various forms, but none of them would endorse a necessary relationship between intelligent behaviour and information processing (when it comes to humans or otherwise).
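
To make the invalidity explicit, the two premises and the conclusion can be written in first-order terms (the predicate letters are mine, not Epstein’s):

```latex
% C(x): x is a computer, I(x): x behaves intelligently, P(x): x is an information processor
\begin{align*}
\text{Premise 1:}  \quad & \forall x\, \big( C(x) \rightarrow I(x) \big) \\
\text{Premise 2:}  \quad & \forall x\, \big( C(x) \rightarrow P(x) \big) \\
\text{Conclusion:} \quad & \forall x\, \big( I(x) \rightarrow P(x) \big) \quad \text{(does not follow)}
\end{align*}
```

A one-element countermodel suffices: take something that behaves intelligently but is neither a computer nor an information processor; both premises hold vacuously while the conclusion fails. But, as argued above, the IP metaphor does not actually assert that conclusion, so exposing the invalidity attacks a position nobody holds.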

Moving on, Epstein goes back to arguing against the representational view by describing an anecdote in which his student was unable to draw a U.S. dollar bill accurately from memory. However, when she copied it with reference to the physical object, it was drawn perfectly. He presents this as evidence that humans do not store representations in the brain, stating that “a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.” This conclusion makes a straw man of the opponent. First, representation does not necessitate perfect representation (there is, after all, very likely variability in the brain!). Second, representations could be distributed (indeed this is a natural consequence of connectionism!) rather than found in any one place. This latter point undermines Epstein’s subsequent dismissal of the existence of so-called “grandmother cells” (single neurons that could each represent a single thing, such as the concept of your grandmother). Again, Epstein’s argument is irrelevant to the question of the IP metaphor. Put simply, the existence or non-existence of grandmother cells does not affect the validity of the IP metaphor.
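
For what a distributed representation could look like, here is a deliberately tiny sketch (the “patterns” are arbitrary toy vectors, not a claim about how bills or grandmothers are actually encoded): an item is stored as a pattern of activity across many units, no single unit is dedicated to it, and recall from a degraded cue is approximate rather than photographic.

```python
# Sketch of a distributed representation: each item is stored as a pattern of
# activity across many units rather than in one dedicated "grandmother cell",
# and recall from a noisy cue is approximate.

stored = {
    "dollar_bill": [1, 0, 1, 1, 0, 1, 0, 0],
    "grandmother": [0, 1, 0, 0, 1, 1, 1, 0],
}

def recall(noisy_pattern):
    """Return the stored item whose pattern best matches a noisy cue."""
    def overlap(a, b):
        return sum(1 for ai, bi in zip(a, b) if ai == bi)
    return max(stored, key=lambda name: overlap(stored[name], noisy_pattern))

# A degraded cue (two units flipped) still retrieves the right item:
cue = [1, 0, 1, 0, 0, 1, 0, 1]
print(recall(cue))  # -> "dollar_bill"
```

Imperfect recall of the dollar bill is exactly what such a scheme would predict; it is not evidence against representation.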

Another attempted nail in the wrong coffin involves baseball players’ ability to catch a fly ball. Epstein wrongly asserts that “the IP perspective requires” the creation of an internal model of the ball’s trajectory, velocity, and so on prior to the catch, as opposed to a more heuristic explanation in which the fielder keeps the ball’s position essentially constant with respect to the background. This “linear optical trajectory” model is apparently “completely free of computations, representations and algorithms”, despite the original authors of the paper describing the model as an “error-nulling tactic”. Deferring to the OED, one definition of error is “A measure of the estimated difference between the observed or calculated value of a quantity and its true value.” Measures, estimates, and calculations are not, most would agree, “completely free of computations”.
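
The point that even the heuristic account computes something can be made concrete. The toy control loop below is not the published linear-optical-trajectory model, just an illustration of an error-nulling tactic under made-up numbers: the fielder holds the ball’s optical bearing constant by measuring its drift and acting to cancel it.

```python
# Toy "error-nulling" control loop, in the spirit of the heuristic Epstein
# cites: the fielder never models the ball's trajectory, but still measures
# an error (drift in the ball's optical bearing) and acts to cancel it.
# The gain and the bearing values are hypothetical.

def chase(optical_bearings, gain=0.5):
    """optical_bearings: successive sightings of the ball's bearing (degrees)."""
    reference = optical_bearings[0]              # bearing to hold constant
    speed_adjustments = []
    for bearing in optical_bearings[1:]:
        error = bearing - reference              # a measured, computed quantity
        speed_adjustments.append(-gain * error)  # act so as to null the error
    return speed_adjustments

print(chase([30.0, 32.0, 29.5, 30.5]))  # -> [-1.0, 0.25, -0.25]
```

Whatever one calls this strategy, measuring a deviation and nulling it is a computation over a tracked quantity.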

Epstein finishes with “the most egregious way in which the IP metaphor has distorted our thinking about human functioning”: the supposed prediction of the IP metaphor that a complete simulation of all our neurons would retain its meaning if run outside the brain. This idea is in fact a cool thought experiment called the China brain, which is unfortunately “out of left field” (to return to baseball) in a conversation about the IP metaphor. In an apparent attempt to illustrate the economic cost of endorsing the IP metaphor, he then criticises the Human Brain Project (HBP), a $1.3 billion EU initiative to simulate the entire human brain, failing to notice that many of the signatories of an open letter against the HBP are in fact computational neuroscientists (click “read the full letter” and scroll down), some (if not all) of whom probably subscribe to the IP metaphor.

To summarise, Epstein’s well-publicised essay is poorly argued: it conflates two orthogonal stances, (1) the information processing metaphor and (2) the very much non-metaphoric computational theory of mind. Even if these were the same, Epstein frequently undercuts his own anti-representational stance with logical inconsistencies. Regardless of the validity of the CTM, Epstein fails to mount a successful argument against it, let alone against the softer metaphor of the brain as an information processor. The counter-arguments above should therefore be kept in mind when using Epstein’s essay to score points against the representational theory of mind.

