Turing on Computation, Memory and Behavior

Eli Dresner

In the first section of this paper I discuss the notion of ‘effective memory’, as invoked by A. M. Turing in his celebrated 1936 article ‘On Computable Numbers, With an Application to The Entscheidungsproblem’. In particular, I shall argue that this conception of memory is of an inherently inter-subjective nature. In the second section I apply this analysis to the discussion of the content and justification of Turing’s Thesis: it is suggested that the Thesis delineates a class of functions through the more basic characterization of a class of inter-subjectively accessible patterns of human action, and it is argued that Turing’s justification for the finiteness conditions imposed on computation is tied to its inter-subjective nature. In the final section of the paper I consider how Turing’s post-war views regarding AI are related to the pre-war themes in his thought described above. It is shown that, on the one hand, there is discontinuity between the two periods: after the mid-forties Turing no longer views memory and computation as public in nature, as he previously did. On the other hand, Turing’s general inter-subjectivist orientation is upheld, and expressed in his approach to intelligence and thought.

Turing machines and effective memory

One of the key components of any modern computer is its memory: every digital computer contains a memory unit where the data for computation are stored, as well as the program according to which these data will be algorithmically manipulated. (This is the ‘stored program’ architecture, according to which contemporary computers are designed.) During any given computational run of the computer, data stored in its memory will be retrieved by the central processing unit, processed, and possibly changed.

It is widely acknowledged that contemporary digital computers can be described as realizations of Turing’s abstract computation machines (Turing machines), introduced in his 1936 paper (Turing 2004a) ‘On Computable Numbers, With an Application to The Entscheidungsproblem’. In particular, due to the above-mentioned stored-program architecture, modern computers are said to be realizations of the Universal Turing machine, i.e., a Turing machine that can simulate the performance of any other Turing machine. The existence of such a machine was proved by Turing in the same seminal 1936 paper.
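The stored-program idea can be illustrated with a short sketch (mine, not Turing’s own 1936 construction): a single fixed interpreter takes any machine table as ordinary data and simulates it, which is the essential point of the universal machine. The encoding and all names below are my own.

```python
# Minimal sketch of the universal-machine idea: one fixed interpreter,
# run(), simulates any Turing machine whose table is supplied as data.
# (Illustrative only -- the encoding is mine, not Turing's 1936 one.)

def run(table, tape, state):
    """table maps (state, scanned symbol) -> (symbol to write, move, new state)."""
    tape, pos = dict(enumerate(tape)), 0   # sparse, indefinitely extendible tape
    while (state, tape.get(pos, ' ')) in table:
        write, move, state = table[(state, tape.get(pos, ' '))]
        tape[pos] = write
        pos += {'R': 1, 'L': -1, 'N': 0}[move]   # move right, left, or stay
    return ''.join(tape.get(i, ' ') for i in range(min(tape), max(tape) + 1))

# One machine-as-data: append a stroke to a unary numeral.
successor = {
    ('b', '1'): ('1', 'R', 'b'),    # skip over the existing strokes
    ('b', ' '): ('1', 'N', 'halt'), # at the first blank, print a stroke and halt
}
print(run(successor, '111', 'b'))   # -> '1111'
```

The interpreter never changes; only the table, the ‘stored program’, does—which is why one machine suffices to simulate all others.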

Given this connection between Turing’s machines and contemporary computers, we may ask which component of the abstract machines corresponds to the memory unit(s) found in today’s computers, as noted above. It may be futile to expect to find in Turing’s model precursors to every aspect of today’s elaborate machines, but memory is of a basic and essential nature, and hence we may look for its mathematical template.

The seemingly obvious answer to this question is this: a Turing machine’s memory is its tape. Like the memory of contemporary computers, the tape is the symbolic store on which the data for computation appear at the beginning of computation, and where they are manipulated during computation. In the case of the universal machine, the tape is where the symbolically couched table of the machine that is to be simulated is written, exactly on a par with stored programs today. Thus it is no surprise that we can find contemporary texts describing the tape of a Turing machine as its memory. For example, Barker-Plummer (2005) writes in his entry on Turing Machines in the Stanford Encyclopedia of Philosophy: ‘In modern terms, the tape serves as the memory of the machine, while the read-write head is the memory bus through which data is accessed (and updated) by the machine’. Another example is Copeland’s recent edition of Turing’s major writings (Copeland 2004), where Copeland introduces Turing machines with the sentence (p. 6) ‘A Turing machine consists of a scanner and a limitless memory-tape that moves back and forth past the scanner’.

The use of the term ‘memory’ to describe the machine’s tape is thus in tune with today’s computer parlance, and as such should not be objected to. Nevertheless, this usage stands at odds with the way Turing himself talks about memory in his 1936 paper, and can therefore be described as anachronistic. Here is why.

As has been widely agreed among logicians and philosophers since the publication of Turing’s paper in 1936, a major strength of the paper is the argument given by Turing for the claim that the model presented in the paper (i.e., Turing machines) fully captures the intuitive notion of computation. This argument is articulated in the 9th section of the paper, but a preview of this argument is given tersely in the very beginning of the first section, where Turing (2004a, p. 59) writes ‘No real attempt will be made to justify the definitions given until we reach §9. For the present I shall only say that the justification lies in the fact that the human memory is necessarily limited’. Now if the machine’s memory is its tape, as suggested above, then surely it is the part of the machine that stands for human memory. However, in this case Turing’s remark is difficult to make sense of: the tape of the machine is its only component that is defined as limitless. As opposed to the finite number of inner states of the machine, and the finite number of symbols it can work with, the tape can be indefinitely extended in each run of the machine. Without this extendibility the class of functions computable by the machines would be curtailed. So how can Turing hold that his definitions capture the finiteness of human memory?

Sieg (1994) seems to be grappling with this difficulty in the following passage (p. 96):

Turing sees memory limitations as ultimately justifying the restrictive conditions. But none of the conditions is motivated by such a limitation; so how are we to understand his claim? I suggest the following: if our memory were not subject to limitations of the same character as our sensory apparatus, we could scan (with the limited sensory apparatus) a symbolic configuration that is not immediately recognizable, read in sufficiently small parts so that their representations could be assembled in a unique way to a representation of the given symbolic configuration, and, finally, carry out (generalized) operations on that representation in memory. Is one driven to accept Turing’s assertion as to the limitation of memory? I suppose so, if one thinks that information concerning symbolic structures is physically encoded and that there is a bound on the number of available codes.

It is clear that Sieg has in mind here a representational-symbolic conception of human memory. This conception is in tune with the one discussed above, which views the machine’s memory as symbolically couched as well, and Sieg finds it difficult to reconcile it with Turing’s own remarks. The solution Sieg proposes goes far beyond what can be found in Turing’s text.

The problem is resolved in the lines following Turing’s remark on the finiteness of human memory, where he outlines the general structure of his machines (p. 59):

We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions q1, q2, ..., qR which will be called ‘m-configurations’. The machine is supplied with a ‘tape’ (the analogue of paper) running through it, and divided into sections (called ‘squares’) each capable of bearing a ‘symbol’. At any moment there is just one square, say the r-th, bearing the symbol S(r) which is ‘in the machine’. We may call this square the ‘scanned square’. The symbol on the scanned square may be called the ‘scanned symbol’. The scanned symbol is the only one of which the machine is, so to speak, ‘directly aware’. However, by altering its m-configuration the machine can effectively remember some of the symbols which it has ‘seen’ (scanned) previously (my italics--e.d.). The possible behaviour of the machine at any moment is determined by the m-configuration qn and the scanned symbol S(r). This pair qn, S(r) will be called the ‘configuration’: thus the configuration determines the possible behaviour of the machine.

The key sentence in this passage for our purposes is the italicized one, in which Turing talks again about memory. Here, in brief, is the role of this sentence. After stipulating that the scanned symbol is the single symbol that the machine is ‘directly aware of’ at any given stage of its operation, Turing considers (albeit implicitly) an objection to the claim that the machine may be compared to a human performing computation: humans can perceive more than one symbol at a time, while Turing’s machines can scan only one. Turing’s answer is that changes in the inner m-configurations of a Turing machine can compensate for this seeming deficiency, thereby constituting effective memory. That is, symbols previously scanned affect the current m-configuration of the machine, and thus the behavior of the machine as well. In this sense, the machine can be said to remember previously scanned symbols and therefore act at any given stage of its operation not only on the basis of the single symbol that is currently scanned, but rather in reliance on a whole string of previously scanned symbols as well. Therefore the gap between what Sieg (2002, p. 397) calls a string machine and Turing’s own machines (letter machines, in Sieg’s terminology) is closed, and the implicit objection can be rejected.
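Turing’s point can be made concrete with a small sketch (my reconstruction, not a table from Turing’s paper): a machine that scans a single square and later reproduces the symbol it saw, without ever copying that symbol into any internal store. Which symbol was scanned is registered solely in which m-configuration the machine enters. All state names are my own.

```python
# Sketch of 'effective memory' (a reconstruction, not Turing's own table):
# the machine scans one square, then reproduces that symbol two squares to
# the right. No copy of the symbol is stored inside the machine; the memory
# consists entirely in WHICH m-configuration ('saw0' or 'saw1') it enters.

TABLE = {
    ('start', '0'): ('0', 'R', 'saw0'),   # entering 'saw0' IS remembering '0'
    ('start', '1'): ('1', 'R', 'saw1'),   # entering 'saw1' IS remembering '1'
    ('saw0', ' '):  (' ', 'R', 'write0'),
    ('saw1', ' '):  (' ', 'R', 'write1'),
    ('write0', ' '): ('0', 'N', 'halt'),  # the remembered symbol shapes behavior
    ('write1', ' '): ('1', 'N', 'halt'),
}

def run(table, tape, state):
    tape, pos = dict(enumerate(tape)), 0
    while (state, tape.get(pos, ' ')) in table:
        write, move, state = table[(state, tape.get(pos, ' '))]
        tape[pos] = write
        pos += {'R': 1, 'L': -1, 'N': 0}[move]
    return ''.join(tape.get(i, ' ') for i in range(min(tape), max(tape) + 1))

print(run(TABLE, '1', 'start'))   # -> '1 1'
print(run(TABLE, '0', 'start'))   # -> '0 0'
```

The past scanning affects future operations only by way of the state transition, which is exactly the non-representational sense in which Turing says the machine ‘can effectively remember’ what it has seen.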

The first thing to note about the above-quoted passage from Turing is this: it offers a simple, straightforward answer to the question raised above with respect to Turing’s remark about the connection of his model to the fact that human memory is limited. The number of the inner states of any given Turing machine is finite. Therefore, if changes in these states consist in effective memory, the finite number of states does indeed represent (and is justified by) the fact that human memory is limited. Thus the part of the machine that Turing associates with memory is not the tape, as is often done in later literature (including Turing’s own later writings, see section 3 below). Rather, as noted by Webb (1990), it is through changes in the non-symbolic inner m-configurations that memory is effected. This is not surprising, in view of Turing’s general analogy between the human computer and his machines, an analogy that serves to justify the adequacy of his analysis of computation. In this analogy the tape corresponds to a piece of paper, or a notebook—an artifact that is external to the human’s mind and accessed through the human’s perceptual apparatus. The machine’s inner part (the part that changes m-configurations) corresponds to the human himself, and therefore internal human capacities such as memory should be realized by this part, rather than by the tape.

A second issue raised by Turing’s comments is the conception of memory that emerges from them, so-called ‘effective memory’. As I argue in more detail elsewhere (Dresner 2004), this conception is notably non-representational. Turing does not think of human memory as an internal symbolic register, analogous to the external tape. Changes in machine configurations do not yield memory of previously scanned symbols by incorporating copies of these symbols within them, but rather in virtue of their being mediators between past scannings and present (as well as possibly future) operations. Thus Turing’s view of memory greatly differs from the conception of mind and memory which is at the foundation of so-called ‘classical’ cognitive science. According to the latter outlook, cognitive processes are performed on internally stored symbolic representations, memories being one type of such representations. Turing of 1936, as we just saw, leaves symbols outside the head.1

An alternative (some would say opposite) characterization of Turing’s view of memory seems to suggest itself for consideration: can Turing’s ideas here be described as behavioristic? At first blush the answer to this question may seem to be positive. (Indeed, one of the main thrusts of Shanker (1998) is that Turing’s thought about intelligence and mind is infected by behaviorism from the start, i.e., from 1936.) The reasons for this should be clear: Turing takes a central cognitive notion—that is, memory—and interprets it in his model as an establishment of adequate connections between perceptual inputs and behavioral outputs. (The inputs are previous scannings of symbols, and the outputs are movements of the machine’s reading/writing head along the tape, and/or changes of symbols on it.) According to Turing, for the machine to remember that a certain symbol has been scanned is for the fact that it has been so scanned to affect its future behavior. This does indeed sound like a behaviorist outlook.

However, I believe that this characterization of Turing’s position should be qualified. Inner m-configurations, which stand for human inner ‘states of mind’, play a key role in Turing’s analysis of computation. This aspect alone of Turing’s view of mind would make it unpalatable for staunch behaviorists, who would object to any talk about internal entities such as states of mind altogether. Moreover, symbols previously scanned affect present and future operations of the machine by helping determine the inner m-configuration which the machine will switch into: this is part of the ‘possible behavior’ of the machine that is determined by the current m-configuration and the symbol currently scanned. This broad conception of behavior, encompassing changes in (what stand for human) internal states, is not in accord with strict behaviorism.

Still, even if Turing’s notion of effective memory is not outright behaviorist, it does deserve to be labeled as inter-subjectivist. Inner states are not viewed by Turing as consisting in memory states in virtue of any representational characteristics they have. Rather, their content as memory states is conceptually connected to overt, public behavior, i.e., symbolic manipulation on the tape. For an inner state to realize effective memory is for this state to help determine current and future operations of the machine on the tape—there is no memory without such determination. And because the tape stands for a publicly available medium (e.g., a piece of paper), Turing’s effective memory is public as well.

I therefore argue that Turing is better thought of not as promoting a behaviorist outlook in his discussion of memory, but rather as hinting at a position that is more in the spirit of contemporary inter-subjectivist philosophers of mind and language. Donald Davidson, for example, is one such philosopher. Davidson (1980a) allows himself to talk about intentional mental states such as belief and desire; indeed, talk of such states is central to his philosophy. However, his view of these states is inter-subjectivist: being in a certain mental state is essentially tied to the manifestation of patterns of overt, public behavior—in particular, linguistic behavior that is amenable to interpretation. This does not make Davidson a behaviorist: he does not claim that talk of inner mental states should be eliminated in favor of talk about patterns of behavior, but rather only that there is conceptual interdependence between the internal and the public. Now although this sophisticated philosophical position was not available in 1936, I suggest that it may be useful, and not completely untrue to the text, to ascribe a similar view to Turing.

Effective memory and Turing’s thesis

Memory is not the main concern of Turing’s 1936 paper. Rather, the objective of the paper is to introduce a formal model that captures the intuitive notion of computation, and then to use this model to give an answer (a negative one) to an open logical question, viz. the decision problem for first-order predicate logic. The claim that the model introduced in the paper—i.e., Turing machines—indeed captures the intuitive notion of computation has been labeled Turing’s Thesis, and its exact content and source of justification have been the subject of much scrutiny and discussion in recent years. In this section I propose to examine whether and how the foregoing analysis of Turing’s comments on memory can contribute to our understanding of these two aspects of Turing’s Thesis, namely its content and justification.

The content of Turing’s thesis

What is the intuitive notion of computation that Turing intended to capture by his model of abstract machines? It has been erroneously held by many that according to Turing’s Thesis these abstract machines can calculate whatever is calculable by any machine whatsoever. Now as any dynamic physical process may in principle serve as a (possibly analogue) calculating machine of the mathematical functions that describe its dynamics, this understanding of the Thesis yields the consequence that any physical process is Turing computable. Combined with a materialistic view of the human mind this consequence entails the further result that any mental process is Turing computable as well.

However, as has been argued forcefully, e.g. by Copeland (2002) and Sieg (1994), this conception of Turing’s Thesis, as he defends it in 1936, is misguided. As is made clear by the context in which Turing worked (i.e., his aim to solve a logical problem), and by his explicit arguments for the adequacy of his model, Turing machines are not meant to represent procedures that can be carried out by computing machines, but rather by humans. Thus the ‘computer’ that Turing talks about in the 9th section of his paper is not a machine, as the term is used today. To read Turing this way would be another case of anachronism, similar to the anachronistic application of the term ‘memory’ discussed above. Rather, Turing’s computer (which Sieg spells ‘computor’, with an ‘o’, in order to avoid the anachronism just mentioned) is a human engaged in computation, and Turing machines are supposed to capture the essentials of such computation. This point is most probably the one Wittgenstein gets at when he says (1980, §1096) ‘Turing’s ‘Machines’. These machines are humans who calculate’, and it is made much more clearly and directly by the aforementioned later writers. In particular, Copeland (2002), following Gandy (1980), labels the view that any physical process is Turing computable as Thesis M, and is at great pains to distinguish it from Turing’s own Thesis.2

It is important to insist, then, that Turing’s Thesis concerns computation by humans. However, such insistence does not remove all ambiguity concerning the content of the thesis. There are many levels at which a human may be described as carrying out a process, or procedure. There are physiological processes that take place within our body, there are mental processes that take place in our mind, and there are the overt, public things that we do. Which of these classes of processes does Turing’s Thesis apply to? The answer, implicit in Turing’s text and in many of the discussions of this text, but still worth stating explicitly, is this: according to Turing, computation is a type of public, inter-subjective human action. Computations are a sub-class of the class of public human behaviors, not a sub-class of the class of mental processes, be they conscious or not. Turing’s Thesis characterizes the functions that can be computed in this overt, inter-subjective sense.

The grounds for this characterization of Turing’s Thesis can be found in the 9th section of his 1936 paper, and, as has been already noted above, have been discussed extensively in recent literature (Copeland 2002; Sieg 1994, 2002). Among them are Turing’s construal of the machine’s tape as analogous to a piece of paper, which is an external, inter-subjective medium, and his justifying the finite number of possible symbols that can be used by any given machine by an appeal to the limitations of human perception. These are indications that computation is something a human does, in the most literal sense; it is not something he or she does (either consciously or not) in his or her mind, nor an unintentional function of any bodily part. Now in order to state explicitly what distinguishes human action from processes and events of other kinds we can appeal again to Donald Davidson, already mentioned in the previous section. Davidson (1980b) tells us that an action is (p. 46) ‘whatever a person does intentionally under some description’, and that all primitive actions (i.e. acts that are not caused by prior actions of the agent) are bodily movements (p. 49). I believe that for Turing computation is an action in this strict sense: it is an intention-governed bodily manipulation of symbols.

The discussion in the previous section of this paper provides further support for this characterization of Turing’s Thesis. The upshot of this discussion was that memory, a capacity that is commonly viewed as private and internal, is conceived by Turing in inter-subjective terms. Therefore there should be no doubt that Turing is similarly inter-subjectivist vis-à-vis computation, a notion whose overt, public character is intuitively far more pronounced.

As in the previous section, it is natural to ask at this point whether Turing’s conception of computation can be characterized as behavioristic. And again, as in the previous section, there may seem to be good reasons for giving a positive answer to this question. Indeed, a search through Turing’s paper yields a striking number of appearances of the term ‘behavior’ in his discussion of computation. In section 1 he writes (Turing 2004a, p. 59):

The possible behaviour of the machine at any moment is determined by the m-configuration qn and the scanned symbol S(r). This pair qn,S(r) will be called the ‘configuration’: thus the configuration determines the possible behaviour of the machine.

In the beginning of section 3 (examples of computing machines) he writes (p. 61):

The machine is to have the four m-configurations ‘b’, ‘c’, ‘z’, ‘e’ and is capable of printing ‘0’, and ‘1’. The behaviour of the machine is described in the following table in which ‘R’ means ‘the machine moves so that it scans the square immediately on the right of the one it was scanning previously’.
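The behaviour Turing describes here can be sketched as follows. The table is my reconstruction, not a transcription of Turing’s own: his machine prints the figures 0 and 1 on alternate squares, leaving blanks in between, and the state names below merely echo those in the quotation.

```python
# Sketch of Turing's first example machine (a reconstruction): it prints
# '0' and '1' on alternate squares, forever. Since the machine never
# halts, the interpreter below runs it for a fixed number of steps.

TABLE = {
    ('b', ' '): ('0', 'R', 'c'),   # print 0, move right
    ('c', ' '): (' ', 'R', 'z'),   # skip a square
    ('z', ' '): ('1', 'R', 'e'),   # print 1, move right
    ('e', ' '): (' ', 'R', 'b'),   # skip a square, start over
}

def run(table, state, steps):
    tape, pos = {}, 0
    for _ in range(steps):
        write, move, state = table[(state, tape.get(pos, ' '))]
        tape[pos] = write
        pos += {'R': 1}[move]      # this machine only ever moves right
    return ''.join(tape.get(i, ' ') for i in range(min(tape), max(tape) + 1))

print(run(TABLE, 'b', 7))   # -> '0 1 0 1'
```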

In several tables in this (the 3rd) section the header of one of the columns is ‘behaviour’. In the opening sentence of section 7, where Turing defines the universal machine, he writes (p. 69):

A table is given below of the behaviour of this universal machine.

In part I of section 9 he writes (p. 75):

The behaviour of the computer at any moment is determined by the symbols which he is observing and his ‘state of mind’ at that moment,

and again in part III of the same section he says (p. 79):

In other words we assume that there is an axiom U which expresses the rules governing the behaviour of the computer.

What should we make of this insistent talk about behavior? It is certainly not my aim to downplay such talk as merely metaphoric: I certainly do believe that Turing’s association of computation with behavior strongly supports the perspective on his Thesis suggested here. However, as already stated in the previous section, it is my view that this constant talk of behavior does not entail that Turing is a behaviorist in his conception of computation, at least not in the strict sense of the term. The reasons for this were already presented above. Turing does not eschew talk about mental states—not in general, and not in his analysis of human computation in particular. Nor is Turing’s analysis inconsistent with talk about intentional action—this was argued several paragraphs above. Rather, Turing’s talk about behavior only stresses the fact that the human function under discussion, namely computation, is a public activity—open to the observation of other humans.

This characterization of Turing’s Thesis, and its striking expression in Turing’s talk about computation as a kind of behavior, brings to the fore yet another question, with which I conclude this sub-section. How does Turing’s conception of computation as overt action bear upon the relation of his Thesis to the other theses concerning the identity of the class of computable functions—e.g., that of Church? Certainly Church does not appeal to behavior in his discussion of computable functions, nor do Gödel, Herbrand and Kleene relate computation to behavior when they analyze this notion through the concept of recursive functions. Post’s analysis of computation, on the other hand, is much more in tune with Turing’s behavior-oriented approach. What, if anything, should be made of these differences?

The answer to this question that I propose is this. It is misleading to describe Turing’s analysis of computation and, e.g., the analysis of computation in terms of recursive functions, as two formally equivalent elucidations of the same intuitive concept. Such a statement implies that among the distinct approaches involved there is agreement in the philosophical or intuitive outlook on what computation is, and difference only in the way this outlook should be expressed or captured formally. However, the foregoing discussion stresses the fact that this is not the case: although all these figures and approaches are concerned with human computation, they conceive of it in variegated ways. For example, Turing’s conception of computation as a kind of behavior, or action, is radically different from a characterization of computation as a legitimate derivative of a set of basic operations on the natural numbers as abstract entities. Now these different conceptions, when cashed out formally, indeed prove to be mathematically equivalent. However, mathematical equivalence does not entail logical or philosophical equivalence. Indeed, in some philosophical contexts these different views of computation may have distinct consequences and implications.

An upshot of this string of considerations is that the term ‘Church-Turing Thesis’, which may be adequate for mathematical purposes, is problematic in other contexts, and should be replaced in these contexts by talk about Turing’s Thesis and Church’s Thesis as two distinct theses. (This view is presented in Kleene (1952), and is given implicit support in Copeland (2002), where each of these theses is stated separately. However, the title of Copeland’s article is ‘The Church-Turing Thesis’, which stands in contrast to the text itself.) The reason suggested here for this claim is that these two theses, while coinciding mathematically, may involve distinct outlooks as regards what computation—human computation—is.

The justification for Turing’s thesis

It is generally agreed that Turing’s formalization of the notion of computability is of stronger intuitive appeal than the other formalizations proposed in 1936. Indeed, Turing’s approach was the one that convinced Gödel at the time that the class of recursive functions exhausts the mechanically computable functions. In a series of papers (1994, 1997, 2002) W. Sieg has explicated and developed Turing’s support for his Thesis. According to Sieg (2002), Turing provides in the 9th section of his paper a conceptual analysis of computation, in which are implicit (or from which can be derived) several restrictive conditions on computational processes. Turing then puts forward what Sieg calls his Central Thesis—the claim that any process satisfying these conditions can be simulated by a general symbol-manipulating machine (called by Sieg a ‘string machine’). Finally, implicit in Turing’s text is a theorem that Turing machines are equal in computational power to such string machines.

Now the grounds for several of the restrictive conditions on computation extracted by Sieg (and others) from Turing’s text can be stated and defended in a relatively straightforward fashion. Such are the Boundedness and Locality conditions on the human computer’s ability to recognize and manipulate symbols, which are derived from limitations on human perceptual capacities, and so is the Determinacy condition, which has solid grounding in the intuitive conception of computation. However, there is yet another condition, about the justification of which there is controversy. This is the Finiteness condition, according to which the algorithm (or procedure) that determines the steps in each given computation should be of finite size, i.e., include a finite number of basic rules, or commands.

What are Turing’s reasons for formulating this restriction, and should we accept them? The text of Turing’s 1936 paper includes a much-discussed unclarity on this point. On the one hand, as already discussed above, in the first section of his paper Turing justifies this finite aspect of computation procedures by appealing to the finite number of possible states of the human mind. Moreover, in the 9th section of the paper Turing offers a quick (and not very convincing) argument why the number of mental states is indeed finite (p. 76): ‘If we admitted an infinity of states of mind, some of them will be “arbitrarily close” and will be confused’. On the other hand, in part III of the same section Turing suggests a way to avoid talking about states of mind altogether, by talking instead of formal, symbolically couched instruction lists. Presumably these are necessarily finite due to the above-mentioned human perceptual limitations, and the way these limitations restrict inter-subjective communicative interaction such as the specification of command lists. So what is the source for the justification of the finiteness condition, according to Turing—is it internal or external?

I propose that the considerations raised in the first section of this paper suggest a way to reconcile this seeming inconsistency in Turing’s text, and in a way that favors the second, inter-subjective grounding of the finiteness condition. Turing indeed holds that a limitation on the number of states of mind justifies the restriction on computation processes, but how does he think of states of mind? The discussion of ‘effective memory’ gives us a plausible answer to this question: according to Turing states of mind are inherently tied to overt, public action. They are not internal, representational entities, hidden from sight and only causally related to behavior. Rather, part of the identity of a certain mental state consists in the actions it gives rise to. We saw this above with respect to memory states (as realized in Turing machines by m-configurations): they consist in such states by creating certain patterns of behavior. However, the same should hold also for command or instruction states: they, too, are the states they are not in virtue of having some representational, symbolic content, but in virtue of creating certain patterns of behavior. Indeed, the m-configurations of a Turing machine realize instructions and memory in the same way, at the same time, mixing memory and desire, so to speak.

The upshot of these considerations is this, then. For Turing states of mind are public in an essential way. Therefore there is no contradiction between what he says in part I and part III of section 9 of his paper. In both passages the justification of the finiteness condition amounts to this: deterministic, rule-following behavior can be judged as such from a public, third-person perspective only if the set of rules according to which such behavior is performed is finite. Put otherwise, in order to communicate the list of rules that a certain procedure must obey this list needs to be finite—otherwise it cannot be communicated. Both formulations ground the finiteness of computation algorithms in their inter-subjective nature.

It is certainly not my claim here that this view of the finiteness condition is an original contribution of this paper. Stephen Kleene (1987), for example, expresses such a view in the following passage (p. 493):

Let us have a try at making sense out of there being a potential infinity of states of mind by a human computer with an expanding mind in applying an algorithm. So I encounter Smarty Pants, who announces to me, ‘I can calculate the value of a certain function φ(x), for each value of x for which it is defined, by rules already fixed which will determine my every step, so that what function φ is is already determined. But I can’t tell you, Wise Acre, how, because the rules have to tell how I will respond in each of an infinity of ultimately possible states of my expanding mind.’ I would reply ‘Phooey! If you can’t tell me what your method is, it isn’t effective in my understanding of the term!’

The contribution of the present paper is in showing how this view can be justifiably and consistently ascribed to Turing in 1936, and how it coheres with a conception of mind that has solid grounding in contemporary philosophy of mind (e.g., in the writings of Davidson). As was argued above, this conception of mind can reasonably be associated with Turing’s ideas at that time.

Memory, computation and behavior in Turing’s later writings

In the third and final section of the paper I present, albeit briefly, Turing’s post-war views regarding the concepts discussed above, taking them up one after the other. I argue that we see some significant changes in these views, as well as underlying continuity.


After World War II Turing changes his mind with respect to memory and the way it is realized in computing machines. He adopts (or, rather, probably helps originate) the outlook which is dominant today, and which was claimed above to be erroneously and anachronistically applied to his 1936 paper. According to this outlook, the part of the machine that corresponds to human memory—the part that constitutes the machine’s memory—is its tape. Memory is therefore symbolic and representational, in contrast to Turing’s earlier views. This approach is expressed in the following passage from Turing’s famous 1950 paper (Turing 2004c, p. 444):

A digital computer can usually be regarded as consisting of three parts:

(i) Store.

(ii) Executive unit.

(iii) Control.

The store is a store of information, and corresponds to the human computer’s paper, whether this is the paper on which he does his calculations or that on which his book of rules is printed. In so far as the human computer does calculations in his head a part of the store will correspond to his memory (my italics—e.d.).

In Turing’s 1948 report (2004b) this view is expressed even more explicitly (p. 413):

In [chapter 1] a certain type of discrete machine was described. It had an infinite memory capacity obtained in the form of an infinite tape marked out into squares on each of which a symbol could be printed.

The difference between this sentence and Turing’s previously discussed comments from 1936 is striking. In 1936 Turing says (2004a, p. 59) that ‘the justification [of the machine’s structure—e.d.] lies in the fact that the human memory is necessarily limited’, and here he says that the machine has ‘an infinite memory capacity’! Make no mistake, Turing is talking in both places about the same formal model. It is just that the way he thinks of this model, and of how it relates to human cognitive capacities, has changed.

What are the reasons for Turing’s change of perspective? As suggested by Hodges (in conversation), these reasons most probably have to do, at least in part, with Turing’s experiences as a master code-breaker during the war. These experiences included the construction of pioneering physical prototypes of computing machines, and the interaction with these machines may have pulled Turing towards the conception of computing machines that is so prevalent today. A related set of reasons, which will be touched upon below in clause 3 of this section, has to do not with Turing’s past experiences but with his future designs—i.e., his interest not only in artificial computing but also in the much broader and stronger concept of artificial intelligence.


As was argued above, in sections 1 and 2, Turing’s 1936 perspectives on memory and computation are tied to each other—from one we can learn about the other. It is therefore appropriate to ask whether Turing’s different, post-war conception of memory reflects a shift also in his overall view of the nature of computation. I believe the answer to this question is positive. If symbols are thought of as memory—i.e., as analogous to an internal human faculty—then computation need no longer be conceived of as an overt, public process. It is no longer viewed as a kind of inter-subjective action, but rather as a process that could take place within the human mind.

One interesting, albeit implicit, way this view of Turing’s is expressed is in the set-up of his famous imitation game (Turing 2004c). The human interlocutor in this game converses with the computing machine symbolically—i.e., through a mechanism that is equivalent to a Turing machine’s tape. However, is this to say that the whole of the machine’s tape is exposed to the interlocutor throughout the game, as it is publicly exposed in Turing’s 1936 conception of his model? Of course not. The computing machine would not stand a chance of deceiving the interlocutor into thinking that it is human if all its computations were so exposed. Thus from the interlocutor’s point of view (some of) the symbolic computations performed by the machine are internal.

Turing’s report from 1948 (Turing 2004b) contains further, ample proof of the change in his orientation. Turing’s discussion in this paper of (what would later be called) connectionist computing machines is a clear indication that he no longer views computation as human action. Even his remark (p. 416) on so-called ‘paper machines’—i.e., humans performing computations with a pencil and a piece of paper—is in tune with his new outlook. Admittedly, in this remark Turing reestablishes the connection between computation and human action, and thus comes back full circle to his 1936 departure point: humans can do whatever can be done by machines that were designed to mimic human action in the first place. (Shanker (1998) notes this circularity.) However, I argue that what Turing says in this comment is better thought of not as going in a circle, but rather as going back in an opposite direction to the path he took before. His exact words are (p. 416):

It is possible to produce the effect of a computing machine by writing down a set of rules of procedure and asking a man to carry them out.

These words do not ground computation in human action. Rather, they state the fact (which should seem surprising to the uninitiated reader, I think) that computation, conceived here in non-human terms, can also be realized by man.

Now, in section 2 above it was claimed that the conception of computation ascribed to the 1936 Turing bears upon the content of his Thesis and upon his justification of it. It is thus a natural further step to ask whether Turing’s different later conception of computation bears upon the way he thinks of his Thesis. I hazard that the answer to this question too is positive, but shall not argue the point here. I wish only to raise for consideration the hypothesis that Turing’s later view of his own thesis differs from the way he views it in 1936: it is no longer a thesis about human computation, and is not justified as such. If this is indeed the case, then a resolution may be suggested here of a recent debate between Hodges and Copeland (Hodges 2007) regarding the content of Turing’s Thesis. As noted in section 1 above, Copeland stresses the difference between Turing’s Thesis and what he calls Thesis M, which concerns the bounds of machine computation in general. Hodges, on the other hand, argues that no such clear-cut distinction can be found in Turing’s own writings. The suggestion made in this paper is that Turing’s ideas should not be thought of as a constant, consistent whole: his earlier views are in accord with Copeland’s account, but his later thinking may have diverged along the course suggested by Hodges. It is a mistake, I argue, to force the earlier Turing onto the later one, or vice versa.

Behavior and intelligence

Finally, a note on the place of behavior in Turing’s later thought. We saw above that computation is no longer viewed by Turing as a kind of action, or behavior. It may therefore be asked whether this is an indication of a more general rejection of the inter-subjectivist outlook concerning the human mind that characterized Turing’s earlier period.

The answer to this question is negative. Turing’s later objectives and interests go beyond computation, and concern the artificial reproduction of intelligence and thought. With respect to these more general concepts Turing retains his inter-subjectivist outlook: in his 1950 paper (Turing 2004c) and on later occasions (Turing 2004d, Turing 2004e) Turing seems to hold that having thoughts, or being intelligent, amounts to producing the right kind of inter-subjective behavior—exemplified, e.g., in successfully participating in the imitation game. Computing machines are of interest in this context only because they seem to be adequate candidates for producing such behavior, not because of their nature as symbol-manipulating machines. Thus Turing may have changed his mind and come to view symbol manipulation as internal, but he does not endorse an all-out classical-cognitivist view of thought, according to which thought is symbolic representation (Block 1981). For Turing symbolic processes are not representational, but rather only an adequate means with which to produce the right overt effect. In this he adheres to his earlier convictions, according to which the content of an inner state is conceptually tied to its connections with overt action.

The same qualification made twice already above is in order here as well. Turing is clearly not a strict behaviorist with respect to thought and intelligence. This fact is expressed, e.g., in the criterion for success in his imitation game, which is not stated in some intention-free scientific language. Rather, Turing is an inter-subjectivist about these mental notions. It is therefore no surprise that in this context too an analogy with Davidson is in place, and, in fact, it is explicitly made by Davidson (1990) in his article ‘Turing’s Test’. In this article Davidson criticizes Turing’s criterion for intelligence in many ways, but does so against a background of sympathy and affinity for the inter-subjectivist nature of this criterion.

Turing may therefore be said to have sacrificed the public nature of computation for the cause of an inter-subjectivist approach to thought and intelligence. The tradeoffs of this bargain, which we cannot go into here, are being researched and discussed to this day.


Barker-Plummer, D. 2005. ‘Turing Machines’, The Stanford Encyclopedia of Philosophy (Spring 2005 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2005/entries/turing-machine/>.

Block, N. 1981. ‘Psychologism and Behaviorism’, The Philosophical Review 90, 5-43.

Copeland, B. J. 2002. ‘The Church-Turing Thesis’, The Stanford Encyclopedia of Philosophy (Fall 2002 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2002/entries/church-turing/>.

Copeland, B. J. 2004. (Ed.) The Essential Turing. Oxford: Oxford University Press.

Davidson, D. 1980a. ‘Mental Events’, in D. Davidson, Essays on Actions and Events, Oxford: Clarendon Press, pp. 207-225.

Davidson, D. 1980b. ‘Agency’, in D. Davidson, Essays on Actions and Events, Oxford: Clarendon Press, pp. 43-61.

Davidson, D. 1990. ‘Turing’s Test’, in K. Said, W. Newton-Smith, R. Viale and K. Wilkes (eds.), Modelling the Mind, Oxford: Clarendon, pp. 1-12.

Dresner, E. 2003. ‘Effective Memory and Turing’s Conception of Mind’, Journal of Experimental and Theoretical Artificial Intelligence 15, 113-123.

Fodor, J. and Pylyshyn, Z. 1988. ‘Connectionism and Cognitive Architecture: A Critical Analysis’, Cognition 28, 3-71.

Gandy, R. 1980. ‘Church’s Thesis and Principles for Mechanism’, in Barwise, Keisler and Kunen (eds.), The Kleene Symposium, Amsterdam, 123-148.

Hodges, A. 2007. ‘Did Church and Turing Have a Thesis about Machines?’ In A. Olsewski, and J. Wolenski (eds.), Church’s Thesis after 70 Years, Frankfurt: Ontos Verlag, 242-252.

Kleene, S. 1952. Introduction to Metamathematics. Amsterdam: North-Holland.

Kleene, S. 1987. ‘Reflections on Church’s Thesis’, Notre Dame Journal of Formal Logic 28, 490-498.

Shagrir, O. 1997. ‘Two Dogmas of Computationalism’, Minds and Machines 7, 321-344.

Shanker, S. 1998. Wittgenstein’s Remarks on the Foundations of AI. London: Routledge.

Sieg, W. 1994. ‘Mechanical Procedures and Mathematical Experience’, in George, A. (ed.), Mathematics and Mind, Oxford: Oxford University Press, pp. 71-114.

Sieg, W. 1997. ‘Step by Recursive Step: Church’s Analysis of Effective Calculability’, Bulletin of Symbolic Logic, vol. 3(2), 154-180.

Sieg, W. 2002. ‘Calculations by Man and Machine: Conceptual Analysis’, in W. Sieg, R. Sommer and C. Talcott (eds.), Reflections on the Foundations of Mathematics, Natick, MA: Association for Symbolic Logic, pp. 390-409.

Turing, A. 2001. ‘Systems of Logic based on Ordinals’. In A. Turing, Collected Works — Mathematical Logic, Amsterdam: Elsevier, pp. 71-148.

Turing, A. 2004a. ‘On Computable Numbers, With an Application to The Entscheidungsproblem’ in Copeland [2004], pp. 58-90.

Turing, A. 2004b. ‘Intelligent Machinery’, in Copeland [2004], pp. 395-432.

Turing, A. 2004c. ‘Computing Machinery and Intelligence’, in Copeland [2004], pp. 433-464.

Turing, A. 2004d. ‘Intelligent Machinery, A Heretical Theory’, in Copeland [2004], pp. 472-475.

Turing, A. 2004e. ‘Can Digital Computers Think?’, in Copeland [2004], pp. 482-486.

Webb, J. 1990. ‘Introductory Note to Remark 3 of Gödel 1972’, in K. Gödel, Collected Works II, Oxford: Oxford University Press, pp. 292-304.

Wittgenstein, L. 1980. Remarks on the Philosophy of Psychology, vol. I, G. E. M. Anscombe and G. H. von Wright (eds.), Oxford: Basil Blackwell.


* This research was supported by the Israel Science Foundation (Grant No. 153/2004). I am grateful to Jack Copeland, Juliet Floyd, Andrew Hodges, Gualtiero Piccinini, Oron Shagrir, Wilfried Sieg and Judson Webb for their comments on earlier versions of this paper.

1 As noted by Webb (in conversation), the tape can be thought of as external memory, in the same way that books are often viewed as external extensions of our memory. Such a characterization of the tape is indeed consistent with what Turing says in 1936. However, if we use the term ‘memory’ in this dual fashion we need to bear in mind that (i) only internal memory consists in a cognitive capacity, and that (ii) this internal capacity need not be realized symbolically.

2 It should be acknowledged that in 1937 Turing (2001) does not uphold this human-oriented approach of the earlier paper, and talks of his model as capturing processes (p. 86) ‘which could be carried out by a machine’. However, this later remark does not undermine the above described interpretation of the paper with which we are concerned here. See also section 3 below.

Illustration credits

1 Eli Dresner

2 Ludwig Wittgenstein

3 Alan Turing

4 Stephen Cole Kleene