
The Case Against Computational Theory of the Mind:

A Refutation of Mathematically-Contingent Weak A.I.

Anthony D. Rhodes

Portland State University, 2011

“The Whole Dignity of Man Lies in His Thought”

– Pascal

In the last century, the debate over the nature of the mind has enjoyed a renewed

prominence and relevance. Gregory Chaitin has, for instance, recently made the bold claim

that the pure Gedankenexperiments provided by philosophers of mind in the 20th century

directly engendered one of mankind’s crowning intellectual achievements: the invention of

the computer.[1] For the purposes of this paper, my interests lie in assessing and critiquing

the current debate centering around the question prompted by the aforementioned

genealogy of ideas, namely: is the human mind a computer? It is likely that the answer to

this question will have an important impact upon considerations in the philosophy of mind

as well as conceptions of the nature of the self over the course of the next century.

I will use an article published by the preeminent philosopher of mind, John Searle,

entitled “Roger Penrose, Kurt Gödel, and Cytoskeletons” (1997) as my main point-of-entry

into the “computational theory of mind” (i.e. the theory that human minds are essentially

equivalent to computational machines) discussion. Searle is well-known for his attempted

refutation of the core premises of the computational theory of the mind, known as the

“Chinese Room” thought experiment (c. 1980). In the present paper I wish to focus on

arguments that spawn from the theses of the Chinese Room argument, particularly with

regard to Searle’s more recent musings on Penrose and Gödel. Using concrete

mathematical and logico-deductive tools, Searle’s work on Penrose and Gödel frames the

computational theory of the mind problem more cogently than many of the more

theoretically informal or purely speculative studies on this topic. The tropism toward a

disciplinary inclusiveness with regard to investigations into questions in the philosophy of

mind led in a direct way to the reinvigoration of the study of formal systems of logic in the latter

half of the 20th century. These lines of inquiry have proven fruitful in expanding

conceptions of scientific “systems,” as well as paving the way for additional deep-seated

enigmas. In what follows, I wish to reconsider the philosophical implications of Gödel’s

findings with respect to the question of whether the human mind is in fact a computer. I

will demonstrate how a version of Incompleteness may be used to bolster the notion of the

inimitability of the human mind by refuting the “weakest” version of computational theory

of the mind (CTM).[2] I begin this approach with a discussion of Turing.

Although he is unanimously considered one of the greatest scientific minds of the

past century, Alan Turing is rarely given appropriate recognition for his important

contributions to philosophy.[3] Contemporary literature has reassessed the importance of

Turing's philosophical investigations, particularly in relation to the mind/computer

problem.[4] Frequently, Turing is presumed (unjustly, in my view) to be a proponent of the

CTM. This belief is largely informed by the sanguine predictions which Turing puts

forward in his famous “Computing Machinery and Intelligence” (1950) paper where he

conjectures that computational simulations of human minds could exist by the close of the

20th century (a prediction that he based on an assurance that storage capacities and

processing speed limitations would not, alone, obviate the development of such a “perfect”

simulation). But as we shall see, Turing’s own constructivist reworking of Gödel’s

Incompleteness leads, conversely, through subsequent results by the philosopher John R.

Lucas and Penrose, to a repudiation of CTM. Turing’s philosophical inquiries by way of

the inventions of both: (1) Turing machines and (2) the Turing test/the “Halting problem”

are crucial to these explorations and require further examination.

Turing briefly described what has since become known as a “Turing machine”

in a thought experiment from a paper dating back to 1937, and his more refined definition –

a definition which essentially inaugurated the modern field of artificial intelligence (A.I.) –

first surfaced in a 1948 essay entitled “Intelligent Machinery.” Turing describes a machine

consisting of several components: a read/write head, an infinite capacity/infinite length

tape marked out into squares, and a finite table that suffices to define the internal

instructions (read: axioms) of the machine/program. Typically, one can describe a Turing

machine in terms of ordered 4-tuples, e.g. (q1, Sr, Os, q2). These 4-tuples can be interpreted

as follows: when the machine is in state q1 and its read/write head "reads" symbol Sr (for

the sake of simplicity, the machine-lexicon may consist solely of binary symbols 0 and 1)

from the table of instructions, then the machine proceeds to implement operation Os, where

the permissible operations include basic processes such as: move head left/right,

write/erase, and so forth; after the execution of the designated operation, the machine

transitions to the specified "new" state, q2. It is not difficult to see, for example, that a

machine constructed in this way can successfully add numbers together (and by extension,

perform any arithmetic operation you like). I illustrate this notion with a common textbook

example of a Turing machine that adds any two arbitrary numbers together.

[Transition table not recoverable from the preview: "Turing addition," a machine that adds two numbers.]

This machine adds numbers m and n in the following fashion: On its tape the numbers m (a

Strong A.I. is the position that consciousness and other mental phenomena consist entirely in computational processes.

Weak A.I., conversely, requires that brain processes cause consciousness, and these

processes can be potentially simulated on a computer. The remaining positions, the third of

which Penrose endorses, consist in the notion that brain processes cause consciousness but

these processes “cannot even be properly simulated computationally”, while the last

position (which I do not pursue in this paper) alleges that science cannot explain

consciousness.

Before I unpack Searle’s position (he concedes the legitimacy of weak A.I. but not

strong A.I.) I want to first clarify the manner in which I propose to appraise CTM from an

ontological point of view. After all, if CTM suggests that the mind and computer are

equivalent, what here is meant by equivalent? Leibniz' principle of the identity of

indiscernibles provides a sound way to check the purported equivalence of mind and

computer upheld by CTM. This criterion is arguably the most widely-accepted principle

relating to the assignation of ontological equivalence. The identity of indiscernibles states

that two entities are identical if they have all conceivable (which is to say not merely

empirical) properties in common. Stated more formally, then: if, for every property F,

object x has F if and only if object y has F, then x is identical to y; alternatively, in the

language of symbolic logic we have: ∀F(Fx ↔ Fy) → x = y. Let us now apply this

analytic standard to the claims made by CTM in order to illuminate Searle’s approach,

beginning with the famous Chinese Room argument.

Searle presents his arguments in the article “Minds, Brains and Programs” (1980).

From the outset, he makes clear his position against weak A.I.: “my discussion [is] directed

[against the specific] claim that the appropriately programmed computer literally has

cognitive states and that the programs thereby explain human cognition."[6] Searle draws an

analogy between computer simulations of human mental phenomena and the process of

mindless symbol shunting. The thought experiment is described as follows: Suppose that a

monoglot, call him Searle, who speaks only English and understands not a word of Chinese

is placed in a room. In the room Searle has access to a large instructional table of

conditional statements which direct Searle to reply to such and such Chinese symbol with

such and such intelligible response (assume that the table is exhaustive so that Searle is

prepared to handle any string of questions written in Chinese). Questioners from outside

the room pass Searle a series of yes/no queries written in Chinese. Although such questions

are inscrutable to Searle, he nonetheless uses the table to formulate coherent replies to each

of the questions posed to him so that his interlocutors are unable to detect his inability

to read and write in Chinese. They therefore conclude that Searle understands Chinese;

Searle qua a Turing machine (note that the table serves as a program or a set of axioms if

one extends the analogy) has passed the Turing test; mutatis mutandis, the computer and

mind are therefore equivalent.
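To make the "symbol shunting" vivid, the instructional table can be caricatured in a few lines of Python. This is a toy of my own devising, not Searle's: his table is stipulated to be exhaustive, which no finite listing can be.

```python
# A toy caricature of the Chinese Room's instruction table: replies are
# produced by shape-matching alone, with no understanding anywhere in
# the loop. (Entries are illustrative; Searle stipulates a table
# exhaustive enough to handle any query, which no finite dict is.)

RULEBOOK = {
    "你会说中文吗？": "会，说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
    "今天天气好吗？": "非常好。",          # "Is the weather nice today?" -> "Very."
}

def chinese_room(question: str) -> str:
    # The monoglot occupant matches uninterpreted shapes against the
    # table and copies out the listed reply: syntax without semantics.
    return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你会说中文吗？"))
```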

But does such a situation pass muster with respect to our standard-bearing test for

ontological indiscernibility? Searle reckons not. Recall that by Leibniz' lights, the mind and

computer are equivalent if and only if every conceivable property that holds for the mind is

preserved for the computer. Searle alleges that the inhabitant of the Chinese Room does not

understand Chinese in the same way that a native speaker does; moreover, the process of

mindless symbol shunting in no way helps to explicate the internal cognitive states of the

native speaker. Ergo:

Whatever purely formal principles you put into [a] computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything… hence content-less.

In building my own response to Searle, I wish to avoid many of the eristic concerns

that attend the rebuttal of strong A.I. in the form of the Chinese Room argument. The key to

such a tactic involves an investigation into the unsolvability of the Halting problem which

Turing presented in 1936 as a constructive or computational analogue to Gödel’s 1931

Incompleteness Theorem.

The rudiments of the unsolvability of the Halting problem as it applies to a rejection

of CTM are as follows. Naively, one can imagine certain elementary problems in

mathematics whose truth or falsity can be known to us semantically or perhaps better put:

structurally, and yet a purely computational approach to this same problem yields an

inconclusive result. A trivial example might involve, for instance, the case in which we ask

whether the sum of some pair of even numbers ever produces an odd number. Any

schoolchild knows the falsity of such a claim because they can see the result, and this is

precisely the point. More formally, we can concisely prove the result as follows:

2 m  2 n  2( mn )  m n ,  Z

So the sum of any two even integers is even; a fortiori , an even number cannot be odd.

Now if a computer exhaustively attempted to discover whether there are two even numbers

whose sum is odd, e.g., “2+2 is even [next]…2+4 is even [next]…2+6 is even [next]…”

such a procedure would never halt and therefore the answer to this problem would remain,

in a computational sense, unknown. But the astute reader will, in all likelihood, find

quibbles with this particular example. Isn’t the statement: the sum of two even numbers is

_______, simply, pace Kant, an analytic a priori proposition – whereupon the evenness of

the sum of two even numbers is connoted simply through the meaning of the term even itself?

Certainly such a criticism is perfectly valid. Yet things aren’t quite as simple if we devise a

more intricate problem.
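Before turning to that second example, the never-halting search just described can be made concrete. Here is a minimal Python sketch (my illustration, not the paper's):

```python
from itertools import count

# The exhaustive search described above: enumerate pairs of even
# numbers, looking for an odd sum. Because 2m + 2n = 2(m + n), the test
# never succeeds, so the loop never halts -- the answer stays
# computationally "unknown" even though the one-line proof settles it.

def search_for_odd_even_sum():
    for k in count(2):                 # enumerate pairs by total size
        for i in range(1, k):
            m, n = 2 * i, 2 * (k - i)
            if (m + n) % 2 == 1:       # never true
                return m, n            # unreachable
```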

Take as a second example a question regarding prime numbers. Bear in mind that a

prime number is any integer, call it p, that is greater than or equal to two and whose only

positive divisors are p, itself, and the number one. Now prime numbers are important for all

sorts of reasons – but at this juncture let us simply ask, as many have wondered before: are

there an infinite number of primes? The answer, which, to the best of anyone’s knowledge

first dates to Euclid, is a resounding yes. Notice that this result is less apparent than the

previous example; we might, in the language of Kant, term it a synthetic a priori

proposition. The idea is that it takes a bit more work in order to see the truth in the assertion

that there are an infinite number of primes. Euclid proposed the following argument:

Proof: Suppose not, and allow that there are only a finite number of primes: {p1, p2, p3, …, pn}. Define the number p* = p1p2p3⋯pn + 1, which is to say p* equals the product of all primes plus one. Then we are faced with two possibilities. Either p* is prime, in which case our original finite set of primes is incomplete, a contradiction; otherwise, p* is composite (non-prime), whereby the fundamental theorem of arithmetic guarantees that it is divisible by some prime number – but this too is a contradiction, because by its very construction p* leaves remainder 1 when divided by every prime on our list. Consequently, by the method of reductio ad absurdum, there exist an infinite number of primes.
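A short computational aside (mine, not the paper's) makes the second horn of the dilemma vivid: p* need not itself be prime, but its prime factors always lie outside the original list.

```python
from math import prod

# Euclid's construction: p* = p1*p2*...*pn + 1 leaves remainder 1 when
# divided by every prime on the list, so any prime factor of p* is new.

def smallest_prime_factor(n: int) -> int:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n                              # n is itself prime

primes = [2, 3, 5, 7, 11, 13]
p_star = prod(primes) + 1                 # 30031, which is 59 * 509
assert all(p_star % p == 1 for p in primes)
print(smallest_prime_factor(p_star))      # 59 -- a prime not on the list
```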

Just as in the previous example with even numbers, the truth or falsity of a

mathematical proposition may require extrinsic or special structural knowledge of the

given elements at hand in addition to information about the relational properties of these

entities in order to ascertain a definitive result. With the prime number example, we are, I

think, advancing closer to the line of demarcation between what is mentally possible and

that which is computationally impossible; since this problem, for instance, offers no simple

of the Halting problem yields a somewhat more practical result. The Halting problem poses

the question as to whether there is a general algorithm that can accurately determine, given

a description of a specific program, whether this program will eventually halt or continue

to run forever. Turing proved that there is no such algorithm that can account for any

arbitrary program and its associated input values. Many programs, of course, are evidently

terminable; and these programs are therefore of little concern. However, let us reconsider

the prior example regarding prime numbers. Now perhaps there is a simple algorithm we

could construct to test this specific case; the algorithm might go something like this: “IF

program requests search for maximal prime THEN return (0)” (where 0 denotes that the

program is interminable). But now consider the program that asks a computer to find an

odd perfect number (a currently unsolved problem) or to find a counterexample to the

Goldbach conjecture (also unsolved). We can get a sense perhaps, that constructing a

universal algorithm to test for terminability is deeply problematic.
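The core of Turing's result can be sketched in a few lines. This is a standard rendering of the argument, mine rather than the paper's: suppose a total halting decider existed, and build a program that does the opposite of whatever the decider predicts about it.

```python
# Hypothetical total decider -- the assumption to be refuted. It cannot
# actually be implemented; the sketch only exhibits the contradiction.

def halts(program, data) -> bool:
    """Assumed: returns True iff program(data) eventually halts."""
    raise NotImplementedError("no such universal algorithm exists")

def diagonal(program):
    if halts(program, program):   # if the decider says "halts"...
        while True:               # ...then loop forever,
            pass
    return                        # ...otherwise halt at once.

# diagonal(diagonal) halts iff halts(diagonal, diagonal) is False --
# i.e., iff diagonal(diagonal) does not halt. Either answer the decider
# gives is wrong, so no such `halts` can exist.
```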

Penrose’s idea is that by extension of Incompleteness through the unsolvability of

the Halting problem, it is possible to construct a computation that we can see does not ever

stop and yet no universal algorithm can tell us this. But this result holds for any set of

“computational algorithms”; whereupon he concludes that we are not computers carrying

out an algorithm.

Before assessing this conclusion, I would like to walk through Searle’s own version

of Penrose’s take on Turing and Gödel for two chief reasons. In the first case, his

exposition is the most concise recounting of the proof of the unsolvability of the Halting

problem that I have encountered; and secondly, the proof itself uses a version of a

technique commonly known as Cantor's diagonalization argument, which I will later

replicate during the brief discussion of consciousness contained at the end of this paper.

Proof (sketch): For any number n we can think of computational procedures C1, C2, C3, etc., on n as dividing into two kinds, those that stop and those that do not stop. Now how can we find out which procedures never stop? Well, suppose we had another procedure (or finite set of procedures) A which, when it stops, would tell us that the procedure C(n) does not stop. Think of A as the sum total of all the knowable and sound (read: algorithmically correct) methods for deciding when computational procedures stop. So if the procedure A stops, then C(n) does not stop. Now think of a sequence of well-ordered computations numbered C1(n), C2(n), C3(n), etc. These are all the possible computations that can be performed on n. These would include basic arithmetic and algebraic operations such as multiplying a number by n, squaring n, etc. Since we have numbered all the possible computations on n, we can think of A as a computational procedure which, given any two numbers q and n, tries to determine whether Cq(n) never stops. Suppose, for example, that q = 17 and n = 8. Then the task for A is to figure out whether the 17th computation on 8 stops. Thus, if A(q, n) stops, then Cq(n) does not stop. Previously we stipulated that the sequence C1(n), C2(n), … included all the computations on n, so for any n, A(n, n) has to be a member of the sequence {Cn(n)}. Well, suppose that A(n, n) is the kth computation on n, that is, suppose A(n, n) = Ck(n). (*)

Now consider the case when n = k, so that A(k, k) = Ck(k). From above, it follows that if A(k, k) stops, then Ck(k) does not stop. However, if we substitute the identity into (*), we have: if Ck(k) stops, then Ck(k) does not stop. But if a proposition implies its own negation, it is false. Thus: Ck(k) does not stop. It therefore follows that A(k, k) does not stop either, because it is the same computation as Ck(k). This indicates that our comprehensive set of procedures is insufficient to tell us that Ck(k) does not stop, despite the fact that we know it does not stop. So A can't tell us what we really know, namely that Ck(k) does not stop. Thus from the knowledge that A is sound, we can show that there are some nonstopping computational procedures, such as Ck(k), that cannot be shown to be nonstopping by A. So we know something that A cannot tell us, so A is not sufficient to express our understanding. But A included all the knowably sound algorithms we had. Thus no knowably sound set of computational procedures such as A can ever be sufficient to determine that computations do not stop, because there are some, such as Ck(k), that they cannot capture. So we are not using a knowably sound algorithm to ascertain what it is that we know.[8]

Searle's suggestion that any change in mental states is "perfectly matched by a change in brain states" commits him to an irrevocable

contradiction. Remember that Searle agrees, along with Penrose, that strong A.I. is

infeasible; that is, he claims that mental phenomena in total cannot be properly

simulated by a computer. But Searle seems to suggest in the preceding comment

that there can be a perfect, isomorphic mapping between mental states and brain

states. This belief is incompatible with a rejection of strong A.I. If the hypothesis of

strong A.I. is false, then brain states are not equivalent to mental states (for if they

were, an ideal physical model of the brain would satisfy the conditions of strong

A.I.); and if brain states are different from mental states then Searle is a committed

dualist and his statement above is nonsensical.

I likewise reject Searle’s second rejoinder to Penrose. Searle explains his

reasoning:

He [Penrose] thinks he has shown that you could not program a robot to do all the mathematical reasoning that human beings are capable of. But once again that claim has to be qualified. He has, if he is right, shown that you could not program a robot to do human-level mathematical reasoning if you programmed it solely with algorithmic mathematical reasoning programs. But suppose that you programmed it solely with totally nonnormative brain simulator programs. There is no question of 'truth judgments' or 'soundness' in the programs. Nothing in his argument shows that 'human-level mathematical reasoning' could not emerge as a product of the robot's brain simulator program, just as it emerges as a product of actual human brains… From the fact that we cannot simulate a process at the level of, and under the description, 'theorem proving' or 'mathematical reasoning' it does not follow that we cannot simulate the very same process, with the same predictions, at some other level and under another description.[10]

Searle’s comments strike me as more of a semantic diversion than a substantive

philosophical objection. I am not at all opposed to utilizing thought experiments of

varying degrees of plausibility in the course of one’s reasoning – indeed, without

such experiments many of the most important developments in philosophy and the

sciences would have been otherwise unattainable. Even so, it is not at all clear to

me that the “nonnormative brain simulators” which Searle references above are the

least bit conceivable. Other than denoting something antithetical to what we

envisage as a “computer” today, what does this mean? Perhaps sensing the

flimsiness of his rebuke of Penrose, Searle admits such a thought is akin to “science

fiction fantasy.” Fictional though they may be, thought experiments (at least the

compelling ones) must bear some manner of existential constituents; we must be

able to portray them in terms that go beyond a mere vacuous semantic husk. Where

Penrose conveys the rough correlation “mind implies mathematical-reasoning

aptitude”; Searle seems to me, by contrast, to effectively scream “nonnormative!”

in a crowded theater of mathematicians and neuroscientists. Not only is his

comment openly specious, it is, moreover, something of a philosophical non sequitur.

These qualms aside, Searle’s criticism is also faulty on the level of analytic

depth. He maintains that Penrose fails to prove that non-computational processes

could not give rise to simulated, human-level mathematical reasoning. So then a

counterexample to Penrose’s thesis, which Searle entertains, would imply that

human-level mathematical reasoning could exist within a system of mental states

(or some process approximating mental states) that is systemically

non-computational; in other words, such an embedding of processes is, individually or in

total, devoid of any semblance of computational mathematics – at any depth. But

then suppose, for the sake of argument, that we are once again sending an electrical

The four color theorem states that any map in the plane can be colored with at most four colors so that no two adjacent regions have the same color. The theorem was first proposed by Arthur

Cayley in 1879, crediting Augustus De Morgan, and two early attempted proofs were

offered within a year of its inception. Although both proofs were initially accepted

as valid within the mathematics community, flaws were nevertheless later

discovered in each, and by 1890 both proofs were dismissed. For decades the

theorem remained impervious to a formal proof, until in the 1960s

mathematicians began to develop methods involving computers. Appel and Haken

later proved the theorem by reducing the set of all possible configurations of

maps in the plane down to 1,936 reducible constituent maps (meaning that any

map in the plane can be reduced to a union of this set of simple maps). Then,

through a painstaking combinatorial process, computers confirmed that the

theorem held for this particular set of maps; the result of the four color theorem

then followed.
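The flavor of the machine-checked step can be suggested with a toy of my own; Appel and Haken's actual verification ran over 1,936 configurations and vastly more case analysis. Here we merely confirm by exhaustive search that a small map's adjacency graph admits a proper four-coloring.

```python
from itertools import product

# Brute-force check that a small map is 4-colorable: try every
# assignment of four colors to the regions and test that no two
# adjacent regions share a color.

def four_colorable(regions, borders):
    for colors in product(range(4), repeat=len(regions)):
        assignment = dict(zip(regions, colors))
        if all(assignment[a] != assignment[b] for a, b in borders):
            return True                    # a witness coloring exists
    return False

regions = ["A", "B", "C", "D", "E"]
borders = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
           ("B", "D"), ("C", "D"), ("C", "E"), ("D", "E")]
print(four_colorable(regions, borders))    # True
```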

I want to be clear as to why this example proves that computers or Turing

machines can achieve things that are qualitatively different from the types of things

achievable by human minds. The fact that a Turing machine is more

computationally efficient than you or I is not at all relevant to this purpose.

Computers naturally have colossal (even potentially infinite) storage capacities and

arithmetical potential. And yet, this feature does not represent an inherent structural

difference from the way human minds perform similar mental operations; both

processes involve an equivalent syntactic analysis. The point then is that in the case

of the four color theorem, the computer possesses additional (present) knowledge

beyond that of human minds (whether the four color theorem is unprovable by

human minds is another story – though it has been alleged that certain results, the

Riemann hypothesis being the most notorious, are unprovable by human minds).

Where the truth-value of the proposition: the four color theorem is valid, is

undecided for human minds, possibly hopelessly so, such is not the case for Turing

machines. Computers and human minds therefore possess different epistemic

properties.

Consider now a second example. We have known of the irrationality of the

number pi since Lambert’s proof dating from 1761. Because the decimal expansion

of an irrational number is non-recurring and interminable, we may conjecture (though

such a claim remains unproven today) that this expansion approximates

random number generation. If, for instance, one applies Kolmogorov’s complexity

criterion, it would follow that because it is impossible to reduce the informational

content of the decimal expansion of pi to a proper subset of itself, this expansion is

infinitely complex. It is not, for instance, known whether the decimal expansion of

pi contains somewhere, say, one billion consecutive 1's. Despite the almost

imponderably small probability that one could locate such a string of numbers, the

existence of such a string is not a mathematical impossibility – so far as we know.
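The computer's side of that epistemic divide is easy to gesture at with a toy probe of my own. It assumes the mpmath library is available, and a billion-digit search is of course far beyond this setting.

```python
from mpmath import mp, nstr

# Probe the decimal expansion of pi for its longest run of consecutive
# 1s within the first `digits` places -- a finite shadow of the
# property discussed above.

def longest_run_of_ones(digits: int) -> int:
    mp.dps = digits + 10                   # working precision, with slack
    expansion = nstr(mp.pi, digits).replace(".", "")
    best = run = 0
    for ch in expansion:
        run = run + 1 if ch == "1" else 0
        best = max(best, run)
    return best

print(longest_run_of_ones(10_000))         # a small number, in practice
```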

Unassisted, a human mind will never discover, I would presume, the truth-value of

the proposition: the decimal expansion of pi contains one billion consecutive 1’s

(call this property p̂). On the other hand, it is conceivable that a computer – while it

is impossible to disprove such a claim – could credibly confirm that p̂ holds for pi

(though not by using today’s levels of processing power). A computer could, in

other words, know something that we can’t know – if in this instance, there is in