Notes on Artificial Intelligence

 

UPHS cites are from "The Chinese Room Argument and Replies to it."

Strong AI, Weak AI, and Cognitivism

"Strong AI" is the view that the programmed computer actually has cognitive states — that it literally is a mind that understands meanings and has intentionality (thinks about things). Advocates of strong AI insist that it is possible to create a program running on a digital computer which not only behaves as if it were thinking, but actually does think. For strong AI, the word "mind" just means "runs the right computer program", so human minds, too,  are computer programs.

"Weak AI", according to Searle, the view that the programmed computer is a very powerful tool for studying the mind. Weak AI advocates say that it is possible to run a program on a digital computer which will behave as if it were a thinking, conscious entity, i.e., computers can simulate minds (but are not minds and have no intentionality).

Cognitivism (in the AI context) is the view that the human brain is a digital computer.

What AI Thinks It’s About

From Searle’s APA talk "Is the Brain a Digital Computer?":

"I want to begin the discussion by trying to state as strongly as I can why cognitivism has seemed intuitively appealing. There is a story about the relation of human intelligence to computation that goes back at least to Turing's classic paper (1950), and I believe it is the foundation of the Cognitivist view. I will call it the Primal Story:

"We begin with two results in mathematical logic, the Church-Turing thesis (or equivalently, the Church's thesis) and Turing's theorem. For our purposes, the Church-Turing thesis states that for any algorithm there is some Turing machine that can implement that algorithm. Turing's thesis says that there is a Universal Turing Machine which can simulate any Turing Machine. Now if we put these two together we have the result that a Universal Turing Machine can implement any algorithm whatever.

"But now, what made this result so exciting? What made it send shivers up and down the spines of a whole generation of young workers in artificial intelligence is the following thought: Suppose the brain is a Universal Turing Machine.

"Well, are there any good reasons for supposing the brain might be a Universal Turing Machine? Let us continue with the Primal Story.

"It is clear that at least some human mental abilities are algorithmic. For example, I can consciously do long division by going through the steps of an algorithm for solving long division problems. It is furthermore a consequence of the Church - Turing thesis and Turing's theorem that anything a human can do algorithmically can be done on a Universal Turing Machine. I can implement, for example, the very same algorithm that I use for long division on a digital computer. In such a case, as described by Turing (l950), both I, the human computer, and the mechanical computer are implementing the same algorithm, I am doing it consciously, the mechanical computer nonconsciously. Now it seems reasonable to suppose there might also be a whole lot of mental processes going on in my brain nonconsciously which are also computational. And if so, we could find out how the brain works by simulating these very processes on a digital computer. Just as we got a computer simulation of the processes for doing long division, so we could get a computer simulation of the process for understanding language, visual perception, categorization, etc.

"But what about the semantics? After all, programs are purely syntactical." Here another set of logico-mathematical results comes into play in the Primal Story.

"The development of proof theory showed that within certain well known limits the semantic relations between propositions can be entirely mirrored by the syntactic relations between the sentences that express those propositions. Now suppose that mental contents in the head are expressed syntactically in the head, then all we would need to account for mental processes would be computational processes between the syntactical elements in the head. If we get the proof theory right the semantics will take care of itself; and that is what computers do: they implement the proof theory.

"We thus have a well-defined research program. We try to discover the programs being implemented in the brain by programming computers to implement the same programs. We do this in turn by getting the mechanical computer to match the performance of the human computer (i.e. to pass the Turing Test) and then getting the psychologists to look for evidence that the internal processes are the same in the two types of computer."

The Chinese Room

Searle points out that a human can simulate a computer running a program designed to (for example) pass the Turing Test. Thus someone ignorant of Chinese could follow the rules of a program for communicating in Chinese. Even if the program were capable of passing the Turing Test, the human would not understand Chinese as a result of following the program. The program consists of nothing but rules for manipulating Chinese symbols.
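
A deliberately crude picture of the rule book Searle imagines following, as a sketch in Python: a lookup from input symbol strings to output symbol strings. The phrases are ordinary stock sentences chosen only for illustration; the point is that nothing in the procedure consults what the symbols mean, and a program rich enough to pass the Turing Test would still, on Searle's view, be more of the same symbol shuffling.

    # The "rule book" as a bare lookup table: shapes in, shapes out.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def chinese_room(squiggles):
        # Searle-in-the-room just matches the incoming shapes against the book.
        return RULE_BOOK.get(squiggles, "对不起，我不明白。")

    print(chinese_room("你好吗？"))      # prints "我很好，谢谢。"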

The systems reply: akin to the cast of millions argument.

"The systems reply is that while it is true that the Searle-in-the-box has no understanding of Chinese, the system, when taken as a whole, does. The system includes Searle, his ledger of instructions, the notes he makes, the bits of paper; it includes exactly everything inside the box."

Searle’s response to the systems reply

According to Searle, some realities are intrinsic: they would be the same whether or not humans knew about them (things like gravity and electromagnetism). But according to Searle, other realities are socially constructed: they depend on humans for their being (things like money — money is real, but nothing is intrinsically money). Likewise, nothing has meaning intrinsically. The intrinsic reality of brains includes chemical reactions and structures like neurons and synapses, but not meaning or computation, which are socially constructed. People sometimes intrinsically "compute" (e.g., when they figure out their taxes), but non-human systems never intrinsically "compute"; rather, non-human systems' "computation behavior" is always "behavior-viewed-as-computation by an observer".

Proponents of strong AI often cite the fact that computation is multiply realizable: many different physical things can function as computers. This is the insight exemplified in the "cast of millions" argument in Leiber. But Searle is suspicious of multiple realizability. Something about the concept of a "computer" must be seriously confused if anything can be a computer (even pigeons pecking, or cats and mice, or humans lined up on football fields).

You can see Searle's point when you consider how children play. Nothing is intrinsically a gun, for example, so in children's play, anything can simulate ("be" in the non-intrinsic, observer-dependent way) a gun. (Little boys could pretend their sisters' Barbie dolls are guns, for example.) No line in space is intrinsically the goal line, so the goal line can be (in the observer-dependent way) any line. In the same way, nothing is intrinsically a bit, so anything can "be" a bit, in the observer-dependent way. And if you agree with this, you see Searle's point that semantics is just not the same kind of thing as syntax. The "syntax" of a gun is its physico-chemical properties; the "syntax" of a goal line is its spatio-temporal situation. The syntax of the gun and the goal line is intrinsic. Similarly, the syntax of the computer is whatever comprises its intrinsic properties (silicon, electromagnetic impulses, etc.). The reason anything can simulate or function as a digital computer (the reason computers are multiply realizable) is simply that bits don't exist intrinsically. Neither do meanings. Nothing is intrinsically meaningful, so semantics isn't part of intrinsic reality. Meanings are real, but they are socially constructed (not intrinsic) realities.
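
One way to illustrate the claim that nothing is intrinsically a bit (a sketch with made-up voltage readings): the very same physical states count as different bit strings, and hence as different computations, depending on which encoding an observer chooses to apply.

    # The device's intrinsic reality: a sequence of voltages.
    voltages = [4.9, 0.1, 5.0, 0.2]

    # Two observers, two encodings of the same physics.
    def encoding_a(v):
        return 1 if v > 2.5 else 0        # observer A: high voltage counts as 1

    def encoding_b(v):
        return 0 if v > 2.5 else 1        # observer B: high voltage counts as 0

    bits_a = [encoding_a(v) for v in voltages]        # [1, 0, 1, 0]
    bits_b = [encoding_b(v) for v in voltages]        # [0, 1, 0, 1]

    print(int("".join(map(str, bits_a)), 2))          # 10: "the device computed ten"
    print(int("".join(map(str, bits_b)), 2))          #  5: "the device computed five"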

The robot reply (Jerry Fodor)

"The robot reply is that if the experiment were modified so that the [Chinese] box were inside of (sic) a robot, and the input derived from the sights and sounds around the robot, and the output able to control the actions of the robot, then the syntax of the Chinese symbols would be augmented with semantics and true understanding could occur." (Winston)

Many defenders of the AI camp argue against Searle that machines can have minds, but they qualify the claim. In order to have a mind you need the physical capacity to interact with an environment, and this requires more than merely the right program. You need the right hardware as well. In our case the right hardware (or wetware!) is the brain, which is capable of taking input from our sense organs and controlling our speech, movements, etc. Thus, given sophisticated enough hardware, a robot can have a mind.

Fodor's core argument

Fodor argues that if a system has:

(a) the right kind of symbol (syntax) manipulation program

(b) the right kind of causal connections with the world (this is what the robotic system is supposed to have)

then

(c) its symbols have a semantics (meaning)

and hence

(d) it has representational states (it represents things to itself)

and hence

(e) it has intentionality (aboutness).

Fodor argues that we have no reason to think that (a) and (b) can be true only of organic systems.

Searle’s response to the robot reply

The semantic problem remains and can’t be overcome either by a system or a robot. (c) does not follow from (a) and (b) above. What exactly is "the right kind" of symbol manipulation program, or "the right kind" of causal connections? Do a system's symbols "have a semantics" in the same way conscious humans' symbols do? Surely not.

Supporters of the robot view are not really clear in the first place on the very idea of a symbol. The very notion of a symbol rides on the back of intentionality. A symbol is always a symbol for someone engaged in a certain project. The very notion of a symbol implies not seeing, but "seeing as." Thus there is no hard and fast distinction between a symbol and a thing. Symbols can be things; things can be symbols. Meaningfulness can't be created ex nihilo; meaningfulness is an event for an already-intentional being. So you can't generate intentionality out of mechanical symbol recognition and manipulation. Rather, semantics itself presupposes intentionality.

The adequately-complex-systems/robot reply to Searle

Fodor, Harnad, Winston, and many others say: Searle talks as if syntax could never get linked to semantics -- but it could, if the Chinese box itself could move around in the world and associate internal feedback with things in the world. There's no reason to suppose a computer is always going to be an isolated immobile box. The robot, like a person, is/embodies a system! That is, to solve Searle’s objections, we only need to suppose the thing that does the translating of Chinese is a complex entity (human or non-human) with a body and links out to the world. Then it could have semantics (because it could link symbols to things), and syntax, too, because it does the looking up according to rules, which it can adapt and modify according to circumstances. We’re talking about something that is not a mere computer (a machine that simply manipulates 0’s and 1’s) — since it also has an electronic brain and can support sensory interaction, control limb movement etc. — but it isn’t "just a simulation" either, since it has both the semantics and the syntax.
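
A toy sketch of the picture this reply has in mind (the class and data are invented for illustration): the symbol table is no longer free-floating, because each symbol gets associated with the sensor readings recorded when it was used. Whether such associations amount to real semantics is exactly what Searle denies.

    from collections import defaultdict

    class ToyRobot:
        """Associates symbol strings with the sensor snapshots taken when they were used."""
        def __init__(self):
            self.groundings = defaultdict(list)    # symbol -> list of sensor snapshots

        def perceive_and_label(self, symbol, sensor_snapshot):
            # Link the symbol to what the cameras, scales, etc. registered.
            self.groundings[symbol].append(sensor_snapshot)

        def refers_to(self, symbol):
            return self.groundings.get(symbol, [])

    robot = ToyRobot()
    robot.perceive_and_label("apple", {"color": "red", "shape": "round", "weight_g": 180})
    print(robot.refers_to("apple"))    # the snapshots this robot has linked to "apple"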

Stevan Harnad, in "Minds, Machines and Searle," argues that Searle just doesn’t appreciate the power of an excellent simulation. If weak AI is just the view that a computer can simulate a brain, but the simulation is really, really good, then is there any way to tell the difference between a brain and a computer, aside from speciesist reasons of origins and chemistry?

I.e., Searle’s objection to strong AI is fine as far as it goes; he’s right that a mere computer can’t have a mind because it doesn’t have semantics. But he doesn’t consider the possibility of an artificial being consisting of an embedded computer plus a body interacting with the world and thus the possibility of semantics and intentionality. Such a being would be more than a mere simulation of a mind (i.e., it would be much closer to "strong" AI than to "weak" AI).

As the UPHS lectures put it: " … what Searle does not recognise is that there are a range of Artificial Intelligence Positions each of which differs from Strong AI, and from the mere claim that computers can be used to simulate thought. For example there are the positions:

"AI: Alternative 1: to have a mind is to run the right program on a physical system that is capable of interacting with the environment in the right sort of way.

"AI: Alternative 2: to have a mind is to run the right program on a physical system that has the right history and that is capable of interacting with the environment in the right sort of way.

"Turing's view is a version of Alternative 1."

A robot constructed along the lines of Alternative 1 or 2 could pass not only the Turing Test, but the Super Turing Test. It would be able to do things as well as converse. In the Super Turing Test, objects are passed in and out of the Turing Test room, and the two beings in the room would be asked to perform actions on these objects (ranging from painting to scientific experiments). The Super Turing Test would require that the machine in the room be able to perform all these tasks and thus be capable of robotic capacities: using sensors and motor control to interact with the world.

This is, of course, what the Vehicles creatures do.

Successful Simulations Are the First Step in Successful Implementation

The distinction between strong and weak AI assumes a robust distinction between a simulation and the real thing: it assumes that we can always discover the difference. Harnad argues that this is a false dilemma (more on this below).

 From Harnad:
 "A large portion of Searle's argument seems to revolve around the notion of "simulation," so we must first indicate much more explicitly just what a simulation is and is not: Suppose that computers had been invented before airplanes (and that a lot of aerodynamic theory, but not enough to build a successful plane from scratch, was likewise already known at the time). Before actually building a prototype plane, engineers could under those conditions save themselves a lot of trial and error experience by simulating flight -- i.e., putting into a computer program everything they knew about aerodynamic factors, lift, drag, etc., and then trying out various designs and materials by merely simulating these "model" airplanes with numbers and symbols.

 "Now suppose further that enough of the real factors involved in flight had been known or guessed so that once the computer simulation was successful, its first implementation -- that is, the first prototype plane built according to the principles that were learned from the simulation -- actually flew. Note, first, that the sceptics who had said "simulation ain't flying" would be silenced by the "untested" success of the first implementation. But an important distinction would also become apparent, namely, the difference between a simulation and an implementation. A simulation is abstract, an implementation is concrete; a simulation is formal and theoretical; an implementation is practical and physical. But if the simulation models or formalizes the relevant features (Searle calls them "causal" features) of the implementation (as demonstrated in this case by the successful maiden voyage of the prototype airplane), then from the standpoint of our functional understanding of the causal mechanism involved, the two are theoretically equivalent: They both contain the relevant theoretical information, the relevant causal principles.

 "The idea of a mechanism is really at the heart of the man/machine (mind/program) problem. We must first agree on what we mean by a mechanism:

 "A mechanism is a physical system operating according to causal, physical laws (including specific engineering principles). A windmill is a mechanism; so is a clock, a plane, a computer and, according to some (including this writer), an amoeba, a rodent, and a man. We must also agree on what we mean by understanding a mechanism: A mechanism is understood if you know its relevant causal properties. How can you show that you know a mechanism's relevant causal properties? One way is by building it. Another is by providing a successful formal blueprint or program for building it.

 "As to the belief that the mind is a program: Consider my own beliefs, for example. I happen to believe that all of our cognitive capacities, conscious or otherwise, are the functions of a causal mechanism; I also believe that the best way to get to understand a mechanism is to try to model it. So far, we only have trivial toy models for tiny parts of what the cognitive mechanism can do. I believe that with continued effort, more creative talent in cognitive science and many new and powerful cognitive principles, those toy models will grow until they converge on an all-purpose model for all of our human capacities. It will be possible (in principle) to test that grand model, not only with a computer, but also with an abacus, a Chinese army, or a paper and pencil. The computer simulation will not have a mind. The implementation of that model as an actual mechanism, however, will pass the Total Turing Test, and we will have no better (or worse) grounds for denying that it has a mind than we have for denying that anyone else does. Does that make me a believer in Strong AI?"

