Computing Machinery and Intelligence
By Pranav S. Dahisarkar
Published in “Mind: A Quarterly Review of Psychology and Philosophy”, in 1950.
“I propose to consider the question, ‘Can machines think?’”
About the paper
Describes the imitation game, now called the Turing Test.
Possibly one of the most important and disputed topics in AI, philosophy of mind, and cognitive science.
The foundation of AI, and its ultimate goal?
Useless and even harmful?
A key paper regardless.
The Imitation Game: Which is man, and which is woman? Which is machine?
Let’s try it… computer or poet? At six I cannot pray: Pray for lovers,
through narrow streets And pray to fly
But the Virgin in their dark wintry bed
Let’s try it… computer or poet? What seas what shores what granite islands toward my timbers
And woodthrush calling through the fog My daughter.
Let’s try it… computer or poet? Men with picked voices chant the names of cities in a huge gallery: promises
that pull through descending stairways to a deep rumbling.
Let’s try it… computer or poet? Where were thou, sad Hour, selected from whose race is guiding me, Lured by the love of Autumn's being, Thou, from heaven is gone, where was lorn Urania When rocked to fly with thee in her clarion o'er the arms of death.
A Brief History of AI, pre-Turing Test
Greek mythology: Hephaestus, idea of intelligent robots.
13th century: talking heads, supposedly owned by Roger Bacon and Albert the Great.
15th century: da Vinci drafted a robot design.
16th century: the Maharal of Prague’s Golem.
17th century: Descartes – “animals are complex machines”.
19th century: Charles Babbage’s Analytical Engine.
1940s: Isaac Asimov – “Three Laws of Robotics”.
1943: McCulloch and Pitts, model neurons with algorithms?
Turing’s contemporaries, and subsequent related work in AI
Claude Shannon, 1950: algorithm for playing chess.
Allen Newell and Herbert Simon, 1956: one of the first expert systems, the Logic Theorist.
Noam Chomsky, 1957: analyze language mathematically, Syntactic Structures.
Friedberg, 1958: genetic algorithms.
Joseph Weizenbaum, 1966: writes the computer program ELIZA, with some success at the imitation game
Computerized human psychologist
Minsky and Papert, 1969: wrote the book Perceptrons, showing some limitations of neural nets; slowed research in the area.
Kurzweil Reading Machine, 1976: read printed text.
MYCIN, 1979: expert system that diagnosed some diseases.
Proponents and Opponents of AI
Lots of debate about potential success and limitations of AI.
Herbert Simon, 1958: “within ten years a digital computer will be the world’s chess champion.”
Hubert Dreyfus, 1972: What Computers Can’t Do
Human intelligence is more than manipulation of symbols.
John Searle, 1980: Opposed idea of strong AI, that machines can think, with “Chinese Room” thought experiment.
The Paper
‘Can machines think?’
Not a meaningful question, definitional issues; instead, suggests the imitation game
Description of machines, and universality of digital computers
Possible objections to the question and the test, with responses:
Theological, mathematical, arguments from consciousness, originality, etc.
Learning machines
Digital Computers
‘Are there imaginable digital computers which would do well in the imitation game?’
Manchester Mark 1
Predictions
In 1950, Turing predicted that within 50 years it would be possible to program a computer with a storage capacity of about 10^9 bits (roughly 100 MB) to play the imitation game so well that an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning.
By then, it would be natural to speak of computers ‘thinking’.
“[The machine] may be used to help in making up its own programmes, or to predict the effect of alterations in its own structure.”
“We may hope that machines will eventually compete with men in all purely intellectual fields.”
Some Objections
Theological objection: Thinking is part of humans’ souls, and so animals/machines can’t think.
Head-in-the-sand objection: Consequences of thinking machines are dreadful, so let’s hope it’s not possible.
Futuristic movies and books build upon this fear.
Machines will never be able to do X.
X = {be kind, friendly, have sense of humor, fall in love, etc.}
Mathematical Objection
Gödel’s Incompleteness Theorem: in any consistent logical system powerful enough to include number theory, there are statements that can be neither proved nor disproved within the system
The halting problem: no machine can determine, in general, whether another machine will halt on a given input (see the sketch below)
These show limitations to discrete-state machines
But humans are not infallible
Judge of the imitation game will not know if incorrect response is because of limitation or human error.
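For the halting problem above, a minimal sketch (not from Turing’s paper; the function names are hypothetical) of the standard diagonal argument:

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no total, always-correct version can exist")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:      # oracle says "halts" -> loop forever
            pass
    else:
        return           # oracle says "loops forever" -> halt immediately

# Asking whether diagonal(diagonal) halts contradicts the oracle either way,
# so no machine can decide halting in general.
```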
Consciousness
“Not until a machine can write a sonnet…because of thoughts and emotions felt…could we agree that machine equals brain…” – Professor Jefferson
This is really just an attack on the TT itself: the TT does not test whether the computer actually thinks or feels, only how it behaves.
Solipsism: the only way to really know if a machine is thinking is to be the machine.
Lady Lovelace’s Objection
Wrote about Babbage’s Analytical Engine: the machine cannot originate anything, and only does what it is programmed to do.
But what about learning machines?
Maybe machines can’t surprise? But then again, humans are often surprised by machines.
In addition, what is surprise? Theorems may not be surprising after they are proven, but is there no virtue in proving them?
Continuity of the Nervous System
The nervous system is not a discrete-state machine, so it can’t be mimicked by a computer.
Again, the interrogator can’t tell the difference.
Extra-Sensory Perception
Seems to acknowledge overwhelming statistical evidence for telepathy.
Imitation game fails with ESP, since human can communicate with interrogator via ESP.
A telepathic human is better at guessing games (e.g., which hand is the coin in?).
To solve this, Turing suggests putting subjects in ‘telepathy-proof room’.
Learning Machines
“Presumably the child brain is something like a notebook as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets.”
Replicate the child brain, and then feed it information.
Gives an estimate of the amount of storage in the human brain: 10^9 decimal digits.
Much less than currently believed.
Believes that once the memory is available, the problem of constructing a computer with a human-like mind is “mainly one of programming”. Even discusses ways of “teaching” the computer.
Later debate on the TT
Stuart Shieber’s analogy:
Deniers: intelligence is like a bad cold.
There is a germ, a hidden cause.
Can’t “fake it”.
Approvers: intelligence is like fluency in Italian.
Talk to someone in Italian for an hour.
Can’t say, “he doesn’t really know Italian, he’s just faking it.”
Now say someone gets good grades and does well on psychometric exams.
Can you say, “He’s not really intelligent, he’s just faking it to get into a good University”?
Searle’s “Chinese Room”
Thought experiment, 1980. A refined consciousness objection.
There is a room, with a man who only speaks English.
The man has a book with instructions: given some scribble in Chinese, output this scribble.
A man fluent in Chinese sends messages (in Chinese) into the room, and gets responses (also in Chinese).
He can’t distinguish between the man in the room and a fluent Chinese speaker.
But does this mean the room knows Chinese?
Conclusion: the TT only tests for "weak" AI, not "strong" AI.
Psychologism and Behaviorism
Ned Block, 1981: intelligence can’t be based only on behavior.
The TT does not demonstrate a general capacity of the machine for producing reasonable responses.
Even a mindless machine can pass the TT (sketched below):
Have all possible conversations of a given length in memory.
The machine just looks up the correct response.
Clearly not intelligent.
For intelligence, need capacity and compactness:
No exponential blow-up in storage
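As a rough illustration of Block’s point (a toy sketch, not Block’s own construction; the table entries below are invented), the “mindless” machine is just a giant lookup table keyed on the conversation so far:

```python
# Toy "Blockhead": canned replies keyed on the exact conversation prefix.
# The entries here are invented examples, not real data.
CANNED_REPLIES = {
    ("Hello.",): "Hello! How are you today?",
    ("Hello.", "Hello! How are you today?", "Fine, and you?"): "Very well, thanks.",
}

def blockhead_reply(history):
    """Return the stored reply for this exact conversation so far."""
    return CANNED_REPLIES.get(tuple(history), "I'm not sure what to say.")

print(blockhead_reply(["Hello."]))  # -> Hello! How are you today?

# Block's catch: covering every possible conversation of n words over a
# vocabulary of size V needs on the order of V**n entries -- exponential
# storage -- even though each individual lookup is trivial.
```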
TT Variations
Harnad’s Total Turing Test: same as TT, but machine has to respond to all inputs, not just verbal.
Needs robot with sensorimotor capabilities
Watt’s Inverted Turing Test: roles reversed. The computer shouldn’t be able to distinguish its own outputs from those of a human.
Schweizer’s Truly Total Turing Test: machines shouldn’t just be able to converse or play chess, but to develop language and invent chess.
Subject Matter Expert Turing Test: test only in some field.
TT as Interactive Proof
Shieber’s argument in favor of TT, against Block:
Block: “Intelligence is capacity to produce sensible verbal responses to verbal stimuli without exponential storage”
Shieber: TT does test for this!
Conventional proof: prover P sends a proof of an assertion to verifier V, who verifies its correctness. An interactive proof (IP) adds interaction and randomness:
Interaction: many rounds of message-passing.
Randomness: V may use random bits in its messages.
Also, V accepts, but possibly with some small chance of error.
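A minimal sketch of this shape of protocol (hypothetical names; the challenge set, pass threshold, and number of rounds are illustrative, not from Shieber):

```python
import random

def looks_sensible(stimulus, response):
    """Hypothetical judging step; in a real TT the judge is a human interrogator."""
    return bool(response)

def run_interactive_proof(subject, challenges, rounds=20, threshold=0.75):
    """Verifier V: send random challenges, count sensible replies, accept or reject."""
    passed = 0
    for _ in range(rounds):
        stimulus = random.choice(challenges)   # randomness: V picks challenges at random
        response = subject(stimulus)           # interaction: one round of message-passing
        passed += looks_sensible(stimulus, response)
    return passed / rounds >= threshold        # V accepts, with some small chance of error
```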
Interactive Proof of Capacity
TT as proof of capacity
“capacity to produce sensible responses to stimuli”
Consider space of sequences of verbal stimuli
A machine/person has capacity if it answers correctly on, say, 50% of space
Now run k tests, and say subject passes 75% of time
Chernoff bound: if the subject lacks the capacity, it passes this test with probability at most exp(−Ω(k)), i.e., exponentially small in k
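A hedged sketch of the bound being appealed to, using the slide’s illustrative 50%/75% thresholds and assuming the k test stimuli are sampled independently: if the subject’s true success rate p were at most 1/2, then by the Hoeffding/Chernoff bound

\[
\Pr\bigl[\hat{p} \ge 0.75 \mid p \le 0.5\bigr] \;\le\; \exp\bigl(-2k(0.25)^2\bigr) \;=\; e^{-k/8},
\]

so the verifier wrongly credits a subject that lacks the capacity with probability that shrinks exponentially in the number of tests k.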
Interactive Proof of Compactness
TT as proof of compactness
How can TT test for storage of machine?
Assumption: machine can’t store more than amount of information in world.
Upper bound:
“without exponential blow-up in storage”
Store a bit at the finest granularity (the Planck length): 10^-35 meters.
Volume of the universe: ~10^79 cubic meters.
Total storage capacity of the universe: < 10^200 bits.
So the computer needs real (compact) capacity to pass any TT longer than log2(10^200) ≈ 670 bits ≈ 140 words, i.e., less than 1 minute of conversation!
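A quick back-of-the-envelope check of the arithmetic above (a sketch only; the one-bit-per-Planck-length-cube granularity, the 10^79 cubic-meter volume, and the ~5 bits of entropy per English word are assumptions, not figures from Shieber):

```python
import math

planck_length = 1e-35                         # meters; one bit per Planck-length cube
bits_in_universe = 1e79 / planck_length**3    # ~1e184 bits, comfortably under 1e200
conversation_bits = 200 * math.log2(10)       # log2(10^200), about 664 bits
words = conversation_bits / 5                 # assuming ~5 bits of entropy per English word
print(f"{bits_in_universe:.0e} bits, {conversation_bits:.0f} bits, about {words:.0f} words")
# -> 1e+184 bits, 664 bits, about 133 words: well under a minute of conversation.
```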
Current TT Achievements
No computer has passed the TT to date.
AOLiza: modern spin-off of ELIZA.
IRC chatbots: people have conversations online with machines, can’t distinguish them from humans.
Loebner Prize: annual TT competition, prizes for passing the TT and for “most human conversation”.
No prize has been awarded for the former yet.
The Long Bets project has a $10,000 bet between Mitch Kapor and Ray Kurzweil:
Will a computer pass a TT by 2029?
CAPTCHAs: Completely Automated Public Turing test to tell Computers and Humans Apart
Conversations with a Computer
PROGRAM: So what do you do for a living?
JUDGE: I think. I do research. I write. I love. I live.
PROGRAM: I wonder whether you think sometimes! I am feeling like the judge here!
JUDGE: Huh?
Conversations with a Computer
PROGRAM: Is love the most important thing to you? How important is that to you?
JUDGE: essential cure to my existential schizophrenia.
PROGRAM: A striking example of the essential identity of the two tongues. Existential schizophrenia, that’s amusing.
So do computers think?
Interview with Garry Kasparov’s advisor after the loss to Deep Blue:
Q: “Did Garry Kasparov think the computer was thinking?”
A: “Not thinking but that it was showing intelligent behavior…it understands strategy…”
What now?
“We can only see a short distance ahead, but we can see plenty there that needs to be done.” – Alan M. Turing