Vision: A Resource for Writers
Lazette Gifford, Editor
Vision@sff.net

Artificial Intelligence, Part I

By Kat Feete
© 2003, Kat Feete

You don't see artificial intelligence in science fiction the way you used to, and for a good reason. Back in the Golden Age of science fiction, when computers were new, the size of a small house, and overwhelmingly powerful, it seemed obvious that pretty soon computers would be as smart as - no, smarter than! - humans. They would be capable of dizzying virtue or terrifying vice. Any day now, experts predicted confidently, the breakthrough would come. Any day now we'd see true artificial intelligence. Any day now we'd have computers we could talk to.

Any day now.

There are still quite a few optimists saying any day now, but they are a shrinking group. As with so many other things, AI turned out to be much more complicated than was originally thought - because we did not, in fact, know what we meant by "intelligence." As anyone who has ever used one can testify, computers are capable of the most astounding bouts of intelligent stupidity imaginable, proceeding from a rational basis through perfectly logical steps to a totally useless and meaningless conclusion. The rational capability is there, but the ability to judge is missing. There is no mind or consciousness directing the intelligence.

The realization of this fact spawned a new term, "artificial consciousness." It then turned out that we did not know what we meant by "conscious" either.

Things went downhill from there.

In the meantime, science fiction authors who just wanted a nice talking computer faced an ever-increasing tangle of conflicting opinions, philosophical conundrums, ethical issues, and technological mishmash that led most to abandon the whole arena and go write about FTL engines instead.

I can't pretend that this article will cover the full range of the artificial intelligence debate. It will, however, give a concise summary of the main positions, attempt to clarify the main issues, and offer a road map to any enterprising science fiction writers who still consider thinking computers a nifty idea.

 

The Turing Test

Anyone working with AI will, sooner or later, come across something called the Turing Test. It's the goal at which all enterprising programmers must aim, the only empirical method of measuring artificial intelligence currently available. It goes something like this:

In Room Number One there is a computer. In Room Number Two there is a human being. In Room Number Three there is another human being. The man in Room Three can communicate with both Rooms One and Two via a terminal, through which the computer transmits its answers and the human types his. The man in Room Three can ask any questions he likes. His purpose? To tell which room holds the machine and which room holds the man. The purpose of the man in Room Two is to help him; the purpose of the computer is to deceive him into believing that it is, in fact, the man. If the man in Room Three cannot tell the difference between the human and the machine, the machine wins: it is a true artificial consciousness.
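For the programmers in the audience, the shape of the game is easy to sketch. Below is a minimal Python version; the canned-answer program standing in for the machine in Room One is a toy of my own invention (a real contender would be enormously more sophisticated), but the judge's task is exactly as described:

    import random

    def chatbot(question):
        # Stand-in for Room One: canned keyword replies, nothing more.
        canned = {
            "hello": "Hello! How are you today?",
            "weather": "I hadn't noticed; I've been indoors all day.",
        }
        for keyword, reply in canned.items():
            if keyword in question.lower():
                return reply
        return "That's an interesting question. What do you think?"

    def human(question):
        # Stand-in for Room Two: a real person typing at a terminal.
        return input("[Room Two] " + question + "\n> ")

    def imitation_game(questions):
        # The judge in Room Three questions both rooms over a text
        # channel, then must say which room holds the machine.
        rooms = {"A": chatbot, "B": human}
        if random.random() < 0.5:
            rooms = {"A": human, "B": chatbot}
        for q in questions:
            print("Judge: " + q)
            for label, respondent in rooms.items():
                print(label + ": " + respondent(q))
        guess = input("Which room holds the machine, A or B? ")
        actual = "A" if rooms["A"] is chatbot else "B"
        print("Correct!" if guess.strip().upper() == actual
              else "Fooled: the machine wins.")

    imitation_game(["Hello there.", "What do you make of the weather?"])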

Why is imitating a human being the goal of artificial intelligence? Because humans are the only sentient beings we have for comparison, and it is against us that AIs must be measured. The comparison also gives us an easy, empirical handle on the vague and annoyingly nebulous terms "consciousness" and "intelligence."

Is the Turing test actually used? Yes, it is; there's a yearly contest called the Loebner Prize (http://www.loebner.net/Prizef/loebner-prize.html) dedicated to finding a computer that will pass the test. None has yet, but the two-time winner of the "most human computer" bronze, A.L.I.C.E., can be talked to online (http://www.alicebot.org/). She is the pinnacle of the current AI movement. Make of that what you will.

 

What Use Consciousness?

A few months ago we had a massive snowstorm, and the power went out. After the requisite swearing and lighting of candles, my mother went off to call the phone company and report the outage.

She came back in a state of shock. "I talked to somebody," she said, "but it might have been a person, and it might have been a computer. I couldn't tell."

My brother and I both ended up calling the phone company for updates, and neither of us could tell either. We might have been talking to a very bored, overworked, stressed human, or we might have been talking to a computer. Of course, if we'd asked what color his hair was, we would presumably have gotten a polite "I don't understand" from the computer and some intelligible response from the man (even if it was "What the #@*! are you talking about?"), but in a very limited, highly controlled way, it is possible that a computer had just passed the Turing test.

Devices like this are the main focus of the artificial intelligence industry (as opposed to artificial intelligence research), which wants something that can handle as many routine inquiries as possible, thus eliminating the need for expensive humans.
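It doesn't take much code to build such a device, at least in miniature. The sketch below is guesswork - the keywords and scripted replies are invented, not the phone company's actual system - but the principle, keyword matching with a polite fallback, is the industry workhorse:

    def outage_bot(caller_says):
        # Match the caller's words against a short script; anything
        # off-script gets the polite "I don't understand."
        scripts = [
            (("outage", "power", "down", "dead"),
             "We are aware of an outage in your area. Crews are working on it."),
            (("when", "restored", "update"),
             "Service is expected to be restored within twenty-four hours."),
            (("report",),
             "Thank you. Your report has been logged."),
        ]
        text = caller_says.lower()
        for keywords, reply in scripts:
            if any(k in text for k in keywords):
                return reply
        return "I'm sorry, I don't understand. Could you rephrase that?"

    # Routine inquiries get plausible, human-sounding answers...
    print(outage_bot("Is there an outage in my area?"))
    # ...but one off-script question unmasks the machine at once.
    print(outage_bot("What color is your hair?"))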

In a similar vein, the artificial intelligence effort reached a major landmark in 1997, when IBM's Deep Blue beat World Chess Champion Garry Kasparov. Skeptics such as Hubert Dreyfus had contended that a computer could never play chess as well as a person.
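How did Deep Blue do it? Not by thinking but by calculating: chess, for all its depth, is a game of formal rules, and formal rules can be searched by brute force. Deep Blue's actual evaluation machinery is far beyond a sketch, but the core algorithm, minimax search, fits in a few lines if we shrink the game to tic-tac-toe (a simplification of mine, not IBM's):

    def winner(board):
        # Return "X" or "O" if someone has three in a row, else None.
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        # Score a position for X: +1 a win, -1 a loss, 0 a draw.
        w = winner(board)
        if w:
            return 1 if w == "X" else -1
        moves = [i for i, s in enumerate(board) if s == " "]
        if not moves:
            return 0
        other = "O" if player == "X" else "X"
        scores = []
        for m in moves:
            board[m] = player
            scores.append(minimax(board, other))
            board[m] = " "
        return max(scores) if player == "X" else min(scores)

    def best_move(board, player="X"):
        # Try every legal move and keep the one minimax scores best.
        other = "O" if player == "X" else "X"
        best, best_score = None, None
        for m in [i for i, s in enumerate(board) if s == " "]:
            board[m] = player
            score = minimax(board, other)
            board[m] = " "
            if best is None or (score > best_score if player == "X"
                                else score < best_score):
                best, best_score = m, score
        return best

    # X to move on the board  X X . / O O . / . . .
    # Exhaustive search finds the winning square, index 2.
    print(best_move(list("XX OO    ")))  # prints 2

Scale that search up enormously, add opening libraries and a hand-tuned evaluation function, and you have something like Deep Blue. Notice that there is no understanding anywhere in the loop - only arithmetic.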

Now, both of these are clearly highly advanced machines. And yet neither one can pass the Turing test - neither even comes close. Neither one is satisfactory to those who pursue the goal of thinking machines.

When we talk about AIs, what is it, exactly, that we are looking for?

In his book What Computers Still Can't Do, Hubert Dreyfus categorizes intelligent behavior into four areas.

Area I is Associationistic. This ability is innate or learned by repetition and is not affected by the situation. The question has one answer, and that answer is given without reference to anything else that might be happening. Examples are memory games or the word-by-word translators commonly available on the Internet, which will happily churn out sentences of nonsense Spanish if you ask them to (a working sketch of one appears after Area IV, below).

Area II is Simple-Formal. It, too, is learned by rule, and applied in highly structured situations: examples are tic-tac-toe and mathematical proofs.

Area III is Complex-Formal, also learned by rule or practice but heavily dependent on the situation for the correct interpretation. Examples are chess (which computers can now play very well), Go (which they struggle with), and the recognition of complex patterns in noise - speech recognition, where a measure of success is finally, and slowly, being reached.

Area IV is Nonformal, which is entirely situation-dependent and learned by example and what can only be called intuition. Examples are riddles, effective translation, and conversation.
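The gap between the bottom and the top of that scale is easy to demonstrate. Here is an Area I "translator" of the kind just mentioned, in Python; the six-word dictionary is my own invention, but the method - look each word up in isolation - is essentially what those online translators do:

    # A toy word-by-word English-to-Spanish dictionary.
    EN_TO_ES = {
        "the": "el", "time": "tiempo", "flies": "moscas",
        "like": "como", "an": "un", "arrow": "flecha",
    }

    def word_by_word(sentence):
        # Translate each word in isolation; pass unknown words through.
        return " ".join(EN_TO_ES.get(w, w) for w in sentence.lower().split())

    # Prints "tiempo moscas como un flecha": every word is "right,"
    # and the Spanish is gibberish, because nothing in the program
    # knows that "flies" here is a verb, not a swarm of insects.
    print(word_by_word("Time flies like an arrow"))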

If you're having trouble imagining Area IV, think of the very first question in Tolkien's famous riddle game:

What has roots as nobody sees,

Is taller than trees,

Up, up it goes,

And yet never grows?

Now imagine handing that to a computer and seeing what it makes of this kind of sideways, elliptical talk. Even the very densest of us (like me) could come up with a few guesses, tossing out whatever associations the words brought to mind, and most of us would eventually come up with the right one - mountains. But can you explain the rules of the riddle game? Can you explain the logical progression by which an answer is reached, and program those steps into a computer in such a way that it would be able to answer, not just this riddle, but any riddle it was asked?
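The best a program can do without that explanation is the word-association fumbling I just described - and even that has to be rigged in advance. A sketch (the association table is invented, and that is precisely the problem: nobody knows how to build one that covers any riddle):

    # Hand-built word associations. Writing this table is answering
    # the riddle in advance; the program contributes only counting.
    ASSOCIATIONS = {
        "mountain": {"roots", "taller", "trees", "up", "grows"},
        "tree": {"roots", "grows", "up", "taller"},
        "tower": {"taller", "up", "high"},
    }

    def guess(riddle_words):
        # Score each candidate by how many riddle words it links to.
        words = set(riddle_words)
        scores = {answer: len(links & words)
                  for answer, links in ASSOCIATIONS.items()}
        return max(scores, key=scores.get)

    # Prints "mountain" -- but only because the answer was built
    # into the table. Hand it any other riddle and it is helpless.
    print(guess(["roots", "sees", "taller", "trees", "up", "goes",
                 "never", "grows"]))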

But what does it matter if a computer can't answer a riddle?

Think of a recent conversation you've had - any conversation. Try to analyze the reasons you said the things you did. Conversations are riddles on a massive scale. As a matter of fact, almost any decision you'll make - from what restaurant you'll eat at to whether you'll quit your job - utilizes the same principle: not clearly defined rules, but a web of complicated memories, emotions, and associations.

This is where the limits of modern-day computers are reached, and without an artificial intelligence breakthrough, this is where they will stay. Without some kind of guiding consciousness, computers are merely excellent data repositories and calculators, capable of responding to any situation they have been programmed for - but never able to adapt their responses to a new situation, and never able to produce more than a rote answer to the question asked. Never able to think.

How programmers hope to overcome the consciousness problem will be discussed in the second half of this article.

Works Cited

The A.L.I.C.E. Project. http://www.alicebot.org

Dreyfus, Hubert. What Computers Still Can't Do: A Critique of Artificial Reason. ISBN 0262540673.

The Loebner Prize Homepage. http://www.loebner.net/Prizef/loebner-prize.html

Tolkien, J.R.R. The Hobbit. ISBN 0345339681.

Turing, Alan. "Computing Machinery and Intelligence." http://www.loebner.net/Prizef/TuringArticle.html