Vision: A Resource for Writers
Lazette Gifford, Editor
Vision@sff.net

Artificial Intelligence, Part II

By Kat Feete
2003, Kat Feete


Read Part I of this series here

The difficulty in creating artificial intelligence lies in building computers that can think, rather than simply reply to preprogrammed stimuli with automated responses.  Computers excel at simple, reflexive response, but how can they be built to consider, judge, and react to unexpected information, as human beings do every day?

Programming Around the Consciousness Problem

The most standard answer to this problem in AI research has been simple: build bigger databases. The assumption is that there are rules to conversation, decision making, and other nonformal human activities; they're just more complicated than we had at first realized. If we can just build a database big enough, with enough cross-references, flexibility and a greater ability to randomize, the "web of association" by which humans make decisions can be reproduced. It's on this assumption that the voice-answering systems, as well as more theoretical applications like A.L.I.C.E., are based. Such systems are still very limited, but every few years someone comes out with an even bigger database in hopes of fixing the consciousness problem once and for all.
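The flavor of this rule-database approach can be sketched in a few lines of code. This is only a toy illustration of the idea behind pattern-matching systems like A.L.I.C.E., not their actual implementation; the patterns and responses here are invented for the example.

```python
import re

# Toy pattern-matching chatbot: each "rule" maps a text pattern to a canned
# response. Anything the rules don't cover falls through to a stock dodge --
# the telltale limitation of the approach.
RULES = [
    (re.compile(r"\bhello\b", re.I), "Hello! How are you today?"),
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bweather\b", re.I), "I hear it's lovely outside."),
]

def reply(line: str) -> str:
    for pattern, response in RULES:
        match = pattern.search(line)
        if match:
            return response.format(*match.groups())
    # Unexpected input gets a deflection, not thought.
    return "That's interesting. Tell me more."

print(reply("Hello there"))
print(reply("my name is Elliot"))
print(reply("Do you dream?"))
```

The "bigger database" strategy amounts to adding more and more such rules, with more cross-referencing between them; the fallback line never goes away, it just gets triggered less often.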

The result is nearly universal: the computers crash.

Still, programmers are optimistic, and probably not without reason. It's obvious that there is some system of rules that we are all following, something that prevents us from answering "hello" with a quick blow to the other person's head. (Well, most of us, anyway.) The limitation, they insist, is not in the theory but in the mechanical capabilities of the technology. Faster processors, greater memory, and more storage capacity will fix the problem.

Skeptics point out that the silicon brain already performs its basic operations far faster than the human one, and can store far, far more information. Will making it a hundred, a thousand, ten thousand times faster and bigger really solve the problem?

 

Building Consciousness from the Bottom Up

Another approach to the problem goes at it from the other end. Rather than providing the computers with massive databases, switching them on, and expecting an artificial intelligence to spring, like Minerva, fully formed from the mainframe, some programmers are trying to build very, very simple programs, which are then released into carefully structured environments and allowed to learn from them. Such programs are called "artificial life," and they are becoming increasingly popular.

The premise of AL is simple. Consciousness, its advocates believe, is an epiphenomenon -- a response of the organism to its environment, which allows it to deal more effectively with its surroundings. Or, to put it more simply, we became conscious because it heightened our chance of survival. The theory is that if we start with simple programs and put them through roughly what we humans had to go through, only greatly sped up, some of them will eventually hit on consciousness as a way of dealing with the world, just as we did.

Unfortunately, there have been few positive results in this forum, in spite of some hopeful results from the robotic end. Most of the AL movement is now focused on better understanding the evolutionary process through AL simulations like TechnoSphere (http://www.technosphere.org.uk ) and not on producing conscious machines. Nevertheless, many people still consider learning machines to be the way to go, and focus on perfecting and programming learning algorithms rather than on building monstrous databases.

Skeptics say that the successes of AL prove nothing save that a computer will do what it is programmed to do. Programmed to compete with other programs or robots for resources, a computer will obediently compete, but it is still doing nothing save carrying out the will of its makers. It is acting and reacting as its programming permits it, but it cannot act outside its programming. It cannot think.

 

Goals and Souls: Beyond Programming?

The last major approach to AI comes not from researchers, but from some of its most vocal opponents. These folks - most notably Hubert Dreyfus and John Searle - say that none of these things, and no programming, howsoever clever, will make a computer think. Nothing built, they insist, can duplicate the complexity of an organic system. We are more than the sum of our parts.

There is a depressing amount of sense in this. The fictional classic super logical and emotionless computers -- such as Data of Star Trek or Asimov's famous robots -- are fading in popularity, and may in fact be impossible. A famous neuroscientist, Antonio Demasio, recently struck on something pertinent to this in his book Descartes' Error: Emotion, Reason, and the Human Brain, in which he describes the behavior of certain brain-damaged patients of his. These unfortunates were described by Demasio as "emotionally distant." Because of their injuries they are unable to dredge up any real feeling for previous relationships, or for anything at all. They were also perfectly rational and perfectly logical, they scored exceptionally well on tests designed to study their decision-making capabilities - and they were completely unable to handle day-to-day situations. One patient was given the task of sorting and ranking documents, but "... he was likely, all of a sudden, to turn from the sorting task he had initiated to reading one of those papers, carefully and intelligently, and to spend an entire day doing so. Or he might spend a whole afternoon deliberating on which principle of categorization should be applied: Should it be date, size of document, pertinence to the case, or another? The flow of work was stopped. One might say that the particular step of the task at which Elliot balked was actually being carried out too well, and at the expense of the overall purpose."

On the basis of evidence of this sort Demasio proposed that emotion, rather than being the evil stepsister of logic, might actually be the key component. Logic is the tool we use for organizing information, but emotion is the measure by which we are able to decide which of the hundreds of thousands of bits of information we process every minute are important. Computers, without the benefit of emotions, are in the same position as Demasio's patient: doing a task too well at the expense of an overall purpose they are unable to see.

But where do these emotions come from? Dreyfus suggests that they are a byproduct of our goals. We make goals for ourselves - from "survive" to "take over the planet."  Everything we see is viewed in light of these inner goals, with those things that help us towards our goals producing pleasure, and those that hinder us producing pain - a simplified view, to be sure, but roughly accurate. And where do these goals come from? From ourselves. From our sense of self. Therefore, Dreyfus concludes, true artificial intelligence is impossible. You can program a computer to simulate emotion, but you can't have real emotion without goals; you can program a computer to pursue certain goals, but these will still be your goals, and no more effective in producing conscious thought than training your child from the cradle to be a doctor is in producing a free-thinking human being. Goals imposed from the outside are not real goals.

Skeptics - and in this case they are many and loud - insist that Dreyfus and others like him are putting too much emphasis on some ephemeral "sense of self" or "soul" that creates a monopoly for humanity on self-consciousness and intelligence out of fear of the unknown. They point, too, at advances like A.L.I.C.E and Deep Blue, which are indeed worlds beyond what Dreyfus originally asserted the topmost limits of machine intelligence would be.

 

So How Can I Use This?

Theory is all very well, but for the writer the question will always be, "How can I turn this into a story?"

Unless you're writing hard science fiction, little of this information will make it into the actual story. Knowing your background, however, can help you determine what shape the AIs in a given universe take. AIs based on massive databases, for example, are far more likely to be the sort of huge, massively powerful, super-intelligent machines found in so much Golden Age science fiction (and in some modern stuff as well, particularly Ian Banks' Excession.) They might also be faintly alien; with that much speed, power, and data at their command, they would think differently from their human creators, for better or for worse.

AIs based on the artificial life model would be a far different type. For one thing, there is the question of how they are created. Was one conscious machine enough to create a race of them, or do the machines still have to go through a culling process, competing in artificial arenas with the winners - those that can evolve and adapt themselves far enough to be truly conscious - granted their freedom... or auctioned off to the highest bidder? Or perhaps they're merely the byproduct of sophisticated "teachable" machines and software, common as dirt, which may from time to time "learn" enough to become conscious (as Images, sophisticated custom software programs designed by high-class hackers, sometimes do in Daniel Keys Moran's The Long Run). What if your character woke up one morning to discover her vacuum cleaner had achieved sentience? Such a concept may seem more approachable than the distant, know-it-all supercomputers of the Golden Age.

The notions of the anti-AI faction have their place too - what is a discouraging dead end to a programmer is manna to a writer, who can ask: well, what if a computer somehow evolved a sense of self? What would push it into doing that, and what would it need to do that? What could make a computer form its own goals, begin to live its own life - and what kind of goals would those be? The options range from touching to downright sinister. This idea has the appeal of the uncontrolled - both of the other options are, after all, direct creations of humanity in one way or another.  However, something that we make by mistake, and which does not necessarily want what we want, is bound to cause difficulties.

The possibilities are endless, limitless, frightening and exciting. Machine intelligences could be the first aliens - the aliens we don't have to leave the planet to meet; the aliens we create ourselves. They are a science fiction trope which has fallen by the wayside, and they more than deserve a revival.

Works Cited

The A.L.I.C.E. Project. http://www.alicebot.org

Banks, Iain M. Excession. ISBN # 0553575376

Demasio, Antonio R. Descartes' Error: Emotion, Reason, and the Human Brain. ISBN # 0380726475

Dreyfus, Hubert. What Computers Still Can't Do: A Critique of

Artificial Reason. ISBN # 0262540673

Moran, Daniel Keys. The Long Run. ISBN # 1576466396

TechnoSphere. http://www.technosphere.org.uk