We incur another risk whenever we try to escape the responsibility of understanding how our wishes will be realized. It is always dangerous to leave much choice of means to any servants we may choose — no matter whether we program them or not. For, the larger the range of choice of methods they may use, to gain for us the ends we think we seek, the more we expose ourselves to possible accidents. We may not realize, perhaps until it is too late to turn back, that our goals were misinterpreted, perhaps even maliciously, as in such classic tales of fate as Faust, the Sorcerer's Apprentice, or The Monkey's Paw (by W.W. Jacobs).
The ultimate risk, though, comes when we greedy, lazy master-minds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful, by using learning and self-evolution methods which augment and enhance their own capabilities. It will be tempting to do this, not just for the gain in power, but just to decrease our own human effort in the consideration and formulation of our own desires. If some genie offered you three wishes, would not your first one be, "Tell me, please, what is it that I want the most!" The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of ours: perhaps in the well-meaning purpose of protecting us from ourselves (as in With Folded Hands, by Jack Williamson), or of protecting us from an unsuspected enemy (as in Colossus, by D.F. Jones); or because, like Arthur C. Clarke's HAL, the machine we have built considers us inadequate to the mission we ourselves have proposed; or, as in the case of Vernor Vinge's own Mailman, who teletypes its messages because it cannot spare the time to don disguises of dissimulated flesh, simply because the new machine has motives of its very own.
Now, what about the last and most dangerous question, the one asked toward the end of True Names? Are those final scenes really possible, in which a human user starts to build a second, larger Self inside the machine? Is anything like that conceivable?
And if it were, would those simulated computer-people be in any sense the same as their human models before them; would they be genuine extensions of those real people? Or would they merely be new, artificial person-things which resemble their originals only through some sort of structural coincidence? What if the aging Erythrina's simulation, unthinkably enhanced, is permitted to live on inside her new residence, more luxurious than Providence? What if we also suppose that she, once there, will still be inclined to share it with Roger — since no sequel should be devoid of romance — and that those two tremendous entities will love one another? Still, one must inquire, what would those super-beings share with those upon whom they were based? To answer that, we have to think more carefully about what those individuals were before. But since these aren't real characters, but only figments of an author's mind, we'd better ask, instead, about the nature of our selves.
Now, once we start to ask about our selves, we'll have to ask how these, too, work — and this is what I see as the cream of the jest: because it seems to me that inside every normal person's mind there is, indeed, a certain portion which we call the Self — but it, too, uses symbols and representations very much like the magic spells used by those players of the Other Plane to work their wishes from their terminals. To explain this theory about the working of human consciousness, I'll have to compress some of the arguments from "The Society of Mind", my forthcoming book. In several ways, my image of what happens in the human mind resembles Vinge's image of how the players of the Other Plane have linked themselves into their networks of computing machines — by using superficial symbol-signs to control a host of systems which we do not fully understand.
Everybody knows that we humans understand far less about the insides of our minds than we know about the world outside. We know how ordinary objects work, but nothing of the great computers in our brains. Isn't it amazing that we can think, not knowing what it means to think? Isn't it bizarre that we can get ideas, yet not be able to explain what ideas are? Isn't it strange how often we can better understand our friends than ourselves?
Consider again how, when you drive, you guide the immense momentum of a car, not knowing how its engine works, or how its steering wheel directs the vehicle toward left or right. Yet, when one comes to think of it, don't we drive our bodies the same way? You simply set yourself to go in a certain direction and, so far as conscious thought is concerned, it's just like turning a mental steering wheel. All you are aware of is some general intention — It's time to go: where is the door? — and all the rest takes care of itself. But did you ever consider the complicated processes involved in such an ordinary act as, when you walk, changing the direction you're going in? It is not just a matter of, say, taking a larger or smaller step on one side, the way one changes course when rowing a boat. If that were all you did when walking, you would tip over and fall toward the outside of the turn.
Try this experiment: watch yourself carefully while turning — and you'll notice that, before you start the turn, you tip yourself in advance; this makes you start to fall toward the inside of the turn; then, when you catch yourself on the next step, you end up moving in a different direction. When we examine that more closely, it all turns out to be dreadfully complicated: hundreds of interconnected muscles, bones, and joints are all controlled simultaneously by interacting programs which locomotion scientists still barely comprehend. Yet all your conscious mind need do, or say, or think, is Go that way! — assuming that it makes sense to speak of the conscious mind as thinking anything at all. So far as one can see, we guide the vast machines inside ourselves not by using technical and insightful schemes based on knowing how the underlying mechanisms work, but by tokens, signs, and symbols which are entirely as fanciful as those of Vinge's sorcery. It even makes one wonder if it's fair for us to gain our ends by casting spells upon our helpless hordes of mental under-thralls.
Now, if we take this only one more step, we see that, just as we walk without thinking, we also think without thinking! That is, we just as casually exploit the agencies which carry out our mental work. Suppose you have a hard problem. You think about it for a while; then after a time you find a solution. Perhaps the answer comes to you suddenly; you get an idea and say, "Aha, I've got it. I'll do such and such." But then, were someone to ask how you did it, how you found the solution, you simply would not know how to reply. People usually are able to say only things like this:
"I suddenly realized…"
"I just got this idea…"
"It occurred to me that…"
If we really knew how our minds work, we wouldn't so often act on motives which we don't suspect, nor would we have such varied theories in psychology. Why, when we're asked how people come upon their good ideas, are we reduced to superficial reproductive metaphors, talking about "conceiving" or "gestating", or even "giving birth" to thoughts? We even speak of "ruminating" or "digesting" as though the mind were anywhere but in the head. If we could see inside our minds, we'd surely say more useful things than "Wait. I'm thinking."
People frequently tell me that they're absolutely certain that no computer could ever be sentient, conscious, self-willed, or in any other way "aware" of itself. They're often shocked when I ask what makes them sure that they, themselves, possess these admirable qualities. The reply is that, if they're sure of anything at all, it is that "I'm aware, hence I'm aware."