The Extraordinary Future: Chapter 4
Winfred Phillips: Author
Introduction
Our authors discuss many possible scenarios for the actual mechanism of human-computer mind transfer. These scenarios involve the type of robot to be constructed, ways to capture the details of a particular human mind, and ways to upload that mind into a robot. I will first describe some details of the scenarios presented, then attempt to classify them within a common framework of options, and finally comment on their feasibility.
In the scenarios presented there are usually several parts to the process. First, we have to ascertain the contents or details of an individual's mind. Primarily this would be done by scanning it, either invasively (destructively) or noninvasively. Then we have to transfer the results of the scan into a robot brain or construct a robot brain to match the structure of the human brain scanned. Some scenarios have the robot brain ready and waiting, while others portray it as constructed 'on the fly.'
Mind Transfer in the Extraordinary Future
Let's consider Kurzweil's comments about determining the contents and structure of the human mind/brain, which could be done through either invasive (destructive) procedures or noninvasive scanning. Destructive procedures are usually taken to involve tearing apart the brain part by part or slice by slice and observing its structure. (Though these authors do not mention the alternative of inserting probes, presumably that too would change the brain.) Presumably this would kill the patient if he or she were not already dead, unless some kind of mind transfer were being carried out simultaneously. In contrast, noninvasive procedures do not significantly alter the brain. One possible technique would be noninvasive scanning; Kurzweil envisions using advanced MRI or optical scanners. Today MRI scanners are able to view individual somas (neuron cell bodies). Newer, higher-resolution MRIs under development will be able to scan individual nerve fibers ten microns in diameter. Eventually it will be possible to scan the presynaptic vesicles 'that are the site of human learning.' An optical scanning technology already in use and being refined can observe the firing of individual neurons. Mapping the human brain one synapse at a time may seem overwhelming, but then so did the Human Genome Project, which should be completed in a few years (Kurzweil, 1999, pp. 122-123).
Kurzweil envisions two methods of using brain scan results. The first aims to understand and transfer the contents of the brain by focusing on overall patterns. The goal here is to ascertain the architecture and implicit algorithms of interneuronal connections in different regions of the brain. Learning the overall pattern is more important than determining the exact position of every neuron. This information would then be used to design neural nets that operate in a fashion similar to the scanned brain, even though the structure of the robot brain would not match the structure of the scanned brain. This could be thought of as similar to running the same software, with the same end-user functionality, on two different hardware platforms. The second method seeks to re-create the actual structure of the scanned brain, and here every detail needs to be captured. Copying it is more important than understanding how it works (Kurzweil, 1999, pp. 124-125). Presumably the method envisioned for using brain scan results will affect how much and what kind of information one attempts to obtain from the scan.
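The contrast can be sketched in toy form (the tiny 'brain' and the Python names below are my own illustration, not anything Kurzweil specifies): the second method copies the internal wiring outright, while the first only tries to reproduce the observed behavior with whatever internal organization is convenient.

```python
# Toy illustration (mine, not Kurzweil's) of the two ways of using a scan.
# The "brain" is just a hidden rule mapping stimuli to responses.

def source_brain(x):
    return 3 * x + 1          # hidden "wiring" revealed only by a full scan

# Second method: structural copy. A detailed scan lets us duplicate the wiring.
def structural_copy(x):
    return 3 * x + 1          # same internal structure, different "material"

# First method: functional copy. Observe behavior, then build something that
# merely behaves the same, with a quite different internal organization.
observations = {x: source_brain(x) for x in range(10)}

def functional_copy(x):
    if x in observations:
        return observations[x]            # replay what was observed
    # crude linear extrapolation from the last two observed points
    slope = observations[9] - observations[8]
    return observations[9] + slope * (x - 9)

print(all(functional_copy(x) == source_brain(x) for x in range(10)))  # True
print(functional_copy(20) == source_brain(20))                        # also True here
```

Both toy copies agree with the source on the tested stimuli, but only the structural copy shares its internal organization, which previews the questions about 'equivalence' taken up later in this chapter.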
Paul and Cox mention a number of considerations and complications of mind transfer not raised by the other authors. They remind us that the brain already moves memories and information around. First the brain moves perceptual information into the hippocampus to become part of a temporary memory pool. Within a week or two the brain will move it to the cortex for long-term storage or else lose it. Once it is in the cortex, the brain may shift the memory to additional synapses so that it is more robust (Paul & Cox, 1996, p. 187).
This suggests that the contents of the mind could be transferred into a different brain, rather than just elsewhere in the same brain. But the brain is not a static set of neurons and synapses that can accommodate an alien set of data. In a sense the brain evolves to fit the mind, because synaptic connections develop to fit a set of experiences, the data stimulating and strengthening the synapses. 'The neural network for any given brain is constructed for the particular mind and only that mind.' Now consider a scenario in which a donor mind is being transferred to an existing conscious robot brain. If the robot brain works anything like a human brain, the 'synaptic connections' (or their equivalent) will have to be reconfigured to accept the new data from the donor mind, and of course this destroys the old recipient mind because it resets the connections' strengths. One might instead use an 'empty' robot brain as the recipient, but the notion of an empty brain needs to be clarified. To empty a brain, the synaptic connections would have to be destroyed in an attempt to recreate a brain similar in these respects to that of a baby. As the new mind is transferred, the recipient robot brain would have to grow new synapses and set their strengths (Paul & Cox, 1996, pp. 187-188).
Paul and Cox also discuss the possibility of mind transfer via a transplant process, in which parts of a person's brain are systematically replaced by electronic parts from a robot brain. They envision replacing each neuron, synapse, and connection with a circuit switch and line that functions like the natural system. At the end the mind has been transferred to the robot brain. They also mention the possibility of growing the brain to twice its size (twice as many neurons), with all information duplicated. The brain is then split down the corpus callosum and each half put into a new body, and you wind up with two people and a successful mind duplication. However, this would not be an instance of mind transfer from human brain to robot brain (Paul & Cox, 1996, p. 189).
This last possibility mentioned by Paul and Cox leads to an important point. The authors depicting an extraordinary future are concerned that we achieve personal immortality by transferring out of our inferior 'wetware' of human brain and body tissue into something more durable or replaceable. We think of robots as made of metal and silicon, but they could be made of any material. However, we should rule out of our discussion any mind transfer scenario that merely duplicates the human brain material and uses that as the recipient. Obviously the task of creating an exact human duplicate, possibly by some kind of cloning process, involves some of the same issues as human-computer mind transfer. But we are primarily concerned with the question of transferring a human mind to something significantly different. Consequently our discussion will not spend any significant time on the question of human-to-human mind transfer.
In his earlier work Mind Children, Moravec presents several possible scenarios for mind transfer. He clearly recognizes that the transfer must maintain personal identity for it to be a continuation of the same person. Transplanting a brain into a robot body would not be enough, because it would not overcome the limited and fixed intelligence of the human brain. What is needed is a way to get the mind out of the brain (Moravec, 1988, p. 109). Here of course we see that another goal of mind transfer is not immortality but getting a lot smarter.
Moravec describes one scenario in which a person (you) undergoes the transfer in an operating room, at the hands of a robot brain surgeon. Your skull, but not your brain, has been anesthetized, and you are conscious. With your 'brain case' open, the robot surgeon's hand contains instruments that scan a few millimeters of your brain surface, allowing a three-dimensional, high-resolution MRI image to be built. At the same time, magnetic and electric antennas reveal the moment-by-moment neuron pulses. Because of the 'comprehensive understanding of human neural architecture' available by then, all this data allows the robot surgeon to write a program on the spot that models the behavior of the scanned brain tissue. This program is then installed in a 'small portion' of a nearby computer and activated. Meanwhile, the robot surgeon's hand measures the inputs that portion of brain tissue is receiving, which are presumably also duplicated in the computer, allowing a comparison of the output from that portion of your brain and from the computer. The surgeon then 'fine-tunes' the computer simulation until the correspondence is 'nearly perfect.' To give you confidence, you are provided with a push-button that allows you to switch back and forth between using your brain tissue and the simulation. Pressing the button activates electrodes in the surgeon's hand (Moravec, 1988, p. 109), which provide input to nearby brain tissue from the output of the computer simulation rather than from the portion of your brain being modeled. 'As soon as you are satisfied' that the simulation is perfect, the simulation connection is permanently established, and the simulated brain tissue is removed. In a similar fashion the robot surgeon works its way through your entire brain. 'Eventually your skull is empty, and the surgeon's hand rests deep in your brainstem. Though you have not lost consciousness, or even your train of thought, your mind has been removed from the brain and transferred to a machine.' When the surgeon lifts out its hand, your body goes into spasm and dies, but the computer simulation is then quickly disconnected from the cable leading to the surgeon's hand and reconnected to a new body, and your metamorphosis is complete. Moravec also notes a variation on this type of transfer in which it is done in 'one fell swoop' by a high-resolution brain scan, without surgery (Moravec, 1988, p. 110).
Another approach, one 'more psychological,' is to have a computer shadow you. A portable computer is programmed with the 'universals of human mentality,' your specific genetic makeup, and any 'conveniently available' details of your life. As you go about your business, carrying this computer, it records how you behave and monitors your brain while it learns to mimic you exactly, and it can then anticipate your responses and actions. When you die the program of this computer is installed in a robot body that can then take over your life; Moravec implies that it would be you (Moravec, 1988, pp. 110-111).
Still another method involves connecting to the corpus callosum, the large bundle of nerve fibers connecting the two hemispheres of the brain. Your corpus callosum would be severed and the severed ends connected to cables leading to a computer. The computer passes the 'traffic' back and forth as the corpus callosum did before, and it also monitors this traffic. From what is learned during this 'eavesdropping' the computer constructs a model of your mental activities. Gradually it inserts its own messages into the flow, 'insinuating itself into your thinking' and giving you new knowledge and skills. As you age and your brain starts to deteriorate, the computer takes over the lost functions, and when your brain dies, your mind 'finds itself entirely in the computer.' In the future it might be possible to do this without invasive surgery; perhaps you would wear a helmet or headband that monitors and alters the traffic between hemispheres using carefully controlled electromagnetic fields (Moravec, 1988, pp. 111-112).
The technology for scanning human brains to the desired level, building robot brains or parts of brains, and transferring minds into such robot brains of course does not yet exist. But when it does, our authors predict a variety of possible lifestyles for humans and robots. According to Kurzweil, in the second half of the next century people will make this leap. Initially there will be partial 'porting' via neural implants, but eventually 'and well before the twenty-first century is complete, people will port their entire mind file to the new thinking technology' (Kurzweil, 1999, p. 126). We have seen that at times Kurzweil seems hesitant to come to a conclusion, but here he certainly doesn't waffle on this prediction.
As we port ourselves, we will also vastly extend ourselves. Kurzweil thinks that $1,000 of computing in 2060 will have the computational capacity of a trillion human brains. So we might as well multiply our memory a trillionfold, greatly extend our recognition and reasoning abilities, and plug ourselves into the pervasive wireless-communications network. While we are at it, we can add all human knowledge as a readily accessible internal database (Kurzweil, 1999, pp. 126, 128).
Moravec too notes that with the transfer some things will change. Many of your old limitations are overcome. You can think 'a thousand times faster.' The program can be copied into similar machines, 'resulting in two or more thinking, feeling versions of you.' You can decide to move your mind from one computer to another that is more advanced or better suited to a particular environment. The program can also be copied, so that if the machine 'you inhabit' is destroyed, the 'tape can be read into a blank computer, resulting in another you minus your experiences since the copy. With enough widely dispersed copies, your permanent death would be highly unlikely' (Moravec, 1988, p. 112).
You can also travel faster than you can now. 'As a computer program, your mind can travel over information channels,' perhaps encoded as a laser message beamed light-years away into a robot in a distant place. Further changes are possible. Suppose you devise a way to build a robot brain in a distant place out of neutrons. Nuclear reactions are a million times quicker than chemical ones, so you could think a million times faster. Either your original body could remain dormant during the trip, receiving you on return, or it could be kept active, in which case 'there would be two separate versions of you, with different memories for the trip interval' (Moravec, 1988, p. 114). Any residual doubt that Moravec thinks of the mind as a computer program has been dispelled.
Your personality will change as a result of your new abilities. Because you can think faster, you no longer must react instinctively to events--you can think it through: 'You will have time to read and ponder an entire on-line etiquette book when you find yourself in an awkward social situation.' But because you will be faster, you might be more easily bored unless you change your program to 'retard the onset of boredom' (Moravec, 1988, p. 114).
Anyone who takes advantage of the brain-porting technology will be in essence immortal. Our identity will be based on our evolving mind file: 'We will be software, not hardware.' Currently our software cannot grow because it is stuck in our human brain, but freed from this, our minds will grow. Porting from the current computer to one more advanced will be like transferring files to a new PC. 'Our immortality will be a matter of being sufficiently careful to make frequent backups. If we're careless about this, we'll have to load an old backup copy and be doomed to repeat our recent past' (Kurzweil, 1999, pp. 128-129).
In order to provide for experience, we would replace our ordinary bodies with new ones manufactured through nanotechnology, the building of machines at the atomic level. These could be built to be self-replicating (Kurzweil, 1999, pp. 137, 139). This would enable the building up of organs and other parts of the body. But we won't always need to go somewhere physically, because we can visit virtual environments through implants in our brains. In fact, Kurzweil thinks, this will be the essence of the World Wide Web in the second half of the next century. We will be able to plug into various virtual environments and even select a virtual body for the trip (Kurzweil, 1999, pp. 142-144).
While I have presented Moravec's views of the nature of human-computer mind transfer, I don't want to leave the impression that this is the main focus of his interests. To a lesser extent the following comments apply to the other authors as well, though Moravec's imagination runs far beyond that of the other authors. His interest in the future is much more diverse, encompassing the possibility of human-computer mind transfer, humans living as brains in vats and in virtual reality, robots developing independently, and various creatures colonizing physical space and cyberspace. Consider how advancing technology might widen the range of possibilities. When direct connections to the nervous system become possible, a human could live as a brain in a vat. The brain, physically maintained by life-support machinery, would be mentally sustained by the connection of its peripheral nerves to a computer providing a simulation of the world and its body. Moravec suggests that this situation might be useful for accident victims, pending creation of a new body. Of course the virtual life of the brain could still be affected by anything affecting the well-being of the vat. This problem could be eliminated if the brain itself were replaced by a brain simulation. Damaged or endangered parts of the brain could be replaced with functionally equivalent simulations. Individuals would then be able to survive total physical destruction, kept alive as 'pure computer simulations in virtual worlds' (Moravec, 1999, pp. 191-192).
A simulated world hosting a simulated person can be a closed self-contained entity. It might exist as a program on a computer processing data quietly in some dark corner, giving no external hint of the joys and pains, successes and frustrations of the person inside. (Moravec, 1999, p. 192)
There is no limit in principle on how indirect the relationship can be between simulated entities and the simulation. The simulation's internal relationships would be the same no matter what platform the program was running on, as long as it was running correctly, whether it was running slowly, quickly, backwards in time, or intermittently, and no matter how the data were stored. None of this would have any impact on the virtual lives of the conscious inhabitants of the simulations, though it might make it difficult for outsiders to interpret what is going on (Moravec, 1999, pp. 192-193).
Moravec believes that humans need a sense of body, and so transplanted human minds will need either a physical body or the illusion of having one (a simulated body, as above) to provide a consistent sensory and motor image. Computer programs do not need bodies, and thus Moravec calls a chess program, for example, a pure mind. The fact that humans need a body or body simulation means that humans would not be as competitive as pure robots in the struggle for survival in cyberspace. Humans would have to 'lumber about in a massively inappropriate body simulation.' Every interaction with the universe would first need to be 'analogized' into a recognizable physical or psychological form, which would be inefficient compared to creatures who need to meet no such requirement. Some streamlining of the interface to cyberspace would be attempted. Simulated sense impressions could be reduced to mental abstractions, and some of these innermost mental processes replaced by 'cyberspace-appropriate programs purchased from the AIs.' By a series of such replacements a human might liberate thinking procedures from any connection to a body. But Moravec thinks the resulting bodiless mind would no longer be human but an AI (Moravec, 1999, pp. 170-173).
Finally, I should point out that the inventive Moravec even raises the issue of merging minds. 'It should be possible to merge memories from disparate copies into a single one.' Memories of events would need to indicate which body they happened in to avoid confusion, much in the same manner that we associate a context of time and place with current remembered events. Merging could involve two copies of the same individual or two individuals. Moravec even thinks selective mergings involving just some memories would be a 'superior form of communication' enabling rapid and effective sharing. 'Concepts of life, death, and identity will lose their present meaning as your mental fragments and those of others are combined, shuffled, and recombined' (Moravec, 1988, p. 115).
Varieties of Mind Transfer
This is enough of a summary of the authors' views on the mechanism of transfer to enable us to try to classify and evaluate them. The authors who present scenarios of the extraordinary future have clearly given some attention to the issue of the mechanism of the transfer of mind from human to robot. They have suggested a number of ways this transfer could happen. I should point out that not only technical issues are involved; some of these ways involve peripheral issues of ethics and personal identity. For example, if a robot is first created as a conscious being or potentially conscious being (maybe it's not conscious yet because it hasn't been turned on), then it might be considered to have its own right to life. How then could one annihilate its self or identity by transferring the contents of another being into its brain? Or consider that on some of the scenarios the contents of a human's brain are transferred to the robot while keeping the human being functioning and conscious. After the 'transfer,' we have the robot with the human's mind and the still-existing human with the human's mind. If this is the case, it may not be as clear as the authors assume that one's identity has been transferred; perhaps it has just been duplicated, much as in the creation of a clone. But initially I want to comment on the technical issues and refrain from commenting further on these ethical and personal identity aspects until the end of this chapter. A full discussion of personal identity will have to wait until the next chapter.
The mind-transfer mechanism scenarios often involve confusing differences of detail, and there are a number of ways one could group them. One way to classify them is with respect to how the robot is being built, and we might see three options here. The first option in this regard is to build a robot that is or can be a fully-functioning, conscious person, or that is similar to a human baby in terms of mental development. In this approach the details of the human who will undergo the transfer are ignored during this building process. Rather, knowledge of human capabilities in general is used to build a robot person or potential person. Then after this construction is completed, somehow the mind of the human person to be transferred is captured and uploaded into the robot.
The second option is to pay close attention to the details of the specific human person to be transferred when building the robot. The general idea here is that during construction one is trying to make the robot brain somehow more or less 'isomorphic' with that of the particular human brain or mind of the human undergoing the transfer. The key questions are about what 'more or less' and 'isomorphic' mean in this context. The level at which this isomorphism holds can vary, but the basic idea, and an important difference with the first option above, is that after construction of the robot is completed, there need be no further uploading of the contents of the mind or brain of the human. The transfer has already occurred.
The third option is for the human undergoing the transfer to experience a piece-by-piece series of brain-part transplants. This series of transplants could occur over an extended interval of time (perhaps during repeated visits) or more or less all at once, whether the human is conscious or not. When the transplants are complete the mind transfer is complete, and perhaps the next stage can be moving the resulting brain into a robot body, or the replacement of parts of the old human body with new robot parts, however one wishes to describe it.
We can see how the possible scenarios presented by our authors can be accommodated by the options of this basic scheme. (Often in the specific scenarios presented by our authors, enough details have been left out to allow a scenario to fit more than one option, depending on how such details are interpreted or supplied.) The first option, to build a fully functioning conscious or potentially conscious robot person, is not really favored by our authors. We can place some of Moravec's scenarios here because he leaves open details that allow them to be interpreted under this first option. For instance, in the scenario of the robot surgeon 'copying' the program elements of specific parts of your brain into a waiting computer (via writing an equivalent program), the computer might already be powerful and capable enough to function at the level of a person. It depends on whether the existing robot has already been programmed, and at how sophisticated a level. (The hardware will at least be there, but if no software has been loaded into the computer, it will not yet be a potential person.) Moravec also describes another, more 'psychological' approach that involves having a computer 'shadow' you. Moravec says that it has been given details of your genetic makeup and some details of your life, but with respect to your brain it has been given nothing beyond what Moravec calls the 'universals of human mentality.' This computer thus starts with few details about your brain, but it might already be capable of conscious personhood in general, though it is not yet you in particular. Moravec doesn't say whether this computer possesses those universals merely as stored data or has incorporated them into its own life and is a conscious entity in its own right. This computer is sophisticated enough to be capable of carrying out the transfer itself as it shadows you and learns to mimic you, so it might indeed already be a conscious or potentially conscious person about to be made into you after it does the transfer.
Paul and Cox mention the possibility of using this first option, but they also seem to realize more than the others the complications that would accompany it. Because the particular details of a human's mind are closely related to specific neuron synapses, these would have to be recreated in the robot, meaning the robot would have to start as empty like a baby or have its existing synapses destroyed and replaced.
Recall that the second option involves using the specific details of the mind or brain of the human undergoing the transfer in the construction of the robot brain. Kurzweil seems to favor this, but he really has two distinct situations in mind. In one situation, he envisions copying the overall patterns of the brain of the person undergoing the transfer. Information about these patterns would be used to design an equivalent robot brain. Here some details of the particular human brain are clearly relevant and essential, though the exact position of each neuron is not. The other situation he mentions, however, seeks to capture and recreate every detail of the structure of the particular human brain. It should be noted that in the re-creation in the robot brain there is no assumption that human neurons will be used, so the robot brain will not be an exact duplicate of the particular human brain with respect to all details. But the idea is that it will exactly duplicate that particular human brain's structure. Obviously Kurzweil is presupposing some kind of equivalence between the particular human brain and a robot brain such that, although they are not made of the same materials, they have some kind of structural identity. Kurzweil does not spell out exactly what this means.
This second type of option also seems to be the one considered by Paul and Cox when they note that the details of the human mind are already hard-wired, as it were, into the particular set of synapses existing in that person's brain. So unless the robot constructed were to be empty or equivalent to a 'baby,' the robot would have to be built with specific details matching the human brain details. As in the case of Kurzweil, there is no specification of the exact nature of the equivalence that allows a robot brain made of a different material to have the same detailed structure as the human brain it recreates.
Moravec's scenarios might be interpreted as falling within this second option as well. This would be the case if what he has in mind in both the robot surgeon case and the 'psychological' case is that prior to the transfer process the computer hardware exists but not the program needed to make it function as a person. If this is the interpretation, then on both scenarios as the particular details of your brain and mind are learned, the robot or computer gets the ability to function for the first time as a person only as those details are duplicated or copied (as software) into that robot or computer. Here also we might place Moravec's scenario of the computer monitoring your brain activity via the corpus callosum. The idea seems to be for you to carry this computer around and have it build a model of your brain program in itself, with this new computer program functioning to gradually replace the functions of the pieces of your brain as they age and die. Of course, we have seen that this scenario can be interpreted under the first option: if this computer is already a person or potential person, with the requisite hardware and software, then this scenario does fit in more with the first option. And if the computer actually installs its hardware in your brain as it takes over the functioning of specific brain parts, it might even fit the third option below. (But this seems forced since Moravec mentions nothing like this type of transplant when describing this type of scenario.)
The distinctive feature of the third scenario is for the human experiencing the transfer to undergo a series of transplants occurring within that human's brain. Rather than building a separate robot duplicate of the brain, or a copy or equivalent in another location, this option locates the transfer activity in the human skull. Here best fits one of the methods discussed by Paul and Cox, in which successive neuron transplants occur, each neuron, synapse, and connection being replaced with a circuit switch and line that functions like the natural system.
The above discussion has focused on classifying our authors' scenarios with respect to how and where the robot is being built. But there are other ways to classify the same scenarios. For example, one could classify them based on how the transfer is made (invasively or noninvasively) or how long the transfer is supposed to take (all at once or gradually). Keep in mind that since the scenarios are not fully specified with respect to all the relevant details, some interpretation must be supplied to make some of them fit this or that option of a classification scheme. Since I think that at this point we already have a basic understanding of the scenarios, I am not going to bore you further by attempting to classify them in all possible ways with respect to all possible differences. Let's turn rather to some of the underlying issues and questions that arise in evaluating the plausibility of the various scenarios.
One point has been touched on already in an earlier chapter. It concerns the fact that our authors really alternate between two different methods of getting a smart robot. The first is to create one through AI techniques; to do this we would have to overcome classic AI problems, and it would take years. The other is simply to create a robot brain by copying a human brain, though making it of a different material. These two methods correspond to the first two transfer scenario options above.
For the sake of argument let's assume that crucial problems mentioned in earlier chapters have been resolved, nanotechnology is available, all AI problems have been solved, and we are able to build a robot as smart as a human, a robot that could have a conscious mental life and be considered a person. Still, technical questions arise about the mechanism of the transfer. Most of these questions arise in the case of more than one option or scenario.
1. How can the details or content of a specific human mind or brain be captured in a form suitable for transfer to or re-creation in another device?
2. In various ways it is often claimed that the transfer involves creating a robot brain with either 'human-brain-equivalent' overall patterns or modules, or brain-equivalent particular features, details, or structures. What does 'equivalent' mean in this context?
3. Assuming a suitable notion of equivalence (as mentioned above) can be provided, how can such equivalence be shown, demonstrated, tested, etc.?
4. What further ethical and metaphysical problems might arise in connection with transplantation, activation, or other aspects of the transfer?
Capturing the Mind
I now delve into these questions. The first question is about capturing the details or content of a specific human mind. Both invasive and noninvasive techniques have been mentioned. Consider first the invasive methods. These, which destroy or change the brain, would presumably involve some surgical technique that cuts through or probes layer after layer of the brain at a very fine level of detail, perhaps cell by cell, or neuron by neuron for much of the brain. The idea is to construct some kind of three-dimensional model of the entire brain, at an incredible level of detail, by going through the brain from top to bottom systematically, or section by section, until all cells and their connections have been mapped onto the model. From this, the assumption goes, one would have a basis for the programming or construction of the robot brain.
The limitation of this type of model may be that it is too static. The brain is constantly active, with individual neurons firing many times a second in concert with many other neurons. Building a static model of brain anatomy would not seem to capture this brain activity adequately. A static model, if it includes all the current electrical states, might be enough if all you want is to build a duplicate in human brain tissue. But here we are trying to build something equivalent, and to do this we might need to see the brain 'in action,' as it were. It can be argued that the activity might be observed while the probing or cutting is taking place. But trying to record this brain activity using invasive techniques would seem to run into the following problem, at least on some scenarios. Since neuron activity seems to occur in concert with large numbers of fellow neurons, perhaps spread out over quite large regions of the brain, destroying or changing neurons with an invasive procedure would seem to preclude one from seeing these large patterns fully. You can't see a natural pattern of neuron firing if some of the neurons that would naturally be involved have been changed or destroyed by the probing or cutting. Also, some of this brain activity undoubtedly occurs in response to particular external environmental stimulation. But you can't for very long subject the human to particular environmental stimuli and observe the natural effect on activity within the brain if you are disturbing and even destroying the brain tissue at the same time.
Moravec thinks that the robot surgeon will be able to ascertain how a few neurons at a time work by stimulating them electrically and observing the activity of other parts of the brain in response. This assumes that the surgeon knows just how many cells to stimulate at a time, and in what way, which is a big assumption. But the present worry is that early in the process a significant portion of the brain will already have been destroyed, which causes problems for observing the rest in context. Moravec tries to remedy this by having the new robot circuits function immediately to take the place of the lost neurons. They had better be perfect, since the old ones are soon destroyed, precluding any later comparison. If they are not perfectly equivalent, we won't be seeing the remaining neurons in their proper context either. And can we be sure these circuits are equivalent to the lost neurons without system-testing them in the context of the new whole?
Noninvasive methods show more promise here, because it seems plausible to assert that MRI scans could occur without affecting the natural operation of the brain. But, again, it's not a static model of brain anatomy that we may want but a comprehensive knowledge of how a particular brain responds to a variety of stimuli. It's not clear how much stimulation needs to be done to determine exactly how a particular person's brain operates. You can't keep him on the operating table indefinitely; Moravec's invasive surgery requires testing a few neurons at a time, which, with 100 billion neurons, would take an eternity!
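To put a rough number on 'an eternity' (the scanning rate below is my own assumption for illustration, not a figure from Moravec):

```python
# Back-of-envelope estimate with an assumed (generous) scanning rate.
neurons = 100e9                   # roughly 100 billion neurons in a human brain
rate_per_second = 100             # assume the surgeon handles 100 neurons per second
seconds = neurons / rate_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.0f} years of continuous work")   # about 32 years
```

Even at a hundred neurons per second the procedure would take around three decades of uninterrupted work on the operating table; at 'a few at a time' the figure grows into the centuries.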
The robot surgeon is not supposed to rely on extensive testing anyway, just a little for each neuron group for 'fine-tuning.' This may not be enough. The problem is that the robot surgeon has a general knowledge of brains, not of yours in particular. As alluded to above, it's not clear what level of detail one would really want to observe. If you understood how a particular person's brain operated, you might know which neurons to observe together. Comprehensive knowledge not just of the human brain in general but of this particular brain may be needed if, as seems to be the case, the exact structure of neurons and their connections forms in response to particular genetic makeup and environmental stimuli, and this varies from person to person. Apparently a lot of individual hardwiring goes on during the infant and early-childhood development of the brain. So while it may be true in general that vision occupies a certain region of all human brains, for example, which specific sections do what may be determined more by individual differences among brains. It may not be as easy to pick out, for a particular person, the exact brain cells to stimulate to cause the person to feel thirst, for example, as it is to pick out his or her heart. But we then seem to be trapped in a circle--we first need to know the specifics of this brain to determine which groups of neurons to stimulate, but we need to know which groups of neurons to stimulate to learn the specifics of this brain. Recall that the robot surgeon here is not relying on detailed testing of the subject (observing responses to various stimuli) but purely on its knowledge of brain anatomy in general, to be followed by a little fine-tuning.
There is the further complication of whether one can learn a program by observing anatomy and physiology and even the brain in action in a variety of situations, which is supposed to be what the robot surgeon is able to do (or what would have to happen on any scenario that doesn't merely copy the brain cell for cell in human tissue form). The task is to write a program or programs for the robot brain based on ascertaining the programs working in the human brain. But how exactly can we find out what programs are working in a particular human brain?
Consider the case of current computers. Suppose you have a variety of inputs (the data) and know the rules (the program and the hardware). Theoretically you could predict the output, though in the case of a complicated program and large amounts of data it might be a difficult task. But if you were given only the input and output, could you determine the rules (the program and the hardware)? It seems not, because for any given set of inputs and outputs there would be more than one set of rules that takes you from the former to the latter. One could get from one number to another, for example, by a particular set of rules, or by the same set of rules with the added step of doubling and then halving the input, and so on. There is the additional complication that you have only a small set of the possible input/output combinations. Perhaps you have deduced what you think must be the rules, with the qualification that though the actual rules might be different, they are in some sense equivalent to those you have deduced. How do you preclude the possibility that the output might differ from what your rules predict if the input were different from what you have tested?
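The point can be made concrete with a toy example (my own, not the authors'): two programs that agree on every input you test, yet embody different rules.

```python
# Two different rule sets with identical input-output behavior.
def rules_a(x):
    return x + 5

def rules_b(x):
    # The same rules plus the pointless extra step of doubling and then
    # halving the input, as mentioned in the text.
    return (x * 2) // 2 + 5

# No black-box test on integer inputs distinguishes them:
print(all(rules_a(x) == rules_b(x) for x in range(-1000, 1000)))  # True
```

From observed input/output pairs alone, nothing selects rules_a over rules_b, or over the indefinitely many other programs that happen to agree on the cases tested.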
So it seems you need more than just input and output to arrive at the rules that take you from one to the other. Would it be enough to have perfect knowledge of the physical state of the computer, assuming this were possible? It would seem to depend on what you are allowed access to. If you could observe primary memory and the internal processor state (registers, etc.) only, you still might not be able to deduce the exact program since more than one program could have produced that state of the machine. It might be that not all of the program is loaded into RAM at the time you make the observations. Unless you happened to be able to read the program itself on a hard drive, for instance, you might not be able to determine it by examining the machine level states including those in primary storage. If you could gain access to the program stored in primary and secondary storage, and all of it were in fact loaded into such storage, then you might or might not be able to deduce the program as it would be written out in a particular language. It might still be the case that there could be more than one combination of programming language and program instructions that could have produced this state. Running the machine under a variety of data might enable you to narrow this down.
In the case of a modern digital computer one can imagine theoretically being able to 'reverse engineer' the code in the above fashion, based on knowledge of input, output, and hardware states, because we already know (or could imagine determining) the correlation between particular hardware states and the code that produced them. That is, we have built the computer and the operating system running it, and we know what kinds of code produce what kinds of changes in internal hardware states. So working backwards seems at least theoretically possible if we can observe not just input and output but also hardware states. But note that even here some extremely tedious mapping of machine-level states back to the higher-level language would be required in order to deduce the actual program code. And so far we have been talking of theoretical possibilities only. If we try to be realistic, the problem of unraveling any reasonably complicated system seems practically insuperable. Any large commercially produced software system undoubtedly has many bugs, and this represents a situation in which the programmers knew the language and specified the rules; Microsoft Word, for instance, probably has many bugs. Bugs complicate the problem in two ways. First, if one wants to determine the exact code, one has to reverse engineer the bugs too, since they are in the code! In reverse engineering one is tempted to assume coherent values, rational and logical relationships, and so on, but bugs might exist that make data incoherent or make parts of the program seem irrational. Second, the fact that bugs are present in great quantity in forward-engineered software shows how hard software creation is to accomplish. How much more difficult must it be to work backward and reverse engineer the software from the data and machine states? How difficult would it be, for example, with no knowledge of the application or the high-level programming language, to determine the C code for Microsoft Word, which must run into the millions of lines? But then how much more complicated is the human brain (billions of neurons and trillions of connections) than a word processor? How complex a task would it prove to be to examine the states of a particular brain and determine the program that produces its activity, with no bugs introduced in the reverse engineering?
The above discussion of current computers assumes the situation of deriving a program in the context of modern sequential processing, stored program, von Neumann machines. As we have discussed, it seems that the human brain is not this kind of machine but rather a massively parallel processor. It is not clear that in the case of the brain there is something analogous to the master program of a large scale computer system. We don't have a 'hard drive' located where we can look to find the program, and it is not clear there is a program anyway. So the attempt to ascertain the exact program of a particular person's brain may be undermined by the fact that it rests on a faulty assumption, irrespective of the other problems cited above. All we have are states of hardware, much like the old fashioned kind of computer where the software was hardwired into the machine already, not loaded at will to replace what was loaded before. What would the software be for such a hardwired machine? A set of algorithms? A hardware specification? If it is hardware specific for a massively parallel system, it is not clear in what sense one can just capture it as some kind of neutral specification and beam it across the galaxy to be conveniently loaded on different hardware.
But even allowing there is something appropriately called the 'program' of a particular human brain, we also need to note a crucial disanalogy between reverse engineering the program for a computer and reverse engineering the program for the brain. We might be able to (theoretically, at least) reverse engineer the program for a modern computer because we have built the computer, designed the operating system and similar application programs, and know (or theoretically could find out) what hardware states and changes mapped to what kinds of code. But in the case of the human brain in general, and a particular human brain, as well, we have no such experience. We haven't built the brain or written the operating system or the 'application' that the brain is running, so we don't know what wetware states map to what lines or types of code. It seems that in the process of reverse engineering the brain, even knowing the states of wetware as well as input and output would leave out a needed piece of the puzzle.
The Notion of Equivalence
Of course in our context the whole reason for determining the overall program of a particular person's brain is to be able to construct an equivalent program for the robot brain. This notion of 'equivalence' cannot be escaped. For the sake of argument let's ignore all the above difficulties and assume we have reverse engineered the program running on a particular human brain. We need to clarify the notion of equivalence involved in saying that we now need to just construct an 'equivalent' program for the already existing robot brain. If instead of loading a program on an already prepared robot brain, the process were to involve copying the human brain by creating an equivalent robot brain, the idea of equivalence once again arises, this time in the context of trying to get the hardware of a robot brain to be equivalent to the wetware of the human brain. I emphasize that this question is bound to come up if we are doing anything short of simply duplicating a human brain by creating another identical human brain, which is not what we are talking about with human-computer mind transfer. So whether with respect to software or hardware, equivalence must be involved.
We now turn to the question about what such equivalence could mean. The basic idea might seem clear, but considering the issue in more detail shows that the idea is not at all clear. Here is the basic idea. The robot brain is going to be made of different stuff than the human brain. But we want the particular robot brain to be able to do (at least) all that the particular human brain does in response to the same stimuli, and hold the same data. So we want to create a robot brain that does the same stuff and holds the same data as the particular human brain. This is how it will be equivalent.
This basic idea is unfortunately too vague. We don't require that the robot brain do all that the human brain does. It won't have to circulate blood, for instance, or utilize oxygen from its blood supply. We certainly want it to be able to produce the same output in response to the same input as would the human brain. (Sort of--the new brain might be better, enabling faster response, so the output is not exactly the same.) Also, whatever level and type of organization is required to produce phenomenal consciousness, for instance, should be present in the robot brain, though we are not calling this 'output.' We can't be sure that consciousness would be produced just because the other output is the same. So we might need more than just input-output equivalence--maybe we need equivalence of structure or organization or computation, whatever that kind of equivalence means. But can we even be sure that consciousness can be produced in the robot brain material simply by virtue of its organization or computation? Certainly it would be questioned whether equivalence of computation would be enough. This alludes to what might be learned from Searle's discussions, namely that we can't be sure that conscious awareness arises purely in virtue of computation (assuming we have already defined the notion of equivalence of computation). We don't know that conscious awareness is possible in any other medium than animal wetware. It might be possible in human wetware only in instances of a certain level and type of organization. We don't know whether, if it is possible for it to arise in material other than wetware, it would do so in any particular level and type of organization considered equivalent to that of human brain wetware. Maybe the appearance of consciousness would occur only if the processing speed were equivalent. Then we would be dead in the water with a faster brain. Or maybe a faster, less parallel electronic brain would be equivalent to a slower, more parallel brain? We don't know this, but it seems that something like this is what our authors are hoping!
Let's suppose that consciousness would arise in a robot brain if that electronic brain were equivalent to the human brain in some sense. In the case of a particular robot brain, we would want it to be equivalent to a particular human brain, that of the human undergoing mind transfer. Let's spell out the possible meanings of equivalence. One might claim that equivalence means sameness of (publicly observable) output in response to input, but as mentioned above, this seems inadequate, since it neglects the kind of structural equivalence we might need. As I observed in an earlier chapter, it is logically and physically possible for a computer to pass the original Turing test by sheer chance. Here we have the desired output in response to input, but no one would consider such a machine equivalent to the brain, or to a human, or to a computer that passed the Turing test by virtue of running a sophisticated program. The internal structure, or the way the program operates, does seem relevant, so it could be relevant to brain equivalence too. Even if we assume that a robot brain could be conscious, it might be conscious not just by virtue of producing the same output for input but because of similarity of structure, organization, computation, etc. As discussed in the last paragraph, we can't be sure of this, but we shouldn't assume this structural sameness has no relevance.
So while we can't rule out sameness of output for input to be the right kind of equivalence for consciousness, we should be prepared to look for more. Certainly someone like Searle would argue that we have to have more. So we should be open to looking for some understanding of equivalence that involves more than sameness of output for input.
Let's consider current computers. Suppose we have an early version of WordStar running on an old PC and the latest version of Microsoft Word running under Windows 98 on a new machine. Ordinarily no one would claim these computer systems were equivalent, though perhaps one could claim they were equivalent in the very crude sense that both do word processing as opposed to, say, database processing. There are reasons why they ordinarily wouldn't be considered equivalent. The computer processors would be running differently, the routines in the software would vary greatly, and the features and functionality of the applications would differ significantly. One wouldn't get the same output for a given input. Indeed, since the early WordStar program would not even respond to mouse input, one couldn't even provide the same input.
Consider a different scenario. Microsoft Word is available for both IBM-compatible personal computers and Apple Macintosh personal computers. Let's assume we have the proper releases such that they incorporate the same features and functionality. We might say that the two software packages are equivalent, even though they are running on entirely different hardware platforms with different operating systems. It might be that a particular keystroke combination in one package is not duplicated exactly in the other, but this might not preclude us from considering the two packages equivalent. I do not know whether Microsoft Word running on each of these machines has the same high-level algorithms, in the sense of high-level routines of procedures and subprocedures. But we would still commonly say that the two versions of Word were equivalent. However, note that if one ran much faster than the other for common business tasks, one might claim that the computer systems as a whole were not equivalent.
Or consider Microsoft Word running under Windows 98 on two IBM-compatible personal computers, one using an Intel chip and the other an AMD chip. The fact that the processors were operating differently would not preclude the claim that the programs were equivalent. One might consider the two Word programs equivalent, even more so than in the former case; we might even have used the same compact disc to install the program on the two machines (successively). Even if each were installed from a different compact disc, one would consider them equivalent. Again, if there were significant speed differences, one might allow that the computer systems as a whole were not equivalent, even if the programs were considered to be.
In the case of these situations we might draw several conclusions. First, the notion of equivalence even in the seemingly simple sense of same output for input is not entirely clear. Word on the IBM-compatible PC and on the Mac might be considered equivalent, in the sense of the same features, yet strictly speaking you might not have same output for same input, since a keystroke combination on one might not give the same response as on the other. And we already considered the fact that if your robot brain works a lot faster than your old human brain, we might question in what sense the output was the same for the same input.
Second, even when going beneath the surface output to examine some deeper sense of equivalence, things are still murky. Unless you have a program running on two absolutely identical machines, it seems that at some level or other there is going to be some difference in the processing that occurs, and so you will have to wrestle with the notion that at that deep level the two executions are not identical, though at a higher level you might claim that they are equivalent. Even if the PC version and Mac version of Word were standardized so there was absolutely no controversy about equivalence of output for input, how much of the underlying program logic would have to be the same to make for some kind of structural or organizational equivalence? If you go deep enough, it seems you would see differences just because they are running on different processors with different operating systems. It sounds odd to say this, but you might hold that even the same copy of Word is not equivalent to itself at the absolutely deepest level of structure if running on different processors, since presumably two different processors are not carrying out identical processing. We might say these two instances of the running Word program are not equivalent at this deep level. In considering the question of equivalence, the question of level of detail or context arises: overall processing algorithms could be the same while finer-level algorithms or processes differ. Which would be important for establishing equivalence?
Third, though a judgment of equivalence is not arbitrary, it might be to some degree relative to our interests. The two instances of Word were considered equivalent in most cases because they function the same. Old WordStar versus new Word in most cases wouldn't be considered to function similarly, but in the rough sense of word processing as opposed to database processing, one might consider their functioning similar enough to hold them equivalent. But if one's interests included the algorithms of the programs, or the machine level code, or the processor activity (as ours ordinarily do not), then one might judge them not equivalent. If they function similarly at the level of our interest, then we consider them equivalent. If at the level of our interest they do not function similarly, they are not considered equivalent. Note that we have not strictly specified the meaning of 'function' here--it could involve output from input, organization, speed, structure, or any of a number of other properties.
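To make the talk of levels a bit more concrete, here is a minimal sketch in Python (the two sorting routines and the test inputs are invented purely for illustration) of how two programs can be equivalent in the output-for-input sense while differing entirely in their internal algorithms and in the low-level operations actually executed.

```python
# Two made-up implementations of the same capability: both sort a list of
# numbers, but one uses insertion sort and the other uses merge sort. At the
# level of output-for-input they are equivalent; at the level of algorithm,
# and of the operations the processor actually performs, they are not.

def sort_insertion(items):
    result = []
    for x in items:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result

def sort_merge(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = sort_merge(items[:mid]), sort_merge(items[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Output-for-input equivalence, checked over a finite set of test cases.
test_inputs = [[], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))]
assert all(sort_insertion(t) == sort_merge(t) for t in test_inputs)
```

The two functions agree on every input, yet no one would call them the same program at the level of internal structure; whether that deeper difference matters is exactly what a judgment of equivalence, made relative to our interests, has to settle.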
We might disagree about equivalence in word processing applications running on personal computers, but at least we can describe levels and interests to get clear on just where the disagreement might arise. It's no big deal if we change our minds later. What we have to consider in the case of human-computer mind transfer, however, is equivalence between a human brain and a robot brain, whether in general or in particular, and whether as a whole or in part. We want the overall robot brain to be equivalent to the overall human brain in the sense of matching (at least) output for similar input, assuming we can get clear about this. But we may need more than this. And once you port your mind over, you may not get a second chance if it turns out the equivalence was not the right kind or wasn't present in the way you thought it was.
Do we care if the lower-level algorithms are the same? Our authors apparently think not. It's true that Kurzweil speaks of one kind of transfer involving the creation of a robot brain such that the overall patterns are preserved. The architecture and implicit algorithms of interneuronal connections in different regions would be the same. One gets a vague idea of what he is trying to get at, but it is unclear how far down these algorithms would have to be the same. He uses the analogy of running the same software, with the same end-user functionality, on two different hardware platforms. We have seen that such a comparison might need to be specified further. But though he puts it in terms of end-user functionality, he certainly seems to mean more than just sameness of output for input. If he means a situation similar to running Word on an IBM-compatible PC and on a Mac, or running Word on personal computers with different processors, then we get a rough idea that he thinks the high-level algorithms would be the same but not the low-level ones (at the level of the operating system and microprocessor). Moravec writes of his robot surgeon taking groups of neurons and writing software to allow that part of the robot brain to function equivalently to that part of the human brain. But these scenarios by Kurzweil and Moravec are about as strong a statement of the need for algorithmic equivalence as we find in our authors. More often we get the allowance that the robot brain and the human brain could be quite different, not only at the level of detail of individual neurons (or the robot equivalent), but in grosser structure and algorithms, and still be equivalent. This is why the emphasis on increasing processor speed is so relevant. We already have processors much faster than human neurons, so why do we need still faster ones to make robots as smart as humans? Only because what we have in mind is not creating a robot brain that will work like a human one but one that, while possibly operating in some parallel-processing fashion, will work very differently from a system of 100 billion neurons connected in a massively parallel architecture. Very little equivalence of algorithms seems envisioned.
Our authors mention the transfer situation in which the human brain is copied, and here equivalence would seem to go down pretty deep. But the reason for presenting such a scenario is not that they think this level of equivalence would be needed to ensure consciousness. It is because this might be the easiest way to carry out the transfer. There seems to me to be some unclarity or ambiguity in our authors' writings here. On the one hand, they talk of the robot brain as being far different from the human brain--operating much faster, for instance, and being made of entirely different materials. On the other hand, when it comes to the mechanism of transfer, one often hears of duplicates being made of very small sections of the human brain. How far down the equivalence goes, and the exact respects in which equivalence holds, are left unanswered; it is stated merely that the robot brain and human brain will be equivalent.
We can at least make a guess at the notion of equivalence that seems actually implicit in the discussions, and it may not be at a fine enough level of structure or organization to provide for consciousness. In fact our authors don't seem concerned that equivalence go deep at all. It seems to be equivalence only in the functional sense of same (publicly observable) output for same input, whether this is considered at the level of the brain as a whole, of large regions of the brain (the usual sense), or of smaller brain sections (less usual). There does seem to be the implicit assumption that if this type of equivalence is obtained, in the sense of the production of similar electrical or electrochemical activity to interact with the body, then the robot brain will be equivalent to the human brain.
But this seems a little naïve to me, or at least optimistic. As I have hinted before, is there any reason to be confident that this kind of equivalence will produce a correspondingly equivalent mental life? It may be that what we need is a lower-level kind of equivalence of structure or lower-level algorithms, maybe even down to the level of neuron number and arrangement. We don't know that any other structure with fewer units and less parallelism will give rise to conscious awareness, even assuming that it is the arrangement, organization, structure, processing, etc. that gives rise to conscious awareness and not the particular material of human brain wetware. Of course if it is the latter, as Searle thinks, then the project is doomed from the start.
One might attempt to specify how much deeper down this equivalence must hold, though our authors do not. For example, one might locate the level of the sentence or proposition as the basic unit of meaning, and then claim that since belief states are intentional, and we want the robot and human to mean the same thing and have the same beliefs, equivalence must be at least at this deep a level. This sounds promising, though it is no guarantee that consciousness will still arise in the robot. But at what level do meaning and intentionality arise? The sentence, the word, or something more primitive or larger (a sentence and a context)? Even if we can decide this and define what we want in a computer, how can we locate this level in the human brain? It is not clear how representation works in the brain, or in what sense it uses symbols at a deep level. It might be difficult to determine exactly what level of human brain activity should be considered the locus of meaning and representation here. If Fodor is right, somewhere we could at least theoretically find the level of Mentalese in the mind, but where would this be in the brain? Recall the debate mentioned earlier between implementationalism, eliminativism, and moderatism. And if Churchland is right instead of Fodor, then there is not even any hope of finding Mentalese.
So where are we in the attempt to determine how much equivalence we want for consciousness? The worst case for strong AI, and for our authors, would be if equivalence must occur in the sense of the same kind of tissue. Then the robot brain would just not be equivalent. If equivalence must instead hold at a very low level of same structure and algorithms, then the robot brain envisioned by our authors may still not be suitable. (The matter is complicated by the fact that we could be talking of the whole brain or only part of it.) Only if the equivalence necessary for conscious awareness is allowed to be at a very high level of functional sameness, in terms of same overall electrical output for same input, or in terms of same high-level overall structure or algorithms, could we hope to have equivalence of a robot brain with an actual human brain. And they have not demonstrated this or, to my mind, even begun to adequately address the question.
The above discussion assumes that the relevant output is electrical activity of the type produced by the human brain. If the robot body is made to respond to some other kind of output, and the robot brain made to produce it, we get even farther away from understanding in what sense the robot brain would be equivalent to the human brain, and even further from having any reason to believe that this equivalence will be associated with the production of conscious awareness.
There are other complications to point out before I move on. The discussion of building a robot brain by creating a sort of electronic copy of a human brain seems to assume that what we want to wind up with is a robot only as smart as a human. But part of the excitement of the extraordinary future is that the robots are supposed to be a lot smarter than humans. How will they get that way? Just making them out of different materials would seem to bring about some additional smartness due to increased speed, but this assumption may not be correct. Increases in intelligence may be trickier to achieve than just faster processing, or the same overall processing speed with some of the circuits running faster. Who really knows? Building robot brains that are equivalent to human brains might allow the robots to be only as smart as humans, if we really get strict about equivalence.
We have to go beyond equivalence in order to get a radically smarter robot, but the more changes we make to get a smarter robot brain the farther we get from it being equivalent to the original human brain, which was the one that we knew had the right stuff to provide for consciousness. We might not want to run the risk of becoming radically smarter but permanently unconscious.
The more changes we make, the less the robot is likely to seem like us, too. The personal identity issue will have to wait for a later chapter for a full discussion, but consider that it might involve some kind of continuity of dispositions, character, etc. If it does, and not just continuity of memory, then what happens to such continuity when the robot is radically more capable than the original human? It's not like what happens when we grow up and gradually learn more or gradually get smarter, or when our personality gradually changes over time. Here the situation is a radical, sudden altering of our whole way of being. On some mind transfer scenarios, intelligence increases by enormous orders of magnitude, and one gains all of human knowledge, in an instant. What would that do to psychological connectedness of the type other than memory? The resulting entity will feel different, its consciousness might be different (assuming it has any), and one could imagine that on some theories of personal identity this difference might call into question whether it is even the same person.
There is also the fact that our brains change as we learn and age. Experiences create changes in connection strengths and in the very connections themselves. If we create a robot brain, it has to have the ability to grow 'on the fly,' as it were, if it is to match this. So it's not just a matter of creating a robot brain that is equivalent to a human brain at a moment in time. I have not seen this topic addressed by our authors with more than a mere mention.
We turn now to our third question. Assuming the notion of equivalence can be established in the relevant sense and specified adequately so we can try to create it, how can it be tested for in a particular mind transfer situation? Here I think our authors are really overly optimistic about the ability of humans or human-engineered robots to write perfect code with no real testing.
Almost every piece of software ever written has taken numerous attempts before it compiled perfectly and did everything it was supposed to do. Many pieces of software never make it to that stage because they just won't compile! Those that do run and produce something like the desired output usually undergo some kind of testing process, whether formal or informal, along with associated rework before they are considered finished. The idea of a significant software program compiling and running perfectly on the first try would be considered completely unrealistic by anyone who has ever been involved in the production of commercial software.
But consider what is supposed to happen in the imagined scenarios. Human or robot programmers are supposed to analyze a particular human brain or take the results of such an analysis, from which some kind of extremely detailed model has been developed (including information pertaining to many billions of neurons and many trillions of connections). From this the software program that runs the brain is determined or 'reverse engineered.' This may be the exact code or an equivalent--though we have raised concerns that if it is merely a 'functional' equivalent in the sense of giving what appears to the outside observer as the same output for input, it may yet not produce the same phenomenal output to the human whose brain is running it. How much testing is available to verify that this code is correct? The few discussions among our authors of any relevant kind of testing involve only the substitute robot device--not the original code. Of course if the substitute robot device doesn't work correctly, then perhaps the interpretation of the original code of the brain is not right--or maybe the interpretation is right but the device's code is not an exact equivalent, or just plain faulty. It might be hard to determine where the problem is. In any event, with this brain code, an equivalent code for a robot imitation is produced. How much testing is there of the original interpretation? How much of the equivalent? How can the equivalent code for the robot device even be tested prior to the robot device being produced and used?
Testing of the robot device, and along with this implicitly the code running it, would seem to have to occur after the robot device is produced and installed. Moravec actually does seem to envisage just this sort of thing. The robot surgeon takes a portion of your brain (it is not clear how much this is), analyzes it and reproduces (reverse engineers) its code, writes the equivalent code for the matching robot brain part, and then apparently runs your brain's electrical input and output for that brain part not through the original but through the replacement in the robot. In this way, presumably by having you agree or not agree that everything seems to be the same, the robot surgeon 'fine tunes' the code of the robot brain part. This is done systematically for all parts of the brain. But wait a minute! Why should the robot surgeon have to fine tune the code for the robot brain part? Is he/she/it not smart enough to get it right the first time? Apparently it will not be possible to know whether the code is adequate on the first try. So much for perfect knowledge on the part of the robot surgeon. It must be that the robot surgeon will have to subject you to a variety of stimuli in order to observe that the robot brain part is functioning in an equivalent fashion. But if this is the case, why would the robot surgeon not have to do the same sort of process in order to be sure that it has written the correct code for your original brain part in the first place? And how much testing would sufficiently establish this? As we cautioned above, one can subject a black box to any number of inputs, observe the outputs, and attempt to write the code (or function, or algorithm) that would have produced those outputs from those inputs, but one will never be sure that additional different inputs would have produced outputs not predicted by the code (or function, or algorithm) that one had deduced and assumed to be the one actually operating in this case. If one looks inside the black box (changing the testing to 'white box' testing), as the robot presumably could do, perhaps one could nail down the definitive code, if in fact the black box were a modern digital computer. We have cautioned about the tremendously difficult undertaking this would have to be. But here the black box is human brain wetware, and we don't really know what hardware states map to what software code lines and procedures as we might in the case of a modern digital computer. With this piece of the puzzle missing, we might never be sure we have the correct code, no matter how much testing we do.
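To put the black-box worry in concrete terms, here is a small sketch in Python (both functions are invented toys, not anyone's proposed brain code) of how a reconstructed program can agree with the box on every input actually tested and still diverge on inputs never tried; a finite set of input/output observations by itself never pins down which function the box is really computing.

```python
# The 'black box' being probed; its internals are hidden from the tester.
def black_box(n):
    return n * 2

# The tester's reverse-engineered hypothesis about what the box computes.
# It agrees with the box on every input below 1000 but diverges above that.
def hypothesis(n):
    return n * 2 if n < 1000 else n * 2 + 1

# Black-box testing: probe with a finite set of inputs and compare outputs.
tested_inputs = range(100)
assert all(black_box(n) == hypothesis(n) for n in tested_inputs)  # every test passes

# Yet the hypothesis is wrong for inputs outside the range that was tested.
assert black_box(5000) != hypothesis(5000)
```

The point is not that such divergences are likely in any particular case, but that testing alone cannot rule them out; only inspection of the internals ('white box' testing) could, and for brain wetware we lack the mapping from hardware states to code that would make such inspection meaningful.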
As argued above, it might be that no matter how much testing were to be done, we could not be sure that we had the correct code in the case of the human brain, or in the case of the robot brain replacement. It might be objected that our standard is too high. Surely there would be no need for perfect code--a guarantee of correctness. Why couldn't we make do with something less?
We might very well be able to do so, though if you are risking your life on porting yourself to a robot brain and body, you might not be eager to end up in something that runs only as well as Windows 3.0. But to establish even a reasonably good approximation in the robot replacement might require the human subject and the robot replacement to undergo an extensive testing process--for each brain part and for the overall system. Since stimulation and agreement on the part of the human subject would be involved, it is not as if we could speed this up beyond the pace at which the human subject can operate. And therefore it might be asking too much of the human subject to require him or her to undergo a testing process that would take days, weeks, months, years--who knows how long? In short, give Moravec credit for at least conceding that fine tuning would be required, but I suggest he may have grossly underestimated the difficulty and length of this process.
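Purely to make the scaling worry vivid, here is a schematic sketch of the kind of fine-tuning loop Moravec's scenario seems to require. Every number and every function name in it is an assumption invented for this illustration; the only point is that the total time is driven by the number of brain parts times the number of stimuli per part times the human subject's response time, and no amount of robot-surgeon speed compresses the human part.

```python
# Schematic sketch of Moravec-style fine tuning. All quantities and functions
# here are invented placeholders, not figures taken from any author.

NUM_BRAIN_PARTS = 10_000        # assumed granularity of the piecewise replacement
STIMULI_PER_PART = 100          # assumed number of probes needed per part
HUMAN_RESPONSE_SECONDS = 5.0    # time for the subject to attend, respond, confirm

def subject_confirms(part, stimulus):
    """Stand-in for asking the human whether everything still seems the same."""
    return True  # in reality each call costs the subject HUMAN_RESPONSE_SECONDS

def fine_tune(part):
    # Probe the replacement part with each stimulus; adjust until the subject
    # reports no difference from the original brain part.
    for stimulus in range(STIMULI_PER_PART):
        while not subject_confirms(part, stimulus):
            pass  # robot-speed adjustment of the replacement part's code

# Lower bound on subject time, ignoring adjustments and retries entirely:
total_seconds = NUM_BRAIN_PARTS * STIMULI_PER_PART * HUMAN_RESPONSE_SECONDS
print(f"{total_seconds / 86_400:.0f} days of continuous cooperation")  # ~58 days
```

With these invented numbers, even the best case is roughly two months of uninterrupted attention from the subject, and realistic retries, rest breaks, and whole-system testing would only lengthen it.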
Any reasonable amount of testing introduces complications that our authors do not seem to have taken into account. As mentioned, while the robot surgeons operate at great speed, if you have to involve the human, the testing slows down to human speed, and this holds for any testing involving human cooperation. The above scenario has the testing occur while the human is on the table and under the knife, so to speak. Any long process of testing is likely to prove too much for a human to take. Even scenarios that have the robot created all at once rather than piecemeal would have to involve a human cooperating over an extended period of time in an exhausting way. And how is it supposed to work? Apparently we are to imagine the following directions to the person undergoing the mind transfer. 'Meet Mr. Robot here, we are going to subject you to a variety of input/output tests over a period of weeks, and then see if Mr. Robot can match the results. If you become exhausted, drink more coffee. If we become convinced that Mr. Robot really is equivalent to you, we will shut you down and you will be him. But we might need to make adjustments over an extended period of time, so we really can't be sure how long it will take. Can you get more time off from work?' Does this sound realistic?
And let's be realistic about the possibility of producing relatively bug-free software. Despite spending millions of dollars, Denver was unable to get its new airport baggage handling system working on time or anywhere near on time. Years later, I believe the original plan has been abandoned in favor of an only partially functional system. The problem apparently is that the programmers couldn't get the software correct. The brain is obviously more complicated than an airport baggage handling system, yet we are asked to believe that writing correct code for untold numbers of human brains, and for their robot brain replacements, with minimal testing of each, will be routine. This seems optimistic, to put it mildly. I can almost guarantee that if you present this whole human-computer mind transfer depiction to a group of experienced commercial software development project managers, the main objection will be that we will never be able to get the code right.
Even bringing in robot coders may not solve the problem. Current mechanical means of writing software, such as CASE tools, while in some cases speeding up the software development process and easing the burden on human programmers, do not have a reputation for producing perfect code. In many cases the code generated by computers has been considered very poor. Perhaps we should blame not the machines but the humans who built the programs, but with robot coders we are in the same boat. So whether in the form of programmers writing application code, or in the form of programmers writing code for machines that write code, there seems a great possibility of something far less than perfection.
I raise one additional issue about testing, one having to do with the possibility of zombies and 'zombie-robots' (or 'robot-zombies'; I don't care what you call them). We need to briefly recap to lead into this problem. We remarked that when creating robot brain parts as replacements for human brain parts, the requirement is that the robot parts at least be equivalent to the human brain parts. This holds irrespective of the mechanism of mind transfer; I use 'replacement' loosely. The equivalence we want is ultimately for the whole robot brain as a functioning system to be equivalent to the particular human brain it replaces, and this might involve lower-level equivalence for particular brain parts with respect to both hardware and software. We also discussed various notions of equivalence and noted that our interests might determine the relevant definition or understanding of the equivalence required. Now with respect to the equivalence desired for human-computer mind transfer, we need equivalence to include same output (response) for a given input (stimulus). But, as mentioned above, this 'output' should include more than what is observable to an external observer, such as the behavior of the person or the production of the relevant electrical signals (or their equivalent!) by the robot brain. Since we have remained open to the perspective that there is something appropriately called 'conscious awareness,' 'phenomenal consciousness,' 'qualia,' and so forth, it is absolutely crucial that the type of equivalence required here include the production, causation, or accompaniment (however qualia arise from human brain matter, assuming they do) of such conscious awareness (we have not even used 'output' to refer to this). This conscious awareness is not going to be directly observable to third parties but only to the person who has that brain (whether human or robot).
Given this notion of what equivalence includes, how can we test for it? In the type of gradual substitution epitomized by Moravec's scenario of the robot surgeon, it seems that the individual undergoing the mind transfer might be asked what he or she is experiencing as computer replacement parts are substituted for parts of his or her human brain. This might be part of the 'fine tuning' process. I do not see how it can be left out of the testing process, at least for initial mind transfer cases. That is, it would somehow have to be established that humans do continue to experience conscious awareness, particular kinds of qualia, etc., when human brain parts are replaced by robot brain parts. And it would seem that the only way to establish that would be to ask human subjects undergoing testing that involved such replacement, whether temporary or permanent, whether they still continued to experience such awareness and qualia under each specific replacement. One might conjecture that once this had been definitively established for the human brain, each individual subject undergoing mind transfer would no longer have to go through such a testing process. On the other hand, to really be sure, and if variances among individual human brains were such that no such assurance could be established for human brain part replacement in general, then perhaps all individuals undergoing mind transfer would have to go through such a testing process.
Now the additional issue I wish to raise here is whether we can take the subject's word for it when he or she claims that the replacement of the human brain part with the robot brain part leaves conscious awareness or the experience of particular qualia unchanged or intact, and so establishes that equivalence in this sense has been achieved. The problem has to do with the fact, believed by some, that if you ask a zombie whether he or she is experiencing such qualia you will receive an affirmative answer! Of course, the argument goes, the zombie is going to say he or she has conscious awareness, because the zombie's understanding of this is purely psychological in the functional sense. To the zombie, conscious awareness just means responding to input by a certain kind of publicly observable output (or by the production of other brain events that are externally observable via a scanner, etc.). But here the zombie's claims should not be taken at face value, because in a crucially important sense the zombie literally does not know what he or she is talking about.
The key to resolving this issue, with respect to testing, may be to test while gradual substitution takes place rather than after whole-brain replacement all at once (or mind transfer all at once, to adopt that description of the same event). Suppose Searle is right that consciousness as we know it is tied up with the causal powers of human brain wetware, and suppose robots really turn out to be nothing more than zombies because their brains lack the relevant causal powers (in this case, computation alone doesn't provide for consciousness). If you transfer your mind into a zombie-robot, and the transfer includes all of your prior memories, including those of qualia such as pain and other states of conscious awareness, will the zombie (which is now 'you') be able to compare its present experience with its 'remembered' experience and come to the knowledge that it really doesn't have conscious awareness anymore? In other words, will you be able to come to the knowledge that you are now missing something that you once could experience, back when you had that good old human brain and body? Does anybody really know the answer to this question? Most discussions of zombies concern a person or robot who is a zombie from the start, as it were, rather than one who comes into zombiehood via mind transfer. So they assume that the zombie will say 'yes' to questions about conscious awareness when the answer really is 'no.' But this is not the same as human-computer mind transfer, which might be a transfer into zombiehood from non-zombiehood.
One might conjecture that the best chance of a robot-zombie coming to the realization that he or she no longer experiences qualia, etc. would be via a gradual brain replacement (or gradual mind transfer) rather than a brain replacement all at once. This way, the zombie would not have to rely on memories of qualia, etc., which he or she no longer has the ability to experience at all. The human person being transformed into a zombie-robot could then say, for instance, 'Hey, I can still feel phenomenal pain in my foot, but I no longer feel such phenomenal pain in my arm, even when the tester is jabbing it with a pin and I am wincing and jumping away!' Or one might say, 'I see red, but not like when I see green, which now causes me to react in a certain way at a traffic light but which no longer is accompanied by the same type of quality.' I'm not sure these scenarios are intelligible, but if they are, their type might offer the only hope of really establishing whether human-computer mind transfer maintains the relevant kind of equivalence between human mental and brain states and robot mental and brain states.
This is all I have to say about equivalence. It sure needs to be more thoroughly addressed by our authors.
Further Issues
I now turn to a brief discussion of some other relevant ethical and metaphysical issues that might arise about the mechanism of the transfer.
First, the ethical issues, which center on the notion of human rights. I distinguish between moral and legal rights. We commonly take it that a human has the right not to be murdered, tortured, etc. These rights seem to hold irrespective of whether the actual laws of a particular country provide a legal basis for them. So they seem to be moral rights, whether or not they are legal rights. I am not trying to argue for these rights but merely observe that we do commonly recognize humans as having them. Nor am I trying to provide a thorough analysis of the nature of such rights, as kinds of entitlement, as correlative with negative duties of other moral agents, etc.
We talk of them as human rights, but it may be that we would wish to extend them to other types of beings. Our society seems to be ambivalent with respect to non-human animals having them, for example. On the one hand, most people believe that killing a non-human animal to eat it violates no rights of the animal, though some believe this practice does violate the animal's right to life. But even people who eat such animals or condone the practice might believe that the animal has a right not to be tortured, for example. On the other hand, some might argue that a human being does not always have the rights we think a human does. For example, a fetus seems to be a human being, but some hold that prior to achieving a particular stage of development or being born, the fetus has no right to life because the fetus is not a person. So it may be that what is crucial about humans, what forces us to acknowledge certain of their moral rights, is that they are persons, not (just?) humans.
If a human person transfers his or her mind to a robot brain and body, and if we assume that such a transfer really does maintain personal identity and allow continued existence as the same conscious person, then we probably will believe the rebodied person still maintains their 'human rights' even though we may no longer consider that person to be human. Whether this person is still human is another issue; I am claiming here only that we will likely believe that since the individual is still a person the rights remain. But what of the robot that donated body and brain to the transfer? Did this robot have a right to life that was violated?
If the robot existed as a person prior to the transfer and the transfer annihilated this existence, then it would seem the transfer resulted in something equivalent to the murder or manslaughter of that person. This would be the case under certain scenarios--for example, all those scenarios in which a robot is created and given a life prior to the transfer. If the robot were created piecemeal and 'on the fly' as the human person's mental states were being transferred, the robot would have no independent existence prior to the transfer and so would suffer no death in the transfer process. I do not know what to say about the case of a robot created to be conscious but not 'activated' prior to the transfer--would this have the same status a fetus has for many observers, namely a potential person but not an actual person? If so, and since potential persons don't have the right to life that we accord actual persons, there might be no rights violated by transferring someone into it.
On the other hand, if a purported mind transfer resulted in the existence of the human person no longer as a person but as only a resultant zombie-robot, then perhaps manslaughter of that human person has occurred. One can imagine a future wrongful death civil lawsuit against a mind-transfer operator arising if it becomes confirmed that those so-called mind transfers being offered really kill the persons undergoing the transfer. Such a suit might be brought by the victim's family or by the resulting zombie-robot, who would have to be convinced that someone had died even though personally believing he felt fine (without really knowing 'feeling fine' in any phenomenal sense)!
This thesis is not really about the above types of issues, which would take us far afield into a discussion of ethics. I mention them only to show that future human-computer mind transfers, if they ever really happen, might raise more than sheer technical problems.
On the other hand, certain metaphysical issues concerning personal identity might arise with respect to the mechanism of transfer that are germane to this thesis. They involve the same questions raised during the discussion of personal identity in the following chapter and will not be treated here. I do want to mention that such problems might arise with some mind transfer mechanisms but not with others. For example, in any mechanism in which the mind is duplicated prior to the transfer, the question arises of in what sense a 'transfer' has taken place rather than a duplication. Creating a robot equivalent of me, with my memories and mental features, etc., while allowing the continued existence of me raises the issue of whether I have become two people or, if not, which of the resulting people is really me. If the human version is then killed, the question arises of whether my identity has suddenly jumped to the survivor. As well, of course, there is the ethical issue of killing that human in the first place.