
The Extraordinary Future: Chapter 5

Winfred Phillips: Author

Introduction

The vision of the extraordinary future includes the claim that by porting themselves (transferring their minds) to computers, humans will be able to achieve personal immortality. In earlier chapters we examined whether computers will be smart enough, whether such a transfer will be possible, and whether such computers will be conscious persons. For the sake of argument, we will assume that these questions have been answered affirmatively. The remaining question, then, is whether the post-transfer computers will be the same persons as those who undertook the transfer. Will the identity of the person be maintained?

Clearly the vision of the extraordinary future includes the belief that it will--otherwise there would be no immortality of the person undergoing the transfer.

Identity in the Extraordinary Future

Obviously it's going to be a disappointment to everyone if your friends go to all the trouble of building a robot and you transfer your mind to it, only to discover that it is no longer you who survives the procedure. The assumption in all of these human-computer mind transfer predictions is that one's personal identity is maintained through the transfer. That is, the person before the transfer is the same as the one after the transfer, though the brain and body are different numerically and may be very different in appearance and ability. Paul and Cox call this 'The Key Assumption.' If the situation is more properly described as that of one person dying and another person coming into existence as a result of the alleged mind transfer, this will not be immortality for the original person.

For the most part Kurzweil believes that personal identity survives mind transfer, though he sometimes seems unsure about particular scenarios. Kurzweil asks us to consider a future man named 'Jack' who receives a cochlear implant to correct a hearing problem. (This is possible today.) In the future, suppose the man also receives 'phonic-cognition' circuits, which when switched on bypass Jack's own neural-phonics cells. He then gets image-processing implants to improve his vision and memory implants to help his failing memory. Is Jack still the same person after each (and all) of these implants/replacements? Kurzweil thinks we would allow that he is. What would we say if Jack had his whole brain and neural system scanned and replaced by faster electronic circuits? We might think it was still the same person. What if Jack had all the changes at once, rather than gradually over time? Jack has a complete brain scan done, with the information from the scan installed in an electronic neural computer. Jack at the same time apparently receives a body upgrade (Kurzweil doesn't specify exactly what this involves besides the earlier-mentioned changes, but he seems to consider it to be a complete revamping, since he later talks of Jack's 'old body'). Kurzweil asks rhetorically whether it is still the same Jack, and he seems to think it is. So here we have a case of a typical human-computer mind transfer with personal identity preserved.

But Kurzweil also notes some cause for concern. In the case of a noninvasive scan, the old brain and body are still Jack too, so now we have two Jacks, which is an oddity. If instead a destructive (invasive) scan were used, we could consider it a case of transferring Jack to his new body and brain. So it seems that with either a noninvasive scan or a destructive (invasive) scan we would have a case of human-computer mind transfer with preservation of personal identity, though with the noninvasive scan we have the oddity of both post-transfer Jacks being the same person as the pre-transfer Jack. (At least immortality has been accomplished.) But wait a minute--Kurzweil thinks that the destructive scanning scenario is the same as noninvasively scanning Jack and then killing the old body and brain as the new one is created, and this case we might consider to be the death of the old Jack and the creation of a new Jack who is not the same person (Kurzweil, 1999, pp. 52-54). Clearly if this is the proper analysis, then it seems the survival of Jack as the same person has not been accomplished. Kurzweil does not resolve this problem.

Human-computer mind transfer means a human person may take on a robot body and brain. The question arises whether, or for how long, this new being will remain human. We mentioned earlier that Moravec thinks that humans living with virtual bodies would eventually no longer be human if they were able to shed all connections to real or virtual bodies. Kurzweil raises similar questions about the humanity of persons after a transfer. He thinks that in one sense, before the end of the twenty-first century human beings will not be the most intelligent or capable type of beings on Earth. The qualification is needed, he thinks, because it depends on how one defines 'human.' The major philosophical (and political) issue of the next century will turn out to be that of 'what constitutes a human being' (Kurzweil, 1999, pp. 2, 315).

Whether or not the resulting beings would be human, Kurzweil has expressed the opinion that they should be considered the same person. How can this be, if there has been replacement of the entire body and brain? His answer is that in ordinary life personal identity does not seem to be a function of the continued possession of specific particles of our brains and bodies, since our particles are constantly changing. Over a period of several years we change most of our cells (though not brain cells). Change is much faster at the atomic level and does include new atoms in our brain cells. Our material content is constantly changing; what is semi-permanent are the 'patterns of matter and energy.' This suggests that 'our fundamental identity' should be associated with the 'pattern of matter and energy that we represent.' But, Kurzweil notes, if the old Jack is still around to protest that the new Jack is not really him, and we accept this, then this suggests that the 'identity from pattern' theme, though promising, is not quite right (Kurzweil, 1999, pp. 54-55).

Subjectively, what will the experience be like for the scanned and recreated consciousness? Will it seem to him or her that he or she is the same person? Yes, it will. And the theory that holds that personal identity consists in patterns would hold it was the same person (Kurzweil, 1999, pp. 125-126). But we have seen that Kurzweil realizes that the personal identity issue is a little more complicated than this, especially when two simultaneously-existing Jacks both claim to be the same person as the original.

Paul and Cox clearly believe that personal identity can be preserved in human-computer mind transfer because they believe personal identity consists of preservation of memory. The authors use the phrase 'self-conscious identity' to refer to personal identity. Having a sense of identity is dependent on conscious awareness of one's self in the present and past. The latter requires a memory of one's past. But this need not be complete or even continuous. Brain injuries, natural forgetting of some things in the past, and periodic unconsciousness through sleep do not preclude a sense of identity (Paul & Cox, 1996, p. 164).

One must be conscious at least sometimes to have a self-conscious identity, but something could be conscious without having a self-conscious identity. A woodchuck may be conscious to some rudimentary degree and yet not react to its reflection in a pool. An animal that consistently ignores its image in a mirror, or treats it as the image of another animal, is not likely to have a self-conscious identity, while one that treats the mirror image as an image of itself probably does. For example, an animal with a white spot on its forehead that tries to touch the spot on the mirror image, rather than on its own forehead, shows that it does not realize it is seeing itself in the mirror. The one big exception among non-human animals comes with the great apes, such as chimpanzees (Paul & Cox, 1996, p. 163).

The Key Assumption, recall, is that a conscious identity can be transferred from one machine, natural or otherwise, to another with preservation of identity. Paul and Cox think that different machines (including a human machine) can 'run' the same identity as long as the memories are transferred 'reasonably intact.' The example of ordinary human experience shows personal identity is preserved if even a portion of a person's memories can be recalled. Every human has already experienced a subtle form of mind transfer that shows this. During growth from infancy through childhood and adulthood, synapses have changed and atoms of the brain have been replaced as part of normal changeover and repair in the cells. Yet you are the same person through the transfer of your memories during this process (Paul & Cox, 1996, pp. 184-185). Clearly Paul and Cox take some form of memory continuity to be the criterion of personal identity, yet they too are at a loss to say what happens to identity in a Star Trek-like example of having someone scanned and two exact copies made. Their remarks imply that they think there would now be two persons (Paul & Cox, 1996, pp. 185-186).

Moravec really seems more interested in the place of independent robots in future society than in human-computer mind transfer, but he does think that human-computer mind transfer will occur too. In Robot, Moravec alludes to his earlier work's discussion of transforming humans by having them replace their body parts with robot parts and gives no indication of abandoning his earlier view. But replacement has limits. Moravec thinks that some humans could become too dangerous if allowed to become as powerful or as smart as robots, so they would have to be constrained to improve themselves within bounds. If they could not agree to these restrictions, these 'Exes' (ex-humans) would have to be banished to outer space. These Exes, as 'postbiologicals,' will come in many sizes, with a millimeter probably the lower limit on size and the largest not likely to exceed the mass of a hundred-kilometer asteroid. Some may resemble starfish or bushes in appearance, or have a trillion fingers. They will reconstruct themselves as needed (Moravec, 1999, pp. 142-154). Like I said, Moravec has quite an imagination.

Eventually Exes will become obsolete, or at least the notion of individual Exes will. Moravec's argument seems to be as follows. Exes will expand into outer space by turning inanimate matter into machinery for further expansion. On this expanding frontier Exes will compete with one another, but behind this frontier each Ex will be sort of landlocked by the other Exes and have to convince other Exes to merge or be taken over. Generally the most powerful minds will have the advantage in this competition. The landlocked Exes will have no use for powerful physical shapes and instead focus on restructuring for greater computing power, intelligence, and thought. They will arrange spacetime and energy into those forms best for computation. Inhabited parts of the universe will thus be transformed into cyberspace, in which overt physical activity is dwarfed by computation. Old bodies of Exes will be refined into 'matrices for cyberspace' and then interconnect as physical boundaries become irrelevant. Identities will exist as patterns of information flow in the cyberspace, and the minds of Exes will migrate among the interconnected bodies at will as pure software. Eventually the growing cyberspace will engulf even the physical frontier of Ex expansion (Moravec, 1999, pp. 163-165).

The Ex wavefront of coarse physical transformation will be overtaken by a faster wave of subtle cyberspace conversion, the whole becoming finally a bubble of Mind expanding at near lightspeed. In this bubble, boundaries of personal identity between transformed Exes will be fluid and ultimately arbitrary and subjective. Some boundaries will exist due to choice, distance, or incompatible ways of thought, and Darwinian evolution will continue among these larger entities. As cyberspace matures, every bit of matter becomes involved in computation or storage (Moravec, 1999, pp. 165-166).

If boundaries between individuals disappear, and identities merge, I'm not sure that we would have personal immortality anymore. But Moravec thinks that, before they became the outlaw Exes described above, many individuals on Earth might have abandoned their human bodies in order to take advantage of virtual reality. Sensory and motor nerves can be connected directly to electronic interfaces, making most of the body unnecessary. Eventually the human brain too will have to be replaced by a superior electronic brain, and the mind can then migrate into other hardware. Our essences will become patterns that can migrate at will across the information networks, and we might ourselves be distributed over many locations (Moravec, 1999, pp. 169-170).

So we see that our authors believe some sort of personal immortality is possible through human-computer mind transfer. I went on at great length describing Moravec's comments about the future because they contain his belief that identities will merge and flow in strange new patterns. But we also notice that our authors dimly discern that there may be confusing issues of identity involved in different mind transfer scenarios. The possibility of multiple copies, multiple transfers, and time lapses during the transfer makes it confusing to sort out who will be who in a post-transfer world. Their inadequate attention to investigating personal identity theories leaves our authors easy prey for someone like Searle, who, in his review of Kurzweil's most recent book, jokes about who should get possession of the driver's license if a person's mind is transferred into multiple copies (Searle, 1999)!

Let's turn then to some of those theories of personal identity to see if we can help out our authors. The issue of personal identity is a very old philosophical controversy. Over the years several different basic theories of personal identity have been developed, but no theory seems to be free from all problems or objections. To examine this issue and apply it to the scenarios of the extraordinary future, we will first clarify some terms. Next we will describe the basic philosophical theories on personal identity and some problems with each. Finally, we will show how these theories apply to the issue of human-computer mind transfer.

First, then, some terms need to be mentioned. Personal identity relies on the concept of identity, which is transitive: if X is identical to Y, and Y is identical to Z, then X is identical to Z.

It has become common in discussions of personal identity to distinguish between persons and person-stages. One can think of a person's life as divided into an arbitrary number of stages, each stage occupying a 'moment' of time. Each person-stage has distinct properties characterizing the state of the body and so forth at that time, and because people change over the course of their lives the person-stages are not going to have all of the same characteristics. For example, at an early person-stage the body will weigh ten pounds, while at a later one it will weigh a hundred pounds. The person's body will be at physically different places during their life (no one remains at one place their whole life). Relying on the notion of person-stages, some writers depict these graphically by thinking of a person as a spatio-temporal 'worm,' that is, thinking of each stage as a slice placed next to the other slices in a sequence representing the change over space and time.

Numerical identity must be distinguished from qualitative identity. X and Y are numerically identical if they are one and the same thing, while X and Y are qualitatively identical if they are exactly similar, that is, if their intrinsic properties and qualities are exactly alike (Baillie, 1993, p. 5). A perfect copy of something is qualitatively identical to it but not numerically identical. When discussing personal identity we are mainly interested in numerical identity--the sense in which you as a person at time t1 are one and the same person as you as a person at time t2. But we have already seen that some authors dealing with the extraordinary future seem to think that keeping copies of you around can ensure personal survival. These copies may be qualitatively identical to you, but it would require further argument to establish that they will be numerically identical to you as a person. On the face of it, they wouldn't seem to be numerically identical to you as a person, since they may exist while what you ordinarily think of as you continues to exist, and 'copies' is plural.

Synchronic identity must be distinguished from diachronic identity. X and Y are synchronically identical if they are numerically identical at a particular time (time t), while they are diachronically identical if they are numerically identical across time, that is, if X considered at one time and Y considered at a different time are one and the same thing. So stages or 'time-slices' of the same temporally-enduring object are diachronically identical but not synchronically identical (Baillie, 1993, p. 5). In personal identity we are mainly interested in diachronic identity--the sense in which your person-stages are all of the same person, though these person-stages are at different points in time.
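For readers who like things schematic, these distinctions (together with the transitivity of identity mentioned above) can be compressed as follows; the notation is introduced here only for convenience and adds nothing beyond the prose.

\[
\begin{aligned}
&\text{Transitivity:} && (X = Y) \wedge (Y = Z) \;\rightarrow\; (X = Z)\\
&\text{Qualitative identity:} && X \text{ and } Y \text{ are exactly similar in their intrinsic properties}\\
&\text{Numerical (synchronic) identity:} && X \text{ at } t \text{ and } Y \text{ at } t \text{ are one and the same thing}\\
&\text{Numerical (diachronic) identity:} && X \text{ at } t_1 \text{ and } Y \text{ at } t_2 \;(t_1 \neq t_2) \text{ are one and the same thing}
\end{aligned}
\]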

It is also useful to distinguish between the metaphysical issue of personal identity and the epistemological issue of personal identity. Metaphysics has to do with what exists and epistemology has to do with whether and how we know about what exists. The metaphysical issue is one of what it is that makes a person-stage A at t1 and a person-stage B at t2 stages of the same person. This is about the criterion or criteria of what constitutes personal identity. The epistemological issue is one of what we take as evidence that A and B are of the same person. Here we are concerned with the criterion or criteria of what counts as evidence of personal identity, whether applied to oneself or to other people. In this discussion we are mainly concerned with the metaphysical issue, though sometimes the epistemological issue creeps in as well. Primarily, we want to know what makes someone the same person from one moment to the next rather than how we know they are the same person. But if a theory of what constitutes personal identity leaves it a great mystery how we would ever know whether two person-stages belong to the same person, then this will be counterintuitive, since we commonly assume that we can know we are the same persons as yesterday, for example.

I will consider five basic theories of personal identity. With respect to these theories, a distinction is sometimes made between reductionist and non-reductionist theories. Reductionist theories see the person or self as nothing more than their person-stages, where person-stages are seen as the mental and physical states arising from the brain and body. The fact of the person's identity through time consists just in the holding of other facts about the physical or psychological continuity among these states. Once you've adequately characterized the relation among these states, there is nothing more to be said about the person or their identity. Persons are not separately existing entities over and above their experiences. On the other hand, non-reductionism resists the tenets of the reductionist position, and claims that persons are separately existing entities apart from the body, brain, and mental and physical states of these, and that the fact of personal identity is just not reducible to facts about the body, brain, etc. (Baillie, 1993, pp. 7-9). A non-reductionist view will hold that a person is really a soul or mental substance not reducible to mental states, so this is an obvious choice for the substance dualist. Reductionist views are very common nowadays among philosophers and scientists and certainly among those like our authors holding to the extraordinary future. But such views were not popular historically and are probably not popular among ordinary people ('the man in the street'), if such people have views about this matter at all.

Five basic theories of personal identity are: (a) the body identity theory, (b) the brain identity theory, (c) the soul theory, (d) the mental state identity theory, and what I'll call (e) Parfit's theory. (Parfit really wants to throw out the notion of personal identity and replace it with something more useful.) Within each of the basic theories I may make some distinctions among variants when helpful. The soul theory is a non-reductionist theory, while all the others can be interpreted as reductionist theories. The body identity theory is also called the 'bodily identity' theory.

Body Identity

If you ask someone how they know you are the same person as yesterday, they might say they can see you have the same body, in other words, the person-stages are characterized by the same body. (Of course they won't use the phrase 'person-stage.') And isn't this the epistemological criterion we all use every day for identifying other people? No one goes around saying 'Well, it looks like Harry's body over there, but you never know, it could be with a different person today!' We assume the same body means the same person. This theory is known as the body identity theory or the bodily identity theory.

Wait a minute--how do we know it is the same body? It might seem easy to specify the criterion of identity of a physical body. Two body stages are of the same body if what, exactly? One is tempted to say one of two things. Either two body stages are of the same body if they are contiguous in space for successive moments of time, or two body stages are of the same body if they share the same parts over time. 'Contiguous in space for successive moments in time' characterizes how we think physical objects behave. A rock on the ground stays on the ground, until somebody throws it, but even then at successive moments in time the rock-stages occupy adjacent physical locations. We ordinarily take material objects to persist over a duration of time even though parts might change. For example, the White House is the same building when Clinton became President as it was when Bush was President and when Roosevelt was President. It is the same building even though there may have been changes made to it--rooms redecorated, exterior surfaces repaired, etc. One might say the same holds for human bodies. My body's a little older and heavier these days, but it's not a different body (like that of another person). Or, we might say, such continuity is not as important as whether the two stages have the numerically same parts. I'm me, and not the same person as my clone (who would be more like my twin), because my body-stages have the exact same arms, legs, etc. over time, rather than having merely similar (qualitatively identical) ones.

There are at least two problems that might arise with body identity as the constitutive criterion of personal identity: the difficulty of achieving a consensus on body identity and imagined scenarios involving body switching and brain transplants.

To tackle the first issue, consider that we seem above to have two theories of body identity--spatio-temporal continuity and sameness of parts. Well, which of these two criteria is really constitutive of the identity of any material object? This is a thorny issue. A famous way to see this is in the example of the 'Ship of Theseus.' Here's one version of that story. A ship (call it X) undergoes gradual replacement of some of its parts. During such repairs, we would say it is the same ship. How can it be the same ship even though some of its parts have been replaced? It must be that the identity of the ship consists in maintaining a spatio-temporal continuity. Throughout any process of replacing parts, the general spatial position has remained relatively constant from one moment to the next, and any changes in the spatial position of the ship or of its parts in relation to one another have been close to the immediately preceding spatial positions. Throughout any changes the ship maintains its overall shape and functionality. This would seem to hold no matter how many parts had to be replaced over the years. (Likewise, if over the years the parts of my watch are gradually replaced by a jeweler endeavoring to keep it running, until eventually it no longer has any of the original parts, I do not complain that the jeweler is each time failing to return to me my watch but trying to foist upon me a different one.) But consider another ship, call it Y, that is caught in a fierce storm and winds up smashed into many pieces on the beach. The surviving sailors reassemble the ship to sail once more. Is the ship that sails anew really ship Y, the same ship that crashed in the storm? Everyone would say it is, for of course it consists of the same parts. It has just been taken apart and reassembled. In this case, there has been a break in the spatio-temporal continuity of the ship, since there was a significant time when the pieces did not really form a ship. But since we would say it is the same ship after reassembly, we must be using a criterion of identity of parts. (Likewise, if my watch breaks and in fixing it the jeweler is forced to completely disassemble it and then put it back together again with the same parts, I do not complain that the jeweler is giving me a different watch back.)

So for these two criteria of identity, spatio-temporal continuity and identity of parts, it seems sometimes we rely on one and sometimes on the other as constitutive of identity. All well and good, but what would we say about the following scenario? Ship X undergoes repairs over a period of time, and eventually every single piece of wood (or other material) on the ship is replaced, though during this time the whole maintains spatio-temporal continuity. Let's call the resulting ship 'Y.' Meanwhile, some enterprising fellow has been collecting the discarded, replaced pieces and reassembling them into the old ship shape (we'll call this 'Z'). Eventually when the refurbishing is complete, we are left with two ships, each seeming to have a legitimate claim to be the original ship. Ship Y seems to be the original ship X, since it has maintained spatio-temporal continuity. But ship Z also seems to be the original ship X, since it is just the old ship X parts reassembled much like what happened after ship X crashed in the storm. We have a problem: clearly ship Y and ship Z are now two different ships, though both seem to be the original ship X. Which is the real ship X? We can't easily say, since our two criteria of identity now seem to conflict with one another.

One might say that there is no mysterious fact here that we somehow can't find out, for it's a matter of convention whether we think of ship Y or ship Z as the original ship X. The identity of the ship is not always determinate. It calls for a decision, not a discovery. If we say this about ships, are we ready to say it about our bodies? Maybe. We ordinarily grant that our bodies are the same bodies even though during normal cellular repair and maintenance the cells change. Spatio-temporal continuity holds. But sameness of parts also seems to hold--parts such as my arm and leg are the same parts over time. But what about the criterion of identity of my arm as a part? We just bring the question down to a further level and say it is the same arm since it maintains spatio-temporal continuity, at least, and maybe sameness of parts if the parts are the various muscles and bones, etc. Eventually we get down to a cellular or even molecular or atomic level at which the parts do change, though spatio-temporal continuity holds. But what if some enterprising scientist were reassembling the parts into a new arm, and a new body? Would that body be my body?

This question of the identity of ships and bodies shows a key distinction between those who believe that identity is always metaphysically determinate and those who believe it can in some cases be indeterminate. In the case of the ship of Theseus scenarios, the latter hold that we just have to accept that, even given our common sense use of terms and so forth, there just is no fact of the matter about the identity of the ship. We need to decide, not discover, what we should say about whether it is the 'same' ship or not, etc. This view is that of reductionism (in this context). On the other hand, one might claim that, no, there is a fact of the matter here about whether it is the same ship, which is the real ship, etc. This position will be held by nonreductionists. The same debate between indeterminacy and determinacy might arise in cases of personal identity. While it is relatively easy to allow for indeterminacy of identity in the case of material objects such as ships, it is more difficult for most people to allow this in the case of the identity of persons. To many people, who I am in the future does not seem to be the kind of thing that could be indeterminate. Indeterminacy about personal identity might seem to be disturbing not only to the substance dualist but even to materialists, functionalists, property dualists, and our authors.

Let's assume that we can resolve the seeming conflict of the two criteria of body identity and agree on when we have the same body and when we don't. Is body identity the proper criterion of personal identity? Some thought-experiments about body switching and brain transplants suggest that it might not be. Thought-experiments are commonly used to consider these issues. The basic idea is that one comes up with an imagined scenario that is logically possible though usually not possible with current technology. Then you ask 'What would I say in this scenario about personal identity?' The belief is that we all know what personal identity is, but when we consider only ordinary, everyday situations we have trouble getting at these 'intuitions.' So we can better unearth them by working through nonstandard scenarios.

For example, consider what would happen if you were in an auto accident and your body was mangled beyond repair, but doctors saved your brain and transplanted it into a different body (whether artificial or natural). Now of course the brain is part of the body but here we distinguish it from the rest of the body. Many people would hold that the person waking up in that new body and remembering the horrible accident would in fact be you. If this is so it belies the claim that sameness of body is what constitutes sameness of person. One might imagine the hospital presenting the bill not to the mangled corpse but to the body in the hospital bed, which, on this view, is where you would be. If you would agree with this analysis of the situation, then clearly body identity can't be the proper criterion of personal identity.

Brain Identity

The above criticism of the body identity theory claims that we would believe two person-stages were of the same person if they had the same brain, even if the body had been replaced. This naturally suggests that the criterion of personal identity should be not body identity but brain identity. On this view, then, two person-stages are of the same person if and only if the person-stages have the same brain.

A preliminary formulation of the brain identity theory might be stated as: 'X at time t1 and Y at time t2 are the same person if and only if X has the same brain at time t1 as Y has at t2.' This initial formulation may need to be modified to clarify situations in which a person loses part of his or her brain in an operation. For example, occasionally a significant portion of a child's brain may be removed to stop incapacitating seizures that the child would otherwise continue to experience. One can imagine a bizarre scenario in which a portion of someone's brain was removed and placed in another person's body--who now has the same brain: the original body, the other body, or both? So an amended version of the brain identity theory might be: 'X at time t1 and Y at time t2 are the same person if and only if enough of the former's brain survives at time t2 to support consciousness and be the brain of Y, and if there is no other person Z existing at time t2 who has enough of X's brain to support consciousness' (Baillie, 1993, pp. 9-10). In this and other possible scenarios of brain changes, we may be plagued by wrangling over whether sameness of brain is constituted by spatio-temporal continuity or identity of parts. But as in the case of our consideration of the body identity criterion, let's assume at this point that these have been resolved and that in any test case we can agree on whether or not we have the same brain.

Brain identity is a more popular theory of personal identity than body identity, but as in the case of the consideration of body identity, there are a variety of science fiction-like thought-experiments that purport to cast doubt on the plausibility of brain identity as the criterion of personal identity. In such scenarios, through teletransportation (as in Star Trek) or advanced medical techniques one might have the information from a diseased or damaged brain transferred to a new brain which is then transplanted into one's body or a different body. Clearly something analogous to this is what our authors think will happen in human-computer mind transfer. Many people would say that the person with the new brain and body is the same as the one undergoing the procedure in the first place: after all, the person with the new brain would remember going into the transporter room or hospital to have it done, so wouldn't this indicate that it is the same person? Such an analysis would seem to belie the claim that brain identity is what constitutes personal identity.

Consider an example that Baillie borrows from Parfit, intended to cast doubt on brain identity as the criterion of personal identity. I have been diagnosed as having a fatal brain disease, and offered the possibility of having my neurons replaced by healthy ones either in a series of operations (1% at a time) or all at once. Full psychological continuity will be maintained in either case. In the case of the series of operations, each new chunk of replacement neurons becomes structurally and functionally integrated with the original brain and thus can be regarded as part of it and as preserving my identity (on the brain identity criterion) through the part-replacement. But the situation is different in the case of the all-at-once operation, where my brain is (seemingly) destroyed. On the criterion of brain identity, I survive in the first case but not the second. But how can this be correct? Many people would say I survive in both cases. Since full psychological continuity is maintained in both cases, surely the difference in how this is achieved cannot mean that I survive in one case but not in the other (Baillie, 1993, p. 24). Because I seem to survive in both cases, and the brain identity theory entails that I survive in the first case but not the second, the brain identity theory cannot be correct.

Note that the argument above relies on the view that the all-at-once operation entails the destruction of my brain. Baillie notes that a different analysis, suggested by Brennan, is that such an operation does not really entail the destruction of my brain, despite appearances, since my brain survives to a high degree in its replica. On this view, my old brain survives since the replica was causally related to the prototype in the copying process (Baillie, 1993, pp. 24-25). This is apt to get very confusing. While I might say on this theory that my brain can survive to a high degree in its replica, this does not mean that I have to hold that my old brain and the new brain will be the same brain. I could say that here we have brain identity, or I could say that we have some other relation between the two brains that could be considered some form of survival without identity. This latter view would be analogous to what Parfit winds up developing with respect to persons. So the analysis of this type of situation is not clear cut, and while some might think it shows brain identity is not the criterion of personal identity, others might object.

Soul Identity

The body identity theory and the brain identity theory are both reductionist in that once you've established the relation of sameness of body or brain among body stages or brain stages (diachronic identity), there is nothing more to say about personal identity. Being the same person just is having the same body or brain through the stages. But note that one can be a materialist and yet not buy into the body identity theory or the brain identity theory of personal identity. Saying that all we are is just matter is not the same as saying that we have to remain a particular piece of matter, that our identity as the same person over time is bound up with just this particular hunk of matter (assuming that could be established through all the molecular changes, etc.).

In contrast to the above reductionist view, a nonreductionist would claim that establishing a sameness of body or brain, or maybe even of mind, still does not establish personal identity. One clearly nonreductionist theory might be called the soul theory or soul identity theory. The soul identity theory holds that person-stages A and B are of the same person if and only if the soul associated with A is that associated with B, that is, a person is the same person if the person-stages share the numerically same soul. Obviously to hold this view one would have to believe in the soul, as the substance dualist might.

What is the soul? The general public, and much religious thought (though not all), seems to hold vague notions that we are or have souls. The average person might claim that over and above a person's body (including the brain) there is an immaterial soul. This is what makes us different from other animals. When we die physically, our soul goes on. What is left unclear is the relation of this soul to the mind--are we talking of two things or one?

From a traditional Roman Catholic perspective (such as that of Aquinas), the soul is taken to be an immaterial substance that is indestructible (by any natural means), simple (not having parts), unchanging, and given to each individual by God. God associates the person's soul with the person's body, and the soul survives any possible natural destruction of the body. Since the soul has no parts and is not a spatio-temporal object, normal criteria of identity would seem not to apply. The authors championing the extraordinary future obviously do not hold this type of view, even making fun of it. We have covered some issues relating to such a view in our discussion of substance dualism.

Here I use the phrase 'soul identity theory' to cover what might otherwise be distinguished as several theories or variations, because it is not always clear what 'soul' refers to. Sometimes the soul is considered to be an immaterial substance not identical to the mind. At other times 'soul' is taken to refer to the mind (including the emotions and spiritual side), but even when this happens the understanding is that the mind is a distinct substance (as in substance dualism) rather than identical to the brain, etc. (as in the type-identity theory). So on this view two person-stages are of the same person if the persons in the person-stages have the same mind or mental substance.

Among modern thinkers the most visible philosophical defender of a soul identity type theory is Richard Swinburne, whose view is called 'the Simple View.' Swinburne holds that questions of personal identity have (metaphysically) determinate answers in all cases even if we cannot know what they are. Among his reasons for holding such a non-reductionist view is that he claims to be able to coherently imagine continuing to exist in a disembodied state. In this state he could operate on and learn about the world without having to use a particular body ('chunk of matter') for this, and he could shift the focus of his knowledge and control simply by choosing to do so. Since this is logically possible, and since it would be impossible for a purely physical being to do this, he must be more than a purely physical being. He also claims his existence over time to be a datum of experience, as is the experience of oneself as being the common object of simultaneous experiences. Others dispute whether Swinburne can really imagine what he thinks he can, whether his imagining it shows it to be possible for him personally, and whether his knowledge of his existence over time really is not derived from other experiences (Baillie, 1993, pp. 46-54).

I mentioned that one version of the soul theory takes the soul to be distinct from the mind. On this view, a person's thoughts and personality are associated with the person's mind and not the soul, and so it becomes difficult to see what the soul has to do with personal identity. If what I take as my person-stages have the same mind, brain, and body for my whole life, then am I not the same person no matter how many souls I have or whether I have one at all? If it is claimed that Teddy Roosevelt and Julius Caesar have the same soul, and are hence the same person, one can object that it is not obvious why having the same soul makes the same person of what seem to be two distinct human beings. If it is claimed that a human being may have several souls during the course of his or her life, and hence that, for example, George Washington was actually several successive persons, one can object that it is not clear that George Washington is more than one person. We are looking for the criterion of personal identity that would make George Washington the same person throughout the life of what we commonly agree are all the George Washington person-stages (cherry tree incident, Valley Forge, President, etc.). If it is claimed that each human being has an indwelling soul that starts with birth and ends with death (or doesn't end), and which therefore makes him or her a person, one can wonder why a mind and body without such a soul is not a person. So if the soul is not the mind, then the onus is on the defender of this view to specify why and how the soul has anything to do with personal identity.

But even if we grant that the person's identity goes with the soul, we then need to consider the question of the criteria of identity of the soul; that is, what makes two soul-stages stages of the same soul? We need to consider it if we are ever to understand what it means for two person-stages to share the same soul and thus be the same person. Unfortunately, usually no criteria are offered, since it is commonly claimed that we have no access to the soul. In this case the metaphysical question of personal identity is replaced by an epistemological one. We commonly take it that we can in ordinary cases tell whether two stages are of the same person, but if it is the soul that determines this, and we have no access to the soul, then we can never know whether the humans we observe are becoming different persons every day. Take a current person-stage of me or you, and compare it to what we all assume is another person-stage of me or you from yesterday or even five seconds ago. Are these two stages of the same person or not? We don't know. So you can see why this theory does not have much appeal among philosophers, though as long as it is kept vague it seems to have appeal among some religious people. I am not saying the theory is false or incorrect, only that it makes personal identity matters difficult to work with.

What of the theory that the soul is really an immaterial mental substance (identical to the mind)? If the mental substance is just reducible to mental states, then we are no longer discussing the soul theory but have abandoned it for a reductivist position (see the mental state identity theory below). If we are to remain nonreductivist, then the question of the identity of the person seems to be merely thrown back one level further: what is the criterion of mental identity? Non-reductivist answers to this question are hard to come by, and it seems we are close to the problem we faced above in trying to determine the criteria of identity of the soul. And though the metaphysical issue and the epistemological issue are distinct, we really do need to answer the metaphysical question of mind identity if we are not to be left with an intractable epistemological question: if I cannot answer it, then I will know neither whether other people's person-stages belong to the same person nor whether what I ordinarily think of as my own person-stages really belong to the same mind, and therefore the same person. If one claims that we can never know whether two person-stages have the same mind, we are back to the epistemological question that we faced with the other soul theory above. Thus a problem with this view is that it replaces the question of the identity of the person with the question of the identity of the mind, and either we take a non-reductivist position that leaves matters mysterious and leaves us with an intractable epistemological problem, or we wind up reductively using one of the other criteria suggested for the identity of the person as the answer to our new question about the identity of the mind.

No two ways about it: the soul theory just leaves the issue of personal identity in a muddle. It raises a lot of important issues that, unless we are God, we can't seem to answer.

Mental State Continuity

Perhaps even more than in the case of the other theories, the phrase here denotes a variety of related positions on the issue. The term 'mental state identity theory' refers to any reductivist theory that holds that two person-stages are of the same person if and only if there is some sort of continuity of consciousness between the two. Versions of this criterion are called the memory theory, psychological continuity theory, psychological state theory, and the like.

Defining exactly what this continuity of consciousness must be has proven tricky. I can make a basic distinction between two types of mental state identity theory. One type claims that some sort of continuity of memory is the key and fully accounts for personal identity. Locke is known as a traditional proponent of the memory criterion. The kind of remembering Locke has in mind is not remembering particular facts that one has learnt (such as the multiplication table) but what have been called 'experience-memories.' These are memories of experiences that happened to you in an earlier person-stage. The other type of theory allows that memory is important but wishes to supplement this continuity by some other sort of psychological continuity. Thus on this view, other psychological facts and abilities need to be considered in accounting for personal identity. For example, candidates are the relation between an intention and the resulting act, or the persistence of a belief or desire. Or, moving from conscious to unconscious links, we can include the link between childhood experiences and adult character traits, fears, and prejudices. The similarity of mental or psychological characteristics or abilities may be relevant.

Recall that this theory holds generally that mental continuity is what defines personal identity. A typical statement of this criterion is: 'X at time t1 and Y at time t2 are the same person if and only if X is psychologically continuous with Y.' Bizarre scenarios can be created in which X is continuous with Y at time t2 but also with another Z at time t2, and so to preclude this one might add on: 'and X is continuous with no other Z at time t2' (Baillie, 1993, p. 11). More on this last clause later; it is a type of non-branching restriction that to some seems ad hoc and counterintuitive, while to others it seems necessary as the only way to prohibit bizarre outcomes.

What does psychological continuity mean? It is understood in terms of psychological connectedness, and here I follow Parfit closely. Connectedness involves a direct link, for example specific memories of past experiences ('experience memories'). A later person-stage of me is connected to an earlier stage if I at the later stage directly remember having the experiences had at the earlier stage. Parfit notes that there are other kinds of direct psychological connections, such as the relation between an intention and acting upon it, or beliefs and desires that continue to hold over time. We have already seen above that on some theories these other types of connectedness are as relevant as memory. Parfit also notes that connectedness comes in degrees, depending on the number of the direct connections between X and Y, for example. Strong connectedness occurs if the number of connections over any day is at least half the number of direct connections that hold over every day in the lives of actual people. Psychological continuity, on the other hand, is the presence of overlapping chains of strong connectedness (Parfit, 1984, pp. 205-206).
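Put schematically (the symbols below are introduced only for convenience and simply restate Parfit's definitions as paraphrased above, with N standing for the number of direct connections that hold over any day in the lives of actual people):

\[
\begin{aligned}
\text{StronglyConnected}(X, Y) \;&\iff\; \#\{\text{direct psychological connections between } X \text{ and } Y\} \;\ge\; \tfrac{1}{2}\,N\\[2pt]
\text{Continuous}(X, Y) \;&\iff\; \text{there are stages } S_0 = X,\, S_1,\, \ldots,\, S_n = Y \text{ with } \text{StronglyConnected}(S_i, S_{i+1}) \text{ for each } i
\end{aligned}
\]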

Overlapping chains of memories get around some quandaries posed as objections to this type of theory. For example, if memory is the criterion of identity, then if a young adult remembers childhood events he is the same person as the child, and if as an old adult he remembers young adult events then he is the same person as the young adult. But if as an old adult he can no longer remember childhood events, he is not the same person as the child, which is absurd and shows the memory criterion must be wrong. (This criticism of Locke's theory was first made by Thomas Reid.) The above formulation of the criterion gets around this by its use of overlapping chains. That is, psychological continuity is what allows me to be the same person as earlier stages that are now lost to my memory, because I might be connected to a stage that could remember that earlier stage even if I cannot now do so. This overlapping of several instances of connectedness is captured in the notion of continuity.
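The role of overlapping chains can be made concrete with a small toy model. The sketch below is only an illustration (the stage names and the 'remembers' links are invented for the example, and connectedness is reduced to a single memory link): person-stages are treated as nodes, direct experience-memories as edges, and continuity as the existence of a chain of such edges. The old adult then comes out continuous with the child even though no direct memory link joins them, which is just how the continuity criterion answers Reid's objection.

```python
# Toy model: psychological continuity as overlapping chains of direct
# memory connections. Stage names and links are invented for illustration.

remembers = {
    "old_adult":   {"young_adult"},   # directly remembers young adulthood only
    "young_adult": {"child"},         # directly remembers childhood
    "child":       set(),             # earliest stage in the example
}

def connected(later, earlier):
    """Direct connectedness: a direct experience-memory link between stages."""
    return earlier in remembers.get(later, set())

def continuous(later, earlier):
    """Continuity: some chain of direct connections joins the two stages."""
    frontier, seen = [later], set()
    while frontier:
        stage = frontier.pop()
        if stage == earlier:
            return True
        if stage not in seen:
            seen.add(stage)
            frontier.extend(remembers.get(stage, set()))
    return False

print(connected("old_adult", "child"))    # False: no direct memory of childhood
print(continuous("old_adult", "child"))   # True: continuity holds via young_adult
```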

We can't get into all of the variations possible on the basic mental state continuity family of theories. It has proven to be the most popular type of theory, but various objections to it have been made. For example, it has been charged that the memory criterion is circular. Basically the charge is something like the following. If one claims that personal identity consists in my remembering my past experiences, I have already presupposed identity in using the phrase 'my past experiences.' Ability to remember past experience cannot constitute identity if it already presupposes such identity. Consider the fact that we distinguish between veridical and apparent memory--people sometimes recall doing things they didn't. A famous example is when old King George IV of England 'remembered' his leading troops at the Battle of Waterloo, though he wasn't present on that battlefield. But how do we distinguish between real and apparent memory except to say that real memories are memories of things you really experienced, which presupposes we already have settled the matter of personal identity (Noonan, 1989, p. 13)?

There have been formulations of the memory criterion that try to avoid any such circularity, and similarly Shoemaker introduced the concept of quasi-memory (Q-memory). Parfit's version is that I have a quasi-memory of some experience e if and only if: (a) I seem to remember having e, (b) someone had e, and (c) my apparent memory is causally dependent, in the right sort of way, on this past experience of e. The right sort of way is then spelled out in a way that is not circular. Ordinarily in memory the past experience causally affects the neurons of a brain to record it as a memory trace. This would count as one permissible causal route. But there might be other causal routes that we would allow. For example, if the memory trace of having e were transferred from A's brain to B's brain, then we might hold that B could now quasi-remember e. Ordinarily we think that we can remember only our own experiences, but it might be that in this way we could quasi-remember the experiences of others (Baillie, 1993, pp. 40-45). Obviously deciding which way we want to go on the matter of allowable causal routes is going to be crucial to our decisions about maintaining identity through some of the bizarre thought-experiments we come across. Some of this might be very relevant to questions about identity in the extraordinary future. Requiring strict kinds of causal routes might preclude identity where allowing looser kinds of causal routes would maintain it. It does seem unsatisfying to rely on one's 'intuitions' on such a matter, but I don't know that there is anything like a consensus on just what kind of causes are going to be permissible to maintain identity. This means that our authors might have available a variety of options.

Even if we clean up the memory criterion to meet the objections above and reach some kind of agreement on it, some bizarre thought-experiments have been proposed which to many seem to call into question this criterion. One major problem with the memory criterion--or variants--has to do with such matters as fission and branching. (In our earlier suggested definition we noted a possible non-branching clause to keep us out of branching difficulties.) I don't know that we need to be too precise about these terms, but fission has to do with splitting, as in the case of bisection of a brain and transplant into two bodies, and branching has to do with that and other kinds of transfer of minds into more than one body. If there were two individuals in the future remembering being me (after fission of my brain, say, with each of them getting one of my brain halves), I can't have both of them be me without them being each other, which to most people they are clearly not (recall the transitivity of identity). As an example, consider a situation occurring through any number of logically possible scenarios (a brain is split in two and transplanted, a mind is downloaded into two newly created, empty brains, etc.), such that A's memories wind up in both a B-body and a C-body. After waking up from the procedure, the person in the B-body recalls experiences of A, and so does the person in the C-body. So each is psychologically continuous with A, and therefore the same person as A. But it does not seem plausible to say that they are the same person as each other, and so the transitivity of identity is violated (here A=B, and A=C, but B does not equal C).

Supporters of the psychological continuity theory could use 'non-branching' psychological continuity as the criterion, as we already have seen above in our suggested definition clause 'and X is continuous with no other Z at time t2,' but this may not be the answer. First, to some thinkers it just seems too arbitrary and ad hoc. Second, in the above example, if branching of identity is precluded but the procedure were to occur, then it would seem to mean the death of A, which is not obviously the right description to some thinkers. If neither B nor C can be A, what happened to A? The fellow must have died during the transplant. Not everyone thinks this is how we should describe the situation. Third, it seems to violate what has been called the 'only x and y rule' (Baillie, 1993, pp. 30-31). I think the rule originally came from Bernard Williams, but the following characterization of it is from Parfit. This rule claims that a theory of personal identity has to satisfy two requirements. The first requirement is that whether a future person will be me depends only on the intrinsic features of the relation between us and not on what happens to some other people. The second requirement is that since personal identity has great significance, it cannot depend upon some trivial fact (Parfit, 1984, p. 267).

Now the reason that a non-branching psychological continuity theory violates the rule is that it violates the first requirement. Why should the existence of a C-body individual have any effect on whether the B-body individual is A or not? In a transplant scenario, if A's memories wound up in only a B-body, we would grant that in the B-body was now the original A-person. The problem comes if A's memories were also somehow transplanted into a C-body, as in the above example. On the psychological continuity theory without the branching restriction, the C-body person would also seem to qualify to be A, which creates the problem. The 'non-branching' restriction tries to get us out of the difficulty by claiming that A's identity couldn't allow such branching. If this C-body person is around in this fashion, then neither B nor C can be A. But recall that the first requirement of the 'only x and y rule' is that whether a future person will be me depends only on the intrinsic features of the relation between us and not on what happens to some other people. So whether the B-body person is A should depend only on the relation between A and B, irrespective of any other relation involving C.

To many people this requirement of the 'only x and y rule' seems plausible. But so does the non-branching restriction. So we seem to wind up with no satisfactory resolution. If we don't use the non-branching restriction, we wind up with violations of transitivity. But if we allow the restriction, we wind up violating the intuitively appealing 'only x and y rule.'

To finish my discussion of theories of personal identity, I'll discuss two more theories. It is debatable whether either one of them allows a satisfactory resolution of our feelings of conflict over whether to prefer the 'only x and y rule' or the non-branching requirement. It is also debatable whether either one fits our intuitions about just what kinds of cause we wish to allow in maintaining memory links between stages of the same person or one person surviving in another. We may not even have clear intuitions about such matters, so there is no guarantee of a resolution of our puzzlement with or without these new theories. Parfit's theory in particular is meant to rid us of the bafflement we feel about personal identity in bizarre thought-experiments, but to some thinkers his theory has such counterintuitive implications as to leave us as puzzled as ever.

The first theory is Nozick's, known as the closest-continuer theory. The second is Parfit's; he actually thinks personal identity is not the relation we should be interested in.

Nozick's theory violates the 'only x and y rule,' but Nozick would rather give that up than throw out the transitivity of identity. The 'closest continuer' theory is the following. 'Whether X at t1 is identical to Y at t2 will always depend on who or what else is present at t2: X at time t1 and Y at time t2 are identical if Y's properties are causally dependent on X's, and if no other Z at time t2 stands in a closer (or as close as) relation to X.' Likewise, the 'closest predecessor' relation is that 'X and Y are identical if X is Y's closest predecessor; Y at t2 cannot more closely continue some other Z at t1 than does X.' The theory seems to specify a necessary but not sufficient condition of identity, because Y can be X's closest continuer yet not close enough for the identity relation to hold. But the theory does not specify how to measure closeness. In other words, if the theory were applied to the case of the ship of Theseus, it would not tell us which ship was closest to the original (Baillie, 1993, pp. 15-16). Nozick obviously rejects the 'only x and y rule' because he is concerned about that 'other Z.' To illustrate the theory, consider that if X at t1 has his brain-states captured and programmed into a clone body, then this clone Y is not identical to X, since X's closest continuer at t2 is himself. But if X were to die during the brain-state transfer, then Y would be X's closest continuer because of the psychological continuity between them, and so Y would be identical to X (Baillie, 1993, p. 17). This is the kind of arbitrariness that many thinkers, including Parfit, object to. I suspect intuitions on this are just going to vary.
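To make the structure of the closest-continuer rule explicit, here is a minimal sketch in code. The numeric closeness scores, the threshold, and the names are illustrative assumptions of mine; Nozick provides no way of measuring closeness, which is part of what critics object to.

# A minimal sketch of the closest-continuer decision rule described above.
# The closeness scores, the CLOSE_ENOUGH threshold, and all names here are
# illustrative assumptions only; Nozick gives no measure of closeness.

CLOSE_ENOUGH = 0.5  # assumed threshold: being closest is necessary but not sufficient

def closest_continuer(candidates):
    """candidates: list of (name, closeness, causally_dependent) tuples for
    the individuals present at t2. Returns the unique closest continuer of X,
    or None if no candidate qualifies or the top spot is tied (on Nozick's
    view, a tie means X has no closest continuer and so does not survive)."""
    eligible = [(name, c) for name, c, dep in candidates if dep and c >= CLOSE_ENOUGH]
    if not eligible:
        return None
    eligible.sort(key=lambda pair: pair[1], reverse=True)
    best_name, best_score = eligible[0]
    if len(eligible) > 1 and eligible[1][1] == best_score:
        return None  # tie for closest: no closest continuer
    return best_name

# X's brain-states are copied into a clone while X survives: X is his own closest continuer.
print(closest_continuer([("original X", 1.0, True), ("clone Y", 0.8, True)]))  # -> original X
# X dies during the transfer: the clone is now the closest continuer, so Y is X.
print(closest_continuer([("clone Y", 0.8, True)]))                             # -> clone Y
# Fission into two equally close continuers: X has no closest continuer.
print(closest_continuer([("B-body", 0.8, True), ("C-body", 0.8, True)]))       # -> None

The sketch also encodes Nozick's treatment of ties, which is taken up in the next paragraph.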

A potential weakness of Nozick's theory is perhaps more fully exposed in cases of a tie. If Y and Z both seem equally close continuers of X, Nozick's theory says that neither is the closest continuer and X has ceased to exist. To use Noonan's example, on Nozick's theory, in cases of my brain's fission and its transplant into two individuals, I do not survive, because with two equal candidates I have no closest continuer. Noonan notes, however, that I can ensure my survival by convincing someone to destroy one of the brain halves before it goes into the recipient body. I then survive because I then do have a closest continuer. But this means that my survival is dependent on the non-existence of someone who would not be me were he to exist! Noonan asks, 'But how can my survival be thus logically dependent on the non-existence of someone else?' (Noonan, 1989, p. 17) Of course Nozick is not convinced by such an example, and he seems to say as much when he rejects as false the principle 'if there could be another thing so that then there would not be identity, then there isn't identity, even if that other thing doesn't actually exist' (Nozick, 1981, p. 32).

Likewise, Baillie asks us to consider your concern for the transplant recipients if your brain were to be removed, divided, and transplanted into twins (he mentions triplets, but the point is the same). On Nozick's theory, neither one is you, since there is no closest continuer. But if you were told that the twins would soon be tortured, would you feel no special concern? Baillie thinks you would feel special concern; you would be about as concerned as if told that you yourself would soon be tortured, which he thinks shows Nozick's theory can't be right, since on that theory neither twin is you (Baillie, 1993, pp. 17-18).

Another potential problem for the closest continuer theory is that of overlap. Suppose one of my cerebral hemispheres is transplanted into a clone body, and I retain the other one. My body doesn't give out until later, so I don't die at the time of the transplant. There is thus a partial overlap in our (the clone's and my) lifetimes. If I had died immediately at the time of the operation, the clone would be my closest continuer upon my death and thus at that time become identical to me. But Nozick seems to allow that the clone can be me at my death even if there is a little overlap during which we both exist prior to my death. Nozick places a restriction on how much time can elapse with both of us surviving if the clone is to become me. If there is too much overlap, then the clone does not become my closest continuer upon my death and I do not survive. For example, if I had survived for several more years in my original body, then I in my original body would have been my closest continuer for too long, and the clone would not become identical to me after my death. Too much overlap and it's not me; just the right amount or less and it is me. How could we possibly agree on how much overlap is allowable here? To some thinkers, the idea of a dividing line specifying the amount of overlap allowed seems counterintuitive anyway, even were we to agree on where it should fall (Baillie, 1993, p. 19). Nozick allows that one minute of overlap would not preclude identity for the clone, but three years would be too much. But we are not going to be able to remove this puzzle; it remains a quandary intrinsic to the nature of identity over time (Nozick, 1981, pp. 44, 47).

Parfit's Theory

You step into a machine and momentarily lose consciousness. The body in the teletransporter room vanishes. An instant later an individual appears in a distant location with your appearance, memories, and so on. He even has the memories of stepping into the teletransporter at the original location.

Is that individual the same person as you? Did you really make the journey? Opinions might be divided; undoubtedly many would say yes. Perhaps some others would think you had died and it was a new individual created in the distant location, even though he recalls being you. But consider that one day you go into the teletransporter room and when the button is pushed you don't lose consciousness. Instead you feel a twinge of chest pain. Meanwhile, the other body winds up in the distant location as usual. Is it you? You can talk to that individual, so how could it be you? But why not if it was in the first case? You are told that the teletransporter is broken. The replica has been created as usual in the distant location, but the heart in your original body is damaged. Your original body will die in a few days. Will you die then, or will you go on living as the replica (Parfit, 1984, pp. 199-201)?

Someone seeking to understand personal identity is likely to become very confused by all the thought-experiments put forward, and this one is no exception. These scenarios are supposed to draw out our intuitions about personal identity, but after following all the possible variations of fission and fusion one hardly knows what to say. Parfit thinks that part of the problem is that we are focusing on the issue of personal identity when we should be focusing on something else, which he calls Relation R. We also assume that personal identity is determinate, and that in every thought-experiment about you there must be an answer about who you will be. But Parfit thinks personal identity can be indeterminate; in some cases there just isn't an answer, and the question is empty.

Parfit's view is tricky to understand. To explain it, I'll summarize some of his comments and throw in some examples he uses.

We have already been introduced to the distinction between psychological connectedness and psychological continuity. Parfit thinks this distinction is important to understanding personal identity. Connectedness involves direct links, and continuity involves overlapping intervals or 'chains' of connectedness.

Recall that personal identity is thought to be transitive. Parfit agrees, and since strong psychological connectedness is not transitive, he thinks that can't be what personal identity consists in. Psychological continuity, on the other hand, is transitive. What Parfit calls the 'psychological criterion' of personal identity holds that X and Y are the same person if and only if X is psychologically continuous with Y, the continuity has the right kind of cause, and there is no different person who is also psychologically continuous with Y (Parfit, 1984, pp. 206-207). Again, we have seen such a definition already, which drew upon Parfit's writings on this topic. So, in Reid's example, the brave officer as an old man may be connected to himself as a brave officer, and the brave officer connected to himself as a child, without there being any direct connection between the old man and the child. There is, however, the overlapping chain of the two connections that provides the kind of continuity we think makes for personal identity.
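Stated schematically (my rendering of the wording just quoted, not Parfit's own notation), with PC(X, Y) for "X is psychologically continuous with Y" and RC(X, Y) for "the continuity between X and Y has the right kind of cause":

\[
X = Y \;\iff\; \mathrm{PC}(X,Y) \,\wedge\, \mathrm{RC}(X,Y) \,\wedge\, \neg\exists Z \bigl( Z \neq X \wedge \mathrm{PC}(Z,Y) \bigr)
\]

The last clause is the non-branching restriction we met earlier.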

Let's focus on the notion of the 'right kind of cause.' Ordinarily what we consider the right kind of cause has to do with the fact that the present memory is related to my body, and I have the same body now as the one that originally experienced the past event in question. So the normal cause of continuity is provided by this body being the same. This normal cause is present when I seem to recall having an experience, I did in fact have it, and the apparent memory is caused by that past experience in the normal way: aspects of the remembered event affected my body at the time and registered in my memory, this memory continued to exist in the same brain, it gave rise to my present memory, and so on. Or in the case of other kinds of continuity, such as sameness of character, any changes in my character are brought about by normal processes of aging, having certain significant experiences in life, etc. (Parfit, 1984, p. 207).

Why are we confused about what to say in the above teletransporter case? We are trying to analyze the situation in terms of personal identity, when Parfit thinks we should focus on something else. Parfit notes that opinions vary about what is happening in the first story. Some believe they would be so transported, while others believe that they would have been killed and a new person created. This second opinion comes more easily in the case of the second story. Parfit calls the example of the dying, original me in the second story the 'branch-line case.' The replica has traveled on the 'main line,' but you are doomed to stay on the 'branch line.' The natural view is that existence as this branch-line individual is about as bad as death, since he will soon die. It doesn't matter that the replica will survive, because it seems natural to think that this individual is not you (Parfit, 1984, pp. 200-201).

Let's go back to the notion of the right kind of cause of psychological continuity. In ordinary life this is provided by having the same body throughout one's life. But in the teletransporter cases, the argument could be made that the replica does not have the original body. There would be psychological continuity, but it would not be provided by the normal kind of cause. Does this mean we would not have the presence of the right kind of cause? Is the normal kind of cause the only right one? Parfit notes that we can talk of the right kind of cause in a narrow sense, referring to only the normal kind of cause. But we can note another kind of cause we might consider to be the right kind of cause. A slightly wider sense of the right kind of cause sees it as any reliable cause. The cause of the continuity between the original me and my replica is reliable, though unusual. On a narrow cause understanding of the psychological criterion my replica would not be me, but on this wider understanding of the right kind of cause it would be. So maybe the replica would be me, as we thought in the first case (Parfit, 1984, p. 209).

But Parfit is still not satisfied with the analysis. Why are we not as happy with the outcome in the second case as we are in the first case? It must be because we are focusing on personal identity. We think personal identity is maintained in the first case but not the second (though we might be confused about both cases), and we think what matters to us is maintenance of personal identity in such scenarios. But Parfit thinks we are focusing on the wrong relation (Parfit, 1984, p. 201).

Parfit holds that personal identity is not what matters but rather something else, which he calls 'Relation R,' which is psychological connectedness and/or continuity with the right kind of cause. By the 'right kind of cause,' Parfit allows an even wider sense than the narrow sense above (normal cause) and the wider sense above (reliable cause). Parfit happens to think the right kind of cause could be any cause (Parfit, 1984, p. 215). We'll call this the 'widest' sense of cause.

Now consider the branch line case. If what matters is personal identity, Parfit thinks, then the fact that my replica will go on living when I die means essentially nothing to me. If what matters is Relation R, on the other hand, then the fact that my replica will go on living is 'about as good as ordinary survival' (Parfit, 1984, p. 215).

Let's step back and see if we agree with Parfit. He seems to be saying that we are wrongly concerned about personal identity, when we should be concerned about Relation R, which is about as good as personal identity. But what happens if even after hearing about Relation R, and how it is maintained in the replica, we are still unhappy with the second case? Does this show that even though Parfit thinks we should not be, we still really are concerned about personal identity? Apparently we live under some kind of 'false consciousness.' If so, we need to be convinced by Parfit that what we would get out of Relation R really is about as good as what we would get out of personal identity. That might convince us to stop being concerned about personal identity.

How can Parfit convince us to stop being concerned about personal identity? He could give us analogies to show us that something produced by any cause might be about as good for what we want as something produced by the normal cause. He could argue that we assume it is (metaphysically) determinate when in fact it is indeterminate. He could also argue that the notion of personal identity leaves us perplexed about puzzle cases in a way that Relation R does not. Perhaps a little more explanation would help. These tasks he undertakes. I'll summarize remarks he makes on these issues.

Let's consider Parfit's analogy. Suppose a normal person with normal vision glances at something. What enables him to see what he looks at has to do with his eye causing certain things in his brain. Thus we might think of 'seeing' as occurring only when this normal cause is at work. But what if the individual in question had his eyes replaced with electronic eye devices that produced the same type of visual images in his mind? Such causes wouldn't be normal, but would this preclude the individual from seeing? Parfit thinks it would not. The devices might be reliable, but even if they were not, then whenever they were working the individual would still be seeing. For Parfit, any cause would be good enough when it was working (Parfit, 1984, pp. 208-209).

Normally what makes me the same person from one moment to the next is psychological continuity caused by normal means having to do with my body, experience, and later memories. But for Parfit, what matters is that the future stage be psychologically continuous with me now in the sense that the later stage's memories are caused by anything whatsoever, whether or not the cause is reliable or normal. Psychological continuity without its normal cause is just as good as ordinary continuity (Parfit, 1984, pp. 208-209).

How does this apply to the puzzle cases of the thought-experiments? We assume that what matters in considering future stages is personal identity--that the future stage be the same person as I. If no future stage will be identical in person to me now, then it means I will have died. But to Parfit what matters in such a case is not whether any future stage is an identical person to myself. What matters is that a future stage be R-related to me. Relation R is psychological connectedness and/or continuity with the right kind of cause, and as mentioned, the right kind of cause for Parfit can be any cause, not just the normal one or a reliable one (Parfit, 1984, p. 215).

But, we say, will the replica really be me or not? No matter what happens to brains, bodies, and memories in these bizarre scenarios, we say, there must always be an answer to the question of whether or not some individual will be identical to me. Parfit disagrees. Maybe the concern for personal identity is based on the false assumption that personal identity is always determinate.

It is natural to assume personal identity is determinate. Under this assumption, there is a fact of the matter about whether some future stage would be me or not. It either is or it isn't. The future stage must either be me or someone else. If the destruction of that stage's body is imminent, the question 'Am I about to die?' has a definite answer. Another natural view to take is that personal identity is what matters in considering the future. So, for example, in considering two outcomes, the one that has you living another forty years is preferable to the one that has you dying next week. But these views seem to assume that personal identity is something over and above the relation among individual person-stages, as if there is a further fact of the matter over and above the description of all the particular facts of the stages and their relations (Parfit, 1984, pp. 213-215). Nonreductionists who believe in a substantial soul might hold this view, but apparently Parfit thinks there is no reason for reductionists to hold it. Instead they should see that personal identity can be indeterminate. After all facts about the person's stages and the relations among these have been specified, there are no further facts about personal identity left to discover. You should be concerned about the future, but for those stages R-related to you and not just those stages identical to you.

We are not separately existing entities over and above our bodies and interrelated physical and mental events. Our identity over time just consists of Relation R. Furthermore, it is not true that our identity is always determinate. In some cases the question about whether you are or are not going to die with the destruction of a particular body is empty. Personal identity is not what matters but rather Relation R with any cause, whether or not this is accompanied by personal identity (Parfit, 1984, pp. 216-217).

Parfit believes that personal identity is not what matters partly because no criterion of personal identity can satisfy two requirements that it must. The first requirement is that whether a future person will be me depends only on the intrinsic features of the relation between us and not on what happens to some other people. The second requirement is that since personal identity has great significance, it cannot depend upon some trivial fact (Parfit, 1984, p. 267). This is the 'only x and y rule' we mentioned before!

What should matter in the relation between myself and future stages? Parfit considers several types of cause that might be suggested: (1) physical continuity, (2) Relation R with its normal cause, (3) Relation R with any reliable cause, and (4) Relation R with any cause (Parfit, 1984, p. 283).

The first possibility, physical continuity, cannot be the right kind of cause. Any plausible physical criterion has to allow transplants of some parts, but then why could not everything, including the brain, be transplanted? Those who object that part of the brain must remain do so because they think it provides for psychological continuity, but then they are really using that criterion, not physical continuity. And this means that the second possibility is not right either. Physical continuity is part of R's normal cause, and if physical continuity is not what matters, then the normal cause of R is not what matters. Having my brain and body is normally what ensures psychological continuity, but it is that continuity that matters, not what normally provides it. I could have a different brain and body and still have the right kind of psychological continuity (Parfit, 1984, pp. 284-285).

Furthermore, the cause of the Relation R need not even be a reliable cause. Parfit asks us to consider the case of an unreliable treatment for a disease. Ordinarily we would want the treatment to be reliable, so that we could be assured of being cured after treatment. But if we were cured by the treatment, though it normally does not work, we would not object since what matters is the effect. The effect is "just as good" even though the cause happens to be unreliable. This shows that what matters is not the cause of Relation R but Relation R itself with any cause (Parfit, 1984, pp. 286-287).

If Relation R is what matters and not personal identity, what are we to make of the possibility of branching and even fusion, in which identities seem to get split, joined, and in general too confusing to trace? Can Relation R account for what is happening in such cases? Parfit develops a way of talking about such replicas that are, in a sense, descendants of oneself, though one is not personally identical to any of them. He calls them descendant selves, and they would look back on you as an ancestral self. Parfit notes that though all of one's descendant selves will be psychologically continuous with you, the farther away they get the less psychological connectedness there will be. How should one view these descendant selves? How close should we feel to them? Are they like sons and daughters to us, or something closer? Parfit seems to think that in this respect we may vary in our opinions of the relative importance of the two factors of continuity and connectedness (Parfit, 1984, pp. 300-303).

The situation gets more complicated if not only branching among descendants but fusion is allowed as well. The relations between generations get too complicated to use terms such as "descendant self." The picture becomes one of a honeycomb rather than a branching tree. One can even imagine creatures who change appearance over time, and with this gradually change the psychological connectedness between successive states of themselves. They are in a sense immortal. Psychological continuity pervades the entire chain, though connectedness holds only among adjacent parts. Here the distinction among successive selves may be one of degree, since connectedness comes in degrees too (Parfit, 1984, pp. 303-305).

Are you convinced that personal identity is not always metaphysically determinate and that what we should be concerned about is Relation R rather than personal identity? Parfit's theory represents an ingenious attempt to get us out of the quandaries wrought by the bizarre thought-experiments characterizing discussions of personal identity, but not everyone has been convinced. It seems a lot easier to be convinced that the notion of personal identity is inadequate than to believe that Relation R is just as good. And for many it is just very hard to accept that their future personal identity can be indeterminate, that there might be no fact of the matter of whether they continue to exist or not, of whether they have died or not. Relation R does seem to ensure a kind of survival, but not the kind we normally think of, which would preserve our existence as the same person. It seems to be somewhere between surviving as the same person and the notion one hears of sometimes that one "survives" through one's children.

So Parfit is up against powerful intuitions against parts of his theory. Consider his claim that what matters to us (though we don't realize it, apparently) is not that we exist in the future or that our personal identity continues, but instead that in the future there be people related by certain links of psychological continuity to our present selves. If this is true, then we should have no reason to prefer a future that includes our continued existence to a future without us but with another person who is thus psychologically related to us. This claim seems counterintuitive to many people; our interest in personal identity seems to be different from the type of interest we have in other things (Noonan, 1989, pp. 23-24).

What Can We Expect from Our Authors?

This completes our survey of issues and theories pertaining to personal identity. I now wish to make several points about our authors and this topic. After discussing these points, I can proceed to consider concrete options for them with respect to personal identity in the extraordinary future.

  1. It is too much to expect authors depicting the extraordinary future to have "solved" the problem of personal identity when professional philosophers have reached no consensus on this matter.

  2. Yet we may expect our authors to discuss the issue with more awareness of the subtleties of the possible positions and in a less cavalier fashion.

  3. Our authors need to clearly distinguish between the issue of human identity and the issue of personal identity.

  4. To some extent the viable options for personal identity for them will be limited by the particular view they take on the nature of mind and the mind-body problem, as well as on the mechanism of mind transfer they envisage.

  5. They should not expect a resolution of this problem but rather show that, on popular positions on personal identity, no insurmountable hurdles are created by human-computer mind transfer.

As we have done previously, to discuss the topic of this chapter we must make some assumptions that have been questioned in earlier chapters. So note once again that for the sake of argument we will assume that in the next century computers have become as smart as humans, that they have sophisticated robot bodies, that they are capable of being persons, including having phenomenal consciousness, and that a viable mechanism of transfer of mind from human person to robot person exists.

Let's get to the first claim above. It is too much to expect authors depicting the extraordinary future to have "solved" the problem of personal identity when professional philosophers have reached no consensus on this matter.

The proverbial "man on the street" does not possess the technical or philosophical concepts needed to fully discuss the issue of personal identity, yet that individual has a gut sense that he or she is the same person from one moment to the next. That person also can generally follow examples from "Star Trek," "The Six-Million Dollar Man," the remake of "The Fly" and other science fiction scenarios. In these stories people are beamed around the universe without loss of personal identity, have major body parts replaced and still maintain identity as the same person, and have their identities fused with those of others. However, common sense has its limits in analyzing this topic. When the stories get too bizarre, with multiple versions of yourself lurking around every corner or your genes becoming fused with those of a very different being (like Jeff Goldblum in "The Fly" fusing with the fly and then even the telepod itself), the common person chuckles and does not even try to sort out the relevant issues. After the above discussion, it's pretty obvious he or she wouldn't be able to anyway. Perhaps nobody can.

I suggest it is too much to ask our authors to be able to sort out all of the relevant issues either. Professional philosophers have baffled one another for decades with bizarre thought-experiments that leave just about everybody perplexed about what to say. It seems as if there should be a "fact of the matter" about personal identity (I don't choose to be the same person, I just am and discover that fact). Yet on some of the scenarios, it seems almost an arbitrary decision about whether and how identity is preserved. No wonder Parfit thinks that identity can be (metaphysically) indeterminate and that we need a new way of looking at the problem.

In this light Searle's (1999) satirical comments against Kurzweil (made in Searle's review of Kurzweil's new book) are a bit of a cheap shot. With reference to Kurzweil's view that we will be able to create multiple versions of ourselves, Searle rhetorically asks "Who will get my driver's license?" Searle insightfully criticizes many aspects of Kurzweil's views, but not here. If Searle has an answer to such a question, one that would convince all other professional philosophers and eliminate future debates about personal identity, let's see him put it on the table and resolve the issue.

On the other hand, Searle's cheap shot does highlight the fact that Kurzweil, representative here of our other authors, does not seem to realize that he is treating this issue in much too cavalier a fashion, which is my second point above. If the program is one of human-computer mind transfer as a way to achieve personal immortality, then the issue of personal identity needs to be clearly stated, options explored, and plausible arguments given about how and why personal identity would be preserved in various scenarios presented. Our authors to varying degrees seem to think that if something relevant to personal identity is preserved in the transfer, then this is sufficient to maintain identity through the transfer. But they nowhere undertake the project of spelling out necessary and sufficient conditions for personal identity and then, proceeding through the various scenarios, show when and how personal identity is maintained. They ask some questions about personal identity, minimally state what they think it seems to consist in, show that we can become baffled about it, and then say in effect, "But don't worry, your identity will be preserved in the transfer."

Paul and Cox label the view that personal identity would be maintained through human-computer mind transfer the "Key Assumption" but do not explore it in nearly enough detail to show why we should accept it. It is as if they think that by being candid about treating it as an assumption rather than a proved claim, and by noting that personal identity seems to involve continuity of memory or self-conscious awareness, they have been sophisticated enough about it. But this is not enough--we need at least an attempt to sort through how such a criterion is supposed to work or not work in the various possible scenarios presented. We need a consideration of the plethora of scenarios possible in the future, not just a few old transplant scenarios from the present. I think that, armed with the relevant distinctions, this task of exploring the options would be quite easy for our authors as long as they made no claims to be able to decide the controversy over personal identity. As it turns out, the problem might not be too little of you after the transfer but too much! But this does not undermine the project of human-computer mind transfer as the opposite would. It is almost an embarrassment of riches.

Before we proceed to a fuller discussion of options for our authors, I point out that our authors do not provide an adequate treatment of the distinction between human identity and personal identity. This may be in part because they have never clearly considered the difference between the concept of a human and that of a person. I discussed this in an earlier chapter. If the concept of a person is not the same as that of a human (and it is clear that it is not), then there could be nonhuman persons. If this is so, then it seems possible for one's identity as a person to be maintained in the human-computer mind transfer irrespective of whether one's identity as a human has been preserved.

Our authors seem dimly aware of the distinction between human and person. Kurzweil thinks the big question of the next century will be one of how to define what is human. The implication seems to be that after we accomplish human-computer mind transfer, we will wonder whether the resulting creature is still human. It's true we might wonder that. We might also wonder whether the new creature is a person and further whether it is the same person. These might even be more important questions, though Kurzweil doesn't seem to realize it. Maybe he thinks the question of how to define what a human is is the same as how to define what a person is.

I do not need to decide here the issue of human identity, though it may be of some interest to explore it. For example, one could hold that human identity is preserved if one maintains one's human genetic material. Human genetic material, after all, is what makes each of us develop into creatures who appear the way humans do rather than creatures who look like members of some other species. Yet one wonders if possession of human genetic material is a necessary condition. What if, while I were sleeping, an evil demon changed all of my genetic material into that from another creature but also changed the laws of physics such that that other creature's genetic material resulted in human-like characteristics? We commonly take it that a human genotype results in a human phenotype, to use the biological terms. The inner genes express themselves in human external characteristics. But what if I suddenly had a different species' genotype but still a human phenotype? Would it be true that I was no longer human? Or consider the opposite--what would I be if my phenotype were changed overnight even though I still had the human genotype? (Gregor Samsa of Kafka's The Metamorphosis turning into a cockroach?) It may not be obvious what we should say.

The view of essentialism, in this context, holds that I have certain properties or features without which I would no longer be the same particular person. Without these essential properties, I would no longer be the same "I." Two essentialists could of course disagree over what those features are. So, for example, if I changed my hair color, almost everyone would allow that I would continue as the same person, since hair color would not likely be considered an essential property. But what if I took on the body of a bird or lizard, with a correspondingly lower level of intelligence? Could "I" still have the body of a bird and still be the same person? Here opinions might differ. Obviously this is related to the issues discussed above about personal identity. If psychological continuity is what matters, then perhaps I could change many features of my appearance and yet be the same person. Could I still have the same memories if I were a bird rather than a human? It seems doubtful. Birds aren't very smart (hence the expression "bird brain" is considered derogatory rather than a compliment). We don't seem to have to worry about this, because no mind transfer scenario has us becoming less than human, intelligence-wise. We always anticipate becoming smarter than humans. So it may not matter with respect to essentialism, personal identity, and human-computer mind transfer if after the transfer I am no longer human, as long as I am as smart as a human.

On the other hand, if one's theory of personal identity stresses other psychological factors besides memory, and these other factors include psychological characteristics present in humans but not in the new robots, then perhaps personal identity would not be preserved if I were no longer human. If a robot could not feel the way a human does, or its body were so different from that of a human that its way of interacting with the world were radically different, or if it no longer even had a recognizable body, then one might argue that crucial psychological continuity would have been lost, and along with it one's identity as the same person. So these robots really might have to have what humans typically have psychologically, even if they are smarter. I am just suggesting that while not every human characteristic may be required for personal identity, perhaps a good case can be made that more than just comparable intellect is. It depends on one's understanding of "psychological continuity."

If essentialism is involved in maintaining personal identity, then Gardner's discussion of the many kinds of intelligence may be even more relevant than it seemed before. In considering what we need to build into a robot in the way of intelligence to make that robot able to receive a person's identity, we may need to take into account the full breadth of intelligence of the person transferring in. It may be that if the robot cannot match the musical intelligence of the person, for instance, and thus the alleged transfer turns a virtuoso into someone with almost no musical ability, we would consider this a case of an unsuccessful mind transfer. This point would still hold even if one disagrees with Gardner that such a capacity should be considered a distinct kind of intelligence. But note that even if we do not take such an essentialist position, and still consider this scenario to maintain the same person after the transfer, we would probably not consider it a desirable outcome. So perhaps there is a real sense in which we would have to worry that a mind transfer scenario would leave us less than we were before, intelligence-wise, unless our authors take Gardner seriously and try to provide the robot with the full breadth of intelligence, or capacities, possessed by the pre-transfer human.

As enumerated above, my next point is that to some extent one's view on personal identity will be limited by one's view of the mind or the mechanism of mind-transfer envisaged. I don't want to go into all the possible combinations here, but an example will suffice to illustrate the point. One is not likely to claim that personal identity consists of body identity (sameness of body) if one's view of the mind-body problem is some form of substantial dualism. Substantial dualism holds that the soul or mind is a substance distinct from the substance of the physical body, and one is likely to think that identity will track the presence of that soul or mind rather than the identity of the body. Thus, for example, if it turns out that certain religious perspectives are correct, and a future resurrection of some people occurs but with a "spiritual body" of some sort rather than a resuscitation of the worm-eaten corpse in the ground, then one might claim that due to sameness of soul the person's existence will continue though the original body doesn't. Here one's view of the mind (as a distinct mental substance that really makes the essence of the person) seems to limit what one holds about personal identity (namely that it won't consist of body identity). Or to go the other way, if one thinks that body identity is the criterion of personal identity, one is not likely to be a substance dualist. If one is a thoroughgoing materialist, on the other hand, one might be open to body identity as the criterion of personal identity. All I am saying on this point then is that optimally there will be some sort of consistency of metaphysical commitments among the views held on the various issues involved.

These preliminary points out of the way, let's consider what options are available for our authors. We have already presented their views on personal identity, though these will need to be clarified in the context of our examination of the basic theories and other issues discussed. But besides considering what positions they do hold, we will also want to consider what they could and should hold with respect to such issues. It should be remembered that I am not trying to resolve the debates about personal identity, nor am I suggesting that our authors could do so.

When we consider the human-computer mind transfer scenarios in the context of the basic theories of personal identity, we get a variety of possibilities, perhaps too many to consider. For example, consider some of the views that might be adopted if just mental state continuity in some form is selected as the correct theory:

  1. A copy of you is you irrespective of the number of other copies existing.

  2. A copy of you is not you as long as the original you, in the original body, is still around.

  3. If the position immediately above holds, then when the original body is destroyed, identity shifts to one of the other copies (the closest in some way--historical or otherwise).

  4. As above, but identity shifts only if the destruction is done gradually (as in a transplant of part for part).

  5. As above, with identity allowed even if the shift is not gradual, but identity shifts only if there is no large temporal and/or spatial gap (such as a week, a year, etc.)

  6. Same as any of the above scenarios, but identity is not maintained during fusion.

  7. Same as any of the above scenarios, but identity is maintained during fusion.

  8. Personal identity is not preserved during many scenarios of human-computer mind transfer. But this does not matter as long as survivors are created.

  9. Personal identity is not preserved during any scenarios of human-computer mind transfer, and neither are there any survivors of the person transferring.

It should be easy to recognize some of the crucial issues and positions discussed earlier playing out in the above depictions.

What should be the fundamental criterion on whose basis we can classify the options? We could proceed by taking each basic theory and seeing how it fits. Or we could take the types of transfer scenarios and see which personal identity theories fit each of these. Or we could consider first single survivor scenarios, then multiple survivor scenarios, and finally fusion scenarios and see how each might be possible on various personal identity theories. Or we could do something else. With so many options, there are numerous ways to proceed.

Let's just try to keep this as simple as possible. I'll start by figuring out what theory our authors do seem to hold. Then we'll consider each of the basic personal identity theories as discussed above and determine whether there would be any possible mind transfer scenarios to fit it. We may not go through all the possibilities but we'll probably cover the important ones.

Concerning the View of Our Authors

Although it may seem hard to believe, I am not really sure exactly what our authors think personal identity consists in. This is because they seem to have two different views operating. What seems to be their main view will be called the "first view" below. But they can be interpreted, at least, as having a variation on that, or maybe even a different view, implicit in some of their remarks, and I will refer to this as the "second view" below.

When they adopt the first view it is from the perspective of the identity of existing human beings. Our authors here think the proper theory of personal identity involves some sort of psychological continuity. It is not rigorously specified exactly what this continuity consists in, but various comments give us a good idea. Sometimes it is depicted as remembering. At other times it is seen as a "self-conscious awareness" that involves experience-memories and other types of psychological connectedness, such as that between intention and act. They do not always describe it in the clearest terms, but this looks to be a standard sort of mental continuity theory of personal identity. Clearly here history is important to maintaining connectedness and continuity, for such things as experience-memories are involved.

When they adopt this perspective, they can account for ordinary human personal identity well enough, but then they show signs of bafflement when considering some of the puzzle cases involving duplicates and replicas. So, for example, Kurzweil talks of Jack maintaining his memory and so forth through the various transplants he undergoes. Kurzweil sees the patterns of matter and energy that remain constant from one moment to the next in us as providing for personal identity. (I interpret "patterns of matter and energy" as meaning memory traces and traces of psychological traits in the brain, but even here he may be shifting to the second view.) They are only "semi-permanent," implying they could change over time. Something like psychological connectedness seems to be involved, along with the recognition that something like psychological continuity might be the real key to account for changes in connectedness (Kurzweil doesn't use these terms, but this may be what he is trying to say). Historical connections are important in that states of the self are able to remember earlier states. When he considers some puzzle cases as mentioned earlier in this chapter, Kurzweil wrestles with the problem of duplication of this pattern in more than one individual and professes he is perplexed. For instance, when "Jack" is scanned and then given a new brain and body, Kurzweil doesn't know what to say if the old Jack is murdered at the same time. If the old Jack is still around, it seems the new Jack is really Jack and the old Jack is Jack too, but if the old Jack is killed, the new Jack is a different person.

It is a little odd that Kurzweil reverses the usual interpretation of such a scenario. In Parfit's example, for instance, killing the old Jack as the new Jack is created is what allows Jack to travel via the teletransporter; if the old Jack is still around, Parfit thinks, this is a case in which we don't think the new Jack is the old Jack. But I don't want to highlight this difference with Parfit's intuitions so much as the fact that multiple copies baffle Kurzweil here. As well they might baffle everyone, though maybe in Parfit's way and not in Kurzweil's way. Historical connections are still important because the new Jack, when he is considered the same Jack, has all the memories of the old Jack, and the cause was that these were put into him based on the contents of the old Jack's brain.

Similarly, Paul and Cox think the key to personal identity is the conscious awareness of one's present and past. Consciousness of self seems to be the key, and memories are involved. They recognize that memory may be incomplete, and so on, in a way that shows they are struggling to come to terms with the fact that connectedness is relevant to personal identity; but continuity is really the key to a person remaining the same even though he or she might forget some earlier matters. And whether your own mind continues through changes in human brain chemistry or by moving from a human brain to a robot brain, what matters is that continuity of memory is maintained. But Paul and Cox too profess to be baffled when it comes to the case of two copies being made of one scanned individual. They seem to think there would be two persons, though they are not sure, and we get no resolution of the transitivity of identity issue.

Moravec seems to hold this first view in that he thinks personal identity involves consciousness and memory. But he is hard to pin down on consciousness, as we have seen. And the part that interpretation plays is unclear. Does my personal identity depend on who I interpret myself as being? On who others interpret me as being?

So on this first view of personal identity of our authors, some type of mental state continuity theory of a fairly standard sort is operative. It takes historical connections to be important, and has trouble dealing with duplicates and replicas. On the whole this view seems very reasonable.

But there seems to be another theory of personal identity lurking around that is not exactly the same. Moravec displays this one more often, but even Paul and Cox and Kurzweil show they hold it when they make off-the-cuff remarks at various times. This theory emphasizes other psychological traits holding constant more so than memory and historical connections. This view also seems to have more to do with the view of the mind as software. The software that is a particular person's mind seems to include the data as well as the program, of course, but on this view the data does not seem all that essential. On this view, personal identity seems to consist of maintenance of the same program in the sense of the same algorithms and so forth that define a particular person's mind, mental abilities, psychological characteristics, etc. If the mind is nothing more than a piece of software, then as long as there is some running of that software, we are dealing with the same person, even if there are large gaps in memory or we have multiple versions of the person accumulating different memories.

This view is not explicitly presented by our authors as a theory of personal identity but it seems operative in some of their comments. On this view, duplication or replication is not seen as a problem or source of bafflement like it is on the other view. Just the opposite--duplication is seen as an advantage and as a means of ensuring survival. This view comes out in comments made by all authors at one time or another that in the extraordinary future we should keep multiple copies of ourselves stashed around the universe in various places. The metaphor here is that of keeping backup copies of a file to ensure that you will be able to recover it if the original is destroyed. For our authors it is more than a metaphor. They really think that we will literally keep backup copies of our selves around. Of course when they advise keeping multiple copies around, they don't necessarily have in mind keeping functioning duplicates--multiple walking, talking robots, though that is not ruled out and occasionally mentioned as a way to ensure survival also. Rather the usual way of talking has it that copies of your code are what is stored around. In the event your main body is wiped out before you can port yourself to a new one, some kind soul could take the backup code copy and activate it in a new body, and thus you will find yourself still alive. In this sense your personal identity consists of the code. Two stages are of the same person if the code algorithms are qualitatively, though not necessarily numerically, identical.

This view does have some place for historical connections in that the copies of the code kept around will have some memory data, etc. in them. They will have whatever was in you as your program when the backup was made. And causal connections are involved because the backup will be directly causally linked to your original software self through the copying process. (Though there is no indication that this is essential. If you were lucky enough that lightning somehow created a complete copy of your code somewhere, that might be good enough.) The usual assumption is that somehow the copy will be based on some original scanning of you in the first place. But history will be less relevant than in the first view of personal identity above. Since you will change after the backup, there will be problems keeping the copies current with those changes. In fact, practically speaking, it will be impossible. Our authors apparently think that if you have to be recreated through activation of one of those backup copies, the fact that this new you will be out of sync with the you who was destroyed will not prohibit the new you from being the same person.

Clearly these two views are not in total conflict. They both see personal identity as some kind of continuity involving mental processes. But there does seem to be a difference, seen in the completely different perspectives taken on the issue of duplicates. The second view is much more positive about your existing as duplicates, but it achieves this by seeming to confuse the relation between two person stages of the same person with the relation between two person stages of two clones (who are not the same person). The first view does not do this. On the second view, multiple copies of the same software program are considered the same self, and too bad for the transitivity of identity.

This confusion between two stages of one self and two stages of two different clones does seem to call into question the plausibility of the second view. Consider an example. You go into a computer store and purchase a disk containing a piece of software. When you get home, you inadvertently damage the disk. So you go back to the computer store, pick up another disk of the same piece of software and proceed to walk out without paying for it. What do you reply when stopped by store security? Our authors, when holding to the second view of personal identity, would have you reply that you are really not stealing anything since you already bought the software. Store security would object that while you bought one instance of that software, you are trying to steal a different instance of it. Our authors would have you reply that on the two disks we really have the same piece of software since the algorithms, etc. are the same. Likewise, they think, your backup copy, once activated, really is you. But as in the case of the example, one could argue that this is based on a confusion.
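The type/token point at issue can be put in a trivial piece of code (an illustration of mine, not an example the authors give): two copies of the same program are qualitatively identical but numerically distinct.

class Program:
    """A stand-in for 'a piece of software': just its source text."""
    def __init__(self, source_code):
        self.source_code = source_code

bought_disk = Program("print('hello')")
store_disk = Program("print('hello')")   # a second disk of the very same software

print(bought_disk.source_code == store_disk.source_code)  # True: qualitatively identical (same type)
print(bought_disk is store_disk)                          # False: numerically distinct (different tokens)

On the second view, qualitative identity of the code is all that matters; store security, like the first view, insists that the tokens are distinct.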

Though our authors clearly think of the human mind as a program, and do at times seem to assume something like the second theory above holds, I will interpret their "main" position to be that of the first theory above, a fairly standard version of a mental state identity theory.

Options for the Physical Theories

I now consider whether our authors can provide for the extraordinary future by using any of the basic theories we explained above. Let me get the soul theory out of the way at the start. Our authors have no interest in the view that the mind is an immaterial substance, so they similarly have no interest in any claim that personal identity consists of sameness of soul. However, if it were to be the case that humans have souls, and were such souls to transfer over to robots in the human-computer mind transfer, I cannot see that our authors would face any new issues in providing for sameness of soul over time. So if personal identity consists of sameness of soul, and we find some intelligible way to interpret this and some practical way to identify when it occurs, then the same could apply to the robots humans port themselves to.

Our real interest, however, is in seeing how the extraordinary future might accommodate a physical theory and a psychological theory of personal identity and also Parfit's view. We'll consider physical theories first.

The two physical theories are body identity and brain identity. If personal identity consists of body identity, then we probably are not going to maintain personal identity in the extraordinary future scenarios of mind transfer. No one thinks that we will retain our old human bodies after the transfer; the whole idea is to replace our easily damaged bodies with ones far more durable. One could, however, imagine some attempt to say that a gradual replacement of parts of the human body with robot parts would retain sameness of body through spatio-temporal continuity. Pursuing this line of reasoning, one might argue that personal identity could be maintained through human-computer mind transfer. However, if sameness of body means sameness of parts, then this argument would not provide for preservation of personal identity. There just seems to be no scenario in which you would have the same parts of the human body present after the mind transfer. Theoretically, perhaps, one could try to put a robot brain into the old human body, and in this case maybe you would have sameness of body parts and therefore maintenance of personal identity even on this criterion. But this is departing pretty far from the vision of the extraordinary future. So unless you're going to alter the scenarios, there's no need to consider body identity further.

I would much rather pursue this line of reasoning with respect to the brain identity criterion. This is because, as we have seen, there is a scenario of human-computer mind transfer through neural part transplants. There are so many scenarios thrown around by our authors that I am not sure any one has a claim to being the dominant view, and so the transplant scenario has as good a claim to be part of the extraordinary future as any other.

The argument that brain identity could be accommodated in a mind transfer scenario would go as follows. As we have seen, the identity of a material object could consist in spatio-temporal continuity or in sameness of parts. Spatio-temporal continuity is clearly preserved in a scenario that envisions piece-by-piece, gradual replacement of parts of the human brain with their electronic (or whatever) equivalents. On one version of the scenario the person could remain conscious during the operations. The thought here is this: no one thinks personal identity fails to be preserved through operations such as heart transplants, even when an artificial heart is used, so why should it not be maintained through the transplant of some piece of the brain? And if it is preserved through one such operation, it seems arbitrary to say it is not preserved through other similar operations.

One is tempted to say that personal identity is preserved in such a scenario because the person has the same memories and continuity of consciousness, even though he or she winds up with a new brain. But this is to change the picture. What we are trying to argue is that it is the same brain after the transplants, and for that claim continuity of memory and consciousness is irrelevant.

That said, to be fair to this criterion, I think proponents of brain identity might take sameness of parts, in some fashion, to be the proper criterion of sameness of brain. If that is the case, the argument above fails to provide for preservation of personal identity through human-computer mind transfer.

One can imagine a Brain of Theseus example showing the two criteria of brain identity in conflict. A person goes into the transplant lab to have his brain replaced by a computer brain. As the parts are replaced over an interval of time, some enterprising fellow collects the old parts and reassembles them as the old human brain. Putting it in an available body, he announces that this in fact is the original person and that the one with the new brain parts is not. Or alter the scenario so that old electronic replacement parts are swapped for new ones and the reassembly proceeds from the discarded electronic parts. If one feels no qualms about violating the law of the transitivity of identity, then perhaps one would argue that we have two versions of the same person, and so on.
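The conflict between the two criteria can be laid out as a small simulation. The sketch below, in Python, is only a toy illustration under assumed names for the parts; it merely tracks which candidate brain satisfies spatio-temporal continuity and which satisfies sameness of parts.

    # Toy model of the Brain of Theseus case (hypothetical part names).
    original_parts = ["neural-module-%d" % i for i in range(4)]

    # The brain that stays in the patient: parts are swapped one at a time,
    # so it remains spatio-temporally continuous with the original brain.
    continuous_brain = list(original_parts)
    discarded = []

    for i in range(len(continuous_brain)):
        discarded.append(continuous_brain[i])             # collect the removed part
        continuous_brain[i] = "electronic-module-%d" % i  # install the replacement

    # The enterprising fellow reassembles the discarded parts in another body.
    reassembled_brain = list(discarded)

    # Criterion 1 (spatio-temporal continuity) favors the brain in the patient.
    # Criterion 2 (sameness of parts) favors the reassembled brain.
    print(continuous_brain == original_parts)   # False: no original parts remain
    print(reassembled_brain == original_parts)  # True: all and only the original parts

Each candidate satisfies one criterion and fails the other; if both were counted as the original brain, the transitivity of identity would be violated, since the two candidates are plainly not identical to each other.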

Options for the Psychological Theories

As I mentioned above, the remarks of our authors indicate that at least sometimes the theory of personal identity they assume is a fairly standard mental continuity theory. In recent centuries this type of theory has become the most popular among philosophers as well. So our authors would seem to be taking no great risk in holding it.

The work they ask it to do, however, may be beyond what this theory can handle. The puzzle cases on which the traditional mental continuity theories founder are analogous to the mind-transfer scenarios our authors depict as part and parcel of the extraordinary future. We have already described some of the problems that arise. In Parfit's example, we are not sure what to say about teletransportation on the main line, though the traditional mental state continuity theory would hold that the replica is the original person. But then in the branch line scenario, it becomes hard to believe that the replica is the original person, and the transitivity of identity is violated. In other scenarios of brain bisection and fission, and other kinds of branching, we wind up perplexed about what to say as the transitivity of identity is violated once again. These cases all have analogues in mind transfer. Analogous to the main line teletransportation story is a mind transfer scenario in which a robot replica of me is created, with the robot brain a copy of my brain or somehow its equivalent, including storage of my memories and so on. My brain is destroyed as the robot is activated. The branch line type of case occurs in a mind transfer scenario in which my brain is not destroyed at the time of the "transfer." Mind-transfer scenarios of branching are similarly easy to imagine, and they leave both Kurzweil and Paul and Cox wondering what to say.

Some of these problems might be overcome were our authors to adopt a version of Nozick's closest continuer theory. This would allow them to declare which continuer, the new replica or the original body individual, was the closest and thus identical to the original person. Puzzling cases of branching could be addressed in the way Nozick does, declaring conditions under which there is and isn't a closest continuer.

But our authors, and anyone accepting this way of accounting for personal identity in the extraordinary future, would have to be willing to live with what seem to some to be weaknesses in Nozick's theory. We have already gone over these above.

Options for Parfit's Theory

The final option to consider is for our authors to adopt a version of Parfit's theory. If ever there was a theory that seemed suited for human-computer mind transfer, this is it. It's a pity our authors seem never to have heard of Parfit, or at least never investigated his views, because they really might be able to use such a theory.

As in the case of Nozick's theory, much of the ground has already been covered in the basic explication of the theory. The theory at first seems to fit well with many mind transfer scenarios. Many types of mind transfer might allow identity, such as those analogous to traveling on the main line. But for cases of branching, identity need not be pursued. Relation R holding between the original person and the robot person will allow the original person to be concerned with the well-being of the robot person as if they were identical. Here Parfit would argue that the relation is as good as identity. The robot would not be the original person, strictly speaking, but the person's survivor or descendant self. All those copies that our authors want to stash around the universe could likewise be treated as descendant selves rather than as literally identical to the original. Even the cases of fusion described by Moravec could be accommodated in this sort of theory, as we have seen from Parfit's remarks.

However, though Parfit's theory shows great promise, our authors would have a colossal repackaging job on their hands if they were to adopt it. They have marketed human-computer mind transfer as a way to personal immortality. Though Parfit would argue that what his theory provides is as good as personal immortality, it would be a hard sell to the general public. Though there are various ways to characterize the implications of Parfit's theory, in a branching mind-transfer scenario what it amounts to is that you literally sacrifice your life to provide minds to other persons. The seemingly counterintuitive nature of Parfit's theory might mean that, even if Parfit is correct, unless one were converted wholly to his position, one would choose mind transfer only in situations in which identity was maintained. This would preclude branching scenarios, for instance. And if this were the result, then Parfit's theory would have no advantage over the traditional mental state continuity theory, for violations of the transitivity of identity would not occur if branching were not pursued.

With respect to the future of human-computer mind transfer, it may not matter which theory of personal identity is correct. Consider the above-mentioned point that if no one is convinced that Parfit is correct, then even if he is, no one will pursue just those cases where his theory has an advantage over the traditional mental state continuity theories. The same may hold for the traditional mental state continuity theories themselves. If people are convinced that body identity or brain identity is crucial to personal identity, then no one will pursue mind transfer on even the simplest mind transfer scenarios (no branching or other tricks) if one's original brain or body remains intact during the procedure, because it will look as if no transfer is taking place. The old brain or body is clearly still around, so it will look as if a new person has been created as a robot, rather than that a person's mind has been transferred to the robot brain. A transplant scenario might be the only option acceptable to the frightened person contemplating a prospective mind transfer. And even then the person would have to be convinced that via piecemeal brain part replacement, for instance, he or she was really keeping the old brain even though the parts were being replaced.

It may seem unlikely that Parfit's theory, Nozick's theory, or the traditional mental state continuity theory could be true and yet people not volunteer for mind transfer because of their gut feeling that something like brain identity is necessary and/or sufficient for continued identity. But a particular case can be described in such a way that brain identity seems correct, even if one agrees that the psychological theories better fit a variety of mind-swap scenarios. Bernard Williams describes a situation that seems to indicate that brain identity is really what we presuppose to be the right criterion. Suppose you are told that you are going to undergo a bizarre process. First your own memory and other mental states will be wiped out, then they will be replaced by mental states identical to those of some other person, and then you will be tortured. How will you feel about this impending fate? It seems likely that you will be quite apprehensive about what you will view as your future torture. If this is what you feel, then you must be presupposing brain identity as the criterion of personal identity, because on the mental state continuity theory what we have described is actually nothing more than the transfer of another person's mind into your old body. If you really believed that the transfer did bring with it the identity of the other person, then why would you feel in your gut that it was you who was about to be tortured (Ballie, 1993, p. 14)?

I have presented a variety of options for our authors that would allow personal identity to occur through human-computer mind transfer. Even if it would not occur, say on Parfit's theory, something just as good might occur. Given that none of these theories has won universal consent, and that what sounds convincing to one person might sound counterintuitive to the next, perhaps our authors should not tie their human-computer mind transfer scenarios too closely to any one theory.

But I have also suggested that theories like those of Parfit, Nozick, and even the traditional mental state continuity theory might sound convincing intellectually but seem counterintuitive to the average person, or go against long-held prejudices all of us have. In that case, even if such theories are correct and would provide for personal identity, or something just as good, in human-computer mind transfer, such prejudices might preclude mind transfer from becoming as widespread as our authors assume.