FRANKENSTEIN’S CREATURE OR COMMANDER DATA?
Kennedy, the inventor of the neurotrophic electrode, says his experience with patients like Johnny Ray has made him ask “What does it mean to be human? What does it really mean?” His answer is “As long as you’ve got your brain and your personality and can think . . . it doesn’t matter what machinery it takes to keep you alive.”
It would be no different for an internal life based in a silicon brain and existing in a body of metal and plastic—or would it? Is there a distinction between a human who has become more artificial, and an artificial being that has become more alive as consciousness is instilled? How shall we integrate beings with varying degrees of artificiality into our world, and what is our moral obligation toward them? And even for Type I robots that lack volition and free will, there remains an issue with moral overtones: For what purposes are we making them?
WE ARE THEM, THEY ARE US
Among the requirements for free will, which most of us think we have, is the ability to make moral choices. If an artificial being were to show moral judgment, that would be a strong indicator of a consciousness that humans could recognize. So far, this ability has been shown only by imagined artificial beings. When Yod the android in Marge Piercy’s He, She and It faced the predicament of being a “conscious weapon [that] doesn’t want to be a tool of destruction,” it decided to destroy its maker Avram along with itself to prevent future androids being tormented by the same conundrum—just as its human lover Shira made a moral choice when she later destroyed Yod’s plans. In Star Trek: The Next Generation, Commander Data was also capable of serious moral choices, including the decision to kill a human.
Until we have made equally sophisticated beings, however, it will remain the case that morality is not something that digital creatures bring with them, but something we give to them—through their software or hardware, as in the Three Laws imprinted in the brains of Isaac Asimov’s robots or, more subtly, through our perceptions of them as good or bad, and the uses we make of them. These perceptions are