Tuesday, March 20, 2012

2012 Reading 3: Do Androids Dream of Electric Sheep?

Did I miss the point? Knowing only that Do Androids Dream of Electric Sheep? by Philip K. Dick was a relatively early book on artificial intelligence, I expected a thoughtful exploration of what it means to be human. Here's what I got: The main character, Deckard, a bounty hunter who hunts androids, starts out believing that the ability to empathize is what separates humans from androids. After conversations, confrontations, and even sex with androids, he eventually concludes that... yep, the ability to empathize is what separates humans from androids. Seriously... that's it?
For background, Dick's androids are nearly indistinguishable from humans. They are made of flesh and blood like humans, speak and move like humans, and die like humans. The only way to distinguish them while alive is to test their reactions to emotional stimuli: since they lack the ability to empathize, their emotional responses are faked and slightly slower than a human's. The only way to distinguish them when dead is to test their bone marrow.
Dick touches on but doesn't explore a number of interesting issues:
1. The morality of slavery: Are androids in any way justified in killing their masters in order to escape to freedom? What if there is no way for them to gain freedom without killing?

2. Mood control: If you could dial up any possible mood on a machine, would you? Joy, rapture, perfect contentment, at the touch of a button. Would you ever, like Deckard's wife, dial up depression?

3. Empathy: Does the lack of an ability to empathize automatically make a person dangerous? Can we assume that, because androids can't empathize, they will eventually do harm?

Any thoughts?

1 comment:

  1. My quick after-work thoughts:

    1. This clearly depends on the circumstances. If the androids are being enslaved against their will, and are sentient beings (which I define roughly as being able to feel pain and/or emotion), then I believe they would be justified, from their point of view, in escaping their slavery by any means necessary. That doesn't make it morally correct, exactly, but if there is no option other than physically harming someone else, I'd say they could probably stun their captors to escape. If for some reason they need to kill (they are bound to the owner's life-force?), then I suppose killing is the price of freedom.

    After all, their creators willingly chose to create a sentient being and then enslave it. That doesn't seem like an upstanding or moral course of action.

    2. Yes, I'd probably experiment with it. I'd also likely want to experience both the positive and negative ends of the emotional spectrum, since without one you don't have a frame of reference for the other. I can imagine, though, that after a time it would become very difficult to feel authentically emotional.

    3. No - a lack of empathy may make someone's interactions with others quite difficult to figure out, but it doesn't imply that they must be dangerous. Their actions may be more likely to inadvertently cause harm to others - but then again, there are plenty of people who cause harm intentionally and who *are* empathetic with others. I don't think it can be assumed that androids will eventually do harm; that depends on how they act on their feelings, just like any other being.
