One of the interesting things that emerged from the two conferences (SPT2011 and CEPE2011) I attended in the US in late May was the number of talks and discussions on “moral” or “ethical” robots. For those of you who are not in the know, robots today are far more sophisticated and advanced than they used to be. The US military has been developing “killer robots” for some time, and it is now common practice for the military to send unmanned airplanes to target and bomb enemy positions. Soldier robots are also in development. The idea is to build robots that can function much like a soldier and that, in combat, can of course shoot and kill the enemy. A terrifying prospect.
Robots are not only being developed to shoot and kill. At the opposite end of the spectrum, there are robots that act as companions for those who need one but cannot find one of flesh and blood. Robots are now replacing humans as companions for the elderly in nursing homes, at least in the West. Instead of human companions, the elderly (and in fact not only they) are being provided with “companion robots” which look like either humans or cute pets and are supposed to be tender and gentle. We can certainly imagine human look-alikes in nursing homes that can talk and show (semblances of) emotion on their faces, providing the elderly with round-the-clock care and attention far more readily than a human ever could.
These situations call for ethical reflection. One question raised during the discussion on caregiving robots was: What does this signify about our own situation? If we give our parents and grandparents caregiving robots, what does that tell us about ourselves? But there was another question. Imperfect as the robots are, they are still better than nothing: if there is no one around to care for the elderly, then at least the robots can fill the void.
I wrote many months ago that a Japanese professor had already developed a robot replica of himself. He also created a robot girl that looked uncannily similar to a real girl. This of course raises the topic of robot sex. Many have taken up this topic, discussing whether it is good or bad for a human being to have a robot as a companion and sexual partner. Is having sex with a robot essentially the same as masturbation, or is it in the same league as having sex with a real human partner?
This may depend on whether robots can be self-aware and conscious. They are not, or so it seems, but the harder problem is that we humans do not even have a complete understanding of our own self-consciousness. We are still debating what it actually is, and according to the Churchlands we are essentially deluding ourselves when we think that there is such a thing as self-consciousness, or consciousness for that matter. But if the Churchlands are right, then we are also deluding ourselves when we ask of robots whether they can be self-aware. They can’t, because even we ourselves cannot, and in fact no being ever could.
Even if the Churchlands are wrong, we still have trouble explaining self-consciousness, so presumably we would also have trouble explaining why we believe that robots cannot become conscious.
Actually, the problem of whether robots can become conscious does not have to concern us here. What is more pressing is that robots are already around, working as soldiers, caregivers, and many other things. What should we do with them? Is it possible to install some kind of “ethics algorithm” into their “minds” so that they become ethical? So a very interesting question is: Can robots become more ethical than we are? And if so, what is left of us human beings?