nic.at News - 30.03.2020 13:04
Robot ethics: "Technology is never neutral"
Dr. Janina Loh is a philosopher of technology and media at the University of Vienna and specialises in the ethical challenges involved in dealing with robots. In an interview at Domain pulse in Innsbruck, she talked about vacuum cleaner robots, values and female voices.
Dr. Loh, why do we need robot ethics?
Technology is never neutral, because it is a product of human action. No human action is neutral, so the products of human action are never neutral either. Values always flow into technologies. A vacuum cleaner robot, for example, is made to vacuum. The scientist Oliver Bendel designed a vacuum cleaner robot that can recognise ladybird-like objects and stop so as not to harm them. This has an ethical implication: the robot is told to look out for ladybirds - but not for spiders, for example. Another example is gender stereotypes in robots: a great many assistant robots are equipped with a female voice. That says a lot about the values we represent in this society. So we do not need a new ethics for robots, but a critical awareness of ethics in the construction and handling of robots.
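To make this concrete: below is a minimal, purely illustrative Python sketch (hypothetical names and logic, not Oliver Bendel's actual software) of how such a value choice ends up in a robot's control code - the robot only halts for objects its designers have chosen to protect.

# Minimal sketch of a value choice baked into a vacuum robot's control loop.
# Hypothetical names and logic; not Oliver Bendel's actual implementation.

PROTECTED_OBJECTS = {"ladybird"}   # spiders are deliberately absent - a value judgement

def control_step(detected_object: str) -> str:
    """Decide what the robot does when its sensors classify an object in its path."""
    if detected_object in PROTECTED_OBJECTS:
        return "stop"      # spare the ladybird
    return "vacuum"        # everything else, spiders included, gets vacuumed up

print(control_step("ladybird"))  # -> stop
print(control_step("spider"))    # -> vacuum

Which creatures appear in that protected list is not a technical question but a moral one, taken by the developers long before the robot ever starts cleaning.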
How do robots differ from other technologies?
Robots are autonomous: they can perform tasks without direct external influence. This autonomy makes them more independent than other technologies. A machine gun has many ethical implications, but it must be controlled by a human being. That is different for military robots, which are built, for example, to identify enemy targets. It must therefore be decided beforehand how the robot distinguishes an enemy target from a non-hostile object. And another question follows: should the robot be autonomous enough not only to identify the target, but also to take it out on its own? Robot ethics is thus concerned not only with questions around the construction of robots, but also with the extent to which robots can be regarded as independent moral actors. A further area of robot ethics deals with the extent to which robots demand moral action from us.
In other words, should we treat robots well?
Exactly. The scientist Kate Darling proposes that so-called social robots - those that work in the care sector, for example - should have rights, despite the fact that robots are not sensitive to pain and have no awareness of when they are being wronged. Kate Darling takes her cue from the philosopher Immanuel Kant, who said: "One should not treat animals badly - and not because animals are sensitive to pain, but because it reflects on us humans." Kant meant that we humans morally degenerate when we treat animals badly. Kate Darling shares this view in relation to robots: in her opinion, we should not treat robots badly, because doing so reveals our bad character.
Can robots themselves act morally?
Not at the moment - at least not judged by the usual criteria. To act morally, a being needs judgement, and that cannot yet be simulated artificially to this extent.
What basic moral principles do you have to programme into a self-driving car?
The famous "trolley problem" illustrates this topic wonderfully. It concerns the fifth level of autonomous driving: you get into the car, enter the destination and can no longer intervene in the driving process. Now a dangerous situation arises and the car has to decide: does it steer into the ditch and endanger the driver, does it swerve towards a group of children, or does it steer towards two older people? Do we accept the death of a few people in a dangerous situation in order to save many? There is no single solution to this thought experiment, because we have different ethical systems. In 2015, an ethics commission for autonomous driving was established in Germany. Two years later, it published a catalogue of principles which says, among other things, that in a conflict situation the car must not decide on the basis of criteria such as gender, number or age. This is a rejection of utilitarian ethics, according to which, for example, the lives of small children count for more than the lives of older people. The commission opposed this approach, but it has not said how the self-driving car should behave.
Do you have a suggestion?
We have a principle in our legal texts which points to an interesting way forward: "Human dignity is inviolable." Because of that dignity, every human being has an infinitely high value, and infinite values cannot be added up: two people are worth as much as ten people. If one were to follow this principle, one could say that in a conflict situation the autonomous car must keep to the previously programmed path, no matter who or how many people are injured. Of course I realise that this approach would be difficult for many people to accept, but I personally see no other way to solve the problem. Therefore, I think it is somewhat unlikely that we will actually reach the fifth level of autonomous driving. But I am a philosopher, not a prophet.
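To show how these positions differ once they have to be written down as rules, here is a simple, hypothetical Python sketch (not drawn from the commission's report or from any real vehicle software) contrasting a utilitarian policy, which counts the people endangered by each manoeuvre, with the "keep the programmed path" rule described above.

# Illustrative sketch only: two possible decision policies for the trolley-style
# conflict described above. Hypothetical and deliberately simplified.

def utilitarian_choice(options):
    """Pick the manoeuvre that endangers the fewest people -
    the kind of counting the German ethics commission ruled out."""
    return min(options, key=lambda o: o["people_endangered"])

def programmed_path_choice(options):
    """Keep the previously programmed path, regardless of who is affected -
    the approach suggested above, which refuses to weigh lives against each other."""
    return next(o for o in options if o["is_programmed_path"])

options = [
    {"name": "swerve into ditch",    "people_endangered": 1, "is_programmed_path": False},
    {"name": "swerve towards group", "people_endangered": 5, "is_programmed_path": False},
    {"name": "stay on road",         "people_endangered": 2, "is_programmed_path": True},
]

print(utilitarian_choice(options)["name"])      # -> swerve into ditch
print(programmed_path_choice(options)["name"])  # -> stay on road

The utilitarian rule compares outcomes and picks the "cheapest" one; the dignity-based rule refuses to compare at all and simply keeps the car's course.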
In 2019 you published the book "Robot Ethics. An Introduction". Is there growing public interest in the subject?
Every year I receive over 200 requests for panel discussions, interviews and lectures, including at schools. These are events that are often aimed at a wider public, so an awareness of these issues is being created in German-speaking countries. That was not the case a few years ago. But still, more needs to be done.
For example?
The topic needs to be discussed even more broadly in public, and in school lessons children can be taught that a moral judgement plays a role in every technological development. Ethics courses are also needed in the institutions that train the engineers of tomorrow. And there is a need for ethics commissions that deal with specific issues; at present, such commissions are far too general.