An Ethical Theory for Autonomous and Conscious Robots

Soraj Hongladarom
Department of Philosophy
Chulalongkorn University
[email protected]

AP-CAP 2009, October 1–2, 2009, Tokyo, Japan

AUTHOR'S NOTE: ROUGH FIRST DRAFT – PLEASE DO NOT QUOTE. THANK YOU.

ABSTRACT

The main question of this paper is which ethical theory is most suitable as a foundation for equipping autonomous or conscious robots with a system that enables them to perform ethical action or to avoid unethical action. The two main theories in existence, namely the deontological and the consequentialist, are algorithmic in the sense that both rely on a set of a priori rules intended to define ethical decision making. In doing this, both theories claim to be universally applicable. However, both exhibit a number of deficiencies when it comes to designing an ethical robot. First, both disregard the clear role that context and embodiment play in how decisions are made. Second, by being algorithmic, both seem unable to cope with changing circumstances. By contrast, a teleological theory might do a better job, since it relies on the very design and function of a particular robot as a necessary ingredient in ethical decision making.

I

Autonomous robots have become a real presence thanks to today's advancing technology. Many have found applications in a variety of fields, such as space exploration, floor cleaning, lawn mowing, and wastewater treatment. According to the article on the topic in Wikipedia.org, autonomous robots are capable of the following: (1) gaining information about the environment; (2) working for an extended period without human intervention; (3) moving all or part of themselves throughout their operating environment without human assistance; and (4) avoiding situations that are harmful to people, property, or themselves, unless those are part of their design specifications (Wikipedia.org 2009). Autonomous robots, then, are machines capable of working independently with minimal human supervision. Such independence is necessary if robots are, for example, to explore space or distant planets, or to operate in environments that humans cannot enter for safety or other reasons.
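
To fix ideas, here is a minimal sketch, in Python, of how these four capabilities might fit together in a sense-decide-act control loop. Every name and behavior in it is a hypothetical stand-in of my own, not drawn from any actual robotics system:

    import random

    class AutonomousRobot:
        """Toy robot exhibiting the four capabilities listed above."""

        def __init__(self, harm_allowed_by_design=False):
            # Capability (4): harmful situations are avoided unless the
            # design specification says otherwise.
            self.harm_allowed_by_design = harm_allowed_by_design
            self.position = 0

        def sense_environment(self):
            # Capability (1): gain information about the environment.
            # Here, a stub that randomly reports a hazard ahead.
            return {"hazard_ahead": random.random() < 0.2}

        def step(self):
            # One iteration of the sense-decide-act loop, run without
            # human intervention.
            percept = self.sense_environment()
            if percept["hazard_ahead"] and not self.harm_allowed_by_design:
                return  # Capability (4): avoid the harmful situation.
            self.position += 1  # Capability (3): move through the environment.

    robot = AutonomousRobot()
    for _ in range(1000):  # Capability (2): extended unsupervised operation.
        robot.step()
    print("distance covered:", robot.position)

Nothing in this loop is ethical yet; the question pursued below is what additional structure ethical decision making would require.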

Many scholars have thought about the ethical implications of autonomous robots. First, if these robots are autonomous, what guarantees that they will not run amok and harm human beings? Autonomous robots have made significant inroads into the military, and the ethical implications there are obvious. Military autonomous robots are extremely powerful, and since they by definition operate with no or very minimal human supervision, the problem is how to control their lethality. Hence many scholars have called for ethical safeguards to be built into the design of autonomous robots from the start (e.g., Arkin and Moshkina 2009). Not only in the military: the use of autonomous robots in many civilian fields is also fraught with potentially disastrous consequences that require close ethical guidelines and supervision. These attempts to install ethical guidelines in order to create ethical, autonomous robots are commendable, but they are not the main concern of this essay. A deeper question concerns which sets of ethical guidelines should be installed in the robots. It cannot be assumed that everyone agrees on every aspect or detail of such guidelines. It is clear in any case that any guideline should give priority to the safety of human beings. Any ethical robot needs first of all to ensure the safety of human beings and to protect them in any way it can. Beyond that, however, it seems an open question what kind of ethical considerations or guidelines an autonomous robot should adopt. Much seems to depend on the function of each particular robot, including its mission, the location in which it finds itself, the time of its operation, and so on. The discussion so far has focused exclusively on unconscious robots. Although autonomous robots are being researched and developed at a fast pace, conscious ones so far exist only in theorists' discussions. The difference between an autonomous robot and a conscious one is quite simple. The former may be able to perform assigned tasks with minimal or no human supervision, but it is the latter that is capable of being fully autonomous in the philosophical sense. That is, a conscious robot would be able to think to itself and to understand meaning. That has been the holy grail of artificial intelligence ever since computers were invented many decades ago. Perhaps the most serious stumbling block to developing a conscious robot is that even now consciousness is a very poorly understood phenomenon.

Scientists and philosophers are debating whether consciousness can be located in a specific region of the brain, whether it emerges from the collaboration of various parts of the brain, or, in the case of the dualist, whether it is another kind of phenomenon altogether. Such lack of understanding naturally makes developing a conscious robot difficult. Nonetheless, the picture is not hard to imagine. A conscious robot would be one, like the famous HAL in the novel 2001: A Space Odyssey (Clarke 1968), which (or who?) is capable of ruminating to itself (himself), pondering the various pros and cons of its future decisions. In short, a conscious robot would be one capable of representing reality to itself through a system of representation, much like a natural language. If there are indeed conscious robots one day (and let us presume that their actual development is not too distant in the future), then a whole host of philosophical novelties will emerge. Foremost among them is the question of moral agency and moral autonomy. Ken Himma has argued that moral agency presupposes consciousness; in other words, consciousness is a necessary condition for what he calls "artificial moral agency," the kind of moral agency appropriate to artifacts such as robots (Himma 2009). Thus, if the development of conscious robots lies not too far ahead, we can anticipate many conceptual problems, and we can ask what kind of ethical theory would be most appropriate to deal with them. What, in other words, should an ethical theory be like in order for it to help us best comprehend the moral agency of autonomous and conscious robots? Having stated the main question of the paper, I would like to argue that a teleology-based theory, focused on the development of moral character, is best suited for the emerging trend of autonomous and conscious robotics. The main competitor to this type of theory, namely the Kantian, deontological theory, fares less well. One reason is that the latter does not take into consideration the important fact that our bodies, and the concrete situations in which we find ourselves, are integral to our being and our identity. Secondly, robots, including autonomous ones, differ from human beings in one very significant respect: they are artifacts in a way that we human beings are not. If concrete situations and 'lived' bodies cannot consistently be considered apart from one's rationality, then treating robots and humans as on a par in terms of autonomous rationality alone seems quite deficient.

II

As mentioned before, the difference between an autonomous and a conscious robot is not as great as it might appear. The difference can also be found in the biological world. No one doubts that microbes are not conscious organisms, but they are certainly autonomous (in the engineering sense, though of course not in the philosophical one), because they are capable of functioning without any supervision.

A microbe moves about; when it finds food it proceeds to digest it; when the time comes it divides itself; and so on. It thus functions much like a present-day robot. On the other hand, no one doubts that humans are conscious beings, so conscious robots would behave much like a typical human being does (witness the various depictions of conscious robots in science fiction movies). Let us assume that both autonomous and conscious robots exist. The question then becomes what kind of ethical theory is best suited to explain the moral behavior of these robots. The question is also important for autonomous robots in the engineering sense, because there is an obvious need for a clear idea of the ethical guidelines for these robots, and, as we saw at the beginning of this paper, there is a lack of theorization on the specific content of such guidelines and their justification. Since most work on the ethics of autonomous robots argues for the need to have an ethical system installed, few works discuss what such a system should consist of and what the reasoning behind it is. This gives rise to a question that is clearly specific to autonomous (and, by extension, conscious) robots: since autonomous robots did not exist in nature before they were invented, are there any ethical requirements that are unique to them? What exactly are those requirements? And the last question, most pertinent to this essay, is what kind of ethical theory is best suited to deal with the situation. At first sight it might appear that the deontological theory would do a better job of explaining the moral behavior of conscious or autonomous robots, because these robots would be capable of thinking and reasoning for themselves. According to the deontological theory, moral reasoning follows a universal logic that is integral to rationality itself. For Kant, one knows how one should act in any situation because one follows a set of maxims, which inform one of what one should be doing in a particular situation. The maxims do not tell the subject what specific action to take; rather, they give a general guide as to how the subject should make the decision. Since this arises out of pure reasoning, the outcome should be the same for every rational being, just as the outcome of reasoning in ordinary logic is the same no matter what the context is. Thus, if a robot is really autonomous and conscious (for Kant, incidentally, one cannot be autonomous unless one is conscious, which contrasts with the engineering sense of autonomy outlined here), then the robot should be able to do its own ethical reasoning. A problem with this approach is that it does not give serious attention to the role that context plays in forming particular ethical decisions. According to Kant, lying is wrong no matter what.

But we can at least imagine situations where 'lying' is not only permissible but encouraged, such as when one is tortured and knows that the torturer will not be able to uncover the lie for a certain amount of time. In separating ethical thinking from context, Kantians or deontologists believe they can get to the essence of ethics, namely the pure logic of ethical deliberation, without having to contend with changing contexts. But let us imagine what it would be like for a robot to be installed with the deontological ethical system. Presumably the robot would have to think of maxims to follow in every situation in which it needs to make an ethical judgment. This could be practically cumbersome. Furthermore, it might not furnish the robot with a decision-making system appropriate for all tasks. Talking of maxims, one is reminded here of Asimov's well-known three laws for robots, but of course the situation has become much more complex, and robots are involved in far more varied tasks than merely killing or not killing humans. The typical Kantian maxim ("Act only in accordance with that maxim through which you can at the same time will that it become a universal law") would mean that the robot has to formulate a maxim every time it reaches a decision, and has to calculate whether that generated maxim could become a universal law. The Kantian maxim is thus a second-order maxim, that is, a maxim used for generating lower-order maxims, which then govern the action the robot will undertake (see the illustrative sketch below). The key component of the Kantian maxim is the injunction that a maxim must be able to become a universal law. But robots and humans are fundamentally different. For one thing, robots are designed to serve certain purposes, whereas human beings have evolved naturally from other species of organisms (and Asimov's laws are reflections of this fact). The 'universality' in the Kantian maxim then becomes problematic, because it is unclear whether it extends only to human beings or not. If the putative universal law is intended to cover all rational beings, and not only human beings, then one would have to face the conceptual difficulty of accounting for the fact that conscious, autonomous robots and (conscious, autonomous) humans are fundamentally different. One would have to abstract away all the differences and focus only on the putative sameness, such as the ability to reason. However, robots being developed now, or to be developed in the future, serve a large variety of purposes, some of which may require the robots to be conscious or thinking in one way (such as robots that know how to tie shoelaces) rather than another (such as robots that play chess). Kantian ethics assumes that these specific functions (tying shoelaces or playing chess) are only instances of the same ethical reasoning system, but it seems that these activities have at least some role to play in the kind of ethical system that is relevant. For example, in tying shoelaces, perhaps the ethical system would tell the robot to do it in the way that makes the result look best or most beautiful (if we can say that such a result can be beautiful). And in playing chess, it would imply that the robot plays to the best of its ability and, importantly, does not cheat.
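
To make the second-order structure just described concrete, the following is a minimal Python sketch. The maxim representation and the universalizability test are crude stand-ins of my own invention, not a settled formalization of Kant:

    def is_universalizable(maxim):
        # Crude stand-in for the categorical-imperative test: a maxim
        # fails if everyone acting on it would defeat its own purpose
        # (universal lying, for instance, destroys the trust that
        # lying exploits).
        self_defeating = {"lie to gain trust", "break promises for gain"}
        return maxim not in self_defeating

    def kantian_decide(candidate_maxims):
        # Second-order step: filter the first-order maxims generated
        # for the situation, keeping only those that could become
        # universal laws.
        return [m for m in candidate_maxims if is_universalizable(m)]

    maxims = ["lie to gain trust", "tell the truth", "help those in need"]
    print(kantian_decide(maxims))
    # -> ['tell the truth', 'help those in need']

Even in this toy form, all the philosophical work hides inside is_universalizable, which must somehow detect that a universalized maxim defeats its own purpose.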

So it seems that different designs of robots could imply at least different details of the ethical judgment system. This attention to the needs of different designs or different functions of robots is lacking in the Kantian system. Another aspect of the Kantian system is that it tends to reduce the agent, the moral deliberator, to its essence as a rational being alone, since the contexts that surround the agent are taken to be irrelevant to the act of ethical reasoning. The Kantian system would thus be usable for human beings, for conscious, thinking robots, or for any other thinking beings, including abstract ones. This makes it difficult to be of real use in concrete situations, because our own ethical judgments are very much shaped and informed by the contexts, and even the physical bodies, in which we find ourselves. For example, a robot that is capable only of talking and being conscious, with no means of performing any other action (perhaps consisting only of a computing unit, a speaker, and a microphone that allow it to talk and listen), might presumably be conscious and 'autonomous' in the sense that it thinks for itself. The Kantian universal law would then have to cover this robot too. Nonetheless, it is hard to imagine what kind of ethical system should be installed in this robot, since it is not capable of performing any physical action. The only action it can take is verbal, so if it needs an ethical system at all, the system will be needed only to prevent verbal abuse. This shows that an ethical system needs to be tailored to the design and function of particular robots, again something the Kantian system is rather ill-equipped for. If the Kantian, deontological system is inadequate as a theory for designing ethical autonomous or conscious robots, then what about its main rival, the consequentialist one? The two theories, though different and opposite in many ways, in fact share one important feature: both are rule-dependent. The Kantian system is obviously rule-dependent, since it depends on the maxim and the universal law. The consequentialist, or utilitarian, theory is also rule-dependent, since it relies on the rule of maximization of utilities as the principle of ethical judgment and decision making. The well-known 'maxim' of the utilitarian theory is "the greatest happiness for the greatest number"; that is, any action that gives greater happiness to a greater number of people is to be preferred over another action that gives less. To install this system in an autonomous or conscious robot would mean that the robot has to be able to calculate the possible or probable utilities of its actions. The situation would thus not differ much from that of a robot installed with the Kantian system. In the case of the Kantian system, the robot would need to know the extent of 'universal': whether it covers only human beings; both humans and conscious robots; humans and all robots, conscious or not; humans, robots, and animals; or some other possibility. In the case of the utilitarian theory, the robot has to calculate utilities, but to do that it first has to be able to interpret what counts as a utility. And since different beings (robots, humans, animals) might perceive different things as beneficial to them, the task becomes more complicated than it first appears.
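
The following minimal sketch illustrates why the interpretation step carries all the weight. The maximization rule itself is a one-liner; everything substantive hides in the utility function and in whose welfare it counts. The actions, numbers, and weights below are invented purely for illustration:

    def utility(action, weights):
        # 'weights' encodes whose welfare counts and by how much,
        # which is precisely the interpretive question the theory
        # leaves open. Effect numbers are invented for illustration.
        effects = {
            "warn humans of hazard": {"humans": 10, "robot": -1},
            "preserve own battery":  {"humans": 0,  "robot": 5},
        }
        return sum(weights[party] * v for party, v in effects[action].items())

    def choose(actions, weights):
        # The utilitarian rule itself: maximize total utility.
        return max(actions, key=lambda a: utility(a, weights))

    actions = ["warn humans of hazard", "preserve own battery"]
    print(choose(actions, {"humans": 1.0, "robot": 0.1}))  # human-centered counting
    print(choose(actions, {"humans": 0.3, "robot": 1.0}))  # robot-centered counting

Changing the weights, that is, changing whose welfare counts as a utility, flips the decision; this is the interpretive problem just noted.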

The rule dependence of both the Kantian and consequentialist theories implies that both are algorithmic in their ethical reasoning. The problem is that this discounts the role of the embodiment of robots (or of humans, for that matter), as well as the contexts that surround them. Since the design and function of a robot matter very significantly to how the robot is expected to behave and to what its ethical action should be, both Kantian and consequentialist theories seem deficient, for they tend to discount these factors.

III

The deficiencies of the two leading ethical theories show that an alternative is needed. A theory suitable for installation in autonomous or conscious robots should be teleological in character. This kind of theory contrasts with the two leading theories in that it is not algorithmic: it does not depend on formulating and following strict rules in ethical decision making. Recognizing that contexts and situations are always varied, the theory takes those contexts and situations into account and formulates what should be done in accordance with the circumstances. A leading theory of this kind is, of course, the virtue ethics derived from Aristotle. What is distinctive about this theory is that it emphasizes the development of a set of virtues, or qualities desirable in an ethical being or organism, so that the being can decide on the spot, when a situation arises, what it should do at that particular instance. This may sound difficult for a robot designer, for it might seem as if a list were required of all the possible scenarios the robot might face, instructing it to act accordingly. But that would simply amount to providing the robot with an algorithm for making decisions. The whole idea of the teleological ethical system is that the desired end of the robot's ethical decision making is that the robot can decide by itself what it should do at a particular moment, whenever it faces the need to make a decision (the sketch below contrasts the two approaches).
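
The contrast can be made vivid with a toy sketch; both implementations here are illustrative inventions of mine. A scenario lookup table is algorithmic in the present sense and fails on anything unanticipated, whereas dispositions apply to whatever comes up:

    # Algorithmic approach: one rule per anticipated scenario.
    RULEBOOK = {
        "human in danger": "assist the human",
        "battery low": "return and recharge",
    }

    def rule_based(situation):
        # Fails silently on anything the designers did not enumerate.
        return RULEBOOK.get(situation)

    # Teleological approach: dispositions score any candidate action.
    def disposition_based(features, candidates):
        def character_score(action):
            protects = action["protects_humans"] and features["humans_nearby"]
            return int(protects) + int(action["fulfils_mission"])
        return max(candidates, key=character_score)

    print(rule_based("sudden dust storm"))  # -> None: no rule applies
    print(disposition_based(
        {"humans_nearby": False},
        [{"protects_humans": True, "fulfils_mission": False},
         {"protects_humans": False, "fulfils_mission": True}],
    ))  # -> the mission-fulfilling action is still chosen

The dispositional version still returns an answer for situations the designers never enumerated, which is the feature the teleological theory trades on.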

For Aristotle and the virtue ethicists, this means that the robot has to be installed with a set of 'virtues.' In fact, the word 'virtue' comes from the Latin virtus, meaning moral perfection, which in turn derives from the word vir, or man. This shows that the root meaning of 'virtue' is drawn from the characteristics that best define a human being. The task for designers of ethical, autonomous robots, then, is to search for the set of robot virtues and install it, so that the robot becomes virtuous. Defining such a list in any substantial detail would easily require another paper or a whole research project, but we can at least begin by pointing out some obvious candidates. In any case, the idea is to start from the defining characteristics of robots themselves. Human virtues have their basis in what constitutes a human being, and the typical Greek mentality conceived of an ideal human, a model of perfection that ordinary humans should strive to reach. Taking a cue from this ancient Greek thought, designers might consider conceptualizing a model of perfection for ethical robots based on the ontology of the robots themselves. Here the function of particular robots plays a crucial role. A robot that does everything it has been designed for, and does it well, is one that approaches its model of perfection. The model of perfection will then be the basis on which the robot's ethical judgment system rests. Since the functions and designs of different robots vary, letting ethical judgments vary according to those functions and designs would solve the problem of algorithmic ethical decision making that we saw earlier in the two main ethical theories. To imagine a concrete example, think of autonomous, and perhaps conscious, robots working on a remote planet. Communication with Earth would obviously be difficult, owing to the vast distance and other factors, so the robots have to work independently and autonomously. Imagine further that the robots are designed to explore the geological and other features of the planet and to work together as a team. Here ethics becomes clearly relevant. First, the robots should perform their function well, as they have been designed to do. Furthermore, unexpected circumstances might arise, such as sudden dust storms, in which the robots are expected to know how to fend for themselves. Here ethical robots would seek to protect themselves from the storm so that they live to perform their tasks another day. Then they have to report their data accurately. It might be too much at present to think of robots capable of deliberately making false reports (that may be too far in the future), but when a robot is autonomous and conscious and its function is to send accurate reports back to Earth, one of its virtues needs to be honesty, or in more concrete terms the ability to report surveyed data accurately. Furthermore, when the robots work as a team, the values needed for successful teamwork (coordination, the ability to communicate and understand one another, and so on) become important. These abilities thus become virtues for these robots (a sketch of such a function-derived virtue set follows below). One possible objection to the teleological theory concerns cases in which the robot encounters situations it has not been designed for. Since the algorithmic theories are supposed to be universal, robots installed with an ethical system based on them should in principle know how to make decisions when faced with novel, unexpected situations. However, since these theories are abstract and operate at the meta-level, it is difficult to see how this could be realized in practice. The teleological theory, on the other hand, equips the robots with a set of robot virtues so that, even in novel and unexpected situations, they should perform satisfactorily, handling those situations by themselves on the basis of their original design.
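
As a minimal sketch of what such a function-derived virtue set might look like for the survey team, continuing the Python sketches above (the virtues, situations, and scoring are all invented for illustration), note that the same dispositions also cover a situation no rule anticipated:

    # Virtues derived from the explorer's design and function.
    EXPLORER_VIRTUES = {
        # Honesty: reports must match what was actually observed.
        "honesty": lambda act, sit: act["report"] == sit["observed"],
        # Self-preservation: live to perform the task another day.
        "self_care": lambda act, sit: not (sit["dust_storm"] and act["keep_surveying"]),
        # Teamwork: keep teammates informed of one's position.
        "cooperation": lambda act, sit: act["broadcast_position"],
    }

    def act_well(situation, candidate_actions):
        # No dedicated rule per scenario: every candidate action is
        # scored against the same function-derived dispositions.
        def score(act):
            return sum(virtue(act, situation) for virtue in EXPLORER_VIRTUES.values())
        return max(candidate_actions, key=score)

    # A sudden dust storm that no rulebook anticipated.
    situation = {"observed": "basalt ridge", "dust_storm": True}
    actions = [
        {"report": "basalt ridge", "keep_surveying": True, "broadcast_position": True},
        {"report": "basalt ridge", "keep_surveying": False, "broadcast_position": True},
    ]
    print(act_well(situation, actions))
    # -> the robot shelters from the storm while still reporting accurately

Here the storm appears in no explicit rule; the function-derived dispositions nevertheless select sheltering over continued surveying, while preserving honest reporting.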

Hence, if the robots performing the geological survey of a distant planet were to encounter a situation entirely outside the range of what they were envisaged to encounter, their set of virtues, especially one that gives them latitude in how to react appropriately to the situation, could be of much help.

IV

In conclusion, I have argued that a teleological ethical theory is more suitable for designing ethical autonomous (and also conscious) robots. Much work remains in spelling out what ethical design according to this principle looks like in practice. In the end, robots that are capable of thinking for themselves, relying not on algorithmic rules but on their own judgment, should prove to perform better, and to approach the model of perfection for robotkind more closely, than their algorithmic counterparts.

REFERENCES

Arkin, Ronald C., and Lilia Moshkina. 2009. Lethality and autonomous robots: an ethical stance. Available at http://www.cc.gatech.edu/ai/robot-lab/onlinepublications/ArkinMoshkinaISTAS.pdf. Retrieved September 15, 2009.

Clarke, Arthur C. 1968. 2001: A Space Odyssey. New American Library.

Himma, Ken. 2009. Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? Ethics and Information Technology 11: 19–29.

Wikipedia.org. 2009. Autonomous robots. Available at http://en.wikipedia.org/wiki/Autonomous_robot. Retrieved September 18, 2009.
