Artificial Moral Agents
Nayef Al-Rodhan
Over the years, robots have become smarter and more autonomous, but so far they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations.
The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.
However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.
Functional morality involves robot responses to scenarios unanticipated by the programmer, where the robot will need some ability to make ethical decisions alone. Here, they write, robots are endowed with the capacity to assess and respond to “morally significant aspects of their own actions.” This is a much greater challenge.
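To make the distinction concrete, here is a minimal illustrative sketch (the names and rules are hypothetical, not drawn from Moral Machines): operational morality amounts to looking up a response the designer anticipated, while functional morality requires the system itself to weigh the morally significant aspects of an action it was never explicitly given.

```python
# Operational morality: every situation and response is anticipated and precoded.
PRECODED_RESPONSES = {
    "civilian_present": "hold_fire",
    "combatant_confirmed": "engage",
}

def operational_response(situation: str) -> str:
    """Return the designer's precoded response, or defer if the situation was unanticipated."""
    return PRECODED_RESPONSES.get(situation, "defer_to_human")

# Functional morality: the system must itself assess the morally significant
# aspects of a novel action, here crudely scored as expected harm vs. benefit.
def functional_response(action: str, expected_harm: float, expected_benefit: float) -> str:
    return f"reject:{action}" if expected_harm > expected_benefit else f"accept:{action}"
```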
The attempt to develop moral robots faces a host of technical obstacles, but, more important, it also opens a Pandora’s box of ethical dilemmas. The most critical of these dilemmas is the question of whose morality robots will inherit.
Moral values differ greatly from individual to individual, across national, religious, and ideological boundaries, and are highly dependent on context … Even within any single category, these values develop and evolve over time.
Uncertainty over which moral framework to choose underlies the difficulty and limitations of ascribing moral values to artificial systems … To implement either of these frameworks effectively, a robot would need to be equipped with an almost impossible amount of information. Even beyond the issue of a robot’s decision-making process, the specific issue of cultural relativism remains difficult to resolve: no one set of standards and guidelines for a robot’s choices exists.
For the time being, most questions of relativism are being set aside for two reasons. First, the U.S. military remains the chief patron of artificial intelligence for military applications, and Silicon Valley for other applications. As such, American interpretations of morality, with their emphasis on freedom and responsibility, will remain the default.
Source: Foreign Affairs, “The Moral Code,” August 12, 2015
PL – EXCELLENT summary of a very complex, delicate, but critical issue, Professor Al-Rodhan!
In our work, we propose an essential activity in the process of moralizing AI that is currently being overlooked: an approach that facilitates what you put so well, the ability of “AI to interact with humans as partners.”
We question whether binary-coded, logic-based AI, in its current form, will one day switch from amoral to moral. That switch would first require universal agreement on what constitutes morals, and second, the successful upload or integration of morals, or of moral capacity, into AI computing.
We do think AI can be taught “culturally relevant” moral reasoning, though, by implementing a new human/AI interface that includes a collaborative engagement protocol: a protocol that lets the AI interact with each person in a way that teaches it what is culturally relevant to that individual. The AI learns the values and morals of the individual and then interacts with that individual based on what it has learned.
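As a rough sketch of what such a protocol could look like in code (the class, method, and value names below are hypothetical illustrations, not our actual system), the AI records the values a person expresses during interaction and consults that individual profile before responding:

```python
from collections import defaultdict

class CollaborativeEngagementProtocol:
    """Hypothetical sketch: the AI learns each person's expressed values
    through dialogue and tailors its responses to that individual."""

    def __init__(self):
        # value name -> strength of that value as learned for this one person
        self.value_profile = defaultdict(float)

    def observe(self, expressed_value: str, strength: float = 1.0) -> None:
        """Record a value the person expressed during an interaction."""
        self.value_profile[expressed_value] += strength

    def respond(self, options: dict) -> str:
        """Pick the option that best matches what this person has shown they care about.
        `options` maps each candidate response to the set of values it serves."""
        return max(options, key=lambda opt: sum(self.value_profile[v] for v in options[opt]))

# Example: the same choice, personalized to one person's learned values.
protocol = CollaborativeEngagementProtocol()
protocol.observe("family_time", 2.0)
protocol.observe("career_growth", 0.5)
print(protocol.respond({
    "take_the_overtime_shift": {"career_growth"},
    "go_to_the_school_play": {"family_time"},
}))  # -> go_to_the_school_play
```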
We call this a “whole person” engagement protocol. This person-focused approach includes AI/human interaction that embraces quantum cognition as a way of understanding what appears to be human irrationality. [Behavior and choices that, judged from a classical probability-based decision model, appear irrational and cannot be computed.]
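As a minimal numerical illustration of the quantum-cognition point (the two-dimensional “belief space” and the angles below are arbitrary toy values, not fitted to any data): when two yes/no judgments are modeled as non-commuting projectors, the probability of answering “yes” to both depends on which question is asked first, an order effect that a single classical joint distribution cannot reproduce but that shows up in real human judgment data.

```python
import numpy as np

def projector(theta: float) -> np.ndarray:
    """Rank-1 projector onto the direction at angle theta in a 2-D belief space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])        # the person's initial belief state
P_A = projector(np.pi / 6)        # "yes" to question A
P_B = projector(np.pi / 3)        # "yes" to question B

# Sequential judgments: project in the order the questions are asked.
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2

print(p_A_then_B, p_B_then_A)     # 0.5625 vs 0.1875: order changes the answer
```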
This whole person approach has a different purpose, and can produce different outcomes, than current omniscient, clandestine-style methods of AI/human information gathering, which are more like spying than collaborating: the human’s awareness of self and situation is not advanced, but is served only as it relates to things to buy, places to go, and schedules to meet.
Visualization is a critical component for AI to engage the whole person: in this case, a visual that displays interlinking data for the human. Such a visual breaks through the limitations of human working memory by displaying complex data about a person and situation in context, and it incorporates a human’s two most basic and reliable ways of knowing, big picture and details, which have to be kept in dialogue with one another. This makes it possible for the person themselves to make meaning, decide, and act in real time. [The value of visualization was demonstrated in physics in 2013 with the discovery of the Amplituhedron, which replaced 500 pages of algebraic formulas with one simple visual, reducing the overwhelm of linear processing.]
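As a toy illustration of such an interlinked display (assuming the networkx and matplotlib libraries; the themes and details are invented examples), big-picture nodes and detail nodes are drawn in one picture so both levels stay in view at once:

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
# Big-picture themes of a person's situation
G.add_nodes_from(["health", "career", "family"], level="big_picture")
# Detail items, each linked to the theme(s) it affects
details = {
    "sleep: 5h/night": ["health", "career"],
    "project deadline": ["career", "family"],
    "daughter's recital": ["family"],
}
for detail, themes in details.items():
    G.add_node(detail, level="detail")
    for theme in themes:
        G.add_edge(detail, theme)

# One picture holds both levels, easing the load on working memory.
nx.draw_networkx(G, with_labels=True, node_color="lightblue")
plt.axis("off")
plt.show()
```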
This kind of collaborative engagement between AI and humans (even groups of humans) sets the stage for AI to offer real-time personalized feedback for and about the individual or group. It can put the individual in the driver’s seat of his or her life as it relates to self and situation, and it makes it possible for humans to navigate any kind of complex human situation: personal growth, relationships, child rearing, health, career, company issues, community issues, conflicts, and so on. (In simpler terms, what we refer to as the “tough human stuff.”)
AI could then address human behavior, which, up to now, has been the elephant in the room for coders and AI developers.
We recognize that this model for AI/human interaction does not solve the ultimate AI morals/values dilemma. But it could serve to advance four major areas of this discussion:
- By feeding back morals/values data to individual humans, it could advance their own awareness more quickly. (The act of seeing complex contextual data expands consciousness for humans and makes it possible for them to shift and grow.)
- It would help humans help themselves right now (not 10 or 20 years from now).
- It would create a new class of data, perceptual data, related to the individual beliefs that drive human behavior (a possible record structure is sketched after this list).
- It would allow AI to process this additional “perceptual” data, collectively over time, and so become a form of “artificial moral agent” with enhanced “moral reasoning,” working in partnership with humans.
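As a hypothetical sketch of what a “perceptual data” record and its collective processing might look like (the field names are invented for illustration, not a defined schema):

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class PerceptualRecord:
    person_id: str
    situation: str   # the situation the person was navigating
    belief: str      # the belief they expressed while deciding
    choice: str      # what they actually did

def aggregate_beliefs(records: list) -> Counter:
    """Aggregate expressed beliefs across people and over time: the collective
    layer an AI could draw on when reasoning with, not for, an individual."""
    return Counter(r.belief for r in records)

records = [
    PerceptualRecord("p1", "career_change", "stability matters most", "stayed"),
    PerceptualRecord("p2", "career_change", "growth matters most", "left"),
    PerceptualRecord("p1", "relocation", "stability matters most", "stayed"),
]
print(aggregate_beliefs(records))
```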