ACM Comm 2018 03 How Can We Trust a Robot (Notes)

"A Programmable Programming Language" CACM March 2018

How Can We Trust a Robot?
by Benjamin Kuipers, p.86-95

"If intelligent robots take on a larger role in our society, what basis will humans have for trusting them?"
Can a robot reason in an ethical manner consistent with the norms of human society?
Robots can be hardware or software. If a robot's future behavior is unknown, or even unknowable within limits, what is the basis for any trust?

People

Ideas

  1. Trust is essential to cooperation, which produces positive-sum outcomes that strengthen society and benefit its individual members.
  2. Individual utility maximization tends to exploit vulnerabilities, eliminating trust, preventing cooperation, and leading to negative-sum outcomes that weaken society.
  3. Social norms, including morality and ethics, are a society's way of encouraging trustworthiness and positive-sum interactions among its individual members, and discouraging negative-sum exploitation.
  4. To be accepted, and to strengthen our society rather than weaken it, robots must show they are worthy of trust according to the social norms of our society.
  5. What is trust for?
  6. Self-driving car kills woman in Arizona.[1]
  7. Robots need to follow, and understand, social norms to earn trust from human society.
    1. This suggests a separate robot society, with its own rules, could form; the differences would be interesting.
  8. When do robots participate in society and not just run errands?
    1. It's a complexity issue: when a robot's behavior is complex enough that an average human treats it as they would a human, the robot is participating in human society.
    2. Her [2013][2]
  9. The performance requirements on moral and ethical social norms are quite demanding.
    1. (1) Moral and ethical judgments are often urgent, needing a quick response, with little time for deliberation.
    2. (2) The physical and social environments within which moral and ethical judgments are made are unboundedly complex.
    3. (3) Learning to improve the quality and coverage of moral and ethical decisions is essential, from personal experience, from observing others, and from being told.
  10. Complicating this further, robots can share ethical situations, along with their judgments, in perfect detail with other robots or with future robots.
  11. Three major philosophical theories of ethics
    1. Deontology[3]
    2. Utilitarianism[4]
    3. Virtue Ethics[5]
  12. “Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another.” D.M. Rousseau.
  13. In the Prisoner's Dilemma, the single Nash Equilibrium (both defect) is a poor outcome for each prisoner and the worst outcome for society as a whole.
  14. In the Prisoner's Dilemma, mutual cooperation produces a better outcome for the prisoners and for society, but it is not a Nash Equilibrium (see the payoff sketch after this list).
  15. The Public Goods Game[6]: an \(N\)-person variant of the Prisoner's Dilemma.
  16. How can a robot be punished for violating norms, morals or laws?
  17. Understanding the whole elephant
  18. The Blind Men and the Elephant[7]
  19. The design focus for self-driving cars should not be on the Deadly Dilemma, but on how a robot’s everyday behavior can demonstrate its trustworthiness.
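A minimal sketch of the Prisoner's Dilemma points above (not from the article): it assumes the conventional textbook payoffs T=5, R=3, P=1, S=0 rather than the values in the paper's Figure 1, and checks each strategy profile for the Nash Equilibrium property, showing that mutual defection is the only equilibrium even though mutual cooperation is better for both players and for the pair as a whole.

  # Illustrative sketch; payoff values are conventional textbook assumptions.
  from itertools import product

  ACTIONS = ("cooperate", "defect")

  # PAYOFF[(my_action, other_action)] -> my payoff
  PAYOFF = {
      ("cooperate", "cooperate"): 3,  # reward R
      ("cooperate", "defect"):    0,  # sucker S
      ("defect",    "cooperate"): 5,  # temptation T
      ("defect",    "defect"):    1,  # punishment P
  }

  def is_nash(a1, a2):
      # A profile is a Nash Equilibrium if neither player can gain
      # by unilaterally switching to a different action.
      best1 = all(PAYOFF[(a1, a2)] >= PAYOFF[(alt, a2)] for alt in ACTIONS)
      best2 = all(PAYOFF[(a2, a1)] >= PAYOFF[(alt, a1)] for alt in ACTIONS)
      return best1 and best2

  for a1, a2 in product(ACTIONS, repeat=2):
      total = PAYOFF[(a1, a2)] + PAYOFF[(a2, a1)]
      print(f"{a1:9s}/{a2:9s}  payoffs=({PAYOFF[(a1, a2)]}, {PAYOFF[(a2, a1)]})  "
            f"total={total}  Nash={is_nash(a1, a2)}")

  # Prints: defect/defect is the only Nash Equilibrium (total 2), while
  # cooperate/cooperate maximizes the joint total (6) but is not an equilibrium.

The Public Goods Game generalizes the same tension to \(N\) players: contributing maximizes the group's total payoff, but free-riding is the individually dominant strategy.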

References

  1. Artificial Intelligence (AI)
  2. Terminator 2: Judgment Day [1991][8]
  3. SkyNet[9]
  4. Robot & Frank [2012][10] In order to promote Frank’s activity and health, an eldercare robot helps Frank resume his career as a jewel thief.
  5. Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei, N. and Walsh, T. Ethical considerations in artificial intelligence courses. AI Magazine, Summer 2017; arXiv:1701.07769.[11]
  6. Arkin, R.C. Governing Lethal Behavior in Autonomous Robots. CRC Press, 2009.[12]
  7. Leyton-Brown, K. and Shoham, Y. Essentials of Game Theory. Morgan & Claypool, 2008.
  8. Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach. Prentice Hall, 3rd edition, 2010.
  9. The Stanford Encyclopedia of Philosophy[13]
  10. Deadly Dilemma[14] If a robot car kills a pedestrian to save itself and its passengers, why let it on the road? If a robot car destroys itself and its passengers to save a pedestrian, why buy it? But this dilemma is rare.
    1. The Near Miss Dilemma is more common. Reasoning actors, human and robot, usually find some third or fourth way to avoid an outcome that produces casualties.
  11. Game Theory[15]
  12. Nash Equilibrium[16]
  13. Prisoner's Dilemma[17] See Figure 1.
  14. Deviance and Social Control: A Sociological Perspective, 2nd Ed. by Michelle Inderbitzin[18]
  15. Isaac Asimov's Three Laws of Robotics[19]
  16. I, Robot[20]


  1. valence[21]: new vocabulary word.

Figures


Figure 1
Figure 2
Figure 3

Internal Links

Parent Article: Reading Notes