Moral Machines: From Value Alignment to Embodied Virtue

Class: PHIL-282
Author: Wendell Wallach and Shannon Vallor
Title: Moral Machines: From Value Alignment to Embodied Virtue


Introduction: Engineering Moral Machines

13.1 Moral Machines and Value Alignment

13.2 Core Concepts in Machine Ethics

13.3 Values, Norms, Principles, and Procedures

13.4 Top-Down, Bottom-Up, and Hybrid Approaches to Moral Machines

13.5 The Limitations of a Hybrid Approach

13.6 Virtue Ethics & Virtuous Machines

13.7 Virtuous Agents: Four Key Capacities

For an AI to be truly virtuous, it would need analogs to four sophisticated capacities that humans possess. The practical obstacles to engineering these are immense.

  1. Moral Understanding:
    • A holistic, integrated awareness of moral life that comes from embodied engagement with the world, not just a stored "map" of it.
    • Why it's hard for AI:
      • Human understanding is semantic (based on meaning); machine learning is merely syntactic (it tracks statistical patterns in data without grasping what those patterns mean).
      • Humans have a rich set of embodied and affective capacities (empathy, hormones, physical attunement) that provide a massive flow of morally salient data. An AI without a body is at an "immense informational disadvantage".
  2. Moral Perception:
    • The ability to detect and track specific morally important features in an environment, especially novel ones.
    • Why it's hard for AI: It relies on the full range of embodied and affective channels to intuitively "sense" a moral situation, which current AI lacks.
  3. Moral Reflection:
    • The ability to evaluate one's own moral values from a higher-order perspective—to genuinely want to be morally better than one currently is.
    • Why it's hard for AI: A machine with this capacity could correct its own flawed training. But this requires the machine to want to want something different—a second-order desire rooted in our embodied, reflective nature.
  4. Moral Imagination:
    • The ability to project moral understanding into possible futures and feel the moral weight of different choices.
    • Why it's hard for AI: Human moral motivation seems to require affective projection—the ability to feel what moral failure would be like. This capacity depends on our embodied, affective nature, which machines lack.

13.8 Virtue Embodiment

13.9 Summary