AI Ethics
Class: PHIL-282
Citation: Coeckelbergh, Mark. 2020. AI Ethics. Cambridge, MA: MIT Press.
Notes:
Chapter 5: The Technology
-
Artificial Intelligence (AI)
- Definition 1: Intelligence displayed or simulated by code (algorithms) or machines.
- Definition 2 (Human-like): "the science and engineering of machines with capabilities that are considered intelligent by the standard of human intelligence" (Jansen et al. 2018).
- Neutral Definition: Many researchers prefer a definition independent of human intelligence, listing cognitive functions like learning, perception, planning, natural language processing, reasoning, decision making, and problem solving.
- Cognition: Margaret Boden states AI "seeks to make computers do the sort of things that minds can do," clarifying that information processing isn't exclusive to humans. General intelligence isn't necessarily human; animals are intelligent, and transhumanists envision non-biological minds.
-
History of AI
- Roots: Connected to computer science, mathematics, and philosophy, with conceptual origins in early modern thinkers (Leibniz, Descartes) and ancient stories of artificial beings.
- Discipline Birth: The field took shape in the 1950s, following the invention of the programmable digital computer (1940s) and the rise of cybernetics.
- Cybernetics Definition: The scientific study of "control and communication in the animal and the machine" (Norbert Wiener, 1948).
- Key Moments: Alan Turing's 1950 paper, "Computing Machinery and Intelligence," introduced the Turing test and speculated on learning machines. The Dartmouth workshop in 1956, where John McCarthy coined the term "AI," is considered the birthplace of contemporary AI. Early AI aimed to simulate (not re-create) human intelligence.
-
Types of AI (by Capability)
- Strong AI (General AI)
- Definition: AI capable of performing any cognitive task that humans can do.
- Status: Not yet achieved, and its future realization is uncertain; not currently on the horizon.
- Weak AI (Narrow AI)
- Definition: AI that can perform tasks only in specific domains (e.g., chess, image classification).
- Status: This is the type of AI that exists today, is becoming more powerful and pervasive, and is the focus of current ethical and policy discussions.
-
AI as Science and Technology
- As a Science: Aims to explain intelligence and cognitive functions, helping us understand humans and natural intelligence. It is linked to cognitive science, psychology, data science, and neuroscience.
- As a Technology: Aims to develop tools for practical purposes, "to get useful things done". These tools create the appearance of intelligence by analyzing data and acting with autonomy.
- Interdisciplinary Nature: AI relies on and is linked to mathematics (statistics), engineering, linguistics, cognitive science, computer science, psychology, and philosophy. Frankish and Ramsey define AI as "a cross-disciplinary approach to understanding, modeling, and replicating intelligence and cognitive processes by invoking various computational, mathematical, logical, mechanical, and even biological principles and devices".
- Focus: The book emphasizes AI as a technology because of its ethical and societal consequences.
-
Forms of AI Technology
- Components: AI typically takes the form of algorithms, machines, or robots, and is part of larger technological systems.
- Hardware/Software: AI can be software running on the web (e.g., chatbots, search engines) or embedded in hardware devices (e.g., robots, cars, Internet of Things applications).
- Cyber-physical systems: Devices that operate in and interact with the physical world, like robots.
- Embodied AI: AI integrated into a robot, directly influencing the physical world.
- Interdependence: AI software requires hardware and physical infrastructure, and cyber-physical systems need software to be considered "AI". Users often perceive hardware and software as a single device (e.g., a robot or Alexa).
- Social Robots: Often incorporate AI to participate in daily human social life as companions or assistants.
- Algorithm: The fundamental basis of an AI's "intelligence" is software, specifically an algorithm or a combination of algorithms.
- Definition: A set and sequence of instructions, like a recipe, that tells a computer or device what to do to solve a problem, producing output from available input. Understanding algorithms is crucial for AI ethics.
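- Illustrative sketch (mine, not the book's): a short Python function as a recipe-like algorithm, turning available input (a list of exam scores) into output (their average) through a fixed sequence of steps.
```python
# Minimal sketch of "algorithm as recipe" (illustration only, not from the book).
# Input: a list of exam scores. Output: their average, rounded to one decimal.
def average_score(scores):
    total = 0
    for s in scores:                # step 1: add up every score
        total += s
    mean = total / len(scores)      # step 2: divide by the number of scores
    return round(mean, 1)           # step 3: round and return the output

print(average_score([72, 85, 90, 64]))  # 77.8
```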
-
Different Approaches and Subfields of AI
- Symbolic AI (GOFAI - Good Old-Fashioned Artificial Intelligence)
- Description: Dominant until the late 1980s, relies on symbolic representations for higher cognitive tasks like abstract reasoning and decision-making.
- Method: Uses decision trees—models of decisions and consequences, often with conditional "if...then..." statements—to create deterministic processes.
- Expert System Definition: Draws on databases of human expert knowledge to make expert decisions or recommendations. These systems are transparent and interpretable.
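- Toy sketch of the symbolic, rule-based style (the loan scenario and thresholds are my own assumptions, not the book's): hand-written "if...then..." rules encode expert knowledge, so the decision process is deterministic and easy to inspect.
```python
# Toy rule-based "expert system" sketch (hypothetical rules, not from the book).
# Every decision follows an explicit, human-readable if...then... path.
def loan_recommendation(income, debt, has_collateral):
    if income <= 0:
        return "reject: no income"
    debt_ratio = debt / income
    if debt_ratio > 0.5:                        # rule supplied by a (hypothetical) human expert
        return "reject: debt ratio too high"
    if debt_ratio > 0.3 and not has_collateral:
        return "refer to a human underwriter"
    return "approve"

print(loan_recommendation(income=40000, debt=10000, has_collateral=False))  # "approve"
```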
- Connectionism (Neural Networks)
- Description: Developed in the 1980s, based on interconnected networks of simple units ("neurons") that proponents suggest are similar to how the human brain works.
- Machine Learning Connection: Often used for machine learning; becomes deep learning when neural networks have multiple layers.
- Impact: Enabled progress in machine vision and natural language processing.
- Black Box Issue: Neural networks can be "black boxes," meaning that while the architecture is known, the precise workings of intermediate layers and how decisions are made are unclear.
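- Tiny numpy sketch of the black-box point (my own illustration, not from the book): the network's architecture is fully visible, yet the learned weight values do not by themselves explain why a given input yields a given output.
```python
import numpy as np

# Tiny two-layer neural network forward pass (illustrative only, not from the book).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input (3 features) -> hidden layer of 4 "neurons"
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden layer -> single output

def forward(x):
    h = np.maximum(0, W1 @ x + b1)              # hidden activations (ReLU): the mechanics are known...
    return 1 / (1 + np.exp(-(W2 @ h + b2)))     # ...but the weight values don't "explain" the score

print(forward(np.array([0.2, -1.0, 0.5])))      # a probability-like score between 0 and 1
```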
- Embodied and Situational Approaches: Focus on motor tasks and interaction with the environment, rather than symbolic representations.
- Evolutionary AIs: Can evolve and even change themselves using genetic algorithms.
- Subfields: Include machine learning, computer vision, natural language processing (NLP), expert systems, evolutionary computation, among others.
-
Applications and Impact of AI
- Broad Domains: Industrial manufacturing, agriculture, transportation, healthcare, finance, marketing, entertainment, education, social media.
- Specific Examples: Recommender systems for purchase decisions, targeted advertising, social media bots (software-powered user accounts). Used in healthcare for patient data analysis and expert systems, and in finance for market analysis and automated trading. Powers autopilots, self-driving cars, employee monitoring, video game characters, music composition, news articles, voice mimicry, and fake videos.
- Pervasive Impact: Transforms fields like security (predictive policing, speech recognition), urban planning (self-driving cars), financial markets (algorithmic trading), and medical diagnostics. It can help scientists discover new connections in big data across natural, social sciences, and humanities.
- Societal Effects: Influences social relations and privacy; can exacerbate bias and discrimination, cause job losses, transform economies, and increase inequality. Military applications include lethal autonomous weapons. Also carries environmental impacts such as increased energy consumption and pollution.
- Positive Effects: Can create new communities, reduce repetitive/dangerous tasks, improve supply chains, and reduce water use.
- Stakeholders: Impact varies significantly for different stakeholders (workers, patients, consumers, governments, investors, enterprises) within and between countries. Questions arise about who benefits from and has access to AI technology.
- Context: AI is part of a broader landscape of digital technologies; some ethical issues are not unique to AI but are shared with other automation or connected technologies (e.g., social media and privacy). AI is often invisible in everyday life and is always embedded in other technologies and practices. AI ethics should therefore connect to broader ethics of digital technologies.
- Human Element: AI is not just technology but also concerns human use, perception, experience, and embedding in social-technical environments. The current "hype" is not new, paralleling past advanced technologies, and will likely deflate as AI becomes more integrated into daily life. This context helps in understanding AI, but does not diminish the need for ethical evaluation.
Chapter 6: Don't Forget the Data (Science)
-
Machine Learning (ML)
- Definition: Software that can "learn". It is a process based on statistics, not true cognition, and not similar to human thought.
- Underlying Task: Often pattern recognition.
- Functionality: Algorithms identify patterns or rules in data, then use them to explain the data and make predictions for future data. This happens autonomously, without direct instruction or rules from the programmer.
- Distinction from Expert Systems: Unlike expert systems, where human experts provide rules to programmers, ML algorithms find their own rules based on a given objective or task.
- Process: Programs optimize for prediction accuracy on datasets (e.g., cat images). Humans provide feedback but not specific rules. The computer creates its own models from data, making data "active" rather than "passive".
- Training: Researchers train algorithms using existing data (e.g., old emails) to predict results from new data.
- Data Mining: Sometimes used synonymously with identifying patterns in big data, but the term is misleading as the goal is pattern extraction/analysis, not data extraction itself.
-
Types of Machine Learning
-
Supervised Learning
- Definition: The algorithm focuses on a particular variable designated as the target for prediction, where categories are already known.
- Process: The programmer trains the system with examples and non-examples (e.g., high/low security risk individuals), and the algorithm learns to predict category membership for new data.
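- Hedged sketch of supervised learning with scikit-learn (the library choice and the tiny spam dataset are my assumptions, not the book's): labeled examples and non-examples train the model, which then predicts the category of new emails; on real data a held-out test set would show the accuracy is never 100%.
```python
# Supervised-learning sketch (scikit-learn; toy emails and labels are my own, not from the book).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "cheap meds free offer", "meeting moved to 3pm",
          "lunch tomorrow?", "free prize winner claim now", "project report attached"]
labels = [1, 1, 0, 0, 1, 0]                      # 1 = spam (example), 0 = not spam (non-example)

vectorizer = CountVectorizer().fit(emails)       # turn words into counts
model = MultinomialNB().fit(vectorizer.transform(emails), labels)

# The trained model predicts category membership for new, unseen data.
new = ["claim your free prize", "see you at the meeting"]
print(model.predict(vectorizer.transform(new)))  # likely [1 0]
```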
-
Unsupervised Learning
- Definition: Training is not done with known categories; algorithms create their own clusters or categories based on variables they select.
- Functionality: Can identify patterns that human domain experts may not have found. The categories might seem arbitrary to humans but are statistically identifiable, and sometimes yield new knowledge.
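- Minimal unsupervised-learning sketch (k-means clustering via scikit-learn; the toy data points are my own, not the book's): no categories are given in advance, and the algorithm forms its own clusters.
```python
# Unsupervised-learning sketch (illustrative toy data, not from the book).
import numpy as np
from sklearn.cluster import KMeans

# No labels are provided -- just points.
points = np.array([[1.0, 1.2], [0.8, 1.0], [1.1, 0.9],    # one loose group
                   [5.0, 5.2], [5.1, 4.8], [4.9, 5.0]])   # another loose group

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(clusters)   # e.g., [0 0 0 1 1 1] -- the grouping is found, not taught
```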
-
Reinforcement Learning
- Definition: Requires feedback on whether the output is good or bad, analogous to reward and punishment.
- Process: The program learns through an iterative process which actions yield reward, adapting its predictions based on feedback.
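- Bare-bones reinforcement-learning sketch (a two-armed "bandit"; every number is my own illustration, not the book's): the agent gets only reward feedback and gradually shifts toward the action that pays off more.
```python
import random

# Reinforcement-learning sketch: learn from reward feedback which of two actions is better.
# (No one tells the agent the rule; all reward values here are made up for illustration.)
values = [0.0, 0.0]     # running estimate of each action's reward
counts = [0, 0]
random.seed(0)

for step in range(1000):
    # mostly exploit the best-looking action, occasionally explore the other one
    a = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    reward = random.gauss(0.2 if a == 0 else 0.8, 0.1)    # action 1 secretly pays more
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]         # update the estimate from feedback

print(values)   # the estimate for action 1 should end up clearly higher (around 0.8)
```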
-
Human Involvement: Despite the autonomy, humans are involved in all three types of machine learning in various ways.
-
Accuracy: There is always a percentage of error; the system is never 100% accurate.
-
Big Data
- Description: Refers to large amounts of data, the availability of which, combined with increased (cheaper) computer power, has driven interest in machine learning.
- Generation: We all produce data through digital activities like social media use or online purchases.
- Collection: Commercial entities, governments, and scientists gather, store, and process this data easily due to the digital environment, online applications, cheaper storage, and powerful computers.
-
Data Science
- Definition: Aims to extract meaningful and useful patterns from (large) data sets. Machine learning is key to automatically analyzing these large datasets.
- Basis: Like machine learning, it is based on statistics, which involves moving from particular observations to general descriptions and finding correlations. Statistical modeling identifies mathematical relationships between input and output.
- Scope: Involves more than just data analysis; it includes data collection, preparation, and interpretation of results.
- Challenges: Capturing, cleaning, combining, restructuring, and selecting relevant data sets, and obtaining enough of the right data.
- Human Role: Humans remain crucial at all stages, including problem framing, data capture and preparation, algorithm selection, result interpretation, and action decisions. Human expert knowledge, collaboration, and interpretation are vital, as AI lacks understanding, experience, sensitivity, and wisdom.
- Spurious Correlations
- Definition: A problem in statistics where variables appear causally related but are not, often due to an unseen third factor. Examples include divorce rates and margarine consumption.
- AI's Role: AI may find such correlations, but humans are needed to decide which correlations warrant further study for causal links.
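- Small sketch of a spurious correlation (synthetic numbers of my own, echoing the divorce-rate/margarine example rather than reproducing it): two unrelated series correlate strongly simply because both follow the same hidden third factor, a downward trend over time.
```python
import numpy as np

# Spurious-correlation sketch (all values are synthetic, for illustration only).
years = np.arange(2000, 2010)
trend = -0.5 * (years - 2000)                        # the hidden third factor: time
rng = np.random.default_rng(1)
divorce_rate = 10 + trend + rng.normal(0, 0.2, 10)   # fictional series A
margarine_use = 8 + trend + rng.normal(0, 0.2, 10)   # fictional series B

print(np.corrcoef(divorce_rate, margarine_use)[0, 1])  # close to 1.0, yet there is no causal link
```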
- Abstraction from Reality
- Concept: Choices are made when gathering and designing data sets, representing an abstraction from reality that is never neutral.
- Implications: Statistical methods create a model of reality, not reality itself, involving choices about the algorithm and the training data. This necessitates critical questions about representativeness and embedded biases, which are ethical issues.
-
Applications of Machine Learning and Data Science
- Diverse applications across many fields:
- Security: Face recognition, emotion recognition, predictive policing, recidivism prediction.
- Commerce/Marketing: Search suggestions, product recommendations (Amazon), theft detection, shopper mood analysis (Walmart), targeted advertising (Facebook, Instagram).
- Transportation: Self-driving cars (BMW).
- Finance: Mortgage recommendations (Experian), fraud prediction (American Express), market analysis, automated trading.
- Healthcare: Cancer diagnosis (radiology scans, IBM Watson), detection of infectious diseases, degenerative eye conditions (DeepMind), wearable health devices.
- Journalism: Writing news stories (Press Association).
- Home/Personal: Robots, assistive interactive devices (Hello Barbie using natural language processing).
- Entertainment: Content creation (Netflix), music retrieval, video games.
- Other: Education, recruiting, criminal justice, office work, agriculture, military weapons.
- Current Status: This narrow or weak AI is already pervasive and powerful, making discussion of its ethical issues urgent. Statistics, once seen as unappealing, is now "hot" and regarded as "new magic" within data science and AI.