AI ethics is the study of ethical issues that arise in the development, distribution, and use of AI systems. As AI-based technologies become more powerful and accurate, they will influence many aspects of human society. AI systems are already being deployed to help make decisions in finance, law, transportation, employment, and many other areas. It is therefore essential that those who develop and distribute AI technologies carefully consider the ethical issues these technologies raise.
AI ethics is both a practical and a philosophical endeavor. Some of the key tasks involved in these two facets are described below.
Philosophical Tasks of AI Ethics
The central philosophical tasks of AI ethics are:
- Determining general ethical principles that should govern AI systems as they are deployed in any area of human society.
- Making ethical judgments about particular uses of AI and situations involving AI systems.
- Discerning and publicizing relevant ethical factors that should be considered by developers of AI, users of AI, and governmental authorities.
Practical Tasks of AI Ethics
The central practical tasks of AI ethics are:
- Developing methods for implementing ethical principles in AI systems so that they produce beneficial outcomes.
- Making decisions about the ethical development, distribution, and use of AI under both factual and ethical uncertainty.
- Creating laws governing the development, distribution, and use of AI.
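To make the first practical task concrete, here is a minimal sketch of one way an ethical principle can be operationalized in code: checking a model's decisions against demographic parity, one widely discussed fairness criterion. The function name, the data, and the scenario are hypothetical illustrations, not a standard API, and demographic parity is only one of several competing fairness definitions.

```python
# Illustrative sketch: operationalizing a fairness principle
# (demographic parity) as a check on a model's decisions.
# All names and data here are hypothetical.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in favorable-decision rate between groups.

    decisions: list of 0/1 outcomes (1 = favorable decision)
    groups: list of group labels, aligned with decisions
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        total, favorable = counts.get(group, (0, 0))
        counts[group] = (total + 1, favorable + decision)
    rates = {g: fav / tot for g, (tot, fav) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: outputs of a hypothetical loan-approval model.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
# Group a is approved at 3/4 = 0.75, group b at 1/4 = 0.25, so the gap is 0.5.
```

A check like this does not by itself settle the ethical question; deciding which fairness criterion applies, and what gap is acceptable, remains a philosophical and regulatory judgment.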
Specific Issues and Concepts in AI Ethics
The specific issues and concepts involved in AI ethics are described in the term pages below.
- The inability to explain the inner workings of some AI systems.