Artificial Intelligence (AI) and ethics have become an important topic in the development and implementation of AI-related applications. As someone once said, “Just because you can do something does not mean that you should!”
The intent of AI is to enable computers to perform tasks that normally require human intelligence; as such, AI will evolve to take over many jobs once performed by humans. From the late 1980s through the late 1990s, AI was identified as a multidisciplinary science encompassing expert systems, neural networks, robotics, Natural Language Processing (NLP), speech recognition, and virtual reality. This multidisciplinary character enabled AI to make inroads into industries such as healthcare, automotive, manufacturing, and the military. The most recent focus in AI is the implementation of Machine Learning (ML) algorithms, and it is with ML algorithms that we have taken more notice of the ethical and bias challenges faced when implementing AI applications.
Cognitive Bias, Cultural Bias, and Systems of Belief
Cognitive bias refers to the systematic way in which the context and framing of data, information, and knowledge influence an individual's judgment and decision making. There are many types of cognitive bias, and several can be influencing your decision making at any one time. They include: actor-observer bias, anchoring bias, attentional bias, the availability heuristic, confirmation bias, the false consensus effect, functional fixedness, the halo effect, the misinformation effect, optimism bias, self-serving bias, and the Dunning-Kruger effect (see more on Cognitive Bias).
All people are susceptible to cognitive bias, and confirmation bias is especially concerning. Confirmation bias is the favoring of information that conforms to your existing beliefs and the discounting of evidence that does not; it can have a profound effect on the selection of data used to train machine learning algorithms.
Believing things that are not true gets in the way of understanding what is objectively true, and the urge to find a pattern where none really exists is another aspect of confirmation bias.
Cultural bias is the tendency to interpret a word or action according to the culturally derived meaning assigned to it. Cultural bias can influence how we interpret and judge data, the results of analyzing that data, and the knowledge gained from the analysis.
The biases of the team members selecting the datasets used to train an algorithm will influence the algorithm's results, and these biased results contribute to the ethical challenges those results represent. In the face of new data, the results of the algorithms must be questioned: examine the data used to train the algorithms and remove any biased data, and examine the algorithms themselves to ensure the correct ones are being used for the appropriate tasks. Otherwise, the algorithms will perpetuate, extrapolate, and compound these biases in their results, in how the results are interpreted, and in the knowledge gained from them.
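As an illustration only, one simple way to start examining training data for bias is to compare label rates across groups defined by a sensitive attribute. The dataset, the field names ("group", "hired"), and the idea of using a hiring label are all assumptions made for this sketch, not anything prescribed by a particular standard:

```python
from collections import Counter

def label_rates_by_group(records, group_key, label_key):
    """Compute the positive-label rate for each group in a dataset.

    Large gaps between groups can signal biased training data that
    should be examined before the data is used to train a model.
    """
    totals = Counter()
    positives = Counter()
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += row[label_key]
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring dataset: 'hired' is the training label (1 = hired).
data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
rates = label_rates_by_group(data, "group", "hired")
# Group A's rate (0.75) is three times Group B's (0.25) -- a gap
# worth investigating before training on this data.
```

A check like this does not prove bias on its own; a real gap may reflect biased historical decisions, skewed sampling, or legitimate differences, and each possibility needs human scrutiny.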
Systems of Belief. “Every personal system of belief contains identifiable biases” (Neil deGrasse Tyson). These biases influence how someone thinks, and a person's system of belief will also drive what they believe is the ethical use of AI, and of machine learning in particular. In any research it is important to identify potential biases and work to keep them from affecting the results. Selecting data and building and training machine learning algorithms is no different: these biases must be identified and their effect on the results mitigated.
Biases affect everything we do, including the data we select and the emphasis we put on certain types of data. We must recognize when bias occurs and remove it before it influences the design, development, and implementation of our AI applications. This is why I believe that having a diverse team that brings diverse ideas, experience, and knowledge is important in addressing bias in AI and, in turn, improving the ethicality of AI.
Adhering to Standards
Adhering to AI standards that include criteria for examining the ethicality of AI applications and for eliminating bias will go a long way toward establishing a universal baseline for addressing ethics in AI. I say a baseline because I believe the technology will continue to evolve, and the AI ethics community will need to evolve the standards to keep pace.
One such standard comes to mind when addressing AI ethics and bias: the OECD Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. Cited as the first intergovernmental standard on AI, it was proposed by the Committee on Digital Economy Policy (CDEP) and adopted on May 22, 2019. Although the standard does not explicitly contain a section on how to overcome ethical issues and bias in AI, it does have sections detailing responsible stewardship of, and international cooperation in implementing, trustworthy AI. I believe it is this international standard (although there may be others) that must be instituted, adhered to, and evolved to address how ethical issues and bias in AI should be handled.
AI & Ethics Journal
The AI & Ethics journal published by Springer, on whose founding editorial board I am honored to serve, will focus on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. It seeks to promote informed debate and discussion of current and future developments in AI and the ethical, moral, regulatory, and policy implications that arise from them. The journal will provide opportunities for academics, scientists, practitioners, policy makers, and the public to consider how AI might affect our lives in the future, and what implications, benefits, and risks might emerge.
The journal will cover a broad range of issues relating to AI, including (but not limited to):
- The moral questions which arise from basic research and the design and use of practical applications of AI
- The need for greater public understanding of AI, including the benefits and risks AI developments may bring
- The need for regulatory frameworks and guidelines, including self-regulation and constraints, and controls on development and implementation of AI technologies
- Emergent public policy and the possibility of legislative interventions linked to current and future ethical issues related to the use of AI technologies and their applications.
Relevant subject areas impacted by AI and ethics include (but are not limited to): Automation, Autonomous Vehicles, Emerging Technologies, Entertainment Industry, Finance, Health and Medicine, Human Interaction with AI, Law, Machine Learning and Deep Learning, Manufacturing and Industry, Media, Military, Politics, Privacy and Security (Privacy-preserving), Religion/Theology, Robotics, Social Media, Society, and the Workforce of the Future. I would encourage anyone who is interested to submit articles for peer review. We need your input, knowledge, and thought leadership!
Thanks, Tony, for the great summary and historical context of this rapidly growing new field of AI and Ethics. Clearly from your post, even though new, this discipline impacts and is impacted by all fields involving data, information, and knowledge. That’s pretty much everything these days. Also from your post, we see that bias can be unintentional and intentional, which implies eternal vigilance in our professions.
Welcome to the board; your experience in general, and longtime KM perspective in particular, are much needed.
Thanks Larry for your comments and insights! I am excited to be a part of the AI & Ethics Board and looking forward to applying my experiences in AI and KM.