
I recently had the pleasure of being a guest on Edwin K. Morris's podcast "Because You Need to Know". During the conversation, the topic of AI ethics came up, and a couple of questions immediately came to mind: are tech giants like Google, Facebook, Microsoft, and IBM shaping and controlling the narrative when it comes to AI ethics? Furthermore, when these companies implement AI in their products, can we trust that they are practicing ethical AI? There is certainly a need for more transparency in the implementation of AI.

The intent of AI is to enable computers to perform tasks that normally require human intelligence; as such, AI will evolve to take over many jobs once performed by humans. From the late 1980s through the late 1990s, AI was identified as a multidisciplinary science that included expert systems, neural networks, robotics, Natural Language Processing (NLP), speech recognition, and virtual reality. This multidisciplinary character enabled AI to make inroads into industries such as healthcare, automotive, manufacturing, and the military. The most recent focus of AI is the implementation of Machine Learning (ML) algorithms, and it is with ML algorithms that we have taken more notice of the ethics and bias challenges of implementing AI applications.

Cognitive Bias, Cultural Bias, and Systems of Belief

Cognitive bias refers to the systematic way in which the context and framing of data, information, and knowledge influence an individual's judgment and decision making. There are many types of cognitive bias, and more than one can be influencing your decision making at any given time. They include actor-observer bias, anchoring bias, attentional bias, the availability heuristic, confirmation bias, the false consensus effect, functional fixedness, the halo effect, the misinformation effect, optimism bias, self-serving bias, and the Dunning-Kruger effect (see more on Cognitive Bias).

All people are susceptible to cognitive bias. Confirmation bias, the favoring of information that conforms to your existing beliefs and the discounting of evidence that does not, will have a profound effect on data selection when training machine learning algorithms. Believing things that are not true gets in the way of understanding what is objectively true, as does the urge to find a pattern where none exists; both feed selection bias and confirmation bias.
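To make this concrete, here is a minimal sketch (with entirely made-up data) of how confirmation bias in data selection can make a worthless rule look perfect: a curator who already believes "high scores imply success" and keeps only the examples that confirm that belief will see it validated by the curated data, even though the full data shows no relationship at all.

```python
import random

random.seed(42)

# Hypothetical data: (score, passed) pairs. Score and outcome are generated
# independently, so the score has no real predictive power.
population = [(random.random(), random.random() < 0.5) for _ in range(10_000)]

def rule_accuracy(data):
    # Accuracy of the belief "score > 0.5 implies passed".
    return sum((score > 0.5) == passed for score, passed in data) / len(data)

# On the full data, the rule is no better than chance (close to 0.5).
print(f"full data:    {rule_accuracy(population):.2f}")

# A curator who already believes the rule keeps only the examples that
# confirm it (confirmation bias in data selection) -- it now looks perfect.
confirming = [(s, p) for s, p in population if (s > 0.5) == p]
print(f"curated data: {rule_accuracy(confirming):.2f}")
```

A model trained on the curated set would inherit the curator's belief rather than learn anything real, which is exactly the risk described above.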

A cultural bias is a tendency to interpret a word or action according to a culturally derived meaning assigned to it. Cultural bias can influence how we interpret and judge data, the results of analyzing that data, and the knowledge gained from the analysis.

The biases of the team members who select the datasets used to train an algorithm will influence the algorithm's results, and those biased results contribute to the ethical challenges the results present. In the face of new data, there is a need to question the results of the algorithms: examine the data used to train them and remove any biased data, and examine the algorithms themselves to ensure that the correct ones are being used for the appropriate tasks. Otherwise the algorithms will perpetuate, extrapolate, and compound these biases in their results, in how those results are interpreted, and in the knowledge gained from them.

Systems of Belief: "Every personal system of belief contains identifiable biases" (Neil deGrasse Tyson).

I like the quote from Neil deGrasse Tyson because it is simply stated and relatable. Biases influence how someone thinks, and a person's system of belief about implementing AI will also drive what that person believes is the ethical use of AI, and of machine learning in particular. In any research it is important to identify potential biases and work to keep them from affecting the results. It is no different when we select data and build and train machine learning algorithms: these biases must be identified, mitigated, and removed.

Biases affect everything we do, including the data we select and the emphasis we put on certain types of data. We must recognize when bias occurs and remove it before it influences the design, development, and implementation of our AI applications. This is why I believe that having a diverse team that brings diverse ideas, experience, and knowledge is important in addressing bias in AI, and in turn improving the ethics of AI.
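One simple way to recognize when bias has crept in is to audit a model's outcomes across groups before deployment. The sketch below (group names and model decisions are hypothetical) compares the favorable-decision rate per group; a large gap is a signal to re-examine the training data and the algorithm, in the spirit described above.

```python
# Minimal bias-audit sketch: compare a model's favorable-decision rate
# across groups to surface bias inherited from the training data.

def favorable_rate(decisions):
    # Fraction of decisions that are favorable (1 = favorable, 0 = not).
    return sum(decisions) / len(decisions)

# Hypothetical model decisions for two groups of applicants.
decisions_by_group = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6 of 8 favorable
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2 of 8 favorable
}

rates = {group: favorable_rate(d) for group, d in decisions_by_group.items()}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: favorable rate {rate:.0%}")
# A large gap flags the data and the model for review before deployment.
print(f"demographic parity gap: {gap:.0%}")
```

An audit like this does not prove fairness on its own, but it turns "recognize when bias occurs" into a repeatable check rather than a one-time judgment call.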

Putting AI Ethics into Practice

To put AI ethics into practice, you must start with sound AI policies and standards. There are several international standards on AI that include ethical standards.

  • Adhere to AI standards that include criteria for examining the ethics of AI applications and criteria for eliminating bias.
  • I believe the technology will continue to evolve, and the AI ethics community will need to evolve the standards to keep pace.

One such standard comes to mind when addressing AI ethics and bias: the Organisation for Economic Co-operation and Development (OECD) Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. Cited as the first intergovernmental standard on AI, it was proposed by the Committee on Digital Economy Policy (CDEP) and adopted on May 22, 2019.

Establish a Diverse AI Product Development Team:

Through collaboration, knowledge sharing, and knowledge reuse, it is important to leverage different points of view, different experiences, and different cultural backgrounds to stimulate diversity of thought. Diversity of thought leads to innovation, and that innovation enables organizations to deliver unique and/or improved AI products.

  • Establish a diverse team for the design, development, and implementation of AI applications.
  • A diverse team brings a "diversity of thought" to the initiative, especially during the selection and cleansing of data for AI applications that use Machine Learning.

Develop AI Applications to Be People (Human)-Centered:

  • People-centered AI applications support the inclusivity and well-being of the people they serve and respect human-centered values and fairness.
  • People-centered AI applications must be designed, developed, and implemented with transparency; be robust and safe; and be accountable for the results and decisions they produce and/or the decisions they influence.

Establish AI KPIs and Metrics:

  • To ensure progress toward implementing AI standards, guidelines, and principles, you need to establish standard metrics to measure AI systems.
  • Build evidence-based metrics and KPIs to continually assess the AI applications that are implemented.
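As one illustration of an evidence-based fairness KPI, the sketch below tracks a disparate impact ratio (the selection rate of the least-favored group divided by that of the most-favored group) across review periods. The 0.8 threshold follows the common "four-fifths" rule of thumb, and all group names and numbers are hypothetical.

```python
# Sketch of one evidence-based fairness KPI for continual assessment:
# the disparate impact ratio, checked against a 0.8 ("four-fifths") threshold.

def disparate_impact_ratio(selection_rates):
    # Ratio of the lowest group selection rate to the highest.
    return min(selection_rates.values()) / max(selection_rates.values())

THRESHOLD = 0.8

# Hypothetical quarterly measurements for one deployed AI application.
quarterly_rates = {
    "Q1": {"group_a": 0.60, "group_b": 0.30},
    "Q2": {"group_a": 0.58, "group_b": 0.49},
}

for quarter, rates in quarterly_rates.items():
    ratio = disparate_impact_ratio(rates)
    status = "OK" if ratio >= THRESHOLD else "REVIEW"
    print(f"{quarter}: disparate impact ratio {ratio:.2f} -> {status}")
```

Measuring the same KPI every period, rather than once at launch, is what makes the assessment continual: a ratio that drifts below the threshold triggers a review of the data and the model.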

Relevant subject areas impacted by AI and ethics include (but are not limited to): Human Resources, the Entertainment Industry, Finance, Health and Medicine, Law, Learning and Development, Manufacturing and Industry, Media, the Military, Politics, Privacy and Security (privacy-preserving AI), and Social Media.
