
AI Ethics has become a hot (and necessary) topic of discussion over the past few years. I have written about AI Ethics in the delivery of knowledge, AI Ethics in delivering healthcare analytics, AI Ethics and Diversity, Equity, and Inclusion (DEI), and how to put AI Ethics into practice, and I have participated in webinars and podcasts on AI Ethics.
As we enter 2023, I see several challenges for implementing and executing AI Ethics at organizations in the coming months.
- The ethicality of AI applications must be examined to understand whether the outcomes of applying AI violate US federal law, the GDPR, and/or other ethical, security, and privacy standards.
- Organizations must adopt and adhere to an AI ethical standard, one in which AI solutions comply with the ethical, security, and privacy standards set by the individual organization and aligned with an established AI standard. The question, however, is which AI standard to adopt and/or align with. There are several to consider, including the AI Bill of Rights, the international AI standards developed by the Organization for Economic Co-operation and Development (OECD), the United Nations Educational, Scientific and Cultural Organization (UNESCO), the United States Artificial Intelligence Institute (USAII), and the U.S. Department of Commerce National Institute of Standards and Technology (NIST) AI Standards for Federal Engagement, just to name a few (see the AI Governance Landscape below for a snapshot of AI governance standards).
- Organizations must ensure compliance with responsible AI. The challenge for organizations using AI technology is to ensure that the results generated by AI are: 1) Explainable: the performance of AI systems is understandable by the people in charge; 2) Fair: the data is not biased; 3) Accountable: it is clearly identified which decisions will be delegated to machines versus require human intervention, and who is responsible for each of them; and 4) Symmetric: data use complies with the expectations of the original owners of the data.
- Establish diverse AI product development teams:
- Through collaboration, knowledge sharing, and knowledge reuse, it is important to leverage different points of view, different experiences, and different cultural backgrounds to stimulate innovation and to eliminate (or limit) bias. This innovation will enable organizations to deliver unique and/or improved AI products. The challenge here is for organizations to establish a diverse team that brings together skill sets in the ethical use, data collection, design, development, and implementation of AI applications. A diverse team will bring a “diversity of thought” to the initiative, especially during the selection and cleansing of data, to assist in removing bias from AI applications that use Machine Learning.
- Expand the Mission of Diversity, Equity and Inclusion (DE&I)
- To support the creation of diverse teams and champion a human-centered AI approach, organizations face the challenge of expanding (in most cases, establishing) their DE&I programs. People-centered AI applications support the inclusivity and diverse team engagement championed by DE&I. DE&I supports the well-being, values, and fairness that a human-centered AI approach serves. People-centered AI applications must be designed, developed, and implemented with transparency; be robust and safe; and be accountable for the results they produce and/or the decisions they influence. Leveraging DE&I to advance ethical AI will, I believe, yield tangible results and beneficial returns for the organization, its culture, its people, and the customers the organization serves.
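To make the fairness criterion above a little more concrete, here is a minimal sketch of one common bias check, demographic parity: comparing a model's positive-outcome rate across groups. The function, the group labels, and the example data are all illustrative assumptions on my part, not part of any specific standard named above.

```python
def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rates between groups.

    outcomes: list of 0/1 model decisions (e.g., loan approved or not)
    groups:   list of group labels, parallel to outcomes
    """
    # Tally (total, positives) per group
    tallies = {}
    for outcome, group in zip(outcomes, groups):
        total, positives = tallies.get(group, (0, 0))
        tallies[group] = (total + 1, positives + outcome)
    # Positive-outcome rate for each group
    rates = [positives / total for total, positives in tallies.values()]
    return max(rates) - min(rates)

# Hypothetical example: approvals for two groups "A" and "B"
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)
# Group A approval rate 3/4, group B 1/4, so the gap is 0.5
```

A large gap does not by itself prove unethical bias, but it is the kind of measurable signal a diverse review team can use to decide whether the data selection or the model needs rework.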

These are some of the challenges I see facing organizations implementing AI Ethics in 2023. I’m sure there are many more. I welcome your comments and suggestions. If your organization is addressing any of these challenges, I’m sure everyone would like to know what it is doing to meet them!