The intent of AI is to enable computers to perform tasks that normally require human intelligence; as such, AI will evolve to take over many jobs once performed by humans. From the late 1980s through the late 1990s, AI was identified as a multidisciplinary science that included expert systems, neural networks, robotics, Natural Language Processing (NLP), speech recognition, and virtual reality. This multidisciplinary character enabled AI to make inroads into industries such as healthcare, automotive, manufacturing, and the military. The most recent focus of AI is the implementation of Machine Learning (ML) algorithms, and it is with ML that the ethical and bias challenges of implementing AI applications have drawn the most notice.
When it comes to AI ethics, a couple of questions immediately come to mind: are tech giants like Google, Facebook, Microsoft, and IBM shaping and controlling the narrative around AI ethics? Furthermore, can we trust that when tech giants implement AI in their products they are practicing ethical AI? There is certainly a need for more transparency in the implementation of AI.
Transparency in AI implementation is an important area that DE&I (Diversity, Equity, and Inclusion) programs can champion, and it provides an actionable arena where diversity, equity, and inclusion practices can be realized. DE&I programs are becoming prevalent at organizations across all industries globally. The pillars of any DE&I program are building an inclusive culture and improving diverse team engagement through policies and programs that promote the representation and participation of different groups of individuals, including people of different ages, races and ethnicities, abilities and disabilities, genders, religions, cultures, and sexual orientations.
DE&I can demonstrate support for ethical AI by working with senior leadership, including AI leadership, first by contributing to sound AI policy and standards. These AI policies and standards should include criteria for examining the ethicality of AI applications and for eliminating bias. Second, DE&I can champion the establishment of diverse AI product development teams and stress that AI applications be developed to be people (human)-centered.
Establish a Diverse AI Product Development Team:
Through collaboration, knowledge sharing, and knowledge reuse, it is important to leverage different points of view, experiences, and cultural backgrounds to stimulate innovation and to eliminate (or limit) bias. This DE&I-championed action drives innovation that enables organizations to deliver unique or improved AI products.
- Establish a Diverse Team in the design, development, and implementation of AI applications
- A diverse team brings a “diversity of thought” to the initiative, especially during the selection and cleansing of data, helping to remove bias from AI applications that use Machine Learning.
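As a minimal sketch of what a data review might involve during selection and cleansing, the snippet below computes each group's share of a training set so a review team can spot under-representation before a model is trained. The field name `group` and the sample records are hypothetical, chosen purely for illustration; real reviews would cover many more attributes and fairness measures.

```python
from collections import Counter

def group_representation(records, group_key):
    """Return each group's share of the dataset, so reviewers can
    flag under-represented groups before model training begins."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical sample: a training set skewed toward one group.
data = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "B", "label": 0},
]

shares = group_representation(data, "group")
# Here group "B" makes up only 25% of the data, which a diverse
# review team might flag as a potential source of bias.
```

A check like this is only a starting point; balanced representation in the data does not by itself guarantee a fair model, which is why the human review by a diverse team remains essential.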
Develop AI applications to be people (human)-centered:
- People-centered AI applications support the inclusivity and diverse team engagement championed by DE&I.
- DE&I supports the well-being, values, and fairness that a human-centered AI approach serves.
- People-centered AI applications must be designed, developed, and implemented with transparency; be robust and safe; and be accountable for the results and decisions they produce and/or the decisions they influence.
Leveraging DE&I to advance ethical AI will, I believe, yield tangible results and beneficial returns for the organization, its culture, its people, and the customers the organization serves!