The full article appears in the AI and Ethics Journal, published by Springer.
Introduction
The focus of knowledge management (KM) is to enable people and organizations to collaborate, share, create, and use knowledge. With this understanding, KM is leveraged to improve performance, increase innovation, and grow the knowledge base of both people and the organization. Knowledge must be Dynamic, Accurate, and Personal to be applied in the decision-making process. Artificial Intelligence (AI), through machine learning, allows machines to acquire, process, and use knowledge to perform tasks, and to unlock knowledge that can be delivered to people to improve the decision-making process. AI plays an important part in delivering knowledge in a digitized organization by elevating how knowledge is delivered to the people who need it, and it is used to scale the volume and effectiveness of knowledge distribution. It is imperative that when AI is applied to deliver knowledge for people to make decisions, including when AI is used to make decisions without human involvement, the knowledge is without bias and the decisions made with that knowledge are ethical.
Incorporating artificial intelligence (AI)
AI provides the mechanisms that enable machines to learn. Incorporating AI in the delivery of knowledge facilitates fast, efficient, and accurate decision making. AI provides the capabilities to expand, use, and create knowledge in ways we have not yet imagined. AI systems that use machine learning can detect patterns in enormous volumes of data and model complex, interdependent systems to generate outcomes that improve the efficiency of decision making. The use of AI (machine learning) in delivering knowledge depends on the data used to train the machine learning algorithms. We must keep in mind that when it comes to AI, we need both responsible use and responsible design.
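As a simple illustration of the point that a machine learning model is only as good as the data used to train it, the sketch below (Python, using scikit-learn) trains a classifier on a synthetic dataset. The data, model choice, and parameters are illustrative assumptions, not a recommendation for any particular KM system.

```python
# Minimal sketch: a machine-learning model learns patterns from training data.
# Synthetic data stands in for organizational knowledge records; the dataset size,
# feature count, and choice of model are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Generate a toy dataset: 1,000 records, 20 features, 2 outcome classes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a classifier: the patterns it captures depend entirely on the training data,
# which is why biased or unrepresentative data leads to biased outputs.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```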
Scale the delivery of knowledge
AI plays an important part in delivering knowledge in a digitized organization by elevating how knowledge is delivered to the people who need it. AI is used to scale the volume and effectiveness of knowledge distribution by:
- Predicting trending knowledge areas and topics that your employees need
- Identifying which targeted knowledge will resonate with your employees, based on real-time engagement and content consumption
- Auto-curating and personalizing knowledge based on individual preferences (a minimal sketch follows this list)
- Improving content decisions by using machine learning to determine which content is best suited to the situation
- Making search and search products more relevant, precise, and efficient
- Deploying chatbots with natural language processing (NLP) to deliver personalized knowledge to employees across organizational functions at critical decision-making points
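As one illustration of the auto-curation and personalization point above, the following Python sketch ranks knowledge-base articles against an employee's recent reading using TF-IDF similarity. The article titles, reading history, and similarity approach are assumptions made for the example, not a description of any specific product.

```python
# Minimal sketch of auto-curating knowledge for an individual employee.
# The knowledge-base articles, the employee's reading history, and the use of
# TF-IDF cosine similarity are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Quarterly sales forecasting with regression models",
    "Onboarding checklist for new engineering hires",
    "Data privacy guidelines for customer records",
    "Troubleshooting the build pipeline after a failed deploy",
]

# Articles this employee recently engaged with (their consumption history).
recently_read = ["Fixing a broken deployment pipeline", "Build failures and rollbacks"]

vectorizer = TfidfVectorizer()
kb_vectors = vectorizer.fit_transform(knowledge_base)
profile = vectorizer.transform([" ".join(recently_read)])

# Rank knowledge-base items by similarity to the employee's reading profile.
scores = cosine_similarity(profile, kb_vectors).ravel()
for score, title in sorted(zip(scores, knowledge_base), reverse=True):
    print(f"{score:.2f}  {title}")
```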
Ethical issues of AI delivery of knowledge
There are various ethical issues that arise with certain uses of AI technologies, and each type of AI technology raises different ethical issues when decisions are made based on AI. AI technologies include text analysis, natural language processing (NLP), logical reasoning, game playing, decision support systems, data analytics, predictive analytics, autonomous vehicles, and digital assistants (chatbots), to name a few. AI also involves a number of computational techniques, such as classical symbol manipulation inspired by natural cognition, or machine learning via neural networks.
Some examples of the decisions being made with knowledge that is provided by AI applications include:
Financial services
In financial services (including insurance), many organizations are deploying AI solutions. In many cases, financial services organizations combine different AI solutions with machine learning (e.g., robotic process automation, language processing/NLP, and deep learning decision solutions) to deliver knowledge that helps users make better decisions. There are several benefits to deploying AI solutions in the financial services sector, including improved customer service, smarter investment tools, credit analysis and scoring, and smarter financial analysis tools. However, these AI-empowered tools raise policy questions related to ensuring accuracy, preventing discrimination and bias (especially in credit analysis and scoring), and their impact on jobs.
Where AI is used for credit analysis and scoring, the difficulty of explaining the results of these algorithms has been a problem (OECD, Artificial Intelligence in Society, 2019). This is driven by legal standards in several countries, including the United States, that require high levels of transparency. For example, in the United States the Fair Credit Reporting Act (1970) and the Equal Credit Opportunity Act (1974) imply that both the process and the output of any algorithm must be explainable.
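To make the explainability requirement concrete, here is a minimal Python sketch of a linear credit-scoring model whose per-applicant feature contributions can be reported as reasons for a decision. The feature names, toy data, and contribution method are illustrative assumptions, not a regulator-approved approach.

```python
# Minimal sketch of explainable credit scoring: a linear model whose per-applicant
# feature contributions can be surfaced as reasons for a decision.
# Feature names, weights, and data are hypothetical and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "late_payments", "credit_history_years"]

# Toy training data: each row is a (standardized) applicant; y = 1 means approved.
X = np.array([
    [ 1.2, -0.5, -1.0,  0.8],
    [-0.7,  1.3,  1.5, -0.9],
    [ 0.9, -0.2, -0.4,  1.1],
    [-1.1,  0.8,  0.9, -1.2],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# For one applicant, the contribution of each feature is coefficient * value,
# which could be reported as a reason for the outcome under transparency rules.
applicant = np.array([-0.4, 1.0, 1.2, -0.5])
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")
```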
Healthcare
AI applications in healthcare and pharmaceuticals have produced many benefits by delivering knowledge to detect health conditions early, deliver preventative services, optimize clinical decision making, discover new treatments and medications, and deliver personalized healthcare, while providing powerful self-monitoring tools, applications, and trackers. Although AI in healthcare offers many benefits, it also raises policy questions and concerns, including access to health data and privacy, which includes personal data protection.
The healthcare sector is a knowledge-intensive industry that depends on data and analytics to improve the delivery of healthcare (treatments, practices, and procedures). There has been tremendous growth in the range of information collected, including clinical, genetic, behavioral, and environmental data, with healthcare professionals, biomedical researchers, and patients producing vast amounts of data from an array of devices (e.g., electronic health records (EHRs), genome sequencing machines, high-resolution medical imaging, and smartphone applications). How this data is collected, used, and protected raises challenges that every country will have to address within its own legal standards. In the United States, organizations such as the Food and Drug Administration (FDA) and laws such as the Health Insurance Portability and Accountability Act (HIPAA) of 1996 are in place to ensure that standards, guidelines, data security, and privacy are adhered to and enforced.
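As a small illustration of one data-protection step, the sketch below removes direct identifiers from a patient record before it would be used for analytics. The field names and identifier list are hypothetical and do not represent the full HIPAA de-identification standard.

```python
# Minimal sketch of one data-protection step: stripping direct identifiers from a
# patient record before analytics. The field names and the identifier list are
# hypothetical assumptions, not the HIPAA Safe Harbor specification.
import hashlib

DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "address"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and replace the patient ID with a one-way hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["patient_id"] = hashlib.sha256(record["patient_id"].encode()).hexdigest()[:12]
    return cleaned

record = {
    "patient_id": "MRN-00123",
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "diagnosis_code": "E11.9",
    "hba1c": 7.2,
}
print(deidentify(record))
```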
To read the entire article, access the AI and Ethics Journal.
Join us online on November 16 at 10 am ET for the webinar on AI Ethics and its Impact on Knowledge Management, conducted through the KM Institute's webinar series.
This webinar examines how the challenges of removing bias from AI applications, and the ethics of applying AI, will impact decision making with the knowledge being delivered.