
Artificial Intelligence (AI) has become the latest “buzzword” in the industry today, yet AI has been around for decades. I was first exposed to AI in 1987 as a Computer Science master’s student at DePaul University. I was so intrigued by the possibilities of AI that it became my specialty, with a particular focus on Expert Systems and Artificial Neural Networks (ANNs). After spending a few years (1989 – 1993) designing and developing expert systems, I participated in the IEEE conference on AI, where I presented an article entitled “Getting Started in Artificial Intelligence,” published in the 1993 Proceedings of the IEEE International Conference on Developing and Managing Intelligent System Projects.

The way most organizations became involved in AI at that time was by developing expert (rule-based/knowledge-based) systems. From 1989 to 1998, I developed expert systems and neural networks with platforms such as AionDS, Nexpert Object and PegaRules. The intent of AI was (and still is) to enable computers to perform tasks that normally require human intelligence. AI’s promise during that time was to build systems that would make decisions faster and more efficiently than humans, with the possibility of automating some jobs and eventually replacing the humans who performed them. Does this sound familiar?

We must remember that AI doesn’t consist only of expert systems and neural networks. In the late 1980s and early 1990s, AI evolved into a multidisciplinary science that also includes Robotics, Natural Language Processing (NLP), Speech Recognition and Virtual Reality. As we moved into 2010, the focus of AI for most organizations turned to machine learning. This “rebirth” of AI was driven by Big Data and the promise that machine learning would extract knowledge and insights from the multitude and variety of an organization’s data.

Getting started in AI for most organizations now consists of the following:
• Select the right business case
• Understand the organization’s data
• Leverage the right machine learning algorithm(s)
• Select the right hardware and software
• Assemble the right team (including the Data Scientist)

To provide more insight on machine learning algorithms and their use on big data: the machine learning algorithms typically developed fall into supervised and unsupervised learning. The aim of a supervised machine learning algorithm is to build a model that makes predictions based on evidence in the presence of uncertainty. This adaptive algorithm identifies patterns in data, allowing the system to “learn” from its observations; when exposed to more observations, the system improves its predictive performance. An unsupervised learning algorithm, in contrast, is used to find hidden patterns in the data, drawing inferences from datasets consisting of input data without labeled responses. A common unsupervised method is cluster analysis, which is well suited to exploratory data analysis for finding hidden patterns or groupings in the data.
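The distinction between the two approaches can be shown with a minimal sketch in plain Python (standard library only). The function names, sample data and two-dimensional features here are illustrative assumptions, not from any particular toolkit: a 1-nearest-neighbor predictor stands in for supervised learning, and a simple k-means loop stands in for the cluster analysis described above.

```python
import math
import random

def nn_predict(train, query):
    """Supervised: predict a label for `query` from labeled evidence,
    using its single nearest labeled example (1-nearest-neighbor)."""
    features, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

def kmeans(points, k, iters=20, seed=0):
    """Unsupervised: group unlabeled points into k clusters by
    repeatedly assigning points to the nearest center and moving
    each center to the mean of its assigned points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = tuple(sum(col) / len(cluster)
                                   for col in zip(*cluster))
    return centers, clusters

# Supervised: labeled observations, then a prediction for new input.
labeled = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
           ((8.0, 9.0), "high"), ((9.0, 8.5), "high")]
print(nn_predict(labeled, (8.5, 9.2)))       # → "high"

# Unsupervised: the same kind of data, but with no labels at all.
unlabeled = [(1.0, 1.0), (1.1, 0.9), (8.0, 9.0), (8.5, 8.8)]
centers, clusters = kmeans(unlabeled, k=2)
```

The point of the contrast: `nn_predict` needs labeled responses to make its prediction, while `kmeans` discovers the two groupings from the input data alone.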

In recent years, mining large amounts of data to gain competitive advantage has been gaining momentum, and machine learning and text analytics are central to that effort. As the proliferation of structured and unstructured data continues to grow, we will continue to need to uncover the knowledge contained within these big data resources, and machine learning will be key to extracting it. Research on strategy, process-centric approaches, interorganizational aspects of decision support and new technology, along with academic endeavors in this space, will continue to provide insights into how we process big data to enhance decision making. Deploying machine learning will be the catalyst that enables organizations to get started in AI.
