Jun 28, 2017
 

This is the second of a three-part post on the connection between Artificial Intelligence (AI) and Knowledge Management (KM). In this post I examine the industries that are being, or will soon be, disrupted by AI and KM, specifically in the form of Cognitive Computing. Before we look ahead, let's take a look back. When I first became involved in AI (in the late 1980s), its hype and promise became too much to live up to (a typical phenomenon in software; see Hype Cycle) and its promise faded into the background. Fast forward to 2010 and AI was again becoming the "next big thing". AI had already made its presence felt in the automobile industry (robotics), as well as in decision-making systems in medicine, logistics, and manufacturing (expert systems and neural networks). Now AI, in the form of Cognitive Computing, is making its mark on several industries. A recent CB Insights newsletter noted that the US Bureau of Labor Statistics indicates 10.5 million jobs are at risk of automation. The rapid adoption of better hardware processing capabilities, which makes it practical to run artificial intelligence algorithms on big data, is driving this change in both blue- and white-collar jobs.

At a recent Harvard University commencement address, Facebook Chief Executive Mark Zuckerberg stated, "Our generation will have to deal with tens of millions of jobs replaced by automation like self-driving cars and trucks." Bill Gates, the founder of Microsoft and Chairman of the Bill and Melinda Gates Foundation, had this to say in a recent MarketWatch story: "In that movie, old Benjamin Braddock (Dustin Hoffman) was given this very famous piece of advice: 'I just want to say one word to you. Just one word … Plastics.'" And today? That word would likely be "robots," and "artificial intelligence" would have a huge impact.

Although there are many industries where Cognitive Computing will disrupt the way business is conducted, including the economics of job loss and future job creation, I have chosen to look at three: Legal Services, the Automotive Industry, and Healthcare.

Legal Services

Knowledge Management (KM) is becoming more prevalent within law firms and legal departments as the practice of KM matures. AI technologies are also making their way into the practice of law. The ability to reuse internally developed knowledge assets such as precedents, letters, research findings, and case history information is vital to a law firm's success. Paralegals currently play a critical role in assisting attorneys with discovery. With AI systems, attorneys will be able to "mine" the large volumes of documents (i.e., precedents, research findings, and case history information) located in various repositories more accurately and efficiently, aiding decision making and successful client outcomes. This capability will reduce the number of paralegals and attorneys needed to perform these tasks.
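To make that idea concrete, here is a minimal, hypothetical sketch of one common mining technique, TF-IDF similarity ranking, applied to a handful of invented knowledge assets. It assumes scikit-learn is installed and is not drawn from any particular legal product.

```python
# Hedged sketch: rank a firm's documents by relevance to a research question.
# The documents and query below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Precedent: liquidated damages clause limits recovery for breach of contract",
    "Research memo: enforceability of non-compete agreements for departing associates",
    "Case history: summary judgment granted on statute of limitations grounds",
]
query = "are liquidated damages clauses enforceable"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)      # index the repository
query_vector = vectorizer.transform([query])           # vectorize the question

# Higher cosine similarity = more relevant to the attorney's question
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```

A production system would layer entity extraction, citation links and access controls on top of retrieval like this, but the ranking step is representative of how "mining" a repository typically begins.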

Cognitive computing will enable computers to learn how to complete tasks traditionally done by humans. The focus of cognitive computing is to look for patterns in data, carrying out tests to evaluate the data and produce results. This will give lawyers capabilities similar to those it gives doctors: an in-depth look into the data that yields insights that cannot be obtained otherwise. According to the 2015 Altman Weil Law Firms in Transition survey, 35% of law firm leaders indicate cognitive computing will replace first-year associates within the next ten years, 20% indicate it will replace second- and third-year attorneys as well, and 50% indicate it will replace paralegals altogether. Cognitive computing's capability to mine big data is the essential reason lower-level research jobs will be replaced by computers. This situation is not limited to the legal profession.

Automotive Industry

Autonomous Vehicles and Vehicle Insurance

Autonomous vehicles, also known as driverless cars, robot cars (here we go with robots again!), or self-driving cars, can guide themselves without human intervention. This kind of vehicle is paving the way for future cognitive systems in which computers take over the art of driving. Autonomous vehicles are positioned to disrupt the insurance industry. Let's take a look at the coverages that are part of a typical vehicle insurance policy.

Vehicle insurance typically addresses six coverages:

  • Bodily Injury Liability, which typically applies to injuries that you, the designated driver or policyholder, cause to someone else.
  • Medical Payments or Personal Injury Protection (PIP), which covers the treatment of injuries to the driver and passengers of the policyholder's vehicle.
  • Property Damage Liability, which covers damage you (or someone driving the car with your permission) may cause to someone else's property.
  • Collision, which covers damage to your car resulting from a collision with another car, an object, or even a pothole.
  • Comprehensive, which covers you for loss due to theft or damage caused by something other than a collision with another car or object, such as fire, falling objects, etc.
  • Uninsured and Underinsured Motorist Coverage, which reimburses you, a member of your family, or a designated driver if one of you is hit by an uninsured or hit-and-run driver.

The way these coverages are applied (or not) to a vehicle policy will be disrupted by the use of autonomous vehicles.

According to a 2016 Forbes article by Jeff McMahon, about 90 percent of car accidents are caused by human error, and it is estimated that autonomous vehicles will significantly reduce the number of accidents. This will significantly disrupt the insurance revenue model, affecting all six types of coverage identified above. When the risk of accidents drops, the demand for insurance will potentially drop as well (at least to the extent that states no longer require insurance that covers accidents). There is no doubt, then, that auto insurance companies will have to rethink the types of coverage they offer and the language governing their policies.

Some Unintended? Side Effects

The autonomous vehicle, with its multiple sensors, has the potential to eliminate accidents due to distraction and drunk driving. By largely eliminating crashes, it will disrupt the vehicle repair industry: collision repair shops will lose a huge portion of their business. Indirectly, the decreased demand for new auto parts will hurt vehicle parts manufacturers. According to the U.S. Department of Transportation, in 2010 approximately 24 million vehicles were damaged in accidents, at an economic cost of $76 billion in property damage. The loss of this revenue will put a strain on these manufacturers.

Healthcare

The healthcare delivery process presents a consistent flow of data, information and knowledge. Its stages include Patient Intake, Data Collection, Decision Support, Diagnosis and Treatment, and Patient Closeout. The stages that will be disrupted by cognitive computing are Patient Intake, Data Collection, and Diagnosis and Treatment.

Patient Intake and Data Collection: The patient intake process is the first opportunity to capture knowledge about the patient and his/her condition on arrival at the healthcare facility. Cognitive computing, executed through natural language processing (NLP) tools, will capture medical insurance information, method of payment, medical history and current vital signs. All of this data is transferred to the facility's database, creating an opportunity for the data, information and knowledge about the patient to be shared automatically. NLP tools will limit or eliminate the need for a receptionist or administrator to capture patient information at intake.
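As an illustration only, the sketch below shows the flavor of automated intake capture using simple pattern matching; the intake note, field names, and patterns are all hypothetical, and a real deployment would rely on a full NLP pipeline rather than regular expressions.

```python
import re

# Invented intake note; a real note would come from dictation or a form.
intake_note = (
    "Patient reports chest pain for two days. Insurance: Acme Health, "
    "member ID AH-443-221. Payment method: insurance. "
    "History: hypertension, type 2 diabetes. BP 150/95, pulse 88."
)

# Hypothetical fields to capture, each with a simple extraction pattern.
patterns = {
    "insurer": r"Insurance:\s*([^,]+)",
    "member_id": r"member ID\s*([A-Z0-9-]+)",
    "payment_method": r"Payment method:\s*(\w+)",
    "history": r"History:\s*([^.]+)",
    "blood_pressure": r"BP\s*(\d{2,3}/\d{2,3})",
}

record = {}
for field, rx in patterns.items():
    match = re.search(rx, intake_note)
    record[field] = match.group(1).strip() if match else None

# In a real system this record would flow into the facility's database.
print(record)
```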

Diagnosis and Treatment: Making a diagnosis is a very complex process that includes cognitive tasks involving both logical reasoning and pattern recognition. Artificial neural networks that incorporate deep-learning capabilities are being developed to mine health-related big data repositories. This innovation is providing clinicians and researchers with effective tools for improving and personalizing patient treatment options. It has been established that big-data analysis could help identify which patients are most likely to respond to specific therapeutic approaches. Analysis of such data may also improve drug development by allowing researchers to better target novel treatments to patient populations.
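The following is a hedged sketch of the kind of pattern recognition being described: a small neural network (scikit-learn's MLPClassifier) trained on a synthetic cohort to predict response to a therapy. The features, labels, and cohort are fabricated solely to illustrate the workflow, not any clinical result.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic cohort: [age, tumor_marker_level, gene_variant_present]
X = np.column_stack([
    rng.integers(40, 85, size=500),
    rng.normal(loc=5.0, scale=2.0, size=500),
    rng.integers(0, 2, size=500),
])
# Synthetic label: responders loosely tied to the variant and marker level
y = ((X[:, 2] == 1) & (X[:, 1] > 4.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```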

The clinical trials that pharmaceutical companies rely on for FDA approval and drug labeling capture too little of the information patients and physicians need. Trials enroll only a small percentage of patients and can take years and tens of millions of dollars to finish; many never enroll enough patients to get off the ground. Cognitive computing will help physicians understand which patients are most likely to respond to standard approaches, and which need more aggressive treatment and monitoring. Enabling cognitive computing to harness the genetic and clinical data routinely generated by hospitals and physicians would also accelerate drug development by rapidly matching targeted treatments sitting in companies' research pipelines with the patients most likely to respond. In addition, the sheer volume of clinical research and medical trials being published on an ongoing basis makes it difficult to analyze the resulting big data without cognitive computing tools (for more information see the Forbes article: IBM and Microsoft are Disrupting The Healthcare Industry with Cognitive Computing).

Where do we go from here?

AI, KM and Cognitive Computing will continue to evolve, and more areas of disruption are coming. So the questions are: how can we address the loss of jobs, how can we prepare for the new jobs, and how must business and government evolve to meet the challenges that cognitive computing presents? It is clear that we must retrain and retool the current workforce, and at the same time equip our vocational schools, trade schools, colleges and universities with the right tools and experienced instructors and professors to teach the concepts and applications of AI, KM and Cognitive Computing. Businesses must continue to innovate; innovating in the same old way will cause a business to become extinct. I'm talking about innovating by bringing a diversification of thought and experience, including cultural diversity, into the innovation community, creating your innovation intersection (for more on finding your innovation intersection, read The Medici Effect by Frans Johansson). Only by innovating differently will your business not only survive but thrive in this new world, where interacting with computers (yes, robots too!) will be an everyday occurrence.

May 31, 2017
 

This is the first of a three-part post on the connection between Artificial Intelligence and Knowledge Management.

Artificial Intelligence (AI) has become the latest "buzzword" in the industry today. However, AI has been around for decades. The intent of AI is to enable computers to perform tasks that normally require human intelligence, and as such AI will evolve to take over many jobs once performed by humans. I studied and developed applications in AI from the mid-to-late 1980s through the early 2000s. In the late 1980s and early 1990s, AI evolved into a multidisciplinary science that included expert systems, neural networks, robotics, Natural Language Processing (NLP), speech recognition and virtual reality.

Knowledge Management (KM) is also a multidisciplinary field, encompassing psychology, epistemology, and cognitive science. The goals of KM are to enable people and organizations to collaborate, share, create, use and reuse knowledge. With that understanding, KM is leveraged to improve performance, increase innovation and expand what we know, from both an individual and an organizational perspective.

At their core, KM and AI are both about knowledge. AI provides the mechanisms that enable machines to learn: it allows machines to acquire, process and use knowledge to perform tasks, and to unlock knowledge that can be delivered to humans to improve decision making. I believe that AI and KM are two sides of the same coin. KM allows an understanding of knowledge to occur, while AI provides the capabilities to expand, use, and create knowledge in ways we have not yet imagined.

The connection between KM and AI has led the way to cognitive computing. Cognitive computing uses computerized models to simulate human thought processes. It involves self-learning and deep-learning artificial neural network software that uses text and data mining, pattern recognition and natural language processing to mimic the way the human brain works. Cognitive computing is leading the way for future applications involving AI and KM.

In recent years, the ability to mine ever-larger amounts of data, information and knowledge for competitive advantage, and the importance of data and text analytics to that effort, have been gaining momentum. As structured and unstructured data continue to proliferate, we will continue to need ways to uncover the knowledge contained within these big data resources, and cognitive computing will be key to extracting it. Work ranging from strategy, process-centric approaches and interorganizational decision support to research on new technology and academic endeavors in this space will continue to provide insights on how we process big data to enhance decision making.

Cognitive computing is the next evolution of the connection between AI and KM. In future posts, I will examine the industries where cognitive computing is a disruptive force. This disruption will lead to dramatic changes in how people work in those industries.

Mar 31, 2017
 

There are approximately 22,000 new cases of lung cancer each year, with an overall 5-year survival rate of only ~18 percent (American Cancer Society). The per-patient economic burden of lung cancer is estimated at $46,000 (Lung Cancer journal). Treatment with drugs and chemotherapy is effective for some patients, but more effective treatment has been hampered by clinicians' inability to better target treatments to patients. It has been determined that Big Data holds the key to giving clinicians the ability to develop more effective, patient-centered cancer treatments.

Analysis of Big Data may also improve drug development by allowing researchers to better target novel treatments to patient populations. Giving clinicians the ability to harness Big Data repositories to develop better-targeted lung cancer treatments, and to enhance the decision-making process that improves patient care, can only be accomplished through cognitive computing. However, having a source or sources of data available to "mine" for answers to improve lung cancer treatments is a challenge!

There is also a lack of available applications that can take advantage of Big Data repositories to recognize patterns of knowledge and extract that knowledge in any meaningful way. The extracted knowledge must be presented in a way that researchers can use to improve patient-centric diagnosis and develop patient-centric treatments. The ability to use cognitive computing and KM methods to uncover knowledge from large cancer repositories will allow researchers in hospitals, universities, and pharmaceutical companies to use Big Data to identify anomalies, discover new treatment combinations and enhance diagnostic decision making.

Content Curation

An important aspect of cognitive computing and Big Data is the ability to perform a measure of content curation. The lung cancer Big Data environment to be analyzed should include both structured and unstructured data (unstructured being documents, spreadsheets, images, video, etc.). Before the data can be ingested from the Big Data resource, it will need to be prepared. This preparation includes applying Information Architecture (IA) to the unstructured data within the repository; understanding the organization and classification schemes of the data, both structured and unstructured, is essential to unifying it under one consistent ontology.
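As a minimal sketch of what one curation step might look like, the code below tags unstructured documents against a small, invented lung-cancer taxonomy so they can be unified under a single classification scheme; the taxonomy, file names, and note text are hypothetical.

```python
# Hypothetical taxonomy used to bring unstructured notes under one scheme.
taxonomy = {
    "diagnosis": ["biopsy", "ct scan", "staging", "nodule"],
    "treatment": ["chemotherapy", "radiation", "immunotherapy", "resection"],
    "genomics": ["egfr", "alk", "mutation", "sequencing"],
}

# Invented documents standing in for a repository of clinical notes.
documents = {
    "note_001.txt": "CT scan shows a 2 cm nodule; biopsy recommended for staging.",
    "note_002.txt": "Patient started on immunotherapy after EGFR mutation was confirmed.",
}

def tag_document(text: str) -> list[str]:
    """Return the taxonomy categories whose terms appear in the text."""
    lowered = text.lower()
    return [category for category, terms in taxonomy.items()
            if any(term in lowered for term in terms)]

for name, text in documents.items():
    print(name, "->", tag_document(text))
```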

Are We Up for the Challenge?

Even if a Big Data source were available and content curation were successful, the vast amounts of patient data are governed by HIPAA, which makes it difficult for researchers to gain access to clinical and genomic data shared across multiple institutions, including research institutions and hospitals. According to Dr. Tom Coburn, in his January 14 Wall Street Journal article "A Cancer 'Moonshot' Needs Big Data," gaining access to a big data repository inclusive of patient-specific data is essential to offering patient-centered cancer treatments. Besides the technology challenges, there are data and regulatory challenges. I'm sure many of these challenges are being addressed; thus far, however, there have been no solutions. Are we up for the challenge? Big Data analysis could help tell us which cancer patients are most likely to be cured with standard approaches, and which need more aggressive treatment and monitoring. It is time we solve these challenges and make the moonshot a reality!

Feb 29, 2016
 

On January 12, 2016, in his State of the Union address, President Obama called for America to become "the country that cures cancer once and for all" as he introduced the "Moonshot" initiative, to be guided by Vice President Joe Biden.

Dr. Tom Coburn, former Republican Senator from Oklahoma and three-time cancer survivor, indicates in his January 14 Wall Street Journal article "A Cancer 'Moonshot' Needs Big Data" that "harnessing that information ("big data") would allow us to personalize prevention and treatment based on the genetic characteristics of a patient's tumor, family history and personal preferences, while minimizing unwanted side effects."

On February 5, 2016, in the CNN Global Public Square segment "Big data could be a health care game-changer," author and doctor David Agus told Fareed Zakaria how using big data and examining thousands of cases might increase how long we live and our quality of life.

At this time in our history, with the continuing electronic capture of patient information from intake to discharge, the opportunity to cure cancer could not be brighter. The Obama administration's 2010 initiative to capture electronic health records has created the opportunity to improve patient care, increase patient participation, improve diagnostics and patient outcomes, improve care coordination, and create practice efficiencies and cost savings.

The electronic capture of patient information has created medical big data repositories. One such repository is the American College of Surgeons/American Cancer Society's National Cancer Database (NCDB). Resources such as these will benefit from knowledge management and information architecture techniques that identify and unlock the knowledge patterns contained within them. In several of my blog posts dating back to January 2013, from Contextual Intelligence, KM and Big Data to a chapter on KM and Big Data in my upcoming book KM in Practice, I have written about the advantages of applying KM to big data. I believe that, executed the right way, KM powered by information architecture will provide the essential ingredient when applied to big data. This will enable researchers to discover better treatments and possible cures for many diseases, including cancer, and we will realize the dream presented by the Moonshot initiative!

Sep 19, 2015
 

This all started during a conversation with a colleague (Baron Murdock of GreenBox Ventures, LLC) in which he mentioned the term Contextual Intelligence. Because we were talking about knowledge management and big data, I believed I understood what he was talking about; however, I had never heard the term before. Not long after our meeting I began to do a little research on the concept of contextual intelligence.

What is Contextual Intelligence?

It was during my initial research (a series of internet search queries) that I began to understand that the term Contextual Intelligence is not new. As a matter of fact, it has been used in graduate business schools since the 1980s.

Contextual Intelligence is, according to Matthew Kutz, "a leadership competency based on empirical research that integrates concepts of diagnosing context and exercising knowledge." Tarun Khanna states that "understanding the limits of our knowledge is at the heart of contextual intelligence," and Dr. Charles Brown states that "Contextual intelligence is the practical application of knowledge and information to real-world situations. This is an external, interactive process that involves both adapting to and modifying an environment to accomplish a desired goal; as well as recognizing when adaptation is not a viable option. This is the ability that is most closely associated with wisdom and practical knowledge."

While there are several positions on what contextual intelligence is, I align most closely with Dr. Brown's. When it comes to knowledge management (KM) and contextual intelligence, context matters! Understanding that contextual intelligence is linked to our tacit knowledge, I immediately wondered about the connection between KM and Contextual Intelligence. Knowledge management, among other things, is concerned with the ability to understand knowledge and adapt it across a variety of environments (cultures) different from the one in which it originated.

To enable the flow of knowledge to the right person at the right time and in the right context, it is essential to understand the context of that knowledge. Information Architecture (IA) is the backbone of delivering knowledge in the right context to users of Knowledge Management Systems (KMS). IA focuses on organizing, structuring, and labeling content (information and knowledge); it enables users to find relevant content in the right context, understand how content fits together, and connect questions to answers and people to experts. It is the incorporation of IA that gives knowledge its context.

Understanding the context of knowledge consists of the following (a minimal code sketch after this list shows one way these attributes might be captured):

  • Understanding the intent of the knowledge
  • Understanding the cultural and environmental influences on the knowledge
  • Understanding the role (or who) the knowledge is intended to be used by
  • Understanding the relevancy of the knowledge (The knowledge could only be valid for a specific period of time)
  • Understanding the origin (lineage) of the knowledge
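
A minimal data-model sketch of how those attributes might be captured as metadata alongside a knowledge asset follows; the class, field names, and example values are hypothetical rather than taken from any particular KMS.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class KnowledgeContext:
    """Hypothetical context metadata attached to a single knowledge asset."""
    intent: str                          # what the knowledge is meant to accomplish
    cultural_notes: list[str]            # cultural / environmental influences on it
    intended_roles: list[str]            # who the knowledge is intended to be used by
    valid_from: date                     # start of the period in which it is relevant
    valid_until: Optional[date]          # end of that period (None = no expiry)
    lineage: list[str] = field(default_factory=list)  # origin / provenance trail

    def is_current(self, on: date) -> bool:
        """True if the knowledge is still relevant on the given date."""
        return self.valid_from <= on and (self.valid_until is None or on <= self.valid_until)

# Illustrative example: a regional sales playbook with a limited shelf life.
ctx = KnowledgeContext(
    intent="Guide discount negotiations",
    cultural_notes=["EMEA market norms"],
    intended_roles=["Account Manager"],
    valid_from=date(2015, 1, 1),
    valid_until=date(2016, 12, 31),
    lineage=["EMEA sales team retrospective, Q4 2014"],
)
print(ctx.is_current(date(2015, 9, 19)))   # True: inside the relevancy window
```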

Big Data

Without context, data is meaningless; this applies to both structured and unstructured data, and Big Data resources contain a proliferation of both. Knowledge management techniques applied to big data must therefore understand the context of the data in order to deliver pertinent knowledge to users. Knowledge Management has the ability to integrate and leverage information from multiple perspectives, and Big Data is uniquely positioned to take advantage of KM processes and procedures. These processes and procedures enable KM to provide a rich structure for making decisions on a multitude and variety of data.

We know that context matters, especially when it comes to what we know (our knowledge). Being able to adapt our knowledge for others is at the heart of communicating successfully, sharing what we know and fueling innovation.

Obtaining contextual intelligence for your organization consists of leveraging or hiring people who are fluent in more than one culture, partnering with local companies, developing localized talent and enabling your employees to do more field work to immerse themselves in other cultures (tuning in to cultural and environmental differences).

A couple of great resources to read on Contextual Intelligence are “Contextual Intelligence” by Tarun Khanna from the September 2014 issue of Harvard Business Review and “Understanding Contextual Intelligence: a critical competency for today’s leaders” by Matthew R Kutz and Anita Bamford-Wade from the July 2013 Emergent Publications, Vol. 15 No. 3.

Jul 28, 2015
 

Big Knowledge! Knowledge Management and Big Data – Excerpt from Chapter 14: Knowledge Management in Practice:

A goal of knowledge management is to capture and share knowledge wherever it resides in the organization. Leveraging the corporation's collective know-how improves decision making and innovation where they are needed. The proliferation of data, information and knowledge has created a phenomenon called "Big Data". Knowledge Management applied to Big Data enables the type of analysis that uncovers the complete picture of the organization and acts as a catalyst for driving decisions. To leverage an organization's Big Data, it must be broken down into smaller, more manageable parts; each subset can be analyzed succinctly and then regrouped with the others to produce "big picture" results.
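Here is a minimal sketch of that "break it down, analyze, regroup" idea, using plain Python over an invented event stream; in practice the subsets would come from a distributed store and the analysis would be far richer.

```python
from collections import Counter
from itertools import islice

def chunks(iterable, size):
    """Yield successive fixed-size subsets from any iterable."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

# Invented event stream of (region, revenue) records standing in for Big Data.
events = [("east", 120), ("west", 80), ("east", 200), ("south", 50)] * 25_000

partial_results = []
for batch in chunks(events, 10_000):           # analyze manageable subsets...
    totals = Counter()
    for region, revenue in batch:
        totals[region] += revenue
    partial_results.append(totals)

big_picture = sum(partial_results, Counter())  # ...then regroup into the whole
print(big_picture.most_common())
```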
Volume, Velocity, and Variety are all aspects that define Big Data:

  • Volume: The proliferation of all types of data, spanning many terabytes of information.
  • Velocity: The ability to process data quickly.
  • Variety: Refers to the different types of data (structured and unstructured data such as data in databases, content in Content Management and Knowledge Management systems/repositories, collaborative environments, blogs, wikis, sensor data, audio, video, click streams, log files, etc.).

Variety is the component of Big Data in which KM will play a major role in driving decisions. Enterprises need to be able to combine their analyses to include information from both structured databases and unstructured content.

Data, Information and Knowledge

Since the focus here is on leveraging knowledge management techniques to extract knowledge from Big Data, it is important to understand the difference between data, information and knowledge. Data I often describe as numbers and words representing a discrete set of facts. Information is an organized set of data (it puts context around data), resulting in an artifact such as a stock report, news article, etc. Knowledge, on the other hand, emerges when the receiver of information applies his or her analysis (aided by experience and training) to form judgments and make decisions. Erickson and Rothberg indicate that information and data only reveal their full value when insights are drawn from them (knowledge). Big Data becomes useful when it enhances decision making, which in turn happens only when analytical techniques and an element of human interaction are applied (Erickson and Rothberg, 2014).

In a February 26, 2014 KM World article titled "Big Data Delivering Big Knowledge," Stefan Andreasen, Chief Technology Officer at Kapow Software, indicates that "To gain a 360 degree view of their ecosystem, organizations should also monitor user-generated data, public data, competitor data and partner data to discover critical information about their business, customers and competitive landscape" (Andreasen, 2014). User-generated, public, competitor and partner data provide the variety KM needs to analyze, and it is these types of data that will be examined more closely below.

User-generated data
Customers are sharing information about their experience with products and services, what they like and don't like, how products compare to the competition, and many other insights that can be used for identifying new sales opportunities, planning campaigns, designing targeted promotions or guiding product and service development. This information is available in social media, blogs, customer reviews and discussions on user forums. Combining it with data contained in call center records and information from other back-office systems can help identify trends, make better predictions and improve the way organizations engage with customers (Andreasen, 2014).

Public data
Public information made available by federal, state and local agencies can be used to support business operations in human resources, compliance, financial planning, etc. Information from courthouse websites and other state portals can be used for background checks and professional license verifications. Other use cases include monitoring compliance regulation requirements, bill and legislation tracking, or in healthcare obtaining data on Medicare laws and which drugs are allowed per state (Andreasen, 2014).

Competitor data
Information about competitors is now widely available by monitoring their websites, online prices, press releases, the events they participate in, and their open positions or new hires. This data allows an organization to better evaluate the competition, monitor their strategic moves, identify unique market opportunities and take action accordingly. A retailer, for example, can correlate this data with order transaction history and inventory levels to design and implement a more dynamic pricing strategy to win over the competition and grow the business (Andreasen, 2014).

Partner data
Across your ecosystem, there are daily interactions with partners, suppliers, vendors and distributors. As part of these interactions, organizations exchange data about products, prices, payments, commissions, shipments and other data sets that are critical to the business. Beyond the data exchange, intelligence can be gleaned by identifying inefficiencies, delays, gaps and other insights that can help improve and streamline partner interactions (Andreasen, 2014).
Combing through the various sources of user-generated, public, competitor and partner data will require KM analytics (data analysis, statistics, and trend analysis) and content synthesis technology (technology that categorizes, analyzes, combines, extracts details from, and re-assesses content with the aim of developing new meanings and solutions).
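As a hedged illustration of that synthesis step, the sketch below normalizes items from the four source types into one shape and counts topic mentions across them; the feeds, keywords, and category labels are invented.

```python
from collections import Counter

# Invented snippets standing in for the four source types described above.
user_generated  = ["Love the new battery life", "Checkout flow is confusing"]
public_data     = ["New state regulation on battery recycling announced"]
competitor_data = ["Competitor X cuts prices on battery line"]
partner_data    = ["Distributor reports shipment delays in the northeast"]

# Hypothetical keyword-to-category map used to categorize content.
topics = {
    "battery": "product",
    "checkout": "customer experience",
    "regulation": "compliance",
    "prices": "pricing",
    "shipment": "logistics",
}

def categorize(text: str) -> list[str]:
    """Return the category labels whose keywords appear in the text."""
    lowered = text.lower()
    return [label for keyword, label in topics.items() if keyword in lowered]

mentions = Counter()
for source in (user_generated, public_data, competitor_data, partner_data):
    for item in source:
        mentions.update(categorize(item))

print(mentions.most_common())   # which themes surface most often across sources
```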

Applying KM to Big Data
Knowledge Management has the ability to integrate and leverage information from multiple perspectives. Big Data is uniquely positioned to take advantage of KM processes and procedures.

These processes and procedures enable KM to provide a rich structure for making decisions on a multitude and variety of data. The March 2012 issue of KM World pointed out that "organizations do not make decisions just based on one factor, such as revenue, employee salaries or interest rates for commercial loans. The total picture is what should drive decisions." KM enables organizations to take the total picture Big Data provides and, along with tools that supply the processing speed to break the data into subsets for analysis, empowers them to make decisions on the vast amount and variety of data and information available.

The emerging challenge for organizations is to derive meaningful insights from available data and re-apply them intelligently. Knowledge management plays a crucial role in efficiently managing this data and delivering it to end users to aid the decision-making process. This involves collecting data from direct and indirect, structured and unstructured sources, then analyzing and synthesizing it to derive meaningful information and intelligence. Once this is achieved, the result must be converted into a useful knowledge base, stored, and finally delivered to end users.
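A minimal pipeline sketch of that collect, synthesize, store, and deliver flow appears below; each stage is a stub standing in for real systems (crawlers, analytics engines, a knowledge base, a delivery channel), and the feeds and query are invented.

```python
def collect(sources):
    """Gather raw items from structured and unstructured sources."""
    return [item for source in sources for item in source()]

def synthesize(raw_items):
    """Turn raw items into knowledge entries (here: trivially deduplicated)."""
    return sorted(set(raw_items))

def store(entries, knowledge_base):
    """Persist entries into the knowledge base (here: an in-memory list)."""
    knowledge_base.extend(entries)
    return knowledge_base

def deliver(knowledge_base, query):
    """Return the entries relevant to an end user's query."""
    return [entry for entry in knowledge_base if query.lower() in entry.lower()]

# Invented feeds standing in for direct/indirect, structured/unstructured sources.
def crm_feed():
    return ["Customer churn rising in Q3", "Renewal win in healthcare vertical"]

def forum_feed():
    return ["Users report confusion with the new pricing page"]

kb = store(synthesize(collect([crm_feed, forum_feed])), knowledge_base=[])
print(deliver(kb, "pricing"))
```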

Mar 1, 2013
 
Big Data in knowledge management

In my last post regarding Big Data I posed two questions:

  • What is your experience with Big Data?
  • Are you, like most of us, still determining what Big Data really means to you and your organization?

As it pertains to Knowledge Management (KM) and Big Data within organizations, the advancement of search technologies for Big Data is making an impact. In KM World's list of 100 companies that matter in KM, the editors point out that Search Technologies' ability to implement, service, and manage Big Data environments is the key reason for its inclusion. The "findability" of information and knowledge within large amounts of unstructured data contributes to the ability to disseminate and reuse the knowledge of the enterprise.

Besides Search Technologies, there are several companies offering KM solutions to address Big Data. Some of these companies include:

  • CACI, which offers solutions and services to go from data to decisions
  • Autonomy (an HP company), which offers KM solutions that mine unstructured data, tag it and, where appropriate, make it available to the knowledge base
  • IBM, which offers a Big Data platform that includes KM to address Big Data's vast amount of unstructured data.

As organizations learn more about Big Data and how to manage, use and reuse the vast amounts of information and knowledge it provides, more software and consulting companies will offer the products and solutions organizations are looking for.

Where is Big Data going?

A recent Gartner Report stated that “Many global organisations have failed to implement a data management strategy but will have to as IT leaders need to support big data volumes, velocity, and variety,” as well as “decisions from big data projects for decision support, and insights in the context of their role and job function, will expand from 28 per cent of users in 2011 to 50 per cent in 2014.”

Big Data will be the catalyst for an increase in the need for Data, Information and Knowledge Management expertise. We will need Big Knowledge Management to address Big Data!

Jan 31, 2013
 
Knowledge Management and Big Data

Big Data has been buzzing for some time now. Many organizations are formulating their approach to managing Big Data and aligning it with their strategic objectives. Let's first take a look at what Big Data is: Big Data refers to data that has grown so large that it is difficult to manage.

Big Data spans four dimensions: Volume, Velocity, Variety, and Veracity.

  • Volume: The proliferation of all types of data, spanning many terabytes of information.
  • Velocity: The ability to process data quickly.
  • Variety: Refers to the different types of data (structured and unstructured data such as text, sensor data, audio, video, click streams, log files, etc.)
  • Veracity: Refers to the uncertainty and trustworthiness of the data.

See what IBM is saying about Big Data.

Knowledge Management has the ability to integrate and leverage information from multiple perspectives. Big Data is uniquely positioned to take advantage of KM processes and procedures. These processes and procedures enable KM to provide a rich structure for making decisions on a multitude and variety of data. The March 2012 issue of KM World pointed out that "organizations do not make decisions just based on one factor, such as revenue, employee salaries or interest rates for commercial loans. The total picture is what should drive decisions." KM enables organizations to take the total picture Big Data provides and, along with tools that supply the processing speed to break the data into subsets for analysis, empowers them to make decisions on the vast amount and variety of data and information being provided.

What is your experience with Big Data? Are you, like most of us, still determining what Big Data really means to you and your organization? If these and other questions about Big Data are on your mind, I want to hear from you!