Oct 31, 2017

Information Architecture is an enabler for Big Data Analytics. You may be asking why I would say this, or how IA enables Big Data Analytics. We need to remember that Big Data includes all data (i.e., unstructured, semi-structured, and structured). The primary characteristics of Big Data (Volume, Velocity, and Variety) challenge your existing architecture and how you will effectively, efficiently, and economically process data to achieve operational efficiencies.

In order to derive the maximum benefit from Big Data, organizations must be able to handle the rapid rate of delivery and extraction of huge volumes of data with varying data types. This data can then be integrated with the organization's enterprise data and analyzed. Information Architecture provides the methods and tools for organizing, labeling, building relationships between (through associations), and describing (through metadata) your unstructured content, adding this source to your overall pool of Big Data. In addition, information architecture enables rapid exploration and analysis of any combination of structured, semi-structured, and unstructured sources. Big Data requires information architecture to exploit the relationships and synergies between your data. This infrastructure enables organizations to make decisions utilizing the full spectrum of their Big Data sources.
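To make the organizing, labeling, and metadata steps concrete, here is a minimal sketch of applying a metadata schema to one piece of unstructured content so it can join the broader pool of Big Data. The schema fields and helper function are illustrative assumptions, not any specific tool's API:

# Illustrative sketch: describing unstructured content with IA metadata.
# Field names are assumptions for illustration, not a product's schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ContentItem:
    """One piece of unstructured content described by IA metadata."""
    title: str
    content_type: str            # e.g., "document", "image", "video"
    source: str                  # originating system or repository
    created: date
    subjects: list = field(default_factory=list)    # taxonomy labels
    related_to: list = field(default_factory=list)  # associations to other items

def tag(item: ContentItem, *subjects: str) -> None:
    """Label an item with taxonomy terms so it is findable alongside structured data."""
    item.subjects.extend(s.lower() for s in subjects)

memo = ContentItem("Q3 supplier review", "document", "shared-drive", date(2017, 9, 30))
tag(memo, "Procurement", "Supplier Risk")
memo.related_to.append("contract-2017-044")  # association to a related record
print(memo.subjects)  # ['procurement', 'supplier risk']

Once content carries labels and associations like these, it can be indexed and analyzed next to structured enterprise data.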

Big Data Components by Information Architecture Element

Content Consumption
  • Volume: Provides an understanding of the universe of relevant content through performing a content audit. This contributes directly to the volume of available content.
  • Velocity: Directly contributes to the speed at which content is accessed by providing the initial volume of available content.
  • Variety: Identifies the initial variety of content that will be part of the organization's Big Data resources.

Content Generation
  • Volume: Fills gaps identified in the content audit by gathering the requirements for content creation/generation, which contributes directly to increasing the amount of content available in the organization's Big Data resources.
  • Velocity: Directly contributes to the speed at which content is accessed because volumes are increasing.
  • Variety: Contributes to the creation of a variety of content (documents, spreadsheets, images, video, voice) to fill identified gaps.

Content Organization
  • Volume: Provides business rules to identify relationships between content and creates a metadata schema to assign content characteristics to all content. This contributes to increasing the volume of available data and, in some ways, leverages existing data to assign metadata values.
  • Velocity: Directly contributes to improving the speed at which content is accessed by applying metadata, which in turn gives context to the content.
  • Variety: The variety of Big Data will often drive the relationships and organization among the various types of content.

Content Access
  • Volume: Content Access is about search and establishing the standard types of search (i.e., keyword, guided, and faceted). This contributes to the volume of data by establishing search parameters, often additional metadata fields and values, to enhance search.
  • Velocity: Contributes to the ability to access content and to the speed and efficiency with which content is accessed.
  • Variety: Contributes to how the variety of content is accessed. The variety of Big Data will often drive the search parameters used to access the various types of content.

Content Governance
  • Volume: Establishes accountability for the accuracy, consistency, and timeliness of content, content relationships, metadata, and taxonomy within areas of the enterprise and the applications being used. Content Governance will often "prune" the volume of content available in the organization's Big Data resources by allowing access only to pertinent/relevant content, while either deleting or archiving other content.
  • Velocity: When the volume of available content is trimmed through Content Governance, velocity improves because a smaller, more pertinent universe of content is made available.
  • Variety: When the volume of available content is trimmed through Content Governance, the variety of available content may be affected as well.

Content Quality of Service
  • Volume: Focuses on the security, availability, scalability, and usefulness of content, and improves the overall quality of the volume of content in the organization's Big Data resources by: defending content from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording, or destruction; eliminating or minimizing disruptions from planned system downtime; making sure that accessed content is from and/or based on the authoritative or trusted source, reviewed on a regular basis (per the specific governance policies), modified when needed, and archived when it becomes obsolete; enabling content to behave the same no matter what application/tool implements it, and to be flexible enough to be used at the enterprise level as well as the local level without changing its meaning, intent of use, and/or function; and tailoring content to its specific audience so that it serves a distinct purpose, is helpful to its audience, and is practical.
  • Velocity: Eliminates or minimizes delays and latency in your content and business processes by speeding the ability to analyze and make decisions, directly affecting the content's velocity.
  • Variety: Improves the overall quality of the variety of content in the organization's Big Data resources through security, availability, scalability, and usefulness of content.
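As one hedged illustration of the Content Access element above, the sketch below runs a keyword search over metadata-tagged items and then narrows the results by facets. The item structure and functions are assumptions for illustration, not a particular search engine's API:

# Illustrative sketch of keyword plus faceted search over tagged content
# (see the Content Access element above). The item structure is assumed.
items = [
    {"title": "Supplier risk memo", "type": "document", "year": 2017,
     "subjects": ["procurement", "risk"]},
    {"title": "Plant walkthrough", "type": "video", "year": 2016,
     "subjects": ["operations"]},
    {"title": "Risk dashboard", "type": "spreadsheet", "year": 2017,
     "subjects": ["risk", "finance"]},
]

def facet_matches(item_value, wanted):
    """Exact match for scalar facets; membership test for list-valued facets."""
    if isinstance(item_value, list):
        return wanted in item_value
    return item_value == wanted

def search(keyword=None, **facets):
    """Keyword match on titles, then narrow by each requested facet."""
    results = [i for i in items
               if keyword is None or keyword.lower() in i["title"].lower()]
    for name, wanted in facets.items():
        results = [i for i in results if facet_matches(i.get(name), wanted)]
    return results

print(search(keyword="risk", year=2017))  # the two 2017 items mentioning "risk"

Guided search would layer suggested facet values on top of this same filtering step.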

The breakdown above aligns key information architecture elements to the primary components of Big Data. This alignment facilitates a consistent structure for effectively applying analytics to your pool of Big Data. The Information Architecture elements include: Content Consumption, Content Generation, Content Organization, Content Access, Content Governance, and Content Quality of Service. It is this framework that aligns all of your data so that business value can be gained from your Big Data resources.

Note: This table originally appeared in the book Knowledge Management in Practice (ISBN: 978-1-4665-6252-3) by Anthony J. Rhem.

Jun 28, 2017

This is the second of a three (3) part post on the connection between Artificial Intelligence (AI) and Knowledge Management (KM). In this post I examine the industries that are or soon will be disrupted by AI and KM, specifically in the form of Cognitive Computing. Before we look ahead, let's take a look back. When I first became involved in AI (the late 80's), its hype and promise became too much to live up to (a typical phenomenon in software; see Hype Cycle) and its promise faded into the background. Fast forward to 2010, and AI was again becoming the "next big thing". AI had already made its presence felt in the automobile industry (robotics), as well as in decision-making systems in medicine, logistics, and manufacturing (expert systems and neural networks). Now AI in the form of Cognitive Computing is making its mark on several industries. A recent CB Insights Newsletter noted that the US Bureau of Labor Statistics indicates 10.5 million jobs are at risk of automation. The rapid adoption of better hardware processing capabilities, which facilitates the use of artificial intelligence algorithms on big data, is driving this change in both blue-collar and white-collar jobs.

At a recent Harvard University commencement address, Facebook Chief Executive Mark Zuckerberg stated, "Our generation will have to deal with tens of millions of jobs replaced by automation like self-driving cars and trucks." Bill Gates, the founder of Microsoft and chairman of the Bill and Melinda Gates Foundation, had this to say in a recent MarketWatch story: "In that movie, old Benjamin Braddock (Dustin Hoffman) was given this very famous piece of advice: 'I just want to say one word to you. Just one word … Plastics.'" And today? That word would likely be "robots", and "artificial intelligence" would have a huge impact.

Although there are many industries where Cognitive Computing will disrupt the way business is conducted, including the economics around job loss and future job creation, I have chosen to look at three (3) industries: Legal Services, the Automotive Industry, and Healthcare.

Legal Services

Knowledge Management (KM) is becoming more prevalent within law firms as well as legal departments as the practice of KM has matured. AI technologies are also making their way into the practice of law. The ability to reuse internally developed knowledge assets such as precedents, letters, research findings, and case history information is vital to a law firm's success. Paralegals currently play a critical role in assisting attorneys with discovery. With the use of AI systems, attorneys will be able to "mine" the large volumes of documents (i.e., precedents, research findings, and case history information) located in various repositories more accurately and efficiently, to aid in decision making and successful client outcomes. This ability will reduce the number of paralegals and attorneys currently needed to perform these tasks.

Cognitive computing will enable computers to learn how to complete tasks traditionally done by humans. The focus of cognitive computing is to look for patterns in data, carry out tests to evaluate the data, and find results. This will provide lawyers with capabilities similar to those it provides doctors: an in-depth look into the data that yields insights that cannot be gained otherwise. According to the 2015 Altman Weil Law Firms in Transition survey, 35% of law firm leaders indicate cognitive computing will replace first-year associates in the next ten (10) years, while 20% indicate it will replace second- and third-year attorneys as well. In addition, 50% of law firm leaders indicate cognitive computing will replace paralegals altogether. Cognitive computing's capability to mine big data is the essential reason lower-level research jobs will be replaced by computers. This situation is not limited to the legal profession.
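To give a feel for the kind of pattern matching involved, here is a hedged sketch of retrieving the most relevant precedent for a legal question using TF-IDF similarity, one simple technique chosen purely for illustration (the post does not prescribe a specific algorithm, and the corpus below is invented placeholder text):

# Hedged sketch: ranking a firm's documents against a question with TF-IDF
# similarity. The precedent texts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

precedents = [
    "Non-compete clause held unenforceable due to overbroad geographic scope.",
    "Breach of software licensing agreement; damages limited by contract cap.",
    "Employment dispute resolved after the arbitration clause was upheld.",
]
question = ["Is a nationwide non-compete agreement enforceable?"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(precedents)  # index the corpus
query_vector = vectorizer.transform(question)       # vectorize the question

scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()
print(f"Closest precedent ({scores[best]:.2f}): {precedents[best]}")

Production systems layer entity extraction, citation graphs, and learned rankers on top of this basic retrieve-and-rank pattern.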

Automotive Industry

Autonomous Vehicles and Vehicle Insurance

Autonomous vehicles, also known as driverless cars, robot cars (here we go with robots again!), or self-driving cars, can guide themselves without human intervention. This kind of vehicle is paving the way for future cognitive systems in which computers take over the art of driving. Autonomous vehicles are positioned to disrupt the insurance industry. Let's take a look at the coverages that are part of a typical vehicle insurance policy.

Vehicle insurance typically addresses six (6) coverages. These coverages include (1) Bodily Injury Liability, which typically applies to injuries that you, the designated driver or policyholder, cause to someone else; (2) Medical Payments or Personal Injury Protection (PIP), which covers the treatment of injuries to the driver and passengers of the policyholder's vehicle; (3) Property Damage Liability, which covers damage you (or someone driving the car with your permission) may cause to someone else's property; (4) Collision, which covers damage to your car resulting from a collision with another car, an object, or even a pothole; (5) Comprehensive, which covers you for loss due to theft or damage caused by something other than a collision with another car or object, such as fire, falling objects, etc.; and (6) Uninsured and Underinsured Motorist Coverage, which reimburses you, a member of your family, or a designated driver if one of you is hit by an uninsured or hit-and-run driver. The way these coverages are applied (or not) to a vehicle policy will be disrupted by the use of autonomous vehicles.

According to a 2016 Forbes article by Jeff McMahon, about 90 percent of car accidents are caused by human error, and it is estimated that autonomous vehicles will significantly reduce the number of accidents. This will significantly disrupt the insurance revenue model, affecting all six (6) types of coverage identified above. When the risk of accidents drops, the demand for insurance will potentially drop as well (though this will not happen unless states no longer require insurance that covers accidents). So there is no doubt that auto insurance companies will have to rethink the types of coverage they offer and the language of their policies.

Some Unintended? Side Effects

The autonomous vehicle, with its multiple sensors, has the potential to eliminate accidents caused by distraction and drunk driving. This will disrupt the vehicle repair industry: with crashes largely eliminated, collision repair shops will lose a huge portion of their business. Indirectly, the decreased demand for new auto parts will hurt vehicle parts manufacturers. According to the U.S. Department of Transportation, in 2010 approximately 24 million vehicles were damaged in accidents, at an economic cost of $76 billion in property damage. The loss of this revenue will put a strain on these manufacturers.

Healthcare

The healthcare delivery process presents a consistent flow of data, information, and knowledge. Its areas include Patient Intake, Data Collection, Decision Support, Diagnosis and Treatment, and Patient Closeout. The areas of the healthcare delivery process that will be disrupted by cognitive computing include Patient Intake, Data Collection, and Diagnosis and Treatment.

Patient Intake and Data Collection: The patient intake process is the first opportunity to capture knowledge about the patient and his/her condition at the time of arrival at the healthcare facility. Cognitive computing, executed through natural language processing (NLP) tools, will capture medical insurance information, method of payment, medical history, and current vital condition. All of this data is transitioned to the facility's database, presenting an opportunity for the data, information, and knowledge about the patient to be automatically shared. NLP tools will limit or eliminate the need for a receptionist/admin to initially capture patient information.
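As a hedged illustration of that intake step, the sketch below pulls named entities out of a free-text intake note with spaCy's small general-purpose English model (installed via pip install spacy and python -m spacy download en_core_web_sm). A real intake system would use clinically trained models and validated mappings; the note text here is invented:

# Illustrative sketch: extracting structured fields from a free-text intake
# note with off-the-shelf NLP. The note is invented placeholder text.
import spacy

nlp = spacy.load("en_core_web_sm")  # small general-purpose English model

note = ("Jane Doe, age 54, arrived on March 3 complaining of chest pain. "
        "Insurance carrier: Acme Health.")

doc = nlp(note)
record = {}
for ent in doc.ents:
    # Group recognized entities by type (PERSON, DATE, ORG, ...).
    record.setdefault(ent.label_, []).append(ent.text)

print(record)  # e.g., {'PERSON': ['Jane Doe'], 'DATE': [...], 'ORG': [...]}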

Diagnosis and Treatment: Making a diagnosis is a very complex process that includes cognitive tasks involving both logical reasoning and pattern recognition. Artificial Neural Networks that incorporate deep-learning capabilities are being developed to mine health-related big data repositories. This innovation is providing clinicians and researchers with effective tools for improving and personalizing patient treatment options. It has been established that big-data analysis could help identify which patients are most likely to respond to specific therapeutic approaches versus others. Analysis of such data may also improve drug development by allowing researchers to better target novel treatments to patient populations.
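For intuition only, here is a minimal sketch of the kind of model described above: a small neural network trained to predict treatment response from clinical features. Everything here, including the data, is synthetic and invented; real systems use far richer data and deep-learning frameworks:

# Minimal sketch of a neural network predicting treatment response.
# The patient features and "responds" labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # 200 patients, 5 clinical features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic "responds to therapy" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")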

The clinical trials that pharmaceutical companies rely on for FDA approval and drug labeling capture too little of the information patients and physicians need. The trials enroll only a small percentage of patients and can take years and tens of millions of dollars to finish; many never enroll enough patients to get off the ground. Cognitive computing will help physicians understand which patients are most likely to respond to standard approaches and which need more aggressive treatment and monitoring. Enabling cognitive computing to harness the genetic and clinical data routinely generated by hospitals and physicians would also accelerate drug development by rapidly matching targeted treatments sitting in companies' research pipelines with the patients most likely to respond. In addition, the sheer number of clinical research studies and medical trials being published on an ongoing basis makes it difficult to analyze the resulting big data without cognitive computing tools (for more information see the Forbes article: IBM and Microsoft are Disrupting The Healthcare Industry with Cognitive Computing).

Where do we go from here!

AI, KM, and Cognitive Computing will continue to evolve, and more areas of disruption are coming. So the questions are: how can we address the loss of jobs; how can we prepare for the new jobs; and how must business and government evolve to meet the challenges that cognitive computing presents? It is clear that we must retrain/retool the current workforce and at the same time equip our vocational schools, trade schools, colleges, and universities with the right tools and experienced instructors/professors to teach the concepts and applications of AI, KM, and Cognitive Computing. Businesses must continue to innovate; innovating in the same old way will cause a business to become extinct. I'm talking about innovating by bringing a diversification of thought and experiences, including cultural ones, into the innovation community, and creating your innovation intersection (for more on finding your innovation intersection, read The Medici Effect by Frans Johansson). Only by innovating differently will your business not only survive but thrive in this new world, where interacting with computers (yes, robots too!) will be an everyday occurrence in life!

Mar 31, 2017

There are approximately 22,000 new cases of lung cancer each year, with an overall 5-year survival rate of only ~18 percent (American Cancer Society). The economic burden of lung cancer, based on per-patient cost alone, is estimated at $46,000 per patient (Lung Cancer journal). Treatment efforts using drugs and chemotherapy are effective for some; however, more effective treatment has been hampered by clinicians' inability to better target treatments to patients. It has been determined that Big Data holds the key to providing clinicians with the ability to develop more effective patient-centered cancer treatments.

Analysis of Big Data may also improve drug development by allowing researchers to better target novel treatments to patient populations. Giving clinicians the ability to harness Big Data repositories to develop better-targeted lung cancer treatments, and to enhance the decision-making process to improve patient care, can only be accomplished through the use of cognitive computing. However, having a source or sources of data available to "mine" for answers to improve lung cancer treatments is a challenge!

There is also a lack of available applications that can take advantage of Big Data repositories to recognize patterns of knowledge and extract that knowledge in a meaningful way. The extracted knowledge must be presented in a form researchers can use to improve patient-centric diagnosis and the development of patient-centric treatments. The ability to use cognitive computing and KM methods to uncover knowledge from large cancer repositories will provide researchers in hospitals, universities, and pharmaceutical companies with the means to use Big Data to identify anomalies, discover new treatment combinations, and enhance diagnostic decision making.

Content Curation

An important aspect of cognitive computing and Big Data is the ability to perform a measure of content curation. The lung cancer Big Data environment to be analyzed should include both structured and unstructured data (unstructured being documents, spreadsheets, images, video, etc.). In order to ingest the data from the Big Data resource, the data will need to be prepared. This preparation includes applying Information Architecture (IA) to the unstructured data within the repository. Understanding the organization and classification schemes relating to the data, both structured and unstructured, is essential to unifying the data under one consistent ontology.
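To show what that preparation step might look like, here is a minimal sketch of mapping records from two differently structured sources onto one shared schema, a small stand-in for a unifying ontology. All field names and values are invented for illustration:

# Illustrative sketch: normalizing records from different sources onto a
# shared schema so they can be joined. Field names are assumptions.
def normalize(record: dict, mapping: dict) -> dict:
    """Rename source-specific fields to the shared schema's terms."""
    return {mapping.get(key, key): value for key, value in record.items()}

# Two sources describing the same concepts with different labels.
registry_row = {"pt_id": "A-103", "dx": "NSCLC", "stage": "II"}
notes_meta = {"patient": "A-103", "diagnosis": "NSCLC", "doc_type": "pathology report"}

registry_map = {"pt_id": "patient_id", "dx": "diagnosis"}
notes_map = {"patient": "patient_id"}

unified = [normalize(registry_row, registry_map), normalize(notes_meta, notes_map)]
print(unified)  # both records now share 'patient_id' and 'diagnosis' keys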

Are We Up for the Challenge!

Even if a Big Data source were available and content curation were successful, the vast amounts of patient data are governed by HIPAA laws, which makes it difficult for researchers to gain access to clinical and genomic data shared across multiple institutions or firms, including research institutions and hospitals. According to Dr. Tom Coburn in his January 14th Wall Street Journal article "A Cancer 'Moonshot' Needs Big Data," gaining access to a big data repository inclusive of patient-specific data is essential to offering patient-centered cancer treatments. Besides the technology challenges, there are data and regulation challenges. I'm sure that many of these challenges are being addressed; thus far, there have been no solutions. Are we up for the challenge? Big Data analysis could help tell us which cancer patients are most likely to be cured with standard approaches, and which need more aggressive treatment and monitoring. It is time we solve these challenges to make a moonshot a certain reality!

Feb 29, 2016

On January 12, 2016, in his State of the Union address, President Obama called for America to become "the country that cures cancer once and for all" as he introduced the "Moonshot" initiative, to be guided by Vice President Joe Biden.

Dr. Tom Coburn, former Republican Senator from the state of Oklahoma and three-time cancer survivor, indicates in his January 14th Wall Street Journal article "A Cancer 'Moonshot' Needs Big Data" that "harnessing that information ('big data') would allow us to personalize prevention and treatment based on the genetic characteristics of a patient's tumor, family history and personal preferences, while minimizing unwanted side effects."

On February 5, 2016, on CNN's Global Public Square segment "Big data could be a health care game-changer," author and doctor David Agus told Fareed Zakaria how using big data and examining thousands of cases might increase how long we live and our quality of life.

At this time in our history, with the continuing electronic capture of patient information from intake to discharge, the opportunity to cure cancer could not be brighter. The Obama administration's 2010 initiative to capture electronic health records has created the opportunity to improve patient care, increase patient participation, improve diagnostics and patient outcomes, and improve care coordination, as well as to create practice efficiencies and cost savings.

The electronic capture of patient information has created medical big data repositories. One such repository is the American College of Surgeons/American Cancer Society's National Cancer Database (NCDB). Resources such as these will benefit from knowledge management and information architecture techniques that identify and unlock the knowledge patterns contained within these big data sources. In several of my blog posts dating back to January 2013, I wrote about the advantages of applying KM to big data: from understanding Contextual Intelligence, KM, and Big Data, to devoting a chapter to KM and Big Data in my upcoming book Knowledge Management in Practice. I believe that, when executed the right way, KM powered by information architecture will provide the essential ingredient when applied to big data. This will enable researchers to discover better treatments and possible cures for many diseases, including cancer, and we will realize the dream presented by the Moonshot initiative!

Sep 19, 2015

This all started during a conversation I had with a colleague (Baron Murdock of GreenBox Ventures, LLC) in which he mentioned the term Contextual Intelligence. Because we were talking about knowledge management and big data, I believed I understood what he was talking about; however, I had never heard the term. Not long after our meeting I began to research the concept of contextual intelligence.

What is Contextual Intelligence?

It was during my initial research (consisting of a series of internet search queries) that I began to understand that the term Contextual Intelligence is not new. As a matter of fact, it's a term that has been used in graduate business schools since the 80's.

Contextual Intelligence is, according to Matthew Kutz, "a leadership competency based on empirical research that integrates concepts of diagnosing context and exercising knowledge." Tarun Khanna states that "understanding the limits of our knowledge is at the heart of contextual intelligence." And Dr. Charles Brown states that "Contextual intelligence is the practical application of knowledge and information to real-world situations. This is an external, interactive process that involves both adapting to and modifying an environment to accomplish a desired goal, as well as recognizing when adaptation is not a viable option. This is the ability that is most closely associated with wisdom and practical knowledge."

While there are several positions on what contextual intelligence is, I align most closely with Dr. Brown's assertion. When it comes to knowledge management (KM) and contextual intelligence, context matters! Understanding that contextual intelligence is linked to our tacit knowledge, I immediately thought about the connection between KM and Contextual Intelligence. Knowledge management, among other aspects, is concerned with the ability to understand knowledge and adapt that knowledge across a variety of environments (cultures) different from the origin of that knowledge.

To enable the flow of knowledge to the right person, at the right time, and in the right context, it is essential to understand the context of that knowledge. Information Architecture (IA) is the backbone of delivering knowledge in the right context to users of Knowledge Management Systems (KMS). IA focuses on organizing, structuring, and labeling content (information and knowledge). IA enables users to find relevant content in the right context, understand how content fits together, and connect questions to answers and people to experts. It is the incorporation of IA that gives knowledge its context.

Understanding the context of knowledge consists of the following (see the sketch after this list):

  • Understanding the intent of the knowledge
  • Understanding the cultural and environmental influences on the knowledge
  • Understanding the role (or who) the knowledge is intended to be used by
  • Understanding the relevancy of the knowledge (knowledge may only be valid for a specific period of time)
  • Understanding the origin (lineage) of the knowledge
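As a hedged sketch, these five dimensions can be carried as explicit metadata on a knowledge asset so that a KMS can reason about them. The field names and helper below are illustrative assumptions, not a prescribed model:

# Illustrative sketch: the five context dimensions above as explicit
# metadata on a knowledge asset. Field names are assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class KnowledgeContext:
    intent: str                  # what the knowledge is meant to accomplish
    culture: str                 # cultural/environmental setting of origin
    intended_role: str           # who the knowledge is intended to be used by
    valid_until: Optional[date]  # relevancy window, if the knowledge is time-bound
    origin: str                  # lineage: where the knowledge came from

ctx = KnowledgeContext(
    intent="guide supplier negotiations",
    culture="EMEA field offices",
    intended_role="procurement lead",
    valid_until=date(2016, 12, 31),
    origin="2014 vendor-strategy workshop",
)

def is_current(c: KnowledgeContext, today: date) -> bool:
    """Relevancy check: knowledge may only be valid for a period of time."""
    return c.valid_until is None or today <= c.valid_until

print(is_current(ctx, date(2015, 9, 19)))  # True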

Big Data

Without context, data is meaningless; this includes structured and unstructured data. Big Data resources contain a proliferation of both. Knowledge management techniques applied to big data resources to extract knowledge will need to account for the context of the data in order to deliver pertinent knowledge to users. Knowledge Management has the ability to integrate and leverage information from multiple perspectives, and Big Data is uniquely positioned to take advantage of KM processes and procedures. These processes and procedures enable KM to provide a rich structure for making decisions on a multitude and variety of data.

We know that context matters, especially when it comes to what we know (our knowledge). Being able to adapt our knowledge with others is at the heart of successfully communicating, sharing what we know, and fueling innovation.

Obtaining contextual intelligence for your organization consists of leveraging or hiring people who are fluent in more than one culture, partnering with local companies, developing localized talent and enabling your employees to do more field work to immerse themselves in other cultures (tuning in to cultural and environmental differences).

A couple of great resources to read on Contextual Intelligence are “Contextual Intelligence” by Tarun Khanna from the September 2014 issue of Harvard Business Review and “Understanding Contextual Intelligence: a critical competency for today’s leaders” by Matthew R Kutz and Anita Bamford-Wade from the July 2013 Emergent Publications, Vol. 15 No. 3.

Jan 30, 2013

Now that we are firmly into 2013, let's take a look at what is trending in Knowledge Management (KM) this year.

With the proliferation of mobile devices (iPhone, Chromebook, iPad, Android devices), Personal KM is moving front and center. In the enterprise, as more and more content and knowledge gets created and the need grows to access and use that knowledge to address the day-to-day needs of workers and customers (see Big Data), delivering knowledge quickly to internal and external users, along with search and findability, is getting much attention within organizations implementing KM.

Let’s take a look at what some others are indicating the trends will be for KM in 2013:

  • Matthew Whalley – Client Knowledge Manager (Legal Services), talks about "helping clients to realize efficiencies and knowledge gains; the growing realization that KM delivers more than 'documents', providing operational efficiency, transaction delivery, knowledge re-use and transformation; and technology – social and mobile channels".
  • SAP indicates that “defining a knowledge management strategy, structuring content and measuring business impact as well as reaching external leadership are becoming more and more important”.
  • KMWorld indicates that the focus is on sharing collective knowledge and on KM strategy more so than the technology.

These are some thoughts on what to expect regarding KM for 2013. So, what do you think? I look forward to hearing more about what other organizations and individuals are doing with KM in 2013!