5 components that artificial intelligence must have to succeed
Healthcare organizations have high hopes for using data to gain new insights, but most approaches to AI are limited and destined to disappoint.
It seems like only a few years ago that the term “big data” went from a promising area of research and interest to something so ubiquitous that it lost all meaning, ultimately becoming the butt of jokes.
Thankfully, the noise associated with “big data” is abating as sophistication and common sense take hold. In fact, in many circles, the term actually exposes the user as someone who doesn’t really understand the space. Unfortunately, the same malady has now afflicted artificial intelligence (AI). Everyone I meet is doing an “AI play.”
AI is, unfortunately, the new “big data.” That is not good, but it is not all bad either. After all, the data ecosystem benefited from all of the “big data” attention and investment, which produced some amazing software and some exceptional productivity gains.
The same will happen with AI: increased attention brings investment dollars, which in turn drive adoption and enhance the ecosystem.
But the question should be raised—what qualifies as AI?
First, the current definition of AI is focused on narrow, application-specific AI, not on the more general problem of artificial general intelligence (AGI), in which software that can convincingly simulate a person is taken as the equivalent of intelligence.
Second, the vast, vast majority of the data that exists in the world is unlabeled. It is not practical to label that data manually, and doing so would likely create bias anyway. Unlabeled data presents a different challenge, but the key point here is that it is everywhere and represents the key to extracting business value, or any value.
Third, we are not producing data scientists at a rate that can keep pace with the growth of data. Even with the moniker of “sexiest job of the 21st century,” the rate at which data scientists are trained doesn’t begin to approach the growth rate we are seeing in data.
Fourth, data scientists, for the most part, are not UX designers or product managers or, in many cases, even engineers. As a result, the subject matter experts—those who sit in the business—don’t have effective interfaces to the data science outputs. The interfaces they do have—PowerPoint, Excel or PDF reports—have limited utility in transforming the behavior of a company. What is required are applications that can shape behavior.
So what does qualify as intelligence? Here is a framework for what AI should display. Some of these elements may seem self-evident when taken as single items, but intelligence has a broader context: all of the elements must work in conjunction with one another to qualify as AI.
Discover
Discovery is the ability of an intelligent system to learn from data without upfront human intervention. Often, this needs to be done without being presented with an explicit target. It relies on the use of unsupervised and semi-supervised machine learning techniques (such as segmentation, dimensionality reduction, anomaly detection and more), as well as more supervised techniques where there is an outcome or there are several outcomes of interest.
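As a rough illustration of what this can look like in practice, here is a minimal sketch, assuming a tabular, unlabeled dataset and using off-the-shelf scikit-learn routines for dimensionality reduction, segmentation and anomaly detection. The file name and columns are hypothetical, and the specific algorithms are stand-ins for whatever a given platform actually uses.

```python
# Minimal sketch of unsupervised "discovery" on unlabeled data (scikit-learn).
# The dataset and its columns are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

df = pd.read_csv("encounters.csv")                 # unlabeled numeric features
X = StandardScaler().fit_transform(df.values)

# Dimensionality reduction: compress correlated measurements.
X_low = PCA(n_components=10).fit_transform(X)

# Segmentation: group similar records without an explicit target.
segments = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X_low)

# Anomaly detection: flag records that fit no segment well (-1 = anomaly).
outliers = IsolationForest(random_state=0).fit_predict(X_low)

df["segment"] = segments
df["is_outlier"] = outliers == -1
print(df.groupby("segment")["is_outlier"].mean())
```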
In enterprise software, the term discovery usually refers to the ability of ETL/MDM solutions to discover the schemas of tables in large databases and automatically find join keys and the like. This is not what we mean by discovery. We use the term very differently, and the distinction has important implications.
In complex datasets, it is nearly impossible to ask the “right” questions. To discover what value lies within the data, one must understand all the relationships that are inherent and important in the data. That requires a principled approach to hypothesis generation.
One technique, topological data analysis (TDA), is exceptional at surfacing hidden relationships in the data and identifying which of those relationships are meaningful, without having to ask specific questions of the data. The result is an output that can represent complex phenomena and is therefore able to surface weaker signals as well as stronger ones.
This permits the detection of emergent phenomena. As a result, enterprises can now discover answers to questions they didn’t even know to ask and do so with data that is unlabeled.
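TDA itself covers many methods; as one heavily simplified, hypothetical illustration, the sketch below builds a toy Mapper-style graph by hand: cover the range of a filter function with overlapping intervals, cluster the points in each interval, and connect clusters that share points. It is a teaching sketch, not a production TDA implementation.

```python
# Toy Mapper-style construction: a simplified illustration of how TDA can
# expose structure without asking a specific question of the data.
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_graph(X, filter_values, n_intervals=10, overlap=0.3, eps=0.5):
    """Cover the filter range with overlapping intervals, cluster each slice,
    and link clusters (nodes) that share data points (edges)."""
    lo, hi = filter_values.min(), filter_values.max()
    step = (hi - lo) / n_intervals
    width = step * (1 + overlap)             # intervals overlap their neighbors
    nodes, edges = [], set()
    for i in range(n_intervals):
        start = lo + i * step
        idx = np.where((filter_values >= start) & (filter_values <= start + width))[0]
        if len(idx) < 3:
            continue
        labels = DBSCAN(eps=eps, min_samples=3).fit_predict(X[idx])
        for lab in set(labels) - {-1}:
            nodes.append(set(idx[labels == lab]))
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            if nodes[a] & nodes[b]:          # shared points => connect the nodes
                edges.add((a, b))
    return nodes, edges

# Example with random 2-D data, filtered by the first coordinate.
X = np.random.RandomState(0).randn(500, 2)
nodes, edges = mapper_graph(X, filter_values=X[:, 0])
print(len(nodes), "nodes,", len(edges), "edges")
```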
Predict
Once the data set is understood through intelligent discovery, supervised approaches are applied to predict what will happen in the future. These types of problems include classification, regression and ranking.
For this pillar, most companies use a standard set of supervised machine learning algorithms, including random forests, gradient boosting and linear/sparse learners. It should be noted, however, that the unsupervised work from the previous step remains highly useful here. For example, it can generate relevant features for prediction tasks or find local patches of data where supervised algorithms may struggle (systematic errors).
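Continuing the hypothetical sketch above, the fragment below shows that pattern: the segment labels from the discovery step become an extra feature for a random forest and a gradient-boosted model predicting a placeholder outcome (here a made-up 30-day readmission flag).

```python
# Sketch: supervised prediction that reuses unsupervised structure as a feature.
# The file, target and feature names are placeholders.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("encounters_labeled.csv")       # hypothetical labeled extract
y = df.pop("readmitted_30d")                     # hypothetical binary target

# Feature carried over from the discovery step: segment membership.
df["segment"] = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(df.values)

X_tr, X_te, y_tr, y_te = train_test_split(df.values, y, test_size=0.25,
                                          random_state=0)

for model in (RandomForestClassifier(n_estimators=300, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, "held-out AUC:", round(auc, 3))
```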
The predict phase accounts for much of the business value associated with data science; however, in predictive analytics there is a common notion that prediction is the sum total of machine learning. That is far from the case.
Prediction, while important, is well understood and does not, on its own, qualify as “intelligence.” Further, prediction can go wrong along a number of dimensions, particularly if the groups on which you are predicting carry some type of bias. In and of itself, prediction is not AI, and we need to stop calling it that.
Justify
Applications need to support interaction with humans in a way that makes outcomes recognizable and believable. For example, when one builds a predictive model, it is important to be able to explain how the model is doing what it is doing, including what the features in the model are contributing, in terms that are familiar to the model’s users. This level of familiarity is important in generating trust and intuition.
Just as automobiles have mechanisms not only for detecting the presence of a malfunction but also for specifying its nature and suggesting a method for correcting it, one needs a nuts-and-bolts understanding of how an application works in order to “repair” it when it goes awry.
There is a difference between transparency and justification. Transparency tells you what algorithms and parameters were used, while justification tells you why. For intelligence to be meaningful, it must be able to justify and explain its assertions, as well as to be able to diagnose failures. No leader should deploy intelligent and autonomous applications against critical business problems without a thorough understanding of what variables power the model. Enterprises cannot move to a model of intelligent applications without trust and transparency.
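One minimal, widely available way to answer “what variables power the model” in familiar terms is permutation importance: shuffle one feature at a time and measure how much a held-out score degrades. The sketch below assumes the fitted model and held-out split from the prediction sketch above; it is an example of the idea, not the only way to justify a model.

```python
# Sketch: justification via permutation importance on held-out data.
# Assumes `model`, `df`, `X_te` and `y_te` from the prediction sketch above.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                                n_repeats=20, random_state=0)

ranked = sorted(zip(df.columns, result.importances_mean, result.importances_std),
                key=lambda item: -item[1])
for name, mean_drop, std_drop in ranked[:10]:
    print(f"{name:30s} drop in AUC: {mean_drop:.3f} +/- {std_drop:.3f}")
```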
Act
AI without UX is of limited utility. UX is what distributes that intelligence across the organization and pushes it to the edge – where it can be consumed by practitioners and subject matter experts.
Ultimately, the process of operationalizing an intelligent application within the enterprise requires some change in the organization, an acceptance that the application will evolve over time, and that it will demand downstream changes – automated or otherwise.
For this to happen, intelligent applications need to be “live” in the business process, seeing new data and automatically executing the loop of discover, predict and justify at a frequency that makes sense for that business process. For some processes that may be quarterly; for others, daily. The loop can even be measured in seconds.
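What “live in the business process” means in practice varies, but one minimal pattern is a scheduled job that re-runs the discover, predict and justify steps on whatever new data has arrived and pushes the results back into the workflow. The outline below uses placeholder functions and a plain sleep loop; a real deployment would rely on the organization’s own scheduler, data pipeline and deployment tooling.

```python
# Sketch: a scheduled discover -> predict -> justify -> act loop.
# All functions passed in are placeholders for the steps sketched earlier.
import time

REFRESH_SECONDS = 24 * 60 * 60        # daily here; could be quarterly or seconds

def run_cycle(load_new_data, discover, predict, justify, act):
    df = load_new_data()              # pull whatever arrived since the last run
    if df is None or len(df) == 0:
        return
    segments = discover(df)           # unsupervised structure
    scores = predict(df, segments)    # supervised scoring
    explanation = justify(df, scores) # variables powering the scores
    act(scores, explanation)          # push results to the people and process

def serve(load_new_data, discover, predict, justify, act):
    while True:
        run_cycle(load_new_data, discover, predict, justify, act)
        time.sleep(REFRESH_SECONDS)
```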
Learn
An intelligent system is one that is always learning, lives in the workflow and is constantly improving. In the modern data world, an application that is not getting more intelligent is getting dumber.
Intelligent applications are designed to detect and react when data distributions evolve. As a result, they need to be “on the wire” to catch those shifts before they become a problem.
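Detecting that distributions have evolved can start with something as simple as comparing each feature’s recent values against the values the model was trained on. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy as one illustrative check; the threshold and the retraining policy are placeholders.

```python
# Sketch: per-feature drift check between training data and live data.
from scipy.stats import ks_2samp

def drifted_features(X_train, X_live, feature_names, p_threshold=0.01):
    """Return features whose live distribution differs from the training data
    according to a two-sample Kolmogorov-Smirnov test."""
    flagged = []
    for j, name in enumerate(feature_names):
        stat, p_value = ks_2samp(X_train[:, j], X_live[:, j])
        if p_value < p_threshold:
            flagged.append((name, stat, p_value))
    return flagged

# Hypothetical usage: alert or retrain when enough features have drifted.
# drifted = drifted_features(X_train, X_live, feature_names)
# if len(drifted) > 3:
#     retrain_and_redeploy()
```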
Too many solutions provide an answer at a point in time; an intelligent system is one that is always learning through the framework outlined here. This is what defines intelligence—not a machine learning algorithm kicking out PDFs containing predictions or the results of a data scientist’s work. For the industry to continue to grow and evolve, we need to start doing a better job of recognizing what is truly AI and what is not.