How healthcare organizations can use data unification to find insights
Technologies like those created at MIT’s Computer Science and Artificial Intelligence Laboratory under the guidance of Turing Award winner Michael Stonebraker are helping organizations make better use of analytics.
Even as organizations increasingly turn to data preparation to feed their analytics tools and generate better data-driven intelligence, they are running up against a hard limit on how many data sources those tools can handle.
Useful data comes in all forms and from a wide range of sources, and that’s particularly true in healthcare organizations, which gather patient data from a variety of systems. But at the same time, many organizations are experiencing a fundamental limitation with their traditional data preparation tools.
This problem is particularly acute for larger, mature organizations that have been accumulating data in separate systems for years. They may run several ERP and CRM systems, or they may have acquired or merged with other organizations that collected information in their own data silos.
But regardless of the source of the incompatibilities, traditional data management tools simply can’t handle the number and variety of sources needed to make full use of modern analytics. Instead, businesses like these need data unification.
Unlike data preparation, which hits a brick wall when combining data from anything more than about a dozen sources, data unification specifically addresses the challenge of bringing together data from numerous and disparate sources. Data unification technologies, like those created at MIT’s Computer Science and Artificial Intelligence Laboratory under the leadership of Turing Award honoree Michael Stonebraker, apply human-guided machine learning to unearth the underlying structure in divergent data.
These tools evaluate the metadata, offer suggestions for combining similar fields and query experts for guidance on possible matches to enhance their models. In this way, data unification quickly creates a single view of the relevant data, ready for analysis.
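A minimal sketch of what that metadata evaluation can look like in practice, assuming two sources whose column names must be aligned; the similarity function, thresholds, and field names here are illustrative, not drawn from any particular product:

```python
# Hypothetical sketch: suggest field matches across two sources by name
# similarity, auto-accepting confident pairs and routing borderline ones
# to a human expert. Uses only the Python standard library.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude similarity score on normalized column names."""
    norm = lambda s: s.lower().replace("_", "").replace(" ", "")
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def suggest_matches(source_cols, target_cols, auto=0.9, review=0.6):
    """Pair each source column with its best-scoring target candidate.

    Scores at or above `auto` are accepted automatically; scores between
    `review` and `auto` are queued for expert confirmation; the rest are
    left unmatched.
    """
    accepted, needs_review = [], []
    for src in source_cols:
        best = max(target_cols, key=lambda tgt: similarity(src, tgt))
        score = similarity(src, best)
        if score >= auto:
            accepted.append((src, best, round(score, 2)))
        elif score >= review:
            needs_review.append((src, best, round(score, 2)))
    return accepted, needs_review

accepted, needs_review = suggest_matches(
    ["patient_id", "surname", "dob"],
    ["PatientID", "last_name", "date_of_birth"],
)
print(accepted)      # confident pairs, e.g. patient_id -> PatientID
print(needs_review)  # borderline pairs routed to a domain expert
```

A production system would score on much richer signals, such as data types, value distributions and sample overlap, and would learn from each confirmation, but the accept-or-review split is the essential pattern.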
Human-guided machine learning is a key design principle for data unification. As data scientists and data engineers use the technology to integrate data, the system learns from the process to automate more of the data matching and better structure the final result.
Unlike other approaches that send users back to square one for every project, data unification can utilize previous results, along with what it learned from generating those results, to provide faster and more accurate outcomes for each undertaking.
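As a toy illustration of that carryover, assuming confirmed matches are simply stored and consulted before any re-scoring; a real system would generalize from many such labeled pairs with a trained model rather than keep an exact-match table:

```python
# Hypothetical sketch: a small "match memory" that records expert-confirmed
# field pairs so subsequent projects reuse them instead of starting over.
from typing import Dict, Optional

class MatchMemory:
    """Remembers expert-confirmed (source field -> target field) decisions."""

    def __init__(self) -> None:
        self.confirmed: Dict[str, str] = {}

    def record(self, src: str, tgt: str) -> None:
        """Store an expert's confirmation for reuse in later projects."""
        self.confirmed[src.lower()] = tgt

    def lookup(self, src: str) -> Optional[str]:
        """Return a previously confirmed target field, if one exists."""
        return self.confirmed.get(src.lower())

memory = MatchMemory()
memory.record("surname", "last_name")  # an expert confirms the pair once...

# ...and the next project resolves the same field with no human in the loop.
assert memory.lookup("SURNAME") == "last_name"
```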
In this way, data unification enhances the efforts of self-service data preparation users within these larger enterprises. One large pharmaceutical company, for instance, is using the technique to curate thousands of clinical trial datasets. Data unification is used to get source datasets into the correct format for analysis, while a data prep system is used downstream for individual data “wrangling.”
Data unification is also helping companies address a host of challenges caused by dirty, fractured data. For example, computer solutions and services provider Hewlett Packard Enterprise created a data-driven customer journey that goes beyond individual activity to capture activity at the company level. Iana Dankova, Business Analytics Manager at HPE, says the process has “allowed us to get to views and insights we otherwise could never have reached, ultimately improving our win rate.”
Thomson Reuters is also using data unification to deliver better-connected content at a fraction of the time and cost of legacy approaches. Likewise, GE applied machine learning to its procurement data to uncover tens of millions of dollars in savings.
Indeed, data unification efforts are crossing all industries from telecommunications to business consulting, replacing or working in conjunction with traditional ETL and MDM systems. And while it can be a boon for organizations of all sizes, enterprises with significant fracture and noise in their data are discovering that the technology allows them to transform their massive data stores from a liability to a significant decision-making advantage.