Why enterprise information management is a key to analytics success
Healthcare organizations must eliminate the coping mechanisms they use to meet ad hoc data needs and instead promote institution-wide approaches.
As the velocity of data increases—and the demand for and consumption of that data intensifies—many organizations find themselves struggling to manage all of that data effectively.
Increasingly, organizations are unable to keep pace with the volume, velocity and variety of this data. In response, many have adopted data coping mechanisms to try to keep up, but in the end they only fall further behind.
In this article, we’ll examine how organizations can right the information ship, putting themselves, their employees and their customers on a course to information management excellence.
Several common reactions to the swell of data can be found in most organizations:
The creation of silos of data analysis. This is often the first step: Frustrated by IT's lack of responsiveness, business users set up their own mini data infrastructures from whatever data sources or extracts they can get their hands on and begin building their own reports.
Shadow IT. Still frustrated by IT's slow response time and confronted with urgent customer needs, business divisions take IT control into their own hands, independently hiring the technical skills needed to get the job done. These efforts often grow into shadow IT groups with dozens of people in informal organizational structures. The arrangement tends to shorten the tenure of those IT staffers, who see no long-term career path in an informal structure built on a dysfunctional business-IT relationship.
Duplication. As the number of data silos grows, significant duplication of both data and analysis occurs across the organization. Duplicated data drifts out of sync with source systems and repositories as those systems undergo changes, enhancements, modernization or deprecation. The result is a manifold increase in the total cost to the enterprise of maintaining the same underlying information across several data assets.
Fractured or no Data Governance. As this is occurring, any pre-existing data governance begins to break down, and business users start to question which reports to trust, which are up to date and whether the correct calculation has been implemented.
There is little doubt that when business divisions take matters into their own hands, it can reduce time to value. However, doing so creates a new set of operational challenges that adds to an already long list of problems to be solved.
Solving this problem of uncontrolled data, while minimizing the creation of new problems, begins with an overriding philosophy: Organizations that democratize the generation of information and business intelligence have a major advantage over organizations where IT, or any other single group, holds a monopoly over the creation and dissemination of information. IT should play a supporting role in information development, not a monopolistic one.
This is a fundamental shift in thinking for most organizations. The concept of Enterprise Information Management (EIM) provides a framework for overseeing and governing information development and dissemination across the organization.
According to Gartner, EIM is an integrative discipline for structuring, describing and governing information assets across organizational and technological boundaries to improve efficiency, promote transparency and enable business insight.
Before EIM implementation can start, it is important to gain cross-functional support from senior management. It is best to adopt EIM as early as possible, both as a supportive measure that brings data closer to the business and as a preventive measure that keeps the issues highlighted above from becoming bigger problems.
How EIM is structured, what functions it performs and how it interacts with other functions are all part of the EIM evolution in any organization; there is no single approach that fits all organizations. Each organization's needs and internal dynamics must be carefully evaluated to tailor a phased approach.
In broad strokes, EIM is concerned with the following:
Data Governance. Data governance is the process of gaining consensus on the standards and meaning of data, setting up and enforcing sound data quality standards, creating and auditing data security and privacy policies, and guiding new systems development to comply with those standards. Data governance succeeds only if there is cross-divisional commitment to, and investment in, working through the painstaking process of consensus building and standards definition. It must not stop there, however, lest the new standards quickly become "shelfware." Active information advocacy, along with a demand for compliance and resistance to the impulse to cut corners, is absolutely essential. Once the larger organization sees the benefits, adoption becomes much easier.
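One way to keep such standards from becoming shelfware is to capture each governed data element in a machine-readable form that systems and reviewers can check against. The sketch below is illustrative only; the fields, names and rules are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class GovernedElement:
    """One entry in a governed data dictionary (illustrative fields only)."""
    name: str               # agreed business name
    definition: str         # consensus definition from the governance process
    steward: str            # business owner accountable for the element
    source_of_record: str   # the system designated as authoritative
    quality_rules: list = field(default_factory=list)  # enforceable checks

# Hypothetical example of a governed element for a health plan
member_dob = GovernedElement(
    name="member_date_of_birth",
    definition="Date of birth of the insured member, verified at enrollment.",
    steward="Enrollment Operations",
    source_of_record="membership_system",
    quality_rules=["not null", "not in the future", "age <= 120 years"],
)
```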
Enterprise Information Architecture. In most mid-size and large organizations, no one has created an inventory of existing data assets: databases for transactional systems, enterprise and operational data marts, analytical databases, feeds, extracts and so on. The picture is further complicated by a lack of documentation or understanding of data flows across these systems: how data moves from one system to another and what information is exchanged via feeds. Without a clear view of data assets and flows, it is very difficult to understand duplication, evaluate reuse or attempt consolidation. Enterprise information architecture, in addition to maintaining a blueprint of data assets and data flows, describes how data is modeled and how it should be integrated, and identifies the systems or sources of record.
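Even without a commercial metadata catalog, a minimal inventory can represent assets and the flows between them as a directed graph. The following sketch uses hypothetical asset names to show the idea.

```python
# Minimal inventory: each asset maps to the assets it feeds (hypothetical names).
data_flows = {
    "claims_system":          ["operational_data_store"],
    "membership_system":      ["operational_data_store", "crm_extract"],
    "operational_data_store": ["enterprise_warehouse"],
    "enterprise_warehouse":   ["finance_mart", "quality_mart"],
    "crm_extract":            ["marketing_mart"],
}

# Assets that nothing feeds are candidate sources of record;
# assets that feed nothing are terminal consumers.
all_assets = set(data_flows) | {a for feeds in data_flows.values() for a in feeds}
fed_assets = {a for feeds in data_flows.values() for a in feeds}
print("Candidate sources of record:", sorted(all_assets - fed_assets))
```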
Impact Analysis. In the absence of data governance and an enterprise information architecture, change management is applied only on a local scale, so upstream and downstream impacts cannot be effectively assessed. Some downstream impacts are easy to identify because they break something; others, such as duplication or noncompliance with data governance standards, are harder to detect.
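Given a flow graph like the one sketched above, downstream impact can be assessed with a simple traversal. This is a minimal sketch with hypothetical asset names, not a substitute for a full impact-analysis tool.

```python
from collections import deque

def downstream_impact(flows: dict, changed_asset: str) -> set:
    """Breadth-first walk of the flow graph to find every affected asset."""
    impacted, queue = set(), deque([changed_asset])
    while queue:
        for dependent in flows.get(queue.popleft(), []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

flows = {  # hypothetical flows, as in the inventory sketch above
    "membership_system":      ["operational_data_store"],
    "operational_data_store": ["enterprise_warehouse"],
    "enterprise_warehouse":   ["finance_mart", "quality_mart"],
}
print(downstream_impact(flows, "membership_system"))
# {'operational_data_store', 'enterprise_warehouse', 'finance_mart', 'quality_mart'}
```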
Data Warehouse and Business Intelligence Layer. A properly constructed data warehouse aggregates data from different sources and performs two major roles: it integrates data and presents it in a format suitable for rapid reporting and analysis, and it encapsulates business rules. EIM ensures that the appropriate data sources are integrated in a way that is usable for end users, that common business rules are implemented within the data warehouse, and that the possibility of erroneous data integration or business rule implementation downstream is minimized.
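A minimal sketch of the principle, with an invented rule: define each business rule once in the warehouse load layer, so every downstream report inherits the same answer instead of re-deriving it.

```python
from datetime import date

def is_active_member(termination_date, as_of_date):
    """Single shared definition of 'active member' (illustrative rule):
    active if coverage has not terminated as of the given date."""
    return termination_date is None or termination_date > as_of_date

# Applied once during the warehouse load; downstream reports read the
# flag rather than reimplementing the rule with their own variations.
members = [
    {"id": 1, "termination_date": None},
    {"id": 2, "termination_date": date(2015, 6, 30)},
]
for m in members:
    m["is_active"] = is_active_member(m["termination_date"], date(2016, 1, 1))
print(members)
```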
Metadata, master and reference data management. Metadata is data that describes the characteristics or context of other data. Master data management connects, or de-duplicates, multiple representations of the same entity into a single record. For example, a customer may appear in a contact listing, in the bill-to section of an invoice, as an insured on a policy, or as a claimant on a claim; master data processes attempt to connect all of these to one person, when that is the appropriate action.
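A minimal sketch of the match step, assuming naive exact-match keys; production master data management relies on far more sophisticated probabilistic matching. The records are hypothetical.

```python
def master_key(record: dict) -> tuple:
    """Naive match key: normalized name plus date of birth (illustrative)."""
    return (record["name"].strip().lower(), record["dob"])

records = [  # the same person appearing in three roles (hypothetical data)
    {"role": "bill_to",  "name": "Jane Doe ", "dob": "1980-04-02"},
    {"role": "insured",  "name": "jane doe",  "dob": "1980-04-02"},
    {"role": "claimant", "name": "Jane Doe",  "dob": "1980-04-02"},
]

golden = {}
for r in records:
    golden.setdefault(master_key(r), []).append(r["role"])
print(golden)  # one master entity linking all three roles
```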
Reference data provides a common nomenclature across the organization; for example, should a set of ZIP codes be represented as "Southern California" or "So Cal"? Another example is the Agency for Healthcare Research and Quality classification of diagnosis codes, which is revised every few years. Without knowing which version of the hierarchy is in effect, data analysts are working in the dark, and the quality of their deliverables becomes suspect.
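Version awareness can be made explicit by keying every reference lookup on the hierarchy version in effect, as in this sketch; the ZIP-prefix-to-region groupings are invented for illustration.

```python
# Reference data keyed by version, so analysts always know which
# hierarchy produced a grouping (groupings here are invented).
zip_regions = {
    "v2015": {"900": "Southern California", "941": "Northern California"},
    "v2018": {"900": "So Cal",              "941": "Nor Cal"},
}

def region_for(zip_code: str, version: str) -> str:
    return zip_regions[version].get(zip_code[:3], "Unmapped")

print(region_for("90012", "v2015"))  # Southern California
print(region_for("90012", "v2018"))  # So Cal
```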
Information Security Management. Information security management deals with defining the standards, processes and protocols for extending and revoking access to data assets. Access can be granted to individuals, systems and applications, vendors, and regulatory or law enforcement agencies as needed. Common occurrences such as an expanding data footprint, evolving business use cases and increased user demand for new sources of data constantly challenge the security status quo. Other drivers, such as HIPAA compliance and lawsuit risks stemming from accidental disclosures of protected health information (PHI), can also force a rethinking of the security management approach.
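A toy sketch of grant-and-revoke bookkeeping against data assets; real implementations sit on directory services and database permissions rather than an in-memory structure, and all names here are hypothetical.

```python
from collections import defaultdict

class AccessRegistry:
    """Toy registry of who may read which data asset (illustrative only)."""
    def __init__(self):
        self._grants = defaultdict(set)

    def grant(self, principal: str, asset: str) -> None:
        self._grants[asset].add(principal)

    def revoke(self, principal: str, asset: str) -> None:
        self._grants[asset].discard(principal)

    def can_read(self, principal: str, asset: str) -> bool:
        return principal in self._grants[asset]

registry = AccessRegistry()
registry.grant("claims_analyst", "claims_mart")
registry.revoke("former_vendor", "claims_mart")
print(registry.can_read("claims_analyst", "claims_mart"))  # True
```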
Information Quality Management. Information quality is the key attribute that determines user adoption; it answers one central question about the data: Can I trust it? Without robust quality checks applied across all data assets, users will not trust or use the data. They will ultimately resort to creative ways of getting the data they want and need, pushing all EIM progress to date back to square one. Information quality management is therefore one of the core pillars of EIM.
Information quality management should be viewed as an extension of data governance, in that it needs a strong component of business oversight and ownership. Data quality is a unique challenge for information architects because it requires a focus beyond immediate code and design quality, into the realm of run-time quality scoring, triage processes, management of data reloads and the like.
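Run-time quality scoring can be as simple as evaluating each governed rule against a load and recording the pass rate, as in this sketch; the rules, fields and triage behavior are illustrative assumptions.

```python
def quality_score(rows: list, checks: dict) -> dict:
    """Score each named check as the fraction of rows that pass (illustrative)."""
    return {name: sum(check(r) for r in rows) / len(rows)
            for name, check in checks.items()}

rows = [  # hypothetical load
    {"member_id": "M1", "dob": "1980-04-02"},
    {"member_id": None, "dob": "1975-11-20"},
]
checks = {
    "member_id_present": lambda r: r["member_id"] is not None,
    "dob_present":       lambda r: r["dob"] is not None,
}
scores = quality_score(rows, checks)
# A load whose scores fall below an agreed threshold is routed to triage
# rather than published, so users never see untrusted data.
print(scores)  # {'member_id_present': 0.5, 'dob_present': 1.0}
```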
Most organizations have fragmented implementations of some of these key aspects of EIM. Until a holistic, unified strategy covering all of them is put into action, organizations will continue to take one step forward and two steps back.