Why quality measurement raises the stakes for true interoperability
Value-based care performance assessment must collect data from multiple EHRs and source systems to incorporate the complete longitudinal record.
The future of healthcare is value-based care, and the idea behind it is simple: producers of higher quality products and services are financially rewarded. That’s how most markets work.
When we think about buying healthcare, the concepts of “components” and “experience” translate into “processes” and “outcomes.” When our medical care has more processes, such as physician and specialty consults, we acknowledge it should cost more. But getting more care doesn’t necessarily mean better health. And better health is the ultimate outcome in the context of buying healthcare.
VBC contracts come in many flavors. Many include incentives to manage to the total cost of care or the costs of episodes of care; providers often share in savings. These contracts almost universally incorporate quality and outcome metrics. Measuring those can be complicated and challenging. As a result, thousands of measures have been developed, focused on different specialties and patient populations.
But even after we figure out the right measures, we need accurate and complete patient data to make accurate measurements. The data challenge has grown as the measurement focus has shifted from process measures to outcome measures, which often require clinical data not present in claims. Widespread electronic health record (EHR) adoption makes this possible today, but it remains challenging.
This month, two leading standards organizations (NCQA & HL7) held a conference in Washington to advance electronic quality measurement. Attendees of this “Digital Quality Summit” included leaders from the Office of the National Coordinator for Health Information Technology, the Centers for Medicare and Medicaid Services (CMS), major EHR vendors, payers, health systems and health information exchanges (HIEs).
Here are the major themes:
Data quality remains a challenge
EHRs have enabled the collection of the essential data for quality measurement, and standards provide a common way for providers to transmit such data. However, a lot of information still needs to be normalized, cleaned and enriched before it’s ready to be used for quality measurement and population health.
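The normalization step described above can be sketched in a few lines. This is a minimal, hypothetical example: the local codes, the LOINC-style mapping, and the unit conversions are illustrative assumptions, not an actual site's dictionary.

```python
# Sketch of pre-measurement data cleaning: map site-specific lab codes to a
# standard vocabulary and convert units. All codes/mappings are illustrative.

# Hypothetical map from local lab codes to a standard (LOINC-style) code.
LOCAL_TO_STANDARD = {
    "GLU-FAST": "2345-7",  # illustrative mapping, not an authoritative assignment
    "BG": "2345-7",
}

# Glucose: 1 mmol/L is approximately 18 mg/dL.
UNIT_FACTORS = {"mg/dL": 1.0, "mmol/L": 18.0}

def normalize(result):
    """Map a raw lab result to a standard code and unit; drop unmappable rows."""
    code = LOCAL_TO_STANDARD.get(result["local_code"])
    factor = UNIT_FACTORS.get(result["unit"])
    if code is None or factor is None:
        return None  # flag for data-quality review rather than silently keep
    return {"code": code,
            "value_mg_dl": result["value"] * factor,
            "date": result["date"]}

raw = [
    {"local_code": "GLU-FAST", "value": 6.1, "unit": "mmol/L", "date": "2017-05-02"},
    {"local_code": "BG", "value": 112.0, "unit": "mg/dL", "date": "2017-06-15"},
    {"local_code": "???", "value": 99.0, "unit": "mg/dL", "date": "2017-06-20"},
]
clean = [r for r in (normalize(x) for x in raw) if r is not None]
```

In practice this step also covers patient matching, deduplication, and enrichment; dropping unmappable rows (rather than keeping them silently) is one design choice that makes data-quality gaps visible for monitoring.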
It’s also important for data to be semantically interoperable, and many attendees at the conference shared the need for tools to monitor and improve data quality.
Longitudinal data is key
EHR design reflects how healthcare organizations have used medical records historically. Today, EHRs are primarily transactional systems. During a patient visit, a clinician documents some findings, orders lab tests and prescribes one or more drugs. When patients go to different providers with different EHRs, that data doesn’t naturally flow into other providers’ systems. Because sick patients rarely see a single provider, the result is that many systems have data about individual visits, but no one has the complete picture.
At this event, David Kendrick, MD, demonstrated the extent of care fragmentation in his state using HIE data, and there are published studies that document care fragmentation. To calculate quality measures correctly, a healthcare organization needs all the data. Stated another way, data gaps cause errors in quality measurement. For example, the measure for glucose control is based upon the most recent lab result. If one EHR holds only an outdated value, the measure calculated from that system alone will be inaccurate.
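The glucose-control example above can be made concrete. This is a toy sketch, not an actual measure specification: the threshold, dates, and values are assumptions chosen to show how a single-source view and a longitudinal view disagree.

```python
from datetime import date

# Toy glucose-control sketch: the measure keys off the most recent result,
# so a system missing newer data computes it incorrectly.
# Threshold and values are illustrative, not a real measure definition.

# One EHR holds only an outdated result; the longitudinal record has a newer one.
ehr_a = [(date(2016, 11, 3), 9.4)]
ehr_b = [(date(2016, 11, 3), 9.4), (date(2017, 5, 20), 7.2)]

def poorly_controlled(results, threshold=9.0):
    """True if the most recent result exceeds the threshold.
    Tuples sort by date first, so max() picks the latest result."""
    latest_date, latest_value = max(results)
    return latest_value > threshold

single_source_view = poorly_controlled(ehr_a)  # flags the patient
longitudinal_view = poorly_controlled(ehr_b)   # does not
```

The same patient is flagged as poorly controlled or not depending solely on which system runs the calculation, which is exactly the data-gap error the paragraph describes.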
Significant data gaps exist in real-world systems. If incentive payments are to meaningfully align with quality and outcome measures, trust must be established in the measure calculations. To that end, longitudinal information is a must.
A measure is a measure
During one discussion, CMS stated that, for a given measure, there were different measure definitions depending on whether a payer or a provider was being assessed. For example, when looking at mammography for breast cancer prevention, a payer may demonstrate compliance through the presence of a CPT code, but providers would be expected to document relevant LOINC codes to be compliant (e.g., in the MIPS program).
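The consequence of those dual definitions can be sketched directly. The code sets below are illustrative placeholders, not the actual CMS value sets, but the mechanism is the point: two checks over the same patient record can disagree.

```python
# Sketch of how payer-facing and provider-facing definitions of the "same"
# mammography measure can diverge. Code sets are illustrative placeholders,
# not actual CMS value sets.

PAYER_CPT_SET = {"77067"}          # example CPT screening-mammography code
PROVIDER_LOINC_SET = {"24606-6"}   # example LOINC mammography code

def compliant(patient_codes, value_set):
    """A patient is compliant if any code in the record is in the value set."""
    return bool(patient_codes & value_set)

# A patient whose claim carries the CPT code but whose chart lacks a coded
# LOINC result is compliant under one definition and not the other.
patient_codes = {"77067"}
payer_view = compliant(patient_codes, PAYER_CPT_SET)
provider_view = compliant(patient_codes, PROVIDER_LOINC_SET)
```

The same screening event yields opposite compliance results depending on which program's definition is applied, which is why attendees pressed for common definitions.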
Many attendees thus expressed the need for common definitions. Measure alignment between programs is going to be critically important in providing clear direction and milestones for clinical care improvement. Shahid Shah gave a great presentation during the summit on a future vision of patient-oriented quality, rather than legacy institution-based concepts (such as payers, hospitals and clinics). That vision at the patient level forces a single definition for all quality measures.
Trusted third parties are important
Who should measure quality? Should it be the provider, the payer or a neutral third-party? These questions were repeated throughout the conference. Some payers wanted to receive all the clinical data and then calculate measures themselves. Many providers were participating in self-selected and self-reported measures through the Meaningful Use program. Both of these methods suffer from serious limitations.
Payers only receive the data long after care is provided, often through inefficient processes. If they perform the measurement, it’s hard for them to give timely feedback to the providers who can actually improve care. Providers, on the other hand, can measure quickly using their EHR, but relevant data from other providers is often not available, compromising the measurements.
The solution is to have trusted third parties measure quality. Candidates include medical societies and health information exchanges. CMS has a program, Qualified Clinical Data Registries, which certifies organizations to calculate clinical quality measures from longitudinal patient data.
HIEs are ideally suited for this role, because they have multi-sourced longitudinal patient data and can use this data for other purposes, including facilitating care transitions and providing clinical notifications. A neutral intermediary that does not have a financial interest in the outcomes can play a key role in providing objective measurements that will be trusted by all parties to value-based contracts.
Interoperability and quality measurement share a common future
In the past, interoperability and quality measurement were thought of as unrelated. To achieve accurate quality measurement and gain the trust of providers engaged in value-based arrangements, the industry needs a new paradigm. Any system that measures quality must incorporate data from multiple source systems because no one EHR can be assured to have the complete longitudinal record.
This implies that we need true interoperability. While the free flow of information has been a long-sought goal of healthcare, the economic imperative for data-sharing is significantly strengthened by the growing adoption of value-based care. The futures of interoperability and quality measurement go hand in hand, which is why the topic was so important to the diverse participants at the Digital Quality Summit.