Ensuring appropriate use of health data for the right purpose
Trust in a data exchange environment that shares PHI can’t exist without agreement on why and how data can be used.
Here, we’ll discuss a crucial — and often overlooked — aspect of zero-trust data exchange: the right purpose. Understanding, codifying and then enforcing a valid rule set for data exchange ensures that protected health information (PHI) is used in ways that protect patient privacy while building trust.
Clear purpose is critical for trust
To take an example from outside healthcare, the Washington Times reports, “Investigators at the Treasury Inspector General for Tax Administration say they uncovered and referred to the Justice Department 648 cases of unauthorized attempts to access taxpayer data by IRS employees from 2018 through the middle of last year.” This sort of snooping may help explain why only 38 percent of Americans view the IRS favorably.
Trust in a data exchange environment — particularly when discussing PHI — can’t exist without an agreed-upon, enforceable set of purposes that detail both why and how data can be used.
And before those of us in healthcare beat our collective chests, remember that the industry’s misuse of data is no less salacious. HIPAA penalties have been handed out for everything from snooping on celebrities’ medical records to social media giants selling PHI to advertisers.
Regulators take establishing this level of trust seriously. Last year, the Office for Civil Rights and the Federal Trade Commission issued a joint letter to more than 100 health systems, warning them of the privacy and security problems related to sharing the wrong data with advertisers. Breaches of purpose in data usage have led to numerous lawsuits and security concerns around PHI in recent years.
Real-world misuse of data
Let’s consider a couple of real-world examples. A well-known hospital system was recently fined for using patient information gathered during the scheduling process to market elective procedures to patients. While the data itself was accurate, the use was not authorized under the patient's original consent. This is a textbook case of violating "The Right Purpose."
Or think back to the headlines about Cambridge Analytica. Although not healthcare-specific, that scandal revealed how personal data was repurposed for political profiling without first seeking user consent. Unauthorized data repurposing often has far more severe implications, such as identity theft or extortion. When patients are asked why they entrust sensitive information to doctors, there’s typically only one overarching reason — to improve their health.
The unfortunate reality is that, without proper controls, this data can be — and often is — used for other purposes without authorization. In a world in which large language models ingest mass amounts of data indiscriminately, it’s no longer just nice to have a system to ensure data’s purpose; it’s a necessity.
Defining the right purpose
In healthcare, the "right purpose" can vary depending on the situation but generally falls within a set of well-defined categories – treatment, payment or healthcare operations (TPO), as set out by HIPAA. When a patient receives treatment, the right purpose is clear — a clinician needs access to the patient’s medical history to make informed decisions. Similarly, billing departments need certain data for processing claims. However, when data is requested for research, marketing or even quality improvement, the lines start to blur.
To have the "right purpose," data access must meet these key conditions.
Specific. The purpose must be narrowly defined, not a blanket permission. Reasons like "quality improvement" are often too vague; something like "reviewing post-op infection rates" is better because it’s specific.
Approved. The purpose should align with regulatory guidelines, institutional policies and, whenever needed, patient consent.
Auditable. Every instance of data access for a particular purpose must be traceable, enabling transparency and accountability. In other words, an organization also needs data provenance.
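The three conditions above can be sketched as a small access-check data structure. This is a minimal illustration, not any real ACDC or HIPAA-mandated schema; the purpose vocabulary, class and field names are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical vocabulary of narrowly scoped purposes (illustrative only).
# Note the dotted sub-purposes: a bare category like "operations" is too vague.
APPROVED_PURPOSES = {
    "treatment.clinical-decision",
    "payment.claims-processing",
    "operations.post-op-infection-review",
}

@dataclass
class PurposeRequest:
    requester: str
    purpose: str
    audit_log: list = field(default_factory=list)

    def is_specific(self) -> bool:
        # Require a sub-purpose; reject blanket categories.
        return "." in self.purpose

    def is_approved(self) -> bool:
        return self.purpose in APPROVED_PURPOSES

    def authorize(self) -> bool:
        allowed = self.is_specific() and self.is_approved()
        # Auditable: every decision is recorded, whether allowed or denied.
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(),
             self.requester, self.purpose, allowed)
        )
        return allowed

req = PurposeRequest("dr.smith", "operations.post-op-infection-review")
print(req.authorize())   # True: specific and on the approved list

vague = PurposeRequest("analyst1", "operations")
print(vague.authorize())  # False: too vague to qualify
```

Even the denied request leaves an audit entry, which is the point of the third condition: the trail exists regardless of the outcome.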
Enforcing with technology and governance
Traditionally, data use policies have relied heavily on agreements and institutional governance. While these are important, they’re not foolproof. An attorney friend of mine recently opined, “I’ve seen 800-page legal documents that everyone ignores, and I’ve watched back-of-the-napkin sketches that have launched companies.” That suggests that a written policy is only as good as the people enacting it.
In practice, monitoring data’s uses and purposes often becomes a laborious manual process. What’s needed is a mechanism to verify the intended use of data automatically, every time it’s accessed. That’s where the open-source protocol called authentic chained data containers (ACDCs) comes in.
ACDCs enable organizations to encapsulate health data with embedded permissions, including its right purpose. When data is exchanged, the receiving entity can check the ACDC to verify that the purpose of use is specific, approved and auditable. Without this cryptographic proof, the transaction is barred. So once allowed purposes are established, ACDCs automate the enforcement.
For example, if researchers want to access patient records for a study, they would need a cryptographic credential specifying that their access is, for example, "permitted research approved by our institutional review board." If this purpose doesn’t match the one encoded in the data’s ACDC, access is automatically denied.
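The flow can be sketched as follows. Real ACDCs use chained, asymmetric signatures from the KERI ecosystem; to keep this example self-contained, an HMAC stands in for the cryptographic proof, and the field names are illustrative rather than taken from the ACDC specification.

```python
import hashlib
import hmac
import json

# Stand-in for the issuer's signing key (in real ACDCs, asymmetric keys
# anchored in KERI would be used instead of a shared secret).
ISSUER_KEY = b"institutional-review-board-secret"

def seal(payload: dict) -> dict:
    """Bind a payload (including its permitted purpose) to a proof."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_access(container: dict, requested_purpose: str) -> bool:
    """Bar the transaction unless the proof holds AND the purpose matches."""
    body = json.dumps(container["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, container["sig"]):
        return False  # tampered container: no valid proof, access barred
    return container["payload"]["purpose"] == requested_purpose

record = seal({"patient": "pt-001", "purpose": "irb-approved-research"})
print(verify_access(record, "irb-approved-research"))  # True
print(verify_access(record, "marketing"))              # False: purpose mismatch
```

The key property is that the purpose travels with the data itself: a receiver can neither strip it nor swap it without invalidating the proof.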
This approach eliminates the need for third-party assertions, reducing the reliance on traditional access controls, which are prone to human error. Instead, it offers a decentralized, zero-trust model that assures data integrity and purpose in every transaction.
Purpose drift and how to prevent it
A major concern in data security is purpose drift, a situation in which data gradually starts being used for reasons outside its original intent. This often happens when data is shared across departments or organizations. Particularly given the long shelf life of PHI, the risk that data drifts out of its initially approved use cases into other avenues is high.
To combat purpose drift, organizations must employ both policy and technology.
Role-based access control. As discussed in our previous article, defining roles is crucial. Only those with the right role and the right purpose should access data.
Regular audits. Healthcare entities must conduct periodic audits to ensure that all data access aligns with its stated purpose. These audits can be automated through cryptographic logging, making it easier to identify purpose violations.
Clear consent protocols. Patients should have clear, understandable ways to consent to specific purposes of data use. More importantly, technology like ACDCs should encode this consent directly into the data itself.
The right purpose is the linchpin of secure health data exchange. By defining, enforcing and regularly auditing the uses of health data, healthcare entities can create an environment in which patient information is not just protected but used ethically. Cryptographic protocols like ACDCs offer a practical, zero-trust way to enforce purpose verification every time data is accessed, building a foundation for trustworthy healthcare interactions.
As we continue to unpack the Five Rights of Secure Health Data, our next and final article will focus on the right route, exploring the importance of secure channels for data transactions.
Jared Jeffery, FACHDM is CEO of healthKERI and Phil Feairheller, FACHDM is CTO of healthKERI.