Algorithm can quickly ID cancer outcomes from radiology reports
Deep natural language processing can harness information from radiology reports in electronic health records to rapidly ascertain if a patient has cancer and shifts in the disease.
EHRs hold a tremendous amount of information that can be used to optimize cancer care and drive cancer research. However, unless a cancer patient is in a clinical trial, important clinical endpoints, such as response to therapy, are often recorded as unstructured text rather than encoded as structured data. As a result, it's difficult to extract this information and use it to predict clinical outcomes. Humans can curate the data manually, but that is resource intensive and often impractical.
The researchers, from Boston's Dana-Farber Cancer Institute, hypothesized that a deep learning algorithm could pull data from routinely generated text reports of imaging studies to speed the process of curating relevant clinical outcomes.
They developed and trained a deep learning model to curate clinical outcomes among patients with solid lung tumors, using the imaging reports in the EHRs. For instance, words such as "mass" and "burden" in a radiology report indicate that cancer is present; "increasing" and "metastatic" indicate that the cancer is worsening; "decrease" indicates improvement.
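To make that intuition concrete, here is a minimal, hypothetical sketch in Python. It is not the authors' model: the study trained a deep learning classifier on curated reports, whereas this toy labeler simply checks for the cue words named above, and the names CUE_WORDS and label_report are invented for illustration.

```python
import re

# Hypothetical illustration only: the study used a trained deep learning
# model, not fixed keyword rules. This toy labeler hard-codes the cue
# words mentioned in the article to show the underlying intuition.
CUE_WORDS = {
    "cancer_present": {"mass", "burden"},
    "worsening": {"increasing", "metastatic"},
    "improving": {"decrease", "decreasing"},
}

def label_report(report_text: str) -> dict:
    """Flag coarse outcome signals based on cue words in a radiology report."""
    tokens = set(re.findall(r"[a-z]+", report_text.lower()))
    return {label: bool(tokens & cues) for label, cues in CUE_WORDS.items()}

example = "Increasing metastatic burden, with a new right upper lobe mass."
print(label_report(example))
# {'cancer_present': True, 'worsening': True, 'improving': False}
```

A trained model replaces this keyword lookup with patterns learned from curator-labeled reports, which is what lets it handle negations and paraphrases that fixed word lists would miss.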
They then tested the model in a retrospective study of 2,406 patients with lung cancer. The radiology reports were manually reviewed for 1,112 of the patients.
The deep learning model and the human curation provided similar measurements of clinical outcomes. However, the model was far faster. A human curator can annotate imaging reports for about three patients an hour; at that rate, it would take a single curator about six months to annotate the reports in the study cohort. In contrast, the deep learning model could annotate all of the reports in 10 minutes.
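The time savings are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a 40-hour work week devoted entirely to curation, an assumption not stated in the article.

```python
# Back-of-the-envelope check of the reported speedup. The 40-hour work
# week and uninterrupted curation are assumptions for illustration.
patients = 2406                    # study cohort size
rate_per_hour = 3                  # patients annotated per curator-hour
hours = patients / rate_per_hour   # ~802 curator-hours
months = hours / (40 * 52 / 12)    # ~4.6 months of full-time work
print(f"Manual curation: ~{hours:.0f} hours (~{months:.1f} full-time months)")
print("Deep learning model: ~10 minutes")
```

That lands in the same ballpark as the article's six-month estimate, which presumably allows for a curator's other duties.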
“By reducing the time and expense necessary to review medical records, this technique could substantially accelerate efforts to use real-world data from all patients with cancer to generate evidence regarding effectiveness of treatment approaches and guide decision support,” the study authors stated.
For instance, the information could enable clinicians to match patients to targeted therapies at appropriate times in their disease trajectories, or improve the use of data for precision medicine.
The study was published in JAMA Oncology. “Automated collection of clinically relevant, real-world cancer outcomes from unstructured EHRs appears to be feasible. This technique has the potential to augment capacity for learning from the large population of patients with cancer who receive care outside the clinical trial context,” the researchers concluded.