Machine learning can locate wrist fractures in radiographs
AI algorithms can quickly detect and localize wrist fractures in X-ray images, which can augment the work of harried emergency physicians and radiologists.
Missing a fracture on an emergency department (ED) radiograph is one of the most common causes of diagnostic errors and subsequent litigation. Such errors stem from clinical inexperience, distraction, fatigue, poor viewing conditions and time pressures.
The study authors, from the National University of Singapore, hypothesized that automated analysis using artificial intelligence (AI) would be “invaluable” in reducing these misreadings and that an object detection convolutional neural network (CNN) would work better than other CNNs.
Object detection CNNs are extensions of image classification models that not only recognize and classify objects in images but also localize the position of each object.
The researchers theorized that an object detection CNN could be used to identify and localize fractures on wrist radiographs by treating a fracture as an object.
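For readers who want a concrete picture of that idea, here is a minimal sketch using an off-the-shelf detector from torchvision. The study's actual architecture, preprocessing and thresholds are not described in this article, so every name and parameter below is illustrative rather than the authors' method.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Illustrative only: a generic Faster R-CNN detector configured so that
# "fracture" is the single foreground class (class 0 is background).
# The study's actual network and training details are not given here.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

# Stand-in for one preprocessed wrist radiograph (3-channel float tensor).
radiograph = torch.rand(3, 512, 512)

with torch.no_grad():
    prediction = model([radiograph])[0]

# Each detection is a bounding box plus a confidence score; boxes above a
# chosen threshold would be flagged as suspected fractures on the image.
for box, score in zip(prediction["boxes"], prediction["scores"]):
    if score > 0.5:
        print(f"suspected fracture at {box.tolist()} (score {score:.2f})")
```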
They trained an object detection CNN on 7,356 wrist radiograph studies from a hospital picture archiving and communication system. Radiologists annotated all of the radius and ulna fractures in the images with bounding boxes.
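In this kind of workflow, each label pairs an image with the pixel coordinates of a rectangle enclosing a fracture. A hypothetical annotation record, in a common COCO-like style (the article does not describe the study's actual schema), might look like this:

```python
# Hypothetical annotation record; field names and values are assumptions.
annotation = {
    "image_id": "wrist_0001_frontal",    # which radiograph in the study
    "label": "fracture",                 # the single object class
    "bbox": [212.0, 340.0, 58.0, 41.0],  # [x_min, y_min, width, height] in pixels
}
```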
The CNN model was then tested on an unseen set of 524 ED wrist radiographic studies, with readings from two radiologists serving as the reference standard.
The model outperformed previously reported deep learning work on orthopedic radiographs.
It detected and localized radius and ulna fractures on wrist radiographs with high sensitivity at the per-fracture (frontal, 91.2 percent; lateral, 96.3 percent), per-image (frontal, 95.7 percent; lateral, 96.7 percent) and per-study (98.1 percent) levels, despite the relatively modest size of the training dataset.
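At each of those levels, sensitivity is simply the share of true positives recovered: per fracture, every annotated fracture counts separately; per image, any correct hit on the image counts; per study, any correct hit across the study's images counts. A quick sketch with made-up counts (not the study's data):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall) = TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts only, not figures from the study.
print(f"per-fracture: {sensitivity(90, 10):.1%}")  # each annotated fracture
print(f"per-image:    {sensitivity(95, 5):.1%}")   # at least one hit per image
print(f"per-study:    {sensitivity(98, 2):.1%}")   # at least one hit per study
```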
There was also no significant difference in the algorithm’s performance between adult and pediatric images, or between wrists imaged with and without a cast. The model was more sensitive to displaced fractures.
The mean processing time per test image was only 0.18 seconds.
The researchers attributed their results in part to training the model solely on wrist radiographs, and to having radiologists manually check and annotate the images rather than relying on automated annotation, which made for more accurate data labeling.
The study was reported in the journal Radiology: Artificial Intelligence.
“The ability to predict location information of abnormality with deep neural networks is an important step toward developing clinically useful artificial intelligence tools to augment radiologist reporting,” the study authors concluded.