How best to manage artificial intelligence in radiology
The rapid growth in the number of algorithms is building the case for vendor neutral AI solutions in imaging.
For well over a decade, vendor neutral archive (VNA) solutions have been available to provide a shared multi-department, multi-facility repository and integration point for healthcare enterprises.
Organizations employing these systems, often in conjunction with an enterprise-wide electronic medical record (EMR) system, typically benefit from a reduction in complexity compared with managing disparate archives for each site and department. These organizations can invest their IT dollars in ensuring that the system is fast and provides maximum uptime, using on-premises or cloud deployments. Such a system can also act as a central, managed broker for interoperability with other enterprises.
The ability to standardize on the format, metadata structure, quality of data (completeness and consistency of data across records, driven by organizational policy), and interfaces for storage, discovery and access of records is much more feasible with a single centrally managed system. Ensuring adherence to healthcare IT standards, such as HL7 and DICOM, for all imaging records across the enterprise is possible with a shared repository that has mature data analytics capabilities and quality control tools.
The same benefits of centralization and standardization of interfaces and data structures that VNA solutions provide are applicable to artificial intelligence solutions, in the form of a vendor neutral artificial intelligence (VNAi) approach. This is not to say that a VNAi solution must also be a VNA (though it could be), just that they are both intended to be open and shared resources that provide services to several connected systems.
Without a shared, centrally managed solution, healthcare enterprises run the risk of deploying a multitude of vendor-proprietary systems, each with a narrow set of functions. Each of these systems would require its own integration with data sources and consumer systems, its own user interfaces for configuration and support, and potentially a different platform to operate on.
The question, at its essence, is this: do we want to repeat the historic challenges and costs of managing disparate image archives as we implement AI capabilities in the enterprise? Not surprisingly, the answer is no.
The following capabilities are important for a VNAi solution.
Interfaces. Flexible, well-documented and supported interfaces for both imaging and clinical data are required. Standards should be supported, where they exist. Where standards do not exist, good design principles, such as the use of REST APIs and support for IT security best practices, should be adhered to. Connections to, or inclusion of, other sub-processes—such as optical character recognition (OCR) and natural language processing (NLP)—may be necessary to extract and preprocess unstructured data before use by AI algorithms.
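To make that concrete, here is a minimal sketch of what a REST-style submission interface might look like, written in Python with Flask. The endpoint path, payload fields and run_algorithm dispatcher are illustrative assumptions, not part of any published VNAi specification.

```python
# Hypothetical REST interface for submitting a study to a VNAi algorithm.
# Endpoint path, payload fields and run_algorithm() are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_algorithm(algorithm_id: str, study_uid: str) -> dict:
    """Placeholder dispatcher; a real VNAi would queue the job against
    a registered plug-in and return a job handle for status polling."""
    return {"job_id": "job-0001", "algorithm": algorithm_id, "study": study_uid}

@app.route("/v1/studies/<study_uid>/analyses", methods=["POST"])
def request_analysis(study_uid):
    body = request.get_json(force=True)
    algorithm_id = body.get("algorithm")
    if not algorithm_id:
        return jsonify({"error": "missing 'algorithm' field"}), 400
    job = run_algorithm(algorithm_id, study_uid)
    return jsonify(job), 202  # accepted for asynchronous processing

if __name__ == "__main__":
    app.run(port=8080)
```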
Data format support. The data both coming in and going out will vary, and a VNAi will need to support a wide range of data formats (including multimedia formats), with the ability to process that data for use in its algorithms. The more parsing and preprocessing the VNAi can perform itself, the less each algorithm will need to handle. A method to anonymize some inbound or outbound data, based on configurable rules, may also be required.
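A rules-driven anonymization step could be as simple as the following sketch, in which each metadata field is kept, hashed or removed according to a configurable table; the field names and rule actions are hypothetical.

```python
# Rule-driven anonymization of record metadata (illustrative field names).
# Actions: "remove" drops a field, "hash" replaces it with a digest, "keep" passes it through.
import hashlib

ANON_RULES = {
    "PatientName": "remove",
    "PatientID": "hash",   # preserves linkability without exposing the identifier
    "StudyDate": "keep",
    "Modality": "keep",
}

def anonymize(record: dict, rules: dict = ANON_RULES) -> dict:
    out = {}
    for field, value in record.items():
        action = rules.get(field, "remove")  # default: remove unknown fields
        if action == "keep":
            out[field] = value
        elif action == "hash":
            out[field] = hashlib.sha256(str(value).encode()).hexdigest()[:16]
        # "remove": omit the field entirely
    return out

print(anonymize({"PatientName": "DOE^JANE", "PatientID": "12345",
                 "StudyDate": "20240105", "Modality": "CT"}))
```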
Processor plug-in framework. To provide consistent and reliable services to algorithms, which could be written in different programming languages or run on different hosts, the VNAi needs a well-documented, tested and supported framework for plugging in algorithms for use by connected systems. Methods to manage the state of a plug-in across test, production and disabled modes, as well as revision controls, will be valuable.
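One plausible shape for such a framework, sketched below under assumed names, is an abstract plug-in contract that carries a version string and a lifecycle state, with a dispatcher that refuses to run disabled plug-ins.

```python
# Sketch of a plug-in contract with lifecycle state and revision tracking.
# The interface, state names and example plug-in are assumptions for illustration.
from abc import ABC, abstractmethod
from enum import Enum

class PluginState(Enum):
    TEST = "test"
    PRODUCTION = "production"
    DISABLED = "disabled"

class AlgorithmPlugin(ABC):
    name: str
    version: str                      # revision-control hook
    state: PluginState = PluginState.TEST

    @abstractmethod
    def process(self, inputs: dict) -> dict:
        """Run the algorithm against preprocessed inputs."""

class ChestXrayTriage(AlgorithmPlugin):
    name, version = "chest-xray-triage", "1.2.0"

    def process(self, inputs: dict) -> dict:
        # A real plug-in would invoke its model here; this stub returns a flag.
        return {"finding_suspected": False, "model_version": self.version}

registry = {p.name: p for p in [ChestXrayTriage()]}

def dispatch(plugin_name: str, inputs: dict) -> dict:
    plugin = registry[plugin_name]
    if plugin.state == PluginState.DISABLED:
        raise RuntimeError(f"{plugin_name} is disabled")
    return plugin.process(inputs)

print(dispatch("chest-xray-triage", {"pixels": None}))
```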
Quality control tools. Automated and manual correction of data inputs and outputs will be required to address inaccurate or incomplete data sets.
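As an illustration, an automated quality gate might look like the following sketch, which flags records with missing or unexpected fields for manual correction; the required fields and modality list are assumptions.

```python
# Sketch of an automated input-quality gate; required fields are illustrative.
REQUIRED_FIELDS = {"StudyInstanceUID", "Modality", "PixelData"}

def quality_check(record: dict) -> list:
    """Return a list of issues; an empty list means the record passes."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    if record.get("Modality") not in {"CT", "MR", "CR", "DX", None}:
        issues.append(f"unexpected modality: {record['Modality']}")
    return issues

record = {"StudyInstanceUID": "1.2.840.1", "Modality": "CT", "PixelData": b"..."}
problems = quality_check(record)
print("queue for manual correction:" if problems else "passed QC", problems)
```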
Logging. Capturing the logic and variables used in AI processes will be important to retrospectively assess their success and to identify data generated by processes that prove over time to be flawed.
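A structured audit log along these lines, sketched below with assumed field names, would record which algorithm revision ran and what data it saw, so that output from a revision later found to be flawed can be traced.

```python
# Sketch of structured run logging so results can be audited retrospectively.
# The logged fields are assumptions about what a VNAi might need to capture.
import json, logging, time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("vnai.audit")

def log_run(algorithm: str, version: str, inputs: dict, output: dict) -> None:
    log.info(json.dumps({
        "timestamp": time.time(),
        "algorithm": algorithm,
        "algorithm_version": version,   # lets flawed revisions be traced later
        "input_keys": sorted(inputs),   # records which data the run saw
        "output": output,
    }))

log_run("chest-xray-triage", "1.2.0",
        {"StudyInstanceUID": "1.2.840.1"}, {"finding_suspected": False})
```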
Data analytics. For both business stakeholders (people) and connected applications (software), the ability to use data to measure success and predict outcomes will be essential.
Data persistence rules. Like other applications that rely on data as input, the VNAi will need configurable rules that determine how long defined sets of data are persisted, and when they are purged.
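A minimal sketch of such rules, with hypothetical data classes and retention periods, might look like this:

```python
# Sketch of configurable retention rules; classes and periods are illustrative.
from datetime import datetime, timedelta

RETENTION = {
    "intermediate_results": timedelta(days=30),
    "audit_logs": timedelta(days=365 * 7),
    "inbound_copies": timedelta(days=7),
}

def purge(records: list, now: datetime = None) -> list:
    """Keep only records younger than the retention period for their class;
    records of unknown classes are purged immediately."""
    now = now or datetime.utcnow()
    return [r for r in records
            if now - r["created"] <= RETENTION.get(r["class"], timedelta(0))]

records = [
    {"class": "inbound_copies", "created": datetime(2024, 1, 1)},
    {"class": "audit_logs", "created": datetime(2024, 1, 1)},
]
print(purge(records, now=datetime(2024, 2, 1)))  # the inbound copy is purged
```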
Performance. The VNAi will need to be able to quickly process large data sets at peak loads, even with highly complex algorithms. Dynamically assigning IT resources (compute, network, storage and the like) within minutes, not hours or days, may be necessary.
Deployment flexibility. Some organizations will want their VNAi in the cloud; others will want it on-premises. Some may prefer a hybrid approach, in which learning and testing are done on-premises but production processing is done in the cloud.
High availability (HA), business continuity (BC) and system monitoring. Like any critical system, uptime is important. The ability for the VNAi to be deployed in an HA/BC configuration, with monitoring to detect problems early, will be essential.
Multi-tenant data segmentation and access controls. A shared VNAi reduces the effort to build and maintain the system, but shared use requires segmentation and access controls that ensure data is visible only to authorized parties and systems.
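A simple sketch of tenant-scoped authorization, with hypothetical principals and grants, could look like this:

```python
# Sketch of tenant-scoped access control; principals and grants are illustrative.
GRANTS = {
    # (principal, tenant) pairs mapped to the actions they are allowed
    ("pacs-north", "north_hospital"): {"read", "write"},
    ("research-portal", "north_hospital"): {"read"},
}

def authorize(principal: str, tenant: str, action: str) -> bool:
    return action in GRANTS.get((principal, tenant), set())

assert authorize("research-portal", "north_hospital", "read")
assert not authorize("research-portal", "south_clinic", "read")  # another tenant's data stays invisible
```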
Cost sharing. Although this is not a technical characteristic, the VNAi solution will likely require the ability to share system build and operating costs among participating organizations. Methods to identify usage of specific functions and algorithms, so that licensing revenues can be allocated, would be very helpful.
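As a rough illustration (with hypothetical tenants, algorithms and fees), usage could be metered per tenant and per algorithm, and a license fee allocated proportionally:

```python
# Sketch of per-tenant, per-algorithm usage metering for cost allocation.
from collections import Counter

usage = Counter()

def record_use(tenant: str, algorithm: str) -> None:
    usage[(tenant, algorithm)] += 1

for _ in range(3):
    record_use("north_hospital", "chest-xray-triage")
record_use("south_clinic", "chest-xray-triage")

# Allocate a (hypothetical) license fee proportionally to recorded usage.
fee = 1000.00
total = sum(usage.values())
for (tenant, algo), n in usage.items():
    print(f"{tenant} owes {fee * n / total:.2f} for {algo}")
```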
Effective technical support. A VNAi can be a complex ecosystem with variable uses, data inputs and outputs. If the system is actively learning, its behavior on one day may differ from its behavior on another. Supporting such a system will, in many cases, require support staff with developer-level skills.