What are the challenges faced by AI-based digital health products?

There is a massive opportunity for predictive analytics and Artificial Intelligence (AI) techniques to transform healthcare. A quick PubMed search for ‘analytics in healthcare’ yields close to 22K articles. However, despite this volume of published research, there are comparatively few thoroughly validated AI-based digital health products and services in the real world! Translating research into innovative clinical products and services requires addressing the following key issues:

Technical
Adequate training: All stakeholders (physicians and nurses, hospital administrators, medical students, biomedical engineers, data scientists, industry partners, etc.) should have at least the essential competencies in healthcare and medical data mining concepts and principles to comprehend, validate, and effectively use digital health solutions in practice.


Predictive model safety and efficacy: If the solution uses predictive models, then to ensure efficacy these models should be developed with (i) reliable and valid datasets, (ii) an adequate number of samples, and (iii) robust and repeatable algorithms and analytics frameworks. To ensure safety, the models should be validated with extensive clinical trials and real-world data, and updated regularly with new data.
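Repeatability, point (iii) above, can be verified mechanically: the same seed must always produce the same data splits, so any reported accuracy can be reproduced. A minimal sketch (the function name and fold scheme are illustrative, not from any specific framework):

```python
import random

def kfold_indices(n_samples, k, seed):
    """Deterministically shuffle sample indices and split them into k folds."""
    rng = random.Random(seed)          # fixed seed -> repeatable splits
    idx = list(range(n_samples))
    rng.shuffle(idx)
    fold_size = n_samples // k
    return [idx[i * fold_size:(i + 1) * fold_size] for i in range(k)]

# Repeatability check: identical seeds must yield identical folds,
# so any accuracy estimate built on them is reproducible.
folds_a = kfold_indices(100, 5, seed=42)
folds_b = kfold_indices(100, 5, seed=42)
assert folds_a == folds_b
```

Pinning seeds (and library versions) in this way is a small step, but it makes independent validation of a published model possible at all.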


Model transparency: Full disclosure of the data and algorithms used in model building is generally not possible due to intellectual property protection and the risk of cybersecurity issues. However, there should be transparency around the type of data used and the bias in training data (gender, socio-economic factors, demographics, etc.). More comprehensible models could also be developed so that there is some level of understanding of why a particular set of features leads to better predictions. Explainability is key.
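The kind of training-data bias disclosure described above can be as simple as reporting group shares for each demographic attribute. A hedged sketch (the record format and attribute name are hypothetical):

```python
from collections import Counter

def demographic_report(records, attribute):
    """Return the share of each group for one demographic attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training records with a 'sex' field.
training_data = [
    {"sex": "F"}, {"sex": "F"}, {"sex": "F"}, {"sex": "M"},
]
shares = demographic_report(training_data, "sex")
# A large imbalance (here 0.75 F vs 0.25 M) should be disclosed
# alongside the model, even if the model internals stay proprietary.
```

Publishing such summaries costs nothing in intellectual property terms, yet lets clinicians judge whether a model was trained on a population resembling their own patients.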

Regulatory
In the USA, the Food and Drug Administration (FDA) regulates medical devices. Only in September 2021 did the FDA publish an AI/ML Software as a Medical Device (SaMD) action plan (https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device). More effort is still needed to establish the regulatory framework for this type of product. Studies have also argued that, since there is substantial human involvement in the decision-making process around these products, the FDA should regulate SaMD not as standalone products but as overall systems, taking human factors into account.

Ethical
Informed consent: There is a need to determine when clinicians should deploy principles of informed consent around AI: the type of data used, the nature of the algorithms involved, the extent of AI use alongside traditional clinical practice for decision support, patient data privacy and security, etc. For medical apps, it is essential to frame ethically responsible user agreements that could be quite similar to informed consent documents.

Patient privacy: It is important to be transparent about how patient data are used and who owns them. Steps should be taken to ensure that patient data are not used for purposes other than the intended medical use and discussions with the physician.
