Artificial intelligence (AI) systems are developing rapidly, and their applications are expanding across all sectors. In healthcare, AI-driven automation is being implemented in medical and hospital centers and in clinical research centers, supporting assistance and care as well as the analysis of medical images. Current AI systems are based on machine learning (ML), deep learning (DL), and convolutional neural network (CNN) algorithms, whose essential goal is to learn representations of raw data at different levels of processing and abstraction in order to solve complex problems [1]. In the context of medical imaging in particular, the tasks performed by AI models include [1-3]:
- Classification of pathologies;
- Anatomical segmentation: cerebral volumes and related areas;
- Detection of typical and atypical lesions: osteoarticular fractures, pulmonary nodules, pulmonary embolism, pneumothorax, cardiomegaly, aortic dissection/aneurysm, breast lesions, and oncological follow-up;
- Quantification of pathological biomarkers, such as iron quantification by magnetic resonance imaging;
- Automation of care processes: peer review within the health team, automated quality control for radiologists, and the use of natural language processing to review reports of imaging findings, such as pulmonary nodules.
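Most of these tasks build on CNNs, whose core operation is the convolution of an image with small filters. As a minimal, framework-free illustration (the synthetic image and the hand-crafted edge filter below are hypothetical; a real CNN learns its filters from data), the sketch applies a vertical-edge filter to a tiny image, the kind of low-level feature early CNN layers extract:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Synthetic 8x8 "image": dark left half, bright right half (a vertical edge).
image = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]

# Hand-crafted vertical-edge filter (Sobel-like); a CNN learns such filters.
kernel = [[-1.0, 0.0, 1.0],
          [-2.0, 0.0, 2.0],
          [-1.0, 0.0, 1.0]]

response = conv2d(image, kernel)
peak = max(max(row) for row in response)  # strongest where the edge sits
flat = min(min(row) for row in response)  # zero in uniform regions
```

Stacking many learned filters, nonlinearities, and pooling over such responses is what lets CNNs perform the classification, segmentation, and detection tasks listed above.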
Owing to recent advances and the significant potential of AI models in medical imaging, in 2017/2018 there were about 106 AI products related to radiology [4]. Currently, about 200 AI medical products in radiology have been cleared by the United States Food and Drug Administration (FDA) [5]. Despite these encouraging advances, however, AI models still face considerable ethical and regulatory challenges. Recent research has shown that AI models can be biased against subpopulations defined by age, race, sex or gender, socioeconomic status, and other characteristics [6].
The potential for unethical behavior of AI models, as addressed by the International Council for Harmonisation guideline for good clinical practice (ICH E6), moderates the rate at which AI systems are implemented [7]. Recent regulatory efforts related to AI in medical imaging concern the maintenance of protected health information (PHI), including metadata protection even after three-dimensional reconstruction models are applied. Another considerable challenge is data ownership: patients may consent to the use of their data exclusively for clinical and scientific research or for care [8]. Other challenges faced in conducting AI research and development in medical imaging include:
The need for performance metrics that account for [9]:
- The clinical context;
- Pre-processing and labeling of large amounts of data;
- The application and development of reproducible algorithms (inter- and intra-institutional).
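Evaluating models in their clinical context typically means reporting metrics per subpopulation rather than only in aggregate, since aggregate figures can hide subgroup disparities. A minimal sketch, using hypothetical toy labels and predictions, that stratifies sensitivity and specificity by a protected attribute:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

def per_group_metrics(y_true, y_pred, groups):
    """Stratify the metric by a protected attribute (e.g. sex, age band)."""
    out = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        out[g] = sensitivity_specificity([y_true[i] for i in idx],
                                         [y_pred[i] for i in idx])
    return out

# Hypothetical toy labels/predictions for two subgroups "F" and "M".
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 0, 1]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
metrics = per_group_metrics(y_true, y_pred, groups)
```

Here the two subgroups show opposite error profiles (one loses sensitivity, the other specificity), which an overall accuracy number would mask entirely.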
Finally, a solid body of scientific evidence has proposed solutions for the research, development, and implementation of AI in medical imaging, which can be divided into three stages: before, during, and after algorithm training.
Before training:
- Use of a representative database, analyzed with respect to subpopulations such as age, race/ethnicity, and sex or gender;
- Clinical practice: ensuring that consent is recorded in accordance with the ICH guideline for good clinical practice.
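One way to operationalize the representativeness check before training is to compare the dataset's subgroup proportions against a reference population. A minimal sketch, with hypothetical counts and a hypothetical 10% tolerance:

```python
from collections import Counter

def subgroup_proportions(attribute_values):
    """Fraction of the dataset in each subgroup of one attribute."""
    counts = Counter(attribute_values)
    n = len(attribute_values)
    return {g: c / n for g, c in counts.items()}

def representation_gaps(dataset_props, reference_props):
    """Absolute gap between dataset and reference proportions per subgroup."""
    return {g: abs(dataset_props.get(g, 0.0) - p)
            for g, p in reference_props.items()}

# Hypothetical: sex attribute of 10 imaging studies vs. a 50/50 reference.
sexes = ["F", "F", "F", "F", "F", "F", "F", "M", "M", "M"]
props = subgroup_proportions(sexes)
gaps = representation_gaps(props, {"F": 0.5, "M": 0.5})

# Flag subgroups deviating by more than a chosen tolerance (here 10 points).
flagged = [g for g, gap in gaps.items() if gap > 0.1]
```

The same check extends to any attribute of interest (age bands, race/ethnicity), and flagged gaps can then drive targeted data collection or resampling.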
During training:
- Generative methods: a strategy to increase the number of samples in the database;
- Adversarial methods: improving the primary model so as to increase its performance while reducing a second model's ability to predict attributes considered protected;
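The second strategy above is commonly known as adversarial debiasing: a second (adversary) model tries to predict a protected attribute, and the primary model's objective subtracts the adversary's loss, so the primary model is rewarded when the adversary is reduced to guessing. A minimal sketch of that combined objective only (not a full training loop; the batch values, probabilities, and the weight `lam` are hypothetical):

```python
import math

def bce(y, p):
    """Binary cross-entropy for one example."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def debiasing_objective(task_pairs, adversary_pairs, lam=1.0):
    """Adversarial debiasing objective for the primary model: minimize the
    task loss while MAXIMIZING the adversary's loss at predicting the
    protected attribute (hence the minus sign)."""
    task_loss = sum(bce(y, p) for y, p in task_pairs) / len(task_pairs)
    adv_loss = sum(bce(a, q) for a, q in adversary_pairs) / len(adversary_pairs)
    return task_loss - lam * adv_loss

# Hypothetical batch: (label, predicted prob) for the diagnostic task,
# and (protected attribute, adversary's predicted prob) per example.
task = [(1, 0.9), (0, 0.2)]
adv_good = [(1, 0.9), (0, 0.1)]  # adversary recovers the attribute well
adv_poor = [(1, 0.5), (0, 0.5)]  # adversary reduced to chance
```

With identical task performance, the objective is lower when the adversary fails, so gradient descent on it pushes the primary model's representations away from encoding the protected attribute.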
After training:
- Post-processing of models so that predictions are adequate across the different subgroups;
- Processing capacity for subgroup-level predictions based on large databases.
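Post-processing can be as simple as choosing a separate decision threshold per subgroup so that sensitivity is comparable across them (an equalized-opportunity-style adjustment; this is one common technique, not a method prescribed by the sources above). A minimal sketch with hypothetical model scores for two subgroups whose score distributions are shifted:

```python
def tpr_at(scores, labels, thr):
    """True-positive rate when predicting positive for score >= thr."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    return sum(1 for s in pos if s >= thr) / len(pos)

def pick_threshold(scores, labels, target_tpr, grid=None):
    """Grid-search the threshold whose TPR is closest to the target."""
    grid = grid or [i / 100 for i in range(101)]
    return min(grid, key=lambda t: abs(tpr_at(scores, labels, t) - target_tpr))

# Hypothetical scores: group B's scores are systematically lower than A's.
scores_a = [0.9, 0.8, 0.7, 0.3, 0.2]
labels_a = [1, 1, 1, 0, 0]
scores_b = [0.6, 0.5, 0.4, 0.2, 0.1]
labels_b = [1, 1, 1, 0, 0]

# A single global threshold of 0.5 gives unequal sensitivity across groups.
tpr_a_global = tpr_at(scores_a, labels_a, 0.5)
tpr_b_global = tpr_at(scores_b, labels_b, 0.5)

# A group-specific threshold restores group B's sensitivity.
thr_b = pick_threshold(scores_b, labels_b, 1.0)
```

The trade-off is that lowering a subgroup's threshold also changes its specificity, so both metrics should be re-checked per subgroup after adjustment.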
Beyond these factors, a diverse composition of AI development teams and of clinical and scientific research staff yields greater clinical and demographic (age, race/ethnicity, sex or gender) representativeness in AI models, and therefore greater clinical generalizability of these models.
Citations
[1] Goldenberg SL, Nir G, Salcudean SE. A new era: artificial intelligence and machine learning in prostate cancer. Nat Rev Urol. 2019;16(7):391-403.
[2] Zhou LQ, Wang JY, Yu SY, Wu GG, Wei Q, Deng YB, Wu XL, Cui XW, Dietrich CF. Artificial intelligence in medical imaging of the liver. World J Gastroenterol. 2019;25(6):672-682.
[3] Le EPV, Wang Y, Huang Y, Hickman S, Gilbert FJ. Artificial intelligence in breast imaging. Clin Radiol. 2019;74(5):357-366.
[4] Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp. 2018;2:35.
[5] Lin M. What's needed to bridge the gap between US FDA clearance and real-world use of AI algorithms. Acad Radiol. 2022;29:567-568.
[6] Larrazabal AJ, Nieto N, Peterson V, Milone DH, Ferrante E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc Natl Acad Sci USA. 2020;117:12592-12594.
[7] Seyyed-Kalantari L, Zhang H, McDermott M, Chen IY, Ghassemi M. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat Med. 2021;27:2176-2182.
[8] Ricci Lara MA, Echeveste R, Ferrante E. Addressing fairness in artificial intelligence for medical imaging. Nat Commun. 2022;13(1):4581.
[9] Law M, Seah J, Shih G. Artificial intelligence and medical imaging: applications, challenges and solutions. Med J Aust. 2021;214(10):450-452.e1.