As artificial intelligence and machine learning (AI/ML) algorithms are adopted, we must ask what these models are made of. Many neural networks operate as black boxes: we give a model an accuracy objective and training data, and the public must simply hope that the resulting model is ‘correct’. By all accounts, such a model may be mathematically accurate and precise with respect to its training sets, yet it rarely accounts for algorithmic bias [1]–[3]. Model performance is often assessed using data that is easily available rather than data that reflects the target population in which the model will actually be used. As AI/ML models are increasingly used in clinical settings, clinicians must know how, when, and, just as importantly, how not and when not to incorporate model output into clinical decisions. Utilizing current models as they exist now, however, may pose problems.
While machine learning engineers may be aware of these issues, health professionals may use a model uncritically, assuming that the computer scientists who created the algorithm also accounted for the unique risks posed by each patient. They may assume that risk has been minimized by analyzing health records the way a doctor would, which is not the case; models tend to treat patient health and patient cost as synonymous [4]. Clinicians must be aware of key risk indicators (KRIs) that estimate the potential risks of using a model, that is, whether it is safe and secure to deploy a given ML model in a specified environment [5]. These KRIs could address the robustness of a machine learning model to random input corruptions, to distributional shifts caused by a changing environment, and to adversarial perturbations. Clinicians and other health-team stakeholders are often unaware of the potential harm to patients that arises from clinical AI/ML systems that do not readily let clinicians work out their patients’ risk. At the same time, machine learning offers opportunities to improve accuracy by exploiting complex interactions between risk factors and can already streamline hospital professionals’ workflows.
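As an illustration of how such KRIs might be quantified, the sketch below (hypothetical, not drawn from [5]) fits a simple classifier on synthetic data and reports how much its AUC degrades under random input corruption and under a simulated distributional shift; the dataset, noise levels, and variable names are assumptions made for illustration only.

```python
# Minimal sketch (illustrative, not from the cited work): probing two robustness
# KRIs for a fitted classifier -- AUC degradation under random input corruption
# and under a simple simulated distributional shift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
baseline_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# KRI 1: robustness to random input corruption (additive Gaussian noise).
noisy = X_test + rng.normal(scale=0.5, size=X_test.shape)
noise_auc = roc_auc_score(y_test, model.predict_proba(noisy)[:, 1])

# KRI 2: robustness to a distributional shift (a systematic offset in one
# feature, e.g. a re-calibrated measurement device).
shifted = X_test.copy()
shifted[:, 0] += 1.0
shift_auc = roc_auc_score(y_test, model.predict_proba(shifted)[:, 1])

print(f"baseline AUC {baseline_auc:.3f}, "
      f"drop under noise {baseline_auc - noise_auc:.3f}, "
      f"drop under shift {baseline_auc - shift_auc:.3f}")
```

Large AUC drops under such perturbations would signal that a model is not yet safe to deploy in the target environment, even if its headline accuracy is high.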
Take, for example, assessing a patient’s 10-year risk of cardiovascular disease [6]. In 2017, researchers analyzed 378,256 patients from UK family practices who were free from cardiovascular disease at the outset. Participants were between 30 and 84 years of age at the start of the study and had complete data for eight core baseline variables: sex, age, smoking status, systolic blood pressure, blood pressure treatment, total cholesterol, HDL cholesterol, and diabetes. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, and neural networks) were compared with an established algorithm (the American College of Cardiology guidelines) for predicting the first cardiovascular event over 10 years. Predictive accuracy was assessed by the area under the receiver operating characteristic curve (AUC), together with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) at the 7.5% cardiovascular risk threshold used for initiating statins. There were 24,970 recorded cardiovascular events (6.6%). Machine learning significantly improved the accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment while avoiding unnecessary treatment of others. However, it can also oversimplify complex relationships in health data: imagine patients with asthma being inappropriately treated on the basis of this model, since asthma was not among the factors assessed.
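The sketch below shows, in hedged form, the kind of threshold-based evaluation described above; it is not the study’s code, and the synthetic cohort, event rate, and model choices are assumptions used only to illustrate computing AUC, sensitivity, specificity, PPV, and NPV at a 7.5% predicted-risk threshold.

```python
# Illustrative sketch (not the study's code): compare two candidate models by
# AUC and by sensitivity, specificity, PPV, and NPV at a 7.5% predicted-risk
# threshold (the statin-initiation cut-off mentioned above).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in cohort with roughly a 7% event rate (an assumption).
X, y = make_classification(n_samples=10000, n_features=8, weights=[0.93],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

def report(name, model, threshold=0.075):
    """Print AUC and classification metrics at the chosen risk threshold."""
    risk = model.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_test,
                                      (risk >= threshold).astype(int)).ravel()
    print(f"{name}: AUC={roc_auc_score(y_test, risk):.3f} "
          f"sens={tp / (tp + fn):.3f} spec={tn / (tn + fp):.3f} "
          f"PPV={tp / (tp + fp):.3f} NPV={tn / (tn + fn):.3f}")

report("logistic regression", LogisticRegression(max_iter=1000))
report("random forest", RandomForestClassifier(n_estimators=200, random_state=0))
```

Reporting threshold-based metrics alongside AUC matters clinically because the 7.5% cut-off, not the ranking quality alone, determines who is offered preventive treatment.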
In 2015, the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement was released to improve the reporting of prediction models in the published literature. It provides a checklist of 22 items, deemed essential for transparent reporting, covering studies that develop, validate, or update a prediction model for either diagnostic or prognostic purposes [7]. Furthering accessible risk communication, Sendak et al. have pushed to create “nutrition” labels for medical algorithms [8]. The “Model Facts” label is an interdisciplinary effort by developers, clinicians, and regulatory experts, aimed at clinicians who make decisions supported by a machine learning model. It follows risk communication as defined by the US FDA, “the term of art used for situations when people need good information to make sound choices,” likening the model’s contents to a food label. Model Facts is a one-page document that aims to present key information to the end user even when it is not immediately clear that a model was involved. By understanding the objectives of the model designed by ML scientists and engineers, clinicians and other end users will be able to make better judgments for their patients.
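To make the idea concrete, the sketch below shows one hypothetical way such label information might be captured as a structured record; the field names and example values are illustrative assumptions, not the published Model Facts template from [8].

```python
# Hypothetical sketch of the kind of fields a "Model Facts"-style label might
# gather in code. Field names and values are illustrative, not the published
# template.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelFactsLabel:
    model_name: str
    version: str
    intended_use: str        # clinical question and target population
    training_data: str       # source, time period, and locale
    outcome_predicted: str
    performance_summary: str # e.g. AUC and threshold-based metrics
    warnings: List[str] = field(default_factory=list)  # known failure modes

label = ModelFactsLabel(
    model_name="10-year cardiovascular risk model",
    version="0.1 (illustrative)",
    intended_use="Adults aged 30-84 free of cardiovascular disease at baseline",
    training_data="UK family-practice records (hypothetical extract)",
    outcome_predicted="First cardiovascular event within 10 years",
    performance_summary="AUC plus sensitivity/specificity at the 7.5% threshold",
    warnings=["Comorbidities such as asthma were not modelled",
              "Do not apply outside the validated population"],
)
print(label)
```

Keeping such a record alongside a deployed model gives clinicians a single place to check what the model was built to do and where it should not be trusted.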
Risk assessment in healthcare is a continuous process [9]. Confronting the challenge of integrating machine learning into healthcare rests on deep, consistent collaboration between clinicians, machine learning engineers, designers, and patients. Making our models transparent to end users who are not specialized in AI/ML is imperative for developing risk assessment guidance for our most vulnerable populations.