By Joshua Miller, CEO of Gradient Health, July 2023.
Artificial Intelligence (AI) has undoubtedly revolutionized various industries, driving innovation and efficiency. However, as we venture into the realm of increasingly complex algorithms, the issue of explainability arises. In my area of interest, healthcare AI, the reasoning behind algorithmic decisions is crucial, and the consequences of errors are life-changing.
The idea for this blog came to me when I read an excellent article from Stanford Social Innovation Review (SSIR) on “The Case for Causal AI”. I thought I’d share my take on the need and potential of causal AI in healthcare, and where I see things going next.
The Conundrum of Explainability in AI
Most AI systems can be thought of as “black boxes”: their decision-making processes remain inscrutable. “Causal AI,” in contrast, aims to do away with these black boxes by making the relationships between inputs and outputs explicit.
Traditional statistical methods like principal component analysis (PCA), regression, and Bayesian approaches provide transparent and interpretable results: you can easily trace back the factors that contributed to a specific outcome. However, as AI advances into more sophisticated techniques, such as deep learning with neural networks, models become intricate and challenging to comprehend fully. This lack of transparency raises concerns, especially in sectors where decisions have profound implications for human lives.
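To make the interpretability point concrete, here is a minimal sketch of a simple linear regression fit in closed form. The data are entirely hypothetical (a toy relationship between sodium intake and blood pressure); the point is that the fitted coefficient itself tells you how the model reasons, with nothing hidden.

```python
# A sketch of why a simple regression is interpretable: the fitted
# coefficient directly quantifies how the outcome moves with the input,
# so the "reasoning" is visible in the model itself.

def fit_simple_regression(xs, ys):
    """Ordinary least squares for y = a + b*x, solved in closed form."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    b = cov_xy / var_x          # slope: change in y per unit of x
    a = mean_y - b * mean_x     # intercept
    return a, b

# Toy, hypothetical data: daily sodium (g) vs. systolic blood pressure (mmHg).
sodium_g = [2.0, 2.5, 3.0, 3.5, 4.0]
bp_mmhg = [118, 121, 124, 127, 130]

a, b = fit_simple_regression(sodium_g, bp_mmhg)
print(f"intercept={a:.1f}, slope={b:.1f}")
# The slope is directly readable: in this toy data, each extra gram of
# sodium is associated with 6 mmHg higher blood pressure.
```

A deep neural network fit to the same data would make similar predictions, but no single weight inside it carries this kind of readable meaning.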
AI in E-commerce vs. Healthcare
For e-commerce companies such as Amazon, where the goal is to drive sales, understanding the exact reasons behind a customer’s purchase might not be essential. The primary objective is to encourage a certain buying behavior, and the consequences of mistakes are typically limited to financial losses and opportunity costs. In healthcare, however, the situation is completely different. Algorithms are used to make critical decisions about patient care, treatment plans, and diagnoses. In this world, knowing why a particular decision was reached is critical.
The Perils of Survival Bias
The Stanford article (linked above) highlights the concept of “survival bias” and how it can significantly impact the accuracy of AI algorithms in healthcare.
Briefly, survival bias occurs when conclusions are drawn from incomplete and therefore misleading data. The most famous example comes from Second World War aircraft returning to base riddled with bullet holes. Initially, engineers rushed to add armor to the areas most frequently found damaged on returning aircraft, but survival rates remained unchanged. What they had actually done was armor a map of everywhere an aircraft could be shot and still survive; what they needed to do was protect the gaps in between, i.e., the cockpit, engines, and tail.
In healthcare, high spending (the bullet holes) is often used as a proxy for the need for extra care (the armor). While it may seem reasonable to assume that higher spending indicates greater medical need, this fails to account for underlying disparities and limited access to care in certain communities. As a result, an AI algorithm may make incorrect decisions, furthering inequities in healthcare outcomes.
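A tiny simulation makes the proxy problem visible. All numbers here are hypothetical: two groups of patients are given identical true medical need, but one group's limited access to care suppresses its observed spending, so a model trained on spending would systematically rate that group as healthier.

```python
import random

# Toy simulation (hypothetical numbers): spending as a biased proxy
# for medical need. Both groups draw true need from the SAME
# distribution, but group B's access to care is limited, so the
# spending we actually observe for group B is lower.
random.seed(0)

def average_spending(access_factor, n=10_000):
    """Mean observed spending for patients with identical true need."""
    total = 0.0
    for _ in range(n):
        true_need = random.uniform(0, 10)           # same need distribution
        spending = true_need * 100 * access_factor  # access caps spending
        total += spending
    return total / n

avg_a = average_spending(access_factor=1.0)  # full access to care
avg_b = average_spending(access_factor=0.6)  # limited access to care

# An algorithm that ranks "need" by observed spending would conclude
# group B needs less care, even though true need is identical by
# construction -- the healthcare analogue of armoring the bullet holes.
print(avg_a > avg_b)
```

The label (spending) is only an imprint of who could access care, just as the bullet holes were only an imprint of which planes made it home.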
Striking the Balance with Causal AI
Causal AI emerges as a potential solution to the challenge of explainability in healthcare algorithms. It aims to uncover cause-and-effect relationships within data, providing a more transparent understanding of how decisions are made. By incorporating causal reasoning into AI models, researchers and healthcare professionals could gain new insights into the factors that lead to specific outcomes.
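As one concrete illustration of the kind of reasoning causal methods make explicit, here is a sketch of confounder adjustment by stratification. The numbers are patterned on the classic kidney-stone example of Simpson's paradox (used here purely as an illustration, not as the SSIR article's example): condition severity influences both which treatment a patient receives and how likely they are to recover, so the naive pooled comparison gets the direction of the effect wrong.

```python
# Illustrative numbers patterned on the classic kidney-stone dataset:
# severity confounds treatment choice (severe cases get the treatment
# more often AND recover less often, dragging its pooled rate down).
strata = {
    # severity: {arm: (recovered, total)}
    "mild":   {"treated": (81, 87),   "untreated": (234, 270)},
    "severe": {"treated": (192, 263), "untreated": (55, 80)},
}

def naive_rate(arm):
    """Pooled recovery rate, ignoring severity entirely."""
    rec = sum(strata[s][arm][0] for s in strata)
    tot = sum(strata[s][arm][1] for s in strata)
    return rec / tot

def adjusted_rate(arm):
    """Recovery rate standardized by each severity stratum's share of patients."""
    total_patients = sum(t for r in strata.values() for (_, t) in r.values())
    result = 0.0
    for s in strata:
        rec, tot = strata[s][arm]
        stratum_size = sum(t for (_, t) in strata[s].values())
        result += (rec / tot) * (stratum_size / total_patients)
    return result

print(f"naive:    treated={naive_rate('treated'):.3f}  "
      f"untreated={naive_rate('untreated'):.3f}")
print(f"adjusted: treated={adjusted_rate('treated'):.3f}  "
      f"untreated={adjusted_rate('untreated'):.3f}")
# Naively, the treated group looks worse; once severity is adjusted
# for, treatment is better within every severity level.
```

A purely correlational model would happily learn the naive comparison. A causal approach forces the question "what else explains this pattern?" and, by adjusting for severity, reverses the conclusion.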
Navigating the Challenges
Developing causal AI models comes with its challenges. Building such algorithms is often more complex than building traditional AI models, and there is a real risk of stalling innovation in the field. At Gradient Health we’re breaking down these barriers and helping streamline algorithmic development. In my opinion, the imperative of equitable healthcare compels us to take the causal path. Striking a balance between building AI tools that are transparent and pushing the boundaries of innovation will be crucial in ensuring that lives are saved without perpetuating health inequities.
In conclusion, the Stanford author’s perspective on the significance of causal AI in healthcare is well-founded and something I see every day in my role at Gradient Health.
As we progress further into medical AI, we need to acknowledge that algorithms can’t afford to remain black boxes. The drive for transparency and explainability must be prioritized to ensure equitable healthcare outcomes for all. By embracing causal AI, we can unravel the intricacies of algorithmic decision-making and build a future where innovative technology can coexist with social responsibility, ultimately saving lives and improving patient outcomes.