As you may have seen, there's renewed buzz around the healthcare AI world in the shape of "foundation models".
As most readers will know all too well, getting enough high-quality, labeled training data to create dependable, generalizable algorithms is hard, really hard. Foundation models have been touted as one possible technique for addressing this issue. However, as recently highlighted in The Imaging Wire's "AI Hits Speed Bumps", things haven't always gone to plan.
As someone intimately familiar with the challenges of data sourcing, I thought I'd scribble down my thoughts on foundation models in case they prove useful.
First, it's worth clearing something up: a lot of folks confuse needing "less labeled data" with needing "less data". The truth is that, from what we see at Gradient Health, the teams developing these models need far more data; it just happens to be unlabeled.
Speaking of what we've been seeing, the past three months have brought a significant shift in the kinds of requests we receive. We still get the usual "I need 1,000 labeled studies", but now we're also getting "I need 1 million unlabeled studies". In fact, over the past four weeks alone, three major tech players have approached us for the sort of data volumes needed to train foundation models. This market shift is dramatic, and it's unfolding right now.
I'm excited to see where this latest bout of activity and innovation takes us, but for what it's worth, I don't think raw foundation models are likely to deliver the sort of generalizability needed in healthcare. Instead, I think raw foundation models will be fine-tuned with well-labeled data to create pathology-specific AI tools that work in a range of contexts. At least, that's what I hope.
Written by Joshua Miller,
CEO of Gradient Health, Inc., experts in medical data sourcing.