The HIMSS 2023 conference brought together 28,000+ clinicians, healthcare leaders, and health tech professionals to discuss the latest trends, innovations, and challenges in implementing health tech. AI took center stage in many conversations. In this two-part series, I explore the power and promise of AI in healthcare, while discussing the broader trends and insights from the conference.
Generative AI is inescapable these days – not a day goes by that my LinkedIn feed isn’t jam-packed with GPT-4 related news and headlines. In this first part, I resist that temptation and cover trends and challenges in deploying AI successfully; in the second part, I dive deeper into trends specific to generative AI.
Three Trends to Watch
From charting to patient tracking, there are a lot of steps in clinical workflows today that frustrate both patients and providers. The current wave of automation and augmentation has the potential to make care more human than ever. Here are three trends to watch as AI becomes more mainstream in healthcare:
1. Actionable prediction vs powerful models: The challenge has shifted from accessing data and building accurate models to knowing what to predict. For instance, accountable care organization Southwestern Health Resources developed a model to identify patients at risk of 3+ ER visits. Initially, the model flagged an overwhelming number of ‘high risk’ individuals, including many already in palliative care. The key to improving patient outcomes and reducing cost was identifying those patients who could actually be influenced, and investing time to craft effective intervention strategies. As a result, they’ve reduced ED overutilization by 5% and expect this to climb to 20% as their model improves.
2. Responsible AI and governance – it’s time to catch up: With AI applications proliferating in healthcare, responsible AI practices and governance are crucial to tackle challenges like harmful bias, explainability, and auditability. While I’m encouraged to see standards like the Open Voice Network’s guidelines for voice experiences and NIST’s AI Risk Management Framework, legislation lags behind the rapid pace of change. It’s critical to involve frontline workers in the design and planning process to anticipate risks and biases, ensuring responsible AI implementation. As an added bonus, explainable AI results in more effective teaming with human experts – and that’s the magic that creates ROI.
3. Multi-modal and language models are the next frontier: Though generative AI is still in its infancy, current technologies foreshadow the future. The next 12 months will see powerful use cases combining data across formats, such as computer vision systems detecting patient falls and applications that cross-reference clinical notes, lab results, and radiology images to inform diagnoses.
Navigating the Pitfalls
The upside is tremendous, but successfully deploying AI in healthcare requires overcoming challenges associated with integrating AI technologies into existing systems and workflows. To tackle these challenges, organizations must consider the following key aspects:
1. Align on the business outcomes of the decisions being influenced: Phoenix Children’s Hospital has had tremendous success building machine learning models – key to this success was their ability to translate model outputs into clinical workflow. As a result, they’re able to identify two new children a week who were previously undiagnosed with malnutrition. In one example shared by Kaiser Permanente, which has over 600 deep learning models in production, a model’s ability to streamline radiology workflow (i.e., prioritize which studies needed radiology review and why) was more valuable than a more accurate model.
2. Don’t mind the shiny objects – traditional models lead to quick, elegant wins: The Department of Veterans Affairs demonstrated an impressive use case in partnership with Databricks, utilizing an older (2020-generation) language model to identify equivalent products in their supply chain. By focusing on creating an ‘easy win’ button in the purchasing workflow, they increased adoption of their pre-approved list from ~40% to 70%, saving money and simplifying purchasing decisions for end users without introducing new steps in the process. The best part? It took six weeks to deploy.
3. Balance validation and speed: Strike a balance between the need for validation through clinical studies and the speed of innovation. Experiment where it is safer to fail – like streamlining workflows – to build the organizational capacity and talent needed to seize future opportunities as we gain evidence, at scale, of the efficacy and safety of AI solutions in delivering care. If your organization doesn’t yet have an internal policy on the responsible use of AI, it’s time to adopt one. NIST’s AI Risk Management Framework is a great place to start.
4. Foster AI literacy and interdisciplinary collaboration: Data science has finally left the lab, at scale. It’s never been more important to develop AI literacy among executives and team members, and to collaborate closely with interdisciplinary teams – including clinical, legal, and financial leadership – to address potential risks and biases and ensure responsible AI implementation. Data science teams play a crucial role here in setting practical expectations, asking questions like ‘would a 10% reduction in no-show rates be useful?’ and ‘what decisions might you change as a result?’
There’s a reason Silicon Valley is shifting its targets to health tech innovation. The stakes are high, and the potential gains are enormous—humanizing healthcare, reducing physician burnout, improving patient equity and outcomes, and addressing the rising costs of care. In embracing generative AI, it’s essential for the healthcare industry to build organizational capacity while navigating the technology’s challenges and pitfalls.
In the second part of this series, I’ll dive deeper into the advances in generative AI and its applications in healthcare. Stay tuned!
Cal Al-Dhubaib is the AI Strategist and CEO at Pandata.