This stat caught my attention recently. According to a 2022 survey from Panda Health, 55% of hospital and health system executives receive more than 11 emails and calls from digital health vendors per week. With the recent boom in technologies like artificial intelligence (AI), it's no wonder leaders have become inundated with opportunities. But resources are limited, and with the many challenges hospitals and health systems face today, making the right investment in the right technology takes careful planning.
We've seen healthcare organizations using AI for a few years now, as evidenced by the technology's $20 billion market size in 2023. In a recent poll, more than half of Vizient customers shared that their organizations have some level of functioning AI. Many early adopters have been using predictive AI, which uses historical data and statistical algorithms to make predictions about future outcomes. In fact, the Mayo Clinic is using at least 40 of these predictive AI algorithms in patient care today.
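To make that definition concrete, here is a minimal sketch of the predictive-AI pattern: fit a statistical model to historical records, check it on held-out data, then score new cases. The readmission scenario, file names and columns are hypothetical assumptions for illustration only, not a description of any specific organization's algorithms.

```python
# Illustrative only: the basic predictive-AI workflow on hypothetical data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Historical records: patient features plus a known outcome (readmitted or not).
history = pd.read_csv("historical_encounters.csv")  # hypothetical extract
features = ["age", "length_of_stay", "prior_admissions", "active_medications"]
X, y = history[features], history["readmitted_within_30_days"]

# Hold out part of the history to test how well the predictions generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Validation AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# The "prediction about future outcomes" step: score patients currently in house.
current = pd.read_csv("current_census.csv")  # hypothetical extract
risk_scores = model.predict_proba(current[features])[:, 1]
```

Production systems are far more sophisticated, but the shape of the workflow is the same: historical data in, validated model out, scores applied to new patients.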
As AI continues to dominate headlines, many healthcare systems are left sorting out fact from fiction about the latest additions to the list of revolutionary technologies. We see successful organizations use this four-step readiness roadmap to responsibly implement their AI programs and deliver better experiences for their patients and staff.
Step 1: Establish a strategic foundation
It might be tempting to jump on board with AI right away, but it's important to master the basics so you're ready when the time is right for your organization. This will help ensure that when you implement a solution, it will be better received and more widely adopted.
Begin by confirming there is alignment within the organization about the problem AI aims to solve and its potential benefits. Determine whether AI is the best solution for the problem and be specific about the type of AI needed. 'AI' is the trending buzzword right now, so it's important to be clear with internal teams and third-party vendors about the technology powering the solution and what exactly it will offer your organization.
Assess your workforce's skills and recruit or build multi-disciplinary teams to fill any gaps. Consider partnering with outside vendors if AI experts are not readily available, but don't rely completely on a third party. Your organization should have a level of expertise and ownership for the implementation to be successful. That's why investing in staff training and upskilling is vital, especially for decision-makers without technical backgrounds. This will enable them to understand AI concepts and effectively set a vision for the organization.
Data is the lifeblood of any AI system, whether you are building in-house or adapting a vendor's solution. That means providers need access to high-quality, well-governed data to improve how the AI performs. Organizations need the technical infrastructure to source, gather and "clean" data, as well as the interoperability to ensure that data flows where it needs to go.
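In practice, much of that "cleaning" work is unglamorous. The sketch below shows a few typical steps in pandas; the file name, columns and validity rules are hypothetical assumptions, not a prescribed pipeline.

```python
# Illustrative data-cleaning steps on a hypothetical encounter extract.
import pandas as pd

raw = pd.read_csv("encounters_export.csv")  # hypothetical source file

# Standardize column names and drop exact duplicate rows from the extract.
raw.columns = raw.columns.str.strip().str.lower().str.replace(" ", "_")
clean = raw.drop_duplicates()

# Enforce types and filter implausible values instead of silently keeping them.
clean["admit_date"] = pd.to_datetime(clean["admit_date"], errors="coerce")
clean = clean[clean["age"].between(0, 120)]

# Make missingness visible so downstream models (or vendors) can account for it.
print(clean.isna().mean().sort_values(ascending=False).head(10))

clean.to_csv("encounters_clean.csv", index=False)
```

The point isn't these specific steps; it's that someone on staff needs to own them, understand them and keep the data flowing reliably to wherever the AI consumes it.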
Step 2: Anticipate barriers to adoption
Every AI implementation in healthcare must account for challenges in these four areas: clinical, technical, business and legal, and ethical.
- Clinical: Quality and safety are paramount. Healthcare providers must ensure that AI systems are thoroughly tested and validated before deployment to avoid potential harm to patients. There is also concern about skill atrophy, where over-reliance on AI may lead to a decline in healthcare professionals' clinical skills. It is crucial that AI complement physician autonomy rather than replace it.
- Technical: AI requires high-volume, high-quality data for training and validation. Healthcare organizations must have robust data management systems in place to handle this data effectively. And for those with limited infrastructure, selecting reliable and competent vendors can make or break the implementation.
- Business and legal: Development and implementation are expensive, and finding skilled AI professionals is challenging in a tight labor market. You may also find some staff concerned about AI replacing their jobs. Be transparent when communicating your plan and create a change management strategy to gain buy-in. Finally, providers must stay up-to-date with evolving regulations, collaborate with regulatory entities and engage with industry associations to ensure compliance.
- Ethical: We often think data is completely objective. In reality, both explicit and implicit bias find their way into data and algorithms, which can exacerbate health disparities. Eliminating bias entirely may not be possible, but understanding blind spots and establishing safeguards can help providers honor their commitment to 'first, do no harm.' A simple example of one such safeguard follows this list.
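One lightweight safeguard is to check whether a model's error rates differ across patient groups before it goes live. The sketch below is a hypothetical illustration of that idea; the data, column names and 0.5 threshold are assumptions, and a real review would involve clinical, equity and statistical expertise.

```python
# Illustrative bias check: compare false-negative rates across patient groups
# on a hypothetical validation file with columns: truth, score, patient_group.
import pandas as pd

scored = pd.read_csv("validation_predictions.csv")
scored["predicted"] = scored["score"] >= 0.5  # assumed decision threshold

def false_negative_rate(df: pd.DataFrame) -> float:
    """Share of truly positive cases that the model missed."""
    positives = df[df["truth"] == 1]
    return float((~positives["predicted"]).mean()) if len(positives) else float("nan")

# A large gap between groups is a blind spot worth investigating before deployment.
for group, subset in scored.groupby("patient_group"):
    print(group, round(false_negative_rate(subset), 3))
```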
Step 3: Pilot low-risk use cases
Once your organization has a solid foundation for AI implementation, it's time to pilot a solution. Many providers choose to first identify low-hanging fruit, such as automating routine administrative tasks and streamlining appointment scheduling. One provider used an AI scheduling system to spread patient appointments more evenly throughout the day, reducing wait times by 27% without hiring additional nursing staff, even while average daily volumes grew 9%.
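That provider's actual system isn't described here, but the underlying goal of evening out the load across the day can be illustrated with a deliberately simplified stand-in: a greedy heuristic that always places the next appointment in the least-busy slot. The slots and request counts below are hypothetical.

```python
# Simplified stand-in for load-balanced scheduling; NOT the provider's actual AI.
from collections import Counter

slots = [f"{hour:02d}:00" for hour in range(8, 17)]  # hourly slots, 8 AM to 4 PM
load = Counter({slot: 0 for slot in slots})

def assign(requests: int) -> list[str]:
    """Place each new request into the currently least-loaded slot."""
    assignments = []
    for _ in range(requests):
        slot = min(load, key=load.get)  # least-busy slot so far
        load[slot] += 1
        assignments.append(slot)
    return assignments

assign(25)
print(load)  # appointments end up spread roughly evenly across the day
```

A real scheduling system would likely also weigh provider availability, visit types and predicted no-shows, which is where the AI earns its keep.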
Major healthcare industry players are experimenting with low-risk generative AI solutions as well. Microsoft and Epic recently announced a partnership that combines Microsoft's generative AI service with Epic's electronic health record (EHR) software. Several providers are participating in a pilot program that uses AI to draft responses to patient questions received through online portals, which can then be integrated into the patient's EHR.
Similarly, another health system is piloting a program that uses generative AI to support clinicians in clinical documentation and patient communication. The tool generates summaries of patient-provider conversations for integration into the EHR system, reducing paperwork that contributes to clinician burnout.
Chatbots powered by generative AI technology are another opportunity to reduce staff workloads. One health system is piloting an internal chatbot that aims to streamline administrative work. Instead of spending time looking for reference materials and other internal documents, team members can simply ask the chatbot to pull these resources from the training library.
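Under the hood, a chatbot like that typically needs a retrieval step that matches a staff question to the right internal document before any answer is generated. The sketch below illustrates that step with simple TF-IDF similarity; the document library and query are made-up examples, and production systems generally pair retrieval with a generative model and appropriate access controls.

```python
# Illustrative retrieval step for an internal reference chatbot (hypothetical docs).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

library = {
    "pto_policy.pdf": "How to request paid time off and check your leave balance.",
    "badge_access.pdf": "Steps to request badge access to restricted units.",
    "onboarding_checklist.pdf": "Checklist of onboarding tasks for new clinical staff.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(library.values())

def retrieve(question: str) -> str:
    """Return the name of the library document most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return list(library)[scores.argmax()]

print(retrieve("Where do I ask for badge access to the ICU?"))  # badge_access.pdf
```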
Step 4: Graduate to more advanced use cases
After successfully piloting low-risk AI solutions, healthcare systems can start thinking about more advanced use cases. Cutting-edge applications like AI-driven personalized medicine allow clinicians to tailor treatments based on genetics and other patient data, then use predictive analytics to suggest the best types of preventive care.
We also anticipate AI enhancing key elements of the drug development process, like optimizing clinical trial design and participant selection. Finally, with the right data and the ethical considerations I mentioned earlier, we can use AI to better predict and prevent disease outbreaks, improve health equity and increase access to quality care. Many of these algorithms are still in development, but it will be exciting to see new applications of AI over the next several years.
AI is a tool, not a replacement
While we are continuing to discover AI's potential in healthcare, I believe the core of patient care remains deeply personal. I think American Medical Association President Jesse Ehrenfeld, MD, said it best: "AI will never replace physicians — but physicians who use AI will replace those who don't."
As providers sift through their crowded inboxes and voicemails for the right solution, this roadmap is a helpful reminder that AI is a tool to help humans care for other humans – and a tool is only as effective as the person using it. We must continue to focus on the people our technology serves, rather than the technology alone. After all, there's no algorithm for empathy.