Article

Start small, scale smart: How low-risk AI implementations can transform healthcare one step at a time

Data & analytics
Workforce & culture
Quality & clinical operations
March 6, 2025

Consider for a moment the origins of spaceflight, where giant leaps for mankind were preceded by small steps. First, there were unmanned rocket tests, then small crewed missions. By the time we arrived at major milestones like lunar exploration and a long-term human presence on space stations, the confidence to tackle high-stakes challenges had been earned through years of incremental risk-taking.

The deployment of AI in healthcare should follow a similar trajectory. Organizations that dive headfirst into high-risk options like autonomous radiology applications to detect cancer without first developing foundational AI integration and change management capabilities will struggle to handle the risks and disruptions that come with advanced implementations. In this newly promising era of AI, it’s essential to adopt a structured strategy that builds expertise over time.

Vizient Senior Vice President of Data and Digital New Ventures Robert Lord is uniquely positioned to provide perspective on how to do exactly that. As a physician and tech entrepreneur, he’s seen firsthand all that is often lost from the boardroom to the bedside — and those experiences drive his mission to ensure the right technologies are used efficiently and effectively to enhance patient care, reduce clinician burnout and ultimately improve the U.S. healthcare system.

Here, he discusses how low-risk applications of AI in healthcare can drive positive outcomes for patients and practitioners.

As AI in healthcare continues to evolve, many healthcare leaders remain skeptical after years of promised benefits that haven’t consistently materialized. How would you frame this new era of AI in healthcare, particularly in terms of agentic AI, and what kind of outcomes can leaders expect from even small-scale implementation of AI capabilities?

We’re moving from a model of AI that was historically about pattern recognition and analytics to one that’s capable of actual problem solving. That’s what’s exciting about LLMs [large language models] and agents — these advances now let health systems solve operational problems somewhat autonomously, with greater accuracy and reliability at lower cost.

There’s been a significant transformation in the accessibility of AI. It used to be that data scientists or analysts were the only ones who could pose questions; it had to be someone with fluency in a variety of tools. Now, everyone has these capabilities at their fingertips: It just requires asking a natural language question with your voice or keyboard. When you extend that to the world of agents — applications composed of multiple LLMs working without human supervision to automate tasks like surgical scheduling — suddenly everyone has the opportunity to solve problems at scale.

In many ways, AI unleashes human creativity. With the appropriate guardrails and governance, particularly in healthcare, this new era of AI is going to lead to exponential innovation and clinicians getting back to patient care.

Not all AI is high risk — there are low-risk applications that can provide immediate value without threatening patient safety. Can you discuss some of the low-risk ways healthcare leaders can approach AI implementation within their institutions?

At the end of the day, the goal is to improve direct patient care with artificial intelligence. But as much as I love AI, I’m a doctor — the patient always comes first. If I’m not 100% sure that AI can do better than I can, then I’m not letting it touch my patient.

That patient-first mentality necessitates a low-risk start, and the domains I always think about first are administrative areas like appointment scheduling, managing patient records, generating discharge summaries and revenue cycle management. It’s all those back-office functions that prevent clinicians from working at the top of their license. We have a nursing shortage in the U.S., but we’re using nurses to manually abstract data in clinical registries. It makes no sense to use one of the most critical job resources in healthcare for a task that, in most cases, could be entirely handled by an autonomous agent. That’s an enormous opportunity, and one that also applies to clinical documentation and supply chain workflows.

Ensuring everyone can focus on direct patient care as much as possible is hugely valuable to health systems.

A 2024 American Medical Association survey found that 57% of physicians see addressing administrative burdens through automation as the biggest area of opportunity with AI. Other areas of opportunity included augmenting physician capacity (18%); supporting chronic disease management through more regular patient monitoring (9%); and supporting preventative care (4%).


A 2024 Define Ventures survey revealed that 83% of health system leaders see clinical documentation as their No. 1 AI priority use case. Finance management (59%), disease screening (39%), patient communication (31%) and supply chain management (21%) also made the top five.

As you point out, healthcare still faces many challenges including workforce shortages. How is agentic AI poised to alleviate those issues while still retaining the “human touch” that’s so important in healthcare?

There’s really one fundamental piece to this: Any patient-facing individual has an incredible amount of administrative burden that takes enormous joy out of practicing medicine. What AI represents is an opportunity for us to return to the uniquely human components of healthcare.

For instance, a resuscitation needs the full attention of a lot of humans to make split-second judgments that AI is not equipped to make. Meanwhile in the background, there are all these administrative tasks waiting. But what if we could keep all this moving simultaneously? Instead of worrying about the amount of paperwork to complete before and after shifts, doctors could just focus on being great doctors.

Healthcare does not have a shortage of tasks, so I don’t believe AI will replace staff. What it will do is help fix the troubling level of burnout that is endemic throughout healthcare. AI will unleash a renaissance where clinicians can fully enjoy practicing medicine again.

A 2023 American Medical Association prior authorization (PA) physician survey found that on average, physician practices complete 43 PAs per physician per week, which equals an average of 12 hours spent handling such paperwork.

Clearly, AI should not be deployed in a one-size-fits-all manner. How should leaders best determine which areas of their organization will most benefit from AI implementation?

This is something we think about a lot at Vizient, which is having a structured approach to identify where AI can have the most impact. One of the challenges is what I call a “science project problem”: A department chair finds a pet technology they want to pursue or there’s an existing vendor with a widget that seems like low-hanging fruit, so those are the technologies that are implemented.

But what’s important is a systematic discovery process for the challenges you’re facing. Think about your organizational priorities. Think about what your peers have had success with. Think about organically surfacing the administrative challenges people in your organization are facing: What’s most frustrating and what’s causing the highest levels of burnout? You have to commit to rejecting a reactive approach and instead develop a proactive AI strategy that tightly aligns with your organizational objectives — and you must stay disciplined about rejecting projects that don’t fall within the top five overall goals of your institution.

A 2024 HealthLeaders survey created for Vizient found that 51% of healthcare leaders identified optimizing and engaging their workforce as their top strategic priority over the next year.

What are some of the common characteristics you see in organizations that are excelling in AI implementation?

They tend to think about their overall portfolio of risks in a disciplined way. That means they’re comfortable saying ‘no’ to something that might be a good idea but isn’t aligned to core objectives — but they also make audacious bets when it’s the right thing to do for the organization. These health systems are typically already skilled at the basics of AI implementation, chief among them change management: What does your fundamental technology stack look like? What is your team’s openness to change? Do you have robust systems to implement new technologies? Do you have the ability to incorporate new workflows?

The organizations that are best at deploying AI are the ones that have their house in order in all the other ways. While I would certainly be the first to say there are unique components to AI, it’s ultimately a tool to solve business problems. So, it’s about identifying the issues that need solving and how AI can solve them — go take this hammer and find a nail. Those are the organizations that are most successful.

How does starting with a low-risk approach build risk tolerance in healthcare and help guide future strategies for deployment?

The important part of the low-risk to high-risk model is that there are many unknowns you only experience once you start implementation — and a lot of them you will learn from the lower-risk elements. You don’t need autonomous surgical robots to learn that you need robust cybersecurity, governance, change management, education and training, and infrastructure appropriate for deployment. You could learn those same lessons by implementing AI-driven patient scheduling. Ladder up with the simple things so that when you start deploying patient-facing AI, you already have the governance, systems, shared language and organizational culture to address the tough questions that will arise.

Top five qualities foundational to a successful AI strategy
  • Responsible AI governance and process in place
  • Strong leadership support and innovation culture
  • Advanced data strategy
  • Modern technology infrastructure
  • Strong link between generative AI initiatives and strategic value

Source: BCG

Examples of success in low-risk AI implementation