On-demand webinar
Innovation without compromise: Navigating faster turnaround times with responsible AI
Discover what to ask AI vendors and how to build a strategy that supports accelerated decision-making, reduces administrative burden and maintains clinical integrity.
Innovation Without Compromise
Transcript
Speaker: Joe Ellis
00;00;01;26 - 00;00;21;26
Joe Ellis
Hello, everyone, and thanks for joining. Today we're going to explore an important topic facing many healthcare organizations: how do we transform complex clinical processes in ways that don't sacrifice safety for speed or compliance for innovation?
00;00;21;29 - 00;00;46;09
Joe Ellis
My goal over the next 30 to 45 minutes isn't to convince you that AI solves everything, because it certainly doesn't. My goal is to give you a systematic way to start thinking about where AI can create value in your organization, and then how to integrate it in ways that earn trust from both your organization and the members that you serve.
00;00;46;12 - 00;00;50;17
Joe Ellis
So let's get started.
00;00;50;19 - 00;01;14;29
Joe Ellis
All right. A little bit about me. As I said, I'm Joe Ellis, senior director of AI product at Optum. My background is a little unusual for a product leader in healthcare: prior to this role, I spent a decade building data- and AI-intensive solutions as an engineer, with some success there, before moving into the product role recently.
00;01;15;01 - 00;01;42;10
Joe Ellis
Specifically, as an engineer over the last three years, I focused on developing language models and integrating them into clinical decision support applications. I think that technical foundation really matters in healthcare in a way that it might not in other industries. In consumer tech or e-commerce, you can often learn what works through rapid experimentation and iteration.
00;01;42;13 - 00;02;06;25
Joe Ellis
You deploy a feature or an AI capability, see how people react to it, positively or negatively, and keep it or you don't. Either way, life goes on; people don't really care. But in healthcare, discovering after the fact that your AI or some feature you built made an error can have real negative effects, not just for your business, but for real people.
00;02;06;28 - 00;02;35;17
Joe Ellis
So to me, the stakes are fundamentally different, and so, too, the approach to adopting AI has to be different. For my team, our approach to product development centers on building systematic frameworks for identifying opportunities, evaluating solutions rigorously, and deploying AI in ways that give people superpowers and augment their clinical expertise rather than trying to bypass it.
00;02;35;20 - 00;02;46;06
Joe Ellis
And that's what I'll be sharing with you today, both the framework and how we've applied it in practice.
00;02;46;09 - 00;03;17;02
Joe Ellis
I just want to quickly walk us through our agenda. First, we'll start by examining where we are with the prior authorization process: the operational realities, the regulatory landscape and the technology moment that we're in. I think this context is critical because it shapes both the constraints and the possibilities. From there, we'll discuss how to systematically identify where AI can create value in your organization.
00;03;17;04 - 00;03;45;14
Joe Ellis
Our belief is that not every problem is an AI problem, and not every AI opportunity is equally valuable or equally feasible. So I want to share a framework for continuous opportunity discovery that helps you prioritize. Next, we'll dig into how to evaluate AI solutions responsibly: what questions to ask, what metrics matter and how to design evaluation pipelines before deployment.
00;03;45;17 - 00;04;10;26
Joe Ellis
Then I want to walk you through a detailed case study of how we built the InterQual Authorization Accelerator using these principles, including what worked and what we learned along the way. Finally, we'll close with some key considerations and Q&A.
00;04;10;28 - 00;05;03;23
Joe Ellis
So first, I want to start by acknowledging the moment that we're in. Prior authorization is at a genuine inflection point, facing unprecedented pressure from multiple directions. At the same time, organizations generally are experiencing the effects of unprecedented technology developments that are forcing everyone to transform. My view is that the decisions health plans make in the next few years about how to evolve their processes with technology will have long-lasting implications, not just for operational efficiency, but for how the broader healthcare ecosystem functions, how clinical workflows will evolve, and ultimately how members experience their care journey.
00;05;03;25 - 00;05;13;26
Joe Ellis
So I want to take this time to just ground us in the current reality before we talk about where we're headed.
00;05;13;28 - 00;05;41;25
Joe Ellis
So again, to me, clinical prior authorization can still play an important role. It's fundamentally about ensuring that treatments are medically appropriate and grounded in evidence-based criteria, helping patients stay as safe as they possibly can as they navigate a fairly complex healthcare system. And I think everyone involved in these processes shares that same core objective and belief.
00;05;41;28 - 00;06;23;11
Joe Ellis
But the reality is that the current process, as it exists today, seems unsustainable. It's expensive, frustrating and incredibly complex, and often characterized by labor-intensive manual processes that slow things down. In today's system, authorization requests typically require multiple handoffs and rely on a fragmented, disconnected set of systems. Each week, clinicians lose many hours navigating complex rules and requirements and trying to locate the right information across hundreds of pages of clinical documentation.
00;06;23;13 - 00;06;57;11
Joe Ellis
And these are skilled professionals who want to make good clinical decisions that help patients and members stay healthier, but they're spending a lot of time on administrative tasks instead. Clinical staff on the provider side are trying hard to gather and submit the right documentation. Clinical staff on the health plan side are trying to assess all of this information and make medically sound decisions to efficiently and accurately serve their patients and members.
00;06;57;11 - 00;07;25;24
Joe Ellis
So as these systems connect today, they create friction points at every step: unclear requirements that demand multiple communication channels, missing information, things that require follow-ups, all of that back and forth. It really extends the time that it takes to get to a yes. So all
00;07;25;27 - 00;07;26;00
Joe Ellis
of
00;07;26;00 - 00;07;42;10
Joe Ellis
that's happening, and on top of these operational challenges, prior authorization processes are also experiencing significant external pressures from both the public and regulatory bodies.
00;07;42;13 - 00;08;17;12
Joe Ellis
In an effort to help resolve and reduce some of the complexity of the process, CMS has imposed regulatory mandates designed to streamline this whole thing. Specifically, starting in January of 2026, there are going to be new timelines within which payers must deliver decisions back to their providers, and they're also going to be required to provide very specific reasons for any denials, at a level of transparency that will allow providers to take actionable steps toward a successful request.
00;08;17;15 - 00;08;46;25
Joe Ellis
All of this while reporting their prior auth data annually to the public. By 2027, payers are going to be required to implement APIs that allow much more fluid data exchange between the different systems that touch these prior authorization processes. And I know all of these compliance requirements are substantial and the pressures are real.
00;08;46;28 - 00;09;10;23
Joe Ellis
But what I actually find compelling about this moment is that these mandates, which of course feel like a constraint, could actually create a significant opportunity for transformation. And that sentiment comes from the technology moment that we're in.
00;09;10;25 - 00;09;36;29
Joe Ellis
So, as I said before, at the same time we're experiencing these regulatory and operational pressures, we're also experiencing unprecedented technology advancement. I want to take this time to walk you through what's changed and why it matters. The first box here is foundation models. Foundation models represent a fundamental shift in how AI works. Previously,
00;09;36;29 - 00;09;57;15
Joe Ellis
if you wanted AI to solve a problem, you'd build a model for a specific task. You'd go through all of this data labeling across hundreds or thousands or millions of documents, and at the end of your training you'd have one model, say a model to extract diagnosis codes.
00;09;57;16 - 00;10;26;03
Joe Ellis
You'd go through the same process for another model to summarize clinical notes, another to check drug interactions. But foundation models changed all of this. These are large, versatile models that can work across multiple workflows and clinical domains, so you can apply the same underlying technology to different problems, which dramatically reduces the time and cost to develop new AI capabilities.
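To make that contrast concrete, here is a minimal sketch of the reuse pattern: one general-purpose model endpoint serving tasks that previously each required a separately trained model. The `complete` function is a placeholder for whatever approved model endpoint an organization uses, and the prompts are illustrative only.

```python
# Illustrative only: "complete" stands in for whatever foundation model
# endpoint your organization has approved (a hosted model, an internal
# gateway, etc.); it is not a specific vendor API.
def complete(prompt: str) -> str:
    """Placeholder for a call to a general-purpose foundation model."""
    raise NotImplementedError("Wire this to your approved model endpoint.")

# The same underlying model handles tasks that once each required a
# separately labeled dataset and a separately trained model.
def extract_diagnosis_codes(note: str) -> str:
    return complete(
        "List the ICD-10 diagnosis codes supported by this clinical note, "
        "one per line, with the supporting phrase for each:\n\n" + note
    )

def summarize_clinical_note(note: str) -> str:
    return complete(
        "Summarize the key findings, treatments, and plan in this clinical "
        "note in five bullet points:\n\n" + note
    )
```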
00;10;26;05 - 00;10;52;24
Joe Ellis
And as these foundation models have become more and more capable and accurate, we've seen a significant expansion in viable AI use cases. Tasks that were generally too complex or too risky for AI just a few years ago are now feasible with the right implementation approach. And implementation brings me to my next point: integrating has gotten easier.
00;10;52;24 - 00;11;23;21
Joe Ellis
Much easier, because the entire AI engineering stack has matured. All the tooling needed to support the AI lifecycle, for example data prep, model training, evaluations, monitoring and governance, is becoming more sophisticated and more tailored to enterprise-grade needs. This means that organizations can move much faster with greater confidence.
00;11;23;24 - 00;11;58;12
Joe Ellis
The fourth technology that's transforming things is platforms. Platform approaches to product development are enabling ecosystems at scale: through well-designed APIs and integrations, multiple partners and workflows can integrate efficiently and quickly. This creates a compounding value effect across ecosystems where partners can coexist, especially when you compare it to a point solution that generally exists in isolation.
00;11;58;14 - 00;12;25;08
Joe Ellis
And as for the APIs themselves, I'm sure all of you are starting to get a glimpse of this with the Da Vinci FHIR standards being proposed. APIs simplify integrations and data exchange. They decouple applications, enable standardization, and create interoperability that wasn't possible in legacy systems.
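For readers who want a feel for what standards-based exchange looks like in code, here is a minimal, hypothetical sketch of reading data over a FHIR REST API. The base URL, token and query are assumptions for illustration; real Da Vinci prior authorization exchanges define their own profiles and operations.

```python
# A minimal sketch of standards-based data exchange over a FHIR REST API.
# The base URL, credential, and patient ID are hypothetical placeholders.
import requests

FHIR_BASE = "https://example-payer.com/fhir/R4"   # hypothetical endpoint
HEADERS = {
    "Authorization": "Bearer <access-token>",     # placeholder credential
    "Accept": "application/fhir+json",
}

def get_patient_coverage(patient_id: str) -> dict:
    """Fetch Coverage resources for a member; the same request pattern works
    for other resource types needed to assemble an authorization request."""
    resp = requests.get(
        f"{FHIR_BASE}/Coverage",
        params={"patient": patient_id},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # a FHIR Bundle of Coverage resources
```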
00;12;25;11 - 00;12;55;01
Joe Ellis
And, oh, by the way, the documentation around APIs actually serves as an information highway for an LLM to understand what the API and the data exchange represent. This last box is looking a lot further ahead: quantum computing. If AI represents the wave of disruption that we're experiencing now, quantum computing represents the next wave of disruption.
00;12;55;03 - 00;13;26;01
Joe Ellis
And while it's still emerging, I think quantum will eventually create computational capabilities that unlock use cases that seem impossible today, use cases we're not even thinking of yet. But here's why all of this matters right now. We have a unique convergence happening: regulatory mandates are creating data standardization, technology capabilities are advancing rapidly, and operational pressures are creating a sense of urgency.
00;13;26;03 - 00;13;44;06
Joe Ellis
This convergence, I think, unifies us and creates a once-in-a-generation opportunity to fundamentally rethink how we capture and deliver value with technology in healthcare.
00;13;44;08 - 00;14;17;20
Joe Ellis
All right. So we've established that there's likely significant opportunity in this moment. But I'm sure questions remain. Given all these possibilities and technical solutions, how do you identify where to focus your attention and your resources? How do you determine which opportunities are worth pursuing first, and which investments will generate real value versus creating expensive distractions?
00;14;17;23 - 00;14;48;19
Joe Ellis
What I've got pulled up here is the high-level authorization workflow that we discussed earlier. Even though it's high level, you can see just how many individual steps are involved in the process. But here's the thing: each one of these touch points represents a potential opportunity, a place where you could apply technology to transform a step, or potentially remove it entirely from the workflow, to help streamline turnaround times.
00;14;48;21 - 00;15;20;10
Joe Ellis
Because of all of the systems involved, there are numerous possibilities to enhance authorization processes across payer and provider applications with AI and automation. But what's important to understand is that each one of these opportunities is distinct. They each have different technical requirements, different impacts on the workflow, different risk profiles, and different value propositions.
00;15;20;17 - 00;15;53;22
Joe Ellis
So you cannot treat them all the same way. And this is where leveraging your existing data stack becomes essential. Your organization needs to use data, for example operational metrics, user feedback, cost analyses and compliance logs, to answer the key questions that will guide your prioritization. Using your own internal data like that will help you answer questions like: where are the biggest time sinks and bottlenecks?
00;15;53;24 - 00;16;29;09
Joe Ellis
Which opportunities will most significantly improve operational efficiency? Which opportunities create the most value, fastest? Questions like these help you prioritize the limited resources that we all have. The questions themselves will naturally arise from this mapping exercise, and they will be different for every organization based on your current state, your strategic priorities and your operational constraints.
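As one illustration of mining internal data for bottlenecks, here is a minimal sketch that aggregates turnaround time by workflow step. The file and column names are hypothetical; substitute whatever your case-management or workflow system actually records.

```python
# A minimal sketch of mining operational data for time sinks and bottlenecks.
# Column names (case_id, step, started_at, completed_at) are hypothetical.
import pandas as pd

events = pd.read_csv("auth_workflow_events.csv",
                     parse_dates=["started_at", "completed_at"])

# Elapsed hours spent in each workflow step, per event.
events["hours"] = (
    events["completed_at"] - events["started_at"]
).dt.total_seconds() / 3600

# Where does the time actually go, per workflow step?
by_step = (
    events.groupby("step")["hours"]
    .agg(median_hours="median",
         p90_hours=lambda s: s.quantile(0.9),
         cases="count")
    .sort_values("p90_hours", ascending=False)
)
print(by_step.head(10))  # the longest-tail steps are prioritization candidates
```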
00;16;29;12 - 00;16;45;12
Joe Ellis
And that's why you need a systematic framework for continuous opportunity discovery. I know I've said a lot, so I want to talk about how that actually works in practice.
00;16;45;14 - 00;17;13;02
Joe Ellis
Okay. So after we've gone through this mapping exercise, to answer the questions that teams outline from it, you need a framework that combines two complementary perspectives: top-down discovery and bottom-up discovery. Both are essential, and neither alone is sufficient for success. Let's start with top-down discovery. That's your strategic layer.
00;17;13;04 - 00;17;52;07
Joe Ellis
This is where organizational leadership sets the direction. Where does your business need to go? What are our big objectives for the next 3 to 5 years? What pressures do we need to address if we're going to invest in AI capabilities? Where do we believe the biggest opportunities exist based on market dynamics and strategic positioning? This top-down view provides direction and alignment across the organization and ensures that any AI initiatives we have support broader organizational goals in a world of limitless possibilities.
00;17;52;11 - 00;18;18;03
Joe Ellis
This top-down view constrains the set of choices that the people with boots on the ground might make. So next up is bottom-up discovery. Bottom-up discovery happens at your execution layer. This type of discovery often occurs when someone who has access to detailed information, or is hands-on and working in the systems, has an insight.
00;18;18;06 - 00;18;51;24
Joe Ellis
And these frontline insights are absolutely critical, because they tell you which technologies might actually fit into your workflows, where the real operational pain points exist, and where trust and adoption risks might derail even a technically sound solution. So when you combine these perspectives, strategic direction from the top and operational reality from the bottom, you create a complete picture of where to start focusing your attention.
00;18;51;26 - 00;19;24;29
Joe Ellis
You identify opportunities that are both strategically important and operationally viable, and you avoid the trap of building or implementing something that looks great in a demo or presentation but fails to actually deliver help in clinical workflows. This is also critical because it builds the foundation for continuous improvement, because this continuous discovery process doesn't actually stop after one initiative.
00;19;25;02 - 00;19;36;15
Joe Ellis
It's ongoing and gets better as your capabilities mature and your needs evolve.
00;19;36;18 - 00;20;09;17
Joe Ellis
All right. So now we've gone through all of this, and even with a solid opportunity discovery framework in place, I see many teams still struggle to realize value from their AI initiatives. On this slide, we've identified three common roadblocks that derail otherwise promising projects. The first roadblock is technology-led decision making. This happens when decisions are driven by tools rather than defined outcomes or opportunities.
00;20;09;19 - 00;20;44;15
Joe Ellis
Someone sees an impressive AI demo or reads about a cutting-edge technology capability, and teams pursue it because it's novel and exciting, not because it solves a specific problem that fits into how you do your work and create value, or that's been identified through your discovery process. These technology-first decisions make it much harder to demonstrate value or align an initiative with a strategic objective.
00;20;44;17 - 00;21;11;28
Joe Ellis
You end up with expensive technology that doesn't actually address the needs your organization has. And continuing along that same vein with technology-led decision making, there's also another end of the spectrum here too. Sometimes when processes or systems that have been reliable and worked in the past get brought into the discussion for transformation, analysis paralysis sets in.
00;21;11;28 - 00;21;41;21
Joe Ellis
It's endless debating over whether an AI approach will or won't work, trying to anticipate every possible scenario and requiring absolute certainty before moving forward. This can stall projects indefinitely. And the reason is that the only way to truly assess whether an AI works in your environment is to test it systematically, with proper evaluations and guardrails in place.
00;21;41;24 - 00;22;07;25
Joe Ellis
Moving on to the second roadblock: ambiguous technology integration points. When workflows, data sources and decision rules aren't clearly mapped, and there's no understanding of how data flows from one system to the next, it becomes very difficult to figure out upfront where AI applications should fit in, and what data they'll have access to at a specific prediction point in the process.
00;22;07;28 - 00;23;01;09
Joe Ellis
So understanding deeply how the technology you want to incorporate fits into your existing systems, workflows and constraints is essential. This requires detailed process mapping and technical architecture assessment before you commit to an AI solution. The third roadblock that I commonly see is a lack of cross-functional team skills and context. Without a shared understanding and ongoing collaboration between different functions, be it clinical operations, IT, compliance, data science or product management, among others, you end up with solutions that sound great in theory but miss operational realities or compliance requirements when they're deployed.
00;23;01;11 - 00;23;41;12
Joe Ellis
And on this point, I want to re-emphasize that clinical input is absolutely critical in healthcare when you're working with technology. It's something we'll explore more when we get to our case study. For us, we engaged directly with InterQual's clinical content creators, the physicians and clinical researchers who actually develop the evidence-based criteria, and they helped us understand the clinical nuances, the edge cases and the reasoning behind specific criteria.
00;23;41;14 - 00;24;11;16
Joe Ellis
That clinical context shaped how we designed the product and how we evaluated whether the AI was performing appropriately across multiple dimensions. Without that cross-functional collaboration and shared context, I think we would have built something that was technically sophisticated but clinically inadequate. So avoiding all of these roadblocks requires discipline.
00;24;11;17 - 00;24;44;03
Joe Ellis
It requires resisting the temptation to chase shiny technology. It requires resisting falling back on old processes. It requires you to do the hard work of mapping out your workflows and integration points. And it requires building teams with diverse expertise who can communicate effectively across functions. When you get these fundamentals right, you set yourself up for AI initiatives that deliver value.
00;24;44;06 - 00;25;11;17
Joe Ellis
Okay, so I've spent the first half of this session outlining several frameworks for assessing your own internal readiness and defining the outcomes you want to drive. It's now time to turn our attention to the fun stuff: evaluating AI. My belief is that before you can assess whether an AI solution will improve your operations, you need to know where you are today.
00;25;11;20 - 00;25;23;25
Joe Ellis
You need baseline data about your current performance to set a standard you can build upon. And that's what this whole section is about.
00;25;23;27 - 00;25;55;26
Joe Ellis
I think sometimes, at least in this era, people forget that AI is still a science. It's still a discipline that requires experimentation and iteration. And baselining is a fundamental principle for any improvement project: you cannot measure improvement without knowing where you started. I know this seems self-evident, yet teams routinely try to evaluate AI solutions without having clear baseline metrics for their current performance.
00;25;55;28 - 00;26;26;01
Joe Ellis
So what should you baseline? I'm showing you five broad categories here related to prior auth: throughput and timelines, decision quality and compliance, provider and member experience, operational efficiency and cost, and technology efficiency. Within each of these there are numerous specific metrics that you could track, but this is not meant to be an exhaustive checklist of everything you should measure.
00;26;26;04 - 00;26;55;11
Joe Ellis
That would be overwhelming and counterproductive for this webinar. Instead, I would like everyone to start thinking of these categories as prompts to help you identify what matters most for your organization, based on your strategic priorities and your opportunity discovery work. If CMS compliance timelines are your biggest pressure point, you should focus heavily on throughput metrics.
00;26;55;13 - 00;27;39;16
Joe Ellis
If operational cost is your constraint, you'll emphasize efficiency metrics. If provider satisfaction is driving your strategy, you'll focus on experience metrics. The key is to pick the dimensions that align with your strategic objectives and your identified opportunities, then establish clear baseline measurements for those dimensions and document them before you start evaluating AI solutions. These baselines become your evaluation criteria, the yardstick against which you'll measure whether an AI solution actually creates value in your specific environment.
00;27;39;19 - 00;28;03;20
Joe Ellis
And before I move on from this slide, one thing I want everyone to keep in mind is that baselining isn't a one-time exercise. Once you've established your starting point and begin implementing improvement solutions, what you'll notice is that your baseline moves too. It should improve as you go along. And so then you'll have a new baseline.
00;28;03;20 - 00;28;28;17
Joe Ellis
And with new baselines you want to measure again and iterate. I think this brings us to the question of how to design an evaluation pipeline that lets us capture these measurements systematically, not just once, but continuously as your AI capabilities mature.
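To show what a documented baseline might look like in practice, here is a minimal sketch that computes a throughput snapshot from historical authorization records. The field names and the seven-day window are illustrative assumptions, not the CMS-mandated timelines for any particular line of business.

```python
# A minimal sketch of establishing (and re-establishing) a throughput baseline.
# Column names and the 7-day decision window are illustrative assumptions.
import pandas as pd

auths = pd.read_csv("historical_authorizations.csv",
                    parse_dates=["received_at", "decided_at"])

# Turnaround time per authorization, in days.
auths["turnaround_days"] = (
    auths["decided_at"] - auths["received_at"]
).dt.total_seconds() / 86400

baseline = {
    "median_turnaround_days": auths["turnaround_days"].median(),
    "pct_decided_within_7_days": (auths["turnaround_days"] <= 7).mean() * 100,
    "avg_monthly_volume": auths.resample("MS", on="received_at").size().mean(),
}
print(baseline)  # document this snapshot; re-measure after each iteration
```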
00;28;28;20 - 00;28;58;03
Joe Ellis
So we understand the importance of baselining, and this slide is really about building and designing an evaluation pipeline. One of the core tenets here is that success in AI initiatives depends fundamentally on your ability to properly differentiate positive results from negative ones, and then quickly pivot based on what you learn. And I think this is where many organizations get stuck, right?
00;28;58;03 - 00;29;29;18
Joe Ellis
They implement an AI, they hope it works, and months later they're still not sure whether it's creating value or just expense. That uncertainty paralyzes decision making. Do we expand this AI program? Do we shut it down? Do we invest more? No one really knows, because there's no systematic way to evaluate performance. And that's why the pipeline exercise is important. Parts of the pipeline will vary depending on your situation.
00;29;29;18 - 00;29;55;20
Joe Ellis
You can take things out or add components in; the idea is to have this pipeline defined upfront so you have an artifact to refer to as you're doing all of these evaluations. But here are six components that we use in an effective evaluation pipeline. And again, the point is not that this is the rule.
00;29;55;20 - 00;30;27;04
Joe Ellis
It's what you should be thinking about, and you can add to it and change it as your needs require. So the first thing we want to do is define clear evaluation criteria from the beginning. Vague criteria can lead to conflicting interpretations among stakeholders, so you need to be specific about what success looks like. Second, we need to tie evaluation metrics to business or clinical KPIs
00;30;27;04 - 00;31;01;09
Joe Ellis
that your organization already tracks. This creates alignment between AI performance and organizational outcomes. Third, define your evaluation methods: what specific aspects will you test, using what methods, and at what scope and scale? This could be automated metrics, human review samples, A/B testing, or any other approach you decide on, depending on your use case. Fourth, annotate evaluation data.
00;31;01;12 - 00;31;33;20
Joe Ellis
Even if it's a small sample, you need a collection of annotated cases that allows you to benchmark AI performance in an apples-to-apples way against a known ground truth. Fifth, actually evaluate: capture any early performance signals from the AI service and compare them against your established benchmarks. The goal of evaluation isn't perfection from day one.
00;31;33;23 - 00;32;03;21
Joe Ellis
The goal is understanding: is this AI service or technology that I've implemented helping us perform better or worse than our baseline? What are the gaps? And even if it's not up to par, how quickly can we tune it to improve? That brings us to our sixth component: iterate. The entire pipeline is cyclical.
00;32;03;23 - 00;32;31;05
Joe Ellis
As your benchmarks improve, as your use cases evolve and as your organization's needs change, you continuously revise and enhance your evaluation approach. So as you can see, responsible AI adoption is not a one-shot event. It's not about finding a perfect solution that works flawlessly from day one. That's where the science has to come into play.
00;32;31;08 - 00;33;03;09
Joe Ellis
It's really about building up this muscle, this capability to measure, learn and improve continuously. Organizations that succeed with AI are the ones that can quickly identify what's working and what isn't, then make informed decisions about where to invest and iterate rapidly based on real performance data. I think that's what separates successful AI implementations from the expensive experiments that never deliver value.
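Here is a minimal sketch of a single evaluation pass along the lines described above: compare AI suggestions against a small annotated sample, report agreement against a target, and surface the disagreements that feed the next iteration. The record structure and the 0.85 target are illustrative assumptions.

```python
# A minimal sketch of one evaluation pass against annotated ground truth.
# Labels, case IDs, and the agreement target are illustrative assumptions.

annotated_cases = [
    {"case_id": "A1", "ground_truth": "criteria_met",     "ai_suggestion": "criteria_met"},
    {"case_id": "A2", "ground_truth": "criteria_not_met", "ai_suggestion": "criteria_met"},
    {"case_id": "A3", "ground_truth": "criteria_met",     "ai_suggestion": "criteria_met"},
]

def evaluate(cases, target_agreement=0.85):
    """Compare AI suggestions with annotated labels and summarize the gaps."""
    agree = sum(c["ai_suggestion"] == c["ground_truth"] for c in cases)
    rate = agree / len(cases)
    gaps = [c["case_id"] for c in cases if c["ai_suggestion"] != c["ground_truth"]]
    return {
        "agreement": rate,
        "meets_target": rate >= target_agreement,
        "disagreements": gaps,   # feed these into the next tuning iteration
    }

print(evaluate(annotated_cases))
```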
00;33;03;12 - 00;33;18;02
Joe Ellis
And when we talk about our case study next, you'll see exactly how we applied these principles in practice.
00;33;18;05 - 00;33;43;17
Joe Ellis
So I know we've covered a lot of ground: the current moment we're in, continuous opportunity discovery, avoiding common pitfalls, all of this. These frameworks might sound intimidating, but I want to show you exactly how we applied them in practice, to make them more accessible and help you understand how we used them
00;33;43;17 - 00;34;20;23
Joe Ellis
when we were building the InterQual Authorization Accelerator. It's a real implementation where we used these frameworks to make decisions, navigate trade-offs, and build something that creates value for organizations, customers and patients. And critically, we did this by working closely with the InterQual content team, the physicians and clinical researchers who develop the criteria. We also worked with experienced clinical reviewers, using examples from real authorization data from across health plans.
00;34;20;26 - 00;34;26;27
Joe Ellis
So let's dive into how this unfolded.
00;34;27;00 - 00;34;50;29
Joe Ellis
This visual should look familiar. It's the same authorization process workflow we discussed earlier, but now with one specific step highlighted: the medical necessity review. This is where we chose to focus our AI effort, and I want to walk you through how we arrived at that decision using both the top-down and bottom-up discovery frameworks.
00;34;51;01 - 00;35;16;08
Joe Ellis
From the top down, our organization has really gotten behind the push to adopt AI, but not just adopting AI for the sake of technology. Our leadership has mandated that we prove to our AI governance board that we've properly assessed the impact the AI will have on the patients, customers and communities we serve, and that we've implemented appropriate guardrails to catch issues.
00;35;16;10 - 00;35;44;26
Joe Ellis
This gave us the direction and executive alignment for exploring AI solutions, and it really constrained the space to where we thought we could best be positioned to execute. From the bottom up, we were gathering signals from multiple stakeholders. We talked extensively with provider and health plan reviewers, learning about their operational pain points and the things that slowed them down, and over and over again,
00;35;44;29 - 00;36;13;28
Joe Ellis
medical necessity review emerged consistently as a bottleneck. It's time sensitive, it requires deep clinical expertise, and the data that supports it is hard to sort through. We also looked at our own organizational strengths from a bottom-up perspective. We started working directly with frontline nurse reviewers as well as clinical research teams from InterQual.
00;36;14;05 - 00;36;50;16
Joe Ellis
That gave us a significant advantage, because we could involve clinical expertise directly in the AI and product development process from day one, rather than trying to reverse engineer clinical logic on our own and guess at what would be a good experience. This ability to work directly with the clinical teams really helped us understand what appropriate guardrails we should integrate into the product, to make sure that everything worked smoothly and was safe for everyone.
00;36;50;19 - 00;37;19;28
Joe Ellis
So we had strategic alignment from leadership, clear signals from clinicians, market and regulatory pressures creating urgency, and organizational assets that positioned us well to address this specific opportunity. I think that convergence made medical necessity review acceleration the logical starting point. Again, the starting point. I'm sharing this not as the end-all, be-all.
00;37;19;28 - 00;37;57;05
Joe Ellis
I'm sharing it as a blueprint for how to start thinking about opportunity prioritization in your own organization. Your strategic priorities will be different. Your organizational strengths will be different. Your customer pain points will vary. But the process is the same: combining top-down strategic direction with bottom-up operational intelligence to identify where you have the right capabilities and conditions for success, and then focusing there rather than trying to solve everything all at once.
00;37;57;08 - 00;38;16;27
Joe Ellis
I think this disciplined approach to use case selection is really what enabled us to move forward with confidence, and it's what set us up to build something that will actually be used to create value, rather than just demonstrate technical capabilities.
00;38;16;29 - 00;38;44;22
Joe Ellis
So after we identified medical necessity review as our target opportunity through the systematic discovery process, we started working on the build. And here's what we actually built. The InterQual Authorization Accelerator has four main capabilities that work together to accelerate reviews while keeping clinical expertise at the center of the process.
00;38;44;25 - 00;39;18;06
Joe Ellis
First, the system collects and organizes clinical documentation from authorization submissions, regardless of format, whether it's unstructured clinical notes, PDF documents, fax records or any other source. It normalizes this information so that reviewers aren't hunting through hundreds of pages trying to find relevant details. Second, the AI extracts clinical data from the documentation and maps it to the appropriate InterQual evidence-based criteria.
00;39;18;08 - 00;39;47;12
Joe Ellis
This is where having InterQual's clinical content team involved was critical. They helped us understand the nuances of how the criteria should be interpreted, what clinical evidence actually satisfies different requirements, and how edge cases should be handled. So the AI is not making up its own rules; it's operationalizing established clinical guidelines.
00;39;47;15 - 00;40;08;13
Joe Ellis
Third, and this is an important design decision that we made: this is AI-enabled review with the nurse in the loop. It's not an automated decision system, although it can be configured to auto-approve after you've seen verifiable proof that the AI is agreeing with your team most of the time.
00;40;08;15 - 00;40;40;25
Joe Ellis
So the AI analyzes the documentation, identifies relevant evidence, maps it to InterQual criteria, and provides a suggestion to the clinical reviewer. But the reviewer remains in control throughout the entire process. Their role becomes one of validation: confirming that the evidence the AI surfaced is in fact the right evidence, and that the suggested criteria are appropriate based on their own clinical judgment.
00;40;40;27 - 00;41;08;24
Joe Ellis
They're not rubber-stamping AI decisions; they're using AI to accelerate their clinical assessment. And for us, every decision that reviewers make, whether they agree with the AI suggestion, modify it or override it entirely, feeds back into our evaluation pipeline. We analyze these decisions to understand where the AI performs well, where it struggles, and where we need to refine our approach.
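As a rough illustration of that feedback loop, here is a minimal sketch that logs each reviewer decision against the AI suggestion and summarizes where overrides cluster. The event schema and criterion names are hypothetical.

```python
# A minimal sketch of the review feedback loop: log reviewer actions against
# AI suggestions, then summarize agreement and where overrides cluster.
# The event schema and criterion names are hypothetical.
from collections import Counter

review_events = [
    {"case_id": "C1", "criterion": "imaging_required",        "ai": "met",     "reviewer": "met"},
    {"case_id": "C2", "criterion": "imaging_required",        "ai": "met",     "reviewer": "not_met"},
    {"case_id": "C3", "criterion": "conservative_tx_tried",   "ai": "not_met", "reviewer": "not_met"},
]

# Count overrides per criterion so refinement effort goes where it matters.
overrides = Counter(
    e["criterion"] for e in review_events if e["ai"] != e["reviewer"]
)
agreement = 1 - sum(overrides.values()) / len(review_events)

print(f"overall agreement: {agreement:.0%}")
print("criteria most often overridden:", overrides.most_common(3))
```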
00;41;08;26 - 00;41;36;13
Joe Ellis
This is a continuous learning loop grounded in real clinical expertise, not just algorithmic pattern matching. And what I want you to notice here is that we were deliberate about where the AI fits into this workflow. We didn't try to automate the entire process. We didn't eliminate human judgment. We augmented clinical expertise with AI capabilities in specific, well-defined ways:
00;41;36;15 - 00;42;09;04
Joe Ellis
data organization, evidence extraction, criteria mapping and suggestion generation, all while preserving nurse oversight and decision authority. I think this design reflects what we learned from our opportunity discovery and evaluation frameworks: the value isn't in trying to automate everything, it's in giving clinicians better tools to do their work more efficiently and consistently.
00;42;09;07 - 00;42;13;06
Joe Ellis
So I'm sure you're curious about what this looks like.
00;42;13;06 - 00;42;37;22
Joe Ellis
I want to show you, so that these concepts become more concrete. And more importantly, I want to show you the philosophy behind how we designed this interface, because everything you see here was built with one core principle: enhance clinical expertise with AI until you can trust it to get to auto-approvals.
00;42;37;25 - 00;43;08;19
Joe Ellis
Never bypass it. So when we worked with our clinical reviewers during development, we asked them: what actually slows you down during medical necessity reviews? And the answer wasn't that they couldn't interpret the criteria. They're skilled clinicians; they understand medical necessity. The bottleneck was navigating hundreds of pages of clinical documentation to find specific pieces of evidence, spending ten minutes scrolling through records just to find a single result or consultation.
00;43;08;19 - 00;43;37;11
Joe Ellis
That's where the time gets lost. So we built AI features specifically to address that friction, while keeping the reviewer in complete control. On the left-hand side, you'll see an AI-generated table of contents. We run submitted documents through OCR, regardless of how they arrive: scanned documents, faxed documents, uploaded documents, all of that.
00;43;37;13 - 00;44;07;14
Joe Ellis
We then use language models to analyze that content page by page, identify clinical concepts, usually the sections you would see in a CDA document, and organize them into a searchable index. This lets reviewers jump directly to relevant sections instead of scrolling endlessly to find specific information. In the middle is the document viewer with intelligent highlighting.
00;44;07;16 - 00;44;27;05
Joe Ellis
Generally, when you're doing a medical necessity review, you're hopping from system to system to find data points. Here we're centralizing everything right in the center so it's available to you. When a reviewer clicks on a section in the table of contents, the system automatically navigates them to that content and highlights it, so it's right in front of them.
00;44;27;08 - 00;44;52;00
Joe Ellis
Above that is a search function that works across all submitted documentation, and reviewers can type in keywords and find them instantly, regardless of the format the data came in. These are AI-powered features, but notice what they don't do: they don't make decisions, and they don't force the reviewer down a particular path.
00;44;52;02 - 00;45;23;12
Joe Ellis
If a reviewer wants to scroll through the document manually all the way through, they certainly can. These are power tools, not autopilot tools. The right-hand side is where AI gets more involved, in a partially autonomous mode, in the actual review process. And this is where our design philosophy becomes critical. We're using RAG pipelines to analyze that indexed documentation against InterQual criteria.
00;45;23;14 - 00;45;54;05
Joe Ellis
We're identifying specific passages that appear relevant to each question, and we don't auto-populate the answers. We don't tell the reviewer what their decision should be. Instead, we provide evidence links, clickable breadcrumbs that say: here's where we found information that might be relevant to this question. The reviewer clicks on that link and it takes them to the exact passage in the documentation, so that they can make their own clinical assessment.
00;45;54;07 - 00;46;26;18
Joe Ellis
Does this evidence actually support the criteria? Is it the right interpretation? The point is, they remain the final decision maker, and the AI is just surfacing information faster than manual searching would allow. It goes back to my principle of continuous improvement over baseline. You'll also notice that there's a little AI sparkle icon next to the suggested answers.
00;46;26;21 - 00;47;02;04
Joe Ellis
We were deliberate here not to pre-populate selections. The reviewer must actively choose their answer. This is intentional friction; we want them engaged in the validation, not just clicking through on autopilot. And that's where the measurement component comes into play. It ties back to our evaluation pipeline, and we track every interaction. When reviewers click on an evidence link, we know they're actively validating. When they select answers that align with AI suggestions,
00;47;02;11 - 00;47;39;19
Joe Ellis
we capture that agreement. When they override suggestions, we understand where the AI needs refinement. This data feeds directly into our continuous evaluation pipeline and helps us report to auditing or regulatory bodies that reviewers are using AI responsibly and not blindly trusting it. What I want you to take away from this is that every design choice here came from working directly with clinical reviewers and InterQual content experts.
00;47;39;21 - 00;48;08;29
Joe Ellis
We didn't build what we thought would be cool. We built what clinicians told us would actually help them do their jobs better. We gave them superpowers, the ability to process these complex clinical documents much faster, while preserving their clinical judgment and decision authority. And we think this is a significant boost to adoption. That's what partial autonomy looks like in practice.
00;48;09;02 - 00;48;22;06
Joe Ellis
That's what responsible AI design means. And that's why this approach builds trust with people who actually use the system every single day.
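For the technically curious, here is a minimal sketch of the evidence-link pattern described above: retrieve the passages most relevant to a criterion and return their locations, without answering the question for the reviewer. The `embed` function is a toy stand-in for whatever embedding model a production RAG pipeline would use, and the page structure is assumed from the indexing step.

```python
# A minimal sketch of the "evidence link" pattern: rank indexed pages against a
# criterion and return locations plus snippets, leaving the determination to
# the reviewer. embed() is a toy stand-in, not a production embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy character-frequency vector; replace with your embedding model."""
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    return vec

def evidence_links(criterion: str, pages: list[dict], top_k: int = 3) -> list[dict]:
    """pages: [{"page": int, "section": str, "text": str}, ...] from indexing."""
    q = embed(criterion)
    scored = []
    for p in pages:
        v = embed(p["text"])
        score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
        scored.append({**p, "score": score})
    # Return locations and snippets only; no answer is pre-populated.
    return [
        {"page": p["page"], "section": p["section"],
         "snippet": p["text"][:200], "score": round(p["score"], 3)}
        for p in sorted(scored, key=lambda p: p["score"], reverse=True)[:top_k]
    ]
```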
00;48;22;08 - 00;48;35;22
Joe Ellis
So I'm not going to linger on this slide. I know we're running out of time here, but the point is that through this process, we were able to see that
00;48;35;24 - 00;48;49;05
Joe Ellis
we have essentially doubled the speed at which nurses can process information. We've not perfected it and gone into fully autonomous mode, but we have increased productivity.
00;48;49;05 - 00;49;24;28
Joe Ellis
We've improved over baseline, and that is what we aim to achieve, consistently and continuously. After we've seen a level of performance where the AI and the human reviewer are consistently in agreement, we can choose to auto-approve cases. And we explicitly never auto-deny; those types of cases will always be reserved for human clinical judgment.
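Here is a minimal sketch of that routing guardrail: approvals can be automated only when the AI recommends approval and its measured agreement with human reviewers stays above a threshold, and denials are never automated. The threshold value and field names are illustrative assumptions.

```python
# A minimal sketch of the routing guardrail: automate approvals only after
# sustained, verified agreement; never automate denials. The 0.95 threshold
# and field names are illustrative assumptions.

AGREEMENT_THRESHOLD = 0.95  # measured against human reviewers over time

def route_case(ai_recommendation: str, historical_agreement: float) -> str:
    if ai_recommendation == "approve" and historical_agreement >= AGREEMENT_THRESHOLD:
        return "auto_approve"
    # Everything else, including every potential denial and any ambiguous or
    # borderline case, goes to a human clinical reviewer.
    return "route_to_clinical_review"

print(route_case("approve", 0.97))  # auto_approve
print(route_case("deny", 0.97))     # route_to_clinical_review
```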
00;49;25;01 - 00;49;56;01
Joe Ellis
This reflects our philosophy that AI should accelerate positive outcomes where the evidence clearly supports them, and that complex, borderline or ambiguous cases deserve human expertise and consideration. So what we're doing is building sustainable capabilities that create value today, leveraging the technology available to us while still respecting that it's not perfect, to lay the foundation for what we do tomorrow.
00;49;56;03 - 00;50;13;13
Joe Ellis
So when you apply AI responsibly, when you work with clinical experts, when you design for transparency, you can achieve a significant operational improvement without ever sacrificing safety or quality. And that's the promise of this approach.
00;50;13;15 - 00;50;49;01
Joe Ellis
Everything I've described today stems from six guiding principles for responsible use of AI within our organization. Reliability, through rigorous testing and monitoring. Fairness, by assessing for bias and disparate impact. Accountability, with governance that enables swift remediation when issues emerge and an explicit commitment that AI will not replace clinical judgment. Transparency, in how our AI solutions operate and are used.
00;50;49;03 - 00;51;27;08
Joe Ellis
Privacy and security, safeguarding data throughout design and deployment practices. And continuous improvement, applying best practices and regulatory guidance as the technology and the landscape evolve. These are not aspirational statements. They're operational disciplines that shaped how we built the InterQual Authorization Accelerator and how we'll continue to develop AI capabilities going forward.
00;51;27;11 - 00;51;55;18
Joe Ellis
With all that being said, it's time to talk about what to look for when you're selecting AI vendors and partners. The frameworks I shared today apply whether you're building internally or working with an external vendor, but vendor selection does add another layer of complexity: you need to be able to assess not just the technology, but the organization behind it,
00;51;55;21 - 00;52;05;21
Joe Ellis
their commitment to responsible AI, and their ability to support you long term as your needs evolve.
00;52;05;24 - 00;52;32;25
Joe Ellis
So there are some key questions we should be asking. First, can the vendor you're working with clearly articulate the rationale behind their AI-generated recommendations? This isn't just about explaining how the technology works; it's about transparency and auditability. You need to understand why the AI made a particular suggestion so that you can validate it,
00;52;33;02 - 00;53;04;05
Joe Ellis
so your reviewers can trust it, and so that you're protected as regulations evolve. If a vendor cannot explain their AI's reasoning in clinically meaningful terms, that's a red flag. Second, you want to understand whether they're engaging with clinicians in developing and assessing their AI. The technology needs to reflect real-world medical practice and correctly interpret evidence-based guidelines.
00;53;04;08 - 00;53;34;24
Joe Ellis
This requires ongoing clinical expertise embedded in the vendor's development processes, not just a one-time clinical review at the end. So ask vendors: who are your clinical advisors? How are they involved? How do you continuously validate that your AI appropriately handles clinical complexity and edge cases? If clinical judgment isn't woven into their development practices, you're taking on significant risk.
00;53;34;27 - 00;54;06;09
Joe Ellis
Third, you want to ask: does their AI actually enhance both the speed and the quality of clinical decision making, as defined by your evaluation pipeline? This goes back to our earlier discussion about augmentation versus automation. The best AI solutions give clinicians superpowers; they make their jobs easier, faster and more accurate without removing their expertise from the equation.
00;54;06;11 - 00;54;24;19
Joe Ellis
And as healthcare evolves and your workflows change, you need a vendor who can adapt their solution to support your new requirements, not one who built a rigid product and moved on.
00;54;24;21 - 00;54;55;05
Joe Ellis
Building on those foundational questions, there are four additional considerations that should be part of your evaluation process. First, and we've touched on this throughout, confirm that the vendor is designing for clinical outcomes, not just automation metrics. Their workflows should keep clinicians in control, with the ability to override AI suggestions. Next, we have to ask about bias testing.
00;54;55;07 - 00;55;27;06
Joe Ellis
Has the vendor tested their AI across a diverse set of patient demographics: ages, genders, racial and ethnic groups, locations, and so on? Healthcare disparities are real, and AI trained on non-representative data can perpetuate or even amplify those disparities. And testing once isn't enough; ongoing bias monitoring during operations is essential as your patient population and clinical patterns evolve.
00;55;27;06 - 00;55;56;25
Joe Ellis
So ask vendors to show you their bias testing methodology and their results, and if they can't, that's a risk. Next is security and compliance. This should be obvious, but ensure that vendors are implementing robust security measures and HIPAA-compliant data management practices. Make sure you're asking about security certifications, incident response protocols, and how they handle data encryption,
00;55;56;28 - 00;56;32;26
Joe Ellis
all that sort of thing. Last but not least, think about long-term viability and support. AI isn't a one-time implementation; it requires ongoing refinement and updates as regulations change, data changes and your workflows change. So can this vendor provide sustained support? Do they have an established presence in healthcare, with an interconnected network of partners and customers, or are they a small startup that might not be around in a couple of years?
00;56;32;29 - 00;56;46;15
Joe Ellis
Both can be valid partners depending on your risk tolerance, but you need to be able to assess their staying power and their commitment to your success over the long term.
00;56;46;18 - 00;57;08;09
Joe Ellis
All right, I know we're running out of time, so I'd like to leave you with a few final thoughts. The regulatory mandates and operational pressures that you're facing right now are real, and the timelines are tight. The expectations are sky high, and I won't sugarcoat it: this work is hard.
00;57;08;11 - 00;57;34;03
Joe Ellis
But here's what I want you to consider. All these pressures are creating the conditions for real and meaningful transformation. They're forcing the standardization, the process discipline and the infrastructure investments that will enable capabilities that we just couldn't build before. You know, it's like being told you need to start exercising and managing your health and diet more carefully.
00;57;34;06 - 00;58;05;21
Joe Ellis
And when you start doing that, you all of a sudden have more energy. You have the ability to run longer, faster, harder. It fundamentally changes what you're capable of achieving long term. The frameworks I shared today, continuous opportunity discovery, rigorous evaluation, governance, clinical collaboration, transparent design, ongoing measurement, aren't theoretical concepts. They're practical tools that help you move forward with confidence.
00;58;05;23 - 00;58;40;02
Joe Ellis
They help you foster a culture where AI initiatives align with your ethics, objectives and operational realities at every step of the lifecycle. AI will only succeed in healthcare if it continuously earns human trust, not the other way around. And that doesn't happen automatically; it has to be purposefully and intentionally designed into your processes from the beginning. The goal isn't to automate away clinical judgment or replace human expertise.
00;58;40;05 - 00;59;12;28
Joe Ellis
The goal is to give healthcare professionals superpowers, to help them do their work better with less administrative burden, so that they can focus on what they do best: taking care of patients. We're at a moment where technology capabilities are advancing rapidly, and if we spend our time now learning how to harness that technology thoughtfully and responsibly, we don't run the risk of being overwhelmed by it later.
00;59;13;00 - 00;59;30;06
Joe Ellis
We have to approach this work with discipline, with clinical expertise guiding our decisions, and with frameworks that prioritize safety and transparency, as we build AI systems that genuinely improve healthcare delivery for everyone involved.