1
00:00:00,280 --> 00:00:03,720
Good morning, everyone.
I'm Katie LeBlanc.
2
00:00:04,360 --> 00:00:08,080
In terms of the agenda
today, we will be
3
00:00:08,080 --> 00:00:10,310
going through a couple
of really key areas
4
00:00:10,310 --> 00:00:13,300
to share with you.
We'll go through the AI
5
00:00:13,300 --> 00:00:16,190
market drivers and
technical advancements,
6
00:00:16,420 --> 00:00:18,960
target areas for
automation, framing your
7
00:00:18,960 --> 00:00:21,660
roadmap, some key
methods of success,
8
00:00:21,660 --> 00:00:24,460
and then move into
some closing remarks.
9
00:00:34,730 --> 00:00:35,670
All right.
10
00:00:35,850 --> 00:00:38,110
So, you know,
for those of us
11
00:00:38,110 --> 00:00:39,310
who are privileged
enough to
12
00:00:39,310 --> 00:00:40,890
work in the
healthcare industry,
13
00:00:41,210 --> 00:00:43,330
we all know the
landscape here.
14
00:00:43,790 --> 00:00:45,590
We've got rising costs,
15
00:00:45,590 --> 00:00:47,170
reimbursement
compression,
16
00:00:47,350 --> 00:00:48,690
labor shortages,
17
00:00:49,290 --> 00:00:51,770
things like value
-based care, which are
18
00:00:51,770 --> 00:00:56,030
really challenging us to
focus on high-quality,
19
00:00:56,030 --> 00:00:59,230
complete documentation
capture, which is
20
00:00:59,230 --> 00:01:01,610
critical. And it's
a really good thing,
21
00:01:01,610 --> 00:01:04,070
right, because that is
driving better patient
22
00:01:04,070 --> 00:01:07,030
outcomes but it's a
challenge from the
23
00:01:07,030 --> 00:01:10,350
perspective that it's
really labor-intensive;
24
00:01:10,470 --> 00:01:13,050
it adds additional
cost and staffing
25
00:01:13,050 --> 00:01:16,410
challenges. So I
know that many of us
26
00:01:16,410 --> 00:01:19,790
as we're learning
more about AI we're
27
00:01:20,230 --> 00:01:22,790
really looking at
AI as one of the
28
00:01:22,790 --> 00:01:26,470
things that can help
us manage through
29
00:01:26,570 --> 00:01:28,410
all of these
challenges that
30
00:01:28,410 --> 00:01:30,410
we're facing
as an industry.
31
00:01:31,130 --> 00:01:34,850
You'll see here on the
slide talking about
32
00:01:34,850 --> 00:01:38,630
98% of leaders
mentioning that they're
33
00:01:38,630 --> 00:01:41,390
planning to implement
an AI strategy. I know
34
00:01:41,390 --> 00:01:44,370
here at Banner Health,
we're spending a lot
35
00:01:44,370 --> 00:01:47,690
of time thinking about
how do we deploy these
36
00:01:47,690 --> 00:01:50,090
technologies, these
new technologies in
37
00:01:50,090 --> 00:01:54,010
ways that are ethical,
in ways that are
38
00:01:54,010 --> 00:01:58,160
really helping us to
imagine what the future
39
00:01:58,160 --> 00:02:01,770
of our revenue cycle
can look like and really
40
00:02:01,770 --> 00:02:04,060
meet the needs of
addressing some of the
41
00:02:04,060 --> 00:02:07,270
major issues that are
plaguing us today.
42
00:02:09,730 --> 00:02:11,270
I'm going to
do a check-in.
43
00:02:11,270 --> 00:02:12,670
Lorri, are you there yet?
44
00:02:14,510 --> 00:02:17,350
All right. We'll
move forward.
45
00:02:30,220 --> 00:02:32,940
So when I think about
all of the different
46
00:02:32,940 --> 00:02:34,900
things that have
been coming into
47
00:02:34,900 --> 00:02:40,500
our lens from an AI
perspective, we're
48
00:02:40,500 --> 00:02:42,560
all seeing this
flurry of headlines,
49
00:02:42,820 --> 00:02:45,020
whether that's
around all of
50
00:02:45,020 --> 00:02:46,920
these new product
releases, think
51
00:02:46,920 --> 00:02:49,900
of OpenAI,
Anthropic with Claude,
52
00:02:50,340 --> 00:02:53,460
Google releasing
Gemini. There's a
53
00:02:53,460 --> 00:02:56,500
lot of new words
and frankly, markets
54
00:02:56,500 --> 00:02:59,580
that have emerged.
So from a Banner
55
00:02:59,580 --> 00:03:01,440
Health perspective,
we're really taking
56
00:03:01,440 --> 00:03:03,460
this learn-before
-we-lead approach.
57
00:03:04,120 --> 00:03:07,220
I, as a leader
here at Banner, and
58
00:03:07,220 --> 00:03:09,340
from a revenue
cycle perspective,
59
00:03:09,560 --> 00:03:11,960
one of the things that
I've personally been
60
00:03:11,960 --> 00:03:14,420
doing and been
challenging my teams to do
61
00:03:14,420 --> 00:03:17,080
is to engage with all
of these platforms,
62
00:03:17,080 --> 00:03:20,060
engage with the AI,
whether that's from a
63
00:03:20,060 --> 00:03:23,600
professional or personal
perspective, until
64
00:03:23,600 --> 00:03:26,610
we understand more about
these technologies,
65
00:03:26,610 --> 00:03:29,540
it's really hard
for us to be able to
66
00:03:29,540 --> 00:03:33,700
imagine, in a business
setting, what we can
67
00:03:33,700 --> 00:03:35,980
use, how we can use
these things, and how
68
00:03:35,980 --> 00:03:39,560
it's really going to
transform our business.
69
00:03:40,720 --> 00:03:42,420
I think, yes.
70
00:03:43,320 --> 00:03:44,700
And just to add one thing
71
00:03:44,700 --> 00:03:46,000
here, I think the other
72
00:03:46,160 --> 00:03:49,260
important thing, and
sorry to the folks on
73
00:03:49,260 --> 00:03:51,000
the phone, I had
technical difficulties.
74
00:03:51,080 --> 00:03:54,180
Katie jumped right in,
but I think the other
75
00:03:54,180 --> 00:03:56,220
thing is that as you're
thinking about your
76
00:03:56,220 --> 00:03:59,340
future and selecting
vendors, that you've got
77
00:03:59,340 --> 00:04:01,880
to be careful because
technology is advancing
78
00:04:01,880 --> 00:04:04,540
so quick. You want
to make sure that
79
00:04:04,540 --> 00:04:08,760
vendor has both the
investments and the time
80
00:04:08,760 --> 00:04:11,460
to keep pace with
technologies. One of the
81
00:04:11,460 --> 00:04:13,340
things we're seeing
is we're bringing new
82
00:04:13,340 --> 00:04:17,340
models in, it feels like
at least monthly, if
83
00:04:17,340 --> 00:04:20,020
not more frequent, to
understand what they
84
00:04:20,020 --> 00:04:23,460
bring to the mix that
may be better than other
85
00:04:23,460 --> 00:04:26,220
models or complementary
to other models.
86
00:04:28,700 --> 00:04:29,880
Absolutely.
87
00:04:30,080 --> 00:04:33,380
So that whole
notion of really
88
00:04:33,380 --> 00:04:36,160
learning and
understanding this
89
00:04:36,160 --> 00:04:39,260
landscape, and the
challenge is that
90
00:04:39,260 --> 00:04:41,420
it is evolving
very, very quickly.
91
00:04:41,700 --> 00:04:44,980
So it's absolutely
one of the things
92
00:04:44,980 --> 00:04:47,020
that we keep top
of mind. And I
93
00:04:47,020 --> 00:04:48,680
think as industry
leaders, it's
94
00:04:48,680 --> 00:04:50,460
imperative for all
of us to do that.
95
00:04:54,640 --> 00:04:57,160
So as we talk about
the different AI
96
00:04:57,160 --> 00:05:00,800
capabilities out there,
there are all types of AI
97
00:05:00,800 --> 00:05:05,000
at varying levels of
application, especially
98
00:05:05,000 --> 00:05:07,620
for healthcare. But
they all seem to have
99
00:05:07,620 --> 00:05:10,040
their strength. And
frankly, some have
100
00:05:10,040 --> 00:05:12,600
weaknesses. And so
what we've known in the
101
00:05:12,600 --> 00:05:15,580
market for a while is
things like Natural
102
00:05:15,580 --> 00:05:18,760
Language Processing.
And it's really still a
103
00:05:18,760 --> 00:05:21,020
very important part
of the game because it
104
00:05:21,020 --> 00:05:24,000
understands and generates
human language with
105
00:05:24,000 --> 00:05:26,920
high accuracy. And in a
place like healthcare,
106
00:05:26,920 --> 00:05:29,820
where the difference
between the patient
107
00:05:29,820 --> 00:05:32,740
having a disease
versus having a history
108
00:05:32,740 --> 00:05:35,660
of a disease versus
family history of a
109
00:05:35,660 --> 00:05:39,340
disease can matter when
you're trying to apply
110
00:05:39,340 --> 00:05:42,720
AI on top of that.
And so we feel
111
00:05:42,720 --> 00:05:45,200
that Natural Language
Processing still very
112
00:05:45,200 --> 00:05:47,160
much, while not being
the latest great
113
00:05:47,160 --> 00:05:50,820
technology, plays a
very important role in
114
00:05:50,820 --> 00:05:55,280
performing automation
tasks related to health
115
00:05:55,280 --> 00:05:57,740
care for that very reason.
So an example would
116
00:05:57,740 --> 00:06:00,260
be extracting insights
from unstructured
117
00:06:00,260 --> 00:06:04,400
EHR notes, automating
clinical documentation,
118
00:06:04,400 --> 00:06:07,500
powering virtual
assistants. One we've
119
00:06:07,500 --> 00:06:09,840
been hearing a lot
about because of some of
120
00:06:09,840 --> 00:06:12,860
the ambient capabilities
our clinicians are
121
00:06:12,860 --> 00:06:15,260
using for documentation
is Natural Language
122
00:06:15,260 --> 00:06:18,460
Understanding. And
the strength there is
123
00:06:18,460 --> 00:06:21,300
understanding conversational
context between
124
00:06:21,300 --> 00:06:24,280
clinicians and patients,
literally taking
125
00:06:24,280 --> 00:06:28,580
and automating a script
of that conversation
126
00:06:28,580 --> 00:06:31,660
and ultimately
translating that script
127
00:06:31,660 --> 00:06:35,260
into documentation that
is then sent back to
128
00:06:35,260 --> 00:06:37,860
your EMR. So that
example, again, would be
129
00:06:37,860 --> 00:06:41,300
things like converting
lay language to
130
00:06:41,300 --> 00:06:43,520
structured clinical
notes and some of the
131
00:06:43,520 --> 00:06:47,040
vendors here that are
in this space right now.
132
00:06:47,180 --> 00:06:50,100
Large Language Models.
So when we think about
133
00:06:50,100 --> 00:06:53,120
tools like ChatGPT
and Copilot and other
134
00:06:53,120 --> 00:06:55,580
Large Language Models
out there, their
135
00:06:55,580 --> 00:06:58,980
strength is it scales
across tasks with minimal
136
00:06:58,980 --> 00:07:01,900
fine-tuning and
contextual understanding.
137
00:07:02,140 --> 00:07:05,980
What I see is it can
process large amounts
138
00:07:05,980 --> 00:07:10,180
of information at a rate
other models haven't
139
00:07:10,180 --> 00:07:12,800
been able to do. The
other thing we saw
140
00:07:12,800 --> 00:07:17,000
in our own experimentation
was that while it
141
00:07:17,000 --> 00:07:20,720
can process information
quickly, that in
142
00:07:20,720 --> 00:07:23,340
fact, when we put our
experts, whether that
143
00:07:23,340 --> 00:07:26,260
expert be a call center
expert or a coding
144
00:07:26,260 --> 00:07:29,440
expert or someone else
in healthcare, that
145
00:07:29,440 --> 00:07:31,940
when you put the right
people with the right
146
00:07:31,940 --> 00:07:35,200
technology to create
those prompts to ask
147
00:07:35,200 --> 00:07:37,720
the right question to
get the right answer,
148
00:07:37,940 --> 00:07:40,380
in fact, they
evolve quickly.
149
00:07:41,370 --> 00:07:44,460
And it's used more for
summarizing patient
150
00:07:44,460 --> 00:07:46,940
records, answering
clinical questions,
151
00:07:46,940 --> 00:07:49,400
and drafting things
like discharge notes.
152
00:07:49,520 --> 00:07:53,520
Computer Vision has
high precision in image
153
00:07:53,520 --> 00:07:57,480
recognition and pattern
detection. So examples
154
00:07:57,480 --> 00:07:59,600
of those would be
detecting tumors from
155
00:07:59,600 --> 00:08:03,100
radiology scans, analyzing
pathology slides.
156
00:08:03,100 --> 00:08:06,780
And then Generative
AI creates new content
157
00:08:06,780 --> 00:08:09,380
from learned patterns,
generating synthetic
158
00:08:09,380 --> 00:08:13,020
patient data for
research, drafting patient
159
00:08:13,020 --> 00:08:16,840
education materials,
simulating rare disease
160
00:08:16,840 --> 00:08:20,580
scenarios. A few more
I want to cover before
161
00:08:20,580 --> 00:08:23,700
we move on. Predictive
Analytics identifies
162
00:08:23,700 --> 00:08:27,100
trends and forecasts
outcomes from large
163
00:08:27,100 --> 00:08:30,100
data sets, predicting
patient deterioration,
164
00:08:30,340 --> 00:08:32,700
or readmission
risk, or optimizing
165
00:08:32,700 --> 00:08:34,900
hospital resource
allocation.
166
00:08:35,200 --> 00:08:37,260
Autonomous Systems,
which we're
167
00:08:37,260 --> 00:08:40,360
hearing a lot about,
operate independently
168
00:08:40,360 --> 00:08:42,900
with minimal
human input. So
169
00:08:42,900 --> 00:08:46,140
robotic-assisted
surgery is an example.
170
00:08:46,840 --> 00:08:50,640
Medication dispensing,
smart logistics in
171
00:08:50,640 --> 00:08:54,140
hospitals, and then AI
Agents and Assistants.
172
00:08:54,140 --> 00:08:56,140
And we definitely
should not
173
00:08:56,140 --> 00:08:59,180
underestimate the power
of those models for
174
00:08:59,180 --> 00:09:01,240
things like virtual
nurses for follow
175
00:09:01,240 --> 00:09:03,940
-up care, AI
triage bots, or co
176
00:09:03,940 --> 00:09:06,380
-pilots for clinical
decision support.
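To make the earlier point about clinical context concrete, here is a minimal, purely illustrative sketch of the distinction an NLP pipeline has to draw between a current diagnosis, a personal history, and a family history. The patterns and labels are hypothetical stand-ins, not any vendor's implementation.

```python
import re

# Toy illustration only: production clinical NLP uses far richer models and vocabularies.
CONTEXT_PATTERNS = [
    ("family_history", re.compile(r"\bfamily history of\b", re.IGNORECASE)),
    ("personal_history", re.compile(r"\bhistory of\b", re.IGNORECASE)),
    ("current", re.compile(r"\b(has|diagnosed with|presents with)\b", re.IGNORECASE)),
]

def classify_mention(sentence: str, condition: str) -> str:
    """Return how a condition is mentioned: current, personal_history, family_history, or none."""
    if condition.lower() not in sentence.lower():
        return "none"
    for label, pattern in CONTEXT_PATTERNS:  # order matters: family history is checked first
        if pattern.search(sentence):
            return label
    return "current"  # default assumption for a bare mention

for note in [
    "Patient has diabetes mellitus, poorly controlled.",
    "History of diabetes, resolved after weight loss.",
    "Family history of diabetes in mother.",
]:
    print(classify_mention(note, "diabetes"), "-", note)
```

Getting that one word of context wrong changes the documentation picture, and potentially the code and reimbursement, which is why the speakers stress NLP accuracy here.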
177
00:09:11,910 --> 00:09:14,570
So as you're evaluating
AI technologies,
178
00:09:14,570 --> 00:09:17,530
it's critical to understand
how these translate
179
00:09:17,530 --> 00:09:20,010
to cost-benefit for
your organization.
180
00:09:20,170 --> 00:09:23,830
Those savings range
from 5% to more than
181
00:09:23,830 --> 00:09:27,400
50% and can be
achieved using
182
00:09:27,400 --> 00:09:30,630
AI. The excitement
around automation will
183
00:09:30,630 --> 00:09:33,250
tempt you to dive
in headfirst, but
184
00:09:33,250 --> 00:09:35,970
depending on methodology,
this can be time
185
00:09:35,970 --> 00:09:39,350
-consuming and costly.
So evaluating what
186
00:09:39,350 --> 00:09:42,190
success looks like
prior to your selection
187
00:09:42,190 --> 00:09:45,130
is critical to assuring
you are achieving
188
00:09:45,130 --> 00:09:47,990
your goals. It's not
just about picking
189
00:09:47,990 --> 00:09:50,410
the latest great
technology. It's about
190
00:09:50,410 --> 00:09:53,610
picking the right
technology for the use
191
00:09:53,610 --> 00:09:55,950
case that you believe
will drive the right
192
00:09:55,950 --> 00:09:58,100
benefits for you and
your organization.
193
00:09:58,710 --> 00:10:01,150
Katie, can you share
more on how you
194
00:10:01,150 --> 00:10:05,290
see ROI on the path
to AI maturity?
195
00:10:05,870 --> 00:10:07,730
Sure, Lorri. Thank you.
196
00:10:08,270 --> 00:10:11,660
You know, this continuum
is exciting, right?
197
00:10:11,660 --> 00:10:15,090
You know, we all see
these numbers, and we
198
00:10:15,090 --> 00:10:17,470
want to get to the
far right-hand side of
199
00:10:17,470 --> 00:10:20,530
this continuum as
quickly as possible. But
200
00:10:20,530 --> 00:10:23,290
really, we have to step
back and ask ourselves,
201
00:10:23,290 --> 00:10:25,950
what has to be true
for us to get there?
202
00:10:26,170 --> 00:10:28,890
Just chasing that
ROI without making
203
00:10:28,890 --> 00:10:30,810
sure that you have some
of the fundamentals
204
00:10:30,810 --> 00:10:34,290
down would be a
very costly mistake.
205
00:10:34,410 --> 00:10:37,130
So we think about
things like data
206
00:10:37,130 --> 00:10:40,550
structure. Do we
have the right data
207
00:10:40,550 --> 00:10:42,730
structure in place
to be able to truly
208
00:10:42,730 --> 00:10:46,250
leverage and be able to
trust these technologies
209
00:10:46,250 --> 00:10:49,890
that, as we go,
again, more towards
210
00:10:49,890 --> 00:10:54,650
this autonomous,
deep learning AI,
211
00:10:54,870 --> 00:10:57,390
do we have the right
data structure and the
212
00:10:57,390 --> 00:10:59,970
right data integrity
to really be able to
213
00:10:59,970 --> 00:11:02,670
move into that space?
Do we have the right
214
00:11:02,670 --> 00:11:06,450
governance that is
there? Do our vendors
215
00:11:06,450 --> 00:11:08,790
that we're relying
on have that right
216
00:11:08,790 --> 00:11:11,110
governance, which we'll
talk about a little
217
00:11:11,110 --> 00:11:15,350
bit later. But certainly,
as we're evaluating
218
00:11:15,350 --> 00:11:18,790
different vendor
solutions, and frankly,
219
00:11:18,790 --> 00:11:21,410
different ways that
we can leverage AI
220
00:11:21,410 --> 00:11:24,710
internal to Banner,
these are some of the
221
00:11:24,710 --> 00:11:27,830
things that we think
about. And wanting to
222
00:11:27,830 --> 00:11:30,910
make sure that wherever
we choose to make
223
00:11:30,910 --> 00:11:33,850
those investments,
we're very targeted and
224
00:11:33,850 --> 00:11:36,450
we understand what
success is going to look
225
00:11:36,450 --> 00:11:38,730
like and making sure
that we've got those
226
00:11:38,730 --> 00:11:41,930
critical elements that
I just talked about
227
00:11:41,930 --> 00:11:44,590
to ensure that the
outcome is going to be
228
00:11:44,590 --> 00:11:47,170
there. It's not
just a promise. It's
229
00:11:47,170 --> 00:11:49,350
something that we know
that we can deliver on.
230
00:11:50,150 --> 00:11:51,690
Thank you, Katie.
231
00:11:53,590 --> 00:11:56,450
So cost efficiency
for health systems
232
00:11:56,450 --> 00:11:58,290
is one of the most
common drivers we
233
00:11:58,290 --> 00:12:01,270
hear for automation.
You can see in the
234
00:12:01,270 --> 00:12:03,670
slide some of the
business use cases that
235
00:12:03,670 --> 00:12:06,050
are top of mind.
Autonomous coding,
236
00:12:06,070 --> 00:12:09,050
digital intake and
triage, chatbot
237
00:12:09,050 --> 00:12:11,990
support, ambient
listening are just a few.
238
00:12:12,170 --> 00:12:15,130
On the right, you
can see the impact of
239
00:12:15,130 --> 00:12:17,950
AI by level of
adoption, right? We all
240
00:12:17,950 --> 00:12:20,790
know we are in what
Gartner would tell us
241
00:12:20,790 --> 00:12:24,290
is the hype curve of
AI, but ultimately
242
00:12:24,290 --> 00:12:26,650
adoption is happening
at a very different
243
00:12:26,650 --> 00:12:30,330
scale than the
interest in the hype.
244
00:12:30,630 --> 00:12:34,110
Key here is this is a
journey. An organization
245
00:12:34,110 --> 00:12:37,490
should have a 5
to 10 year plan as AI
246
00:12:37,490 --> 00:12:39,850
matures. It doesn't
mean you wait 5 to
247
00:12:39,850 --> 00:12:42,770
10 years to implement,
but you recognize that reaching
248
00:12:42,770 --> 00:12:44,970
the full cost
benefit you're looking
249
00:12:44,970 --> 00:12:48,190
for will take years.
It'll evolve over
250
00:12:48,190 --> 00:12:50,990
years because the
technologies will get better;
251
00:12:50,990 --> 00:12:53,630
it will evolve over
years as you learn
252
00:12:53,630 --> 00:12:56,570
to see the barriers of
automation within your
253
00:12:56,570 --> 00:13:00,430
own organization and
as adoption increases.
254
00:13:00,510 --> 00:13:03,130
And so as we think
about this 5 to
255
00:13:03,130 --> 00:13:06,730
10 year plan as AI
matures and organizations
256
00:13:06,730 --> 00:13:08,750
become more
comfortable with
257
00:13:08,750 --> 00:13:11,690
automation, you will
start to achieve the
258
00:13:11,690 --> 00:13:14,490
full benefits, and
by then, we'll have
259
00:13:14,490 --> 00:13:16,710
even new capabilities
in the market.
260
00:13:20,410 --> 00:13:24,170
So as we look
at all of the
261
00:13:24,170 --> 00:13:27,250
different activities
that have really
262
00:13:27,250 --> 00:13:28,970
been positioned
for automation,
263
00:13:29,300 --> 00:13:33,250
I think back to, you
know, think back five,
264
00:13:33,250 --> 00:13:36,490
even ten years ago,
before we were talking
265
00:13:36,490 --> 00:13:40,430
about AI and using that
vernacular, we were
266
00:13:40,430 --> 00:13:43,350
talking about things
like NLP. We were talking
267
00:13:43,350 --> 00:13:46,010
about machine learning;
we were leaning into
268
00:13:46,010 --> 00:13:49,210
technologies like
computer-assisted coding.
269
00:13:49,210 --> 00:13:56,530
We moved into broader
use of RPA, robotic process
270
00:13:56,530 --> 00:13:59,750
automation, and so
that's when chat bots
271
00:13:59,750 --> 00:14:05,130
and other bots
really came into play.
272
00:14:05,130 --> 00:14:08,790
From a Banner Health
perspective, we leverage
273
00:14:08,790 --> 00:14:12,730
bots extensively. We're
leveraging them in
274
00:14:12,730 --> 00:14:17,330
some of the usual areas
like claim statusing.
275
00:14:17,330 --> 00:14:22,670
We use chatbots in
certain locations. But I
276
00:14:22,670 --> 00:14:24,650
think one of the things
that's exciting that
277
00:14:24,650 --> 00:14:27,970
we're exploring and have
some active projects
278
00:14:27,970 --> 00:14:31,230
around today, is really
learning from other
279
00:14:31,230 --> 00:14:33,730
industries around call
center automation.
280
00:14:33,970 --> 00:14:36,630
This is an area where,
you know, moving
281
00:14:36,630 --> 00:14:40,530
from simply chatbots
into more voice
282
00:14:40,530 --> 00:14:45,130
-enabled bots has a
lot of promise.
283
00:14:45,130 --> 00:14:47,480
And it's not
just to drive cost
284
00:14:47,480 --> 00:14:49,630
reduction, but the
way we see it is
285
00:14:49,630 --> 00:14:51,670
really improving the
patient experience.
286
00:14:51,970 --> 00:14:55,070
So thinking about
those agents for self
287
00:14:55,070 --> 00:14:58,630
-service where instead of
waiting for a call back
288
00:14:58,630 --> 00:15:02,670
or being on hold
to talk to an agent,
289
00:15:02,670 --> 00:15:04,670
really moving into
this space where
290
00:15:04,670 --> 00:15:06,570
we can use some
of those bots
291
00:15:06,930 --> 00:15:09,370
to really meet our
patients where they're
292
00:15:09,370 --> 00:15:12,710
at. We're all seeking
a more convenient,
293
00:15:14,630 --> 00:15:17,170
timely interaction
to get some of
294
00:15:17,170 --> 00:15:19,330
those basic
questions answered.
295
00:15:19,470 --> 00:15:21,170
And frankly,
in the future,
296
00:15:21,170 --> 00:15:23,090
more complex
things like being
297
00:15:23,090 --> 00:15:24,990
able to schedule
an appointment
298
00:15:25,510 --> 00:15:28,870
through a chat or a
voice bot interaction.
299
00:15:28,870 --> 00:15:31,010
So these are some
of the things that,
300
00:15:31,010 --> 00:15:32,840
again, as we look
at that continuum
301
00:15:32,840 --> 00:15:35,230
that Lorri was
talking about, we're
302
00:15:35,230 --> 00:15:38,130
getting very
excited to watch not
303
00:15:38,130 --> 00:15:40,290
only the evolution
of the technology,
304
00:15:40,440 --> 00:15:42,890
but really
imagine how we can
305
00:15:42,890 --> 00:15:44,690
use that technology
in ways to
306
00:15:44,690 --> 00:15:47,600
make our patients'
lives easier
307
00:15:48,390 --> 00:15:51,450
and help our
agents across
308
00:15:51,450 --> 00:15:53,650
Banner Health
out as well. Some
309
00:15:58,630 --> 00:16:00,970
of the key
considerations in framing
310
00:16:00,970 --> 00:16:03,610
future ambition: how
does your automation
311
00:16:03,610 --> 00:16:06,160
vision fit into your
overall strategic
312
00:16:06,160 --> 00:16:08,640
plan? How are you
going to define,
313
00:16:08,640 --> 00:16:10,730
measure, and achieve
success for your
314
00:16:10,730 --> 00:16:13,030
revenue cycle or
clinical automation?
315
00:16:13,250 --> 00:16:16,390
What are the key
capabilities, governance,
316
00:16:16,390 --> 00:16:18,550
and infrastructure
that your organization
317
00:16:18,550 --> 00:16:21,450
needs to improve revenue
cycle and clinical
318
00:16:21,450 --> 00:16:24,610
operations moving
forward? And what's the
319
00:16:24,610 --> 00:16:26,790
most critical for your
health organization
320
00:16:26,790 --> 00:16:29,610
to achieve in order
to drive long-term
321
00:16:29,610 --> 00:16:33,410
sustained access within
your provider operations?
322
00:16:33,750 --> 00:16:35,390
How are you going
to address staffing and
323
00:16:35,390 --> 00:16:38,450
workforce shortages? And
where would you redeploy
324
00:16:38,450 --> 00:16:40,950
your talent? Do you
have the internal
325
00:16:40,950 --> 00:16:43,770
resources and expertise
to achieve cost and
326
00:16:43,770 --> 00:16:46,070
revenue targets? And
how will you determine
327
00:16:46,070 --> 00:16:47,970
what use cases
provide the greatest
328
00:16:47,970 --> 00:16:52,090
opportunity to improve
yield via automation or
329
00:16:52,090 --> 00:16:55,010
vendor partnerships?
And what revenue cycle
330
00:16:55,010 --> 00:16:57,630
improvement opportunities
will you prioritize
331
00:16:57,630 --> 00:16:59,810
across your organization
and how will
332
00:16:59,810 --> 00:17:03,170
you assess variables to
consider when
333
00:17:03,170 --> 00:17:05,690
evaluating improvement
tactics? I think
334
00:17:05,690 --> 00:17:08,860
we all know that
in this journey come
335
00:17:08,860 --> 00:17:11,310
both benefits and
risks, and I think it's
336
00:17:11,310 --> 00:17:13,610
really important to
understand and acknowledge
337
00:17:13,610 --> 00:17:16,450
and document those
and to be able to
338
00:17:16,450 --> 00:17:19,890
strategize around each
of those points. Katie,
339
00:17:19,890 --> 00:17:21,810
anything else you
would want to add here?
340
00:17:21,810 --> 00:17:24,470
All really, really
great considerations.
341
00:17:24,670 --> 00:17:27,190
The one thing
I would add is,
342
00:17:27,350 --> 00:17:31,390
look, we know
that we have this
343
00:17:31,390 --> 00:17:34,210
opportunity to
automate process,
344
00:17:34,350 --> 00:17:37,450
but I would challenge
us to go beyond that.
345
00:17:37,450 --> 00:17:39,990
It's really about reimagining the process
346
00:17:39,990 --> 00:17:42,770
versus layering any of
these AI tools on top
347
00:17:42,770 --> 00:17:45,470
of our traditional
ways of working. And I
348
00:17:45,470 --> 00:17:47,310
think one of the greatest
features of these
349
00:17:47,310 --> 00:17:50,150
new technologies is the
ability to pull large
350
00:17:50,150 --> 00:17:52,830
data sets and use that
knowledge to drive
351
00:17:52,830 --> 00:17:55,250
more effective outcomes.
So whether we're
352
00:17:55,250 --> 00:17:58,210
talking about clean
claims, reducing denials,
353
00:17:58,210 --> 00:18:00,590
or really
ultimately driving
354
00:18:00,590 --> 00:18:02,290
better clinical outcomes,
355
00:18:02,650 --> 00:18:06,350
this is our opportunity
to really harness all
356
00:18:06,350 --> 00:18:09,250
of our creative powers
and that art of the
357
00:18:09,250 --> 00:18:11,210
possible. And it's about
not allowing
358
00:18:11,210 --> 00:18:16,300
ourselves to work in
our old ways and making
359
00:18:16,300 --> 00:18:18,310
sure that we're not
constrained by those
360
00:18:18,310 --> 00:18:22,410
limitations of past
practice. So in addition to
361
00:18:22,660 --> 00:18:25,210
all of the things that
you mentioned, Lorri,
362
00:18:25,210 --> 00:18:29,390
I would just like to
really underscore and
363
00:18:29,390 --> 00:18:31,930
really challenge all
of us as an industry,
364
00:18:32,110 --> 00:18:34,890
let's reimagine
the way we work.
365
00:18:34,890 --> 00:18:38,570
We really have
this opportunity. I
366
00:18:38,570 --> 00:18:41,550
think this is our
moment and, you know,
367
00:18:41,550 --> 00:18:43,830
we shouldn't squander
the opportunity.
368
00:18:44,490 --> 00:18:47,250
We really do have
a significant
369
00:18:47,250 --> 00:18:48,590
one on the horizon here.
370
00:18:48,920 --> 00:18:51,270
I love that feedback,
Katie, and I'll
371
00:18:51,270 --> 00:18:52,990
share from a
personal experience.
372
00:18:52,990 --> 00:18:57,590
As we were in the
market with the upcoming
373
00:18:57,590 --> 00:19:00,370
ICD-10 looming and
putting computer
374
00:19:00,370 --> 00:19:03,430
-assisted coding out
to the market, one of
375
00:19:03,430 --> 00:19:06,290
the things that we
see in hindsight is at
376
00:19:06,290 --> 00:19:08,750
the time, it was all
about organizations
377
00:19:08,750 --> 00:19:12,190
wanting to survive
the impact of ICD-10.
378
00:19:12,230 --> 00:19:16,470
And after ICD-10 was over
379
00:19:16,470 --> 00:19:19,310
and change had evolved,
380
00:19:20,020 --> 00:19:23,210
they looked back and
thought, wait, this was
381
00:19:23,210 --> 00:19:27,110
so much bigger than
helping survive ICD-10.
382
00:19:27,110 --> 00:19:30,510
And so when you think
about AI or Natural
383
00:19:30,510 --> 00:19:33,230
Language Processing
at the point of care,
384
00:19:33,230 --> 00:19:36,790
processing information,
and understanding a
385
00:19:36,790 --> 00:19:39,890
potential view into a
claim long before a patient
386
00:19:39,890 --> 00:19:42,090
leaves the hospital
and the record is final
387
00:19:42,090 --> 00:19:45,230
coded, you can really
rethink your operations.
388
00:19:45,230 --> 00:19:47,370
And I talk a lot about
the middle revenue
389
00:19:47,370 --> 00:19:51,410
cycle and how you can
shift from that reactive,
390
00:19:51,410 --> 00:19:54,930
why was this coded on
a case, to proactive
391
00:19:54,930 --> 00:19:57,090
orchestrator of
health information
392
00:19:57,090 --> 00:20:00,350
every step of the way,
confirming a potential
393
00:20:00,350 --> 00:20:04,370
quality event, closing
a documentation gap,
394
00:20:04,610 --> 00:20:08,010
getting the one
piece of information
395
00:20:08,010 --> 00:20:10,670
needed to accurately
code that PCS code.
396
00:20:10,670 --> 00:20:14,050
Those things were
really overlooked in
397
00:20:14,050 --> 00:20:19,090
our rapid race to
survive ICD-10. But
398
00:20:19,090 --> 00:20:23,110
looking back, it
tells us what a lesson
399
00:20:23,110 --> 00:20:25,670
learned for the future,
which is how do I
400
00:20:25,670 --> 00:20:28,270
rethink, as Katie
said, my operations
401
00:20:28,270 --> 00:20:30,750
besides just thinking
automation is a
402
00:20:30,750 --> 00:20:33,010
great way to do it
the way we always did.
403
00:20:33,030 --> 00:20:33,590
Yeah.
404
00:20:35,350 --> 00:20:38,460
So let's talk about
some key methods for
405
00:20:38,460 --> 00:20:41,890
success. And so you
want to commit to a
406
00:20:41,890 --> 00:20:44,710
standardized implementation
process to ensure the
407
00:20:44,710 --> 00:20:47,630
responsible use and
value realization that
408
00:20:47,630 --> 00:20:50,070
you're looking for. So
you want to assess the
409
00:20:50,070 --> 00:20:52,910
organization's overall
preparedness for AI
410
00:20:52,910 --> 00:20:55,530
adoption, focusing
on infrastructure,
411
00:20:55,530 --> 00:20:59,770
processes, and the
culture, to ensure a smooth
412
00:20:59,770 --> 00:21:03,490
integration process with
existing systems. To
413
00:21:03,490 --> 00:21:06,770
think you can put
automation in and all of
414
00:21:06,770 --> 00:21:10,090
these things just fall
in line is no different
415
00:21:10,090 --> 00:21:12,930
than the concept of
junk in, junk out from
416
00:21:12,930 --> 00:21:15,590
reporting, right? We
have to be prepared
417
00:21:15,590 --> 00:21:18,930
to understand what are
our people's bias to
418
00:21:18,930 --> 00:21:21,660
automation? How do we
make them comfortable
419
00:21:21,660 --> 00:21:24,370
when they have been the
ones scrutinized for
420
00:21:24,370 --> 00:21:26,210
a year or so and the
difference between right
421
00:21:26,210 --> 00:21:28,630
and wrong, right?
How do we think about
422
00:21:28,630 --> 00:21:31,010
our documentation
differently, and what are
423
00:21:31,010 --> 00:21:33,270
those things that we
have in our documentation
424
00:21:33,270 --> 00:21:35,190
templates that are
going to make it more
425
00:21:35,190 --> 00:21:38,510
difficult for AI to be
accurate? So assessing
426
00:21:38,510 --> 00:21:41,470
the enterprise readiness
across not just
427
00:21:41,470 --> 00:21:43,970
the people and the
process, but the people,
428
00:21:43,970 --> 00:21:46,610
process, and technologies
that surround it.
429
00:21:46,630 --> 00:21:49,810
Your AI legal, compliance,
regulatory, and
430
00:21:49,810 --> 00:21:52,850
governance readiness.
We know we're early in
431
00:21:52,850 --> 00:21:56,010
this journey and that
regulations are going
432
00:21:56,010 --> 00:21:58,550
to evolve. We all
know in healthcare, it
433
00:21:58,550 --> 00:22:01,850
takes an incident and
suddenly new regulatory
434
00:22:02,070 --> 00:22:07,810
items become evident.
And so you want to have
435
00:22:07,810 --> 00:22:10,550
your own ability to
adhere to what you
436
00:22:10,550 --> 00:22:13,090
believe is the responsible
and ethical use of
437
00:22:13,090 --> 00:22:15,670
AI, but you also want
to make sure that you're
438
00:22:15,670 --> 00:22:18,430
assessing your vendor
for responsible and
439
00:22:18,430 --> 00:22:21,610
ethical use of AI.
Testing against that AI
440
00:22:21,610 --> 00:22:25,410
model's bias, being able
to detect hallucinations
441
00:22:25,410 --> 00:22:28,650
that may create poor
outcomes, understanding
442
00:22:28,650 --> 00:22:32,350
if that vendor is
producing the full
443
00:22:32,350 --> 00:22:35,570
human capacity when it
was being touched by a
444
00:22:35,570 --> 00:22:38,410
human or just parts of
that human's capacity.
445
00:22:38,410 --> 00:22:41,130
And so understanding
your legal, compliance,
446
00:22:41,130 --> 00:22:43,130
regulatory, and
governance readiness is
447
00:22:43,130 --> 00:22:45,770
ideal. And you should
have a way that you're
448
00:22:45,770 --> 00:22:49,260
assessing your vendors,
but also a way you're
449
00:22:49,260 --> 00:22:52,030
assessing your
organization's use of AI.
450
00:22:52,030 --> 00:22:55,250
AI operational
readiness: the organization's
451
00:22:55,250 --> 00:22:58,490
capacity to manage these
solutions effectively,
452
00:22:58,490 --> 00:23:00,690
including monitoring,
maintenance,
453
00:23:00,810 --> 00:23:04,170
adoption, and optimization
to ensure continued
454
00:23:04,170 --> 00:23:07,090
performance and
effectiveness. One of
455
00:23:07,090 --> 00:23:09,370
the things everybody
hates to talk about when
456
00:23:09,370 --> 00:23:13,910
it comes to AI is
while we want to bring
457
00:23:13,910 --> 00:23:16,590
down our costs, it
becomes a much harder
458
00:23:16,590 --> 00:23:19,790
conversation when we
talk about losing people.
459
00:23:19,890 --> 00:23:23,130
And Katie was very,
very distinct in how she
460
00:23:23,130 --> 00:23:24,930
was thinking about
this. So when you think
461
00:23:24,930 --> 00:23:28,090
about your AI operations
readiness, it's
462
00:23:28,090 --> 00:23:29,790
equally as important to
know how you're going
463
00:23:29,790 --> 00:23:33,090
to redeploy talent and
especially redeploy
464
00:23:33,090 --> 00:23:35,610
top talent. Maybe it's
top talent to help
465
00:23:35,610 --> 00:23:38,690
make sure that the
AI is performing in a
466
00:23:38,690 --> 00:23:42,990
compliant way, but
maybe it's also helping
467
00:23:42,990 --> 00:23:45,130
to reach goals that you
haven't been able to
468
00:23:45,130 --> 00:23:46,810
reach in your
organization because they
469
00:23:46,810 --> 00:23:49,930
were so busy with the
task that they couldn't
470
00:23:49,930 --> 00:23:51,910
focus on the outcome.
And I think that's
471
00:23:51,910 --> 00:23:54,310
just a really important
piece of readiness.
472
00:23:54,350 --> 00:23:57,010
And I think it's
important for the human,
473
00:23:57,010 --> 00:23:59,650
too, to know what this
means to their role.
474
00:24:00,050 --> 00:24:02,890
AI solutions: identify
the most suitable
475
00:24:02,890 --> 00:24:05,750
AI tools and
technology for your
476
00:24:05,750 --> 00:24:07,870
unique requirements
and challenges.
477
00:24:07,870 --> 00:24:11,230
I have seen healthcare
organizations
478
00:24:11,790 --> 00:24:14,430
lock into a deal
with a vendor,
479
00:24:14,550 --> 00:24:17,710
spend months, if not
years, training a
480
00:24:17,710 --> 00:24:20,870
model only to figure
out at the end there
481
00:24:20,870 --> 00:24:22,870
was not the cost-benefit they wanted
482
00:24:22,870 --> 00:24:26,130
to achieve. And so those
early assessments
483
00:24:26,270 --> 00:24:29,450
of the right AI tools
are so critical.
484
00:24:29,450 --> 00:24:31,630
And then assuring
seamless deployment
485
00:24:31,630 --> 00:24:34,390
with existing systems
and processes.
486
00:24:34,410 --> 00:24:36,610
So you want to
empower with better
487
00:24:36,610 --> 00:24:39,610
insights, deliver
the answers faster,
488
00:24:39,630 --> 00:24:42,490
create seamless
experiences, and maximize
489
00:24:42,490 --> 00:24:45,170
your people's potential.
Those are the key takeaways.
490
00:24:50,230 --> 00:24:53,570
So, Lorri, you know,
as I think about
491
00:24:53,570 --> 00:24:56,390
these common
barriers to success,
492
00:24:56,430 --> 00:24:59,390
one of the things
that I think many of
493
00:24:59,390 --> 00:25:01,470
us are challenged
with is this notion
494
00:25:01,470 --> 00:25:03,830
of, you know, who
do we partner with?
495
00:25:03,830 --> 00:25:06,310
This is an incredibly
crowded field,
496
00:25:06,310 --> 00:25:08,490
and it gets more
crowded by the day.
497
00:25:08,770 --> 00:25:14,430
Everyone is racing to
determine how AI can
498
00:25:14,430 --> 00:25:18,630
improve their
operations, how they can,
499
00:25:18,630 --> 00:25:20,750
from a vendor
perspective, what are the
500
00:25:20,750 --> 00:25:26,230
new tools that can be
created, a new product
501
00:25:26,230 --> 00:25:28,670
that kind of spins
out of this. So super
502
00:25:28,670 --> 00:25:31,310
crowded field. So,
you know, as we kind
503
00:25:31,310 --> 00:25:33,210
of step back and
think about who are
504
00:25:33,210 --> 00:25:35,730
those partners that
we would like to work
505
00:25:35,730 --> 00:25:39,290
with, from my perspective,
I think about that
506
00:25:39,290 --> 00:25:41,410
adage that innovation
moves at the speed
507
00:25:41,410 --> 00:25:44,330
of trust. And so, who
are those partners
508
00:25:44,330 --> 00:25:47,190
that, for us, share
a common value set?
509
00:25:47,410 --> 00:25:51,230
We really focus on that
ethical use of data,
510
00:25:51,230 --> 00:25:54,610
the commitment to
improve and protect the
511
00:25:54,610 --> 00:25:57,850
patient experience.
Who are these vendors
512
00:25:57,850 --> 00:26:01,390
that are really focused
on data integrity?
513
00:26:01,390 --> 00:26:04,530
All of these things
are what is going to
514
00:26:04,530 --> 00:26:07,650
engender trust, not
just, you know, within
515
00:26:07,650 --> 00:26:11,990
our IT organizations or
operations, but within
516
00:26:11,990 --> 00:26:15,810
our talent pool, our,
you know, employee
517
00:26:15,810 --> 00:26:19,670
base, from our
executive teams, and our
518
00:26:19,670 --> 00:26:22,410
patients. You know, I
think that there's a
519
00:26:22,410 --> 00:26:26,230
lot of anxiety out
there about the use of
520
00:26:26,230 --> 00:26:28,630
these technologies,
and so it's really
521
00:26:28,630 --> 00:26:32,050
incumbent upon us to get
to that place of trust
522
00:26:32,050 --> 00:26:35,110
and make sure that we're
identifying partners
523
00:26:35,110 --> 00:26:40,270
who really share
that vision and who
524
00:26:40,270 --> 00:26:42,910
we can trust to go at
this journey with us.
525
00:26:46,460 --> 00:26:49,090
So we talked a little
bit about the concept
526
00:26:49,090 --> 00:26:51,750
of junk in and junk
out, and the truth
527
00:26:51,750 --> 00:26:55,410
is none of these AI
models are perfect on
528
00:26:55,410 --> 00:26:59,650
their own. Clinical
documentation is complex,
529
00:26:59,650 --> 00:27:02,930
healthcare regulations
are complex, and
530
00:27:02,930 --> 00:27:07,510
coding language is
complex. This article
531
00:27:07,510 --> 00:27:10,250
is a great example
of how subtle shifts
532
00:27:10,250 --> 00:27:12,830
can create grossly
inaccurate outputs, and
533
00:27:12,830 --> 00:27:15,510
I thought it was just
a fun article to share
534
00:27:15,510 --> 00:27:18,970
with you. So this is
actually a scientific
535
00:27:18,970 --> 00:27:23,690
paper that was
published on the internet
536
00:27:23,690 --> 00:27:27,210
and recently this
article was published
537
00:27:27,210 --> 00:27:33,870
that talked about how
AI created its own
538
00:27:33,870 --> 00:27:38,990
language because of
how it was presented in
539
00:27:38,990 --> 00:27:41,910
its internet source.
And so as you see, this
540
00:27:41,910 --> 00:27:44,430
article was meant to
be read in columns.
541
00:27:44,430 --> 00:27:47,130
But what happened
is because it
542
00:27:47,130 --> 00:27:49,290
was side-by-side
in the way that
543
00:27:49,290 --> 00:27:53,190
it was presented
on the internet,
544
00:27:53,510 --> 00:27:57,670
it actually read across
the columns, combining
545
00:27:57,670 --> 00:28:02,670
words to create a
term that didn't exist
546
00:28:02,670 --> 00:28:05,990
in this industry.
And the reason behind
547
00:28:05,990 --> 00:28:10,190
this, right, is it couldn't
tell the difference
548
00:28:10,190 --> 00:28:14,250
between these two
columns and read across
549
00:28:14,250 --> 00:28:17,990
the page. And ultimately,
this term has now
550
00:28:17,990 --> 00:28:21,190
been cited, I think it
was over 10 times in
551
00:28:21,190 --> 00:28:26,870
actual journal articles
because of this one
552
00:28:26,870 --> 00:28:30,250
misreading and
application by AI. So I
553
00:28:30,250 --> 00:28:32,390
thought that was an
interesting story. So when
554
00:28:32,390 --> 00:28:35,950
we start to step back
and look at our clinical
555
00:28:35,950 --> 00:28:38,230
documentation, right,
and we're so quick
556
00:28:38,230 --> 00:28:41,190
to insert things into
our templates in a
557
00:28:41,190 --> 00:28:43,310
rush because suddenly
we're part of a new
558
00:28:43,310 --> 00:28:46,610
quality program or
this physician wanted
559
00:28:46,610 --> 00:28:49,330
this box in this place
versus this place. And
560
00:28:49,330 --> 00:28:51,910
as we think about our
AI journey, part of
561
00:28:51,910 --> 00:28:55,230
that is understanding
how do we create the
562
00:28:55,230 --> 00:28:59,750
best possible content
to lay AI on top of,
563
00:28:59,750 --> 00:29:01,750
whether that is AI
on top of clinical
564
00:29:01,750 --> 00:29:06,740
information or AI
to translate
clinical information,
565
00:29:06,740 --> 00:29:09,970
it's important that
we build the best
566
00:29:09,970 --> 00:29:12,510
documentation structure
to help create the
567
00:29:12,510 --> 00:29:14,750
outcome we're looking
for with automation.
568
00:29:18,930 --> 00:29:22,110
So some key focus
areas for success. I
569
00:29:22,110 --> 00:29:24,250
just wanted to
reiterate these; we've
570
00:29:24,250 --> 00:29:27,230
touched on them
already, but I think
571
00:29:27,230 --> 00:29:30,450
they're worth revisiting.
The people. Don't
572
00:29:30,450 --> 00:29:32,790
assume if you build
it they will come.
573
00:29:32,790 --> 00:29:35,110
Their perception
of AI is important;
574
00:29:35,110 --> 00:29:39,810
the threat of AI on
jobs is top of mind,
575
00:29:40,110 --> 00:29:43,490
my own included, of
course. Adoption of AI
576
00:29:43,490 --> 00:29:47,690
doesn't happen just
because we go live. How
577
00:29:47,690 --> 00:29:50,030
we're redeploying the
talent, as I mentioned
578
00:29:50,030 --> 00:29:52,550
before, how do we
educate? And what's
579
00:29:52,550 --> 00:29:54,830
the executive level
support to help them
580
00:29:54,830 --> 00:29:57,430
understand the vision
behind your automation
581
00:29:57,430 --> 00:30:00,630
strategy? We've
talked a lot about the
582
00:30:00,630 --> 00:30:04,550
importance of the experts
in their fields to
583
00:30:04,550 --> 00:30:07,090
help create the right
automation outcomes
584
00:30:07,090 --> 00:30:10,590
that we're looking
for. And how do we help
585
00:30:10,590 --> 00:30:14,630
to educate them
earlier on, help them
586
00:30:14,630 --> 00:30:17,430
understand their role
with the technology in
587
00:30:17,430 --> 00:30:20,480
place and the vision
at an executive level
588
00:30:20,480 --> 00:30:23,570
to help create the
buy-in. And so I love
589
00:30:23,570 --> 00:30:26,270
something Katie said
earlier about whether
590
00:30:26,270 --> 00:30:29,790
you do it for business
or personal reasons,
591
00:30:30,050 --> 00:30:33,370
actually engaging with
these Large Language
592
00:30:33,370 --> 00:30:37,810
Models will help them
see the vision of their
593
00:30:37,810 --> 00:30:41,910
role in this new
automated world. And I'll
594
00:30:41,910 --> 00:30:44,130
give you an example,
a simple example, where
595
00:30:44,130 --> 00:30:48,810
I can ask ChatGPT to
tell me the code for
596
00:30:48,810 --> 00:30:52,770
a decubitus ulcer,
and it
597
00:30:52,770 --> 00:30:55,150
will be correct and it
will be fast, but that
598
00:30:55,150 --> 00:30:58,350
doesn't mean that it's
accurate because to
599
00:30:58,350 --> 00:31:01,310
appropriately code a
decubitus ulcer, I need to
600
00:31:01,310 --> 00:31:04,390
know where it was
located on the body, what
601
00:31:04,390 --> 00:31:07,270
side of the body it
was located on, and the
602
00:31:07,270 --> 00:31:09,850
staging of that wound.
So when they start
603
00:31:09,850 --> 00:31:12,270
to see the importance
of asking the right
604
00:31:12,270 --> 00:31:16,450
question to get the
desired answer at a level
605
00:31:16,450 --> 00:31:19,210
of accuracy that works
for health care, it
606
00:31:19,210 --> 00:31:21,890
helps them envision
their role in the future.
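As an illustration of that prompting point, here is a small, hypothetical sketch of the difference between an underspecified coding prompt and one that carries the site, laterality, and stage a coder would need. The wording is an assumption for illustration; no particular model or API is implied.

```python
# Hypothetical prompt construction only; no specific model, API, or code set behavior is implied.
def build_coding_prompt(condition, site=None, laterality=None, stage=None):
    """Assemble a coding question, flagging when key clinical detail is missing."""
    details = [d for d in (site, laterality, stage) if d]
    if not details:
        # Underspecified: a model can answer this quickly, but not accurately enough for coding.
        return f"What is the ICD-10 code for a {condition}?"
    return (f"What is the ICD-10 code for a {condition} ({', '.join(details)})? "
            "List any documentation still needed to assign a complete code.")

print(build_coding_prompt("decubitus ulcer"))
print(build_coding_prompt("decubitus ulcer", site="sacral region",
                          laterality="left", stage="stage 3"))
```

The point is the one the speaker makes: the second prompt, built by someone who knows what complete coding requires, is what makes the fast answer also an accurate one.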
607
00:31:21,890 --> 00:31:24,930
But it also helps you
plan for what it's
608
00:31:24,930 --> 00:31:28,110
good at and the things
that it's not good at.
609
00:31:28,230 --> 00:31:33,090
I recently used it to
try to help me write
610
00:31:33,090 --> 00:31:35,810
a video presentation.
And I quickly realized
611
00:31:35,810 --> 00:31:39,050
I right now am better
than AI at creating
612
00:31:39,050 --> 00:31:41,210
my own video
presentation. So I think
613
00:31:41,210 --> 00:31:43,950
it's really interesting
to engage with it
614
00:31:43,950 --> 00:31:47,590
yourself. The documentation
and data, right?
615
00:31:47,590 --> 00:31:51,490
There are still check
boxes that exist within
616
00:31:51,490 --> 00:31:53,690
many templates; there's
still handwriting
617
00:31:53,690 --> 00:31:57,510
that exists. Fishbone
diagrams. Physicians
618
00:31:57,510 --> 00:32:00,070
love to document their
lab findings in the
619
00:32:00,070 --> 00:32:03,490
fishbone diagram. AI
has a really tough time
620
00:32:03,490 --> 00:32:06,150
with fishbones. Or
what about templates
621
00:32:06,150 --> 00:32:08,590
like the one on the
right where they say bold
622
00:32:08,590 --> 00:32:12,970
indicates a
positive and all else
623
00:32:12,970 --> 00:32:15,330
should be considered
negative. That's not going
624
00:32:15,330 --> 00:32:17,890
to work here. Clear
headers. Understanding
625
00:32:17,890 --> 00:32:20,490
where different
parts of your
626
00:32:20,490 --> 00:32:23,410
documentation start and
where
627
00:32:23,410 --> 00:32:25,790
they should end so we
can appropriately point
628
00:32:25,790 --> 00:32:29,250
AI to where in this
large amount of
629
00:32:29,250 --> 00:32:33,170
clinical data it's
appropriate to analyze
630
00:32:33,170 --> 00:32:36,170
and draw answers from
for your questions.
631
00:32:36,530 --> 00:32:39,990
Clear headers,
punctuation matters. Are
632
00:32:39,990 --> 00:32:44,450
we using a colon,
semicolon, the way we
633
00:32:44,450 --> 00:32:46,590
should so that it
knows it's at the end
634
00:32:46,590 --> 00:32:49,210
of a statement and
we don't create new
635
00:32:49,210 --> 00:32:51,730
terms, as we saw in
the previous article.
636
00:32:51,730 --> 00:32:54,290
And then insertions
of prompt
637
00:32:54,290 --> 00:32:56,850
content. So adding
to all that, how
638
00:32:56,850 --> 00:32:59,890
do we lay prompting
content on top
639
00:32:59,890 --> 00:33:02,230
of these Large
Language Models?
640
00:33:05,010 --> 00:33:07,850
Choosing the right
tool for your use case.
641
00:33:07,850 --> 00:33:10,810
You know, for my
personal journey as a
642
00:33:10,810 --> 00:33:13,690
product management leader,
we were moving from
643
00:33:13,690 --> 00:33:16,310
a computer-assisted
coding world to this
644
00:33:16,310 --> 00:33:19,730
new autonomous coding
world, and we ultimately
645
00:33:19,730 --> 00:33:22,090
started off with
a bake-off: what do
646
00:33:22,090 --> 00:33:25,610
these models do better
than what we do today?
647
00:33:25,610 --> 00:33:28,690
And so when we started
to compare those, like
648
00:33:28,690 --> 00:33:30,650
we talked about earlier,
there's
649
00:33:30,650 --> 00:33:32,950
knowledge at
internet scale that
650
00:33:32,950 --> 00:33:36,110
these Large Language
Models not only access,
651
00:33:36,380 --> 00:33:40,370
but they can rapidly
draw conclusions from
652
00:33:40,370 --> 00:33:42,970
that large amount
of data. The language
653
00:33:42,970 --> 00:33:45,810
understanding is a fuzzy,
context-based understanding
654
00:33:45,810 --> 00:33:48,770
of text and natural
language prompts, and the
655
00:33:48,770 --> 00:33:51,870
reasoning is an
ability to perform limited
656
00:33:51,870 --> 00:33:55,430
reasoning on simple
multi-step problems. On
657
00:33:55,430 --> 00:33:59,910
the right-hand side
we believe that from a
658
00:33:59,910 --> 00:34:04,370
symbolic AI perspective,
it's great at
659
00:34:04,370 --> 00:34:07,710
explainable and verifiable
clinical detail,
660
00:34:07,810 --> 00:34:10,650
that the technologies
are supported because
661
00:34:10,650 --> 00:34:13,350
we have years of
cumulative experience
662
00:34:13,350 --> 00:34:17,110
reading and applying
context or translations
663
00:34:17,110 --> 00:34:20,510
to clinical
documentation. We use SME
664
00:34:20,510 --> 00:34:23,250
-curated clinical
knowledge resources
665
00:34:23,250 --> 00:34:25,870
versus large scale
of the internet to draw
666
00:34:25,870 --> 00:34:29,130
conclusions from, and then
that experience brings extensive
667
00:34:29,130 --> 00:34:32,870
breadth of domain
coverage. But what I
668
00:34:32,870 --> 00:34:35,870
wanted to tell you
here is that it wasn't
669
00:34:35,870 --> 00:34:39,330
about choosing one
model over the other in
670
00:34:39,330 --> 00:34:42,630
the end. It was about
a combination of
671
00:34:42,630 --> 00:34:47,650
tools that created
monumental differences in
672
00:34:47,650 --> 00:34:50,270
our results. The Large
Language Model on
673
00:34:50,270 --> 00:34:53,970
its own did not
outperform. Symbolic AI
674
00:34:53,970 --> 00:34:57,370
reached a peak, but
didn't scale over time
675
00:34:57,370 --> 00:35:00,290
as quickly as it should.
And so this hybrid
676
00:35:00,290 --> 00:35:02,590
AI approach means
more than one thing.
677
00:35:02,590 --> 00:35:04,110
And I think this is
a really interesting
678
00:35:04,110 --> 00:35:06,670
conversation as you're
evaluating vendors.
679
00:35:06,910 --> 00:35:10,650
A) as we talked about
earlier, technology is
680
00:35:10,650 --> 00:35:12,570
moving at a rapid
pace. So you don't want
681
00:35:12,570 --> 00:35:17,530
to be locked in to one
AI approach that can't
682
00:35:17,530 --> 00:35:20,250
evolve on pace with
this rapidly moving. So
683
00:35:20,250 --> 00:35:21,930
what if something better
comes out tomorrow
684
00:35:21,930 --> 00:35:24,250
Are you locked into 10
years with that vendor
685
00:35:24,250 --> 00:35:27,010
do they have the
investments to be able to
686
00:35:27,010 --> 00:35:30,570
take in and retire a
model or combine a model
687
00:35:30,570 --> 00:35:33,470
as we're talking about?
And then ultimately
688
00:35:33,470 --> 00:35:37,970
not just bringing one
weapon to our five but
689
00:35:37,970 --> 00:35:40,450
bringing multiple
things because the truth
690
00:35:40,450 --> 00:35:44,530
is each of these
different models have the
691
00:35:44,530 --> 00:35:47,810
ability to bring a
benefit to solving the
692
00:35:47,810 --> 00:35:50,330
larger picture of
healthcare use cases we saw
693
00:35:50,330 --> 00:35:52,990
on that other page.
One of the things we
694
00:35:52,990 --> 00:35:55,990
discovered in the power
of combination of these
695
00:35:55,990 --> 00:35:59,750
models is that if we
can create, take this
696
00:35:59,750 --> 00:36:03,410
large clinical record
and create a more
697
00:36:03,410 --> 00:36:07,130
condensed record of the
things that are pertinent
698
00:36:07,130 --> 00:36:10,430
for a use case, that
we can understand and
699
00:36:10,430 --> 00:36:14,050
interpret terms that
we can link terms to
700
00:36:14,050 --> 00:36:16,330
get to the right answer,
like I was talking
701
00:36:16,330 --> 00:36:19,150
about with that one
decubitus ulcer, that
702
00:36:19,150 --> 00:36:21,730
there is an ability to
create an output that
703
00:36:21,730 --> 00:36:24,410
not one of these models
can do on their own.
704
00:36:24,620 --> 00:36:26,850
And so I'll give you
an example. This is
705
00:36:26,850 --> 00:36:28,730
an example of what
we call our knowledge
706
00:36:28,730 --> 00:36:32,690
graph. So this is
where CLI, our Natural
707
00:36:32,690 --> 00:36:35,110
Language Processing
and knowledge graph
708
00:36:35,730 --> 00:36:39,810
created, gave the LLM
a better chance to
709
00:36:39,810 --> 00:36:42,430
derive the right
answer. So instead of
710
00:36:42,430 --> 00:36:45,170
presenting this large
amount of chart text,
711
00:36:45,170 --> 00:36:48,710
we processed that
chart text first, we
712
00:36:48,710 --> 00:36:51,230
extracted the important
elements for the
713
00:36:51,230 --> 00:36:54,590
use case, created the
linkages needed to
714
00:36:54,590 --> 00:36:57,330
get to the right
answer, and pre-auth is
715
00:36:57,330 --> 00:36:59,770
a great example of
where this works well,
716
00:36:59,770 --> 00:37:03,250
so that then the Large
Language Model can
717
00:37:03,250 --> 00:37:06,670
come in with its
strengths and help drive
718
00:37:06,670 --> 00:37:08,890
the outcome that
you're looking for. So
719
00:37:15,250 --> 00:37:18,910
this is a great example
um of sorry excuse
720
00:37:18,910 --> 00:37:23,150
me this is an
example of automation
721
00:37:23,150 --> 00:37:25,310
and a process to
help you achieve
722
00:37:25,310 --> 00:37:27,270
Success. You want to
have your strategy
723
00:37:27,270 --> 00:37:29,850
enablement know your
strategy when you go in.
724
00:37:30,370 --> 00:37:33,190
Do the data analysis
are you ready
725
00:37:33,190 --> 00:37:37,310
for applying AI to
your data how is
726
00:37:37,310 --> 00:37:39,590
this vendor going
to use your data how
727
00:37:39,590 --> 00:37:41,790
are you making sure
you protect your
728
00:37:41,790 --> 00:37:44,610
Data, what's the
value of your data?
729
00:37:45,290 --> 00:37:48,330
Automation development,
ROI forecasting
730
00:37:48,330 --> 00:37:51,210
and recognition,
workflow standardization,
731
00:37:51,210 --> 00:37:53,230
process
improvements, leading
732
00:37:53,230 --> 00:37:56,430
practice embedment
and functional
733
00:37:56,430 --> 00:37:58,990
team centralization
or reassignment.
734
00:37:59,310 --> 00:38:02,970
Automate and QC these
models are really
735
00:38:02,970 --> 00:38:06,020
Important. And then finally
performance measurement
736
00:38:06,020 --> 00:38:08,990
We may have lost
you. Are you there?
737
00:38:10,110 --> 00:38:11,710
I am here.
738
00:38:14,250 --> 00:38:15,130
Katie,
739
00:38:24,550 --> 00:38:25,670
can you hear me now?
740
00:38:31,850 --> 00:38:33,290
Okay. It looks like Katie
741
00:38:33,290 --> 00:38:35,570
lost me. I'll
keep going here.
742
00:38:36,930 --> 00:38:39,070
And so in the next slide,
743
00:38:40,490 --> 00:38:44,010
vendor accountability.
So how are you
744
00:38:44,010 --> 00:38:46,570
assessing the accountability
of your vendors,
745
00:38:46,570 --> 00:38:48,770
compliance and
responsible use of AI?
746
00:38:48,830 --> 00:38:53,410
We at Optum have an
AI review board that
747
00:38:53,410 --> 00:38:57,450
we use to assess not
only our technology
748
00:38:57,450 --> 00:38:59,970
for the outcomes
we're committing to
749
00:38:59,970 --> 00:39:03,730
the market, but we
also assess for things
750
00:39:03,730 --> 00:39:06,690
like bias across
different populations,
751
00:39:06,850 --> 00:39:13,010
compliance with regulatory
terms. And
752
00:39:13,010 --> 00:39:16,450
Processes. So when
you think of responsible
753
00:39:16,450 --> 00:39:19,410
use of AI a few things.
Building, deploying,
754
00:39:19,410 --> 00:39:23,610
managing and scaling
responsibility review
755
00:39:23,610 --> 00:39:26,030
and approval by a
steering committee. AI
756
00:39:26,030 --> 00:39:29,410
Scalability, standard
operating procedures
757
00:39:29,410 --> 00:39:32,530
model inventory
management, responsible and
758
00:39:32,530 --> 00:39:36,580
sustainable AI to ensure
adaptability, resilience,
759
00:39:36,580 --> 00:39:39,490
reliability and
robustness. And alignment
760
00:39:39,490 --> 00:39:43,030
with the human value.
From an AI governance,
761
00:39:43,030 --> 00:39:45,050
it's about compliance
with data protection
762
00:39:45,050 --> 00:39:48,050
regulations, data
quality and security,
763
00:39:48,410 --> 00:39:51,090
explainability and
transparency, and
764
00:39:51,090 --> 00:39:53,770
ongoing evaluation
for hallucinations
765
00:39:53,770 --> 00:39:55,630
in these models.
We want to address
766
00:39:55,630 --> 00:39:57,830
bias and fairness,
accountability,
767
00:39:57,830 --> 00:40:01,250
and disciplinary measures
for noncompliance,
768
00:40:01,250 --> 00:40:03,990
and mechanisms for
addressing concerns.
769
00:40:03,990 --> 00:40:06,770
And then the AI center
of excellence is
770
00:40:06,770 --> 00:40:09,210
really about leadership
alignment to provide
771
00:40:09,210 --> 00:40:12,030
advocacy, strategic
direction and oversight.
772
00:40:12,030 --> 00:40:17,250
Robust data infrastructure,
agile methodology,
773
00:40:17,250 --> 00:40:20,410
knowledge management
processes, seamless
774
00:40:20,410 --> 00:40:23,670
collaboration, training
programs, strategic
775
00:40:23,670 --> 00:40:26,190
partnerships,
stakeholder engagement,
776
00:40:26,190 --> 00:40:30,270
assigning your KPIs
and defining those. And
777
00:40:30,270 --> 00:40:34,050
continuous evaluation
of those, and then the
778
00:40:34,050 --> 00:40:36,830
culture of innovation
and research. Katie,
779
00:40:36,830 --> 00:40:39,250
anything you would want
to add to this slide?
780
00:40:45,130 --> 00:40:46,850
I'm going to keep
us moving. We seem
781
00:40:46,850 --> 00:40:48,770
to have some technical
difficulties.
782
00:40:52,450 --> 00:40:55,990
So successfully achieving
ROI from automation
783
00:40:55,990 --> 00:40:58,790
is a carefully
calculated journey. It is
784
00:40:58,790 --> 00:41:02,060
so attractive when you
look at the potential
785
00:41:02,060 --> 00:41:05,590
of automation and
the temptation is to
786
00:41:05,590 --> 00:41:09,130
dive in. But over 50% of
automation investments
787
00:41:09,130 --> 00:41:13,250
fail to achieve their
desired ROI. You
788
00:41:13,250 --> 00:41:16,610
need to be strategic and
diligent in identifying
789
00:41:16,610 --> 00:41:19,870
your use cases and testing
your hypotheses.
790
00:41:19,990 --> 00:41:25,190
You know, as we look
at automating coding,
791
00:41:25,190 --> 00:41:27,970
one of the things we see
is it's very different
792
00:41:27,970 --> 00:41:31,230
based on the volume
of cases, what your
793
00:41:31,230 --> 00:41:35,430
cost to code a case
is, and ultimately what
794
00:41:35,430 --> 00:41:38,850
the risks are if you
bypass the human.
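A minimal sketch of the break-even math Lorri describes here, with purely illustrative numbers. The case volume, per-case costs, automation rate, and rework risk below are assumptions for the sketch, not figures from the webinar:

```python
# Hypothetical break-even check for coding automation ROI.
# Every figure here is an assumed placeholder, not a benchmark.

annual_case_volume = 200_000        # cases coded per year (assumed)
manual_cost_per_case = 4.00         # fully loaded cost to code a case manually (assumed)
automation_cost_per_case = 0.75     # per-case technology cost (assumed)
automation_rate = 0.60              # share of cases coded without human touch (assumed)
error_rate_when_bypassed = 0.02     # automated cases that later need rework (assumed)
rework_cost_per_case = 40.00        # downstream denial/rework cost per error (assumed)

automated_cases = annual_case_volume * automation_rate
gross_savings = automated_cases * (manual_cost_per_case - automation_cost_per_case)
risk_cost = automated_cases * error_rate_when_bypassed * rework_cost_per_case
net_savings = gross_savings - risk_cost

print(f"Automated cases:   {automated_cases:,.0f}")
print(f"Gross savings:     ${gross_savings:,.0f}")
print(f"Risk/rework cost:  ${risk_cost:,.0f}")
print(f"Net annual impact: ${net_savings:,.0f}")
```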
795
00:41:38,850 --> 00:41:41,310
And so we talk a lot
with data scientists
796
00:41:41,310 --> 00:41:46,470
about how it's not just about
how to automate what
797
00:41:46,470 --> 00:41:50,510
has traditionally
taken a human role to
798
00:41:50,510 --> 00:41:54,730
perform, but it's also
about training the
799
00:41:54,730 --> 00:41:59,130
models to understand
when a human should touch
800
00:41:59,190 --> 00:42:03,370
a case. So the best
AI strategy isn't just
801
00:42:03,370 --> 00:42:06,430
about AI, and I think
that's so important.
802
00:42:07,290 --> 00:42:09,570
It's about maximizing the value
of your investments
803
00:42:09,570 --> 00:42:12,390
in automation and unlocking
the full potential
804
00:42:12,390 --> 00:42:16,510
of the AI you're
investing in. Is early
805
00:42:16,510 --> 00:42:18,830
adoption of AI
technologies worth the
806
00:42:18,830 --> 00:42:21,070
risk for you? It's
changing the health care
807
00:42:21,070 --> 00:42:23,510
industry, but not all
organizations are
808
00:42:23,510 --> 00:42:26,110
adopting at the same
pace. And Katie talked a
809
00:42:26,110 --> 00:42:28,970
little bit about that
earlier, and about learning
810
00:42:28,970 --> 00:42:31,210
from other industries
that may have been
811
00:42:31,210 --> 00:42:34,410
using AI longer. And
then AI will be a
812
00:42:34,410 --> 00:42:36,570
key differentiator
for healthcare. It's
813
00:42:36,570 --> 00:42:39,270
likely not going away.
But I do believe we'll
814
00:42:39,270 --> 00:42:41,950
see use cases that
work extremely well,
815
00:42:42,150 --> 00:42:45,250
use cases that are
not quite ready. And
816
00:42:45,250 --> 00:42:47,690
ultimately, I think whether
you're the vendor or
817
00:42:47,690 --> 00:42:50,250
the healthcare
organization, we'll see a
818
00:42:50,250 --> 00:42:52,690
need to evolve our
people, our process, and
819
00:42:52,690 --> 00:42:56,370
our technologies to
reach the end of the journey.
820
00:42:56,370 --> 00:42:58,750
Katie, would you like
to add anything here?
821
00:42:59,590 --> 00:43:02,370
Well, first, I'd like
to say to everyone,
822
00:43:02,370 --> 00:43:04,810
I apologize for the
technology gremlins
823
00:43:04,810 --> 00:43:06,710
that we've had today.
It seems that both
824
00:43:06,710 --> 00:43:09,690
Lorri and I have had
a bit of a hiccup.
825
00:43:10,150 --> 00:43:13,970
You know, if you'd
allow me to, I'd love to
826
00:43:13,970 --> 00:43:16,350
go back and talk a
little bit just about
827
00:43:16,870 --> 00:43:19,650
governance. I think this
is one of the things
828
00:43:19,650 --> 00:43:23,190
that I'd love to
share with the
829
00:43:23,190 --> 00:43:26,030
group, just where
we are along our
830
00:43:26,030 --> 00:43:28,630
journey because I
really think as an
831
00:43:28,630 --> 00:43:32,630
industry, we're in what
I'll call the early innings
832
00:43:32,630 --> 00:43:35,230
here, and it's one of
the things that will
833
00:43:35,230 --> 00:43:38,490
get better if we
share. So from a
834
00:43:38,490 --> 00:43:42,550
Banner perspective, what
we've done thus far is that
835
00:43:42,550 --> 00:43:46,410
we've really created
a process where we
836
00:43:46,410 --> 00:43:48,890
start with vendor
837
00:43:48,890 --> 00:43:51,800
evaluation, an AI evaluation.
It's a questionnaire.
838
00:43:51,800 --> 00:43:54,470
It's really 25
questions across five
839
00:43:54,470 --> 00:43:57,290
different categories.
And those categories
840
00:43:57,290 --> 00:44:01,160
are around usefulness,
usability, and efficacy.
841
00:44:01,210 --> 00:44:04,110
So we ask in that
section questions
842
00:44:04,110 --> 00:44:06,670
like: How have you
addressed potential concerns
843
00:44:06,670 --> 00:44:09,890
around AI reliability,
bias, and fairness?
844
00:44:10,050 --> 00:44:12,930
Explain the measures
that are implemented
845
00:44:12,930 --> 00:44:15,150
to build user trust.
So a lot of the things
846
00:44:15,150 --> 00:44:17,310
that we've talked about
on this webinar
847
00:44:17,310 --> 00:44:20,750
thus far, around
trust and
848
00:44:20,750 --> 00:44:25,170
ethical uses of AI.
Another category we look
849
00:44:25,170 --> 00:44:28,290
at is fairness, equity,
and bias management.
850
00:44:28,290 --> 00:44:31,710
So we ask questions
like: How have you
851
00:44:31,710 --> 00:44:34,210
comprehensively identified
potential sources
852
00:44:34,210 --> 00:44:36,790
of bias,
mapped out socio
853
00:44:36,790 --> 00:44:39,670
-demographic groups at
Risk, developed robust
854
00:44:39,670 --> 00:44:42,270
strategies to proactively
monitor, detect
855
00:44:42,270 --> 00:44:45,410
and mitigate potential
discriminatory impacts
856
00:44:45,410 --> 00:44:48,190
across different
population segments.
857
00:44:48,410 --> 00:44:51,630
So, as Lorri
talked about
858
00:44:51,630 --> 00:44:55,150
before, it's not
just about whether
859
00:44:55,150 --> 00:44:57,190
we have
hallucination, but are
860
00:44:57,190 --> 00:45:01,450
there any biases that
were not intended but
861
00:45:01,450 --> 00:45:03,910
are showing up
because of the use of
862
00:45:03,910 --> 00:45:07,050
that AI? How do we
monitor for that so
863
00:45:07,050 --> 00:45:10,450
that we have awareness
before we get to
864
00:45:10,450 --> 00:45:13,130
an outcome that we
weren't anticipating?
865
00:45:13,470 --> 00:45:16,370
Another area we look
at is safety and
866
00:45:16,370 --> 00:45:19,690
reliability. So questions
around user
867
00:45:19,690 --> 00:45:21,910
controls and safety
mechanisms. What
868
00:45:21,910 --> 00:45:24,430
safeguards have
you put in place to
869
00:45:24,430 --> 00:45:26,710
make sure that there's
human oversight?
870
00:45:27,580 --> 00:45:29,620
We also look at
transparency,
871
00:45:30,310 --> 00:45:33,090
interoperability,
and accountability.
872
00:45:33,670 --> 00:45:35,830
Questions like:
Describe your
873
00:45:35,830 --> 00:45:38,130
comprehensive documentation
strategy, which Lorri
874
00:45:38,130 --> 00:45:40,610
talked about
earlier, including
875
00:45:40,610 --> 00:45:42,510
how you will
maintain transparency
876
00:45:42,510 --> 00:45:45,470
about the provenance
of that data,
877
00:45:45,470 --> 00:45:48,730
your model limitations,
and decision thresholds.
878
00:45:48,730 --> 00:45:51,570
And then finally,
a lot of questions
879
00:45:51,570 --> 00:45:54,770
around security and
privacy. So ensuring
880
00:45:54,770 --> 00:45:57,590
that we've got third
-party risk management.
881
00:45:58,050 --> 00:46:01,730
These are things that
I think are important
882
00:46:01,730 --> 00:46:05,810
from a governance
perspective: that we're
883
00:46:05,810 --> 00:46:09,470
understanding how
our vendors are
884
00:46:09,470 --> 00:46:13,470
leveraging this AI and
what controls
885
00:46:13,470 --> 00:46:16,230
they have in place,
and then, from there,
886
00:46:16,230 --> 00:46:20,730
on all
the information that we
887
00:46:20,730 --> 00:46:25,030
gleaned from this
questionnaire, we create
888
00:46:25,030 --> 00:46:27,630
a score so that we
understand where we're in
889
00:46:27,630 --> 00:46:30,110
alignment with our
standards and where there are
890
00:46:30,110 --> 00:46:33,410
some gaps. And we have
an AI committee that
891
00:46:33,410 --> 00:46:36,650
reviews all of the
use of AI. They review
892
00:46:36,650 --> 00:46:41,250
everything that comes
through for new contracting.
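A hypothetical sketch of how the questionnaire score Katie describes might be aggregated. The five category names follow her description; the 1-to-5 scale, equal weighting, and gap threshold are assumptions added for illustration, not Banner's actual rubric:

```python
# Illustrative aggregation of a vendor AI questionnaire (25 questions, 5 categories).
CATEGORIES = [
    "usefulness_usability_efficacy",
    "fairness_equity_bias_management",
    "safety_reliability",
    "transparency_interoperability_accountability",
    "security_privacy",
]

def score_vendor(responses: dict[str, list[int]], gap_threshold: float = 3.0):
    """responses maps each category to its per-question scores (1-5).
    Returns the overall average, per-category averages, and categories
    falling below the gap threshold."""
    category_scores = {cat: sum(s) / len(s) for cat, s in responses.items()}
    overall = sum(category_scores.values()) / len(category_scores)
    gaps = [cat for cat, s in category_scores.items() if s < gap_threshold]
    return overall, category_scores, gaps

# Example with made-up answers: five questions per category, 25 in total.
example = {cat: [4, 4, 3, 5, 4] for cat in CATEGORIES}
example["fairness_equity_bias_management"] = [3, 2, 3, 2, 3]
overall, by_category, gaps = score_vendor(example)
print(f"Overall score: {overall:.2f}  Gaps flagged: {gaps}")
```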
893
00:46:41,270 --> 00:46:44,390
And there's robust
conversation around
894
00:46:44,390 --> 00:46:47,170
where those gaps are,
the use of our data,
895
00:46:47,390 --> 00:46:51,290
any gaps in
controls. So I wanted
896
00:46:51,290 --> 00:46:53,800
to share this
because I think it's
897
00:46:53,800 --> 00:46:56,250
incumbent upon
all of us to make
898
00:46:56,250 --> 00:46:58,860
sure that we're
continually retooling
899
00:46:59,690 --> 00:47:03,850
our governance,
for AI and
900
00:47:03,850 --> 00:47:05,750
for all of the
newness, right,
901
00:47:05,750 --> 00:47:08,590
that's coming into
our atmosphere.
902
00:47:08,590 --> 00:47:11,310
It's not just that,
you know, AI is
903
00:47:11,310 --> 00:47:13,550
going to happen. So
how do we do it right?
904
00:47:13,550 --> 00:47:15,490
How do we do it with
integrity? And how
905
00:47:15,490 --> 00:47:17,510
do we have the right
controls in place?
906
00:47:18,090 --> 00:47:19,330
Thank you for allowing
907
00:47:19,330 --> 00:47:20,790
me to go backwards, Lorri.
908
00:47:21,450 --> 00:47:22,710
Thank you, Katie.
909
00:47:26,700 --> 00:47:27,910
So as we come to the
910
00:47:27,910 --> 00:47:30,470
conclusion of
today's webinar,
911
00:47:30,470 --> 00:47:33,290
I just wanted Katie
to chime in and give
912
00:47:33,290 --> 00:47:35,410
us some tips and
tricks around
913
00:47:35,410 --> 00:47:38,110
evaluating your
automation capabilities.
914
00:47:39,570 --> 00:47:42,190
So, you know, we've
talked a lot about
915
00:47:42,190 --> 00:47:46,970
these notions of where
we can use AI to overcome
916
00:47:46,970 --> 00:47:49,290
some of our
documentation challenges.
917
00:47:49,410 --> 00:47:53,930
Lorri talked a lot
about the trust in the
918
00:47:53,930 --> 00:47:57,860
data. She gave a great
example of when AI
919
00:47:57,860 --> 00:48:02,410
might be reading things
in a way that leads
920
00:48:02,410 --> 00:48:05,690
to a degradation in
trust and, frankly,
921
00:48:05,690 --> 00:48:10,510
beyond that, to outcomes
from a data perspective
922
00:48:10,510 --> 00:48:12,110
that we weren't
anticipating. So how
923
00:48:12,110 --> 00:48:14,850
do we put the right
controls in there to
924
00:48:14,850 --> 00:48:17,550
drive the outcomes
that we're looking for?
925
00:48:18,230 --> 00:48:21,110
We've talked about this
notion of resistance
926
00:48:21,110 --> 00:48:25,710
to change and how we
can overcome some of
927
00:48:25,710 --> 00:48:29,330
that, how we can help
our teams to lean
928
00:48:29,330 --> 00:48:33,990
into AI to make it
less scary and really
929
00:48:33,990 --> 00:48:37,830
identify ways that our
teams can engage in a
930
00:48:37,830 --> 00:48:42,930
way that opens up that
creativity and thinking
931
00:48:42,930 --> 00:48:45,290
about how do we really
use these things
932
00:48:45,290 --> 00:48:47,390
to transform the way
we work. It's not
933
00:48:47,390 --> 00:48:50,510
replacing, it's transforming
the way we work.
934
00:48:51,050 --> 00:48:54,010
And then in
terms of risk,
935
00:48:54,010 --> 00:48:57,130
we just talked about
some of the things
936
00:48:57,130 --> 00:48:59,190
that, from a governance
perspective, are
937
00:48:59,190 --> 00:49:03,390
some of the ways that
we can really open up
938
00:49:03,390 --> 00:49:06,590
deeper understanding
of the AI that we're
939
00:49:06,590 --> 00:49:09,930
looking to put into
place, really scrutinizing:
940
00:49:09,930 --> 00:49:13,790
do we have the right
controls in place? Do
941
00:49:13,790 --> 00:49:16,430
we have the right
awareness of how this
942
00:49:16,430 --> 00:49:19,570
is going to integrate
into our workflows?
943
00:49:19,840 --> 00:49:22,290
What are the change
management steps that
944
00:49:22,290 --> 00:49:25,550
need to occur as we
deploy some of this AI?
945
00:49:26,070 --> 00:49:28,470
And then, I think
most importantly,
946
00:49:28,470 --> 00:49:32,190
talking about the
ethical use of AI. This
947
00:49:32,190 --> 00:49:36,090
is something that
again, we are
948
00:49:36,090 --> 00:49:40,050
all evolving in terms
of understanding
949
00:49:40,050 --> 00:49:42,050
all of the
different use cases.
950
00:49:42,550 --> 00:49:45,330
There's been a lot
of focus on revenue
951
00:49:45,330 --> 00:49:47,610
cycle because this
is one of the areas
952
00:49:47,610 --> 00:49:50,210
that I think has a
lot of early promise
953
00:49:50,210 --> 00:49:53,930
for how we leverage
these technologies
954
00:49:53,930 --> 00:49:57,430
to really transform
the way we work, while
955
00:49:57,430 --> 00:49:59,530
making sure that
we're doing it, again,
956
00:49:59,530 --> 00:50:02,630
in a very ethical
and responsible way.
957
00:50:03,270 --> 00:50:05,950
So I appreciate
everyone's time today.
958
00:50:05,950 --> 00:50:08,310
Thank you, Lorri, for
partnering with me.
959
00:50:08,310 --> 00:50:11,190
And I will turn it
back over to Marie.
960
00:50:12,170 --> 00:50:14,290
Awesome. Thank you,
Katie. Thank you both
961
00:50:14,290 --> 00:50:16,790
for an excellent
presentation. And just as a
962
00:50:16,790 --> 00:50:18,850
reminder to our audience,
you can still submit
963
00:50:18,850 --> 00:50:20,830
questions through
the Q&A widget at the
964
00:50:20,830 --> 00:50:23,510
bottom of your console
player. It already
965
00:50:23,580 --> 00:50:26,450
may be open and appear
on the left side of
966
00:50:26,450 --> 00:50:28,870
your screen. And just
note that all questions
967
00:50:28,870 --> 00:50:30,630
are going to be
collected and shared with
968
00:50:30,630 --> 00:50:33,170
our presenters to follow
up after the event.
969
00:50:33,350 --> 00:50:35,570
So that's all the time
we have for today.
970
00:50:35,570 --> 00:50:37,850
I want to thank Lorri
Sides and Katie LeBlanc
971
00:50:37,850 --> 00:50:39,930
once again for an
excellent presentation
972
00:50:39,930 --> 00:50:42,110
and to our sponsor,
Optum, for making
973
00:50:42,110 --> 00:50:44,690
the whole program
possible. And finally,
974
00:50:44,690 --> 00:50:46,570
thank you to our audience
for participating
975
00:50:46,570 --> 00:50:48,630
today. We hope you'll
join us in the future
976
00:50:48,630 --> 00:50:50,630
for another HealthLeaders webinar. And
977
00:50:50,630 --> 00:50:52,630
that concludes today's
program. Thanks.