In this webinar, learn how upstream changes to protocol design, coupled with electronic health records and other data sources, can drive vast improvements in patient recruitment, ultimately saving time, money, and effort.
Hear from senior clinical operations executives and innovative thought leaders on the latest innovations that are modernizing clinical research.
Danny McCarthy (00:01):
Hello, hello, hello, my name is Danny McCarthy and welcome to the Solving Trial Recruitment Challenges through Unlocking Data webinar. If you are here for this webinar, you are in exactly the right place. We will be getting started now. So I'm really pleased to welcome you all to this webinar to learn about addressing recruitment challenges. This webinar demonstrates how changes can be made upstream to protocol design, coupled with electronic health records and other data sources, to really make vast improvements in patient recruitment and ultimately save time, money, and effort. This webinar is being put on by DPHARM. The DPHARM Disruptive Innovations to Modernize Clinical Research Conference takes place September 20th to the 22nd in Boston. This is an unparalleled opportunity to hear from senior clinical operations executives and innovative thought leaders on the latest innovations that are really modernizing clinical research.
(01:04):
To learn more, including the full agenda and speaking faculty, you can visit dharmconference.com. So I want to take a second to thank Optum. This webinar would not be possible without Optum, and I'm really pleased and really excited for everyone to see how they are making gains in patient recruitment through the use of data. So just a brief overview of today's schedule. I'll be introducing today's speakers. They'll be going over the current challenges facing trial recruitment, how data can be leveraged to solve for those challenges, and then a discussion between Optum and Biogen about what this looks like in practice, lessons learned, and how this can be applicable to your own work in solving trial recruitment. We will have a live audience Q&A. At the bottom of your screen there is a section for chat; that is where you can put in your questions and comments throughout the webinar.
(02:02):
We will be getting to those questions. We will be collecting those, and anything that we don't get to, we will be passing along to our amazing panelists to answer and send back. So please, as you're watching, put in those questions and comments; we would love to answer them, and we will have the opportunity to do so. So let's begin. I am going to ask that our three panelists, Scott, Tracy, and Jenny, come on screen, and I'll introduce them. Scott Morris leads the Clinical Trial Solutions business at Optum Life Sciences. He's previously held leadership roles at Change Healthcare and IQVIA. His expertise cuts across life science research and the provider market segments, and he is experienced in addressing many of the challenges surrounding clinical trial conduct and operations, particularly when it comes to improving the diversity of patients that participate.
(02:55):
So welcome, Scott. Tracy comes to us also from Optum. She runs clinical operations for their Clinical Trial Solutions business, including overseeing all of the research projects that run through the Optum Digital Research Network. Before Optum, she oversaw clinical trials at the University of Wisconsin-Madison, where she gained experience on the trial side across a number of different therapeutic areas. She also has experience implementing CTMS and EDC systems, and she has deep expertise in the types of data and systems that are used to support clinical trial operations. And welcome, Tracy. And then I'm really pleased to also bring on Jenny Higley. Jenny Higley comes to us from Biogen. She leads the Biogen Feasibility Center of Excellence. Her team supports operational and scientific teams in developing data-driven operational strategies, trial enrollment modeling, and performance projections in the early-engagement clinical trial space.
(03:54):
Prior to her most recent role at Biogen, she worked in various aspects of clinical trial strategy, including feasibility, data-driven protocol optimization, and interactive strategy planning, as well as strategic business development. She held those roles at both sponsor organizations and CROs, so she brings multiple vantage points to the conversations that we'll be having today. Again, today's topic for the webinar is solving clinical trial recruitment challenges, and in particular leveraging data to do so. We've brought these speakers together because for the last year or so, Biogen and Optum have been partnering on addressing some of the challenges in this space. For the next 45 minutes, they'll be having a conversation about what they've been working on and some of the key tools they've used in the process, and then we can open it up for audience questions. So again, please utilize the chat feature for doing so. With that, I'm going to kick it off to Scott to set the stage and lead us into the next part of the discussion. Scott, can you start by telling us a little bit about what the challenges in recruitment look like right now?
Scott Morris (04:58):
Happy to, Danny. Thank you very much for the intro, and to DPHARM for having us on this webinar, and to all of those who have taken a little bit of time out of your busy schedules to join on a really important topic that we're dealing with in the industry, and have been for some time. I'm sure many of you are familiar with the statistics on the left. Many trial sites don't hit their recruitment targets or don't recruit any patients at all. There's been an uptick in the number of eligibility criteria for a given protocol, and we're still seeing a considerable number of protocol amendments per trial, which we all know are expensive. So none of this is new news. What we're focused on at Optum is addressing the root-cause drivers of those recruitment challenges, and that's what you see on the right-hand side of the slide.
(05:55):
So the issues on the left, around research sites struggling with enrollment, the complexity of eligibility criteria, and amendments, are well known, right? What's on the right are things we want to spend a little bit of time not just talking about, but solving. So how are we solving for that? I'll break it down into three simple areas, plus a fourth that will be solution focused. The first is just dealing with the complexity of today's clinical trials: inclusion and exclusion criteria, more components, challenges to drive more inclusivity and diversity, and additional access points to reach patients. So there's a complexity matter that we're dealing with. Then the site burden matter really relates to the increased expectations we have of the sites, not just from technology, but technology and workflow. And then finally, there's what I'll call the fewer-more-fewer issue.
(07:07):
You've got arrows going in different directions. Fewer patients are eligible for any given trial, given the complexities of the trials. More trials are running, so there's an increased number of trials underway. And finally, the number of investigators is decreasing. So again, what we're dealing with is the fewer-more-fewer as the third issue in this whole competition for patients and investigators. As so many of you on the line could attest, it just feels like we've got these market forces underway, and it's creating a significant mismatch between supply and demand in this space. So with that, Jenny, we'd love to get your take here, because you've been in the shoes of professionals at sites, at CROs, and at sponsors. How does it feel to be in those roles and have to deal with these challenges, and how have you tried to mitigate them? How do you like that multipart question we throw at you right off the bat?
Jenny Higley (08:18):
Yeah, thanks for that, Scott. Yeah, I've certainly felt a lot of those challenges and I think anyone who has experience working at a site or a CRO or a sponsor, I know you guys have felt them too. So the issues really do cut across all segments and stakeholder groups involved in trials. It's not just feasibility, it's not just those who are doing protocols. It's really all of us and those of us who are recruiting patients, especially once we get the trial started. So in terms of mitigating these issues, one of the big things that my team has really focused on is trying to bring feasibility analysis further upstream in the planning process. So we're really trying to find ways to vet and pressure test our protocols in advance as much as possible and do more refinement of the protocol before it's finalized. So we're really a lot more confident when the protocol is designed that we're actually going to have a population that's recruitable and that we've got the data to back that up.
(09:25):
In some cases, as a lot of you on the call might know, you finalize your protocol, you run your feasibility, you start recruitment, and then that's when you realize you're running into challenges finding patients who can actually participate. Some of that has to do with the recruitment components themselves, but a lot of it actually has to do with the protocol design and maybe some features or design elements that could have been caught sooner. I know a lot of instances where you realize late in the game that there are fewer patients eligible for your protocol than you expected originally, and that sometimes comes as a surprise. The more we can vet those criteria in advance and really evaluate whether we're targeting recruitable patients, the better we can set ourselves up for success further down the line. So overall, I think we're aware of this issue and we're making progress, but it's still easier said than done.
Tracy (10:35):
Jenny, can you unpack that a bit? I hear that from a lot of the pharma sponsors and clients that we work with. Why is this so persistent? Why is this still ongoing?
Jenny Higley (10:46):
Well, I'll speak from my personal experience, Tracy, but I think a lot of other folks on the call are probably experiencing the same things. I think a lot of it comes down to the processes that we've always used and the culture around change management. It's not always about the data, or about having the best-designed protocol. There's almost a root-cause challenge that you could add to your list, Scott, and that would be shifting the way we operate, really the way we conduct feasibility. It's really difficult work for a lot of reasons. Part of that is who is involved in conducting the feasibility. In general, folks like my team that do the feasibility are not necessarily the ones who are writing the protocol. We're in separate groups, separate teams, and we're not always set up in a way that lets us collaborate with each other effectively and as often as we should.
(11:51):
So you've got the folks who are running the feasibility analysis, and then in some cases the teams that develop the protocols themselves might be a bit hesitant to make changes, because they're basing their protocol design on scientific knowledge and their expertise. So there are competing priorities sometimes. The other part, like I said, is a little more process related, historically. Especially when I was on the CRO side of the business, feasibility often happened when a protocol was totally finished. So there really was no crossover between the protocol and the feasibility, and there was not that opportunity to look at protocol design before feasibility was even started.
(12:48):
And just to finish out those thoughts, the other big piece is the data component. Having been in feasibility in the clinical research space for a lot of years now, that's really something I've seen change and evolve a lot over the last few years. The data we've used for feasibility has generally been historical data from past clinical trials, and what we're really working toward and evolving toward now is real-world data, especially electronic health record data and claims data, and that's giving us an additional vantage point that we haven't had before. So I hope that helps answer your question, Tracy.
Scott Morris (13:37):
Jenny, by the way, those are all great points. I think we can all relate to the struggle of trying to establish new processes and really build new habits as a team, because we as humans tend to struggle with change and with adopting and adapting to new things, but we're being forced to, and it's important. Tracy and Jenny, can you talk a little bit more about the data piece in particular? What is it about EHR data specifically that helps with these issues we're talking about here?
Jenny Higley (14:12):
Yeah, I'm happy to start on that one, Scott. So to me, whether you're using claims or EHR, you're working with real-world data, which is relatively new, but the real advantage I've seen for EHR when you're comparing those two in particular is that you're getting a ton more detail. One example I can give you: let's talk about labs. If I'm working with claims data, I can see whether patients have had a lab or not had a lab, and that's about the level of detail I can get. With EHR data, you can see not only whether they've had a lab run, but also the value of that lab. So when you're looking at the eligibility criteria in your protocol, knowing the cutoff for a lab value and using EHR data for that purpose is going to be really valuable.
Tracy (15:16):
And I agree, Jenny, I think that's a big one. I think the other key value of electronic health record (EHR) data is the unstructured physician notes. At Optum, we have a team dedicated to natural language processing, referred to as NLP, that takes that unstructured information from physician notes and transforms it into a structured, searchable, filterable data set, which we can then use for our feasibility to get an even clearer picture of those individual patient characteristics and who is likely to qualify for a trial. I'll also point out, and maybe this is a bit of a teaser, that one of the other ways Optum supports study recruitment is that, with the right data permissions and the right relationships with the sites, we can utilize our electronic health record data to create a list of a site's own patients for a given study, so they can approach those patients and recruit them into the clinical trial, closing the last mile between the study and the participant.
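As a toy illustration of what that structuring step accomplishes (this is not Optum's actual NLP pipeline; the note text, function name, and regex approach here are all hypothetical), pulling one lab value out of a free-text note might look like:

```python
import re

# Hypothetical sketch: turn one fact buried in a free-text physician note
# into a structured, filterable value. Real clinical NLP uses trained models,
# not a regex; this only illustrates the unstructured-to-structured idea.
NOTE = "Pt seen today. ALT 62 U/L, trending up. BP 148/92. Continue lisinopril."

def extract_lab(note: str, lab: str):
    """Return (value, unit) for a named lab mentioned in the note, or None."""
    match = re.search(rf"\b{re.escape(lab)}\s+(\d+(?:\.\d+)?)\s*([A-Za-z/%]+)", note)
    if match:
        return float(match.group(1)), match.group(2)
    return None

print(extract_lab(NOTE, "ALT"))  # (62.0, 'U/L') -- now searchable and filterable
```

Once a value like that is structured, it can feed the same kind of cohort filters that structured EHR fields do.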
Jenny Higley (16:20):
Yeah, I agree, Tracy, exactly what you said. Closing that last mile between the protocol and the patient is a really big deal, and I think that's huge. And then there's maybe the first part of that mile, too: the goal is to start this process using this kind of data really early in protocol development, not doing feasibility after your protocol is done, so we can really optimize our protocols.
Scott Morris (16:51):
Absolutely love those. You see a theme emerging here: first mile, last mile; start well, finish well. I think that's really good framing. It's also a great segue, by the way. Tracy, I'm going to turn it over to you and have you walk us through what all of this looks like in action, going back to the first mile. With that, Tracy, let's have you lead us through a little bit of a walkthrough here.
Tracy (17:22):
I can do that. Thank you, Scott.
Scott Morris (17:24):
Thank you. Great.
Tracy (17:25):
So what I'm about to show you is a tool called Prospector, which we've already alluded to earlier today. Prospector is a tool that we've built at Optum to support feasibility and protocol optimization. It allows us to work with EHR data from over 110 million patient lives across the US, all statistically de-identified of course, and build patient cohorts that reflect the study protocols. It supports various levels of protocol complexity: you can keep it simple if you want something quick and dirty, but you can also get really granular in how you define those inclusion and exclusion criteria. It doesn't require you to know how to code in SQL or how to program; it's pretty intuitive to use. I'll share my screen here, and just to give you a sense of how this works, I'm going to walk you through an example that I've built of uncontrolled hypertension.
(18:27):
So this is what our Prospector tool looks like. Each box is a data set. The orange boxes are criteria that you have selected, and the blue boxes are how you connect or link them. You can see it allows the use of AND, typically used for inclusion or combinations of different criteria. We have OR, if there are multiple ways to select a criterion. And then we have MINUS, which we typically use for our exclusion criteria. And little by little, over here on the left side of the screen, you can see how the number of potentially eligible patients, those that meet the criteria, is reduced after each criterion is applied, something we often refer to as the patient attrition funnel. And I think one of the great features of electronic health record data, and of the tool that we have, is all the different types of data that can be applied in one tool.
(19:27):
For example, I'm targeting patients with uncontrolled hypertension, and I found patients with a diagnosis of hypertension who were also taking a high-dose anti-hypertensive medication but still have high blood pressure measurements. So this is using all of those different data assets to find that one criterion. We have another example over here, hepatitis (let me make my screen a little bigger), where we can utilize diagnoses, or we can use their most recent lab results. And I think that's a big differentiator from other tools in the market: it allows us to get to much more granular detail and filtering. So I'm going to shift real quick and show you how these data sets are created. We will start with the diagnosis, and just to give you a sense of what that looks like, we'll go down here to the diagnosis area and I'm going to type in hypertension.
(20:39):
And there are a lot of different hypertension-related diagnoses, as you can see by the size of the teeny tiny scroll bar on the right. The first thing that we could do, if we're looking for current or recent ones, is filter by ICD-10 codes. We still have a pretty big number here. So one of the other features that I personally utilize a lot is these different categories. We have the ICD-10 qualifier code description, but then we have different groups that these can be put into. ETG stands for Episode Treatment Group; we have a sub-ETG category, and there are lots of other ways that the data can be sorted and categorized. So if I pull this up and I look at hypertension, for example, I have 31 codes that are in the hypertension category, but maybe I'm looking for more detail than just hypertension.
(21:35):
So in that case, I could easily go in and look at the different ways that hypertension can be broken down: in pregnancy, with heart disease, with chronic kidney disease, and so forth. One of the great features of the tool is that I can select all of them, so I can take this one and pick them all, in which case I would have all of my hypertension codes selected at once. Or I could even pick them all the way at the top: any diagnosis that matches my text search criteria. Each of these data sets has a date range that can be applied, so I'll just click that. And now I've picked my criteria, I've picked my age range, and all of that criteria is now going to the backend, where it's applied to the underlying data, and it returns counts rapidly.
(22:31):
I was trying to finish my sentence, but it only takes a few seconds, and already those patient counts are updated. So that is one. I also want to show you medications. Medications is one of my favorites, because there are a lot more categories and ways that medications can be broken down. So if I start here and I just search for text, it searches that text in any of these different categories, and this is a whole bunch of anti-hypertensive medications that we're finding. We can look by brand name, generic name, or different classes of drugs. As you can see, if I'm looking for all anti-hypertensive medications, I can see that, and there are 3,693 there. Maybe I want to see the different types of anti-hypertensive medications, in which case I can see the different types here, and we can pick all 2,903 of these classifications. We can apply that same date range just as we can in those other data assets, and then we click to apply it to the study. So now we want to know: how do these patients with a diagnosis and a medication look? We use that AND operator, or connector, and we see hypertension diagnosis and hypertension medication, and our count is already there. So that's just a quick example of how these data assets are built. I can tell you I do not have any coding or programming experience. Jenny, does anybody on your team have any coding experience?
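The connector logic in this walkthrough (AND, OR, MINUS) behaves like simple set algebra over patient IDs. As a hypothetical sketch (made-up IDs and criteria, not Prospector's actual engine):

```python
# Hypothetical patient-ID sets for each criterion; real cohorts come from EHR data.
htn_dx  = {1, 2, 3, 4, 5, 6, 7, 8}   # hypertension diagnosis
htn_med = {2, 3, 4, 5, 6, 9}         # on an anti-hypertensive medication
high_bp = {3, 4, 5, 10}              # recent blood pressure above the cutoff
hospice = {5}                        # exclusion: on hospice care

# AND (&) combines inclusion criteria; MINUS (-) removes exclusions.
cohort = (htn_dx & htn_med & high_bp) - hospice

# The patient attrition funnel: the count after each criterion is applied.
funnel = [
    ("hypertension dx", len(htn_dx)),
    ("AND medication",  len(htn_dx & htn_med)),
    ("AND high BP",     len(htn_dx & htn_med & high_bp)),
    ("MINUS hospice",   len(cohort)),
]
for step, count in funnel:
    print(f"{step}: {count}")
```

An OR connector would correspond to set union (`|`), for example qualifying by either a diagnosis code or a lab result.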
Jenny Higley (24:13):
No, we do not have coding experience. So this really comes in handy. I would say we have pretty deep knowledge of protocols and interpretation of protocols, but we are not coders. So yeah.
Tracy (24:25):
I'm there with you. I'm there with you. I'm going to go back to my other study real quick. Something I wanted to allude to, as we talk about my lack of coding and programming experience: this particular tool allows complex studies. It's not limited by the number of data sets or the number of connectors (links, operators, whatever we want to call them). And what I didn't show you earlier is that we also have a whole new set over here, so it can get real big real quick. And literally real quick: it doesn't take long to build. I'll just give you an example. These are patients that I would consider not likely to be good candidates for a study: maybe they have oxygen dependence, maybe they are on hospice care, for example. And there are many different ways that we can find those patients and remove them if they're not ideal candidates for a study. Jenny, I know you and your team have been using this tool for several months now. Can you talk about what you've done historically and how similar or different that is from today?
Jenny Higley (25:40):
Yeah. Historically, what we've been able to do is work with historical trial data and try to make inferences that way, in terms of what enrollment rates might be or what our patient population might look like. But one of the things we're able to do with Prospector is get a lot more granularity through the use of the EHR data and further vetting of the protocol, really allowing us to find areas where the number of patients might be restricted in ways we didn't know about before. One example, Tracy, from a protocol you worked on with us: we were looking at lab values, and we found that our protocol might have been a little more restrictive on lab values than other similar protocols for a similar indication. We were capping our inclusion criteria at the upper limit of normal for a certain blood test, and Tracy was able to use her knowledge of protocols; we compared that to other protocols and found out that we were a bit more restrictive than we wanted to be.
(27:02):
So I think it's really about finding where you might be more restrictive than you think you are, and allowing you to have the protocol design that lets you find the most patients.
Tracy (27:12):
And thank you, Jenny. That's something we've worked on with your team and others in the past. If I look down at my little tree here, we see the patient count slowly dropping, but then we have a huge drop here, where we were screening out liver function tests that were outside of the upper limit of normal. But to your point, Jenny, maybe we can change that to what we might see more traditionally, which is two times the upper limit of normal, for example. So I can quickly update that and see what impact it would have on the study and on the resulting patient population.
(27:58):
So as that's processing, I also want to point out another example. As I described earlier, we were really targeting patients that were on a high dose of anti-hypertensives, so we can easily and quickly see what that looks like if we broaden that. Before I jump up there: we've now only dropped 600 patients instead of half of our patient population with that one criterion. So I'll go back to my anti-hypertensives. What I had selected earlier, when I was demonstrating, was just all of them. So let's look at what impact that has, and then little by little you can very easily test the limits: you can look for a middle-to-high dose, you can look for any medication, for example. There are lots of ways that you can evaluate the data and quickly see the impact on your patient counts. So our patient counts continue to populate as we go. We were starting at 8,000, and now we're starting at 55,000. That's a huge jump that you could dig into little by little, still recruiting that target patient population but not limiting yourselves so much in that particular scope.
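This kind of what-if, relaxing a lab cutoff and watching the eligible count respond, can be sketched as a quick threshold sweep. The ALT values and the 40 U/L upper limit of normal below are illustrative numbers, not data from the demo:

```python
# Illustrative ALT results (U/L) for a candidate pool, with 40 U/L as the
# upper limit of normal (ULN). All values are made up for this sketch.
alt_values = [22, 35, 41, 48, 55, 62, 70, 85, 95, 130]
ULN = 40

# Sweep the eligibility cutoff from 1x to 2x ULN and count who survives.
for multiple in (1.0, 1.5, 2.0):
    cutoff = multiple * ULN
    eligible = sum(1 for v in alt_values if v <= cutoff)
    print(f"ALT <= {multiple}x ULN ({cutoff:.0f} U/L): {eligible} of {len(alt_values)} eligible")
```

Each cutoff corresponds to one step of the attrition funnel, so the sweep shows directly how much of the drop-off a single criterion is responsible for.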
Jenny Higley (29:22):
Yeah, Tracy, just to close it out from a feasibility perspective: it's really all about that patient funnel. We're starting with a large number of patients, and as we add eligibility criteria, you see that number getting smaller and smaller. And if you see a big drop-off of patients somewhere that you didn't expect, that's maybe where you can go in, take a look, and see if that's a criterion you can dig further into and whether there's anything that can be explored there. So for us, it's all about that patient funnel.
Danny McCarthy (30:16):
Wow, that was incredible. I want to bring Scott on again just to see if he has any final comments or anything that he wanted to add, but thank you so much. We're seeing some great questions and comments coming in. But Scott, is there anything else that you wanted to add or include?
Scott Morris (30:30):
Yeah, absolutely. First of all, thank you, Jenny, for sharing your experiences across the multiple settings of your career so far; it's directly applicable to the topics that we've covered. Tracy, thank you so much for showing off your expertise a little bit with the Prospector tool: the ease of use, the precision of the analysis, and really showing what access to real-world electronic health record information, in a usable format with the detail and granularity that is available, does to inform protocol design, feasibility analysis, and candidly, on down the line, site selection. So with that, I think the summary here goes back to our issues. On the complexity of trials, I think we've seen how a number of components here really show how a complex trial can be modeled. Secondly, on site burden: to the extent we can match very precisely and narrow the scope of potential patients that need to be screened for a trial, we can also help that site be really focused on where they should recruit or draw from.
(32:05):
I think that's really important. And then there's dealing with the challenge down the line of fewer patients based on the criteria: we've got to be more targeted, given the increasing number of trials. So honing the eligibility criteria with these types of analyses, and really coming alongside the PIs and the locations where studies are running to provide them key support. Casting a wider net is important. Using tech to our advantage, and using the data and the precision that can be realized through natural language processing: really important. And then ultimately moving from the analysis to activation, that last mile with the patient: really, really important.