
7 Ways to Compress Your Clinical Trial


Transcript

Yoshi: Good morning, good afternoon. This is Yoshi from New York City, and I will be your moderator for today. On behalf of Medidata Solutions and Shyft Analytics, I would like to welcome you to our webinar, Seven Ways to Compress Your Clinical Trial Timelines. Today’s speakers include Barbara Eleshov from Medidata Solutions. After receiving a master’s degree in biostatistics from Harvard University, Barbara worked at the FDA for over 10 years as a statistical reviewer.

Yoshi: Prior to joining Medidata in 2014, Barbara also worked as a statistical consultant to several biotech companies. Currently, Barbara is a senior statistician at Medidata and is based in San Francisco. Good morning, Barbara.

Barbara: Good morning.

Yoshi: Our second speaker is Josh Ransom from Shyft Analytics. Josh received his PhD in biomedical sciences and genetics from the University of Texas. Josh has extensive life science knowledge and industry experience from companies including McKinsey and Quintiles. Currently, Josh is the head of clinical products at Shyft Analytics, where he is co-leading the development of new tech platforms for top biopharma companies. Welcome, Josh, from Boston. How are you today, Josh?

Josh: I’m doing well, thanks.

Yoshi: Great. So, over the next 40 minutes or so, Barbara and Josh will share with you ways to compress trial timelines. In fact, how you can use real world evidence and analytics to make better decisions faster and, ultimately, help patients. To frame today’s conversation, I would like to share with you some results from a recent study. Medidata sponsored a survey of executives within the life sciences industry to better understand analytics in clinical research. This online questionnaire was fielded in April and May of 2017 and received responses from 189 participants. Several interesting findings were identified from this study, and two of them are closely tied to our discussion today. So, here, in front of you, is one of the questions and the findings we want to share with you.

Yoshi: The first finding had to do with the top clinical priorities. When asked what is your organization’s top clinical development priority, reducing trial timelines was identified as the highest, with 32 percent of responses, for companies of all sizes ranging from $250 million in revenue to over $5 billion in revenue a year. So, for those attending today’s webinar, it’s no surprise that this is a top priority. The second finding had to do with data assets. When asked what data could help you achieve your priorities, there are a couple of interesting insights. So, here, in front of you, you can see first that 73 percent of respondents selected data from other clinical trials or literature.

Yoshi: This tells us life science leaders clearly recognize the value of using historical data from past studies to inform future trial plans. As you will hear soon from Barbara and Josh, there are a number of interesting ways that historical data can be leveraged to compress trial timelines. There is also another insight from this question that may not be obvious initially, which we will put a circle around it. So, when you combine EHR data and claims prescription data, you will see that life science leaders view real world evidence as a base opportunity to be leveraged for compressing trial timelines. So, with that, I’m going to turn things over Barbara, our first speaker for today. Welcome, Barbara.

Barbara: Good morning. How many times have you been told that the study timelines need to be shortened? Or maybe you’re the person saying that they need to be shortened. Usually, that means squeezing the time between last patient last visit and database lock, or the time between database lock and submission. We think that you can actually start incorporating features to compress your timelines before the study even starts, when you’re coming up with the design of your study, and keep that momentum going by picking amazing sites that give you really high quality data. The key is to use technology and data analytics all along the way in each of these areas. We came up with seven ideas to tell you about today, and Josh is going to start us off with the first one in the category of study design. Josh?

Josh: Thanks, Barbara. So, when we’re talking about trying to tune your population, we’re really leveraging the real world data sets to try and laser-focus in on the correct patient population. Real world data, from our perspective, provides the research community a rich source of information on patient populations and the diseases of interest.

Josh: The ability to use these data sets to micro-segment the patient population of interest lets you better understand the key outcomes and unmet needs in these populations: you can confirm the magnitude of those unmet needs, ensure the size of the patient population is sufficient for powering the trial, and understand the shape, or topology, of the underlying trial variables and assumptions from those retrospective data sources. You can then model out the trial, even going so far as to feed those assumptions into Monte Carlo simulations, which lets you rapidly iterate on hypothetical trial designs to arrive at the optimal design.
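To make the Monte Carlo idea concrete, here is a minimal sketch of simulating candidate trial designs to estimate power. The endpoint, effect size, standard deviation, and sample sizes are all illustrative assumptions, not numbers from the webinar; a real exercise would draw these from the retrospective data sources Josh describes.

```python
import random
import statistics
from statistics import NormalDist

def simulate_power(n_per_arm, effect, sd, alpha=0.05, n_sims=2000, seed=0):
    """Estimate power of a two-arm trial by Monte Carlo simulation.

    Assumes a continuous endpoint that is normal in both arms, with the
    treatment arm mean shifted by `effect`. Each simulated trial is
    analyzed with a two-sample z-test at significance level `alpha`.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        se = (sd**2 / n_per_arm + sd**2 / n_per_arm) ** 0.5
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims

# Iterate over hypothetical designs to find the smallest adequately powered one.
for n in (50, 100, 150):
    print(n, simulate_power(n, effect=0.4, sd=1.0))
```

Swapping in endpoint distributions, dropout rates, or adaptive rules estimated from real world data is the same loop with a richer per-trial simulation.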

Josh: It allows you to mitigate delays associated with operational and clinical risks, both for the individual trial and for the overall clinical development program in its totality. And from this, we believe you also get the added benefit of building a stronger case that there is a burden of disease to address and, hopefully, that your investigational intervention will address said burden. Now, related to that, it’s not simply a matter of focusing on the genotypic data from real world data, but also leveraging the phenotypic data and combining them.

Josh: Now, today, the real world data sets genotypic data is lighter on biomarkers, genetics, omics while the genomic data sets are generally lighter on the depth or at least the longitudinality of the phenotypic information.

Josh: And as the genomics becomes a greater part of the basic clinical practice that we see happening already in cancer and certain other specialty indications, that the integration, the overall virtuous feedback loop between the phenotypic and genotypic will really allow you to further refine in on that patient population based off of No. 1. But there are also added areas that are potential new ways. And I’m going to let Barbara, I think you’ve got some great examples to talk about here.

Barbara: Adaptive trial design is one that includes a pre-planned opportunity to modify the trial as data accumulates. One of the most famous examples of adaptive trial design is the I-SPY trial. I’m sure you’re all familiar with it, or at least have heard of it, because it received a lot of publicity, being hailed as a groundbreaking trial. Basically, they used biomarkers to target the right drug to each patient and used data from one set of patients’ treatments to guide the treatment of subsequent patients, which helped them to quickly eliminate ineffective treatments. It was kind of like a combination of adaptive trial design and biomarker identification all rolled up in one. I think that, in this new age of clinical trials, studies that aren’t personalized and adaptive are going to have trouble getting people to enroll. What’s in it for patients? Having a 50/50 shot at being on a suboptimal therapy? There are companies out there now who help patients find all of the studies that they’re eligible for.

Barbara: Patients are naturally going to gravitate towards the trials that are best for them. You should use adaptive trial designs to make your trial as attractive as possible to potential patients. Given that clinical trials are using biomarkers more and more, Josh and I wanted to pose this next question to you all. Have you ever been involved in using real world evidence to identify new biomarkers? It looks like only two people out of fifty-one have used real world evidence to identify new biomarkers. Josh, do you want to speak to that at all?

Josh: I can answer that I have done this before. I can’t name the client, but we were leveraging information from large claims data sets, as well as from public registries, to develop a risk algorithm for a certain cancer. We used that phenotypic profile to come up with a new algorithm for identifying patients who were likely to progress in the disease, and the client is pursuing that and looking into it further. The effort is still ongoing, but it was an exciting opportunity from my perspective.

Barbara: So, the last idea we have for compressing your timelines in the study design category is to make your study less burdensome on the site and the patient. We review a lot of data here at Medidata, and we notice that sites are often confused by the study protocol. They make mistakes, especially when the protocol is complex or has a lot of procedures. It’s important to minimize this burden on the sites so that they understand the protocol better. Recently, I reviewed a study in patients with dementia. The patients were supposed to take a test, and they only had three minutes to complete it. The site, apparently, didn’t remember or didn’t realize that the test was supposed to be timed, so they let everyone take as much time as they needed, and they didn’t catch the mistake until several patients had done this. It was unfortunate because there was no way to go back and fix that data after the fact. So, now, we have a handful of subjects who have a perfect score at baseline. There’s no way to demonstrate that the drug is working on those patients because they have nowhere to go but down.

Barbara: Confusing protocols, or CRS, means the study data is messy and needs more attention. The sites will need more training and retraining, and your study will move at a slower pace. In addition, the sites may not be able to manage as many patients as they would, if the protocol were less demanding. On the other hand, patient burden is a huge factor to consider as well. I reviewed one study where I noticed a large fraction of missing labs at one site. I looked into it and saw that quite a few queries had been opened regarding missing labs. The site’s answers explained that the site had had a problem with a blood draw. They had to stick the patient several times. And sometimes, they gave up leading to missing labs. The study itself has more than your average number of lab draws.

Barbara: And, unfortunately, the missing labs wasn’t the only problem brought on by all of these messy lab draws. I noticed that some of the query answers from the site stated that the patients refused to come in for the last few visits. Well, it was no wonder. Drop outs are counted as treatment failures in a lot of studies. In your study, you need a good reason to do any procedure more than the average amount of time. If you don’t know what the average is for your therapeutic area, there are tools and metrics out there for you to compare your study design against.

Barbara: After your design is all set, the next thing to focus on is choosing your site. Like a lot of things, past performance for a site is a good predictor of future performance. At Medidata, we rank sites according to their performance across thousands of trials. In this graph, we plotted the mean ranking against the variability in the ranking. Some sites did well in some studies but not so well in others. They are shown at the top of the graph where you see their variability is pretty variable. The sites that you want to enroll are those in the lower right hand corner. These sits rank high in performance on a consistent basis. They have very low variability in the performance. It’s always good. But being good in enrollment is not enough. The sites should be located in an area where your patients are. The patients, of course, with the disease and interest in your study.

Josh: So, after we’ve had the chance to really ensure we’re focusing on the right sites, it’s our perspective that one can then leverage the power inherent in the real world data sets, looking across hundreds of millions of patients, to really dive in and ensure that the patients exist at those sites. Instead of relying on self-reported surveys, you can confirm and match the cohort and identify the sites where those patients are being treated, so that you can validate the patient availability: are there patients to recruit, and do the physicians have the ability to recruit them? This, from our perspective, can drastically reduce the number of non-performing sites through that overall view of patient availability and recruitability.

Barbara: Ideally, we would all love to have every trial be a randomized, controlled clinical trial. However, conducting studies with a randomized control group can be ethically challenging or impossible in certain life threatening, serious, or rare disease settings in adults, and especially in children. Think about it: we’re actually randomizing really sick patients to a subtherapeutic arm. We have to hope that they don’t get better on the control arm. That’s a terrible thing to wish for. Our alternative now is to use summaries of old clinical trials from publications to compare our drug against. We have to guess at how our drug would have fared against these active comparators, leaving us with comparisons that are fraught with bias. Imagine how great it would be if we didn’t have to put patients on the active comparator and, instead, every patient was allowed to get the experimental drug. We actually have enough data from previous clinical trials and from real world evidence to recreate a control arm that matches the patients in a clinical trial so closely that the results mimic those from a randomized clinical trial. Josh, can you tell us more about the real world evidence?

Josh: Sure. Building on your example there, we are well aware of examples today where patients go off and join patient communities, discuss the types of treatment profiles and expected adverse events, realize that they are not actually in the investigational arm, and drop out because of that. So, we’re seeing a real problem where patients are deciding, completely rationally and in their own self-interest, to drop out of trials, because their life is on the line regarding which treatment arm they’ve been assigned to. But by leveraging the retrospective data from these past clinical trials and real world data sets, we’re able to get to the point where we can closely match the patients in the actual investigational arm, using techniques like propensity score matching, to confirm that the patient we are investigating matches a patient from the real world data set.

Josh: And look then, at their historical standard of care outcomes, unmet needs, and safety signals and their longitudinal data to be able to check. Now, today, it’s, obviously, still going to require quite a bit for getting the regulators on board with say a pivotal trial that they’re going to allow the use of a synthetic control arm for bringing a study actually to market – or sorry, bringing a drug to market based on its safety profile using synthetic control arms. But leveraging this data, running pilots today to be able to begin this process, is what we’re seeing happening with certain innovative companies that are looking to work with Shyft and Medidata on this topic. And Barbara, I think you had some great examples that you wanted to mention as sell.

Barbara: Yeah. So, we’re not unrealistic. We know that this idea probably won’t be adopted for definitive evidence of safety and efficacy for quite some time. But we can use it for other use cases. For example, you might have a drug that you’re trying to decide whether it should move forward from a Phase 2 to a Phase 3 study. You could use the synthetic control arm to feel more informed about your decision. Another example where you might use this is if you have a completed study already with two arms, but your control arm patients dropped out because they knew they were on standard of care, and they didn’t want to be on standard of care.

Barbara: You could use the synthetic control arm to supplement the data, so that your study was not a total loss. You get more power that way. Finally, you might have a study in which your patients are not representative of a particular part of the world, age group, or ethnicity. You could use the synthetic control arm to increase the power of your study in that situation. So, there are many ways that you can use this to shorten the timeline of a given study or an entire drug program. These are just a few of them. So, we were wondering: do you have any plans to consider using historical data or real world evidence in place of a control arm? You can answer in the panel on the right. So, a lot of people didn’t answer. Eight people said they’re either using it now or plan to use it in the future. That’s exciting.
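The power gain Barbara mentions can be illustrated with a standard normal-approximation power formula for comparing two means. All numbers here are illustrative assumptions; the point is only that enlarging the control arm with matched synthetic controls raises power without recruiting anyone new.

```python
from statistics import NormalDist

def approx_power(n_treat, n_control, effect, sd, alpha=0.05):
    """Normal-approximation power of a two-sample comparison of means.

    Power = Phi(effect / SE - z_crit), where SE is the standard error
    of the difference in means. Illustrative only.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = (sd**2 / n_treat + sd**2 / n_control) ** 0.5
    return NormalDist().cdf(effect / se - z_crit)

# 80 treated patients; a control arm thinned by dropout versus one
# supplemented with matched synthetic controls (hypothetical sizes).
print(round(approx_power(80, 40, effect=0.5, sd=1.0), 3))   # depleted controls
print(round(approx_power(80, 120, effect=0.5, sd=1.0), 3))  # supplemented
```

Tripling the control arm here moves power from roughly the low 0.7s to above 0.9, which is the kind of rescue Barbara has in mind for a study with heavy control-arm dropout.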

Josh: Indeed. So, continuing the conversation, let’s get into the area of data errors and error identification. From Shyft’s perspective, we think there is a real opportunity to leverage technology, leveraging the latest in user-centric design and new technology frameworks, not just to optimize error identification but also to provide the right kind of guided workflows to prevent errors in planning and in the use of analytic methods throughout the trial, once you’ve gone through data lock. Being able to empower individuals who may not be as familiar with command-line coding, through graphical user interfaces and guided workflows, can, we believe and we see, really powerfully reduce the number of errors introduced through the data and analysis process.

Barbara: At Medidata, we review data for data quality on a regular basis, and we find all kinds of unusual errors in the data. Usually, the errors are easy to spot, like this one here. This error was identified by an algorithm that looks for strong relationships between pairs of variables. It found a relationship between two dates in the study: the date of diagnosis and the date of tumor resection, or surgery. Most of the time, the surgery date was on the same day as diagnosis, or a few days later. The two patients below the line actually had the tumor removed before their date of diagnosis. Cleaning your data from Day 1 will speed up your clinical trial, especially at the end, when you don’t want to be scrambling around trying to figure out what the inconsistencies in the data mean. It’s amazing to me how much data you all have to go through on a daily basis, with more coming in each year. Now that we have mobile health devices like Apple Watches and Fitbits, the data is coming in much faster, and we are going to drown in it if we try to use our old manual ways of monitoring. We need data analytics and technology to help us, which brings us to our next slide.
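A toy version of the pairwise-date check Barbara describes might look like the following. The patient records are invented for illustration; a production check would run rules like this across many variable pairs rather than one hard-coded pair.

```python
from datetime import date

# Hypothetical patient records with the two related dates.
records = [
    {"id": "p1", "diagnosis": date(2017, 3, 1), "resection": date(2017, 3, 4)},
    {"id": "p2", "diagnosis": date(2017, 5, 10), "resection": date(2017, 5, 10)},
    {"id": "p3", "diagnosis": date(2017, 6, 2), "resection": date(2017, 5, 28)},
]

def flag_inconsistent(records):
    """Return ids of patients whose surgery precedes their diagnosis,
    which is the impossible ordering that should trigger a data query."""
    return [r["id"] for r in records if r["resection"] < r["diagnosis"]]

print(flag_inconsistent(records))  # only p3 needs a query
```

Running checks like this continuously from Day 1, instead of at database lock, is the cleaning-early point Barbara makes above.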

Josh: Now, obviously, Shyft and Medidata have a particular perspective, and you could say that we have drunk the Kool-Aid. But we do believe that the use of technology and software to help guide the analysis, the design, the recruitment, and the final analysis of the trial is going to be important for future, sustained clinical innovation, due to the volume, variety, and velocity of data that’s coming in. Not just from the changes happening in the clinical trial space, but from how real world data is becoming more and more integrated into the analysis, and from creating a continuous feedback loop from patients who, as we said with things like the synthetic control arm, are going to become part of a continuous process of clinical research that goes on even after the trial ends.

Josh: And being able to leverage technology in order to do so will allow certain companies that pursue this, we believe, to capture a strategic, sustainable, competitive advantage over those who don’t. Being able to bypass, overcome the inherent constraints and difficulties related to do it yourself command line coding. It’s cumbersome, as well as being able to overcome the cost and time associated with outsourcing to CROs for the bespoke trial running. So, just to wrap up the topic that we’ve been covering so far, in summary, it’s our perspective that it is imperative for clinical operations teams to really leverage real world data to be able to dive in and tune the population to laser focus in on the population of interest and really model out and design the trial and also leverage biomarkers in that effort increasingly, as well as increasingly leverage adaptive trial design to speed up the approaches.

Josh: As well as ensuring that you are optimizing the burden for both the site and patient, in order to reduce any errors or reduce attrition associated with the trial. Now, after that trial design phase and getting into the actual recruitment, ensuring that you are leveraging your historic information on past site performance to really ensure you’re focusing on the high performing sites, as well as understanding and confirming that those sites truly do have the patients of interest for the trial you’re considering, the CDP you’re considering.

Josh: And then, lastly, in the actual recruitment phase, leveraging new trial design approaches where one can leverage existing retrospective and historical data from other trials and real world data in order to effectively reduce the need for control arms or placebo arms in order to reduce the recruitment burden for trials and speed up the overall process. And lastly, once the trial is complete, and you’re into the data analysis phase, that you are leveraging technology, leveraging guided work flows, leveraging new advanced algorithms to ensure that you are avoiding both data quality errors as well as errors in analytic methodology to reduce any potential delays from either.

Josh: So, with that, I thank you all for joining. And I think we’re going to open it up now for questions and answers. Yoshi, I’ll turn it over to you for any questions we’re receiving.

Yoshi: Yes. So, we did receive our first question. This is probably more for you, Josh. Have you spoken to FDA regarding the use of real world evidence data in submissions?

Josh: So, we have definitely heard from FDA publicly that they see real world data and real world evidence as not an if but a when: something that will be necessary as part of the submission process. You’re seeing this with the 21st Century Cures Act, which requires a framework to be developed. We definitely see this becoming an integral part of up-front design confirmation when pursuing a given indication. And since FDA has said this, and Congress has said this as well, we’re going to have to think about how to leverage and use real world data for follow-on indications, tying it back and confirming that the efficacy and effectiveness of a given treatment actually delivers what was promised. And, obviously, outside of the regulators, we absolutely know and see that, globally, payers have already been requiring this. Beyond regulatory approval, market access, and continuous market access, has been dependent on real world data for going on at least a decade.

Yoshi: Thank you, Josh. The next question is for Barbara. Do you have any experience using historical data in place of a control arm?

Barbara: Yeah. Actually, we have been working with clients, and we published the results of one study that we did at ASCO about a month ago. In that example, the client was pushing their drug forward in the development process to a Phase 3 study, and they just wanted confirmation that that was the right decision.

Yoshi: Thank you, Barbara. Here’s another question for Josh. Any recommendations for publicly available data sources for real world evidence data?

Josh: Yeah. A couple of options there that are truly public and open source. For groups that are truly focused on research, there are physician societies and groups out there that, if you can prove you are doing research on the data and that it’s not simply for commercial sales and marketing purposes, are very interested in partnering on how to leverage this in a patient-centric way: CPRD over in the UK, the Medicare research data sets, and the physician societies with their registries. If you’re willing to go a little further afield, there are also some data sets from the CDC that one can work with, on things like the violent death registry, which, again, require confirmation of research intent. But there are a number of opportunities and data sets out there.

Barbara: There’s also the Michael J. Fox Foundation for Parkinson’s Research. They provide all of their data publicly online.

Yoshi: All right. Thank you, Josh and Barbara. The last question is actually for both of you. The question is how can we better engage community physicians and patients in clinical trials?

Josh: Well, Barbara, would you like to start, or I can?

Barbara: I’m trying to think of an answer. I haven’t come up with anything yet. So, if you have something, go ahead.

Josh: Sure. So, a couple of things. At least from my time at Quintiles, what we always heard was that the community physicians’ stated resistance to clinical trials was that they felt they were going to lose control over the patients’ treatment, that they were going to be cut out. And I think, ultimately, they do have the best of intentions for their patients because, in the community setting, they know these patients well. They don’t want them to feel left out. So, to me, it’s about how to maintain the connection between the physician in the community setting, who sees their patients day in and day out, and their treatment in the clinical trial setting. And I think the frameworks we’re seeing help with that; again, I’m a techno wonk, so I’ve got a certain view on this.

Josh: But keeping them involved, there’s a number of technology solutions out there that can involve things like social media, that can involve real world data, so that the community physician can maintain their connection to the patient, can maintain their touch points and talking points with the patient while still allowing that patient to be participating in the clinical trial at the site. So, it really is, to me, about being more expansive and ensuring that that patient can maintain the connection, especially given that some of these patients are having to drive two, three, four hours to get to the actual clinical trial site, it becomes all the more important that the community physician feels like they can still be involved, and that patient feels like they’ve got their support network to continue to work with.

Barbara: The only other thing I can add is maybe make the trial easier for the community physicians to participate in. Maybe they’ve tried doing clinical trials in the past, and it hasn’t been a good experience. Giving them more support, making the protocol easier, all of those things might help make them more willing to participate again.

Josh: Yeah, I agree with that.

Yoshi: All right. So, we are getting to the end of the webinar. Here you can see the contact information for both Shyft and Medidata. We have a large group of experts here, and if you have any specific questions, we’re more than happy to work with you and address the challenges you face in compressing your clinical trials. Thank you very much for joining today’s webinar. You will receive a follow-up email with access to the recording of the webinar. Enjoy the rest of your day.