Why a "One-Size-Fits-All" Approach in Clinical Trials is Naive
I recently came across the results of the eClinical Landscape survey study conducted online by Tufts University and sponsored by a vendor, Veeva Systems. The study examined perceptions of the time it takes companies to design and release clinical study databases, and noted the negative impact of those delays on conducting and completing trials.
My first impression was that the Veeva-Tufts study's conclusions were based on incomplete research that relied on (1) subjective information and (2) an underlying assumption that all studies are equivalent across phases and therapeutic areas. The conclusions seemed to ignore the wide variation in the nature and complexity of today's clinical trials. Relying on subjective perceptions without factoring in trial complexity is a serious concern.
I realized immediately the need to assess the conclusions with scientific, objective rigor. My initial impression could have been influenced by my role within Medidata. My background in data and biometrics has shown me that our opinions and decisions can be prematurely swayed by provocative yet one-dimensional data. As researchers, we must hold ourselves and each other accountable to draw conclusions based on data analyses that are appropriate, reliable, and repeatable.
The purpose of this blog post is to assess sources of objective data from several thousand clinical trials in order to test the study's underlying assumptions, which on the surface seem naïve about something as complex as the clinical trial industry.
Correlation doesn’t imply causation
The underlying assumptions of the Veeva-Tufts study are a crucial component of its analyses and conclusions. Failure to take essential factors into account often produces erroneous results and is, frankly, careless. I will illustrate this with an example that highlights a valuable insight.
A 1999 study out of the University of Pennsylvania, Myopia and ambient light at night, found a correlation between infants and toddlers who slept in rooms with a light on and the later development of nearsightedness. Immediate conclusions were drawn about the danger of nightlights to infants. However, later studies showed that this conclusion was incorrect: children who slept with the light on were not more likely to be nearsighted. Rather, parents who were nearsighted were more likely to leave a nightlight on in their child's room so that they could see when caring for them at night, and nearsightedness has a genetic link between parents and their children. The original study had ignored a confounding relationship.
This example demonstrates that failing to include crucial factors in the analysis can produce faulty, highly misleading conclusions. In the Veeva-Tufts study of clinical trials, drawing conclusions about database release time without taking trial complexity into account is analogous to the University of Pennsylvania study failing to account for the genetic link in nearsightedness.
Complexity of trials is increasing
The Veeva-Tufts study’s primary assumption is that valid conclusions about cycle time differences can be reached without factoring in a clinical trial’s complexity, which varies widely depending on many attributes of the trial. These include the phase of research; the number of patients; the number and difficulty of procedures performed; the trial’s design; and numerous additional features. For instance, Phase I trials, which typically have a few dozen patients and take several months to a year, are usually much less complex than Phase III trials, which typically have hundreds or even thousands of patients and take several years.
To understand the role of complexity more fully, Medidata has reviewed existing data and relevant publications. These studies present convincing evidence that the complexity of clinical trials is increasing. A recent analysis published in Nature Reviews Drug Discovery explains this trend. From Medidata's PICAS database, we analyzed 9,737 clinical trial protocols that received ethics review board approval during two periods, 2001-2005 and 2011-2015, and assessed those studies across phases of research. The results are compelling: the number of total procedures performed increased substantially for each phase, with 53%, 67%, and 70% more procedures performed for Phase I, Phase II, and Phase III protocols, respectively. Similar increases in the number of distinct procedures and the number of planned study volunteer visits for Phase I to Phase III trials are shown in this figure:
Cycle times are higher in the highest complexity studies
Another assumption in the Veeva-Tufts study is that data are entered and locked faster when the study is put into production prior to First Patient Visit. Again, the conclusion is drawn without assessing other variables that could logically impact cycle times.
To understand the potential relationship between complexity and cycle time, we assessed five years of Rave EDC operational performance data from 3,383 Phase II and Phase III studies. The findings differ from those highlighted in the Tufts survey: they demonstrate that the trials with the highest complexity experienced longer cycle times. In fact, the study design period (the period from the date the study is first created to the time the eCRFs are pushed to production) is five weeks longer for the highest-complexity studies than for the lowest-complexity studies.
*Note: The numeric measure of complexity for a given CRFVersionID is based on the CRF Objects connected to that CRFVersionID. CRF Objects are Forms, Edit Checks, Custom Functions, and Derivations. Tiers: Low (0-575); Medium (575-950); High (950+). Phase II and III studies.
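The tiering described in the note above can be sketched in a few lines. This is an illustrative assumption, not Medidata's actual scoring code: the cutoffs (575 and 950 CRF Objects) come from the note, but the function name is hypothetical, and since the note's ranges overlap at the boundaries, the choice to place a count of exactly 575 in the Medium tier is an assumption.

```python
# Hypothetical sketch of the complexity tiers from the note above.
# Cutoffs (575, 950) are from the note; boundary handling is assumed.

def complexity_tier(crf_object_count: int) -> str:
    """Bin a study's CRF Object count (Forms, Edit Checks,
    Custom Functions, Derivations) into the three tiers."""
    if crf_object_count < 575:
        return "Low"
    elif crf_object_count < 950:
        return "Medium"
    else:
        return "High"

print(complexity_tier(120))   # Low
print(complexity_tier(700))   # Medium
print(complexity_tier(1200))  # High
```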
Despite increases in complexity, individual performance parameters are improving
Despite increases in overall complexity and the additional data regarding the differences between tiers of complexity, individual performance parameters are actually improving on the Medidata platform. Leveraging the same five-year period from 3,383 Phase II and Phase III studies, we found that Visit to Data Entry Cycle Time decreased 21%.
*Note: Decrease calculated by averaging the first four quarters and the last four quarters of the five-year period and calculating the percent difference.
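The calculation in the note above is straightforward to sketch. The quarterly figures below are made-up placeholder values chosen only to illustrate the arithmetic, and the function name is hypothetical; the real analysis used Medidata's actual quarterly cycle-time data.

```python
# Illustrative sketch of the note's method: average the first four and
# last four quarters of the five-year window, then take the percent
# difference. The numbers below are placeholders, not Medidata data.

def percent_change(quarterly_values):
    first = sum(quarterly_values[:4]) / 4   # avg of first 4 quarters
    last = sum(quarterly_values[-4:]) / 4   # avg of last 4 quarters
    return (last - first) / first * 100

# 20 quarters (5 years) of hypothetical median cycle times, in days
quarters = [10.0, 9.8, 9.9, 9.5, 9.4, 9.2, 9.0, 8.9,
            8.8, 8.6, 8.5, 8.3, 8.2, 8.1, 8.0, 7.9,
            7.8, 7.8, 7.7, 7.6]

print(round(percent_change(quarters), 1))  # -21.2 (a ~21% decrease)
```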
Medidata has published research findings to help illuminate challenges the industry is facing. For example, “Using Data and Analytics in Clinical Development” and “Trends in Clinical Trial Design Complexity.”
Lastly, I reviewed additional supportive information to understand what sites, sponsors, and CROs were saying about the Medidata platform.
We recently conducted a blind life science industry survey of eClinical technology solutions, and Rave was rated highest for product capabilities and professional services implementation. Furthermore, 13 of the 15 top-selling drugs globally in 2017 were developed on Rave; these represent some of the largest and most complex trials in the world. The data demonstrated widespread use and positive customer and site satisfaction.
The Veeva-Tufts study was based on subjective survey responses from 257 sponsors and CROs. The use of these perceptions rather than actual data is a serious concern. Another flaw is the study's failure to adjust for the size and complexity of trials. If large, complicated trials and smaller, less complex trials are analyzed together without controlling for the differences between them, the results will be highly misleading. If you don't factor size, complexity, protocol amendments, and other complications into the analysis, then EDC providers who handle small, simple trials will appear to be the best performers.
If you'd like to discuss this analysis further, or see how your performance metrics compare to the benchmark data I used here, reach out so we can show you how our unified clinical cloud platform can help you optimize your expected outcomes.