Continuing the Conversation: A Follow-Up to Our "Optimizing Study Design and Protocol Development" Webinar
Guest blogger Ken Getz recently shared some compelling findings from a new Tufts CSDD study on the frequency and economic impact of extraneous protocol data. As you know from previous blog posts, the need for well-aligned and optimal protocol design is something near and dear to my heart. So I was excited to co-present with Ken last week in a webinar on the subject hosted by Medidata for clinical research sponsors.
I hope those in attendance found the session informative, and I thank you for joining us and contributing questions that made the two webinar sessions a lively discussion.
We did not have time to answer all of the interesting questions that were asked, so I thought I’d take the opportunity to summarize the questions and comments and recap our responses to add a little more color to the topic.
The questions and comments fell into three main categories:
1. Procedure Inclusion Rationale
“Are we trying to run fewer studies and do more?” “Has the science gotten more complicated and endpoints more fuzzy?” “Do we rely on exploratory data for some decision-making?” “Don’t we need to assess impact of standard of care on results?”
Ken and I answered these questions in a similar vein: yes, these are all valid reasons why a study might include non-core procedures, and complexity isn't necessarily a bad thing, but the impact needs to be understood. In each of the cases above, however, it is unlikely that the organization could tell you how much non-core data is in the study or how much of the trial's complexity is tied to exploratory endpoints. Ultimately, a protocol's complexity correlates directly with its operational feasibility and with subject enrollment and retention, so while there may be valid reasons to include such data, it shouldn't be a blind decision. We also discussed that even when organizations choose to include non-core procedures, they should challenge themselves to handle the data generated from those procedures differently to reduce the impact of inclusion (e.g., reduced SDV or automated queries only).
2. Impact Assessment
The questions and comments in this category fell into two areas: making sure we don't focus solely on reducing cost and overlook critical procedures, and the need for a consolidated view of the true cost of a procedure in a trial.
The same framework used for identifying non-core procedures can also be used to categorize the items that are truly critical to a study, from either an objective or a safety perspective. By aligning each and every procedure to a purpose, teams can focus the challenge and conversation about necessity on the items that are not critical. We also stressed that the methodology should include categories for regulatory-required procedures and for standard procedures needed in every study.
In terms of a consolidated view, the participant was entirely right. There currently is no consolidated view of costs (labs, grants, CROs, resourcing, etc.) that can estimate the true impact of including a procedure. However, the data is out there, and as the industry begins to understand the power of line of sight between objectives, endpoints and procedures, there will be more demand for this consolidated view. Companies like Medidata would be foolish not to provide such functionality.
3. Process Change
“Are organizations ready for the mind-set change and process shift needed to really embrace this way of thinking?” “Even simply changing the process to focus more review on the protocol synopsis and move away from a document is daunting.” “Have you seen evidence of studies being evaluated in the context of the development plan?”
The process change is indeed huge, and it will be a challenge for organizations to adopt. But as the data from the study shows, organizations can't really afford not to! Additionally, we shouldn't be constrained by current thinking about what a synopsis document is: it could take a different format, be contained entirely in a piece of software, include more information on purpose and rationale, and undergo a more serious review that (as we have seen) can significantly speed up the full protocol review.
As for good clinical development planning… that's a whole other conversation (another topic I can talk about for hours!) and one that is just as essential.
I look forward to hearing your thoughts on any of these topics!
Additional Blog Posts by Michelle Marlborough:
• It Should Be All About the Patients
• Data, Data Everywhere…
• How Can We Accelerate Innovation with Standardization? Thoughts from the CDISC European Interchange
• Different Conference… Same Issues?
• TREND 10: Increased Demand for Technology to Support Overall Clinical Development Planning
• TREND 7: Protocol Design Will Triumph Over the Document