What’s Higher, Data Correction Rates or Interest Rates?
We’ve already talked about the trend toward targeted (or “risk-based”) site monitoring and data review. Despite the growing popularity of targeted source document verification (SDV), some organizations remain concerned about the potential impact on data quality. So we asked ourselves: what percentage of data is actually corrected over the course of a typical study as a result of the intensive manual data reviews conducted by sponsor personnel? We delved into that question in our April contribution to the Data Analysis blog on Applied Clinical Trials.
We got our answer by looking at the Insights data on Post-Capture eCRF Data Correction Rates. This metric provides the percentage of all eCRF data fields that were observed to have one or more updates following the initial data entry session at the site.
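To make the metric concrete, here is a minimal sketch of how such a rate could be computed from simplified audit-trail records. The record structure, field names, and sample data below are illustrative assumptions for this post, not the actual schema or calculation used for the Insights data.

```python
# Hypothetical sketch: computing a post-capture eCRF data correction rate
# from simplified audit-trail records. All names and data are illustrative.

from collections import defaultdict

def correction_rate(audit_trail):
    """audit_trail: list of (field_id, entry_session) tuples, where
    entry_session is 0 for the initial data-entry session at the site
    and >0 for any later session. Returns the fraction of distinct
    fields with one or more updates after initial capture."""
    sessions = defaultdict(set)
    for field_id, session in audit_trail:
        sessions[field_id].add(session)
    corrected = sum(1 for s in sessions.values() if any(n > 0 for n in s))
    return corrected / len(sessions)

# Three fields; only AE_OUTCOME is touched after its initial entry session.
trail = [
    ("WEIGHT", 0),
    ("HEIGHT", 0),
    ("AE_OUTCOME", 0),
    ("AE_OUTCOME", 1),  # e.g., updated once an ongoing AE resolved
]
print(round(correction_rate(trail), 2))  # → 0.33
```

Note that this simple rate counts every post-capture update equally, which is why the post distinguishes below between total updates and true corrections.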
The results were rather astounding. Just under 3% of the overall eCRF data is updated after the initial capture session. Put another way, over 97% of all data provided in the study eCRF is in its final form—ready for analysis, reporting, and submission—before any site monitor or data manager reviews the data! This metric is even more compelling when you consider that it reflects all data updates, including “natural” updates to time-based patient event data, such as an update to the adverse event (AE) outcome fields once an ongoing AE has been resolved. Thus, the share of data that is actually corrected is significantly less than 3%.
So why are we manually scouring up to 100% of eCRF data—at a high cost—for such a meager return? What explanations (or excuses?!) have you heard? How can we as an industry move beyond the fears and myths that are hampering the trend toward reduced SDV?
More about Stephen Young