Medidata Blog

How Will Clinical Trials Evolve?

Reading Time: 5 minutes

This article is the second half of a two-part series originally posted on our Forbes channel. In this piece, Lori Convy chats with Andy Lawton, the current Global Head of Clinical Data Management at Boehringer Ingelheim.

What are some of the metrics that can be useful for risk-based monitoring?

Andy Lawton: Lost to follow-up data, adverse event reporting, site quality, the number of queries outstanding, the speed of data entry into the system. It's about bringing together all these aspects, not just one.
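The idea of bringing several metrics together can be sketched as a simple weighted site risk score. This is a minimal illustration, not Boehringer Ingelheim's actual method: the metric names, normalization, and weights below are all hypothetical.

```python
# Minimal sketch: combine several normalized site metrics (each on a 0-1 scale)
# into one risk score. Weights and metric names are hypothetical.

def site_risk_score(metrics, weights):
    """Weighted sum of normalized site metrics."""
    return sum(weights[name] * value for name, value in metrics.items())

weights = {
    "lost_to_followup_rate": 0.25,
    "ae_underreporting_index": 0.30,
    "open_query_rate": 0.25,
    "entry_delay_index": 0.20,
}

site = {
    "lost_to_followup_rate": 0.10,   # 10% of subjects lost to follow-up
    "ae_underreporting_index": 0.40, # AE reporting well below study average
    "open_query_rate": 0.20,         # share of queries still outstanding
    "entry_delay_index": 0.50,       # slow data entry into the system
}

score = site_risk_score(site, weights)
print(round(score, 3))  # higher score -> site gets more monitoring attention
```

In practice the point of the interview is exactly that no single metric suffices; a composite like this lets a central monitor rank sites and direct on-site visits where the combined risk is highest.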

This isn’t a cost reduction exercise, correct? You’re finding metrics that suggest, regardless of the cost, it makes sense to pursue RBM.

Lawton: Oh, from a quality perspective, yes, and from a quantity perspective, no question about it. But also, in the long term, you will save money, and in large-scale studies in particular, you're going to save lots of money. For smaller studies you're not going to save money; you'll just increase quality.

When you’re trying to explain RBM to people, do you start with the cost or the quality?

Lawton: People have to change their process, and we know the pharma industry is slow to change. You've only got to look at remote data capture: it started in the late 1990s, and we're still processing sites the same way as if they were paper-based. That's almost 20 years. It's slow.

If you think we're going to change the process and everyone is going to adopt it immediately, they won't. Some people will still visit sites every 4-6 weeks. They are not seeing the big picture, so there's a need for continuous change management.

Where does RBM stand now in terms of industry adoption, and what’s next?

Lawton: A lot of people saw RBM as a fashion that would go away, and it didn't get quite the emphasis it should have. The code we work to, ICH E6, is our basic methodology for good clinical practice (GCP). There's an addendum coming out at the end of 2016 to enshrine risk-based monitoring. It's no longer a question of shall we do it; it's a question of shall we follow GCP or not. That will give the big push to implement RBM and tolerance limits.

What are the key tech considerations for RBM adoption?

Lawton: The whole integration area. If you haven’t got a data warehouse, you’re going to have to consider getting one, and not just for your clinical data, but all this other data surrounding it. What’s really important is the metadata – the data about the data.

How long did it take for that data to be entered? If a piece of data was entered immediately by the investigator, is that better quality than if it was entered two months later? If it was entered two months later, they had better have very good documentation in their records to justify the value. So where do you want to focus your risk-based monitoring activity to make sure they've got good sources for their data? That's one area: the warehousing, and the integration bringing it all together from different data sources.

Another key area is in defining what risks there are and then monitoring those risks regularly.

And yet another key area is issue management. If you find an issue, how do you communicate it out to someone who should deal with that, the onsite monitor? If you’ve got too many issues outstanding, that should then feed into your risk management, because it becomes another source of risk.

Are there any other RBM points you want to discuss?

Lawton: Tolerance levels. They made it into GCP briefly in the early 1990s, but they were taken out. Now they're coming back in. It's really driven by the EMA (European Medicines Agency), which said in its 2013 document that we have to define error limits and tolerance levels. It's about this quality aspect: the precision of quality when measuring endpoints and the expected number of protocol violations in a study.

We always get protocol violations, but when is it unacceptable? We don't predefine that; we always just assess it after the fact, and the EMA is saying, no, predefine it. Then if you're within that, fine. If you're outside it, explain whether it has impacted your results. The EMA is not saying write the study off; it's just saying you need to explain why.

You don’t view this as a negative?

Lawton: Oh I view it as an exceptionally powerful tool for us, and we should be adopting it. It’s really the basis of quality, because if you haven’t predefined what your quality level is, what are you doing?

How do you set those levels?

Lawton: You can look historically at the number of protocol violations you've had and the causes of those violations. If you've got all your metrics databases together, you can then utilize those. You can use historical databases to derive predictive rates for the expected number of protocol violations or adverse events.

What if we get more adverse events than before? What if we get fewer? Does it mean we've missed some? Does it mean we've got a safer population? Or is it a more general population where patients have other diseases as well that are causing these events? It is about understanding what you would expect to get and explaining it.

Many of the things we’re doing in RBM should apply to any step in clinical development, and we’re taking it as a time to refresh everything we are doing.
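The use of historical data described above can be sketched very simply: pool violation counts from past studies into a per-subject rate, then scale it to the planned enrollment. The figures below are made up purely for illustration.

```python
# Sketch: derive an expected protocol-violation count for a new study
# from historical studies. All numbers are hypothetical.

historical = [
    # (protocol violations observed, subjects enrolled) per past study
    (42, 1200),
    (31, 900),
    (55, 1500),
]

total_violations = sum(v for v, _ in historical)
total_subjects = sum(n for _, n in historical)
rate = total_violations / total_subjects  # violations per subject

planned_subjects = 800
expected = rate * planned_subjects
print(f"Expected protocol violations in new study: {expected:.1f}")
```

A real implementation would stratify by therapeutic area, protocol complexity, and cause of violation, as Lawton notes, but the principle is the same: predefine the expected range, then investigate and explain any departure from it.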

Would historical data be per company or an industry-wide tolerance level? How would you consider that?

Lawton: For an endpoint of a study, it would be very difficult to set industry-wide tolerance levels, because that’s particular to the study. Maybe it’s possible for the measurement device you’re using, maybe to your methods or protocol violations, but again, it depends on the protocol.

You can create more protocol violations by writing a bad protocol, so that's going to be down to individuals. But you can define an industry-wide error rate for general data. Not for the critical data, but for general data we could use the paper TransCelerate did with Medidata on SDV (source document verification) errors. We could use that as a basis for acceptable levels of transcription errors.

SDV could be one of those industry-wide predefined tolerance levels?

Lawton: Yes, transcription errors would be one aspect of it, because if you don't predefine it, every transcription error you get could become a finding in a regulatory inspection. But if you predefine, say, a one percent error level, then as long as you haven't exceeded it, you're okay.
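A predefined tolerance check like the one Lawton describes is trivial to express. The one percent level comes from the interview; the error and field counts below are hypothetical.

```python
# Sketch of a predefined transcription-error tolerance check.
# A result within tolerance is acceptable; a result outside it
# must be investigated and explained, not automatically written off.

TOLERANCE = 0.01  # predefined acceptable transcription-error rate (1%)

def within_tolerance(errors_found, fields_verified, tolerance=TOLERANCE):
    """True if the observed error rate is at or below the predefined level."""
    return (errors_found / fields_verified) <= tolerance

print(within_tolerance(8, 1000))   # 0.8% error rate: acceptable
print(within_tolerance(15, 1000))  # 1.5% error rate: must be explained
```

The value of predefining the limit is exactly what the interview argues: an inspector can compare the observed rate against a stated quality level instead of treating every individual transcription error as a finding.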

What is the difference between source document verification (SDV) and source document review (SDR)?

Lawton: The problem is that historically we used one term, and it meant checking all the data. That was actually bad, because you really didn't know what different people did. Some people just did box checking: that's what's on the page, that's what's on the system. Checking that transcription is SDV.

Other people looked through all the source notes, checked the process, and then checked the numbers. That checking of the process, checking the whole documentation, is what we call source document review, or SDR.

So does SDV become irrelevant when the industry adopts risk-based monitoring?

Lawton: It largely becomes irrelevant. I think it will tail off over time. In particular, when we move to eSource, electronic source, where we bring the data directly into the system, SDV goes away completely, because what's to check? You have computer system validation to cover that.


Jacob Angevine