Clinical Trial Appraisal

When evaluating the pros and cons of adding a new drug therapy to a drug plan, it is best practice to consider all the available scientific evidence and be mindful of the quality of that evidence.
Clinical trials provide essential insight into the clinical value of a therapy and are the cornerstone of evidence-based medicine, which integrates the best available research evidence with clinical expertise and patient values. The challenge is that not all clinical trials are created equal. That makes it essential for decision-makers to understand the characteristics of good clinical trial design so they can make well-informed choices for members, employers and payers.
“Anybody who is in the seat of making decisions around adding or rejecting drugs needs to have a solid appreciation for what is the evidence being brought to the table [and] how do you judge value,” says Dr. Paul Oh, Medical Director, Cardiovascular Prevention and Rehabilitation Program, Toronto Rehabilitation Institute, University Health Network, who also serves on the WSIB Ontario Drug Advisory Committee. “It’s fundamental to decision-making.”
As part of that process, “you have to be able to sift the good from the bad in terms of trial design to know how much weight to give each trial in your decision,” says Mark Jackson, Consultant Pharmacist at TELUS Health.


What is the clinical trial design?

The gold standard for clinical trial design is the randomized, double-blind, controlled trial (RCT). Randomized means participants are divided by chance into treatment groups; it is a simple and powerful way to limit differences between the treatment groups beyond the treatments under investigation. Double-blind means neither the participants nor the people running the study know which group each participant is in. Controlled means the new therapy is compared against either a placebo or an existing therapy.
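As a rough illustration of the randomization step (the participant IDs and the fixed seed below are invented for the example), a simple assignment to two arms might look like this:

```python
import random

def randomize(participants, seed=2024):
    """Assign participants at random to a treatment or control arm.

    A minimal sketch of simple (unstratified) randomization; real trials
    typically use blocked or stratified schemes managed by a central system.
    """
    rng = random.Random(seed)   # fixed seed only so the example is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Hypothetical participant IDs
groups = randomize([f"P{i:03d}" for i in range(1, 9)])
print(groups["treatment"])
print(groups["control"])
```

Because assignment is left to chance, known and unknown participant characteristics tend to balance out across the arms as the sample grows.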
Both Dr. Oh and Jackson prefer “head-to-head” comparisons with existing therapies. As Jackson points out, all a placebo-controlled trial can tell you is, “the drug is better than nothing.” He adds, “The regulators like to see placebo-controlled trials, but from a payer perspective you’re trying to make your decisions based on, is it better than the alternatives?”
A systematic review represents the pinnacle of evidence because it identifies all available evidence and then compares and synthesizes study results to provide a more complete picture of the different treatment effects. A systematic review can incorporate various study types, including head-to-head RCTs, non-comparative clinical trials and observational studies.


What are the patient characteristics?

Jackson emphasizes the significance of a trial’s population. The sample size must be large enough to make statistical comparisons, but beyond that he recommends asking, “Is it representative of the population that you’re covering?”
For example, do the results of a trial conducted in a different part of the world – say, China – apply to the Canadian population? Or are the results of a study with an average participant age of 70 years relevant to a population of working-age Canadians?
Private payers need to know their population demographics and assess whether the clinical evidence is based on a sample of patients that aligns with that population.

What are the treatments?

Dr. Oh looks closely at all the attributes of a treatment. “How does it compare to the current standards: dose and duration and frequency and routes of administration?” he asks. Important, too, is whether the new therapy can be easily implemented in a real-world environment.
Dr. Oh adds, “I’m much more excited when the new therapy is being compared against a very good standard therapy – the most commonly used, the one with the best evidence.” In a head-to-head study, he emphasizes, two therapies are compared within a homogeneous population using the same outcomes measured in the same way. “That’s how we can judge whether it’s the new therapy or the old therapy that is really making the difference, as opposed to many of these other variables that come into play when we start thinking about indirect comparisons [with] different populations, different co-interventions, different geographic locations and different ways of assessing the outcomes,” Dr. Oh says.


Are the results clinically significant?
Jackson points out that there is a difference between statistically significant results and clinically significant results – and that the latter is what formulary decision-makers need to see.
“If you have a drug that takes a day off your suffering of a cold, if you have enough people in the trial, that’s going to be a statistically significant difference – but most people will probably say that a day off their cold is not going to make much of a difference to them,” he explains. In other words, a result can be statistically significant while its clinical significance remains very modest.
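To see how sample size alone can drive statistical significance for a small effect, consider a rough numerical sketch; the cold-duration figures below are invented for illustration, not drawn from any trial:

```python
import math
from statistics import NormalDist

def two_sample_p(mean_a, mean_b, sd, n_per_group):
    """Two-sided p-value for a difference in means, assuming a known SD."""
    standard_error = sd * math.sqrt(2.0 / n_per_group)
    z = (mean_a - mean_b) / standard_error
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Hypothetical numbers: a cold lasts ~7 days on placebo vs ~6 days on the drug,
# with a standard deviation of 3 days in both groups.
for n in (20, 200, 2000):
    print(f"n per group = {n:4d}  p = {two_sample_p(7.0, 6.0, 3.0, n):.4f}")

# With enough participants the one-day difference becomes "statistically
# significant", but whether one day matters to patients is a separate,
# clinical question.
```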
Dr. Oh likes to see a well-defined minimal clinically important difference – the smallest change a patient would consider to be important and that would therefore lead to a change in the patient’s treatment. He also looks for clearly articulated outcomes that are measurable with validated tools. And, within the results themselves, Dr. Oh looks for consistency across subgroups and pays close attention to any differences based on sex, age, geographic region or comorbidities.
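One way to put a minimal clinically important difference to work when reading results is sketched below; the pain-score numbers and the decision rule are hypothetical, not taken from any guideline:

```python
def mcid_verdict(effect, ci_lower, mcid):
    """Rough check of a treatment effect against a minimal clinically
    important difference (MCID); larger values mean more improvement."""
    if ci_lower >= mcid:
        return "clinically meaningful: the whole confidence interval clears the MCID"
    if effect >= mcid:
        return "uncertain: the point estimate clears the MCID but the interval does not"
    return "below the MCID, regardless of statistical significance"

# Hypothetical example: a 1.2-point improvement on a pain scale
# (lower bound of the 95% CI: 0.4) judged against an assumed MCID of 1.0.
print(mcid_verdict(effect=1.2, ci_lower=0.4, mcid=1.0))
```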

What is the safety profile?

Safety is as important as a new therapy’s efficacy. “I’ll want to see that there is a very good safety analysis that includes the entire population that was entered into this study,” continues Dr. Oh. “I’m looking for serious adverse events as well as patterns in the less serious adverse events… I want to know which safety events led to a discontinuation of therapy as well.”
The safety analysis helps to answer the question “does the benefit outweigh the risk?” says Jackson. It’s also important to tease out whether a serious adverse event was attributable to the drug, and not to other factors, he says.


A customized approach for rare diseases

The appraisal process for a clinical trial of a therapy for a rare disease – a condition that affects fewer than one in 2,000 Canadians [2] – must take into account the disease’s low prevalence.
“In that space of rare disease, by definition, it’s harder to find people affected by the condition,” Dr. Oh says. As a result, RCTs may not be feasible and the best available evidence could be a non-randomized clinical trial or indirect treatment comparisons drawing on sources such as an observational study or a registry. Regardless, Dr. Oh expects rare disease studies to “define a good population, define therapies as properly as possible and have good rigour in the outcomes.”
A meaningful clinical effect is of course important as well. “You want to make sure that the drug is going to be making a significant impact on the person’s quality of life and how they’re able to function and stay at work,” Jackson says.
Another challenge with rare diseases is that there may not be a good comparison therapy, Dr. Oh adds: “It might just be against best supportive care or natural history – but, to the extent possible, I would still like some sort of comparative framework to really recognize if the new therapy actually adds some value.”

Check your conclusions

You’ve examined a clinical trial. You’ve drawn your conclusions. Now, it’s worth seeing what expert evaluators say. “You can do your own critical appraisal of trials, but it’s often helpful to look at how other people have critically appraised the trials as well,” says Jackson. Ultimately, the decision about whether to add a new therapy to a formulary depends on more than the evidence from clinical trials – but understanding how to critically appraise clinical trials is a necessary first step. ■


1. World Health Organization. Clinical trials. www.who.int/topics/clinical_trials/en/
2. Orphanet: an online rare disease and orphan drug database. INSERM, 1999. Available at http://www.orpha.net
Sponsored by: Takeda