Flawed Design

Are clinical trials failing promising drugs?

By Regina Nuzzo

Since it’s impossible for researchers to send every potential cancer treatment through expensive phase III clinical trials, the gatekeeping job falls to phase II trials. These phase II studies should allow only the most promising treatments to advance to phase III. But recent research suggests that many phase II trials don’t use a clear rationale for deciding which drugs make the cut. In fact, patients and researchers may be investing energy in dead-end treatments that ultimately fail in large clinical testing, while promising therapies are ignored.

In a phase II trial, investigators test whether a small group of patients responds better than expected to a new treatment, explains Andrew Vickers, a statistician at Memorial Sloan-Kettering Cancer Center in New York City and the lead author of the recent study, published in the Feb. 1 Clinical Cancer Research. The crucial issue, he says, is specifying ahead of time exactly what “better than expected” means in hard numbers—a certain survival rate among patients, for instance, or the fraction who experience tumor shrinkage. That number—known to investigators as the null response rate—is the hurdle a treatment’s results will need to clear in order to demonstrate that the therapy is potentially better than available treatment options. “Null rates drive everything in phase II trials,” Vickers says. A bar set low could allow too many ineffective drugs into phase III trials, he explains, while one set high could keep out many promising ones.
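The logic of that hurdle can be sketched in a few lines. The example below is a minimal illustration, not the analysis any particular trial used: it computes a one-sided exact binomial p-value asking how surprising the observed number of responders would be if the treatment were really no better than the null response rate. The trial size, null rate, and responder count are hypothetical.

```python
from math import comb

def p_value_exceeds_null(responses: int, n: int, null_rate: float) -> float:
    """One-sided exact binomial p-value: the probability of seeing at
    least `responses` responders among `n` patients if the true response
    rate were only `null_rate` (the bar set from historical data)."""
    return sum(comb(n, k) * null_rate**k * (1 - null_rate)**(n - k)
               for k in range(responses, n + 1))

# Hypothetical trial: 40 patients, a null rate of 20%, 14 responders seen.
p = p_value_exceeds_null(14, 40, 0.20)
# A small p-value suggests the drug cleared the bar the null rate set.
```

Notice that everything hinges on the 0.20: nudge the null rate up or down and the same 14 responders can look like a clear win or an unremarkable result, which is why Vickers calls the null rate the quantity that "drives everything."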

In the past, investigators could generally rely on simple methods to set these null rates. But today’s cancer research environment is more sophisticated, Vickers says. One reason is that new molecularly targeted drugs may slow the growth of a tumor rather than shrink it, so measuring tumor shrinkage could be misleading.

To understand how investigators use historical studies to set the bar for new developmental drugs, Vickers and his colleagues looked at 70 phase II trials that required historical data and were published in two leading cancer journals between 2002 and 2005. They found that nearly half of these trials failed to identify the historical study against which the new drug was judged. Only 13 percent gave a clear rationale for the threshold of efficacy they chose, such as making reference to a specific historical response rate. What’s more, investigators of studies that provided no rationale or an unclear rationale were also more likely to declare their new treatments to be worthy of further testing in phase III trials.

Simple changes could greatly improve the design of phase II trials that require historical data, Vickers says. Investigators should explain how an older study helped them set the bar for the new drug’s performance and describe the mix of patients in the historical study. In addition, researchers should consider using advanced statistical methods to adjust for any differences between that population and the current one. With the widespread adoption of practices such as these, Vickers says, perhaps fewer patients will be exposed to ineffective therapies in phase III trials.
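One simple version of such an adjustment is direct standardization: reweight the historical study's subgroup response rates by the patient mix expected in the new trial, rather than borrowing its overall rate wholesale. The sketch below is illustrative only, with hypothetical subgroups and rates; it is one example of the kind of adjustment described, not a method attributed to Vickers's paper.

```python
def standardized_null_rate(historical_rates: dict, current_mix: dict) -> float:
    """Reweight per-subgroup response rates from a historical study by
    the subgroup proportions expected in the new trial, so the null rate
    reflects the patients actually being enrolled."""
    assert abs(sum(current_mix.values()) - 1.0) < 1e-9, "mix must sum to 1"
    return sum(historical_rates[group] * weight
               for group, weight in current_mix.items())

# Hypothetical: the historical trial skewed toward early-stage patients,
# but the new trial will enroll mostly advanced disease.
hist = {"early": 0.30, "advanced": 0.10}
new_mix = {"early": 0.25, "advanced": 0.75}
null_rate = standardized_null_rate(hist, new_mix)  # 0.15, not 0.30
```

The point of the adjustment is visible in the numbers: judging the new drug against the historical study's headline rate would set the bar too high for a sicker population.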

These results and recommendations will likely lead to more critical and rigorous study designs, says oncologist Bruce Chabner, the clinical director of Massachusetts General Hospital Cancer Center in Boston. “It’s a brilliant article,” he says. “The authors are asking for more precision and more documentation in where our estimates come from. Although there are limits to how well we can do that, more attention certainly cannot hurt.”