I've been reading about the comparative effectiveness research debate and found my understated quote of the week in the normally staid New England Journal: "Developers face few incentives to conduct active-comparator superiority trials and understand that they benefit from the unacknowledged deficiency of evidence. The development or marketing of me-too drugs and devices may provide a greater return on investment than research aimed at true clinical innovation."
For those who haven't been able to keep up, the focus on comparative effectiveness research (CER) has become more urgent given soaring healthcare costs, enormous budget deficits, and a strained economy.
While traditional trials have usually centered on establishing the efficacy of a drug or device against a placebo, the new focus is on comparing the effectiveness of available therapies head-to-head. This research is a congressional mandate under the American Recovery and Reinvestment Act (ARRA) of 2009, which directed the Institute of Medicine (IOM) to recommend national priorities for CER funding, which it did in remarkably short order.
Not surprisingly, the CER plan has come under attack from pharmaceutical companies, despite assurances, for now, that the research will not be used to restrict physicians' prescribing choices based on cost-effectiveness data. Others are concerned that, rather than supporting rational healthcare decision-making, the CER initiative is the first step down the slippery slope toward healthcare rationing. An interesting proposal, intended to close the evidence gap and more directly benefit prescribers and consumers, is to have the FDA require comparative effectiveness information in product labeling, making the relative benefits and risks of each product clearly evident.
The two sides are not evenly matched, making the likely outcome predictable barring an upset, but it will be interesting to watch this debate evolve.