It is all too easy for physicians to ignore or miss evidence, particularly when drug or device companies use aggressive marketing to counter reports that could harm sales. In 2002 JAMA published results of a huge study, called the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial, or ALLHAT, which looked at drugs used to lower blood pressure. The researchers concluded that inexpensive generic diuretic drugs were just as effective at controlling blood pressure and preventing heart attacks as were brand-name drugs. For some patients the diuretics were actually safer, with fewer side effects.
The study, which was funded by NIH’s National Heart, Lung, and Blood Institute, made headlines around the world. Given the strength of the results, its authors and the NIH believed it would encourage physicians to try diuretics first. Yet after eight years, the ALLHAT report has hardly made a dent in prescribing rates for name-brand blood pressure medications, according to Curt Furberg, a professor of public health sciences at Wake Forest University.
USC’s Hoffman says the scenario repeats itself time and again. “Some expensive new drug becomes a blockbuster best seller following extensive marketing, even though the best one might be able to say about it is that it seems statistically ‘non-inferior’ to an older, cheaper drug. At the same time, we don’t have any idea about its long-term side effects.”
After the publication of some ALLHAT results, Pfizer—one of the manufacturers of newer and more expensive antihypertensive drugs—commissioned a research company to survey doctors about their awareness of the results. When the company learned that doctors were generally in the dark about the study, Pfizer helped make sure they stayed that way. Two Pfizer employees were praised as “quite brilliant” for “sending their key physicians to sightsee” during Furberg’s ALLHAT presentation at the annual American College of Cardiology conference in California in 2000, according to e-mails entered into the public record after a citizen’s petition to the FDA. Pfizer sales reps were instructed to provide a copy of the study to doctors only if specifically asked. “The data from a publicly funded study may be good, but you don’t have anyone out there pushing that study data, versus thousands of people doing it for the drug companies,” says Kevin Brode, a former vice president at marketRx, a firm that provides strategic marketing information to the pharmaceutical industry.
THE DOCTOR PROBLEM
Misleading marketing isn’t the only issue. In many cases, physicians perform surgeries, prescribe drugs, and give patients tests that are not backed by sound evidence because most doctors are not trained to analyze scientific data, says Michael Wilkes, vice dean of education at U.C. Davis. Medical students are required to memorize such a huge number of facts—from the anatomy and physiology of every structure in the human body to the fine details of thousands of tests, diagnoses, and treatments—that they generally do not have time to critique the information they must cram into their heads. “Most medical students don’t learn how to think critically,” Wilkes says.
That was not true for Mount Sinai’s David Newman. “I grew up questioning authority—and it got me kicked out of kindergarten,” he says with a laugh. In medical school, he was surprised that his questions were often met with answers that were rooted not in evidence but merely in the opinions and habits of senior physicians. Over the years as a practicing physician, he says he has come to believe that most of what physicians do daily “has no evidence base.”
This was the gist of a talk Newman delivered on a cool, gray day last fall to a packed lecture hall in the cavernous Boston Convention and Exhibition Center, where more than 5,000 emergency physicians from around the world gathered for the Scientific Assembly of the American College of Emergency Physicians. Much of what doctors know and do in medicine is flat-out wrong, Newman told his colleagues, and the numbers tell the truth.
Newman started his talk by explaining two concepts: the “number needed to treat,” or NNT, and the “number needed to harm,” or NNH. Both concepts are simple, but often doctors are taught only a third number: the relative decrease in symptoms that a given treatment can achieve. For example, when an ad for the anticholesterol drug Lipitor trumpets a one-third reduction in the risk of heart attack or stroke, that is a relative risk, devoid of meaning without context. Only by knowing how many patients have to be treated to achieve a given benefit—and how many will be harmed—can doctors determine whether they are doing their patients any good, Newman says. In the best-case scenario, 50 men at risk for a heart attack would have to be treated with statins like Lipitor for five years to prevent a single heart attack or stroke. Stated differently, 98 of 100 men treated for five years would receive no benefit from the drug, yet all would be exposed to its potentially serious, even fatal, side effects, such as muscle breakdown and kidney failure.
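The arithmetic behind Newman's point can be sketched in a few lines. The baseline risk below is illustrative (the article gives only the NNT of 50, not the underlying risks), chosen so a one-third relative reduction works out to that NNT:

```python
# Sketch of the NNT arithmetic Newman describes. The 6% baseline
# five-year risk is an assumed figure for illustration; the article
# itself quotes only the resulting NNT of 50.

def nnt(control_risk, treated_risk):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_risk - treated_risk
    return 1 / arr

# A "one-third relative risk reduction" sounds large, but if the
# baseline risk is 6%, the treated risk is 4% -- an absolute drop
# of only 2 percentage points.
control = 0.06
treated = control * (1 - 1 / 3)

print(round(nnt(control, treated)))               # 50 treated per event prevented
print(round(100 - 100 / nnt(control, treated)))   # 98 of 100 see no benefit
```

The same relative reduction applied to a lower baseline risk would yield a much larger NNT, which is why a relative figure alone is "devoid of meaning without context."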
Another example cited by Newman: Doctors routinely give antibiotics to people with possible strep throat infections in order to prevent heart damage that can, in rare instances, develop if a strep infection leads to acute rheumatic fever. In practice, doctors prescribe an antibiotic to more than 70 percent of all adults with a sore throat, says the Centers for Disease Control and Prevention (CDC), even though almost all throat infections are caused by viruses, for which antibiotics are useless.
Are doctors keeping their patients safe by freely prescribing antibiotics, Newman asks, or are they doing more harm than good? To answer the question, he dug up statistics from the CDC and found that the NNT was 40,000: Doctors would have to treat 40,000 patients with strep throat to prevent a single instance of acute rheumatic fever. Then he looked up how many fatal and near-fatal allergic reactions are caused by antibiotics. The number needed to harm was only 5,000. In other words, in order to prevent a single case of rheumatic fever, eight patients would suffer a near-fatal or fatal allergic reaction.
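Newman's trade-off follows directly from the two numbers. A minimal sketch, using only the NNT and NNH figures quoted above:

```python
# Newman's strep-throat trade-off, using the figures quoted in the
# article: NNT of 40,000 and NNH of 5,000.

NNT = 40_000  # patients treated to prevent one case of acute rheumatic fever
NNH = 5_000   # patients treated per fatal or near-fatal allergic reaction

# Treating enough patients to prevent a single case of rheumatic fever
# causes this many serious allergic reactions along the way:
harms_per_benefit = NNT / NNH

print(harms_per_benefit)  # 8.0 -- eight patients harmed per case prevented
```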
Finding the hard statistics for antibiotics is relatively easy, but sometimes data are literally withheld. Lisa Bero at the University of California, San Francisco, found that clinical trials producing positive outcomes were nearly five times as likely to be published as those with neutral or negative outcomes, allowing health care providers to come away with rosier views of a drug’s value than might be warranted. As Bero and her coauthors so drily put it, “The information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.”