Science's Worst Enemy: Corporate Funding

And you thought the Bush administration was bad.

By Jennifer Washburn
Nov 20, 2007

In recent years there have been a number of highly visible attacks on American science, everything from the fundamentalist assault on evolution to the Bush administration’s strong-arming of government scientists. But for many people who pay close attention to research and development (R&D), the biggest threat to science has been quietly occurring under the radar, even though it may be changing the very foundation of American innovation. The threat is money—specifically, the decline of government support for science and the growing dominance of private spending over American research.

The trend is undeniable. In 1965, the federal government financed more than 60 percent of all R&D in the United States. By 2006, the balance had flipped, with 65 percent of R&D in this country being funded by private interests. According to the American Association for the Advancement of Science, several of the nation’s science-driven agencies—the Environmental Protection Agency (EPA), the Department of Agriculture, the Department of the Interior, and NASA—have been losing funding, leading to more “outsourcing” of what were once governmental science functions. The EPA, for example, recently began conducting the first nationwide study on the air quality effects of large-scale animal production. Livestock producers, not taxpayers, are slated to pay for the study. “The government is clearly increasing its reliance on industry and forming ‘joint ventures’ to accomplish research that it is unable to afford on its own anymore,” says Merrill Goozner, a program director at the Center for Science in the Public Interest, a consumer advocacy group.

Research universities, too, are rapidly privatizing. Both public and private institutions now receive a shrinking portion of their overall funding from government sources and are looking instead to private industry and other commercial activities to make up the difference. Last summer, an investigation by the San Jose Mercury News found that one-third of Stanford University’s medical school administrators and department heads have reported financial conflicts of interest related to their own research, including stock options, consulting fees, and patents.

Is all this truly harmful to science? Some experts argue that corporate support is actually beneficial because it provides enhanced funding for R&D, speeds the transfer of new knowledge to industry, and boosts economic growth. “It isn’t enough to create new knowledge,” says Richard Zare, a professor of chemistry at Stanford University. “You need to transfer that knowledge for the betterment of society. That’s why I don’t want to set up this conflict of interest problem to such a heightened level of hysteria whereby you can’t get universities cooperating with industry.”

Yet even many industry leaders worry that the current mix of private and public funding is out of balance. In 2005, a panel convened by the National Academies (the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine), with both industry and academic members (including Zare), concluded that corporate R&D “cannot and should not replace federal R&D.” Norman Augustine, the panel’s chairman and a former CEO of Lockheed Martin, noted that market pressures have compelled industry to put nearly all its investment into applied research rather than the riskier basic science that drives innovation 10 to 15 years out.

Others fear that if the balance tips too far, the “public interest” side of the science system—known for its commitment to independence and objectivity—will atrophy. Earlier this year, former FDA commissioner Jane Henney remarked that “it’s getting much more difficult to get that pure person with no conflicts at all. . . . The question becomes both one of disclosure and how much of a conflict you can have and still be seen as an objective and knowledgeable reviewer of information.” More than half the scientists at the U.S. Fish and Wildlife Service who responded to a survey conducted by the Union of Concerned Scientists in 2005 agreed that “commercial interests have inappropriately induced the reversal or withdrawal of scientific conclusions or decisions through political intervention.”

Merrill Goozner argues that the danger runs deeper. “In many precincts of the scientific enterprise, the needs of industry have become paramount,” he says, turning science into “a contested terrain” where facts are increasingly contingent on who is funding the research. “The whole scientific revolution, which was a product of the Enlightenment, is threatened when you commercialize science,” he warns.

So is private funding a boon or a bane for American science? The answer, like good science itself, requires looking carefully at how the phenomenon is playing out in the real world.

Steven Nissen is perhaps the most prominent physician speaking out about the pharmaceutical industry’s growing influence over medical research. An esteemed cardiologist at the Cleveland Clinic, Nissen has written more than 300 articles and is the immediate past president of the American College of Cardiology. Working in a bustling academic medical center has given Nissen a unique perspective on the benefits and risks of privatization.

In the past, academic medical investigators strove to maintain “arm’s-length relationships with their corporate sponsors,” says Marcia Angell, a former editor in chief at The New England Journal of Medicine. That changed with the rise of biotechnology and the passage of landmark congressional legislation known as the Bayh-Dole Act. Passed in 1980, the act granted universities and their professors automatic rights to own and commercialize federally funded research. The goal was to unlock financial incentives that would speed the pace of American scientific innovation. Overnight, many of the cultural taboos associated with overt commercial profiteering on campus began to evaporate.

Nissen believes that interactions between academia and industry are crucial to the development of new treatments. He also accepts sponsored research grants from industry, both to test drugs and develop new treatments, although he tries to limit his personal financial conflicts of interest by requiring that any other consulting fees and honoraria be given directly to charity. Still, he is clearly troubled by the threat that privatization poses to academic autonomy—and to research objectivity. “We can only make good decisions in science when all of the information is available for physicians, scientists, and patients to review,” he says. But drug companies are increasingly keeping physicians and their patients in the dark.

Last year, Nissen grew suspicious about possible health risks associated with GlaxoSmithKline’s top-selling diabetes drug, Avandia. “We requested access to the original patient-level data,” he says, but “we were not afforded access.” Nissen wasn’t surprised; for years he has perceived a growing tendency by the drug industry to suppress negative research data.

Searching the Internet, Nissen stumbled upon a remarkable cache of data belonging to Glaxo. His search unearthed 42 Avandia clinical trials—only 15 of which had ever been published. Nissen didn’t know it at the time, but Glaxo’s data were sitting on the Web because of a lawsuit filed in 2004 by Eliot Spitzer, then New York’s attorney general (and now its governor). The lawsuit alleged that Glaxo had concealed negative trial data associated with its popular antidepressant drug, Paxil. When the data were properly analyzed, they showed that children given Paxil were twice as likely to experience suicidal thinking and behavior as children given a placebo, or sugar pill. When Glaxo settled the suit, it denied having suppressed data but consented to posting the results of all its clinical trials online—including its data on Avandia.

Nissen knew there were limitations to the public information he had. He lacked any original patient-level information, and a meta-analysis of prior drug studies is always less powerful than a large prospective, randomized clinical trial. This May, however, Nissen felt compelled to alert doctors and patients to what he had found.

Publishing in The New England Journal of Medicine, Nissen reported that Avandia raised the risk of heart attacks in patients by 43 percent. The news made front-page headlines. Two days later, the FDA, which had already been assessing the health risks of Avandia, imposed its toughest warning label, the “black box,” on the drug, as well as on Actos, another drug used to treat diabetes.

At a subsequent congressional hearing chaired by Representative Henry Waxman, it came to light that the FDA had known about Avandia’s risks for some time. Rosemary Johann-Liang, a former FDA drug safety supervisor, had recommended a black box warning for Avandia’s harmful effects on the heart a year before Nissen’s publication. Glaxo’s own meta-analysis, presented to the FDA in 2006, showed a 31 percent increased risk of heart attacks. Yet according to Johann-Liang, “my recommending a heart failure box warning was not well received by my superiors, and I was told that I would not be overseeing that project.” She was also told to obtain her supervisors’ approval before making any future black box recommendations. After the hearing, the FDA completed its own meta-analysis of the original patient data and found virtually the same heart risks Nissen had reported.

Nevertheless, Nissen found himself under attack, often by people with explicit financial ties to the drug industry. His challengers have included Valentin Fuster, who wrote a critique of Nissen’s work in Nature Clinical Practice Cardiovascular Medicine. Fuster receives Glaxo funding and serves as the chairman of Glaxo’s Research and Education Foundation. Peter Pitts wrote a stinging attack on Nissen in The Washington Times; he is a senior vice president at the PR firm Manning Selvage & Lee, which represents Big Pharma, including Glaxo. Douglas Arbesfeld, a senior communications consultant at the FDA, disparaged Nissen in a biting e-mail to the media. He formerly worked as a spokesman for Johnson & Johnson.

Press reports over the past 15 years detail how whistle-blowers inside academia and the FDA who tried to expose drug research and safety problems have been pressured: some were threatened with legal action, others punished by their superiors and discredited. “Whenever we’ve raised safety questions about drugs,” Nissen says, “there’s always been a reaction like this. Exactly the same thing happened in 2001 when we published a manuscript that suggested that Vioxx might be causing excess heart attacks.” Nissen was coauthor of one of the first studies on the dangers of Vioxx. Three years later, Merck pulled the drug from the market. By that time, one FDA analyst estimates, the drug had contributed to as many as 139,000 heart attacks. (A Merck representative says the paper from which the 139,000 estimate was derived had “serious limitations” and did not necessarily reflect the views of the FDA.)

Experiences like these have bolstered Nissen’s position that the independent research system needs to be protected and preserved. “I think having independent physicians leading the study and analyzing the data is the best way to protect against biases in the reporting of results.” But increasingly, he says, the pharmaceutical industry is farming out its clinical trials to for-profit entities, known as contract research organizations. Independent academic investigators are getting shut out.

The numbers bear Nissen out. Big Pharma now finances approximately 70 percent of the nation’s clinical drug research. In the past, most of this sponsored-research money went to academic medical centers; today an estimated 75 percent flows to for-profit contract research firms.

Even when academic physicians are involved, often they don’t enjoy anything close to true research independence, Nissen says: “Academic physicians are still involved in the leadership of the study, but not fundamentally in the design of the study, or in the key aspects of the execution of the study.” Often, he notes, the industry sponsor will prevent the academic investigator from performing any independent analysis of the complete raw data related to his or her research. “The physician gets a printout of the main results,” Nissen says, “but the actual analysis itself is done by statisticians within the companies.”

In 2001, the editors of 12 leading medical journals, including The New England Journal of Medicine and The Lancet, expressed their shock at what was happening to independent scientific inquiry. Many of these journals implemented new policies requiring authors to sign a statement verifying that they had unfettered access to the complete trial data, took full responsibility for the conduct of the trial, and controlled the decision to publish.

But cases of commercial influence continue to surface, often making headlines, and some editors, like Drummond Rennie of The Journal of the American Medical Association, sound defeated: “You know, if people lie to us, all we can do is reveal that lies were told afterwards—and usually they’re lying on their way to the bank.”

Like medical researchers, university professors have long collaborated with private industry. In recent years, though, the nature and scope of these relationships have changed dramatically. The University of California system is a prime example.

Lisa Bero, a pharmacologist and health policy researcher at the University of California at San Francisco (UCSF), has immersed herself in studying the relationship between industry-funded science and research quality. She also chairs the internal UCSF committee that reviews professors’ financial conflicts of interest. “Corporate money is certainly moving into academia,” Bero says. “And now we have all these new models of funding. I mean, before, it used to be just the investigator who goes out and gets an industry grant. Now entire departments are being funded by one company.”

Early this year, BP (formerly British Petroleum) announced the largest proposed academia-industry research alliance in U.S. history: a 10-year, $500 million agreement with UC Berkeley, Lawrence Berkeley National Laboratory, and the University of Illinois at Urbana-Champaign to study biofuels and develop genetically modified crops that might make those fuels more energy efficient. As of this writing, the deal is still being negotiated, but according to Berkeley’s official proposal, released in early March, it is unusual in many respects. First, it is huge, spanning roughly 25 labs at three campuses. Second, it permits 50 BP employees to lease commercial research space on campus, side by side with Berkeley’s traditional academic labs. On the academic side, all research is publishable. On the BP side, by contrast, the research is proprietary; there is no obligation to publish.

Tadeusz Patzek, an engineering professor at Berkeley who formerly worked as a scientist at Shell, believes the deal compromises the university’s ability to look objectively at long-term energy solutions to global warming. He fears that professors, following the money, will steer their research toward BP’s specified area of commercial interest—biofuels—without adequately exploring other energy options. Patzek’s concerns are supported by survey research in the medical field conducted by David Blumenthal and Eric Campbell, policy analysts at Harvard University. Their research finds that academic scientists who receive industry funding are significantly more likely to select research projects that have a higher potential for commercial application. Industry ties, they report, are also associated with longer delays on publication, confidentiality restrictions, and a greater withholding of information from academic peers.

Richard Nelson, a professor emeritus of economics at Columbia University, finds these commercial restraints on the free flow of academic knowledge troubling from the standpoint of innovation. The biggest change resulting from the Bayh-Dole Act, he says, is the way that academic knowledge is transferred to industry. In the past, university research was picked up by industry mostly through open means: publications, conferences, consulting, et cetera. After the passage of Bayh-Dole, patenting and licensing became more common—not because it was always the only or best way to transfer knowledge to industry but because it enabled universities and their professors to share in the profits.

The rise in academic patenting and licensing also gives universities and their professors growing financial ties to outside companies, not to mention growing investments in their own research (including patent rights, stockholdings, and royalty shares). “What academic institutions always argue is that they have sufficient safeguards in place to protect against any influences on the academic research,” Bero says. “Here at UCSF I sit on what’s called a conflict of interest advisory committee, and believe me, I’m familiar with our gazillion policies. Universities do have a lot of policies, but I would argue that they’re not sufficient.”

Bero points to a large body of research by herself and others that shows industry-funded studies preferentially reach conclusions that favor sponsors’ products or interests. One meta-analysis published in BMJ (British Medical Journal) found that pharmaceutical-industry-funded research was four times more likely to reflect favorably on a drug than research not financed by industry. Even when Bero controls for a variety of other factors, she finds that the effect of industry funding on the research outcome is huge. Research on secondhand smoke conducted by researchers with industry ties is 88 times more likely to find no harm; industry-funded studies comparing cholesterol drugs are 20 times more likely to favor the sponsor’s drug.

This happens, Bero contends, because private industry has become increasingly sophisticated about how it uses “science” to achieve its commercial objectives. “We’ve looked at the tobacco and pharmaceutical industries, and now we’re looking at legal documents pertaining to the asbestos, vinyl chloride, and lead industries,” she reports. The techniques they use are remarkably similar: Positive research gets published; negative research doesn’t. The sponsor’s drug is given at a higher dosage than the competitor’s drug. The sponsors control study design, access to data, and statistical analysis. They ghostwrite articles and pay prominent academics to sign on as “authors.”

Bero observes that many professors are desperate to find funding for their research, and a lot of them are naive about the potential for industry influence. “You never think you’re at risk for conflicts of interest,” she says. “You always think your coworker is.”

University policies governing conflicts of interest and research integrity vary widely from campus to campus—and most still have a lot of holes, Bero contends. One 2005 study examining more than 100 academic medical centers found that half would allow the corporate sponsor to write manuscripts reporting on study results and only allow faculty to “suggest revisions”—a policy basically authorizing commercial ghostwriting of academic research. Thirty-five percent allowed the sponsor to store clinical trial data and release only portions to the investigator; 62 percent allowed the sponsor to alter the study design after the researchers and the sponsor had signed an agreement.

“I think universities really need to think more carefully about protecting their investigators from their sponsors,” Bero says. “A lot of these things are not illegal; there’s no university policy against them.”

Most Americans rarely think of science as something crucial to the way government operates. Yet, as Seth Shulman explains in his book Undermining Science, “the U.S. government runs on information—vast amounts of it.” Scientists at the Department of Agriculture track airborne bacteria resulting from farm wastes, experts at the Centers for Disease Control examine samples to help guard against large-scale disease outbreaks, and regulators at the EPA set standards for pesticide use and exposure. By necessity, most of these federal agencies work closely with industry, but more and more their internal functions are also being privatized. Scientific advisory panels are frequently filled with experts who have close financial and other ties to the same industries that manufacture the products they are reviewing. Agencies also outsource their regulatory functions to private-sector contractors and forge new public-private research ventures.

Consider the Center for the Evaluation of Risks to Human Reproduction, part of the National Institutes of Health (NIH). The center has only two full-time employees and one part-time; until recently the rest of the center’s workforce was supplied by Sciences International (SI), a private consulting firm that has been funded by more than 40 chemical industry clients. For nearly a decade, the center had been outsourcing much of its work to SI, which assessed health risks and drafted reviews for 21 chemicals that the center was reviewing for their possible impact on human reproductive health. This April, NIH terminated its contract with SI after learning that the company or its employees had business ties to the chemical industry.

Another variant of privatization can be seen at the FDA, which currently draws more than 50 percent of its total drug review budget from user fees paid by the pharmaceutical industry. David Kessler, a former head of the FDA, recently told The Wall Street Journal, “There is no doubt that user fees give the industry leverage on setting the agency’s priorities. There are significant risks.” Marcia Angell, the former editor of The New England Journal of Medicine, puts it more bluntly: “The FDA has been captured by the industry it is supposed to regulate.”

What’s troubling about these trends is that most federal agencies are poorly equipped to protect themselves from undue corporate influence, says David Michaels, an epidemiologist at George Washington University and former assistant secretary for environment, safety, and health at the Department of Energy. Regulatory agencies must rely on large quantities of scientific evidence submitted to them by private industry. This evidence is needed to determine the hazards and characteristics of industrial chemicals, products, and wastes. But according to Michaels, most of these federal agencies lack even the most rudimentary tools that a medical journal editor would use to assess the quality and scientific integrity of industry-funded research.

At the EPA and the Occupational Safety and Health Administration, regulators do not have the authority to inquire as to who paid for the studies they receive. The Mine Safety and Health Administration, the Consumer Product Safety Commission, and the National Highway Traffic Safety Administration also lack any formal mechanisms for identifying potential conflicts of interest or for assessing the level of industry influence over the research.

In general, industry-funded studies are also subject to far less oversight than comparable federally funded studies. The data underlying private research do not have to be made public, unlike the data from federally sponsored research. A privately funded study can also avoid external scrutiny simply by being labeled “confidential business information.” One study by the Government Accountability Office found that a majority of the applications submitted to the EPA to market new chemicals contained science-based information that industry had labeled confidential.

As a result of these trends, Lisa Bero says, science has become one of the most powerful tools that private companies can use to fight regulation. The strategy they most often deploy was pioneered by the tobacco industry, which learned to foment scientific uncertainty as a means of staving off regulation. A famous tobacco industry document from 1969 spells out the strategy succinctly: “Doubt is our product, since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing controversy.”

In 2003, Frank Luntz, a political consultant to the Republican Party, recommended using the same strategy to combat public environmental concerns. “Voters believe that there is no consensus about global warming within the scientific community,” he wrote. “Should the public come to believe the scientific issues are settled, their views about global warming will change accordingly. Therefore, you need to make the lack of scientific certainty a primary issue in the debate.”

“Some policymakers fail to recognize that all studies are not created equal,” says Michaels, the author of a forthcoming book, Doubt Is Their Product: How Industry’s Assault on Science Threatens Your Health. “This results in the existence of what appear to be equal and opposite studies, encouraging policymakers to do nothing in the face of what appear to be contradictory findings.”

Virtually everyone interviewed for this article agrees about one thing: The U.S. government must strengthen its investment in science. The members of Norman Augustine’s 2005 National Academies panel continue to call for an immediate doubling of federal investment in basic science, arguing that basic science is a quintessential public good that only the federal government can properly fund. Basic research is risky and its rewards are diffuse, which makes it difficult for individual companies to justify investing in it.

The question of whether privatization has tipped the balance too far from noncommercial, public-interest science is more contested. “I’m more worried about the converse: Not that industry is dominating science but rather that government is abdicating what I think should be its role,” panel member Richard Zare says. Augustine agrees, but he acknowledges that commercialization of the public sphere does present challenges. “The environment we’ve seen does apply pressure in the direction of more applied work at our universities and our government labs. It just requires balance and that’s what makes it difficult,” he says. “If the end product of basic research is articles published in scientific journals, that’s very commendable, but it doesn’t likely impact on the economy short-term. On the other hand, if scientific research is neglected to focus on near-term, quick payoff pursuits, then very quickly the supply of knowledge will be exhausted.”

Some critics of privatization say that more public funding is pivotal not just for basic science but for public health and regulatory science as well. Their argument is much the same: Who else, other than the federal government, can ensure that this science for the public good is reliably carried out? They also say stronger “public interest” protections are urgently needed: stricter conflict of interest rules in both academia and the government, and prohibitions against commercial ghostwriting and corporate control of statistical analysis and raw data.

Once the genie is out of the bottle, it isn’t easy to get it back in. The more that public-sector scientists become invested in the status quo—through industry grants, patents, and the like—the less likely they are to support reforms. Not so long ago, academics and government scientists ensured that the basic building blocks of science were freely available to everyone. Today, the Columbia economist Richard Nelson points out, a sizable portion of this public knowledge is private property. Is this something that should—or even could—be reversed?

The dilemma, says Eric Campbell of Harvard, is that industry partnerships yield many positive benefits: funding opportunities, the conversion of knowledge into products that benefit the public, rewards for inventors, jobs, economic growth. On the other hand, “the fundamental reason the public invests in science is out of the belief that it represents truth, untainted by commercial interests,” he says. “So to call that into question, I think, is really one of the great risks.”
