Looking to responsibly pay for an increase in defense and security spending, President Donald Trump's FY2018 budget proposal calls for cuts to non-defense discretionary spending, including funding for the National Institutes of Health (NIH).
Trump's proposal would cut the NIH budget by almost 20 percent, saving $5.8 billion.
That’s already drawing opposition from congressional Republicans, who are looking to pass a budget that keeps funding at current or increased levels.
Both Senate Majority Leader Mitch McConnell and House Speaker Paul Ryan publicly oppose the cuts.
But on this issue, Congress should listen to Trump. Not only is the NIH essentially a corporate welfare program for pharmaceutical corporations, but history also shows that government-funded medical research produces fewer results than privately funded research.
For one, the NIH operates as corporate welfare by putting taxpayers on the hook for research that benefits private drug companies.
The NIH underwrites medical research that would otherwise be done by massively profitable pharmaceutical companies.
Such research often yields no marketable product, leaving companies with millions of dollars in costs and nothing to sell.
So the NIH conducts that risky research for them, and bills the taxpayer.
Secondly, when an NIH research program is successful, the results are handed over to pharmaceutical companies, which develop the product and sell it for private profit.
As Taxpayers for Common Sense notes:
The story of Taxol, as told in a recent General Accounting Office report, is achingly familiar to observers of NIH. Researchers, heroes really, begin testing plant species for anticancer activity as early as the 1950s. They notice that extract from the bark of the yew tree works against tumors, and other scientists eventually isolate the paclitaxel and figure out how it prevents cell divisions. NIH conducts the clinical trials and proves that paclitaxel has potential. But since NIH cannot legally produce or market drugs, it enters into a transfer agreement with Bristol-Myers Squibb in 1991, after 30 years of research. Under the deal, the corporation takes all of NIH’s research, finishes the clinical trials, markets the drug, and prices it autonomously. NIH ignores its own rules that require the company to show evidence the drug would be reasonably priced. For their troubles, NIH and the taxpayers get a measly 0.5 percent of sales as royalties, even though NIH’s rules allow royalties of 5 to 8 percent.
Then Taxol takes off, becoming the highest-selling cancer drug in history and earning $9 billion for Bristol-Myers Squibb off of its $1 billion investment in development. People afflicted with breast, ovarian, and certain lung cancers as well as AIDS-related Kaposi’s sarcoma benefit from the new treatment… if they can afford it. In 2001, Bristol-Myers Squibb charges between $1,000 and $2,000 per dose for Taxol! America’s taxpayers lose twice, because NIH receives only $35 million in royalties for its $484 million investment in research, and Medicare pays $687 million for Taxol from just 1994 to 1999. Even though taxpayer money funded Taxol’s discovery and much of the research that would make it marketable, Bristol-Myers Squibb charges Medicare roughly $500 more per dose than it charges private doctors.
But the NIH is not only an unnecessary corporate welfare program (corporations naturally prefer research the government pays for over research they must fund out of their own pockets); government-funded research is also often less successful and less productive than privately funded research.
In fact, history shows that nations with the healthiest economies and greatest advances in science are those that leave scientific research to the private sector.
In "The Case Against Public Science," Cato Institute scholar and clinical biochemist Terence Kealey notes:
The world’s leading nation during the 19th century was the UK, which pioneered the Industrial Revolution. In that era the UK produced scientific as well as technological giants, ranging from Faraday to Kelvin to Darwin—yet it was an era of laissez faire, during which the British government’s systematic support for science was trivial.
The world’s leading nation during the 20th century was the United States, and it too was laissez faire, particularly in science. As late as 1940, fifty years after its GDP per capita had overtaken the UK’s, the U.S. total annual budget for research and development (R&D) was $346 million, of which no less than $265 million was privately funded (including $31 million for university or foundation science). Of the federal and states governments’ R&D budgets, moreover, over $29 million was for agriculture (to address—remember—the United States’ chronic problem of agricultural over productivity) and $26 million was for defence (which is of trivial economic benefit.) America, therefore, produced its industrial leadership, as well as its Edisons, Wrights, Bells, and Teslas, under research laissez faire.
Meanwhile the governments in France and Germany poured money into R&D, and though they produced good science, during the 19th century their economies failed even to converge on the UK’s, let alone overtake it as did the US’s. For the 19th and first half of the 20th centuries, the empirical evidence is clear: the industrial nations whose governments invested least in science did best economically—and they didn’t do so badly in science either.