Ever wonder why some oncology research never becomes a clinical success? Two cancer researchers were curious, so they reviewed 53 so-called landmark papers, all published in leading journals and emanating from reputable laboratories. What did they find? Overall poor quality of published preclinical data: 47 papers, or 89 percent, could not be replicated.
"The scientific community assumes that the claims in a preclinical study can be taken at face value," wrote C. Glenn Begley, former vp and global head of hematology and oncology research at Amgen and now a senior vp at TetraLogic, and Lee Ellis, a professor of surgical oncology and cancer biology at the University of Texas MD Anderson Cancer Center, in Nature. There is an assumption that "the main message of the paper can be relied on... Unfortunately, this is not always the case."
"It was shocking," Begley tells CNBC. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers, we became convinced you can't take anything at face value." A team of about 100 Amgen scientists was involved in the effort.
The piece prompted some handwringing from the editors at Nature, who confessed to a growing unease with carelessness. In general, they cited such possibilities as "unrelated data panels; missing references; incorrect controls; undeclared cosmetic adjustments to figures; duplications; reserve figures and dummy text included; inaccurate and incomplete methods; and improper use of statistics — the failure to understand the difference between technical replicates and independent experiments" (read the editorial here).
Such gaffes were mentioned in the context of honest mistakes, although fraud remains a big concern, especially amid the growing number of retractions. Over the past decade, the number of retractions in scientific journals rose more than 10-fold, while the number of journal articles published increased by just 44 percent. Why? Good science is not always the highest priority.
"Incentives have evolved over the decades to encourage some behaviors that are detrimental to good science," Ferric Fang, a University of Washington professor of laboratory medicine, microbiology and medicine, told a National Academy of Sciences committee earlier this week about retraction issues, according to UPI.
The reason, according to Fang, is simply that too many researchers are competing for too few dollars, creating a Darwinian contest for funding and prestige. To what extent this accounts for sloppiness is unclear. But the Nature editors confess the comment published by Begley and Ellis "throws up many questions. Here are three of them. Who is responsible? Why is it happening? How can it be stopped?"
For their part, Begley and Ellis tried to sort things out by contacting original authors, discussing discrepancies, exchanging reagents, and repeating experiments under author direction, sometimes even in the lab of the original investigator. Some authors, though, required them to sign a confidentiality agreement barring them from disclosing data that contradicted initial results.
"The world will never know" which of the 47 studies may actually convey incorrect information, Begley tells CNBC. So how to explain the discrepancies? Often, the authors of the papers would tell Begley and his colleagues that they simply "didn't do it right." He recounted one instance in which he met with a leading researcher for breakfast at a conference to review the issue.
What he was told upset him. "We went through the paper line by line, figure by figure," Begley also tells CNBC. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."
There are, however, few incentives for verifying research. A decade ago, most potential cancer-drug targets were backed by 100 to 200 publications, CNBC writes, but today each may be backed by just a handful. "If you can write it up and get it published you're not even thinking of reproducibility," Ken Kaitin, director of the Tufts Center for the Study of Drug Development, tells CNBC. "You make an observation and move on. There is no incentive to find out it was wrong."
shock pic thx to ogimogi on flickr