

Medical Sciences, Replicability

Hi everyone, I’m Joana and I am thrilled to be joining The Medic Portal. I will be contributing a monthly post on new developments in the field of biomedical sciences and I look forward to discussing these issues with you. My background is in biochemistry and I am currently wrapping up an MPhil in Medical Science at the University of Cambridge. I plan to spend the next year studying the philosophy of science and health before returning to the U.S. to pursue a combined medical degree and scientific doctorate specialising in cancer biology. I am passionate about scientific research, specifically as it intersects with medicine, and have dedicated myself to bridging these two fields. In these blog posts, I hope to accomplish this by using current advancements and controversies in biomedical research to explore the relevance of science for the clinic.

In this, my first post, I’d like to draw your attention to an ongoing discussion in the life sciences on the subject of reproducibility. One of the central tenets of the scientific method, reproducibility demands that experimental results be replicable in order to demonstrate that they are not the product of chance. In fact, when I first learned about the scientific method, I recall my teacher emphasising that careful explanation of every step of an experiment is essential so that someone else could read my procedure and repeat it. This standard of reproducibility is particularly important in biomedical science, where publications not only inform further research in the field but often also serve as the basis for the development of new experimental treatments.

It is alarming to learn, then, that a significant portion of published biomedical research cannot be independently replicated. For instance, in 2012, scientists at the biotechnology company Amgen chose 53 high-impact publications considered to be “landmark” studies in the field of cancer biology and set out to replicate their results, contacting individual labs for reagents and guidance when necessary (1). Of these, they were able to confirm the main findings reported in only 6 (11%) of the studies. This discrepancy demonstrates a failure of the scientific method, and comes at a high cost: an estimated $28 billion a year is spent on preclinical research that cannot be reproduced (2). In addition to the direct economic losses, irreproducibility in biomedical research carries significant costs ranging from time and resources spent pursuing false leads to the potential exposure of human subjects to ineffective drugs.

Considering these consequences, there have been a number of proposals to address the causes of this “reproducibility crisis.” Many scientific journals now require the publication of thorough procedures and raw data to prevent biases in the reporting and analysis of results, which can lead to misleading and often irreproducible conclusions. Academic scientists and members of the biomedical industry alike are pushing for more opportunities to publish negative results in addition to exciting new findings. This would improve the “watchdog” capacity of the scientific community to catch erroneous or misleading conclusions. Lastly, the “Reproducibility Initiative” and others like it hope to encourage scientists to have their research replicated by independent laboratories before publication; although time-consuming and costly, this would save the time and resources spent by laboratories seeking to elaborate on published findings that cannot be reproduced.

Perhaps the greatest contributor to irreproducibility, variability among biological samples, also happens to be the most difficult to address (3). Immortalised cell lines, even when acquired from certified cell banks, quickly accumulate mutations and drift genetically. Subtle changes in culture conditions can lead to morphological and functional changes that cannot easily be replicated in another laboratory. Similarly, genetically engineered animal models cannot easily be reproduced exactly from lab to lab, leading to variable and often contradictory results. Such issues can be resolved through genetic characterisation and open sharing of biological reagents between laboratories. It is the use of poorly characterised and unreliable antibodies, however, that makes it impossible to replicate a significant portion of scientific publications; this culprit has yet to be adequately addressed.

Crucial for adaptive immune function, antibodies are Y-shaped proteins secreted by B cells. In the body, these molecules each specifically recognise and bind to an epitope on an antigen, often a foreign object or pathogen. The antigen-binding site contains a hypervariable region, which allows for an enormous diversity of antibodies, each of which can recognise a different potential antigen. Scientists use antibodies in experiments to recognise and bind to target proteins with high specificity. Many assays commonly used in preclinical biomedical research are entirely reliant on antibodies to detect the presence, localisation, and relative amount of the proteins being studied. However, the production of these antibodies for scientific use leaves room for substantial variability and lack of specificity (4).

Commercial antibodies are made by exposing lab animals to a protein of interest and purifying the antibodies from their blood. Alternatively, the antibody-secreting white blood cells (B cells) can be collected, fused with immortalised cells, and grown in culture. With both of these methods, the antibodies produced can be cross-reactive, recognising other similarly shaped proteins in a biological sample. In order to validate the specificity of an antibody, scientists must perform control experiments using the antibody on two samples that are identical except for the presence or absence of the protein of interest. Often, these conditions are difficult to achieve, and the results hold true only for the specific antibody tested and only in the application in which it was used. Additionally, there is substantial variability between batches of antibodies, as each animal mounts a different immune response. As anecdotal evidence from scientists shows, batch-to-batch variability is enough to yield an entirely different result in an experiment (4). Efforts to standardise antibody design and production and to compile validation information for the tens of thousands of commercial antibodies on the market have not yet been successful. It quickly becomes clear that unreliable commercial antibodies, crucial for biomedical experiments, are also a main source of the documented inability to reproduce results.

Although reproducibility is the cornerstone of the scientific process, published biomedical studies often fail to be independently replicated, potentially with substantial consequences: considerable time and resources are wasted by scientists who hope to pursue and expand on new findings and by pharmaceutical companies who hope to develop new treatments from published targets and potential drugs. The astounding failure rate of Phase II clinical trials for new medicines is due in part to flawed publications serving as the scientific basis for this research. In most instances, the irreproducibility of these results is not due to scientific fraud, but to incomplete methods sections, biased data analysis, and, most importantly, variation between biological reagents. Among these, commercially available antibodies used in biochemical experiments can be cross-reactive and unreliable from batch to batch, hindering efforts to reproduce and further investigate published results. Efforts to identify and address the causes of this “reproducibility crisis” hope to ensure the validity and progress of scientific research. For scientists, like myself, who hope to use research to inform and improve patient care, this will prove a sizeable hurdle to overcome when translating scientific discoveries into effective new treatments.


  1. Begley, C. G. & Ellis, L. M. Drug development: Raise standards for preclinical cancer research. Nature 483, 531–533 (2012).
  2. Freedman, L. P., Cockburn, I. M. & Simcoe, T. S. The Economics of Reproducibility in Preclinical Research. PLoS Biology 13, e1002165 (2015).
  3. Bissell, M. Reproducibility: The risks of the replication drive. Nature 503, 333–334 (2013).
  4. Baker, M. Reproducibility crisis: Blame it on the antibodies. Nature 521, 274–276 (2015).
