The predictive validity crisis: Pharma’s productivity paradox – Part I
Posted: 14 October 2025 | Dr Jack Scannell (CEO of Etheros Pharmaceuticals Corp), Dr Raminderpal Singh (Hitchhikers AI and 20/15 Visioneers)
Drug discovery now costs 100 times more per FDA-approved drug than in 1950, despite vast advances in biology and computing. The core problem is the collapse of predictive validity in preclinical models, which sits at the heart of pharma’s productivity paradox.


The pharmaceutical industry faces a paradox that challenges conventional wisdom about technological progress. Despite revolutionary advances in molecular biology, genomics and computational power, drug discovery has become dramatically less efficient over the past seven decades. The numbers are stark: adjusted for inflation, the average pharmaceutical company spent roughly 100 times more per FDA-approved drug in 2010 than in 1950. This 100-fold increase occurred despite DNA sequencing becoming 10^10 times more efficient and X-ray crystallography improving 10^4-fold.
Dr Jack Scannell, an R&D productivity researcher, argues that this paradox arises from a fundamental misunderstanding of what drives pharmaceutical innovation. The key driver, he contends, is not technological capability but a concept called ‘predictive validity’ – the degree to which preclinical models accurately predict human therapeutic outcomes.
The accidental fall of predictive validity
In retrospect, the remarkable productivity of early drug discovery can be explained by the high predictive validity of its models.
Some in vitro and animal models proved surprisingly good at predicting human outcomes, particularly for anti-infectives, blood pressure drugs and treatments for excess stomach acid. Between the 1950s and 1970s, low regulatory hurdles meant researchers could move quickly from lab tests to human trials. The ‘design, make, test’ loop was fast. In fact, some drugs were tested for efficacy in humans with minimal preclinical study. For example, every major class of antidepressants was discovered by administering compounds to people and observing effects. No antidepressant class has ever been discovered through model systems without prior knowledge of similar compounds’ impact in humans.
In this fast ‘design, make, test’ loop, humans – the ultimate target – served as their own model system far earlier in the R&D process than is the case today. Predictive validity was essentially perfect, since researchers were testing the very system they aimed to treat. As Scannell puts it, “people are a pretty good model of people.”
However, this approach could not continue. As ethical standards tightened and regulatory requirements increased, more up-front work was required before human trials, and the trials themselves became far more costly. In addition, the preclinical models that genuinely predicted human outcomes yielded effective drugs and so rendered themselves economically redundant. The world no longer needed endless new antibiotics, blood pressure medications or stomach ulcer drugs.
Today, for major untreated diseases, ethical constraints prevent us from conducting risky trials in humans. Yet the preclinical model systems still in use routinely fail to predict human efficacy accurately, particularly in Alzheimer’s disease, cancer, and many psychiatric and neurological conditions.
The false positive trap
The mathematics of drug discovery reveals why poor preclinical models are so damaging. The vast majority of randomly selected molecules or targets are unlikely to yield effective treatments, so the prior probability that any given candidate works is tiny. Screening systems must therefore have very high specificity to be useful: even a modest false positive rate, applied to an enormous pool of inactive candidates, swamps the handful of true positives.
Poor models essentially become ‘false positive-generating devices,’ identifying compounds that appear promising in preclinical testing but fail in human trials. The faster and more efficiently these poor models are run – through high-throughput screening, combinatorial chemistry or AI-driven approaches – the faster false positives are generated, which then tend to fail at great expense in human trials.
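A back-of-the-envelope calculation makes this concrete. The Python sketch below applies standard Bayesian hit-rate arithmetic to a hypothetical screen; the base rate, sensitivity and specificity are illustrative assumptions of ours, not figures from the article.

```python
# A minimal sketch of the base-rate arithmetic described above.
# All numbers are illustrative assumptions, not figures from the article.

def screen_outcomes(n_compounds, prior, sensitivity, specificity):
    """Expected hits from a screen, split into true and false positives."""
    actives = n_compounds * prior              # compounds that genuinely work
    inactives = n_compounds - actives          # everything else
    true_pos = actives * sensitivity           # real hits the model catches
    false_pos = inactives * (1 - specificity)  # inactives the model waves through
    ppv = true_pos / (true_pos + false_pos)    # chance a flagged hit is genuine
    return true_pos, false_pos, ppv

# Assume 1 in 1,000 library compounds is genuinely active, and a model
# that looks respectable on paper: 80% sensitivity, 95% specificity.
tp, fp, ppv = screen_outcomes(100_000, prior=0.001, sensitivity=0.8, specificity=0.95)
print(f"true positives: {tp:.0f}, false positives: {fp:.0f}, PPV: {ppv:.1%}")
# -> true positives: 80, false positives: 4995, PPV: 1.6%
```

On these assumed numbers, fewer than one in 60 ‘hits’ is genuine: the screen is, in effect, a false positive-generating device.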
The AI amplification problem
Current applications of artificial intelligence in drug discovery often fall into this trap. Some early AI-first firms essentially automated the process of running fundamentally non-predictive models faster. They selected model systems that were optimal for machine learning, but poorly suited to capturing human pathophysiology. As Scannell observes, this pattern has repeated across multiple technological waves over several decades, from computer-aided drug design to high-throughput screening and combinatorial chemistry.
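The same arithmetic suggests why throughput gains cannot rescue a weakly predictive model. The toy comparison below again uses assumed numbers of our own, including a placeholder per-candidate trial cost: running the same screen ten times faster leaves the trial spend per genuine hit unchanged, while a modest improvement in specificity cuts it by almost an order of magnitude.

```python
# Toy comparison (assumed numbers): scaling a weak screen's throughput
# versus modestly improving its predictive validity.

TRIAL_COST = 50e6  # placeholder cost of advancing one screening "hit" into trials

def trial_spend_per_genuine_hit(n, prior, sensitivity, specificity):
    true_pos = n * prior * sensitivity               # genuine hits found
    false_pos = n * (1 - prior) * (1 - specificity)  # false hits advanced alongside them
    return TRIAL_COST * (true_pos + false_pos) / true_pos

base   = trial_spend_per_genuine_hit(100_000,   0.001, 0.8, 0.95)
faster = trial_spend_per_genuine_hit(1_000_000, 0.001, 0.8, 0.95)   # 10x throughput
better = trial_spend_per_genuine_hit(100_000,   0.001, 0.8, 0.995)  # 10x fewer false positives

print(f"baseline:           ${base/1e6:,.0f}M of trial spend per genuine hit")
print(f"10x throughput:     ${faster/1e6:,.0f}M (no change)")
print(f"higher specificity: ${better/1e6:,.0f}M")
```

Under these assumptions, faster screening simply delivers ten times as many expensive failures per year, whereas better predictive validity changes the economics of every downstream trial.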
Meet the authors
Dr Raminderpal Singh is a recognised visionary in the implementation of AI across technology and science-focused industries. He has over 30 years of global experience leading and advising teams, helping early- to mid-stage companies achieve breakthroughs through the effective use of computational modelling. Raminderpal is currently the Global Head of AI and GenAI Practice at 20/15 Visioneers. He also founded and leads the HitchhikersAI.org open-source community and is co-founder of the techbio company Incubate Bio.
Raminderpal has extensive experience building businesses in both Europe and the US. As a business executive at IBM Research in New York, he led the go-to-market for IBM Watson Genomics Analytics. He was also Vice President and Head of the Microbiome Division at Eagle Genomics Ltd in Cambridge. Raminderpal earned his PhD in semiconductor modelling in 1997; he has published several papers and two books, and holds twelve issued patents. In 2003, he was selected by EE Times as one of the 13 most influential people in the semiconductor industry.
Dr Jack Scannell is best known for his work diagnosing the causes of the progressive decline in R&D productivity in the drug and biotechnology industry. He coined the term “Eroom’s Law” (computer science’s “Moore’s Law” spelled backwards) to describe the contrast between falling biopharma R&D output efficiency since 1950 and spectacular gains in basic science and in the brute-force efficiency of the scientific activities on which drug discovery is generally believed to depend. More recently, his work has focused on the predictive validity of screening and disease models in drug R&D, which constitutes perhaps the major productivity bottleneck. Dr Scannell is currently the CEO of Etheros Pharmaceuticals Corp, which is developing small molecule enzyme mimetics, based on fullerene chemistry, for age-related and neurodegenerative diseases. He is an associate of the Department of Science, Technology and Innovation Studies at the University of Edinburgh and previously led Discovery Biology at e-Therapeutics PLC, an Oxford-based biotech firm. He also has experience in drug and biotech investment at UBS and at Sanford Bernstein, where he ran the European healthcare teams. He has a PhD in neuroscience from Oxford University and a degree in medical sciences from Cambridge University.
Related topics
Animal Models, Artificial Intelligence, Disease Research, Drug Discovery, Drug Discovery Processes, Genomics, High-Throughput Screening (HTS), Molecular Biology, Translational Science
Related conditions
Alzheimer’s disease, Cancer, Neurological conditions, Psychiatric conditions
Related organisations
Etheros Pharmaceuticals Corp, Hitchhikers AI, 20/15 Visioneers