Beyond the hype: a veteran’s honest assessment of AI in drug discovery – Part 1
Posted: 14 August 2025 | Dr Raminderal Singh (Hitchhikers AI and 20/15 Visioneers)
An interview with Thibault Géoui reveals why this technology wave might finally break through pharma’s productivity crisis – and why it will take longer than the optimists claim.
The pharmaceutical industry has witnessed countless technological revolutions that promised to transform drug discovery, yet Eroom’s Law1 – the inverse of Moore’s Law – continues its relentless march. Every nine years, it costs twice as much to develop a drug, even as computational power doubles every 18 months. However, according to Thibault Géoui, a scientist-turned-industry-veteran who has witnessed decades of technological promises, artificial intelligence represents something fundamentally different.
Géoui’s perspective carries weight. He holds a PhD in structural biology from Grenoble University, followed by roles spanning product management at scientific suppliers and over a decade at Elsevier managing life sciences data platforms. He has since held strategic positions at Charles River Laboratories and, most recently, at the consulting engineering firm Zuhlke. Across these roles, he has occupied a unique vantage point from which to observe how technology adoption actually occurs in pharmaceutical R&D.
“I witnessed going from manual experimentation to a lot of automation during my PhD between 2003 and 2006. We had some kind of technological transformation where we accelerated the way we did experimentation. But I also witnessed that technology doesn’t solve everything – if you don’t look at it from end to end, it creates bottlenecks somewhere else.”
This observation would prove prophetic as Géoui moved through various industry roles, consistently seeing the same pattern: point solutions that optimised individual processes yet left the overall system unchanged, or even created new inefficiencies elsewhere in the pipeline.
The automation paradox – why previous technologies failed to move the needle
The story of modern pharmaceutical R&D reads like a catalogue of technological marvels that somehow failed to deliver on their transformative promises. Géoui witnessed this first hand during his doctoral work in structural biology, where the shift from manual crystallisation experiments to full automation was representative of the industry’s approach to innovation.
“We moved from doing manual setup of crystallisation experiments – taking a 24-well plate, mixing one microlitre drops, putting one millilitre of buffer in a well, covering with a fragile glass slide – to fully automated plate setup. You could do 1,000 times more experiments with 100 times less starting material – and you wouldn’t break anything.”
This dramatic improvement in a specific process exemplifies what Géoui calls the “silo industrialisation” approach that has characterised pharmaceutical technology adoption for decades. Combinatorial chemistry, introduced in the 1980s, allowed chemists to synthesise 800 times more molecules per year. High-throughput screening methods delivered thousandfold increases in screening capacity. Computer-assisted molecular design platforms promised to revolutionise how drugs were conceived and optimised. Each innovation represented a genuine breakthrough in its specific domain.
Yet when Géoui steps back to examine overall industry productivity, a disturbing pattern emerges.
“If you look at the efficiency of drug R&D since the 1950s, it’s going down,” he says.
This counterintuitive reality is captured in what researcher Jack Scannell termed “Eroom’s Law” – the mirror image of Moore’s Law.2
“Every nine years, it costs twice the price to develop a drug. If you look at the cost of developing a drug today compared to the 1950s, it’s 100 times more, even adjusted for inflation.”
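The two figures in that quote are mutually consistent: costs that double every nine years compound to roughly 100-fold over the six decades from 1950 to 2010, the window covered by Scannell’s original Eroom’s Law analysis. A minimal sketch of the arithmetic (the 1950–2010 window is an assumption based on that paper):

```python
def cost_multiple(years: float, doubling_period: float = 9.0) -> float:
    """Inflation-adjusted cost multiple after `years`,
    given costs that double every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Six decades of doubling every nine years:
multiple = cost_multiple(2010 - 1950)
print(f"{multiple:.0f}x")  # prints "102x" – roughly the 100x quoted
```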
Recent research from Alexander Schuhmacher et al3 makes these figures even more sobering. “Last year, someone revisited this and looked at the top 16 pharma companies. For the top 16 pharma, the cost is six billion, not two billion.”
The implications are stark. According to this analysis, the only way large pharmaceutical companies remain profitable is through acquisitions – buying compounds developed by others rather than generating them internally.
“Essentially, what he’s saying is that R&D in pharma just doesn’t work. It’s not profitable.”
This productivity paradox has multiple explanations:
- Increased regulatory complexity
- The “better than the Beatles” problem (how do you improve on something that’s already excellent?)
- Diminishing returns from the low-hanging fruit hypothesis
While these all contribute, Géoui identifies a more fundamental issue: the brute force approach.
“I think what we’ve witnessed for the past several decades in pharma is a brute force approach, meaning that if I automate the hell out of something, I just solve the problem. That’s been an issue specifically when all this cheap genetic data generation came in. People said, ‘Fine, I’ll create masses of data, I’ll store masses of data, I’ll process massive data, I’ll move it fast and stuff will pop out.’”
The problem with this approach becomes apparent when examining productivity differences within the industry.
“If you compare small research sites and big research sites, small research sites will always be more productive, more efficient. Same thing when you compare large pharma and biotech – on average, biotech are eight or nine times more productive than large pharma.”
These statistics point to what Géoui sees as the core issue: “We can’t handle complexity. When you throw more money at something, it doesn’t necessarily make it better.”
This inability to manage complexity at scale helps explain why decades of technological advancement have failed to improve overall R&D productivity. Each innovation optimised a component of the system while leaving the fundamental challenge – managing the staggering complexity of biological systems and drug development processes – largely unaddressed.
Stay tuned for Part 2 – The AI Difference: why this wave might be the one that truly changes everything.
Thibault Géoui bridges the worlds of science, data and technology to help life sciences organisations bring better products to market – faster and smarter. With 20 years of experience in drug R&D and a passion for FAIR data and AI, he has led major initiatives at organisations such as QIAGEN, Elsevier and Charles River Laboratories, and today at Zuhlke. Through his work, Thibault guides pharma and tech leaders in aligning data strategies, refining value propositions and accelerating digital transformation. A PhD in structural biology and an MBA back his systems-thinking approach, while his writing and coaching bring complex ideas down to earth – with a touch of humour and a strong belief in human connection.