
Fixing failed drugs: AI solutions for toxicity in drug discovery – part 3

What role could large language models and AI agents play in drug safety? In Part 3, Layla Hosseini-Gerami of Ignota Labs discusses how emerging technologies might make toxicity analysis faster, more accessible and part of the drug discovery workflow from day one.


When discussing how newer AI technologies like large language models (LLMs) could contribute to toxicity prediction, Hosseini-Gerami identified two key applications:

1. Improved scientist–data interaction:

“Once we have all these datasets, have run these algorithms and have some outcomes and outputs, how can a scientist really engage with that information and use it to make a decision on what to do next?” she enquires.

 


 

She envisions interfaces where scientists could ask natural language questions: “Can they, for example, have a chatbot where they can say, ‘What’s the risk of liver toxicity for this drug?’ And then it will automatically pull out the data that it needs and the context that it needs to be able to answer that question.”

This addresses a fundamental challenge in data science: “As a data scientist, that is a really big challenge. How do you convey data in a way that a human being can really get the most value from it?” she says.
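The chatbot pattern she describes can be sketched in a few lines: a natural-language question is mapped to the toxicity endpoint it concerns, the relevant assay records are retrieved, and both are assembled into a grounded prompt for an LLM. This is a hypothetical illustration only; the endpoint keywords, assay names, and data structures are invented for the example and are not an Ignota Labs API.

```python
# Hypothetical sketch of the question -> retrieval -> prompt flow described above.
# All names and data are illustrative, not a real system.

ENDPOINT_KEYWORDS = {
    "hepatotoxicity": ["liver", "hepatic", "hepatotoxicity"],
    "cardiotoxicity": ["heart", "cardiac", "herg", "cardiotoxicity"],
}

def retrieve_context(question: str, assay_data: dict) -> list:
    """Pull the assay records relevant to the endpoint the question mentions."""
    q = question.lower()
    hits = []
    for endpoint, keywords in ENDPOINT_KEYWORDS.items():
        if any(k in q for k in keywords):
            hits.extend(assay_data.get(endpoint, []))
    return hits

def build_prompt(question: str, records: list) -> str:
    """Assemble a grounded prompt; an LLM call would consume this string."""
    context = "\n".join(f"- {r['assay']}: {r['result']}" for r in records)
    return f"Context:\n{context}\n\nQuestion: {question}"

assay_data = {
    "hepatotoxicity": [
        {"assay": "HepG2 viability (10 uM)", "result": "42% viability"},
    ],
}
question = "What's the risk of liver toxicity for this drug?"
records = retrieve_context(question, assay_data)
prompt = build_prompt(question, records)
```

In a real deployment the keyword lookup would be replaced by embedding-based retrieval over internal databases, but the shape of the workflow — retrieve the data the question needs, then answer from that context — is the same.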

2. Agentic workflows for toxicity monitoring and analysis:

“With the rise of agentic workflows, a really cool system could be one where we take a drug that has had some kind of toxicity problem, ask the question ‘Why is this drug causing toxicity?’ and then the agent knows exactly what analysis needs to be run, what data needs to be gathered to make that decision.”

The conversation explored how AI agents could potentially run toxicity assessments in the background during early drug discovery. The interviewer proposed that, since much of toxicity assessment is a routine checking process that can be done computationally, it could be pushed into the background, with an agent deciding which analyses to run, executing them and perhaps surfacing the results in a dashboard.

Hosseini-Gerami agreed with the potential, particularly if implemented efficiently: “If we could show that we could do this in a way that is very low cost, and then it actually does show an impact downstream for the overall chances of success of that drug… I think that’s what we’re waiting for.”
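The background-agent idea discussed above can be made concrete with a toy sketch: for each candidate compound, a planner picks which toxicity checks to run, executes them, and rolls the results into a dashboard-style summary. Every rule, threshold and field name here is an invented placeholder, not Ignota Labs' method; real systems would call trained models and cheminformatics toolkits instead of these stand-in lambdas.

```python
# Illustrative sketch of an agentic background toxicity screen.
# Planner rules, thresholds and check implementations are hypothetical.

def plan_checks(compound: dict) -> list:
    """Toy planner: choose which analyses to run from compound properties."""
    checks = ["structural_alerts"]
    if compound.get("logp", 0) > 3:          # invented rule: flag lipophilic compounds
        checks.append("hepatotoxicity_model")
    return checks

CHECKS = {
    # Stand-ins for real models; each returns True if the compound is flagged.
    "structural_alerts": lambda c: "nitro" in c.get("smarts_flags", []),
    "hepatotoxicity_model": lambda c: c.get("logp", 0) > 5,
}

def run_background_screen(compounds: list) -> dict:
    """Run the planned checks per compound; return a dashboard-style summary."""
    dashboard = {}
    for comp in compounds:
        results = {name: CHECKS[name](comp) for name in plan_checks(comp)}
        dashboard[comp["id"]] = {
            "checks_run": sorted(results),
            "flags": [name for name, flagged in results.items() if flagged],
        }
    return dashboard

summary = run_background_screen([
    {"id": "CMPD-1", "logp": 5.6, "smarts_flags": []},
    {"id": "CMPD-2", "logp": 1.2, "smarts_flags": ["nitro"]},
])
```

The low-cost framing in the quote above maps onto this design: cheap computational checks run continuously and unattended, and chemists only look at the dashboard when something is flagged.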

Current limitations and future directions

Despite promising technologies, significant challenges remain in advancing in silico toxicology. Hosseini-Gerami identified the following key limitations:

1. Data accessibility and proprietary barriers:

“It is a shame that a lot of this isn’t shared. People talk a lot about how more of this data needs to be shared and we need to start talking about when there are failures, but ultimately companies are going to be keeping it under wraps.”

She suggests a pragmatic approach: “It’s almost like we need to show them that there will be a benefit for them if they share data.”

2. Limited knowledge for predicting rare toxicities: “We don’t know what the best off targets to measure are for predicting muscle toxicity, for example. That knowledge hasn’t been built up yet.”

3. Validation timeframes and translation challenges: “Any prediction that you make at that early stage, you don’t really find out until a decade later when it goes into a human being.”

4. Misleading public information: “The number of times I’ll look at a clinical trial report or a publication, and they’ll claim ‘this drug was well tolerated,’ and then when you actually dig into the data and the findings from preclinical, you see the drug was not well tolerated at all.”

She notes that companies often obscure toxicity issues in public communications: “The reasons that they give for why they’re not continuing with a programme are things like ‘a strategic shift.’ They don’t want to say what they’re actually trying to say, which is ‘it didn’t show any efficacy’ or ‘it caused toxicity issues in mice.’”

The path forwards

Looking towards the future, Hosseini-Gerami emphasised that implementing earlier toxicity screening would require a significant change of mentality in the industry:

“This will really require a big mindset shift… but that is the ultimate goal, right? To have these things come in a lot earlier and be part of the process throughout.”

She raises the question of how to facilitate this change: “Perhaps other people have resonated with this issue; for example, a scientist who has worked on a project where they really wish that they had done some safety screening a lot earlier. Is there something that we can do to facilitate this mindset shift?”

The conversation concluded with agreement that while technology continues to advance, data remains the fundamental limitation: “It’s just so extremely complex and multifaceted. Data is always going to be a limiting factor in just how well we can understand and tackle these issues,” ended Hosseini-Gerami.

Conclusion

The conversation with Layla Hosseini-Gerami illuminates both the challenge and potential of AI-driven approaches to toxicity prediction in drug discovery. The statistics are sobering – 56 percent of drug candidates failing due to safety issues represents an enormous waste of scientific potential and investment capital, alongside delayed treatments for patients.

While recent advances in multiomics data generation, systems biology approaches and AI methods offer promising new directions, realising their full potential will require addressing systemic issues in how the industry approaches safety evaluation and data sharing.

The vision Hosseini-Gerami supports – of integrated, AI-driven toxicity assessment becoming a routine background process in early drug discovery – could dramatically improve success rates. However, achieving this vision will require not just technological advances but changes to incentive structures and company practices throughout the pharmaceutical ecosystem.

As computational methods continue to evolve and more data becomes available, in silico toxicology modelling may eventually help reduce the staggering failure rate of drug candidates due to safety issues – saving billions in development costs and, more importantly, accelerating the delivery of new therapies to patients in need.

Meet the interviewee: Dr Layla Hosseini-Gerami

Dr Layla Hosseini-Gerami bridges the worlds of chemistry, biology and AI in her role as Chief Data Science Officer at Ignota Labs. She holds undergraduate and master’s degrees in chemistry, during which she was named Outstanding Graduate of the Year and was a finalist in the prestigious Salters awards. She went on to earn a PhD at the University of Cambridge under the mentorship of Andreas Bender, a renowned figure in AI drug discovery. There she pioneered integrative cheminformatics and bioinformatics strategies for understanding drug molecular mechanisms, winning the Chemistry Theory Outstanding Thesis Award in 2023. She also produced several publications in collaboration with Eli Lilly, leading the industry translation of her academic research. At Ignota Labs she has built upon these methods to understand the mechanisms of drug toxicity and how to mitigate them. Her goal is to leverage AI not just to understand, but also to prevent adverse drug reactions, ensuring safer therapeutic options for patients.
