Functional genomics is central to modern drug discovery, yet high attrition rates persist. In this article, Dr Salman Tamaddon-Jahromi, a postdoctoral research associate at the University of Cambridge, discusses how end-to-end CRISPR screening strategies, iPSC-derived neuronal models and layered quality control can convert functional genomics signals into actionable therapeutic hypotheses.

Functional genomics has become a foundation of modern drug discovery, giving scientists the tools to connect human genetics with disease biology at scale. Genome-wide CRISPR screens can now routinely identify hundreds of potential targets, particularly in complex areas such as neurodegeneration. Yet despite this progress, the gap between target discovery and actionable therapeutic insight remains wide, with many promising signals failing to translate into credible drug hypotheses.
In this article, Dr Salman Tamaddon-Jahromi, a postdoctoral research associate in the Department of Pharmacology at the University of Cambridge, reflects on his experience across academia and large-scale industry–academic collaborations. Drawing on his work leading neurodegeneration validation efforts at Open Targets, he explores how functional genomics platforms must be designed end-to-end – from screening strategy and induced pluripotent stem cell (iPSC)-derived neuronal models to quality control (QC) and mechanistic validation – so that CRISPR hits can be converted into evidence that drug discovery teams can trust.
Moving beyond hit generation
During his time as neurodegeneration experimental lead in the Open Targets Validation Lab, Tamaddon-Jahromi focused on what happens after a hit emerges from a genome-wide screen. Rather than stopping at pooled CRISPR discovery, the focus was on building scalable validation workflows that could interrogate targets at higher resolution and greater biological depth.
That work centred on developing arrayed CRISPR platforms, where individual gene perturbations could be paired with deep phenotyping. The goal was to generate outputs that would withstand scrutiny beyond the immediate screening group.
“The output was industry-ready data packages and practices designed for target uptake with the overall objective: generation of orthogonal, decision-grade evidence that boosts confidence in prioritised targets and accelerates the path from an ‘interesting hit’ to a credible therapeutic hypothesis,” he says.
This shift – from ranking genes to assembling evidence – reflects a broader maturation of functional genomics within drug discovery.
Screening as an end-to-end strategy
Tamaddon-Jahromi’s approach to screening is guided by the idea that it should not be treated as a single technical choice. Too often, screens are framed around library selection or throughput, rather than as integrated experimental systems.
He argues that credibility depends on how well the entire strategy hangs together, from hypothesis to analysis.
“A recurring pitfall is to treat screening as a discrete choice – most often the selection of a library – when in practice it is an end-to-end strategy. A screen is a linked system,” he says.
In this view, robustness comes from embedding quality control throughout the workflow. Throughput alone does not confer confidence; rather, explicit pass–fail criteria at each stage decide whether a dataset is interpretable, rescuable or should be discarded altogether.
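To make that concrete, the sketch below is a minimal, hypothetical illustration of stage-level gating. The stage names, metrics and thresholds are invented for illustration rather than drawn from any specific screen; the point is only that each stage carries an explicit pass–fail check and a failed stage is classed as rescuable or discarded.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QCStage:
    name: str
    passes: Callable[[dict], bool]   # pass-fail criterion for this stage
    rescuable: bool                  # can a failure be salvaged downstream?

def gate_screen(metrics: dict, stages: list[QCStage]) -> str:
    """Walk the workflow stage by stage and report the first failure."""
    for stage in stages:
        if not stage.passes(metrics):
            outcome = "rescuable" if stage.rescuable else "discard"
            return f"{stage.name}: FAIL ({outcome})"
    return "all QC gates passed: data interpretable"

# Illustrative stages and thresholds (assumptions, not published criteria)
stages = [
    QCStage("library representation", lambda m: m["guide_coverage"] >= 0.95, rescuable=False),
    QCStage("editing efficiency",     lambda m: m["median_indel_frac"] >= 0.70, rescuable=True),
    QCStage("control separation",     lambda m: m["ssmd_controls"] >= 3.0, rescuable=True),
]

print(gate_screen({"guide_coverage": 0.98, "median_indel_frac": 0.62, "ssmd_controls": 4.1}, stages))
```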
The five pillars of functional genomics screening
To clarify these dependencies, Tamaddon-Jahromi frames screening as a system built around five pillars: cell model selection; CRISPR modality and delivery; library and guide design; phenotype and screen design; and analysis and interpretation. These are supported by an enabling layer of automation, process engineering and assay miniaturisation.
While the pillars can be discussed separately, in practice they are interconnected. Choices made early in the process, such as the biological model, inevitably constrain what is feasible further downstream. The value of this framework is in forcing teams to confront those trade-offs upfront, ensuring that the screen can answer the biological question it was designed to address.
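One way to make the framework tangible is to record each pillar as an explicit design decision, so the trade-offs are visible before the screen is run. The sketch below is hypothetical: the field values and the single compatibility warning are assumptions chosen for the example, not a description of any particular Open Targets screen.

```python
from dataclasses import dataclass

@dataclass
class ScreenDesign:
    cell_model: str        # pillar 1: biological system and its constraints
    crispr_modality: str   # pillar 2: knockout, interference or activation, plus delivery
    library: str           # pillar 3: library scope and guide design
    phenotype: str         # pillar 4: readout and screen format
    analysis_plan: str     # pillar 5: statistics and interpretation

    def check_compatibility(self) -> list[str]:
        """Flag obvious ways an early choice constrains downstream pillars (illustrative rule only)."""
        flags = []
        if "iPSC-derived neuron" in self.cell_model and "genome-wide pooled" in self.library:
            flags.append("post-mitotic neurons strain pooled coverage; consider arrayed or sub-library formats")
        return flags

design = ScreenDesign(
    cell_model="iPSC-derived neuron (inducible transcription factor protocol)",
    crispr_modality="CRISPR knockout, lentiviral delivery",
    library="genome-wide pooled",
    phenotype="survival under proteotoxic stress",
    analysis_plan="guide-level enrichment with stage-wise QC gates",
)
print(design.check_compatibility())
```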
iPSC-derived neuronal models: relevance with constraints
In neurodegeneration research, iPSC-derived neuronal systems are increasingly used as a validation layer because they offer human genetic context and disease relevance. However, they also impose significant technical constraints that shape screening design.
Differentiation introduces variability, neurons can aggregate and phenotypes become highly sensitive to factors such as plating density, morphology and timing. Even the choice of differentiation strategy has downstream consequences. Inducible transcription factor-based systems tend to be more homogeneous and amenable to screening, whereas morphogen-based approaches may better capture regional identity, albeit with increased variability.
“The model is therefore not just a biological choice; it is an engineering constraint that shapes format, scale and which phenotypes are realistically measurable,” says Tamaddon-Jahromi.
Editing efficiency and interpretability
CRISPR editing performance represents another major restriction, particularly in neuronal systems. Ideally, editing should occur late enough to avoid confounding developmental effects while still achieving high and consistent knockout efficiency across many perturbations.
This requires treating editing as a system, rather than an assumption. Measuring editing efficiency therefore becomes essential, especially when interpreting negative results.
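As a simple illustration of that principle, the sketch below treats editing efficiency as a measured quantity per perturbation and only calls a “no phenotype” result a candidate true negative once editing clears a threshold. The gene labels, indel fractions and the 70 per cent cut-off are invented for illustration.

```python
import statistics

def classify_negative(gene: str, indel_fractions: list[float], phenotype_hit: bool,
                      min_efficiency: float = 0.70) -> str:
    """Distinguish a candidate true negative from an uninterpretable one."""
    # e.g. per-well indel fractions from amplicon sequencing (illustrative inputs)
    efficiency = statistics.median(indel_fractions)
    if phenotype_hit:
        return f"{gene}: phenotype detected (median editing {efficiency:.0%})"
    if efficiency < min_efficiency:
        return f"{gene}: no phenotype, but editing only {efficiency:.0%}; result not interpretable"
    return f"{gene}: no phenotype with {efficiency:.0%} editing; candidate true negative"

print(classify_negative("GENE_A", [0.82, 0.79, 0.85], phenotype_hit=False))
print(classify_negative("GENE_B", [0.41, 0.38, 0.52], phenotype_hit=False))
```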
“In complex systems, success often hinges on interpretation: real datasets are messy, so you need to know when QC has failed, what can be salvaged and how to place statistics in biological context,” says Tamaddon-Jahromi.
Why iPSC-derived neurons still matter
Despite their limitations, iPSC-derived neurons remain key tools for target validation. They can be generated directly from patient material and disease-causing mutations can often be corrected to create isogenic controls. This allows for clean, genetically matched comparisons between disease and corrected cells, increasing confidence in phenotype interpretation.
“That yields a clean comparison – disease versus corrected wild type on an identical background – so phenotypes can be interpreted with far greater confidence,” says Tamaddon-Jahromi.
These systems also allow mechanistic experiments – such as targeted perturbations and time-series analyses – that are difficult to perform with many other evidence sources.
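A toy numerical sketch, with entirely invented values, shows why the matched background helps: comparing each disease line against its isogenic corrected control within the same differentiation batch turns a noisy cross-line comparison into a set of paired differences.

```python
import statistics

# Illustrative phenotype scores per differentiation batch (invented numbers)
disease   = [1.92, 2.10, 1.75, 2.05]   # patient-derived line
corrected = [1.10, 1.25, 0.98, 1.18]   # isogenic corrected control, same batches

# Pairing within batches cancels shared background and batch effects
paired_differences = [d - c for d, c in zip(disease, corrected)]
mean_effect = statistics.mean(paired_differences)
sd_effect = statistics.stdev(paired_differences)

print(f"mean paired effect: {mean_effect:.2f} (sd {sd_effect:.2f}) across {len(paired_differences)} batches")
```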
A pragmatic view of model limitations
Tamaddon-Jahromi is equally clear about what iPSC-derived neurons cannot capture. Neurodegenerative diseases are age-associated, and reprogrammed cells lack decades of accumulated stress.
Rather than viewing this as a fatal flaw, he frames it as a question of fitness for purpose: “The question is not whether the model is perfect, but whether it is useful for the mechanism under interrogation – all models are wrong, but some are useful.”
Designing platforms from the end goal backwards
Across his work, one principle consistently guides platform design: start with the end goal.
“I work backwards from the therapeutic hypothesis,” says Tamaddon-Jahromi. “What biology am I trying to modulate or rescue and what evidence would convince a discovery team that the target is real?”
Once that goal is defined, the design choices begin to take shape – including the model, CRISPR modality, readouts, format and analysis plan. Execution at scale then depends on an enabling layer of automation and process engineering. Without this, high-throughput screening risks becoming an aspiration rather than a reliable capability.
From hit to target: a gated validation process
In this framework, target validation is not a single confirmatory experiment but a progression through defined gates. Initial verification ensures that QC criteria are met and controls behave as expected. Validation then requires persistence across independent guides and orthogonal conditions. Mechanistic understanding follows, before translational considerations such as tractability and safety are integrated.
“We treat the journey from hit to target as a set of gates, not a single replication step,” says Tamaddon-Jahromi.
The outcome is not just a prioritised gene, but an evidence package that can support downstream drug discovery decisions.
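A minimal sketch of that gated logic is shown below. The gate names follow the description above, but the evidence fields and pass criteria are assumptions made for illustration rather than the team’s actual decision rules.

```python
# Hypothetical gates: a candidate only advances while each gate in sequence is satisfied
GATES = [
    ("verification",  lambda e: e["qc_passed"] and e["controls_behaved"]),
    ("validation",    lambda e: e["independent_guides"] >= 2 and e["orthogonal_condition"]),
    ("mechanism",     lambda e: e["mechanistic_support"]),
    ("translational", lambda e: e["tractable"] and not e["safety_flag"]),
]

def progress_target(evidence: dict) -> str:
    """Return how far a candidate progresses through the gates."""
    cleared = []
    for name, passes in GATES:
        if not passes(evidence):
            return f"stopped at '{name}' gate (cleared: {cleared or 'none'})"
        cleared.append(name)
    return f"all gates cleared: {cleared}"

evidence = {
    "qc_passed": True, "controls_behaved": True,
    "independent_guides": 3, "orthogonal_condition": True,
    "mechanistic_support": True, "tractable": True, "safety_flag": False,
}
print(progress_target(evidence))
```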
The future of functional genomics in drug discovery
Looking ahead, Tamaddon-Jahromi sees functional genomics moving towards tighter integration across discovery and validation. Pooled screens will increasingly be paired with higher-resolution readouts, iPSC-derived systems will become more routine as validation platforms and end-to-end programmes will link screening directly to QC gates and data integration.
If these trends continue, target prioritisation may become less speculative and more systematic.
“Prioritisation becomes a structured accumulation of evidence rather than a leap of faith,” concludes Tamaddon-Jahromi.
Tamaddon-Jahromi notes that this work was supported by many colleagues, most notably Dr Panos Zalmas (Head, Open Targets Validation Lab), Dr Emma Duncan (Oncology Therapy Lead) and Dr Suruchi Pacharne (Immunology & Inflammation Lead). Their support, together with the team’s accumulated expertise, was instrumental in building the screening platform and translating it across therapeutic areas.