The next phase of the multiomics evolution, powered by AI

Genomics laid the foundation for precision medicine, but on its own, it offers only part of the picture. This article explores how integrated multiomics can provide the deeper biological context needed to drive more effective therapies forwards.

In 2003, the completion of the Human Genome Project was celebrated as the start of a new era in medicine. Yet nearly two decades later, even though a genome can now be sequenced in a single day at a fraction of the original cost, and despite an exponential increase in data generation, the clinical impact of genomics has remained limited. The problem is not a lack of genetic insight; it is that biological processes operate at a level that goes far beyond the genetic sequence.

Understanding health and disease requires more than reading the genomic code. A genome is only one layer in a highly interconnected system of molecular and cellular processes. The dynamic behaviour of RNA transcripts, the regulation and modification of proteins, the lipid composition of membranes, the structural organisation of cells and the spatial context of tissues all play critical roles in disease progression and therapeutic response. Without this broader context, even the most comprehensive genomic datasets offer an incomplete map of biology.

Enter multiomics

Multiomics is designed to solve this problem by observing multiple molecular layers simultaneously. By combining genomics with transcriptomics, proteomics, metabolomics, spatial profiling and cellular imaging, researchers can move beyond single-variable analysis and begin to see the biological system as a whole. The result is a more holistic, actionable view of cellular and tissue dynamics that is essential for uncovering causal mechanisms in disease.

However, most so-called ‘multiomics’ today falls short of this integrated vision. Current implementations typically involve piecemeal workflows where DNA sequencing is done one week, RNA expression the next, and proteomics later still. The experiments often rely on separate sample sets or involve multiple technical replicates processed under subtly different conditions. This disjointed approach introduces biological noise and technical variability, making downstream data integration complex and frequently unreliable.

In practice, this fragmentation undermines the promise of multiomics. Even when great care is taken, data produced under separate workflows cannot always be effectively reconciled. Time-dependent cellular states, batch effects, instrument drift and sample degradation erode consistency. In drug discovery, where reproducibility, sensitivity and sample-limited contexts are paramount, these challenges pose significant barriers.
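
As a toy illustration of that reconciliation problem (the sample identifiers, dates and values below are invented purely for the sketch), consider what happens when two modalities are profiled in separate runs on partially overlapping sample sets: material is lost before any biology is analysed, and what remains carries modality-specific batch labels.

```python
import pandas as pd

# Hypothetical per-modality tables, each produced in its own run on its own date.
rna = pd.DataFrame({
    "sample_id": ["S01", "S02", "S03", "S05"],
    "rna_batch": ["2024-03-01"] * 4,
    "tp53_expr": [5.2, 1.1, 4.8, 0.9],
})
protein = pd.DataFrame({
    "sample_id": ["S01", "S02", "S04", "S06"],
    "prot_batch": ["2024-04-12"] * 4,
    "tp53_abundance": [2.7, 0.8, 3.1, 1.5],
})

# Only the samples present in *both* runs survive the join.
merged = rna.merge(protein, on="sample_id", how="inner")
print(merged)  # S01 and S02 only; the remaining samples cannot be integrated
print(len(merged), "of", len(set(rna.sample_id) | set(protein.sample_id)), "samples retained")

# Worse, every retained row carries two different acquisition dates, so any
# cross-modality difference is entangled with batch and storage time.
```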

AI and multiomics

This problem becomes even more pronounced when artificial intelligence (AI) and machine learning enter the picture. Models are only as good as the data used to train them. Even the most advanced algorithms struggle to learn valuable patterns when fed fragmented or inconsistent data. This is one of the key reasons why many early applications of AI in biology have failed to deliver prospective, hypothesis-generating insights. It is not that AI lacks potential; it’s that it lacks the right data.
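
A deliberately simplified simulation (every number below is made up) makes the point concrete: if one acquisition batch happens to contain mostly responders, a model can score well by learning the batch signature rather than the biology, and that apparent performance collapses on a new batch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_batch(n, offset, p_responder):
    """Simulate one acquisition batch: a technical offset plus a weak true signal."""
    y = rng.random(n) < p_responder          # responder labels
    X = rng.normal(size=(n, 20)) + offset    # batch-wide technical shift
    X[:, 0] += 0.3 * y                       # faint genuine biology
    return X, y.astype(int)

# Training data: batch A is mostly responders, batch B mostly non-responders,
# so the technical offset is strongly correlated with the label.
Xa, ya = make_batch(200, offset=2.0, p_responder=0.9)
Xb, yb = make_batch(200, offset=0.0, p_responder=0.1)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

print("in-distribution accuracy:",
      model.score(np.vstack([Xa, Xb]), np.concatenate([ya, yb])))  # inflated by the batch shortcut

# A fresh batch with a new offset and balanced labels exposes the shortcut.
Xc, yc = make_batch(200, offset=1.0, p_responder=0.5)
print("new-batch accuracy:", model.score(Xc, yc))                  # much closer to chance
```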

The solution

Solving this challenge requires a new generation of multiomic platforms that can capture multiple molecular and structural dimensions from a single sample in a single experimental run. These platforms must preserve spatial relationships, handle diverse molecular inputs and produce consistent, reproducible outputs across modalities. With such platforms, it becomes possible to simultaneously collect genomic, transcriptomic, proteomic and spatial data that is tightly aligned both technically and biologically.
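
One minimal sketch of what ‘tightly aligned’ could look like in code, assuming nothing about any particular vendor format: every modality is stored row-aligned against the same cell identifiers from the same run, so cross-modality questions become array slices rather than error-prone merges. Community containers such as AnnData and MuData are built around the same principle.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AlignedMultiomicSample:
    """All modalities measured on the same cells in the same run, row-aligned by cell_ids."""
    cell_ids: np.ndarray            # shape (n_cells,)
    rna_counts: np.ndarray          # shape (n_cells, n_genes)
    protein_intensity: np.ndarray   # shape (n_cells, n_markers)
    xy_position: np.ndarray         # shape (n_cells, 2), spatial coordinates in the tissue

    def __post_init__(self):
        n = len(self.cell_ids)
        for name in ("rna_counts", "protein_intensity", "xy_position"):
            assert getattr(self, name).shape[0] == n, f"{name} is not aligned to cell_ids"

# Because rows share one index, a question spanning modalities is a slice, not a merge:
# e.g. protein levels of cells whose RNA marks them as tumour-like.
n_cells = 1000
sample = AlignedMultiomicSample(
    cell_ids=np.arange(n_cells),
    rna_counts=np.random.poisson(1.0, size=(n_cells, 50)),
    protein_intensity=np.random.rand(n_cells, 10),
    xy_position=np.random.rand(n_cells, 2) * 1000.0,
)
tumour_like = sample.rna_counts[:, 0] > 3   # placeholder gate on one transcript
print(sample.protein_intensity[tumour_like].mean(axis=0))
```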

Spatial biology, in particular, is becoming an increasingly important piece of this puzzle. As researchers seek to understand how molecular programmes function within tissue microenvironments, the ability to preserve and analyse spatial context has emerged as a game changer. For example, a tumour’s resistance to therapy may not stem from its mutational profile alone but from its interaction with the surrounding stroma or immune cells – interactions that are invisible in dissociated, bulk sequencing data.

By incorporating spatial data into multiomic workflows, researchers can maintain the native context of tissue architecture. This allows for studying disease mechanisms as they occur in vivo, improving our ability to identify tissue-specific expression patterns, cellular heterogeneity and microenvironmental cues that shape disease progression. Such insights are invaluable in oncology, immunology and neuroscience, where structure-function relationships are critical.
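
As a rough sketch of how spatial context becomes quantitative (the coordinates, cell-type labels and 50 µm radius are invented for illustration), one can ask, for every tumour cell, what fraction of its neighbours are immune cells; dissociated, bulk data cannot answer this at all.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Toy segmented tissue: positions in micrometres and a per-cell type label.
n_cells = 5000
xy = rng.uniform(0, 2000, size=(n_cells, 2))
cell_type = rng.choice(["tumour", "immune", "stroma"], size=n_cells, p=[0.4, 0.2, 0.4])

tree = cKDTree(xy)
radius_um = 50.0

# For each tumour cell, look up every neighbour within the radius and count immune cells.
tumour_idx = np.flatnonzero(cell_type == "tumour")
immune_fraction = np.empty(len(tumour_idx))
for i, idx in enumerate(tumour_idx):
    neighbours = tree.query_ball_point(xy[idx], r=radius_um)
    neighbours = [j for j in neighbours if j != idx]
    immune_fraction[i] = np.mean(cell_type[neighbours] == "immune") if neighbours else 0.0

# Tumour cells in immune-rich vs immune-poor neighbourhoods can then be compared
# against their own transcriptomic or proteomic profiles from the same run.
print("median immune fraction around tumour cells:", np.median(immune_fraction))
```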

Integrated multiomics is the future

The impact on drug development could be profound. Integrated multiomics can accelerate target discovery by revealing causative pathways that span molecular layers. It can improve biomarker identification by correlating transcriptional profiles with protein localisation and cell morphology. It can enhance patient stratification by linking genotype to phenotype in a spatial and cellular context. When paired with AI, it can transform noisy biological complexity into structured, predictive frameworks – shifting the role of machine learning from a descriptive tool to a discovery engine.
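
A minimal sketch of the stratification idea, with every feature and label simulated rather than drawn from any real study: concatenate patient-aligned feature blocks from several layers and cross-validate one predictive model, so that each layer’s contribution is measured rather than assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_patients = 120

# Simulated, patient-aligned feature blocks from one integrated workflow.
genotype   = rng.integers(0, 3, size=(n_patients, 30)).astype(float)  # variant dosages
expression = rng.normal(size=(n_patients, 40))                        # transcript levels
spatial    = rng.normal(size=(n_patients, 10))                        # e.g. immune-infiltration scores
response   = (expression[:, 0] + spatial[:, 0] + rng.normal(size=n_patients) > 0).astype(int)

def cv_auc(blocks):
    """Cross-validated AUC for a model trained on the concatenated feature blocks."""
    X = np.hstack(blocks)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X, response, cv=5, scoring="roc_auc").mean()

# Comparing single layers against the combination quantifies what integration adds.
print("genotype only:   ", round(cv_auc([genotype]), 2))
print("expression only: ", round(cv_auc([expression]), 2))
print("all layers:      ", round(cv_auc([genotype, expression, spatial]), 2))
```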

There are already early examples of this potential in action. In oncology, for instance, integrating transcriptomic and spatial proteomic data has helped researchers understand why certain tumours evade immune checkpoint inhibitors despite harbouring targetable mutations. Combining single-cell RNA-seq with imaging-based morphometric profiling in neurodegenerative disease has revealed distinct cellular subtypes involved in early disease pathology. These insights would be nearly impossible to obtain through single-modality analysis alone.

Of course, the challenge is not just technical – it’s also logistical. To make these platforms accessible and scalable, they must integrate seamlessly into existing lab workflows, require minimal sample handling, and produce data formats that are standardised and AI-ready. Robust software infrastructure is also essential to support automated analysis, cross-modality integration and model training across institutions. Democratising access to these capabilities is key to ensuring that they benefit not only elite academic labs but also translational researchers and clinical scientists worldwide.
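
On the ‘standardised and AI-ready’ point, one possible shape for such an output (the column names and the choice of Parquet are illustrative, not a proposed standard) is a single tidy, long-format table keyed by sample, cell and modality, so downstream tooling has one layout to understand instead of one per instrument.

```python
import pandas as pd

# One tidy, long-format table for all modalities: each row is a single measurement.
records = [
    {"sample_id": "S01", "cell_id": 7, "modality": "rna",     "feature": "CD8A", "value": 12.0},
    {"sample_id": "S01", "cell_id": 7, "modality": "protein", "feature": "CD8",  "value": 3.4},
    {"sample_id": "S01", "cell_id": 7, "modality": "spatial", "feature": "x_um", "value": 812.5},
    {"sample_id": "S01", "cell_id": 7, "modality": "spatial", "feature": "y_um", "value": 144.2},
]
tidy = pd.DataFrame.from_records(records)

# A columnar, schema-carrying file keeps types intact and streams well into ML tooling
# (writing Parquet requires pyarrow or fastparquet to be installed).
tidy.to_parquet("s01_multiomics.parquet", index=False)

# Any modality is then a filter, and a model-ready matrix is a pivot away.
wide = tidy.pivot_table(index=["sample_id", "cell_id"], columns="feature", values="value")
print(wide)
```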

AI’s power in other fields – like protein folding, image recognition and molecular docking – has shown what’s possible when the right models are paired with the correct data. Biology, however, presents a unique challenge: it is deeply contextual, dynamic and multilayered. Traditional data modalities have been ill-suited to capture this complexity. However, with next-generation multiomic platforms, that limitation may finally be lifted.

AI trained on truly integrated datasets can move beyond identifying patterns to suggest functional mechanisms, reveal unexpected correlations and generate new hypotheses. This fundamentally shifts its role in drug discovery from passive analysis to active participation in the scientific process. Combined with rigorous experimental design and high-fidelity data collection, this approach could unlock faster therapeutic cycles, better predictive models for clinical success and more personalised treatment strategies.

Conclusion

The scientific community has made remarkable progress in mapping the genome. The next leap forwards will come not from reading DNA more efficiently but from integrating what we read with how cells behave and respond. To move beyond incremental gains in therapeutic development, we need data that reflects the full complexity of living systems – consistently, coherently and at scale.

By unifying multiomic analysis in a single, accessible platform and pairing it with powerful AI models, we can unlock new pathways for understanding biology and accelerate the discovery of transformative therapies. This is not just the next chapter in genomics – it’s the evolution of how we understand life and treat disease.

Meet the author

Dr Francisco Garcia, SVP of software & informatics at Element Biosciences

Francisco brings over 20 years of experience leading the development of software and instrumentation for the life sciences industry. Before joining Element as Vice President of Software and Informatics, he served as General Manager at the GSWC Group, LLC, providing technical consulting services to companies in software strategy, data analysis and cloud-native development.

Francisco’s experience includes almost 19 years at Illumina, most recently as Vice President of Software Engineering and Embedded Systems. In 2009, he won the annual Illumina Innovation Award for the development of RTA, the on-instrument real-time primary analysis software for sequencing. He was directly involved with the launch of every major system – from the BeadArray Reader to the NovaSeq – and spearheaded the re-invention of Illumina’s cloud platform.

Francisco holds a BS in electrical engineering from Cornell University, where he received numerous awards and scholarships, including being named a National Merit Scholar, an Eastman Kodak Scholar and a Merrill Presidential Scholar. He was awarded his PhD from the Space Plasma Physics Group in Electrical Engineering at Cornell University, where he was a National Science Foundation Fellow.