Advances in personalised medicine leave data processing behind
The buzz around personalised medicine has long been felt in the halls of academic centres and pharmaceutical labs, and most people with an interest in health or medicine are aware of its potential. Here, Evan Floden shares what it will take to shore up these exciting possibilities.
According to the Personalised Medicine Coalition, 34 percent of all new drugs approved by the US Food and Drug Administration (FDA) in 2022 were personalised medicines. What’s more, investment in this sector is predicted to rise by 70 percent by 2030.
The reason for all this attention and investment is clear: significant benefits for patients. Tailoring medical care to a patient’s unique genetic makeup, alongside the close analysis of disease progression, leads to more effective treatments, reduced side effects and faster diagnoses. Ending the one-size-fits-all approach to medicine and embracing the fact that genomic sequencing can facilitate a deeply individualised approach to treatment and care is a revolutionary step.
However, this enthusiasm overlooks one key issue – the technological barriers that must be overcome to make personalised medicine widely available. As any personalised medicine advocate will tell you, treatments that require genomic sequencing and analysis of each individual patient’s data quickly lead to spiralling costs. What’s more, it remains an open question whether our healthcare facilities are currently equipped to deliver such treatment programmes at scale. To facilitate the personalised medicine revolution, and to ensure patients have access to its numerous benefits, investment in technological infrastructure and processing capabilities must keep pace with the clinical innovation it can unlock.
The technology behind the revolution
Genomics analysis and genetic sequencing often sit at the heart of personalised medicine. In fact, an NHS England board paper1 noted that genomic medicine services have laid the foundation to deliver personalised medicine and treatments, and that the mainstreaming of genetic testing into routine practice will make genomics of central importance to clinicians across the country.
Yet, while advanced genetic sequencing and analysis make possible a wide range of treatment options that previously did not exist, such sophistication comes with technological barriers.
Personalised medicine involves, almost by definition, the consolidation of extensive patient data: genomic information, health records and other external resources. Analysis of such rich data pools can lead to more robust clinical outcomes; however, the process is complex and costly, and interoperability issues between different healthcare systems and data standards create significant challenges.
What’s more, establishing the clinical utility of genetic markers and treatment strategies is a time-consuming and resource-intensive process. Many personalised medicine techniques are prohibitively expensive because sequencing genomes, proteomes and transcriptomes of individual patients is an immense task, especially when only one genetic marker or mutation is being investigated. Such resources are not always available, or available at the scale necessary for the personalised medicine revolution to unfold.
Facilitating the revolution
For such benefits to be truly widespread, we require advanced technological infrastructure that is responsive to this challenge. Fortunately, significant steps to meet it are already underway.
Firstly, scalable computing infrastructure is essential to accommodate expanding datasets, algorithms and applications. Repeatable data pipelines that point to potential cures based on an individual’s genomic data are crucial to cutting the cost of personalised medicine treatments. Such pipelines are increasingly available via open-source platforms. Whether in small development labs or at major global R&D organisations, repeatable pipeline infrastructure is being shared across the scientific community, cutting down both the time it takes to discover a treatment and the associated costs.
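To make the idea of a repeatable pipeline concrete, the sketch below (plain Python, purely illustrative; the function names and caching scheme are assumptions, not the API of any specific workflow platform) shows the core trick such systems rely on: fingerprinting a step’s inputs and parameters so that an identical rerun is served from cache rather than recomputed, which is what makes large analyses both reproducible and cheaper to repeat.

```python
import hashlib
import json

def fingerprint(step_name, params, input_data):
    """Derive a deterministic ID for a pipeline step from its name, parameters and inputs."""
    payload = json.dumps(
        {"step": step_name, "params": params, "input": input_data},
        sort_keys=True,  # stable ordering so the same inputs always hash the same way
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

def run_step(step_name, params, input_data, cache):
    """Run a step once per unique (params, input) combination; reuse cached results."""
    key = fingerprint(step_name, params, input_data)
    if key in cache:
        return cache[key], True  # cache hit: no recomputation needed
    # Placeholder for real work, e.g. invoking an aligner or variant caller.
    result = {"step": step_name, "n_records": len(input_data)}
    cache[key] = result
    return result, False

cache = {}
reads = ["ACGT", "TTGA", "CCAT"]  # toy stand-in for sequencing reads
first, hit1 = run_step("align", {"genome": "GRCh38"}, reads, cache)
second, hit2 = run_step("align", {"genome": "GRCh38"}, reads, cache)
print(hit1, hit2)  # the first run computes; the identical rerun comes from cache
```

Real workflow engines apply the same principle at scale, hashing inputs per task so that only the steps affected by a changed parameter or dataset are re-executed.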
Next, data orchestration tools are becoming more streamlined and automated, making it easier for researchers to manage and process large datasets without requiring extensive programming skills. This has gone hand-in-hand with the adoption of cloud computing in the life sciences R&D space. Cloud computing provides scalable and cost-effective storage and processing power for the immense volumes of genomic and healthcare data. It also ensures healthcare institutions can securely store and efficiently manage this data, making it accessible to researchers and clinicians for analysis and decision-making.
This technological step-change has already yielded significant benefits over the past five years. Estimates suggest that up to forty exabytes of genomics data2 will be produced in the next decade and, with the advent of specialised technology, sequencing a genome may soon take less than 24 hours instead of the current average of two to eight weeks.3
Similar progress has been made on the cost of genomic sequencing – a process at the heart of personalised medicine. Twenty years ago, it was unimaginable to sequence patients repeatedly throughout their lives, but recent innovations indicate that the cost can potentially be lowered to as little as $200.4 The potential impact on patient outcomes is colossal, but further advances must be made to render such therapies truly accessible.
The advances already made in our technological infrastructure cannot be overstated, but the job is not done. To ensure that more patients can reap the benefits of personalised medicines, the impressive investment on the clinical side must go hand in hand with a focus on continually improving and optimising our data processing and analytics capabilities.
- Creating a genomic medicine service to lay the foundations to deliver personalised interventions and treatments [Internet]. Available from: https://www.england.nhs.uk/wp-content/uploads/2017/03/board-paper-300317-item-6.pdf
- Genomic Data Science Fact Sheet [Internet]. Genome.gov. Available from: https://www.genome.gov/about-genomics/fact-sheets/Genomic-Data-Science
- Navarro FCP, Mohsen H, Yan C, et al. Genomics and data science: an application within an umbrella. Genome Biology [Internet]. 2019 May 29;20(1). Available from: https://genomebiology.biomedcentral.com/articles/10.1186/s13059-019-1724-1
- Wosen J. Upstart Element ratchets up race for cheaper DNA sequencing with a $200 genome [Internet]. STAT. 2023 [cited 2023 Nov 15]. Available from: https://www.statnews.com/2023/01/11/element-dna-sequencing/
About the author
Evan Floden, Seqera
Evan Floden is CEO and co-founder of Seqera and the open-source project Nextflow. He holds a Doctorate in Biomedicine from Universitat Pompeu Fabra (ES) for the large-scale deployment of analyses and is the author of 14 peer-reviewed articles.