Most labs want to use AI, but few have the digital foundations to support it. Cenevo’s leaders explain why progress is slow and what laboratories must fix before AI can deliver real value.

Artificial intelligence (AI) has become an unavoidable theme in life sciences. Funding announcements, strategic roadmaps and conference agendas all highlight the transformative potential of machine learning, predictive modelling and natural language interfaces. Yet inside many laboratories the reality is different. Data remains fragmented, automation is uneven and digital maturity varies dramatically among teams and organisations. AI may promise insight, efficiency and speed, but its impact depends entirely on foundations that are not yet in place for most labs.
To understand why AI adoption remains slow in drug discovery, we spoke with Cenevo’s CEO Keith Hale and Chief Product Officer Jonathan Gross, who have unique vantage points on how laboratories work today and where digital transformation is still falling short.
A new integrated ecosystem for the lab
Cenevo brings together two established technologies under one brand, uniting Titian’s Mosaic platform with the Labguru workflow environment to create a single, integrated digital ecosystem. This alignment connects sample management, experimental processes and laboratory data into a coherent, connected framework.
Understated though it may seem, this combination materially changes how laboratory data and processes connect. By merging experimental workflows with sample and inventory operations, Cenevo aims to create a unified digital backbone that allows workflows, metadata, device outputs and decisions to flow without interruption.
Why laboratories are not ready for AI
Cenevo’s research shows that while AI is an increasingly important strategic priority across the sector, most laboratories are still far from ready to use it effectively. Hale, who moved into drug discovery after working in industries with far higher levels of digital maturity, immediately recognised the scale of the gap. In those industries, processes were already highly digitised, which meant AI could be applied directly to rich, structured datasets. In contrast, much of drug discovery still relies on physical materials and devices, as well as manual steps, fragmented systems and limited digital continuity, making it far harder to build AI on top of existing workflows.
When you apply AI to this industry, there are very specific requirements, physical assets, bespoke data and proprietary information. That's one of the reasons why we are at the nascent stage of the application of AI in the industry.
Another challenge is the immaturity of specialist scientific AI. Hale observes: “AI is still quite nascent, early in the technology adoption lifecycle. I'm old enough to have been through the dot-com period, seeing the boom then bust but eventually leading to where we are today in terms of eCommerce and SaaS deployed applications.” He notes that although general-purpose large language models (LLMs) such as ChatGPT, Gemini and Copilot have advanced quickly, life sciences companies require more specific, proprietary and bespoke data sets. Scientific data comes with particular context, constraints and regulatory demands, while generic LLMs lack access to proprietary data sets. As Hale explains, “When you apply AI to this industry, there are very specific requirements, physical assets, bespoke data and proprietary information. That's one of the reasons why we are at the nascent stage of the application of AI in the industry.”
Despite the increasing attention AI receives, both leaders stress that the technology is not a replacement for scientists. Instead, AI augments scientific creativity, enabling teams to work more efficiently, generate robust insights and maintain reproducibility while keeping decision making firmly under human control.
Then there is the challenge the industry recognises but rarely solves: fragmented and inaccessible data. Hale points to the scale of the issue, describing “the silos of data in different systems and instruments, and getting the data out of those devices and instruments – that's a pretty big impediment.” These disconnected sources make it difficult to consolidate information or build reliable automated workflows. Gross adds that the problem is exacerbated by inconsistent or poorly captured metadata, which is vital for interpreting results and linking processes.
With incomplete, inconsistent or inaccessible data, AI cannot produce dependable insight. As Hale summarises candidly, many organisations say they have “lots of data, but not much insight so far.”
Concerns around data governance also shape the pace of adoption. Cenevo’s research indicates that nearly half of laboratories expect regulatory oversight of AI tools and their outputs to increase, reflecting the need for transparency, compliance and trust as new technologies move deeper into scientific workflows.
Mapping what matters: the first step towards automation and AI
Given these challenges, the question becomes: where should laboratories begin? Gross emphasises that meaningful transformation must start with a clear understanding of current workflows and data flows. As he explains, “Map the processes that you're currently having and map the data, map where the data is locked in. Where can we introduce automation? Where can we reduce data lock-ins from vendors? How do we harmonise the data into a data lake? Or provide MCPs into, or allow access to, this data.” (MCP refers to the Model Context Protocol, an open standard for connecting AI assistants to data sources.) His point is that organisations first need a detailed picture of how work is carried out today before they can decide what to automate, where to integrate and which technologies will deliver the most value.
This approach forces teams to focus on outcomes rather than trends. It encourages organisations to prioritise the scientific problems they want to solve, identify the bottlenecks that slow progress and choose technologies deliberately rather than reactively.
Cenevo applies this philosophy directly within its product strategy. Labguru’s AI assistant, now being extended into Mosaic, is designed to address practical problems rather than abstract possibilities. Gross points to features such as context-aware search, tools that flag repeated experiments and support for comparing protocols and outcomes.
Hale describes Cenevo’s approach to AI as operating across two complementary layers. One focuses on improving the scientist’s day-to-day experience within Cenevo’s platforms with capabilities such as faster search, automated report generation and guided agentic workflows. The other supports organisations that are developing their own AI initiatives, providing structured, reliable data from Mosaic and Labguru to feed into corporate data lakes and internal modelling programmes. With 15 of the top 20 pharmaceutical companies using these systems, this dual role has become increasingly significant.
Integration as a differentiator
Where Cenevo stands out most strongly is its depth of integration. Many laboratory informatics systems connect to instruments only through limited flat-file exchanges, which require manual checks and introduce risk. These approaches can store data but cannot automate processes in a controlled, auditable way.
Cenevo positions itself differently, offering real-time device control and automated data flow across more than 150 different types of instruments and technology systems. This alignment ensures that sample data, instrument outputs and experiment metadata remain tightly linked. It also reduces manual hand-offs, eliminates transcription errors and preserves the audit trail needed for regulatory and scientific confidence.
This is particularly important in a sector where digital maturity varies widely. Large pharmaceutical companies may have cloud-based data lakes, AI teams and complex informatics architecture, but small and mid-sized biotechs often lack the resources to build their own integrations, data lakes and AI infrastructure. Cenevo’s unified environment allows each group to operate securely at its own scale while keeping pathways open for future automation and AI deployment.
The value of collaboration
Both leaders describe events such as ELRIG as essential forums for progress. Gross notes that Cenevo uses these gatherings to “share the vision and also share where they are in their AI journey” and to assess whether upcoming developments align with customer needs. These discussions also extend to instrument providers, whose choices around data formats and interfaces have a direct influence on laboratory interoperability.
Ultimately, we're all trying to find drugs that are life-saving as fast as we can and cost-effectively.
Hale notes that the broader market context cannot be overlooked. Funding pressures and wider economic uncertainty have affected organisations across the sector, from emerging biotechs to major pharmaceutical companies. These constraints shape what laboratories can realistically take on, even as they continue to push for faster drug discovery with greater efficiency at lower cost. For Cenevo, the challenge is to help teams make meaningful progress despite these pressures. As Hale puts it, “Ultimately, we're all trying to find drugs that are life-saving as fast as we can and cost-effectively.”
Towards agentic AI
Looking ahead, both leaders foresee a rapid evolution from today’s natural language interfaces towards more autonomous agentic AI systems. These systems will not simply answer questions but perform tasks, manage routine functions and support decision making. To reach that stage, however, laboratories must build the structures that make these systems viable. Cenevo’s research shows that only 15 percent of laboratories are fully digitised and half still depend on manual processes.
Without consistent metadata, harmonised workflows and reliable device integration, agents cannot operate safely or effectively. With those foundations in place, the potential is dramatic: fewer repeated experiments, more insight per dataset and more predictive modelling based on robust information rather than intuition.
A grounded, achievable vision for the AI-enabled lab
Hale and Gross offer a refreshingly pragmatic vision. The industry is not as ready for AI as the hype suggests, but neither is it stuck. With integrated platforms, disciplined data practices and carefully targeted automation, laboratories can make real progress today while preparing for more advanced AI capabilities in the coming years.
AI delivers value only when built on clean data and connected processes. Without that structure, the technology cannot operate reliably. Cenevo focuses on enabling connected laboratories that are automated, data-centric and AI-enabled, establishing the digital and operational foundations that allow laboratories to deploy AI with confidence and achieve measurable results.
About the experts
Keith Hale, Chief Executive Officer, Cenevo
Keith joined Cenevo as Chief Executive Officer in 2024. He has more than 30 years of experience and an impressive track record of growing software and technology businesses, primarily in the FinTech sector.
He co-founded Netik, the financial data management software platform; was EVP and then CEO at Multifonds, a leading fund administration software provider; and most recently, as Chairman and CEO, led the creation of TrustQuay in 2019 and the acquisition of Viewpoint in 2023, creating a leading trust administration and corporate service software provider now called Quantios.
Jonathan Gross, Chief Product Officer, Cenevo
Jonathan Gross is Chief Product Officer at Cenevo. He was the founder and previously CTO of Labguru. Jonathan has over two decades of experience at the intersection of technology, life science and innovation. By harnessing advanced technologies, Jonathan drives business growth through strategic partnerships and a culture of collaboration within the life sciences community. His mission is to empower researchers with cutting-edge solutions that elevate research continuity, ensure regulatory compliance and improve operational efficiency. Jonathan has spearheaded the development of solutions that reimagine how research is documented, shared and analysed.


