Insilico Medicine and Liquid AI have partnered to develop a lightweight artificial intelligence foundation model designed to support multiple stages of drug discovery.

A new partnership between artificial intelligence developers Insilico Medicine and Liquid AI has produced a lightweight scientific foundation model. The model is designed to support multiple stages of pharmaceutical research while operating entirely on private infrastructure.
The companies announced the release of LFM2-2.6B-MMAI (v0.2.1), a model intended to handle a broad range of drug discovery tasks within a single system rather than relying on separate specialised models. According to the partners, it represents a step toward more efficient AI tools that can be deployed inside pharmaceutical organisations without sending proprietary data to external cloud platforms.
Addressing data security in AI-driven research
One of the major barriers to adopting advanced AI systems in pharmaceutical research is the need to protect sensitive data. Drug developers often work with proprietary molecules, assays and biological targets that cannot easily be shared with third-party infrastructure.
The collaboration between Insilico Medicine and Liquid AI aims to address this issue by combining Liquid AI’s efficient large foundation model architecture with Insilico’s MMAI Gym training environment. The platform contains more than 1,000 pharmaceutical benchmarks used to train and evaluate AI systems for drug discovery.
The resulting model can be deployed on internal infrastructure while still delivering performance comparable with much larger cloud-based models.
A single system for the discovery pipeline
The model has been designed to support the full discovery loop within pharmaceutical research. Its capabilities include property prediction and ADMET endpoint analysis, multi-parameter molecular optimisation, target-aware scoring with protein-pocket conditioning, functional group reasoning and retrosynthesis planning.
Training involved approximately 120 billion tokens of pharmaceutical data drawn from more than two hundred distinct tasks across the drug discovery process.
Developers say this approach avoids the need for a patchwork of separate models focused on individual problems, allowing a single system to operate across different stages of discovery.
"With LFM2-2.6B-MMAI, we've shown that efficient architecture design, not just scale, is what makes foundation models practical for the sciences. A single 2.6B-parameter model now matches or outperforms systems ten times its size across the drug discovery pipeline, all on private infrastructure," said Ramin Hasani, CEO and co-founder of Liquid AI. "Our collaboration with Insilico is proof that you can reduce the cost of intelligence while raising the quality bar."
Performance across multiple discovery tasks
Despite having only 2.6 billion parameters, the model reportedly achieves performance comparable with significantly larger systems.
In molecular optimisation benchmarks, the system achieved success rates of up to 98.8 percent on multi-parameter optimisation tasks.
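Success rate on a multi-parameter optimisation benchmark is typically the fraction of generated candidates that satisfy every property constraint at once, rather than any single constraint in isolation. A minimal illustrative sketch with synthetic data (the property names, thresholds and values here are hypothetical examples, not drawn from the benchmark):

```python
# Illustrative sketch: in multi-parameter optimisation, a candidate counts as
# a "success" only if it meets ALL property constraints simultaneously.
# Property names, windows and values below are hypothetical examples.

def meets_all_constraints(props: dict, constraints: dict) -> bool:
    """True if every predicted property falls inside its allowed range."""
    return all(lo <= props[name] <= hi for name, (lo, hi) in constraints.items())

def success_rate(candidates: list, constraints: dict) -> float:
    """Fraction of candidates satisfying every constraint at once."""
    passed = sum(meets_all_constraints(c, constraints) for c in candidates)
    return passed / len(candidates)

# Hypothetical windows: solubility (logS) and lipophilicity (logP).
constraints = {"logS": (-4.0, 0.0), "logP": (1.0, 3.0)}
candidates = [
    {"logS": -2.1, "logP": 2.4},  # passes both
    {"logS": -5.0, "logP": 2.0},  # fails logS
    {"logS": -1.0, "logP": 2.9},  # passes both
    {"logS": -3.0, "logP": 4.2},  # fails logP
]
print(success_rate(candidates, constraints))  # → 0.5
```

A benchmark score such as 98.8 percent means nearly all generated molecules clear every constraint in this all-or-nothing sense.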
In property prediction tests covering pharmacokinetics and toxicology, the model outperformed the 27-billion-parameter TxGemma model on 13 of 22 tasks and achieved state-of-the-art results on three tasks when compared with specialist models designed for individual endpoints.
Internal benchmarks from Insilico Medicine also showed improved correlation scores in affinity prediction compared with several frontier AI models across a dataset containing 2.5 million experimental measurements and 689 protein targets.
Additional testing demonstrated strong performance in chemical reasoning, functional group analysis and single-step retrosynthesis prediction.
Potential applications in pharmaceutical R&D
The partners say the model could be used immediately in several areas of pharmaceutical research, including high-frequency ADMET screening, medicinal chemistry lead optimisation and retrosynthesis planning aimed at reducing wasted experimental work.
"We are pleased to collaborate with Liquid AI to develop the next generation of lightweight liquid foundation models capable of performing multiple scientific tasks with state-of-the-art performance across drug discovery benchmarks," said Alex Zhavoronkov, CEO of Insilico Medicine. "Highly efficient liquid science models will make it easier for more scientists to achieve their goals in order to compress discovery timelines and ultimately help patients."


