The global AI in drug discovery market is expected to reach US$24.7 billion by 2029, growing at roughly 53.3 per cent annually. But AI is not a healthcare panacea, and treating it as a cure-all only creates new and more complex challenges as the technology advances. Generative AI, for example, has flooded the public consciousness, with tools like ChatGPT and Midjourney dominating mainstream tech conversations. But is the technology ready for healthcare applications?
The promise of generative AI
Generative AI techniques have the potential to significantly accelerate drug discovery by suggesting novel compounds, optimising molecular properties, and expanding chemical libraries. By leveraging machine learning and advanced algorithms, researchers can explore a vast space of chemical combinations and uncover potential drug candidates that traditional methods might miss.
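At its core, this is a generate-and-score loop: propose candidate structures, score them against a target property, and keep the most promising ones for the library. The deliberately toy sketch below illustrates only that loop; real systems use learned generative models and chemistry toolkits such as RDKit, and the random string "molecules" and scoring function here are hypothetical placeholders.

```python
# Toy sketch of a generate-and-score loop for molecular design.
# Candidate "molecules" and the property score are placeholders, not real chemistry.
import random

ATOMS = ["C", "N", "O", "F", "S"]

def propose_candidate(length=8):
    """Propose a toy molecule as a random atom sequence (stand-in for a generative model)."""
    return "".join(random.choice(ATOMS) for _ in range(length))

def score(candidate):
    """Hypothetical property score, e.g. a predicted binding affinity."""
    return candidate.count("N") + 0.5 * candidate.count("O")

random.seed(0)
# Generate 1,000 candidates and rank them by the property score.
library = sorted((propose_candidate() for _ in range(1000)), key=score, reverse=True)
print("top candidates:", library[:5])
```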
Insilico Medicine, a clinical-stage AI-driven drug discovery company, launched the world’s largest AI-powered biotechnology research centre in Abu Dhabi this year, specifically to further healthcare applications of generative AI. The company released several AI tools, including PandaOmics (generative AI for target discovery), Chemistry42 (generative AI for drug design), and InClinico (generative AI for clinical trial outcome prediction), collectively called Pharma.AI.
While the Middle East does not have a track record of pharmaceutical drug discovery and development, Dr. Alex Aliper, President of Insilico Medicine, believes the region has the ingredients to leapfrog innovation, with its extensive scientific and technological expertise and opportunities for multi-stakeholder collaboration.
“Just like ChatGPT can take input parameters and produce generated output, our platform can turn scientists’ directions for molecules with specific characteristics into brand new drug candidates that can then be synthesised and developed into new treatments for disease,” says Aliper.
However, AI for health carries an added layer of urgency and a higher bar for accuracy, because bias could set research back instead of propelling it forward. “The trouble is that machine learning mainly focuses on prediction when what we need to recover is the truth,” says Kun Zhang, Director of the Center for Integrative Artificial Intelligence and MBZUAI’s Associate Professor of Machine Learning. “The system has to be infinitely more flexible and deliver the true relationships between genes to provide meaningful and accurate information.”
Privacy, accuracy and regulations
According to the US National Institute of Standards and Technology (NIST), bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which these tools are used.
“Context is everything,” says Reva Schwartz, Principal Investigator for AI Bias and one of NIST’s report authors. “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organisations emphasised this point.”
The report is sector and industry agnostic, but it recommends that organisations engage in high-quality data curation, ensure that data sources are diverse, and cross-validate models against a variety of overlapping datasets to reduce the risk of bias. To reduce bias further and increase model accuracy, organisations should also collaborate on standardised methods for data collection and sharing, and embrace transparent documentation of data processing.
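One practical way to act on that recommendation is to train a model on each data source and evaluate it on every other source, so that performance gaps between sources surface as a proxy for dataset bias or shift. The minimal sketch below assumes this setup; the dataset names, features, and labels are synthetic placeholders.

```python
# Cross-dataset validation sketch: train on one source, test on the others,
# and report performance gaps that may indicate dataset bias or shift.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def cross_source_report(sources):
    """sources: dict mapping a source name to an (X, y) tuple."""
    names = list(sources)
    for train_name in names:
        X_train, y_train = sources[train_name]
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        for test_name in names:
            if test_name == train_name:
                continue
            X_test, y_test = sources[test_name]
            auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
            print(f"train={train_name:<12} test={test_name:<12} AUC={auc:.3f}")

# Hypothetical usage with three independently curated assay datasets (synthetic data).
rng = np.random.default_rng(0)
fake = lambda n: (rng.normal(size=(n, 16)), rng.integers(0, 2, size=n))
cross_source_report({"assay_lab_a": fake(300), "assay_lab_b": fake(300), "public_set": fake(300)})
```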
Because generative AI depends on enormous volumes of data, privacy concerns arise around the potential misuse of sensitive information. Companies considering generative AI should establish explicit policies, engage with regulators, and create ethical frameworks to guide its use.
To complicate matters further, generative AI models such as LLMs can sometimes “hallucinate” facts and research papers, which can be catastrophic in healthcare. Last year, Meta unveiled its scientific LLM Galactica in a public demonstration, only to take it offline three days later. Using language models with improved dependability and reasoning capabilities might help address this issue. Drug firms can also use expert knowledge and validation procedures, including iterative feedback from subject-matter experts or reinforcement learning with real-world data, to increase the accuracy of generative AI models.
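The shape of such an expert-feedback loop is simple, even if the components are not. The structural sketch below is a hypothetical illustration only: the generator, automated checks, and expert-review step are placeholder objects, not any vendor's actual pipeline.

```python
# Structural sketch of an expert-in-the-loop refinement cycle.
# All objects passed in are hypothetical stand-ins with the methods shown.
def refine_with_expert_feedback(generator, automated_checks, expert_review, rounds=3):
    feedback = []
    for _ in range(rounds):
        candidates = generator.sample(n=100)                         # propose candidate outputs
        plausible = [c for c in candidates if automated_checks(c)]   # filter obvious hallucinations
        labels = expert_review(plausible)                            # experts accept or reject each candidate
        feedback.extend(zip(plausible, labels))
        generator.update(feedback)                                   # e.g. fine-tune or reweight on the feedback
    return generator
```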
Challenges in scaling generative AI
Generative AI in its current iteration does not take causal relationships between datasets into account, and Zhang argues those relationships must be recovered first. “Only then can we begin to entertain the notion that AI can be used to inform new areas of research, the development of new pharmaceuticals, or the treatment of an individual patient,” says MBZUAI’s Zhang. “If you assume, as many researchers do, that there are linear relationships between your variables, this might skew all your results on real problems. On the other hand, if you use flexible models, the learning process will be less efficient. This is why we often say that causal analysis does not scale.”
To address this issue, Zhang’s team is exploring how causal analysis can be scaled to analyse millions of complex relationships. This, in the context of personalised medicine, has the potential to revolutionise drug discovery while avoiding the technology hype cycle.
When using generative AI, researchers must often assume fairly simple and frequently linear relationships between variables, where a change in one variable results in a direct and predictable change in another. With complex health issues, however, variables are intertwined and linked in ways that are hard to predict. As a result, Zhang says, scaling up the understanding of causal relationships is a long-standing challenge across scientific disciplines and machine learning.
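The trade-off Zhang describes can be seen in a small synthetic example: when the true relationship between two variables is nonlinear, a linear model misestimates it, while a flexible model recovers it at greater computational cost. The "dose" and "response" variables below are invented for illustration.

```python
# Synthetic illustration of linear vs. flexible models on a nonlinear relationship.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
dose = rng.uniform(0, 4, size=(2000, 1))
response = np.sin(2 * dose[:, 0]) + 0.1 * rng.normal(size=2000)  # nonlinear ground truth plus noise

linear = LinearRegression().fit(dose, response)
flexible = RandomForestRegressor(n_estimators=300, random_state=0).fit(dose, response)

print("linear R^2:  ", round(r2_score(response, linear.predict(dose)), 3))    # near 0: misses the curve
print("flexible R^2:", round(r2_score(response, flexible.predict(dose)), 3))  # near 1: captures it, but costs more to fit
```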
Generative AI in drug repurposing
Causal links also pose a challenge when generative AI is used to identify new therapeutic applications for existing drugs. Machine learning models can analyse large databases of molecular structures, biological interactions, and pharmacological properties to predict the efficacy and safety of potential drug candidates. By analysing diverse datasets, including clinical trials, electronic health records, and scientific literature, AI algorithms can identify connections between known drugs and previously unexplored diseases. However, this remains largely theoretical.
In 2021, Ohio State University (OSU) released research exploring a machine-learning method to determine whether certain drugs can be repurposed for new uses. “This work shows how AI can be used to ‘test’ a drug on a patient, speed up hypothesis generation, and potentially speed up a clinical trial,” senior study author Ping Zhang, PhD, an Assistant Professor of Computer Science and Engineering and Biomedical Informatics at OSU, said in a news release. “But we will never replace the physician — drug decisions will always be made by clinicians.”
Like MBZUAI’s Zhang, the OSU researchers applied causal inference theory to classify the active-drug and placebo patient groups that would be found in a clinical trial. “With causal inference, we can address the problem of having multiple treatments. We do not answer whether drug A or drug B works for this disease or not, but we figure out which treatment will have the better performance,” OSU’s Zhang says.
“My motivation is applying this, along with other experts, to find drugs for diseases without any current treatment. This is very flexible, and we can adjust it case by case. The general model could be applied to any disease if you can define the disease outcome.”
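A common textbook version of this idea, and not the OSU method itself, is to estimate each patient's propensity to receive one treatment and use inverse-propensity weighting to compare expected outcomes under the two options. The sketch below uses entirely synthetic patients, covariates, and outcomes.

```python
# Inverse-propensity-weighting sketch for comparing two treatments in observational data.
# All data are synthetic; higher outcome values are assumed to be better.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
covariates = rng.normal(size=(n, 5))                                  # e.g. age, labs, comorbidities
treated_a = rng.binomial(1, 1 / (1 + np.exp(-covariates[:, 0])))      # confounded assignment: 1 = drug A, 0 = drug B
# Hypothetical outcomes: drug A helps slightly more than drug B on average.
outcome = 0.5 * covariates[:, 0] + 0.3 * treated_a + 0.1 * (1 - treated_a) + rng.normal(scale=0.5, size=n)

# Propensity model: probability of receiving drug A given covariates.
propensity = LogisticRegression().fit(covariates, treated_a).predict_proba(covariates)[:, 1]

# Inverse-propensity-weighted mean outcome under each treatment.
mean_a = np.sum(treated_a * outcome / propensity) / np.sum(treated_a / propensity)
mean_b = np.sum((1 - treated_a) * outcome / (1 - propensity)) / np.sum((1 - treated_a) / (1 - propensity))
print(f"estimated outcome under A: {mean_a:.3f}, under B: {mean_b:.3f}")
print("prefer A" if mean_a > mean_b else "prefer B")
```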
So far, this approach has not taken root in the Middle East, but its potential for the region is considerable.
While the field is still in its infancy in the region, investment in AI infrastructure, research collaborations, and partnerships between academia, industry, and government bodies can further accelerate the adoption and development of generative AI-based solutions for pharmaceutical R&D.