AI programs largely recalled known data and hardly learned specific chemical interactions when predicting drug potency
Artificial intelligence (AI) is on the rise. Until now, however, AI applications have generally had a “black box” character: how the AI arrives at its results remains hidden. Prof. Dr. Jürgen Bajorath, a cheminformatics scientist at the University of Bonn, and his team have developed a method that reveals how certain AI applications in pharmaceutical research work. The results are unexpected: the AI programs largely remembered known data and hardly learned specific chemical interactions when predicting drug potency. The findings have now been published in Nature Machine Intelligence.
Which drug molecule is most effective? Researchers are feverishly searching for efficient active substances to combat diseases. These compounds often dock onto proteins, which are usually enzymes or receptors that trigger a specific chain of physiological actions. In some cases, certain molecules are also intended to block undesirable reactions in the body, such as an excessive inflammatory response. Given the abundance of available chemical compounds, at first glance this research is like searching for a needle in a haystack. Drug discovery therefore attempts to use scientific models to predict which molecules will best dock to the respective target protein and bind strongly. These potential drug candidates are then investigated in more detail in experimental studies.
Since the advance of AI, drug discovery research has also been increasingly using machine learning applications. “Graph neural networks” (GNNs) provide one of several opportunities for such applications. They are adapted to predict, for example, how strongly a certain molecule binds to a target protein. To this end, GNN models are trained with graphs that represent complexes formed between proteins and chemical compounds (ligands). Graphs generally consist of nodes representing objects and edges representing relationships between nodes. In graph representations of protein-ligand complexes, edges connect either only protein nodes or only ligand nodes, representing their respective structures, or they connect protein and ligand nodes, representing specific protein-ligand interactions.
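To make this graph representation concrete, here is a minimal sketch in Python (using networkx) of a protein-ligand complex graph with the two kinds of edges described above: intramolecular edges for the protein and ligand structures, and intermolecular edges for protein-ligand interactions. The atom names, bonds, and interactions are illustrative placeholders, not data from the study.

```python
# Minimal sketch of a protein-ligand complex graph; all inputs are placeholders.
import networkx as nx

def build_complex_graph(protein_atoms, ligand_atoms,
                        protein_bonds, ligand_bonds, interactions):
    """Edges are either intramolecular (protein or ligand structure)
    or intermolecular (protein-ligand interactions)."""
    g = nx.Graph()
    for atom in protein_atoms:
        g.add_node(("protein", atom), part="protein")
    for atom in ligand_atoms:
        g.add_node(("ligand", atom), part="ligand")
    for a, b in protein_bonds:
        g.add_edge(("protein", a), ("protein", b), kind="protein")
    for a, b in ligand_bonds:
        g.add_edge(("ligand", a), ("ligand", b), kind="ligand")
    for p, l in interactions:  # e.g. hydrogen bonds or close contacts
        g.add_edge(("protein", p), ("ligand", l), kind="interaction")
    return g

# Toy example: two protein atoms, two ligand atoms, one interaction edge.
g = build_complex_graph(
    protein_atoms=["N1", "O1"], ligand_atoms=["C1", "O2"],
    protein_bonds=[("N1", "O1")], ligand_bonds=[("C1", "O2")],
    interactions=[("O1", "C1")],
)
print(nx.get_edge_attributes(g, "kind"))
```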
“How GNNs arrive at their predictions is like a black box we can’t glimpse into,” says Prof. Dr. Jürgen Bajorath. The cheminformatics researcher from the LIMES Institute at the University of Bonn, the Bonn-Aachen International Center for Information Technology (B-IT) and the Lamarr Institute for Machine Learning and Artificial Intelligence in Bonn, together with colleagues from Sapienza University in Rome, has analyzed in detail whether graph neural networks actually learn protein-ligand interactions to predict how strongly an active substance binds to a target protein.
How do the AI applications work?
The researchers analyzed a total of six different GNN architectures using their specially developed “EdgeSHAPer” method and a conceptually different methodology for comparison. These computer programs “screen” whether the GNNs learn the most important interactions between a compound and a protein and thereby predict the potency of the ligand, as intended and anticipated by researchers, or whether the AI arrives at its predictions in other ways. “The GNNs are very dependent on the data they are trained with,” says the first author of the study, PhD candidate Andrea Mastropietro from Sapienza University in Rome, who conducted a part of his doctoral research in Prof. Bajorath’s group in Bonn.
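EdgeSHAPer, introduced by the authors in earlier work, attributes a GNN prediction to individual edges of the graph using Shapley values from game theory. The following is only a schematic Monte Carlo sketch of that general idea, not the published implementation; the predict callable, the toy graph, and the sampling parameters are assumptions made for illustration.

```python
# Schematic Monte Carlo estimate of per-edge Shapley values, assuming a
# predict(edges) callable that scores a graph built from any subset of edges.
# Illustrates the idea of Shapley-value edge attribution; not the EdgeSHAPer code.
import random

def shapley_edge_importance(edges, predict, n_samples=200, seed=0):
    rng = random.Random(seed)
    contrib = {e: 0.0 for e in edges}
    for _ in range(n_samples):
        order = edges[:]
        rng.shuffle(order)                 # random permutation of edges
        included = []
        prev = predict(included)
        for e in order:
            included.append(e)
            cur = predict(included)
            contrib[e] += cur - prev       # marginal contribution of edge e
            prev = cur
    return {e: v / n_samples for e, v in contrib.items()}

# Toy surrogate model: the score depends only on one "interaction" edge,
# so that edge should receive essentially the entire attribution.
edges = [("p1", "p2"), ("l1", "l2"), ("p2", "l1")]
def toy_predict(subset):
    return 1.0 if ("p2", "l1") in subset else 0.0

print(shapley_edge_importance(edges, toy_predict))
```

In this toy surrogate, the attribution concentrates entirely on the single edge that drives the score; applied to a trained GNN, the same kind of analysis shows which protein, ligand, or interaction edges a prediction actually relies on.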
The scientists trained the six GNNs with graphs extracted from structures of protein-ligand complexes for which the mode of action and binding strength of the compounds to their target proteins were already known from experiments. The trained GNNs were then tested on other complexes. The subsequent EdgeSHAPer analysis made it possible to understand how the GNNs generated apparently promising predictions.
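For readers who want to picture this train-then-test workflow, here is a minimal sketch of fitting a GNN to regress binding strength from complex graphs, using PyTorch Geometric. The architecture, the random placeholder graphs, and the hyperparameters are assumptions for illustration and do not reproduce the six architectures or the data used in the study.

```python
# Minimal sketch: train a GNN on "known" complexes, then test on held-out ones.
# Graphs, features, and labels are random placeholders.
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool

class AffinityGNN(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, data):
        x = self.conv1(data.x, data.edge_index).relu()
        x = self.conv2(x, data.edge_index).relu()
        x = global_mean_pool(x, data.batch)   # one vector per complex
        return self.out(x).squeeze(-1)        # predicted potency

def random_complex(num_nodes=10, in_dim=16):
    edge_index = torch.randint(0, num_nodes, (2, 20))
    return Data(x=torch.randn(num_nodes, in_dim),
                edge_index=edge_index, y=torch.randn(1))

train_set = [random_complex() for _ in range(64)]   # complexes with known potency
test_set = [random_complex() for _ in range(16)]    # held-out complexes

model = AffinityGNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):
    for batch in DataLoader(train_set, batch_size=8, shuffle=True):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(batch), batch.y.view(-1))
        loss.backward()
        opt.step()

with torch.no_grad():
    for batch in DataLoader(test_set, batch_size=16):
        mse = torch.nn.functional.mse_loss(model(batch), batch.y.view(-1))
        print("test MSE:", mse.item())
```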