
Model mayhem

Martin Karplus of Harvard, Michael Levitt of Stanford and Arieh Warshel of USC were awarded the Nobel Prize in Chemistry in 2013 for thinking “what if we put peanut butter on our chocolate?” Or the super nerd version of that anyway: What if we use COMPUTERS to visualize and manipulate chemicals and other molecules?

While chemistry was fairly mature at that point, computers were in their infancy and techniques for visualizing molecular structure were tedious and limited. It was thus a laudable goal and effort by these three to envision molecular modeling and to stick with that vision through many years of painstakingly slow progress. That’s not to say they actually accomplished much: Their main approach, molecular dynamics, remains an extremely limited technique that hasn’t done much more than pleasingly shake molecules on the computer, even with today’s immense computing power. Indeed, while algorithms and software packages promising to hand biochemistry over to the machines have exploded and become ubiquitous, it turns out that even the simplest molecular calculations remain unsolved.

For drug development, one of the most crucial is the simple ability to have the computer predict the real biochemical affinity of a drug for its target, given the structures of both. Seems simple, right? Just ask the computer to fit the drug into the target, measure how well it fits, and that measurement should correlate with the binding affinity measured in a test tube. But no. After all these years, this still cannot be done to the point that we can blindly trust the computer’s assessment for any random drug and target. Notably, that’s why one hears little about an experiment most pharmacologists think about all the time: Can you predict which proteins encoded by the human genome will be bound by my drug by docking it to all of them? That’s because you won’t be able to predict affinity and tell the real dockings from the false positives. So, despite the Nobel, much of the promise in this area remains a pipe dream.
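To make the problem concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of the sanity check implied above: if docking scores really captured binding, they would correlate strongly with affinities measured in the test tube. In practice, correlations like this tend to come out weak and inconsistent across targets.

```python
# Hypothetical sanity check: do docking scores track measured affinity?
# All numbers below are illustrative, not real data.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical docking scores for 8 compounds (more negative = better fit)
docking_scores = np.array([-9.1, -8.4, -7.9, -7.5, -7.2, -6.8, -6.1, -5.5])

# Hypothetical measured affinities as pKd (higher = tighter binding)
measured_pkd = np.array([6.2, 8.9, 5.1, 7.4, 6.8, 5.9, 7.0, 4.8])

# Flip the sign so "better score" and "tighter binding" point the same way
r, p = pearsonr(-docking_scores, measured_pkd)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```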

Nevertheless, some in the field have taken a scientific approach: blinded challenges for the users and makers of the software lauded in the Nobel. In this kind of experiment, affinity is measured reliably in the test tube for drugs and targets whose 3D structures and compositions are known. That affinity is kept secret, and a challenge is put out for anyone to try to predict it. Contestants predict it by computer-modeled physical chemistry (docking) and/or by so-called “knowledge-based approaches,” in which a table of chemicals similar to the drug, with known affinities, is used to extrapolate an affinity for the drug. Since the contestants do not know the answer, key biases are eliminated, and demonstrations of consistently accurate prediction are likely to be real successes in affinity prediction. An important caveat is the method of evaluation, which is often a ranking of who got closest. That can be problematic: Ranking everyone on Earth by their distance to the Moon will yield a single person who is closest. That doesn’t mean that person is actually close to the Moon.
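The Moon analogy is easy to demonstrate with a toy calculation. In the hypothetical scoring below, group_A “wins” the ranking, yet its errors of roughly 3 pKd units correspond to about a 1000-fold error in Kd: closest to the Moon, but nowhere near it.

```python
# Toy illustration of the ranking caveat: the best-ranked predictor
# can still be far off in absolute terms. All values are made up.
import numpy as np

true_pkd = np.array([7.0, 8.5, 6.2, 9.1])       # the "secret" measured affinities

predictions = {
    "group_A": np.array([4.1, 5.0, 3.9, 5.8]),  # badly off
    "group_B": np.array([3.8, 4.6, 3.2, 5.1]),  # slightly worse
}

# Rank contestants by root-mean-square error, as challenges often do
for name, pred in predictions.items():
    rmse = np.sqrt(np.mean((pred - true_pkd) ** 2))
    print(f"{name}: RMSE = {rmse:.2f} pKd units")

# group_A tops the ranking with an RMSE of ~3 pKd units, which is still
# roughly a 1000-fold error in Kd.
```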

The recent D3R Grand Challenge 3 did exactly this experiment for affinity prediction. Interestingly, one group consistently ranked at or near the top of the predictions for every target they entered. More intriguingly, some of their affinity predictions were actually close to the real values. Most intriguingly, their method involved a previously unheard-of fusion of docking and QSAR, a knowledge-based approach (a generic sketch of such a hybrid appears below). Maybe our dream of scanning the druggable genome for the full polypharmacologic ensemble of a drug or drug candidate using a computer is coming true! Now if only we could project that ensemble onto body tissues to visualize the complex, in vivo mechanism of action of drugs and drug candidates. Oh wait . . .
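The winning group’s actual recipe isn’t spelled out here, so the following is only a generic sketch of what a docking/QSAR hybrid could look like: feed a physics-based docking score and knowledge-based similarity features into a single regression model. Every name and number is hypothetical.

```python
# Generic sketch of a docking + QSAR hybrid (NOT the challenge winners'
# actual method): combine a physics-based docking score with
# knowledge-based similarity features in one regression. Hypothetical data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 40  # hypothetical training compounds with measured pKd

docking_score = rng.uniform(-10, -5, n)   # physics-based feature
nn_similarity = rng.uniform(0.3, 1.0, n)  # Tanimoto similarity to nearest known ligand
nn_pkd = rng.uniform(5, 9, n)             # that nearest neighbor's measured pKd

X = np.column_stack([docking_score, nn_similarity, nn_pkd])
y = 0.5 * -docking_score + 0.4 * nn_pkd + rng.normal(0, 0.3, n)  # toy affinities

model = Ridge(alpha=1.0).fit(X, y)

# Predict for a new compound: dock it, find its nearest known ligand, combine.
new_compound = np.array([[-8.2, 0.85, 7.6]])
print(f"predicted pKd: {model.predict(new_compound)[0]:.2f}")
```

The design point is simply that the two information sources are complementary: docking sees the target structure, while QSAR sees the accumulated affinity data for related chemicals.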
