Wigner lecturer: Martin Karplus

Martin Karplus is co-recipient of the 2013 Nobel Prize in Chemistry, awarded for the development of multiscale models for complex chemical systems.

A native of Austria, Karplus received a bachelor’s degree from Harvard College and a Ph.D. from Caltech before holding faculty positions at the University of Illinois, Columbia University and Harvard University. He is the Theodore William Richards Professor of Chemistry, Emeritus, at Harvard and director of the CNRS/Strasbourg University Biophysical Chemistry Laboratory.

Over the years, Karplus has conducted research in many areas of computational and theoretical chemistry and biochemistry and has presented his results in over 800 journal articles and book chapters, as well as two books. His work since the 1970s has focused mainly on the use of theory and high-performance computers to understand biomolecular structure, dynamics and function.

Karplus delivered the Eugene P. Wigner Distinguished Lecture November 7, 2017, on the topic “What Does the Future Hold?” This is an edited transcript of our conversation following his lecture.

You were using high-performance computing to solve questions in chemistry as early as the 1950s. How has supercomputing contributed to your work and the discoveries you’ve made over the years?

It’s obvious that what was a supercomputer in the 1950s—with 3 mega ops [3 million calculations per second, or 9 billion times slower than ORNL’s current Titan system] or even less—was limiting what we could do in terms of studying complex systems in biology, proteins and such, which is my primary interest. And, as the computers have become faster and faster following Moore’s law, we have been able to do more and more things and been able to look at problems that in the 1950s we knew were there, but there was no way of doing them with the computing power that was available.

How have the attitudes of experimentalists and theorists toward numerical science evolved in the intervening decades?

It depends, of course, on what area you are in. In physics, theory and numerical methods have been accepted for much longer than in chemistry, and likewise longer in chemistry than in biology. As I said in my lecture, in the first studies we did of how proteins’ internal motions occur, the chemists—I am a chemist officially, though I’ve been working in biological areas for much of my life—the chemists said we can’t even look at the very simplest molecules and understand what’s going on. To look at a protein makes no sense. And the biologists said, even if we could do it, it wouldn’t be of any interest.

Since then things have evolved, and I think probably a very important element has been that there was a Nobel Prize in this area, so people said, well, if there’s a Nobel Prize, it must be good for something. And I think in the last 10 years or so, there’s been a real revolution in terms of the acceptance of computing results by experimentalists.

In the 1980s you were involved in the first work combining high-performance computing simulations with neutron scattering to understand motions in proteins. What role does simulation play in interpreting experiments on biological molecules?

Actually, it’s not quite true. I was involved indirectly. [University of Tennessee–ORNL Governor’s Chair for Molecular Biophysics] Jeremy Smith, who is on the staff here, has been trying to use neutron scattering to look at the motion of biomolecules, and he worked with me as a postdoctoral fellow and brought this project of neutron scattering to Harvard.

Interpretation of experimental results based on computations is a routine part of trying to understand what is going on in biology. One of the points is that crystallographers may be able to determine the structure of a protein under one condition and under another condition, and the two structures are different. So they determine these two end points, but what one needs is computations to interpolate between them. And that’s being accepted now.

Experimentalists are, in fact, beginning to use the techniques that we developed. We have a program, CHARMM [for Chemistry at Harvard Macromolecular Mechanics], which is widely used, and the experimentalists are now doing their own calculations rather than waiting for us to do them in many cases.

I might mention that we were looking for a name for our program and a student, Bob Bruccoleri, suggested the name HARMM—for Harvard Macromolecular Mechanics—and I said that didn’t really sound very good. So I added the “C” for chemistry at Harvard. But now I sometimes wonder whether having kept the old name wouldn’t have been a good idea, because it would warn people that you can’t just run the program and use the results without realizing what the limitations are of the computations that you’re doing.

Why was it important to visit ORNL, meet with researchers here, and participate in the Wigner Lecture Series?

The people here will have to decide whether or not it was important. I’m a great admirer of Wigner and felt that having the opportunity to expose the idea of what we will be able to do in the future with computational methods, as supercomputers become faster and faster, would be a worthwhile thing to do. My lecture was titled “What Does the Future Hold?” And, as I realized, the whole 25th anniversary meeting [celebrating the Oak Ridge Leadership Computing Facility] was really directed not so much toward the past as toward trying to understand what one would be able to do in the future. So I think my take on when we will be able to do which studies in which years, the prediction I made in my talk, was a very worthwhile contribution to this celebration of supercomputing here.