Nobel Prize: Chemistry

Computational chemists

Print edition: November 29, 2013

Michael Levitt of Stanford University School of Medicine. Photo: handout/AFP

Arieh Warshel of the University of Southern California. Photo: DAVID MCNEW/AFP

Martin Karplus of Harvard University. Photo: DOMINICK REUTER/REUTERS

Fig. 1. In today's computational models of chemical processes, calculations are based on quantum physics at the core of the system. Further away from the action, they are based on classical physics, and at the outermost layers, atoms and molecules are even lumped together in a homogeneous mass. This is like a digital image (right) processed at different pixel resolutions; higher resolutions at the centre and lower resolutions on the periphery.

Fig. 2. The mirror symmetric molecule 1,6-diphenyl-1,3,5-hexatriene studied by Martin Karplus and Arieh Warshel.

Fig. 3. To understand enzyme action, Levitt and Warshel investigated how lysozyme cleaves a glycoside chain. They modelled only the relevant parts of the system using quantum chemistry and treated most of the surrounding using classical models.

Three American scientists share the Nobel Prize in Chemistry for developing a multi-scale modelling approach that can simulate all kinds of chemical processes, from molecules of life to industrial processes to drugs for optimal health care.

RAPID developments in chemistry and biochemistry in the past five decades or so have meant a paradigm shift in the way chemists do research today. The focus of present-day research is more on function than on structure. In earlier days, chemists used to model molecules using plastic balls and sticks. The dawn of the digital age has paved the way for structure modelling with computers aided by vastly improved crystallographic and spectroscopic techniques.

When computers began to be applied in the 1960s, calculations could be done only for simple molecular systems. Methods of X-ray crystallography, or nuclear magnetic resonance (NMR) spectroscopy, and associated techniques to enable the determination of structures of proteins and other large biomolecules evolved in the 1970s. Although enormous progress had been made in the field, these techniques could not provide enough experimental data to determine uniquely the structures of molecules, particularly those in biological systems. These had to be supplemented with theoretical modelling of interactions between atoms in the studied system.

Even so, the approach largely yielded only a static view of the molecules and could not provide any information on the dynamic state of a chemical reaction involving electrons and atoms in motion and the making and breaking of bonds. Now, with improved computer-modelling software, algorithms and techniques and the availability of high-speed computing systems, chemists seek answers for “how does it happen?” rather than “what does it look like?” They want to be able to simulate and visualise every tiny step involved in chemical reactions, especially those involving large molecules such as proteins and drugs.

But chemical reactions occur at lightning speeds. Electrons jump from one atomic state to another in a tiny fraction of a second. So the tiny steps involved in a chemical reaction cannot all be directly observed through experiments. The advent of spectroscopy techniques using ultrashort laser pulses to study chemical reactions at extremely short time scales—nanoseconds (10⁻⁹ second, a billionth of a second) to femtoseconds (10⁻¹⁵ second, a millionth of a billionth of a second)—has enabled the probing of very fast chemical processes, but these techniques still cannot provide a complete enough picture of reactions to make predictions and optimise processes such as catalysis, enzyme reactions and drug action.

Chemists today use the method of computational chemistry to understand aspects of chemical processes that cannot be investigated experimentally, and the computer has become a ubiquitous tool. Today, chemists are able to simulate chemical reactions to visualise the various extremely transient intermediate steps of a catalytic process or a host of biochemical processes.

To a large extent, this has become possible because of the methods developed in the 1970s by the trio of this year’s Nobel laureates in Chemistry. They pioneered computer-based modelling tools to study molecules in motion in a chemical reaction. The three naturalised American scientists—the 83-year-old Austria-born Martin Karplus of Université de Strasbourg, France, and Harvard University, United States; the 66-year-old South Africa-born Michael Levitt of Stanford University School of Medicine; and the 73-year-old Israel-born Arieh Warshel of the University of Southern California—have been chosen for the Nobel award “for”, as the citation says, “the development of multi-scale models for complex chemical systems”. That is, the methods developed by the three are useful for simulating the behaviour of compounds at various scales, from single molecules to large biomolecules such as proteins.

As the blogger-chemist Ashutosh Jogalekar says on Scientific American’s blog site: “It enables chemists like me to calculate a variety of things, from the rates of chemical reactions and the stability of molecules to the probability that a drug will block a crucial protein implicated in a disease.”

Significantly, Jogalekar further points out that the prize recognises a field rather than a particular work of an individual. “It really tells you how pervasive modelling and calculations have become in solving all kinds of chemical problems. In addition, for all three chemists, it is really a life-time achievement award rather than one for a specific discovery,” he adds.

In the mid-1940s, scientists did try to simulate chemical reactions using theoretical models for the forces of interaction between molecules—classical, empirical ones such as Coulomb and van der Waals potentials—and calculated how molecules moved and reacted and how molecular structures formed. This approach dates back to 1946 and was pioneered by scientists such as F.H. Westheimer and J.E. Mayer. Such theoretical modelling, based on classical physics, enabled investigations into only the simplest of molecules.

Classical models

Classical physics is applicable only at scales greater than a nanometre (a billionth of a metre). At nanoscale and lower dimensions, quantum physics comes into play. Roughly, the size of an atom is about a tenth of a nanometre; that is, classical physics can be applied to a group of 10 atoms or more in a molecule but not down to the atomic scale. In the early days of computers, the same classical approach was built into the computer code to simulate somewhat more complex chemical processes. Even today, this is the basic theoretical model that is used in computer simulations. Classical models treat atoms and bonds in molecules as balls connected by springs, which chemists call the “force-field approach”.
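The "balls connected by springs" picture can be made concrete with a few lines of code. The sketch below is purely illustrative (the function names, parameter values and units are invented for this example, not taken from any real force-field package): each chemical bond is a spring, and stretching it away from its natural length costs energy.

```python
# Illustrative "balls and springs" force-field sketch (hypothetical values).
# Each bond is modelled as a harmonic spring with energy
#   E_bond = 0.5 * k * (r - r0)**2
# where r is the current bond length, r0 the equilibrium length,
# and k the stiffness of the spring.

def bond_energy(r, r0=1.53, k=300.0):
    """Harmonic bond-stretch energy in arbitrary units."""
    return 0.5 * k * (r - r0) ** 2

def total_energy(bond_lengths, r0=1.53, k=300.0):
    """Sum the spring energies over every bond in the molecule."""
    return sum(bond_energy(r, r0, k) for r in bond_lengths)

# A bond at its equilibrium length costs nothing; a stretched bond does.
print(bond_energy(1.53))              # 0.0
print(round(bond_energy(1.60), 3))    # 0.735
```

Real force fields add further spring-like terms for bond angles and torsions plus non-bonded interactions, but each term is a cheap classical formula of this kind, which is why such models scale to very large molecules.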

Norman Allinger developed one of the first such computer codes in the mid-1960s. He went on to develop a set of molecular mechanics methods called MM1, MM2, and so on, and used computers to optimise the structure of molecules by minimising the energy of the system.
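The idea of "optimising a structure by minimising the energy" can be sketched as follows. This is a toy illustration of the general principle, not Allinger's actual MM code: start from a guessed geometry and repeatedly move downhill on the energy surface until the forces vanish.

```python
# Toy energy minimisation for a single harmonic bond (illustrative only).
# E(r) = 0.5 * k * (r - r0)**2, so the gradient is dE/dr = k * (r - r0),
# which is minus the force acting on the bond.

def minimise(r, r0=1.53, k=300.0, step=1e-3, iters=5000):
    """Gradient descent: nudge the bond length downhill in energy."""
    for _ in range(iters):
        grad = k * (r - r0)   # dE/dr at the current geometry
        r -= step * grad      # move so as to lower the energy
    return r

# Starting from a badly stretched bond, minimisation recovers the
# equilibrium length r0 = 1.53.
print(round(minimise(2.0), 3))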

But the real action in a chemical process happens at atomic scales, where the energy state of the system changes and molecules are formed or destroyed. Chemical reactions are characterised by the transition state, the highest-energy configuration along the lowest-energy path linking the reactants with the products. But this transition state is not accessible to experiments, and classical physics has no explanation for it. Scientists had to turn to quantum physics to simulate actual chemical reactions. Thus, chemists had to choose either problems that were tractable classically or those requiring quantum theory. It was an either/or choice.

During the late 1960s, scientists were attempting to move beyond the purely classical picture of molecular interactions used until then in modelling systems and construct inter- and intramolecular potentials using the more fundamental quantum mechanical methods. S. Lifson, along with Warshel and Levitt, developed techniques to study molecular systems such as proteins. These developments naturally began to impact computer-based simulations as well. Thus, by the late 1960s and early 1970s, there were two approaches to computer-based simulations: using equations either of classical Newtonian physics or those of quantum theory. Correspondingly, the software that was applicable to each of the domains came to be developed separately. But both had their pluses and minuses.

The strength of the classical approach was that the calculations were simple, and using the corresponding software, one could model really large molecules. Classical models have far fewer degrees of freedom, and therefore the corresponding programmes could be evaluated faster on a computer. Although the approach gave a very good visualisation of how atoms were positioned in complex molecules, as explained earlier, it could only reveal the nature of the system in a state of rest. The quantum approach provided a more realistic representation of the actual process, but such calculations required enormous computing power because they involve processing every single electron and every atomic nucleus in the molecule.

This is akin to the resolution of a digital image. High resolution means many more pixels but also requires more computer resources. Likewise, a detailed quantum mechanical description of a chemical reaction requires high-performance computing systems. Therefore, at that time, quantum theory-based simulations could be carried out only for small molecules. Also, in the models constructed, scientists were forced to ignore the interactions with the surrounding medium, which is not the case in the real situation. Chemical reactions usually take place in some kind of a solution (Figure 1). If the effect of solvent was to be included in a computer simulation, it would take several decades for the results to be generated.

The solution to the problem was provided by the work of Karplus, Levitt and Warshel: use classical modelling for the larger surroundings and quantum modelling for the core region where the action of interest takes place. The first step towards multi-scale modelling was made when Warshel visited Karplus’ laboratory at Harvard in 1970 after his PhD at the Weizmann Institute of Science in Rehovot, Israel. Karplus’ group had developed quantum physics-based programmes for the simulation of chemical reactions. Karplus had also developed the Karplus equation, a relation based on the quantum mechanical properties of molecules, which is today textbook material for NMR investigations.

Warshel had done work on inter- and intramolecular potentials, and at the Weizmann Institute, which had a powerful computer called Golem, he and Levitt developed path-breaking computer programmes based on the classical approach that could handle all kinds of molecules, including really large biomolecules. The two developed a new programme that could perform different kinds of calculations on different types of electrons. In most molecules, each electron orbits a particular nucleus. But in certain molecules, some electrons can move unhindered between several atomic nuclei. This happens in the retina of the eye, in which the molecules responsible for animal vision are embedded.

Their programme could do quantum calculations on such free electrons, also called pi-electrons, and applied the simpler classical equations to all other electrons, the so-called sigma-electrons. To begin with, they modelled a number of similar but simpler planar molecules; eventually, the two were able to construct a programme that could calculate both the spectra of the freely moving electrons and the vibrational spectra. This groundbreaking programme was completed in 1972.

Eventually, they succeeded in modelling retinal molecules as well. This was the first demonstration that it was possible to construct hybrid computational methods combining the advantages of the classical and quantum approaches in a single programme. But the approach had a limitation: it could handle only molecules with mirror symmetry, like the planar molecules that were investigated, where the symmetry makes a natural separation between pi- and sigma-electrons (Figure 2).

After two years at Harvard, Warshel returned to Israel and teamed up with Levitt, who had by then finished his doctoral work at Cambridge, United Kingdom, to develop a programme free of this limitation, one that could be used to study more complex biomolecules by welding the two approaches together. They began to look at how enzymes function, an area in which Warshel had been interested since his student days. Simulating enzyme reactions required a programme that could transit from the classical to the quantum domain more smoothly than before.

Universal programme

What they wanted to construct was a general computational scheme for partitioning the two classes of electrons in the system. Fundamental problems, such as the construction of the coupling between the classical and quantum parts of the system, as well as between both parts and the surrounding medium, needed to be solved, which took them a couple of years. The work, which they had started at the Weizmann Institute, culminated at Cambridge, where Levitt had returned; Warshel joined him later. In 1976, they published the first computerised model of an enzymatic reaction (Figure 3). It was groundbreaking work because the programme is universal: it can be used for any kind of molecule. Thus, in simulating chemical reactions, the size or geometry of the molecule became irrelevant.

In sum, what Levitt and Warshel accomplished was to apply the computational power where it was needed, on the electrons and nuclei that directly participate in the chemical process. Thus, the computer does not have to take every single atom in the rest of the system into account. They have shown that it is possible to merge several atoms during the calculations, which greatly brings down the required computing power. In fact, further developments in computational chemistry have enabled chemists to bundle atoms and molecules in areas distant from the centre of a chemical action into a single homogeneous mass.
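The partitioning idea described above can be caricatured in code. The sketch below is a deliberately simplified illustration of the hybrid strategy, not the laureates' actual programmes: the function names, atom fields and the coupling formula are all invented for this example. The point is only that an expensive calculation is confined to the small reactive core while the surroundings get a cheap classical treatment.

```python
# Toy sketch of hybrid (quantum core + classical surroundings) partitioning.
# All names and energy values here are hypothetical, for illustration only.

def expensive_core_energy(core_atoms):
    """Stand-in for a quantum mechanical calculation on the reactive core;
    in a real code, its cost grows steeply with the number of core atoms."""
    return sum(a["q_energy"] for a in core_atoms)

def cheap_classical_energy(env_atoms):
    """Stand-in for a classical force-field sum over the surroundings."""
    return sum(a["mm_energy"] for a in env_atoms)

def coupling_energy(core_atoms, env_atoms):
    """Stand-in for the core-surroundings interaction term, whose careful
    construction was one of the hard problems Levitt and Warshel solved."""
    return 0.1 * len(core_atoms) * len(env_atoms)

def hybrid_energy(atoms):
    """Total energy: quantum core + classical environment + coupling."""
    core = [a for a in atoms if a["in_core"]]
    env = [a for a in atoms if not a["in_core"]]
    return (expensive_core_energy(core)
            + cheap_classical_energy(env)
            + coupling_energy(core, env))

atoms = [
    {"in_core": True,  "q_energy": 2.0, "mm_energy": 0.5},  # reactive site
    {"in_core": False, "q_energy": 3.0, "mm_energy": 1.0},  # surroundings
]
print(hybrid_energy(atoms))
```

Because only the `in_core` atoms pass through the expensive routine, the overall cost is governed by the small reactive region, not by the size of the whole biomolecule, which is exactly why the approach scales to proteins and enzymes.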

The universality of the multi-scale modelling approach developed by the laureates is such that now chemists can simulate all kinds of chemical processes, from molecules of life to industrial processes to drugs for optimal health care. The progress in the field has been such that today we have hybrid multi-scale models capable of simulating more than four million atoms. The work of the three laureates has triggered a great amount of interest in developing more accurate theoretical models for application in studies of diverse chemical systems ranging from the synthesis of proteins to photosynthesis, the study of which remains one of the most outstanding problems in chemistry.

“The prize recognises the key role of models in enabling the growth of chemistry, and other disciplines for that matter; computational chemistry models in fact share general principles with models in climate science, ecology and economics,” points out Jogalekar in his blog. “But more importantly,” as the scientific background provided by the Royal Swedish Academy of Sciences notes, “it has opened up a fruitful cooperation between theory and experiment that has made many unsolvable problems solvable.”

One of Levitt’s dreams, as he has written in one of his publications, is to simulate a living organism on a molecular level. The Nobel laureates have given chemists such a powerful investigational tool that this may well become possible in the near future.