Chasing supernovae

Published : Nov 04, 2011 00:00 IST

The universe is expanding faster than ever before: this is what the research of the winners of this year's Nobel Prize in physics concludes.

What if one were to throw a ball up in the air and, instead of watching it come down under the force of gravity, saw it overcome the gravitational attraction and disappear into space faster and faster, as if attractive gravity were being countered by a stronger repulsive force? The winners of this year's Nobel Prize in physics observed something similar happening on the large scale of the universe.

Two astronomy research groups concluded around 1998, independently of each other, that the universe that we know to be expanding as a result of the Big Bang that created it now seems to be speeding up its rate of expansion instead of slowing down, as one would expect, under the gravitational pull of the matter contained in it. One was the Supernova Cosmology Project (SCP) team and the other was the High-z Supernova Search (HZSS) team. The SCP was initiated in 1988 and was headed by Saul Perlmutter; and the HZSS was launched in 1994 and was headed by Brian Schmidt, with Schmidt's associate Adam Riess playing a crucial role.

The 2011 physics Nobel Prize, according to the award citation, has been given for the discovery of the accelerating expansion of the Universe through observations of distant supernovae. One half of the award has gone to Perlmutter, and the other half, divided equally, to Schmidt and Riess.

Perlmutter, a 52-year-old American, continues to head the SCP, which is still gathering data on supernovae for understanding cosmology, at the Lawrence Berkeley National Laboratory (LBNL), and is a professor of physics at the University of California, Berkeley. Brian Schmidt, a 44-year-old American-Australian, heads the HZSS team and is a Distinguished Professor at the Australian National University at Weston Creek, Australia. Adam Riess, a 42-year-old American, is now a Professor of Astronomy and Physics at Johns Hopkins University and the Space Telescope Science Institute, U.S.

Just over a century ago, the universe was considered no larger than our own galaxy, the Milky Way. The many nebulae that were seen in the sky were thought of as mere gas clouds on the fringes of the Milky Way. In 1912, the American astronomer Vesto Slipher, working at the Lowell Observatory, pioneered measurements of the shift towards the red of the light from the brightest of these nebulae. The redshift of light from an object is akin to the Doppler effect that one is familiar with in sound from a moving source; sound waves from a receding train, for example, get stretched and the pitch drops in proportion to the velocity. Similarly, light waves from a receding object, too, get stretched and shift towards longer wavelengths, the red end of the spectrum. Slipher found something interesting that could not be explained. The nebulae seemed to be moving faster than the escape velocity for the Milky Way!
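The arithmetic behind such a measurement is simple. The following is a minimal sketch in Python, using the non-relativistic Doppler approximation (valid only for velocities much smaller than the speed of light); the wavelengths are illustrative, not Slipher's actual data:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def redshift(emitted_nm: float, observed_nm: float) -> float:
    """Fractional stretching of a wavelength: z = (observed - emitted) / emitted."""
    return (observed_nm - emitted_nm) / emitted_nm

def recession_velocity_km_s(z: float) -> float:
    """Non-relativistic Doppler approximation: v ~ z * c, good only for z << 1."""
    return z * C_KM_S

# An illustrative hydrogen-alpha line emitted at 656.3 nm but observed at 660.0 nm:
z = redshift(656.3, 660.0)      # ~0.0056
v = recession_velocity_km_s(z)  # ~1,700 km/s, far above the Milky Way's escape velocity
```

A shift of a fraction of a per cent in wavelength already corresponds to a recession of well over a thousand kilometres per second, which is why Slipher's nebulae could not be gravitationally bound to the Milky Way.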

In the 1920s, the nature of these nebulae, too, got unravelled when astronomers, most importantly Edwin Hubble, using the new 100-inch telescope at Mt. Wilson, looked at the better resolved images of these nebulae and could spot individual stars within spiral nebulae like the Andromeda nebula. Some of these stars were found to be pulsating stars, called Cepheids, which brightened and dimmed with regular periods. Already in 1912, the American astronomer Henrietta Swan Leavitt, analysing thousands of these Cepheids, had found an interesting characteristic of these variable stars: the brighter ones had longer pulse periods, so the luminosity of a Cepheid and its pulsation period were related. Using this relation, the intrinsic brightness of a distant Cepheid could be calculated by observing its pulsation period. So in Cepheids astronomers had the 'standard candle', the yardstick to measure a star's intrinsic brightness.

Now if the luminosity-period relation could be calibrated with nearby Cepheids, whose distances can be measured from parallax measurements, distances to other Cepheids can be established (to within 10 per cent): the dimmer its light, the farther away the star is, in accordance with the inverse square law. The pulse period gives its intrinsic luminosity, and from the apparent luminosity (brightness as observed) and the inverse square law the distance can be deduced. Hubble used Leavitt's relation to estimate the distances of the spiral nebulae and concluded that they were too far away to be part of the Milky Way and hence must be galaxies themselves beyond the Milky Way. So the universe was much larger than just our Milky Way.
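The inverse-square-law step can be sketched as code. Assuming the period-luminosity relation has already supplied the star's intrinsic luminosity, the apparent brightness then fixes the distance; the numbers below are illustrative, not real catalogue values:

```python
import math

def distance_m(luminosity_w: float, apparent_brightness_w_m2: float) -> float:
    """Inverse square law: b = L / (4*pi*d^2), solved for the distance d."""
    return math.sqrt(luminosity_w / (4 * math.pi * apparent_brightness_w_m2))

# Illustrative Cepheid: intrinsic luminosity 1e30 W, measured flux 1e-9 W/m^2
d = distance_m(1e30, 1e-9)

# Sanity check of the law: a star appearing four times dimmer is twice as far away
assert abs(distance_m(1e30, 1e-9 / 4) / d - 2.0) < 1e-9
```

The same logic, with supernovae in place of Cepheids, is what the Nobel-winning surveys applied at vastly greater distances.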

Further, combining his own distance measurements of 46 galaxies with the redshifts of these that had been measured by Slipher and others, Hubble found that almost all these galaxies were moving away from us. He also found a rough proportionality between an object's distance and its redshift, which in turn is proportional to its radial recession velocity; the proportionality factor is called the Hubble constant. The conclusion was that the galaxies were rushing away from us and from each other, much like the raisins in a cake swelling in the oven, and the farther away they are, the faster they are receding from us. This is today known as 'Hubble's law', which Hubble published in 1929.
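Hubble's law says the recession velocity is simply proportional to distance, v = H0 × d. A small sketch, taking a round present-day value of about 70 km/s per megaparsec for the constant (an assumed figure for illustration; Hubble's own 1929 estimate was several times larger):

```python
H0_KM_S_PER_MPC = 70.0  # assumed round modern value of the Hubble constant

def recession_velocity_km_s(distance_mpc: float) -> float:
    """Hubble's law: v = H0 * d."""
    return H0_KM_S_PER_MPC * distance_mpc

# A galaxy 100 Mpc away recedes at about 7,000 km/s;
# doubling the distance doubles the recession velocity.
v_near = recession_velocity_km_s(100.0)  # 7000.0 km/s
v_far = recession_velocity_km_s(200.0)   # 14000.0 km/s
```

The linearity is the whole point of the raisin-cake picture: every observer, in every galaxy, sees the same proportionality.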

While observational cosmology was giving new insights about the universe we live in, the profound theoretical foundations for the gravity-dominated universe were also being laid by Einstein. In November 1915, he formulated the theory of General Relativity (GR), an extension of his theory of Special Relativity. This revolutionary theory is one of the greatest achievements in the history of science, a major milestone in 20th century physics. It soon became clear to the astrophysics community that the GR equations could be applied to cosmological situations, and in 1917 Einstein himself applied them to the entire universe, making the implicit assumption that the universe is homogeneous at cosmological scales, where the effects of local clusters of matter get averaged out.

COSMOLOGICAL CONSTANT

Remarkably, however, the solutions of the equations suggested that the universe could not be stable: it had to either expand or shrink. This conclusion, which Einstein found disturbing and contrary to his view of a static universe, had been reached at least a decade before Hubble's observations of ever receding galaxies. Einstein, however, soon found a way out of it. His GR equations, in their most general form, could accommodate a 'cosmological constant', a constant energy component of the universe, which could cancel the instability to give a static universe. However, calculations using the GR equations by the Russian mathematician-physicist Alexander Friedman during 1922-24 and the work of the Belgian priest-physicist Georges Lemaître in 1927 essentially showed that Einstein's static solution was unstable and any small perturbation would render the universe non-static. Einstein, however, did not like an expanding universe and had apparently called the idea 'abominable'.

It must also be pointed out here that, though Hubble is credited with Hubble's Law, Lemaître in his 1927 paper had derived the equations for an expanding universe, obtained a relation similar to Hubble's and found essentially the same proportionality constant from observational data on the distances and radial velocities of 42 galaxies, two years before Hubble did. However, his results were published in a Belgian journal that was not widely circulated, and the 1931 English translation of the paper in the Monthly Notices of the Royal Astronomical Society (MNRAS) had the section about Hubble's Law removed. Recently it has come to light that these omissions of Lemaître's results on the expansion of the universe were perhaps deliberate, and they have denied Lemaître due credit for such a revolutionary cosmological result. He also subsequently showed the logical consequence of his finding: that the universe must have existed for a finite time only and must have emerged from an initial single quantum, and thus laid the basis for the concept of the Big Bang.

The discovery of the expansion of the universe by Slipher, Hubble, Lemaître and others in the 1920s soon led to its widespread acceptance, with which Einstein, too, later reconciled himself, given the growing evidence. He is supposed to have called the introduction of the cosmological constant his 'biggest blunder'. From then on the idea of the cosmological constant in the GR equations faded, only to gain importance once again following this year's Nobel Prize-winning discoveries. The constant, originally put forth by Einstein for different reasons, is now thought to represent an intrinsic energy component of spacetime that can overcome the total energy content of ordinary matter and provide the push for the observed accelerated expansion of the universe.

The theoretical understanding of an expanding universe in the GR framework following Friedman's and Lemaître's work also gave credence to the model of a homogeneous and isotropic universe, which, too, quickly came to be accepted by the scientific community. This idea that the universe looks the same on cosmological scales to all observers, independent of the location and independent of the direction in which they look, is called the Cosmological Principle. The model of the universe that incorporates the Cosmological Principle is called the Friedman-Lemaître-Robertson-Walker (FLRW) model. Since the 1930s, the evidence for the Cosmological Principle has grown stronger and stronger, particularly with the discovery in 1964 by Arno Penzias and Robert Wilson of the all-pervading Cosmic Microwave Background (CMB) radiation, the relic radiation from the Big Bang that has since stretched to microwave wavelengths corresponding to a black body at about 3 kelvin and is distributed uniformly throughout. The recent observations of the CMB by the WMAP satellite show that the largest temperature anisotropies (of the order of a thousandth of a degree) arise owing to the motion of the Milky Way through space. If this component is subtracted from the data, the residual anisotropies are a hundred times smaller.

To understand the history of the universe, right from the Big Bang to the present, and its possible evolution into the future is the basic objective of cosmology. The paradigm that has emerged for understanding the large-scale structure of the universe over the last seven decades is based on the equations of GR with the assumption of a homogeneous and isotropic universe. In principle, the record of the expansion history of the universe can be obtained by using as a 'standard candle' any distinguishable class of astronomical objects of known intrinsic brightness that is available to the astronomer over a wide range of distances. The recorded redshift and brightness of such objects will thus provide a measure of the total integrated expansion of the universe since the time the light was emitted. If such measurements over a sufficient range of distances can be compiled, we will have a record of the history of the universe's expansion.

'STANDARD CANDLES'

As we have seen, one well-known class of standard candles is the Cepheid variable stars. These stars can now be identified at distances up to about 10 megaparsec. (1 parsec or pc is the distance corresponding to a parallax of 1 arc second and is used as a unit of length in astronomy. It is equal to 3.26 light-years, or about 31 trillion (3.1 × 10^13) km.) The Milky Way has a diameter of 30 kpc. To determine the expansion history of the universe, however, we need to be able to measure distances of at least about 1,000 Mpc. Way back in 1938, Walter Baade and Fred Zwicky, working together at the Mt. Wilson Observatory, suggested that supernovae were highly promising candidates as distance markers. They are bright enough to be visible at large distances and can, in fact, over a few weeks, outshine an entire galaxy. However, with many supernovae being discovered over the years, it was found that their peak brightness, which had originally seemed quite uniform, had a considerable range, and these objects were really not as homogeneous as 'standard candles' should be.
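The unit conversion quoted in the parenthesis above can be checked directly: a parsec is 3.26 light-years, and a light-year is the distance light covers in a year. A quick sketch:

```python
C_KM_S = 299_792.458                   # speed of light in km/s
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

LIGHT_YEAR_KM = C_KM_S * SECONDS_PER_YEAR  # ~9.46e12 km
PARSEC_KM = 3.26 * LIGHT_YEAR_KM           # ~3.1e13 km, as quoted in the text

# The scale gap the surveys had to bridge: the Milky Way's 30 kpc diameter
# versus the ~1,000 Mpc depth needed to trace the expansion history.
MILKY_WAY_DIAMETER_KM = 30e3 * PARSEC_KM
SURVEY_DEPTH_KM = 1000e6 * PARSEC_KM
```

The required survey depth is tens of thousands of times the size of our own galaxy, which is why Cepheids, visible only out to about 10 Mpc, fall far short and brighter markers were needed.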

Fortunately, by the 1980s a class of supernovae with no hydrogen emission lines in their spectra had emerged, which were called Type 1 supernovae, and among these there was a sub-class, Type 1a, which in addition had a silicon absorption line at 615 nm wavelength in their spectra. Supernovae belonging to Type 1a had an amazing degree of uniformity in their characteristics. Their spectra were found to match feature by feature, as did their 'light curves', plots of supernova brightness as it waxes and wanes in the weeks following the explosion. Type 1a supernovae became the new standard candles.

A Type 1a supernova results from the explosion, at the end of its life cycle, of an extremely compact star called a white dwarf, which is about the size of the earth but as heavy as the sun. White dwarfs form when a star has run out of the fuel, hydrogen and helium, required to sustain the nuclear fusion reactions in its core. Only carbon and oxygen remain. But it is not at all uncommon for low-mass white dwarfs to be part of binary star systems, in which case the white dwarf's strong gravity can lead to accretion of gas and matter from the companion star. As the accreting white dwarf approaches the limit of 1.4 solar masses, the Chandrasekhar limit, it can no longer hold itself together and becomes unstable. When this happens, the white dwarf becomes hot enough for runaway nuclear fusion and the star explodes. The nuclear fusion emits strong radiation that increases rapidly during the first weeks after the explosion and then decreases over the following months. So, irrespective of the nature of the white dwarf and how it started out its life cycle, its final fate is the same, and this is why Type 1a supernovae are remarkably similar and can serve as ideal standard candles.

About 10 Type 1a supernovae occur somewhere in the universe every minute, but the universe being so huge, in a typical galaxy only one or two explosions occur in a thousand years. Astronomers were lucky to spot such a supernova explosion just last month in a galaxy close to the Big Dipper (Saptarishi) constellation, one so bright that it could be seen with a pair of binoculars. But most supernovae occur at great distances and thus are dimmer. In any case, these explosions are very brief and occur at random. So in their search for such supernovae, astronomers have a daunting problem at hand: how does one plan an observation to be able to catch a supernova in this vast expanse of the universe? And telescope observation times are not available on demand.

SUPERNOVAE ON DEMAND

Perlmutter said in a 2003 article: 'This was a classic Catch-22. You couldn't preschedule telescope time to identify a supernova's type or follow it up if you couldn't guarantee one. But you couldn't prove a technique for guaranteeing Type 1a supernova discoveries without scheduling telescope time to identify them spectroscopically.' Besides these problems of logistics, astronomers had to deal with certain technical issues as well. The light of supernovae had to be extracted from the background light of their host galaxies. To obtain the correct maximum brightness of a supernova, one had to correct for the scattering and absorption of light by intervening galactic dust.

Crucial to hitting upon a technique for catching supernovae as they brightened, which Perlmutter and Carl Pennypacker came up with in 1988, was the invention by Willard Boyle and George Smith of light-sensitive digital imaging sensors based on charge-coupled devices (CCDs), which allowed thousands of galaxies to be studied in a night on a 4 m telescope, thus increasing the chances of catching a supernova. 'Contemporary computing and networking advances just barely made possible the next-day analysis that would let us catch supernovae as they first brightened,' wrote Perlmutter.

The Catch-22 problem was solved by Perlmutter and his associates with an ingenious idea. 'In retrospect,' wrote Perlmutter, 'the solution we found seems obvious… By specific timing of the requested telescope schedules, we could guarantee that our wide-field imager would harvest a batch of about a dozen freshly exploded supernovae, all discovered on a pre-specified observing date during the dark phase of the moon.' (A bright moon is an impediment to the follow-up observation.) 'We first demonstrated this supernovae-on-demand methodology in 1994. From then on, proposals for time at major ground-based telescopes could specify delivery dates and roughly how many supernovae would be found and followed up.' This approach also made it possible to use the Hubble Space Telescope (HST) for follow-up light-curve observations. With a growing number of collaborators, the Berkeley team's project came to be called the SCP.

Implementing the programme and chasing supernovae was thus as much a challenge of science and technology as of logistics. First, the right type of supernova had to be found. Second, its redshift and brightness had to be measured. The light curve had to be analysed in order to compare it with those of other supernovae of the same type at known distances. This called for a network of scientists to decide whether a particular star was suitable for observation and follow-up. It also required switching between telescopes and having observation time at a telescope, including the HST, granted without delay, a procedure that normally takes months. All this had to be done quickly because supernovae fade quickly.

The new supernovae-on-demand techniques now permitted systematic study of distant supernovae. At the end of 1994, the second collaboration, the HZSS, led by Schmidt and including many supernova experts, also got under way, essentially adopting the same techniques. Over the following years the two collaborations independently searched for supernovae, often but not always at the same telescopes. The two rival teams, according to Perlmutter, 'raced against each other over the next few years, occasionally covering for each other with observations when one of us had bad weather, as we all worked feverishly to find and study the guaranteed on-demand batches of supernovae'. Like the SCP, the HZSS could also demonstrate the validity of the chosen strategy.

When Einstein removed the cosmological constant from the GR equations, the obvious question that remained was about the universe's ultimate fate. He posed it in terms of the geometry of spacetime that the equations represented: is the universe open or closed, or is it something in between, flat?

An open universe is one in which the gravitational force of attraction is not large enough to prevent the expansion of the universe: all matter gets diluted in an ever larger, ever colder and emptier space. In a closed universe, on the other hand, the gravitational force is strong enough to halt and even reverse the expansion, so the universe will ultimately stop expanding and fall back together in a hot and violent ending, a 'Big Crunch'. Most cosmologists prefer a simple and mathematically elegant universe that is flat. But if there is a cosmological constant, the expansion would continue to accelerate even if the universe is flat. The Nobel laureates, in their attempt at measuring supernova distances and redshifts, expected to measure the cosmic deceleration, or how the universe was slowing down, as it was expected to do in the generally accepted cosmological scenario, given the energy density of the observed matter in the universe.

In the simplest cosmological models, the expansion history of the cosmos is determined entirely by its mass density: the higher the density, the more gravity slows the expansion. In the past, therefore, a high-mass-density universe would have been expanding faster than it does now, and one would not have to look too far back in time, through extremely distant and faint supernovae, to find evidence for the expected deceleration. Conversely, in a low-mass-density universe one would have to look far back in time. But the mass distribution observed in the universe at present already sets a lower limit to the mass density, and hence determines how far back one must look to find a given deceleration.

In the beginning of 1998, both groups published results which seemed to show evidence for far less than the expected deceleration. While these seemed consistent with a low-mass-density universe, the high-redshift supernovae that the two groups found were fainter than would be expected even for an empty cosmos. 'The faintness, or distance, of the high-redshift supernovae was a dramatic surprise,' wrote Perlmutter. If the cosmic expansion had been slowing down, they should have appeared brighter. The surprising conclusion was that the expansion of the universe was not slowing down but actually accelerating. Two breakthrough papers implying this were published soon after, in 1998 and 1999. While the HZSS observations were based on 16 Type 1a supernovae analysed by Riess, then a postdoctoral researcher at Berkeley, the SCP paper by Perlmutter and others included 42 supernovae. The fact that both the groups independently presented similar, albeit extraordinary, results was crucial to their general acceptance by the scientific community.

To be sure that their conclusions were right, the scientists investigated questions such as: could the dimness of distant supernovae be the effect of intervening dust? Or did the Type 1a supernovae in the early universe have different properties from the nearby, recent ones? Addressing these questions extensively, the two groups have concluded that dust is not a major problem and that the spectral properties of near and distant supernovae are very similar. Later studies of supernovae of very high redshift, from the time when the universe was much denser and the energy density due to matter dominated, indicate that the repulsion set in when the universe was about half its present age. The dramatic conclusion of an accelerating universe has been confirmed by precision measurements of the CMB and by studies of galaxy clustering.

The driving force behind the acceleration, however, is unknown. The widely held current belief is that the cause of the acceleration is vacuum energy, called dark energy, but understanding its exact nature is one of the biggest challenges in present-day physics. The amount of acceleration found actually implies that three-fourths of the universe is in this unknown form of energy. If the hypothesis is correct, then together with the other unknown component, which exerts gravitational force but does not interact with light, called dark matter, this constitutes 95 per cent of the universe. Only the remaining 5 per cent is regular matter, which makes up galaxies, stars, planets and living things.

The simplest way to introduce a repulsive counterforce is to put back Einstein's constant, which, being a constant, does not change with time. Dark energy would then have begun to dominate when matter got diluted to a low density because of expansion over billions of years. According to scientists, that would account for the fact that the cosmological constant entered the picture so late in the history of the universe, some five to six billion years ago. However, with regard to the cosmological constant, vacuum energy gives rise to its own peculiar problem, based on elementary particle physics and quantum theory. The latter tells us that the vacuum is never empty but is actually a bubbling soup of matter and antimatter that are constantly being created and annihilated and contribute to energy. However, the simplest estimate puts this at an astounding 10^120 (1 followed by 120 zeroes) times larger than the amount of dark energy required. So vacuum energy would raise more questions than it would answer about an accelerating universe.

PERTINENT QUESTION

But the more pertinent question, irrespective of whether there is dark energy or not, is whether the conclusion of an accelerating universe, for which the Nobel Prize has been awarded this year, is itself questionable. 'The evidence for accelerated expansion is also indirect,' points out Subir Sarkar, an Oxford University cosmologist. 'It is based on interpreting the brightness of distant supernovae (versus their redshift) in the framework of an assumed homogeneous cosmological model, whereas the real universe is manifestly not so… The very fact that this immediately implies that the universe is dominated by the cosmological constant should ring alarm bells and force us to examine the assumptions of the "standard" cosmological model… Although the universe is inhomogeneous on small scales, averaging over spatial fluctuations and studying time evolution is not easily done in a mathematically rigorous and consistent way. The real problem is we have very few cosmological solutions of the GR equations apart from the simplest ones based on the cosmological principle,' he added.

Indeed, there are many scientists working to understand if the interpretation is an artifact of the assumed idealised FLRW model. One such theorist is Syksy Räsänen of Helsinki University. 'The fact that the distribution of matter and geometry in the universe has large local fluctuations can change the relation [between the distance and expansion rate] from the FLRW model,' he pointed out. However, he made the following significant observation: 'There are also independent observations of the expansion rate, which indicate that the expansion has at least decelerated less than expected. So the expansion of the universe has to be changed anyway, even if acceleration is not proved beyond reasonable doubt. Also, acceleration does not in itself imply the presence of dark energy or modification of gravity. The presence of structures can also lead to actual accelerating expansion of cosmological volumes. Whether the effect of structures is significant remains an open question. Until this effect is quantified, we do not know whether the observations indicate new physics, or if they can be understood in terms of a complex realisation of the physics we already know.'

The major contribution of the Nobel laureates actually lies in their demonstration of how to use distant supernovae to record the history of the expansion of the universe. It is as much a demonstration of the ingenious application of science and technology in cosmological studies as of the successful collaboration among the global community of astronomers towards knowing the truth about the universe we live in and its ultimate fate.
