For a true picture

Published: Nov 04, 2011 00:00 IST

The 2-m telescope at the IUCAA Girawali Observatory near Pune, the next test bed for Robo-AO. - BY SPECIAL ARRANGEMENT
The use of Adaptive Optics and Robo-AO will go a long way in improving the performance of optical instruments such as telescopes.

NURSERY RHYMES have told us since childhood days that the stars that we see in the sky twinkle. This apparent twinkling is because of the swirling air or the turbulence in the atmosphere above us. The same thing happens when stars or planets are observed or imaged through a telescope. In addition to the twinkling, turbulence causes the light from the star to jump about in the field of the eyepiece of the telescope. If instead of an eye there were an imaging device at the focal plane of the telescope's lens, the image would correspondingly shift around on the photographic plate or the imaging plane of the detector, resulting in a smudged or blurred image, a fuzzy blob, instead of a sharply defined point or a disk. A bigger and more powerful telescope only aggravates the smudging.

If two stars are very close, this makes it difficult to 'see', or resolve, them as separate objects. Astronomers quantify this ability to resolve two nearby objects in terms of 'seeing', which is a measure of the optical steadiness of the atmosphere. The unsteadiness arises owing to thermal non-uniformities in the atmosphere, like layers having different temperatures and wind velocities, which are always present, causing the light from the star that passes through them to deviate constantly. Temperature fluctuations in small patches of air act as many little lenses and cause light to be refracted many times by little amounts. Therefore, when light reaches the telescope, what started out as a plane wave gets distorted. Equivalently, the light rays are no longer parallel and hence cannot be focussed to a point. Figure 1 shows the distortion to the incoming light schematically.

'Seeing' is the biggest problem in earth-based astronomy. This is why most astronomical observatories are built on mountain tops, as the atmosphere closer to the ground is much more convective than at high altitudes. Though 'seeing' does improve at such sites, the problem is not solved completely. One of the chief reasons for launching telescopes, such as the Hubble Space Telescope (HST), into space is to overcome the blurring effect of the atmosphere and achieve a far better resolution. However, this option is not available to everyone since space-based observatories are expensive and difficult to maintain, as the HST experience should tell us.

Under ideal conditions, the theoretical limit for the optical quality, or resolving power, of an imaging device, including the eye, is determined by the diffraction of light waves. This is the so-called 'diffraction limit' and is given by the Rayleigh Criterion. It gives the smallest angular separation at which two equally bright point sources can be distinguished. Images of any two objects separated by a smaller angle will merge because of diffraction effects. According to the Rayleigh Criterion, the resolving power of a telescope (in radians; 2π radians make an angle of 360°) can be approximated by the equation R = λ/D, where λ is the wavelength of light and D is the diameter, or aperture, of the telescope. For example, this relation tells us that for a theoretically achievable angular resolution of 0.1 arcsec (1 arcsec = 1/3,600 of a degree) with yellow light of wavelength 580 nanometres (nm), the telescope should have a diameter of 1.2 metres.
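The 1.2-metre figure can be checked with a few lines of arithmetic. The following Python sketch, which uses the simple λ/D approximation given above (illustrative only; the full Rayleigh Criterion carries an extra factor of 1.22 for a circular aperture), recovers it:

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Approximate angular resolution R = lambda/D, converted to arcseconds."""
    theta_rad = wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

# Aperture needed for 0.1-arcsec resolution in yellow light (580 nm)
theta_rad = math.radians(0.1 / 3600)   # 0.1 arcsec expressed in radians
aperture = 580e-9 / theta_rad
print(round(aperture, 1))  # ~1.2 (metres)
```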

However, this theoretical limit is practically unachievable because of atmospheric distortions. Table 1 gives a comparison of the diffraction-limited resolution and actual resolution that different ground-based telescopes achieve. The 2.4-m HST, on the other hand, has a resolution better than 0.1 arcsec in the visible compared with the diffraction limit of 0.05 arcsec. (The eye's resolving power is also shown for comparison though the eye's limitations are more from the imperfections in the cornea and the eye lens.)

Adaptive Optics

Adaptive Optics (AO) is a technique that uses optical systems in conjunction with the telescope to correct, or compensate for, the optical aberrations introduced by the intervening medium. In 1953, Horace Babcock, an inventor of astronomical instruments then working at the Mt. Wilson and Palomar observatories in the United States, proposed this concept. However, appropriate technologies were not available in the 1950s to meet the precision required in an AO system.

The first working AO system was built in 1956 by the Caltech nuclear physicist Robert Leighton as an amateur astronomer to improve planetary images at the 60-inch telescope at Mt. Wilson. Conceptually, however, it was quite different from present-day AO systems. It was based on reducing the image drift with short exposure times and using an electronic guiding system that moved the imaging plane to cancel out the image motion on the film. In 1992, the telescope was fitted with a different early AO system called the Atmospheric Compensation Experiment (ACE) developed as part of the Strategic Defence Initiative (SDI).

Present-day AO systems essentially correct the wavefront received by the telescope itself by sensing its distortions in near real-time and applying the required corrections so that the image detector receives a wavefront that approximates the plane wave and gives an image resolution that is close to the diffraction limit. Because the atmosphere is constantly changing, the distortions caused are random and quite dynamic. Therefore, for the corrections to be made in near real-time, AO systems need to operate at high frequencies, typically about 1,000 Hertz (that is, correction response times of a few milliseconds). Since it is difficult to change the primary mirror of a telescope at such high speeds, AO systems use secondary wavefront correcting devices, such as a flexible deformable mirror (DM) or a liquid crystal display (LCD) array. After the Cold War ended, when many of the AO technologies used by the military were declassified, rapid technological advances in AO became possible. (In active optics, as against adaptive optics, deformations in large primary mirror geometry itself are corrected by using an array of segmented mirrors, and the timescale involved is much longer.)

Basically, an AO system comprises the following main components: a wavefront sensor (WFS) to measure the distortion due to atmospheric turbulence; a wavefront corrector, usually a DM located behind the exit pupil of the telescope, to compensate for the distortion; and a control system to calculate the required correction and necessary shape to apply to the corrector. The basic principle is that the system takes a sample of light from a star, determines how the atmosphere is perturbing and distorting the plane wavefront, and then uses a DM to straighten it.

The early techniques used a bright guide star in the sky close to the observed object, such as a galaxy, to serve as a reference. Light from the natural guide star (NGS) passes through the telescope optics and is sampled about 1,000 times a second by the WFS, which measures how turbulence is distorting the wavefront. This information is sent to a computer, which calculates the correction to be applied, and the result is fed to the DM to counteract the distortions. The light wavefront from the observed galaxy reflects off the DM, and the distortions are cancelled out. This process is a continuous one, running in a closed-loop control system that constantly sends small corrections to the shape of the DM. Fast computing and modelling of the incoming wavefront are crucial to this process.
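The closed-loop idea can be sketched as a toy simulation (the numbers here are invented for illustration; a real controller works on two-dimensional wavefront maps at kilohertz rates): each cycle, the sensor measures the residual distortion left after the DM's current correction, and a simple integrator controller nudges the DM toward the distortion.

```python
import numpy as np

rng = np.random.default_rng(0)

def wfs_measure(wavefront, dm_shape, noise=0.01):
    # Residual wavefront seen by the sensor after the DM correction,
    # plus a little measurement noise
    return wavefront - dm_shape + rng.normal(0, noise, wavefront.shape)

# Toy closed loop: integrator controller against a frozen distortion
true_wavefront = rng.normal(0, 1.0, 64)   # random distortion (arbitrary units)
dm_shape = np.zeros(64)
gain = 0.5
for _ in range(50):                        # ~50 loop cycles at ~1 kHz
    residual = wfs_measure(true_wavefront, dm_shape)
    dm_shape += gain * residual            # push the DM toward the distortion
print(np.std(true_wavefront - dm_shape) < 0.1)  # residual shrinks to noise level
```

In a real system the "wavefront" is a map over the telescope pupil and the loop gain is tuned against delay and noise, but the structure — measure residual, apply a fraction of it, repeat — is the same.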

Usually, AO systems employ wavefront correction in two stages: first, what is known as tip-tilt correction, the simplest form of distortion correction, is applied, followed by higher-order correction with the use of the DM. Tip-tilt correction corresponds to the correction of the tilts of the entire wavefront in two dimensions (relative to the plane perpendicular to the optic axis of the telescope) and is performed using a rapidly moving tip-tilt mirror which can rotate around two axes. A significant fraction of the atmospheric distortion can be removed this way.
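Tip-tilt removal amounts to subtracting the best-fit plane from the measured wavefront, leaving only the higher-order bumps for the DM. A minimal sketch with made-up numbers:

```python
import numpy as np

# A toy 8x8 wavefront map: a large overall tilt plus a small ripple
x, y = np.meshgrid(np.arange(8), np.arange(8))
wavefront = 0.3 * x - 0.2 * y + 0.05 * np.sin(x)

# Least-squares fit of a plane a*x + b*y + c (the tip-tilt component)
A = np.column_stack([x.ravel(), y.ravel(), np.ones(64)])
coeffs, *_ = np.linalg.lstsq(A, wavefront.ravel(), rcond=None)

# Subtracting the plane removes most of the distortion
residual = wavefront - (A @ coeffs).reshape(8, 8)
print(residual.std() < wavefront.std())  # True: only the small ripple remains
```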

Often it is difficult to find an NGS that is sufficiently bright to serve the purpose close to every observed object. For infrared observations, it must lie within about 30 arcsec of the object and for visible light it must be within 10 arcsec. These constraints are fairly severe and they correspondingly limit the sky coverage of the telescope. The fraction of objects in the sky that have a suitable NGS is a few per cent or less. To overcome this problem, astronomers in recent times have begun to use a laser to make an artificial star in the part of the sky being observed.

Laser guide star

The idea of using a laser guide star (LGS) was suggested in the early 1980s. Basically, this involves mounting a laser on the telescope and pointing it in the direction of the object being observed. One concept uses the sodium LGS as the beacon: yellow laser light of wavelength 589 nm excites a layer of sodium atoms, deposited by meteorites, that is naturally present in the mesosphere at a height of about 90 kilometres. The sodium atoms then re-emit the laser light, resulting in a small glowing spot in the sky that serves as the artificial star against which turbulence effects on the wavefront can be measured. (The same atomic transition is used in sodium vapour street lamps.) The power of sodium beacons is typically 6-25 watts.

An alternative concept is that of the Rayleigh beacon, which uses pulsed laser light focussed at an altitude of about 10-15 km. The laser light is scattered by air molecules, and the WFS is timed so as to observe the scattered light at just the moment when the pulse would have travelled up to the chosen altitude and back. The concept is so named because this type of scattering from air molecules is called 'Rayleigh scattering': the scattered light intensity is inversely proportional to the fourth power of the wavelength. This means that the shorter the wavelength of the laser light, the more intense the scattered light. Therefore, Rayleigh beacons usually operate at near-ultraviolet wavelengths. (It is the Rayleigh scattering of sunlight by the atmosphere that makes the daytime sky appear blue; blue light scatters more than the longer wavelength red or yellow light.)
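The 1/λ⁴ dependence makes the advantage of a near-ultraviolet beam concrete. A two-line calculation, with intensities expressed relative to the 589-nm sodium wavelength:

```python
# Rayleigh-scattered intensity scales as 1/lambda^4; compare a near-UV
# beam (355 nm) with sodium-yellow light, relative to 589 nm.
for wavelength_nm in (355, 589):
    relative_intensity = (589 / wavelength_nm) ** 4
    print(wavelength_nm, round(relative_intensity, 1))  # 355 nm scatters ~7.6x more
```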

One of the limitations of an LGS is that it is focussed at a finite altitude, so its light does not pass through the same patches of atmosphere as light from a real star or the object being observed. This is the so-called cone effect (Figure 2): light from the LGS forms a cone, so some bits of the atmospheric turbulence that affect the observed object are missed. In the case of the Rayleigh beacon, which is at a much lower altitude than the sodium beacon, the 'missing data' about the wavefront is so much greater, particularly in large-aperture telescopes, that AO performance becomes quite poor. To compensate for this, it often becomes necessary to use multiple laser beams to determine the atmospheric distortions sufficiently accurately, using what is known as 'tomographic wavefront reconstruction'. At present, only some telescopes around the world have implemented the LGS concept, and only a few of them use multiple beacons.

The other chief limitation of AO is that the technique performs well only at longer wavelengths, such as the infrared, and not in the visible region. This is because of a factor called the Strehl ratio, which is the ratio of the peak intensity of the resulting image to that of a theoretically perfect image (in the absence of the atmosphere) at the diffraction limit. It is a measure of the optical quality of telescopes. In practical situations images have a 'core' and a 'halo'. When an AO system performs well, there is more intensity in the core. In poor seeing conditions, the halo contains a larger fraction of the intensity and the peak at the core falls; that is, the image becomes smudgier. This ratio falls off rapidly at shorter optical wavelengths. While ratios in the infrared are in the range of 0.7, in visible wavelengths current AO systems on large telescopes do not perform better than 0.01, according to AO specialists. So present-day AO systems are used largely in the infrared.
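The wavelength dependence follows from the fact that a given residual wavefront error is a larger fraction of a short wavelength than of a long one. A rough illustration using the Maréchal approximation for the Strehl ratio (the 130-nm residual error below is an assumed, purely illustrative figure):

```python
import math

def strehl(rms_error_nm, wavelength_nm):
    """Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2)."""
    phase_error = 2 * math.pi * rms_error_nm / wavelength_nm
    return math.exp(-phase_error ** 2)

residual_nm = 130  # assumed rms wavefront error left after correction
print(round(strehl(residual_nm, 1650), 2))  # infrared (H band): ~0.78
print(round(strehl(residual_nm, 580), 2))   # visible: ~0.14
```

The same residual error that leaves a sharp infrared image thus produces a much smudgier visible one.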

The two key components of an AO system are the WFS and the DM. The WFS, which must measure turbulence a hundred to a thousand times a second, is typically a fast-readout Charge Coupled Device (CCD), similar to those used at slower speeds in video cameras. The arrangement of optics in front of the sensor varies depending on the technique being used. There are mainly two techniques in use: curvature sensing and Shack-Hartmann sensing. The latter, which is more widely used, is shown schematically in Figure 3. In a Shack-Hartmann WFS, the circular telescope aperture is split up into a two-dimensional array of pixels using an array of small lenslets so that the shape of the incoming wavefront can be measured as a function of position in the telescope aperture. Light from an NGS or LGS is focussed by the array of lenslets on to the fast CCD camera. In the absence of turbulence, a plane wavefront would be focussed on to an evenly spaced array of spots on the camera. In the presence of turbulence, the spots are irregularly spaced and jump around on the detector, tracking the rapid changes in the atmosphere. The mean wavefront perturbation in each pixel is calculated quickly, and the pixelated map of the wavefront is fed to the DM, which then corrects the wavefront distortions.
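In software terms, each lenslet's spot position reduces to a centre-of-mass calculation, and the spot's displacement from its calibrated reference position gives the local wavefront slope. A simplified sketch with one subaperture and hypothetical pixel values:

```python
import numpy as np

def spot_centroid(subimage):
    """Centre of mass of one lenslet's spot on the CCD, in pixel units."""
    total = subimage.sum()
    ys, xs = np.indices(subimage.shape)
    return (float((xs * subimage).sum() / total),
            float((ys * subimage).sum() / total))

def slopes(subimages, references):
    """Spot displacements from reference positions = local wavefront slopes."""
    out = []
    for sub, (rx, ry) in zip(subimages, references):
        cx, cy = spot_centroid(sub)
        out.append((cx - rx, cy - ry))
    return out

# One 5x5-pixel subaperture whose spot has shifted one pixel to the right
spot = np.zeros((5, 5))
spot[2, 3] = 1.0
print(slopes([spot], [(2.0, 2.0)]))  # [(1.0, 0.0)] -> a pure tilt along x
```

A real sensor does this for every lenslet simultaneously and then reconstructs the full wavefront map from the grid of slopes.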

DMs for astronomy are usually made of a very thin sheet of glass or are membrane mirrors. Attached to the back of the membrane are actuators (pistons), which expand or contract in length in response to a voltage signal and bend the mirror locally. If the DM has a depression that is half the depth of the distortion in the wavefront, the light reflected from the depression travels an extra path equal to twice that depth, that is, equal to the full distortion; the rest of the wavefront therefore catches up with the distorted section, and the reflected wavefront becomes flat. Different kinds of DMs are in use today: with piezoelectric actuators, with micro-electro-mechanical system (MEMS) devices, with LCDs and even liquid DMs using ferrofluids.
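The factor of half comes from reflection doubling the optical path, which a line of arithmetic (with an arbitrary 100-nm error) makes explicit:

```python
# On reflection, light entering a depression of depth d travels down and
# back out, picking up an extra path of 2*d. So a mirror depression of
# half the wavefront error cancels the error exactly.
wavefront_error_nm = 100.0                  # arbitrary illustrative figure
depression_nm = wavefront_error_nm / 2      # actuator stroke applied to the DM
extra_path_nm = 2 * depression_nm           # round-trip path added on reflection
print(extra_path_nm == wavefront_error_nm)  # True
```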

A DM usually has many degrees of freedom, and this determines the number of wavefront inflections that it can correct. The number of degrees of freedom in a DM can be roughly taken to be equal to the number of mechanical actuators. The DMs come in different sizes, and degrees of freedom range from a few tens to over a thousand. Technological advances have enabled new DMs that have over 5,000 actuators and WFSs that can handle this many degrees of freedom. Correspondingly innovative algorithms for computing systems too are under constant development.

Figure 4 compares stellar images obtained with and without AO. Practical AO systems, however, are expensive to build and incur large operational overheads. Therefore, only a handful of large observatories can afford to implement AO. Even in these, only a relatively small fraction of the available observing time is set aside for AO-mode operation because of the large operational costs. Despite this, the focus of AO has hitherto been only on large telescopes because of their inherent capability to view faint objects and, if suitable NGSs can be located nearby, AO can be used to view these faint objects sharply.

But there are a large number of small and medium telescopes (using mirrors of diameter 1-3 m). Besides their inherently limited light-gathering power, their observational capabilities are severely limited by atmospheric 'seeing'. What has not been so widely appreciated is that if the power of AO can be brought to them in an efficient and affordable way, it will open up hitherto unachievable observational possibilities. "It can make moderate-sized telescopes more powerful with an investment under $1 million," pointed out Shrinivas Kulkarni of Caltech, who is spearheading a collaborative project between the Pune-based Inter-University Centre for Astronomy and Astrophysics (IUCAA) and Caltech, begun two years ago, to implement AO on small and medium-sized telescopes. This project, called Robo-AO, is currently being implemented on the fully robotic 60-inch (1.5-m) telescope (hence 'Robo-AO') at the Palomar Observatory in the U.S. "The collaboration is leading innovation in astronomical adaptive optics," said Kulkarni.

Project Robo-AO's aim was to demonstrate a low-cost autonomous LGS AO system and science instrument, which works both in the visible and the near-infrared and is specifically designed to take advantage of modest-aperture telescopes by improving their sensitivity. "Robo-AO ushers in a new AO observing paradigm," said Christoph Baranec, who heads the Robo-AO group at Caltech and is the principal investigator of the Caltech-IUCAA collaborative project. "The emphasis is on robustness," he said in his presentation at the recently concluded workshop on Robo-AO at the IUCAA. This demonstrator system, he hoped, would serve as the archetype for a new class of affordable systems deployable on 1-3 m class telescopes. Its design is modular, and only small design changes are necessary to port the system to different telescope architectures, according to the project scientists. The idea then is to replicate the Robo-AO system in small and medium telescopes around the world, towards a Robo-AO network of telescopes. That, the scientists said, would bring the benefit of AO to the large community of moderate-diameter telescopes.

The three main components of the Robo-AO system are an LGS, an integrated AO and science camera system, and a robotic control system. The LGS is a 10-12 watt ultraviolet laser (λ = 355 nm) used in the pulsed Rayleigh beacon mode, focussed at an altitude of 10 km. "If the telescope is reduced in size to 1.5 m," Baranec said in an e-mail message, "the amount of missing data will be much smaller compared to, say, a 10-m telescope, and the AO system will actually work, as against a large telescope. A 1.5-m telescope will work a little better with a sodium beacon focussed at 90 km, but since sodium lasers are roughly $1m [million] with ongoing maintenance costs of $100,000/yr, they are very costly compared to the 10-15 km beacons, which can be purchased for less than $100,000 and last for several years before needing to be serviced."

The AO and camera systems are all based on components and systems that can be bought off the shelf. The WFS part of the AO system is performed with an 11x11 Shack-Hartmann sensor with a high quantum efficiency of 72 per cent at the laser wavelength. While the zeroth order wavefront correction is done by a piezoelectric tip-tilt mirror with a capability of up to 4 arcsec tip-tilt correction, higher order wavefront correction is done by a 12x12 actuator MEMS mirror. The pilot AO system has reportedly been working successfully at 1.5 kHz since it was mounted on the P60 telescope.

For science purposes, the system includes visible, infrared and near-IR cameras for imaging. The first is an electron-multiplying low-noise visible CCD camera and the second an indium-gallium-arsenide (InGaAs) IR camera, both of which are readily available. It is in adapting the generally available mercury-cadmium-telluride (HgCdTe) based near-IR camera, known as Hawaii-2RG, that IUCAA has played a significant role. The near-IR H2RG camera is, however, being planned only as a future upgrade for Robo-AO. Almost all hardware development for this upgrade (optics, electronics and a good fraction of the software) will be carried out at IUCAA.

According to A.N. Ramaprakash of the IUCAA, who leads the IUCAA Robo-AO effort, these HgCdTe sensors, bonded on to a silicon circuit, are the most successful near-IR detectors; they are used in a number of large telescope observatories and will also be flown on the proposed James Webb Space Telescope. However, the detectors are extremely expensive; for example, a 4-megapixel detector costs about $400,000. "In view of this demand, in IUCAA we have achieved the ability to handle these detectors by designing an appropriate detector control and data acquisition and handling system," Ramaprakash said in an e-mail exchange. It allows astronomers to run simple software on a Linux platform and control the detector using a standard USB connection, he said.

This has actually resulted in technology developed for Robo-AO flowing back to large telescopes. The IUCAA-designed controller, called I-SDEC, has in fact been chosen for building a near-IR spectrograph for the 11-m Southern African Large Telescope (SALT). It will also be used by a University of Florida team for an instrument called CIRCE being built for the 10.4-m Gran Telescopio CANARIAS (GTC) at La Palma, Spain. In return for its contribution, the IUCAA will get observing time on SALT and GTC, Ramaprakash said.

According to him, for the current version of Robo-AO, which is being commissioned on the P60, the other areas in which the IUCAA's labs were directly involved include the development of a Linux-based driver for the MEMS-based DM; a tip-tilt mirror driver and its integration with the MEMS mirror driver system; the optics design of the laser launch system; and the design and assembly of the electrical and control system, including software for the integration of environment and safety sensors (temperature, humidity, and so on) and feedback.

The robotic operation and control, a crucial part of the Robo-AO idea, runs on a consumer-grade personal computer running Fedora Linux, a free, open-source operating system. According to the scientists, the software operates all the subsystems as a single instrument. The system is thus able to execute fully autonomous observations, directed by a very efficient and intelligent observation-queue scheduling system. It is designed to handle up to 150 targets a night, with each observation lasting two minutes. "This ability to handle a large number of targets a day in an automated mode will be ideal for following up the large number of planned large surveys by various astronomy groups," said Kulkarni. Robo-AO on small telescopes, with its minimal observation overheads and high per-night observing rate, endows them with the capability of large, 1,000-plus or even 10,000-plus target, high-resolution surveys in single campaigns lasting multiple weeks, pointed out Baranec.

The ability to perform observations in a science programme with Robo-AO on small telescopes, he added, is orders of magnitude greater than what an astronomer could realistically achieve at the world's largest apertures. If one wanted, say, to look at 10,000-odd potential lensed-quasar candidates with the 10-m Keck telescope, it might take a century, since any given astronomer may be able to get only one or two nights a year on that telescope. On small telescopes, however, where there is typically much less demand, one could execute all the necessary observations in a matter of months, both because of the availability of the system and because of the efficiency afforded by making the system completely robotic and autonomous. Robo-AO's potential contribution to science comes from this unique ability to obtain near diffraction-limited observations of a large number of targets, and the Robo-AO team is planning to apply this capability in three broad areas: large single-image surveys, rapid transient characterisation and time-domain astrometry, particularly high-precision astrometric characterisation of binaries and searches for planets.

According to the scientists, in terms of performance in the near-IR, Robo-AO's observations will give small telescopes capabilities equivalent to those of 4-m-plus aperture telescopes at a much lower cost and with greater flexibility. While in the IR Robo-AO's Strehl ratios are in the range of 0.5-0.7, in the visible it can deliver Strehl ratios of 0.1-0.2, an order of magnitude better than large telescopes with traditional AO, according to Baranec. The ability to do AO correction in the visible is the other unique capability of Robo-AO, he pointed out. "We are not limited by physics to do so on larger telescopes; it's just that the technology is probably several decades away," he said. This, pointed out Ramaprakash, results from a combination of careful error budgeting from the design phase itself, the presence of an atmospheric dispersion corrector, the atmospheric characteristics of the site, and so on. Robo-AO's angular resolution in the visible is 0.1-0.15 arcsec, while in the IR it is 0.2-0.25 arcsec. Table 2 gives a comparison of the characteristics of a 1.5-m telescope with a traditional AO system and with the Robo-AO system.

The Palomar P60 telescope, with the AO system fully mounted, is now undergoing final tests towards being commissioned soon. After commissioning, it will have a month-long science demonstration run before being used for regular observations. A copy of the first Robo-AO system will then be deployed on the 2-m telescope at the IUCAA Girawali Observatory near Pune. A third, NGS-only variant of Robo-AO is being developed for the 1-m telescope of Pomona College at Table Mountain in California. After a proprietary period, the design and software for Robo-AO will be made public under a General Public Licence, according to the scientists.
