Are opinion polls relevant?

Print edition : March 07, 1998
VENKATESH ATHREYA

NOT very long ago, public discussions on possible trends in Indian popular opinion on any important issue would take place mainly on the basis of impressionistic reports of journalists or the interested views of parties seeking to influence such opinion. Within the last decade and a half, however, a sea change has occurred. The conduct of surveys of public opinion on issues has come into vogue in a major way. Particularly at times such as the present, in the run up to a crucial general election, such surveys or opinion polls attract considerable attention. Equally important, though hardly surprising, is the fact that they evoke a great deal of controversy.

Reactions to the practice of conducting election-related public opinion surveys and disseminating their results vary widely. They range from sheer incredulity about their ability to gauge popular intentions to grave concern about their misuse. A well known journalist in Tamil Nadu asked this question on a TV channel recently: how can electoral outcomes for a whole State be forecast from the responses of select voters in a handful of constituencies? From another end, the Election Commission has taken the view that dissemination of the predictions of surveys on electoral outcomes could seriously and, in its opinion, unwarrantedly influence voter intentions, and has issued a press note "banning" such dissemination over a two-week period. The Commission's fiat has been challenged in the highest judicial forum of the country. The final verdict in the case is clearly going to be of great public importance in the context of the need for informed public discussion and the fundamental right of free speech.

Be that as it may, the question remains: Are public opinion surveys relevant, reliable or useful?

The answer, quite simply, is that if such surveys use a well-designed scientific methodology which is made transparent, they serve the very useful purposes of providing important clues to the opinions of the public at large on important issues, and of promoting informed public debate on such issues. A point that needs to be underlined is that the science of mathematical statistics makes it eminently possible to provide reasonable assessments about the intentions of a population as a whole from data on such intentions pertaining to a sample of respondents drawn from the population. What is required to ensure this is that the sample drawn be representative of the population. The way this is done, to put it as simply as possible, is by drawing a random sample. The word "random" has a specific technical meaning here, and is not to be confused with the misleading popular uses of the term (such as, for instance, accosting the first ten people one meets on the road and eliciting information from them).

A simple random sample (SRS) is one where every member of a population has an equal chance of being drawn into the sample. For some purposes, for example, if a population consists of distinct heterogeneous groups which can be easily identified and set apart, we may divide the population into such groups or "strata" and draw a simple random sample from each stratum. This is called "stratified random sampling". Without going into further technical discussion, we can take it for granted that the data obtained by evolving a scientifically appropriate procedure (called sample design) to draw a sample of respondents from a population, and obtaining their responses to a set of questions, can be used, with the help of the principles and procedures of statistical inference, to make forecasts/assessments for the population as a whole.
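The two sampling schemes described above can be sketched in a few lines of code. The following is a minimal illustration, not any pollster's actual procedure: the electorate, its size, the region names and the sample size are all hypothetical.

```python
import random

random.seed(1998)  # for reproducibility of this illustration

# Hypothetical electorate of 10,000 voters, each tagged with a region
# that serves as a stratum. All names and sizes are illustrative.
population = [
    {"id": i, "region": random.choice(["North", "South", "East", "West"])}
    for i in range(10_000)
]

# Simple random sample (SRS): every voter has an equal chance of selection.
srs = random.sample(population, k=500)

# Stratified random sampling: draw an SRS within each stratum,
# in proportion to that stratum's share of the population.
def stratified_sample(pop, key, total_k):
    strata = {}
    for member in pop:
        strata.setdefault(member[key], []).append(member)
    sample = []
    for group in strata.values():
        k = round(total_k * len(group) / len(pop))
        sample.extend(random.sample(group, k))
    return sample

strat = stratified_sample(population, "region", 500)
```

Stratification pays off when the strata differ systematically from one another (as States or regions do politically), since it guarantees each group is represented in roughly its population proportion rather than leaving that to chance.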

Assessments about population behaviour based on data from a carefully drawn sample are of course not altogether free of error. There are two distinct types of error. One is what is called, in technical jargon, "sampling error" and arises from the sample design itself. The magnitude of this error can be worked out precisely once the sample design is given, and the forecasts based on the sample design can be denoted as being subject to this margin of error in either direction. The other type of error, called non-sampling error, arises from the actual process of collection of data. This category includes errors in observation, measurement and recording. This latter error cannot be estimated with any degree of precision but can be minimised considerably, though not completely eliminated, by exercising great care in the process of data collection.
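The statement that sampling error "can be worked out precisely once the sample design is given" can be made concrete for the simplest case. For a vote share (a proportion) estimated from a simple random sample of size n, standard statistical theory gives the 95 per cent margin of error as roughly 1.96 times the standard error of the proportion. The figures below are illustrative, not drawn from any of the surveys discussed.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative: a formation polling 33% support among 10,000 respondents.
moe = margin_of_error(0.33, 10_000)
# moe is about 0.009, i.e. the vote share estimate is 33% plus or
# minus roughly one percentage point, at 95% confidence.
```

Note that this formula applies to a simple random sample; more complex designs (stratified or clustered, as actual election surveys are) require design-specific adjustments, and it says nothing about non-sampling error.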

Chief Election Commissioner M.S. Gill displays his identity card at a polling station. (Photograph: Shanker Chakravarty)

Against this rather cursory overview of the scientific basis and legitimacy of public opinion surveys, let us now take a look at the several pre-poll surveys pertaining to the election to the 12th Lok Sabha that have appeared in the recent period. Eight such surveys have been compared in the accompanying table. Before turning to the comparison, a couple of general points need to be made.

Practically all the surveys have chosen the sample of Lok Sabha constituencies purposively -- and not by random sampling. This appears to be based on the professional judgment that a simple random sample will not capture the enormous diversity of the political scenario across the country or the variety of political alliances and adjustments that have been worked out.

A second feature, common to all surveys, is the element of arbitrariness in the methodology by which the percentage vote shares of various political formations (as derived from expressed voting intentions of sample respondents) are converted into Lok Sabha seats for them. In our first-past-the-post electoral system, an element of judgment is inevitable in moving from vote share forecasts to seat predictions, since there is no one-to-one correspondence between the two. Factors such as the degree of geographical concentration of a political formation's voter support, the degree of polarity in the electoral contest (whether a contest is bipolar, tripolar or even more multipolar) and the complexity (both on paper and in practice) of the electoral adjustments worked out are all crucial to determining the extent to which an increase in vote share translates into an increase in seats. This particular aspect needs to be borne in mind when evaluating or utilising opinion poll survey results.

It can be seen that there is quite some variation in the vote shares forecast for each political formation across the different surveys. Thus the BJP front's vote share is forecast to be as high as 36 per cent by the C-Voter poll of end-December 1997/early January 1998, and as low as 29.7 per cent by the A.C. Nielsen survey of end January 1998. The data also suggest that the early BJP front surge (reflected partially in the jump for the front's vote share from the Operations Research Group-Marketing and Research Group (ORG-MARG) figure of 29.9 per cent in mid-December 1997 to the C-Voter figure of 36 per cent) has been checked considerably, mainly on account of what has been termed the "Sonia effect" (Frontline, March 6, 1998), with all surveys converging on a somewhat lower figure of around one-third popular support for the BJP front by early February 1998. The Congress(I) seems to have gained significantly while the United Front has made up some of its lost ground. Interestingly, the seats forecast also show some degree of convergence across surveys, with all the pollsters except C-Voter agreeing that the BJP front will end up distinctly short of the magic figure of 272. The methods used to convert vote share into seats clearly play a crucial role here. Thus, the BJP front ends up with a seat tally of around 230, both in the Centre for Media Studies (CMS) poll which gives it a 34.8 per cent vote share and in the A.C. Nielsen poll (final round) which gives it 29.7 per cent vote share.

A priori, results from surveys with a significantly larger number of sample constituencies may be regarded as more robust. This is in some ways more important than the size of the sample in terms of the number of respondents per se, provided of course that the latter does not go below a critical minimum figure. It must be noted, however, that increasing the number of sample constituencies beyond a critical minimum size also does not yield much greater precision in vote share estimates. By these tentative criteria, one would consider the Nielsen and CMS surveys as more robust than the C-Voter and the first round Development and Research Services (DRS) surveys.

At the same time, the Centre for the Study of Developing Societies (CSDS) survey -- which has been characterised as the first all-India 'panel' study over two parliamentary elections, and one of the largest of its kind undertaken anywhere (India Today, February 23, 1998, p.18) -- has a serious methodological problem, even though its coverage in terms of Lok Sabha constituencies is the largest among all surveys considered here. The problem is that only around half of the chosen random sample of 15,500 voters could be contacted, a fact which, to be fair to the pollster, has been explicitly stated in the report on the poll. This introduces an unknown bias which casts serious doubt on the reliability of the vote shares forecast.

Recognising the limitations of various polls to which attention has been drawn, one is nonetheless struck by the broad convergence of the seat forecasts. The estimates of 220 to 240 seats or thereabouts for the BJP front, 140-160 seats for the Congress(I) front and 115-135 seats for the United Front are also consistent with the estimates from a booster sample of 4,000-odd respondents polled by CMS for the television channel, Asianet, in early February 1998.

It is well, however, to remember that these forecasts are based on several assumptions that can go awry. All surveys have employed their own specific methods for converting vote shares to seat shares. One does not have independent information on the quality of the field work of data collection carried out by investigators employed by the pollsters. Often, the combination of large sample sizes and tight deadlines for publication of findings can wreak havoc on the quality of field work. In every survey, a certain proportion of respondents remain undecided at the time of survey. Nor is it certain that all those who do express specific voting intentions in a survey will actually vote on election day, or that they will not change their minds in the period between the survey and voting day.

All the caveats notwithstanding, it needs to be emphasised that carefully designed opinion surveys have far more of an objective basis than impressionistic reports of journalists in the print medium or political commentators on the electronic media. What should be expected of any serious pollster is that the methodology -- by which is meant the entire gamut of details, including sample design, field work methods and procedures, and analytical procedures by which inferences concerning the population are drawn from sample data -- is explicitly and transparently stated.

A good example of such thoroughness is the pre-election opinion survey for Tamil Nadu conducted for Frontline by the APT Research Group. This gives not only details of sample design and field work methodology, but also the specific formula used to convert vote shares into seat forecasts. The formula is a modified version of the "Cube Law" found appropriate for converting vote shares into seats in bipolar contests under the first-past-the-post system. (The 'Cube Law' states that if vote shares of the two dominant political formations in a first-past-the-post electoral system are 'a' and 'b', their seats will be in the ratio a&#179; to b&#179;. This has to be modified in the Indian context to take into account multipolarity and complicated electoral adjustments).
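The Cube Law as stated above (in its unmodified, strictly bipolar form, not the adjusted formula actually used by APT Research Group) can be computed directly. The vote shares below are illustrative.

```python
def cube_law_seats(a, b, total_seats):
    """Split total_seats between two formations with vote shares a and b
    according to the Cube Law: seats are in the ratio a**3 : b**3."""
    ra, rb = a ** 3, b ** 3
    seats_a = round(total_seats * ra / (ra + rb))
    return seats_a, total_seats - seats_a

# Illustrative bipolar contest: 52% vs 48% of the two-party vote.
# 0.52**3 : 0.48**3 gives a seat split of roughly 56 : 44 out of 100 --
# a 4-point lead in votes becomes a 12-point lead in seats.
seats = cube_law_seats(0.52, 0.48, 100)
```

The example shows why the vote-to-seat conversion is so sensitive: under first-past-the-post, small shifts in vote share are amplified into much larger shifts in seats, which is precisely why the surveys' differing conversion methods can produce similar seat tallies from quite different vote share estimates.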

To sum up, results of carefully designed opinion surveys published along with a transparent and clear statement of methodology constitute an important input for informed public debate on the state of politics. The guarantee against "rogue polls" by political or other vested interests cannot be provided by outlawing conduct and dissemination of public opinion polls -- which in any case have not been shown to influence voter intentions significantly so far -- but only by encouraging scientific opinion polls and subsequent critical discussion of their results.

Venkatesh Athreya is Professor of Economics, Indian Institute of Technology, Chennai.
