A Nobel-winning prescription for symptoms

As the euphoria over India’s latest Nobel Prize win subsides, it is time to question the validity of the randomised controlled trials advocated by the prize winners as the gold standard in the fight against poverty.

Published : Nov 08, 2019 11:34 IST

Abhijit Banerjee and Esther Duflo during a press conference at MIT on October 14.

If the media is a mirror to reality, the dismal science has a new rock star, a home-grown one at that. The widespread adulation in India over the award of the Nobel Prize in Economics for 2019 to Abhijit Banerjee, his wife and colleague Esther Duflo, both professors at the Massachusetts Institute of Technology (MIT), and Michael Kremer of Harvard University is, without exaggeration, truly unprecedented.

No Indian Nobel Prize winner, not even the last Indian to win the same prize, Amartya Sen in 1998, has enjoyed such feverish adulation in the Indian media as Abhijit Banerjee has. He has, maybe unwittingly, prolonged the applause by participating in a series of interviews with every conceivable media segment, including, perhaps a first for a Nobel winner, an interview with a comedian who performs for a YouTube audience.

Given the largely one-sided commentary in the popular media, barring a few exceptions, a dampener is, perhaps, in order. So, why grudge an Indian’s Nobel?

This year’s Nobel for Economics, officially the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, was awarded to the trio “for their experimental approach to alleviating global poverty”.

While announcing the winners, the Royal Swedish Academy of Sciences observed that the prize was a reward for their development of “a new approach to obtaining reliable answers about the best ways to fight global poverty”.

The Academy noted that the three researchers had broken up the hugely vexatious issue of global poverty into “smaller” and “more manageable” sets of questions. “Carefully designed experiments” could then be used to address these targeted questions such as those aimed at improving access to health, education, nutrition or a hundred other issues that development economists have for long suggested as belonging to the “culture of poverty”.

At the heart of the “new” in new development economics, which has rapidly gained currency since the mid-1990s, is the use of randomised controlled trials (RCTs) to address every conceivable aspect of the problems associated with poverty.

Thus, RCTs have been used to “verify” the effectiveness of microcredit or of access to credit, health, education, nutrition and immunisation programmes, or the extent of gender equality in a given social setting. In fact, the trio have conducted, or are conducting, more than 1,000 RCTs in 83 countries. Of these, “about 20-25 per cent” were in India, Abhijit Banerjee said in a recent interview.

The evangelical zeal with which the RCT is being propagated rides on the back of the claim that it is the “gold standard” to identify “what works” in the fight against poverty in a particular social setting, generally in a compactly defined locale for a particular kind of intervention, such as microfinance or immunisation.

The application of the RCT as a modern statistical device has a history of nearly a century, starting with the work of Ronald Aylmer Fisher, the British statistician and geneticist who first applied the technique to a large volume of data from agricultural crop experiments.

Significantly, Fisher, who incidentally visited the Indian Statistical Institute in Kolkata in the 1930s, was a strong advocate of experimentation of all kinds, not just the randomised kind. The real boost to RCT came with its widespread adoption in medicine, particularly in clinical trials and in drug testing and development.

The idea is disarmingly simple: divide the test population randomly into two groups, one that serves as the control (receiving, at most, a “placebo”) and the other that receives the proposed “treatment”. It is absolutely critical that the two groups are alike, on average, in every other respect in order to gauge the impact of the “treatment”. A “baseline” study undertaken before the intervention serves as the benchmark against which a follow-up survey, conducted afterwards, measures the efficacy of the intervention.
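The logic of such a trial can be sketched in a few lines of code. The sketch below is purely illustrative: the sample size, the outcome scale and the assumed treatment effect of 3 points are invented for the example, not drawn from any actual study.

```python
import random

random.seed(1)

# Simulate 1,000 comparable households. Randomisation balances the two
# groups on average, so any systematic difference in outcomes can be
# attributed to the treatment.
n = 1000
baseline = [random.gauss(50, 10) for _ in range(n)]  # baseline outcome, e.g. a test score

# Randomly assign half to treatment, half to control.
indices = list(range(n))
random.shuffle(indices)
treated = set(indices[: n // 2])

# Assume (purely for illustration) the intervention raises the outcome by 3 points.
TRUE_EFFECT = 3.0
endline = [
    baseline[i] + random.gauss(0, 5) + (TRUE_EFFECT if i in treated else 0.0)
    for i in range(n)
]

# The estimated effect is the difference in mean endline outcomes.
mean_t = sum(endline[i] for i in treated) / len(treated)
mean_c = sum(endline[i] for i in range(n) if i not in treated) / (n - len(treated))
estimate = mean_t - mean_c
print(f"estimated treatment effect: {estimate:.2f}")  # close to, but not exactly, 3.0
```

Because assignment is random, the difference in means recovers the true effect only up to sampling noise; re-running the sketch with a different seed yields a slightly different estimate.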

The innovation in the “new” development economics has come from the adoption of the RCT as a tool for ascertaining “what works” in the battle against poverty.

Abhijit Banerjee, Esther Duflo and Sendhil Mullainathan (a professor of computational and behavioural science at the University of Chicago) founded the Abdul Latif Jameel Poverty Action Lab (J-PAL) at MIT in 2003. Since then, J-PAL has been the prime evangelist of a “scientific evidence-based” approach to policies that aim to rid the world of poverty.

The J-PAL website says that more than 190 professors from universities across the world work with it in conducting RCT-based field trials, mostly in developing countries. A recent academic paper by a team of four scholars based in Paris and Geneva estimated that 29 per cent of RCTs undertaken by J-PAL (completed and ongoing) were targeted at the microfinance segment. J-PAL’s list of donors reads like a who’s who of the usual suspects in global philanthropy.

Significantly, the wildfire of RCTs in the last decade happened at a time when the credibility of professional economists slid dramatically in the aftermath of the global financial crisis when the profession’s dons were caught playing the fiddle as the world burned.

In their latest book, Good Economics for Hard Times, exquisitely timed with their winning of the Nobel, Abhijit Banerjee and Esther Duflo cite a poll of professional groups conducted in the United Kingdom by YouGov that asked respondents: “Of the following, whose opinions do you trust the most when they talk about their field of expertise?” Economists were ranked second last among all professions polled, just above politicians. The authors note sardonically that even weather forecasters, regarded in popular opinion as those who often get it wrong, were rated twice as trustworthy as economists.

Rise of ‘scientific’ economics

What explains the remarkable rise of evidence-based “scientific” economics practitioners at precisely the time when the profession has lost credibility and legitimacy?

The first factor is the marginalisation of development economics, which had its heyday in mainstream economics between the 1950s and the 1980s. The particular problems of newly independent countries, including India, found expression in this current until it was displaced by the rising tide of neoliberalism in the early 1980s.

The second reason behind the shift in the economics profession pertained to its close embrace of economic policy, unlike earlier. In the 1950s, the 1960s and the 1970s, academic doyens such as K.N. Raj and Sukhamoy Chakravarty helped guide policy while studiously maintaining a distance, in order to ensure that their credibility in the profession remained untainted by their dalliance with policy.

The engagement of “professional” economists, purely as experts, ostensibly because they gave policy an evidence-based nudge, brought them into a bear hug with political masters who were themselves looking for a way to legitimise their actions (or inaction).

A cursory examination of the difference between the traditions of the Planning Commission and the NITI Aayog, which replaced it after Narendra Modi became Prime Minister in 2014, illustrates the point that “professional” economists are indistinguishable from politicians to whom they provide their services as unbiased evidence-based practitioners.

‘Act like plumbers’

Esther Duflo, in a 2017 paper, urged economists to act like plumbers, calling on them to come down from their ivory towers and address real-world problems. Abhijit Banerjee has also frequently pointed out that economists’ obsession with big and weighty issues has prevented them from acting in more concrete ways to solve problems.

The Marxist economist Michael Roberts observes that this approach “ignores whether the plumbing is designed properly in the first place. Far from fixing leaks, economists may be trying to stop a flood with a spoon.”

Ironically, despite their claim to focus on the small, Abhijit Banerjee and Esther Duflo’s latest book promises to “show how economics, when done right, can help us solve the thorniest social and political problems of our day”. The book notes that “immigration and inequality, globalisation and technological disruption, slowing growth and accelerating climate change are sources of great anxiety across the world”, without revealing how their toolkit is to address these weighty issues.

Instead, we get the glib assertion: “The resources to address these challenges are there—what we lack are ideas that will help us jump the wall of disagreement and distrust that divides us.”

Celebrating fatalism

This brand new way of approaching development economics amounts to celebrating fatalism. It prescribes a move away from addressing the structural and deep-rooted problems of development, abandons the critical role of human agency in social change, and instead develops quick fixes that address the symptoms while allowing the malaise to fester.

This is in keeping with a subversive undercurrent in development economics that has been at least three decades in the making. Until then, traditional mainstream economics had played second fiddle to theories of economic development that embedded economic questions in social, political and economic processes.

This hijacking of the agenda of development economics was roughly coterminous with the unleashing of the neoliberal onslaught in economics. The first wave began when institutions such as the World Bank started unpacking a “poverty package” to handle the aftermath of the social and economic wreckage caused by its blanket recommendations of privatisation and deregulation, and by the International Monetary Fund’s imposition of a standardised Structural Adjustment Programme recommendation for countries that were far from similar.

As a result, what remains of development economics is a carcass that is focussed narrowly on poverty alleviation while disengaging from its rich tradition that was not morbidly obsessed with measurement. Interestingly, the title of a recent book, The Tyranny of Metrics , appropriately describes this phenomenon although it deals with the obsession with algorithm-based measurements, which, in the name of being “unbiased”, have captured huge swathes of popular life, from credit ratings to university rankings.

A flawed approach

Angus Deaton, winner of the Economics Nobel in 2015, in his book The Great Escape: Health, Wealth and the Origins of Inequality observed that “what works” in a particular project or a case of policy intervention is “unlikely to reveal anything very useful about what works or does not work in general ” (emphasis in original).

He pointed out that the experimental groups are very small and the trials expensive to run, which limits their applicability elsewhere. As a result, the findings of an RCT cannot readily be generalised to another locale. “Even if an aid-financed project is the cause of people doing well—and even if we were to be absolutely sure of that fact—causes usually do not operate alone; they need various other factors that help them to work. Flour ‘causes’ cakes, in the sense that cakes made without flour do worse than cakes made with flour—and we can do any number of experiments to demonstrate it—but flour will not work without a rising agent, eggs, and butter, the helping factors that are needed for the flour to ‘cause’ the cake.”

In fact, the evolution of the microfinance industry itself offers a lesson on how it is not the same thing in every location and circumstance. In the initial stage of its evolution, it was envisaged that the new means of “small-ticket” development financing offered tremendous potential for human agency.

There was the hope that women-based self-help groups working in this field, more than others, would demonstrate that human agency could be harnessed in the struggle for gender equality. But now, microfinance buccaneers have replaced traditional moneylenders as the sharks roaming the countryside, and the RCT-based model, being obsessively evidence-based, does not differentiate between them and the not-for-profit microfinance agencies.

The RCT-based model offers no hope of initiating change through human agency.

Nancy Cartwright, a philosopher of science and author of numerous books, has been a trenchant critic not only of the exclusive dependence on RCTs but also of the way RCTs are prone to serious errors if not cross-checked against evidence made available by other modes of inquiry.

In a recent study titled “Understanding and misunderstanding randomized controlled trials” (2018), she pooh-poohs the general notion that RCT is infallible in medical research. “The lay public, and sometimes researchers, put too much trust in RCTs over other methods of investigation,” she said in the paper co-authored with Deaton.

“Contrary to claims in the applied literature, randomisation does not [emphasis in original] deliver a precise estimate of the average treatment effect.... Finding out whether an estimate was generated by chance is more difficult than commonly understood.”

In effect, what Nancy Cartwright is saying is that there is potentially a wide gap between “what worked” for a particular person/situation/case and the average as revealed by the RCT results. In the case of medicines, for instance, she places far more importance on the “conversation” between the patient and the doctor who knows a lot more about the patient than just the particular ailment that is sought to be addressed by a particular drug that has gone through RCTs.

“If your physician tells you that she endorses evidence-based medicine, and that the drug will work for you because an RCT has shown that ‘it works’, it is time to find a physician who knows that you and the average are not the same,” she observed.
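The gap between the individual and the average that Cartwright warns about can be illustrated with a toy calculation, again with invented numbers: suppose 70 per cent of patients gain 5 points on some outcome from a drug while the remaining 30 per cent lose 4. The headline average treatment effect an RCT would report is comfortably positive even though nearly a third of patients are made worse off.

```python
import random

random.seed(7)

# Two hidden types of patient (an assumption of this sketch, not of any
# real trial): 70% benefit by +5, 30% are harmed by -4.
n = 10000
effects = [5.0 if random.random() < 0.7 else -4.0 for _ in range(n)]

average_effect = sum(effects) / n
harmed_share = sum(1 for e in effects if e < 0) / n

print(f"average treatment effect: {average_effect:.2f}")        # roughly +2.3
print(f"share of patients made worse off: {harmed_share:.0%}")  # roughly 30%
```

A trial reporting only the average would declare that the drug “works”; the simulation makes plain what that single number conceals.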

Nancy Cartwright is careful to point out that RCTs are useful, especially in the absence of background information, when it is impossible to even hazard a guess about the average impact of a particular intervention. But that is a far cry from touting the RCT as the gold standard in any field, let alone assuming it is a silver bullet in the war against poverty.

Multiple problems

In the euphoria of the Nobel, much has been made of how RCTs are able to solve problems. One oft-cited case was a study which found that installing closed-circuit television cameras in schools improved student outcomes because the cameras deterred teacher absenteeism. This approach to problem-solving has four major problems.

First, while teacher absenteeism may indeed be a problem, there is no appreciation of the fact that absenteeism itself may be triggered by other “structural” factors, such as teachers not receiving their pay on time (not uncommon in many States), the gross lack of basic facilities in schools, and the fact that a typical teacher in rural India often has to deal with students from two or more grades at the same time in a single classroom. Placing the entire burden of the problem on the hapless schoolteacher, and adding insult to injury by demanding that she remain in view of the cameras at all times, is hardly problem-solving at its best.

The second problem arises from the researcher’s choice of what to measure, in this case teacher absenteeism, a choice shaped by an a priori bias in the researcher’s mind. Note that this obsession with measurement demands the exclusion of all other variables. For instance, girl students may not be attending school for the full day because they have a sibling to look after; or they may have to fetch firewood; or they may be missing school because it has no toilets for girls, still a grim reality in many urban and rural schools.

Each of the proximate causes for girls missing school requires a different set of “policy interventions” and some of them could actually be intertwined. The RCT as a magic wand would be unable to comprehend the complexity of the problem, let alone offer a solution.

Far from being shorn of the ideological baggage that the RCT’s adherents accuse others of carrying, the approach actually requires fairly strong and inflexible assumptions for the trials to “work”; the framing of the problem, as illustrated earlier, also reflects an ideological bias. Apart from all this, it assumes that the researcher knows what is best for the poor.

The third problem arises from the fact that RCTs pose moral and ethical dilemmas for those who are responsible for delivering the fruits of policy. Thankfully, we are still some distance away from a situation in which democratic checks and balances have been completely eliminated and we are instead commandeered by a techno-elite that holds the levers of policy without having any democratic sanction.

Imagine a drought-prone village about to undergo an RCT in which some in the village, the treatment group, would get a few kilograms of grain free, while those in the control group, as well as the rest of the village outside the experiment, would get nothing. While the researcher can be smug in the illusion that he has remained true to his craft, the local Block Development Officer, whose task it is to oversee “development” activities in the village, would have a war on his hands.

A fourth problem emerges from the fact that RCT designs, results and their interpretation are extremely complex and inaccessible to those who may have the greatest interest in verifying them. Given that only a few, even from the ranks of academics, can comprehend the methodology and the results, there is a serious issue of credibility. This is not just a matter of not having the expertise but one that poses moral dilemmas.

The recent paper by researchers based in Paris and Geneva cited earlier attempted to replicate a study on microcredit in rural Morocco conducted by Esther Duflo and others, published in 2015, and found serious problems with “the reliability of the data and the integrity of the experiment protocol”. They discovered that the original findings held only because the authors of the study had “trimmed” outliers from the data. Although Esther Duflo and others responded to these observations, the fact that such a methodology is inscrutable not just to the target audience but also to those responsible for policy implementation makes it fraught with risk.

Win-win for government, academics

Several media outlets pointed out that Abhijit Banerjee and Esther Duflo have also been critical of the Modi regime, particularly the demonetisation of 500- and 1,000-rupee notes in November 2016. Banerjee’s utterances on the slackening pace of the Indian economy and his call for greater public spending instead of tax cuts for the rich were also cited as evidence of his concern for equitable growth.

While it is nobody’s contention that Abhijit Banerjee does not have a mind of his own, his recent utterances suggest that he is willing to make peace for the sake of his favourite toy, the RCT. In a recent interview with a leading English daily, he sounded almost like a revolutionary, suggesting that he was part of “a movement” and that the Nobel “validated” the use of RCTs (“people looking for hard evidence instead of shooting their heads off”). He pointed out that several States, including Haryana, Punjab, Tamil Nadu, Rajasthan, Bihar, Odisha and West Bengal, already have “partnerships” with J-PAL.

It appears that governments and academics touting their wares are locked in a win-win embrace. While pushing austerity, deregulation and privatisation, governments are only too willing to engage “experts” to handle policy that is cloaked in the garb of being based on evidence that is outsourced from value-neutral academics. Asked about the significance of the Nobel, Abhijit Banerjee said: “Hopefully, it will open up the right doors; more people will open up to the idea of doing RCTs. That is our core business.”

Well, that sums up what the prize means for the poor. It belies the hope, expressed by many after the winners were announced, that the poor and their problems have now been brought to centre stage. In reality, the bandwagon will chug along nicely even as the poor remain where they are.


References

Baele, Stephane J. (2013): “The ethics of New Development Economics: Is the Experimental Approach to Development Economics morally wrong?”, The Journal of Philosophical Economics.

Banerjee, Abhijit and Esther Duflo (2008): “The Experimental Approach to Development Economics”, Working Paper 14467, National Bureau of Economic Research.

Banerjee, Abhijit and Esther Duflo (2011): Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty, New York: PublicAffairs.

Banerjee, Abhijit and Esther Duflo (2019): Good Economics for Hard Times, Juggernaut Books.

Bedecarrats, Florent, Isabelle Guerin and Francois Roubaud (2017): “All that Glitters is not Gold: The Political Economy of Randomized Evaluations in Development”, Development and Change.

Bédécarrats, Florent et al. (2019): “Estimating microcredit impact with low take-up, contamination and inconsistent data. A replication study of Crépon, Devoto, Duflo, and Pariente (American Economic Journal: Applied Economics, 2015)”, International Journal of Re-Views in Empirical Economics, March 2019.

Cartwright, Nancy and Jeremy Hardie (2012): Evidence-Based Policy: A Practical Guide to Doing It Better, Oxford University Press.

Colander, David and Craig Freedman (2019): Where Economics Went Wrong: Chicago’s Abandonment of Classical Liberalism, Princeton University Press.

Deaton, Angus (2015): The Great Escape: Health, Wealth and the Origins of Inequality, Princeton University Press.

Deaton, Angus and Nancy Cartwright (2018): “Understanding and misunderstanding randomized controlled trials”, Social Science & Medicine.

Dreze, Jean (2018): “Evidence, policy and politics: A commentary on Deaton and Cartwright”, Social Science & Medicine.

Duflo, Esther (2017): “The Economist as Plumber”, Working Paper 23213, National Bureau of Economic Research.

Harrison, Glenn W. (2014): “Cautionary notes on the use of field experiments to address policy issues”, Oxford Review of Economic Policy, Volume 30.

Herrera, Remy (2006): “The Neoliberal ‘Rebirth’ of Development Economics”, Monthly Review.

Jamison, Julian C. (2017): “The Entry of Randomized Assignment into the Social Sciences”, Policy Research Working Paper 8062, World Bank.

Kabeer, Naila (2019): “Randomized Control Trials and Qualitative Evaluations of a Multifaceted Programme for Women in Extreme Poverty: Empirical Findings and Methodological Reflections”, Journal of Human Development and Capabilities.

Karlan, Dean and Jacob Appel (2011): More Than Good Intentions: How a New Economics is Helping to Solve Global Poverty.

Kremer, Michael (1993): “Population Growth and Technological Change: One Million B.C. to 1990”, Quarterly Journal of Economics.

Muller, Jerry Z. (2018): The Tyranny of Metrics, Princeton University Press.

Pearl, Judea and Dana Mackenzie (2018): The Book of Why: The New Science of Cause and Effect.

Ravallion, Martin (2016): The Economics of Poverty: History, Measurement and Policy, Oxford University Press.

Reddy, Sanjay (2013): “Randomise This! On Poor Economics”, Review of Agrarian Studies.

Shaffer, Paul, Ravi Kanbur and Richard Sandbrook (2019): Immiserizing Growth: When Growth Fails the Poor, Oxford University Press.

