More thought has been given to the validity of the conclusions drawn from development impact evaluations than to the ethical validity of how the evaluations were done. This is not an issue for all evaluations. Sometimes an impact evaluation is built into an existing program such that nothing changes about how the program works. The evaluation takes as given the way the program assigns its benefits. So if the program is deemed to be ethically acceptable then this can be presumed to also hold for the method of evaluation. (I leave aside ethical issues in how evaluations are reported and publication biases.) We can dub these “ethically benign evaluations.”
Another type of evaluation deliberately alters the program’s (known or likely) assignment mechanism—who gets the program and who does not—for the purpose of the evaluation. Then the ethical acceptability of the intervention does not imply that the evaluation is ethically acceptable. Call these “ethically contestable evaluations.” The main examples in practice are randomized control trials (RCTs). Scaled-up programs almost never use randomized assignment, so the RCT has a different assignment mechanism, and this may be contested ethically even when the full program is fine.
A debate has emerged about the ethical validity of RCTs. This has been brewing for some time but there has been a recent flurry of attention to the issue, stimulated by a New York Times post last week by Casey Mulligan and various comments, including an extended reply by Jessica Goldberg. Mulligan essentially dismisses RCTs as ethically unacceptable on the grounds that some of those to whom a program is assigned for the purpose of evaluation—the “treatment group”—will almost certainly not need it, or benefit little, while some in the control group will. As an example, he endorses Jeff Sachs’s arguments as to why the Millennium Villages project was not set up as an RCT. Goldberg defends the ethical validity of RCTs against Mulligan’s critique. She argues, on the one hand, that randomization can be defended as ethically fair given limited resources and, on the other, that even if one still objects, the gains from new knowledge can outweigh those objections.
I have worried about the ethical validity of some RCTs, and I don’t think development specialists have given the ethical issues enough attention. But nor do I think the issues are straightforward. So this post is my effort to make sense of the debate.
Ethics is a poor excuse for lack of evaluative effort. For one thing, there are ethically benign evaluations. But even focusing on RCTs, I doubt that there are many “deontological purists” out there who would argue that good ends can never justify bad means, and so side with Mulligan, Sachs and others in rejecting all RCTs on ethical grounds. That is surely a rather extreme position (and not one often associated with economists). It is ethically defensible to judge processes in part by their outcomes; indeed, there is a long tradition of doing so in moral philosophy, with utilitarianism as the leading example. It is not inherently “unethical” to do a pilot intervention that knowingly withholds a treatment from some people in genuine need, and gives it to some who are not, as long as this is deemed to be justified by the expected welfare benefits from new knowledge.
Far more problematic is either of the following: (i) the intervention is expected to leave some of those receiving it worse off; or (ii) the treatment is knowingly withheld from people in need, and given to some who are not, even though there is already a strong prior case that its impact justifies its cost, so that little new knowledge can be expected from the evaluation.
The latter situation is clearly objectionable if it is seen to hold. But it is often hard to verify in development settings. Ethics has been much discussed in medical research. In that context, the principle of equipoise requires that there should be no decisive prior case for believing that the treatment has impact sufficient to justify its cost. (This is David McKenzie’s sensible modification of clinical equipoise to fit the types of programs in discussion here.) By this reasoning, only if we are sufficiently ignorant about the likely gains relative to costs should we evaluate further. Implementation of such an ethical principle may not be easy, however. In the context of antipoverty or other public programs, a priori (theoretical and/or empirical) arguments can often be made both for and against believing ex ante that impact is likely. A clever researcher can often create a convincing straw man to suggest that some form of equipoise holds and that the evaluation is worth doing. While this cannot be prevented, we should at least demand that the case is made, and that it stands up to scholarly public scrutiny. That is clearly not the norm at present.
It has often been argued that whenever rationing is required—when there is not enough money to cover everyone—randomized assignment is a fair solution. (Goldberg makes this claim, though I have heard it often. Indeed, I have made this argument a few times with government counterparts in attempting to convince them of the merits of randomization.) In practice, this is clearly not the main reason that randomistas randomize. But should it convince the unbelievers? It can be accepted when information is very poor, or allocative processes are skewed against those in need. In some development applications we may know very little ex ante about how best to assign participation to maximize impact. But when alternative allocations are feasible (and if randomization is possible then that condition is evidently met) and one does have information about who is likely to benefit, then surely it is fairer to use that information, and not to randomize, at least not unconditionally.
Conditional randomization can help relieve ethical concerns. One first selects eligible types of participants based on prior knowledge about likely gains, and only then randomly assigns the intervention, given that not all can be covered. For example, if one is evaluating a training program, or a program that requires skills for maximum impact, one would reasonably assume (backed up by some evidence) that prior education and/or experience will enhance impact, and design the evaluation accordingly, as in the sketch below. This has ethical advantages over simple randomization when there are priors about likely impacts.
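To make this concrete, here is a minimal sketch in Python of the “screen first, then randomize” idea; the eligibility rule, the schooling threshold, and the coverage rate are all hypothetical, chosen purely for illustration:

```python
import random

def conditional_randomization(candidates, is_eligible, coverage, seed=42):
    """Screen on prior knowledge about likely gains, then randomly assign
    treatment among the eligible, given that not all can be covered."""
    rng = random.Random(seed)
    eligible = [c for c in candidates if is_eligible(c)]
    n_treat = round(coverage * len(eligible))
    treated = set(rng.sample([c["id"] for c in eligible], n_treat))
    return {c["id"]: (c["id"] in treated) for c in eligible}

# Hypothetical screen: a training program expected to have larger impact
# for people with at least 8 years of schooling.
people = [{"id": i, "schooling": s} for i, s in enumerate([4, 9, 12, 6, 10, 8])]
assignment = conditional_randomization(
    people, is_eligible=lambda p: p["schooling"] >= 8, coverage=0.5)
print(assignment)  # treatment status for the four eligible candidates
```

Only the candidates who pass the screen enter the experiment; randomization then decides which half of them are treated, given the budget constraint.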
But there is a catch. The set of things observable to the evaluator is typically only a subset of what is observable on the ground (such information asymmetry is, after all, the reason for randomizing in the first place). At local level, there will typically be more information—revealing that the program is being assigned to some who do not need it, and withheld from some who do. The RCT may be ethically unacceptable at (say) village level. But then whose information should decide the matter? It may be seen as quite lame for the evaluator to plead, “I did not know” when others do in fact know very well who is in need and who is not.
Goldberg reminds us of another defense often heard, namely that RCTs can use what are called “encouragement designs.” The idea here is that nobody is prevented from accessing the primary service of interest (such as schooling); instead, the experiment randomizes access to some form of incentive or information. This may help relieve ethical concerns for some observers, but it clearly does not remove them—it merely displaces them from the primary service of interest onto a secondary one. Ethical validity still looms as a concern when any “encouragement” is being deliberately withheld from some people who would benefit and given to some who would not.
While ethical validity is a legitimate concern in its own right, it also holds implications for other aspects of evaluation validity. The ethical acceptability of RCTs is heterogeneous, varying from one setting to another. One can get away with an RCT more easily with NGOs than with governments, and with small interventions, preferably in out-of-the-way places. (By contrast, imagine a government trying to justify why some of its under-served rural citizens were randomly chosen not to get new roads or grid connections on the grounds that this will allow it to figure out the benefits to those that do get them.) An exclusive reliance on randomization for identifying impacts will likely create a bias in our knowledge in favor of the settings and types of interventions for which randomization is feasible; we will know nothing about a wide range of development interventions for which randomization is not an option. (I discuss this bias for inferences about development impact further in “Should the Randomistas Rule?”.) Given that evaluations are supposed to fill our knowledge gaps, this must be a concern even for those who think that consequences trump concerns about processes.
If evaluators take ethical validity seriously there will be implications for RCTs. Some RCTs may have to be ruled out as simply unacceptable. For example, I surely cannot be the only person who is troubled on ethical grounds by the (innovative) study done in Delhi, India, by Marianne Bertrand et al. that randomized an encouragement to obtain a driver’s license quickly, on the explicit presumption that this would entail the payment of a bribe to obtain a license without knowing how to drive. (This study was conducted and funded by the World Bank’s International Finance Corporation. And it was published in a prestigious economics journal.) The study confirmed that the process of testing and licensing was not working well even for the control group. But the RCT put even more drivers on Delhi roads who did not know how to drive, adding to the risk of accidents. The gain from doing so was a clean verification of the claim that corruption is possible in India and has real effects, though I am not aware of any prior doubt about the truth of that claim.
There may well be design changes to many RCTs that could assure their ethical validity, as judged (say) by review boards. One might randomly withhold the option of treatment for some period of time, after which it would become available, but this would need to be known by all in advance, and one might reasonably argue that some form of compensation would be justified by the delay. Adaptive randomizations are getting serious attention in biomedical research; for example, one might adapt the assignment to treatment of new arrivals along the way, in the light of evidence collected on covariates of impact. (The U.S. Food and Drug Administration issued guidelines a few years ago.)
The experiment might not then be as clean as in the classic RCT—the prized internal validity of the RCT in large samples may be compromised. But if that is always judged to be too high a price then the evaluator is probably not taking ethical validity seriously.
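To illustrate the adaptive idea, here is a minimal sketch of one crude outcome-adaptive rule, a biased coin that tilts the assignment probability toward the arm doing better so far; it is only a stand-in for the formal adaptive designs in the biomedical literature, and the tilt parameter is arbitrary:

```python
import random

def adaptive_assignment(history, tilt=0.15, seed=None):
    """Biased-coin rule: shift the treatment probability for a new arrival
    toward whichever arm has shown the larger average outcome so far."""
    rng = random.Random(seed)
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0
    gain_t = mean([y for arm, y in history if arm == "T"])
    gain_c = mean([y for arm, y in history if arm == "C"])
    p_treat = 0.5 + tilt * ((gain_t > gain_c) - (gain_t < gain_c))
    return "T" if rng.random() < p_treat else "C"

# Hypothetical interim data: (arm, observed outcome) for early arrivals.
history = [("T", 1.2), ("C", 0.4), ("T", 0.9), ("C", 0.6)]
print(adaptive_assignment(history, seed=1))  # arm for the next arrival
```

Under such a rule, fewer people end up randomized into an arm that the accumulating evidence suggests is inferior, which is precisely its ethical appeal.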
Martin Ravallion
(First posted on the World Bank’s Development Impact blog.)
In 2013 the World Bank announced that one of its two goals is to “share prosperity,” which is to be measured by the growth rate in mean consumption (or income) for the poorest 40% of the population. (Its other goal is eliminating absolute poverty.)
The growth rate in the mean for the poorest 40% has the appeal of simplicity. But that comes with a cost. An important concern is that it tells us nothing about how the gains from rising prosperity are being shared amongst the poorest 40%, or how the losses from economic contraction are being spread. For example, the mean of the poorest 40% could rise without any gain to the poorest.
That limitation is important in the light of recent research. While the developing world as a whole has made huge progress in reducing the numbers of people living in poverty, much less progress has been made in raising the developing world’s consumption floor—the level of living of the poorest. (I show this here.) If we are really committed to sharing prosperity then we should surely not be leaving the poorest behind.
There is a remarkably simple fix for the Bank’s measure of success in sharing prosperity: instead of measuring the growth rate of the mean for the poorest 40%, the Bank should measure the mean growth rate of the poorest 40%. This may sound like a nerdy quibble, but this subtle difference in wording makes a big difference to the measure’s properties. With the change, the measure of “shared prosperity” reflects how equitably aggregate gains have been shared amongst the poorest 40%. If inequality falls (rises) among the poorest 40% then the mean growth rate will be higher (lower) than the growth rate of the mean. The mean growth rate of the poorest 40% is also quite easy to calculate from any two standard household surveys. (They do not need to be panel data.)
Let’s take a simple example. Suppose that there are four representative people comprising the poorest 40%, with incomes (in $s per day) of $0.75, $0.75, $1 and $1. After some economic shock or policy change their incomes are $0.50, $0.75, $1 and $1.25, i.e., there is a gain of $0.25 for the least poor, at the expense of the poorest. The growth rate in the mean of the poorest 40% is zero, while their mean growth rate is -2% (the average of -33%, 0%, 0%, 25%).
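In code, the two measures for this example are computed as follows (a minimal sketch; with two actual cross-sectional surveys one would average growth rates across the quantiles of the poorest 40% rather than across matched individuals):

```python
before = [0.75, 0.75, 1.00, 1.00]   # $/day, the poorest 40%
after  = [0.50, 0.75, 1.00, 1.25]

# Growth rate of the mean: zero, since the mean is unchanged at $0.875.
growth_of_mean = (sum(after) / len(after)) / (sum(before) / len(before)) - 1

# Mean growth rate: the average of the individual growth rates.
rates = [a / b - 1 for a, b in zip(after, before)]
mean_growth = sum(rates) / len(rates)

print(growth_of_mean)  # 0.0
print(mean_growth)     # about -0.021, i.e., roughly -2%
```

The growth rate of the mean misses the loss to the poorest entirely, while the mean growth rate registers it.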
This change may still not be considered enough. There are other measures, although they often lose the advantage of simplicity. In more careful monitoring, a broader dashboard of measures will clearly be needed in assessing how well prosperity is being shared. For example, if one really cares about not leaving the poorest behind then one should also focus directly on that; there are now operational measures for that purpose, which are also easy to implement, as I show in this paper.
However, my point here is that one can improve the Bank’s measure with only a small change in wording. And the alternative measure can be implemented at virtually no extra cost in monitoring.
Martin Ravallion
(In the interests of full disclosure, I participated in the internal discussions at the Bank in 2012 about its goals. I am writing this post two years after leaving the Bank, and in the light of research since then.)
In an op-ed last year, “Reading Piketty in India,” I noted how poor the U.S. was in the mid-19th century. As best I can determine from the data available, the proportion of America’s population living below India’s poverty line was roughly as high then as it is in India today. The period 1850-1929 saw the poverty rate fall by some 20 percentage points, marking great progress against extreme poverty. A few people have asked me for more details. Here they are, with the calculations also extended to other rich countries.
We already know that the categories “developed” and “developing world” were much less relevant around 1800 than they are today (and they will undoubtedly become less relevant in the future). Of course, there were disparities in average levels of living across the countries of the world, but less so than today—indeed, quite possibly less than one finds amongst many developing countries today. Average living standards in 18th century Europe were higher than in Asia or Africa, but the proportionate difference was smaller than we see today.
Francois Bourguignon and Christian Morrisson assembled distributional data back to the early 19th century, to match up with Angus Maddison’s estimates of national income. Bourguignon and Morrisson only calculated poverty measures for the world as a whole. Using their database (which they kindly provided), I calculated the percentage of the population living below the Bourguignon-Morrisson “extreme poverty” line, back to 1820, for the countries in their study that are considered rich today. That line was chosen to be consistent with the poverty rate for 1990 implied by the Chen-Ravallion “$1 a day” line. Figure 1 summarizes the results.
These numbers should only be considered broadly indicative, especially given the paucity of good data for the 19th and early 20th centuries. Nonetheless, they suggest that today’s rich countries had poverty rates in the early and mid-19th century comparable to those found in even relatively poor developing countries today. The countries of today’s rich world started out in the early 19th century with poverty rates below the global average at the time, but not that much below. In most cases, their poverty rates fell dramatically in the 19th century (Japan was a late starter but caught up in the 20th century). Yet there is now virtually no extreme poverty left in the rich world, when judged by the standards of poor countries today.
Figure 1: Past poverty rates for today’s “rich countries”
Key: ACN: Australia-Canada-New-Zealand; ACH: Austria-Czechoslovakia-Hungary; BSM: Benelux-Switzerland-Micro-European States; PS: Portugal-Spain; UKI: United Kingdom and Ireland
Notes: Author’s estimates using parameterized Lorenz curves calibrated to the data set developed by Bourguignon and Morrisson (2002), which was kindly provided by the authors. Bourguignon and Morrisson used a poverty line based on that used by Chen and Ravallion (2001) for measuring poverty in developing countries. The estimates allow for the fact that Bourguignon and Morrisson anchored their measures to GDP per capita (from Maddison, 1995) rather than the survey-based means used by Chen and Ravallion. Bourguignon and Morrisson determined that the poverty line corresponding to the line of $1.08 per day used by Chen and Ravallion on survey-based distributions was $2.38 per day ($870 per year) when applied to GDP per capita.
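For readers curious about the mechanics, here is a minimal sketch of how a poverty rate can be backed out from a parameterized Lorenz curve and a mean; the one-parameter Lorenz form and all numbers are purely illustrative (the estimates reported above use the richer functional forms standard in this literature):

```python
def headcount(z, mu, lorenz_slope, lo=1e-9, hi=1 - 1e-9, tol=1e-10):
    """The headcount H solves mu * L'(H) = z: the quantile function
    y(p) = mu * L'(p) crosses the poverty line z at p = H."""
    f = lambda p: mu * lorenz_slope(p) - z
    if f(lo) >= 0:          # even the poorest are above the line
        return 0.0
    if f(hi) <= 0:          # everyone is below the line
        return 1.0
    while hi - lo > tol:    # bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

# Illustrative Lorenz curve L(p) = 1 - (1 - p)**a, slope a*(1 - p)**(a - 1).
a = 0.4
slope = lambda p: a * (1 - p) ** (a - 1)
# An $870/year line against a mean of $2,000/year (hypothetical numbers).
print(headcount(z=870, mu=2000, lorenz_slope=slope))  # about 0.13
```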
A couple of remarks can be made on the current relevance of these numbers. First, understanding the past success of today’s rich world against extreme poverty should be high on the list of research issues for development economics. (I explore the topic further in Ravallion, forthcoming.)
Second, when progress against poverty is measured in percentage points per year, it slowed down a lot toward the end (as can be seen in Figure 1). More surprisingly, when measured in proportionate terms, experiences differed greatly, as can be seen from Figure 2. Some countries (the US, the UK, Japan) saw steady progress in proportionate terms, while others saw more erratic rates of progress at low poverty rates. While we often assume that it will be a long hard slog to get the last few percentiles out of extreme poverty, some rich countries maintained steady progress to the end, and some even accelerated.
Figure 2: Annualized proportionate rates of change in the poverty rate from Figure 1
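Concretely, on my reading, the two speeds of progress being contrasted are, for a headcount rate $H_t$ between years $t_1$ and $t_2$:

$$ \frac{H_{t_2}-H_{t_1}}{t_2-t_1} \;\; \text{(percentage points per year, the slope in Figure 1)} \qquad \text{versus} \qquad \frac{\ln H_{t_2}-\ln H_{t_1}}{t_2-t_1} \;\; \text{(the annualized proportionate rate in Figure 2).} $$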
References
Bourguignon, Francois and Christian Morrisson, 2002, “Inequality among World Citizens: 1820-1992,” American Economic Review 92(4): 727-744.
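Chen, Shaohua, and Martin Ravallion, 2001, “How Did the World’s Poorest Fare in the 1990s?,” Review of Income and Wealth 47(3): 283-300.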
Maddison, Angus, 1995, Monitoring the World Economy. Paris: OECD.
Martin Ravallion
(This was first posted on the Center for Global Development’s Policy Blog.)
There is growing support in the rich world for a basic-income guarantee (BIG), in which the government would provide a fixed cash transfer to every adult, poor or not. In 2015, for example, the Swiss will vote on a referendum to introduce a BIG. We have not yet seen a national BIG rolled out, although there are policies in place with similar features. (For example, the US earned-income tax credit, while not strictly a BIG, shares some of its features.) Proponents say it’s an easy way to reduce poverty and inequality; if that’s so, it’s time to think BIG in the developing world, too.
Support for the BIG idea (also known as a poll transfer, guaranteed income, citizenship income, or an unmodified social dividend) has spanned the political spectrum. Some supporters see it as a “right of citizenship,” or a foundation for economic freedom that relaxes the material constraint on people’s choices in life. Others have pointed out that a BIG is an administratively easy way to reduce poverty and inequality, with modest distortionary effect on the economy as a whole. There are no substitution effects of a BIG on its own (there’s no action anyone can take to change their transfer receipts). Supporters also note there’s no stigma associated with a BIG, since it’s not targeted only to poor people. And a BIG may well be more politically sustainable than finely targeted options that may have a narrow base of support.
Opponents, on the other hand, echo longstanding concerns that the welfare state undermines work incentives. There may well be income effects of a BIG on demand, including for leisure. The effect on employment is unclear, however. The BIG could ease constraints on work opportunities, such as those that hinder self-employment or migration. On balance, work may even increase.
As with any social policy, a complete assessment of the implications of a BIG for efficiency and equity must also take all costs, and how it is financed, into account. The administrative cost would likely be low, though certainly not zero, given that some form of personal registration system would be needed to avoid “double dipping” and to ensure that larger households receive proportionately more. One low-cost way of doing this would be to establish a personal identification system, such as the Aadhaar in India.
Further, a BIG could be a feasible budget-neutral way of reforming social policies. There could be ample scope for financing it by cutting poorly targeted transfer schemes and subsidies heavily favoring the non-poor. A BIG scheme would easily replace many policies found in practice today. For example, it would clearly do better in reaching the poor than the subsidies on the consumption of normal goods (such as fuel) that are still found in a number of countries.
The un-targeted nature of a BIG runs against the prevailing view in some circles that finer targeting is always better. But that view is questionable. For example, recent research has shown that, once one accounts for all the costs involved in India’s National Rural Employment Guarantee Scheme, including the forgone earnings of participants, a BIG with the same budgetary cost would have greater impact on poverty than the labor earnings from the existing scheme. The work requirements of the employment scheme ensure that it is very well-targeted; even so, it is likely to be a less cost-effective way to reduce poverty than an untargeted BIG with the same budgetary cost, as the stylized sketch below illustrates. There may well be other advantages to India’s current scheme; for example, asset creation, risk mitigation, and empowerment. But it is not clear whether these benefits would tilt the balance relative to a far simpler BIG.
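A stylized illustration of why forgone earnings matter (all numbers hypothetical, not taken from the study of India’s scheme):

```python
def poverty_gap(incomes, z):
    """Total shortfall below the poverty line z (a simple poverty measure)."""
    return sum(max(z - y, 0.0) for y in incomes)

incomes = [0.6, 0.8, 1.0, 1.5, 2.5, 4.0]   # $/day for six people
z, budget = 1.9, 1.2                        # poverty line; transfer budget

# Untargeted BIG: everyone gets an equal share of the budget.
big = [y + budget / len(incomes) for y in incomes]

# Workfare: perfectly targeted to the two poorest, but each participant
# forgoes half the gross wage in earnings from other work.
workfare = list(incomes)
for i in (0, 1):
    workfare[i] += (budget / 2) * 0.5       # net gain after forgone earnings

print(poverty_gap(incomes, z))    # about 3.7 with no transfers
print(poverty_gap(big, z))        # about 2.9 under the BIG
print(poverty_gap(workfare, z))   # about 3.1 under workfare
```

The workfare scheme reaches only the poorest two people, yet the BIG cuts the aggregate poverty gap by more, because the participants’ forgone earnings absorb part of the budget.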
The BIG idea should be put on the menu of social policy options for developing countries.
Martin Ravallion
(This was first posted on the Center for Global Development’s Policy Blog.)
The challenges of measuring and monitoring global poverty have received a lot of attention in recent times. There have been debates about the Sustainable Development Goals, as well as some more technical debates. Assessments of progress against poverty at the country level, and most decisions about how best to fight poverty within countries, do not require global poverty measures. Nonetheless, such measures are important to public knowledge about the world as a whole, and they help inform the work of development agencies, including in setting targets for overall progress.
In a new working paper I discuss three current issues that are specific to global poverty monitoring, and propose some solutions.
The first relates to one of the main sources of dissatisfaction with prevailing poverty measures that use a constant real line, namely that they do not take account of the concerns people everywhere face about relative deprivation, shame and social exclusion; these can be termed social effects on welfare. The higher national lines found in richer countries partly reflect these social effects on welfare. But the differences in national lines also partly reflect more generous welfare standards for defining poverty in richer countries.
Yet we can all agree that we need to use a consistent welfare standard in measuring poverty globally. We need to be reasonably confident that people we judge to have the same level of welfare—the same capabilities, for example—are being treated the same way wherever they live. Amartya Sen put the point nicely: “…an absolute approach in the space of capabilities translates into a relative approach in the space of commodities.” But when we think about how best to do that, we run into the problem that we do not know whether the higher lines in richer countries reflect differences in the incomes needed to attain the same level of welfare, or instead higher welfare standards in richer countries.
The paper argues that two global poverty lines are needed—a familiar lower line with fixed purchasing power across countries and a new upper line given by the poverty line that one would expect given the country’s level of average income, based on how national poverty lines vary across countries. The true welfare-consistent absolute line lies somewhere between the two bounds. By this approach, to be judged “not poor” one needs to be neither absolutely poor (independently of where and when one lives) nor relatively poor (depending on where and when one lives).
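Schematically, and with hypothetical notation (the paper’s exact functional form may differ), a person with consumption $y$ in a country with mean income $m_i$ would be counted as poor if

$$ y < \max\{\, z^{L},\; \hat{z}(m_i) \,\}, \qquad \hat{z}(m_i) = \hat{\alpha} + \hat{\beta}\, m_i, $$

where $z^{L}$ is the fixed-purchasing-power lower line and $\hat{z}(m_i)$ is the national line predicted from the cross-country relationship between national poverty lines and mean income.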
The second problem is an evident disconnect between how poverty is measured in practice and the emphasis given in social policy and moral philosophy to leaving none behind. For example, a 2013 report initiated by the U.N. on setting the new SDGs argued that: “The indicators that track them should be disaggregated to ensure no one is left behind and targets should only be considered ‘achieved’ if they are met for all relevant income and social groups.” But how do we know whether anyone is being left behind? To assess whether the poorest are being left behind one needs a measure of the consumption floor. Here there is a severe data constraint, namely that a low observed consumption or income in a survey could be purely transient, and so unrepresentative of permanent consumption.
However, I have shown that a more reliable estimate of the consumption floor can be derived from existing measures of poverty under certain assumptions. This can be readily implemented from existing poverty data, and it provides a rather different vantage point on progress against poverty. While the developing world has made much progress in reducing the number of poor, there has been very little progress in raising the consumption floor above its biological level. In that sense, the poorest have been left behind. Progress against poverty should not be judged solely by the level of the consumption floor, but it should not be ignored.
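One simple estimator in this spirit (a sketch of the idea rather than necessarily the paper’s exact specification): if each poor person’s chance of being the very poorest is taken to be proportional to his or her poverty gap, then the expected floor can be written in terms of the standard poverty gap ($PG$) and squared poverty gap ($SPG$) indices for poverty line $z$:

$$ E\left(y^{\min}\right) \;=\; z\left(1-\frac{SPG}{PG}\right), $$

which follows on writing the expected minimum as the gap-weighted average of incomes below $z$.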
Finally, the paper reviews the ongoing concerns about the current Purchasing Power Parity (PPP) exchange rates from the International Comparison Program (ICP). (See, for example, the CGD blog post here, and the comments on that post; my new paper addresses this debate.) The days are (thankfully) gone when the community of users simply accepted without question the aggregate statistics produced by publicly-funded statistical organizations like the ICP. Recurrent debates about the ICP’s results have been fueled in part by poorly-understood methodological changes and in part by the ICP’s longstanding lack of openness, notably in access to primary data.
Calculating PPPs that are appropriate for global poverty measurement using ICP price data is not exactly easy, but nor is it the hardest task imaginable, as long as researchers have access to the data. There are also alternatives to using ICP prices, although further testing of their performance is needed. Even staying with the ICP, adjustments will be called for, such as dealing with urban bias in the price surveys. Going forward, better price-level comparisons for the purpose of measuring poverty, including sub-national analysis, require re-estimating the PPPs from the primary data. If the ICP is to continue to be a valuable resource, it needs to make the primary data public to facilitate such calculations.
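As a crude illustration of what “PPPs appropriate for poverty measurement” might involve, here is a sketch of a bilateral price index that reweights price relatives by the budget shares of poor households; the actual ICP-based methods are multilateral and far more involved, and all numbers here are hypothetical:

```python
def poverty_weighted_ppp(prices_local, prices_base, shares_poor):
    """Laspeyres-type bilateral PPP: price relatives weighted by the
    budget shares of poor households rather than national aggregates."""
    assert abs(sum(shares_poor) - 1.0) < 1e-9
    return sum(w * pl / pb
               for w, pl, pb in zip(shares_poor, prices_local, prices_base))

# Hypothetical three-good example: food gets a large weight for the poor.
ppp = poverty_weighted_ppp(
    prices_local=[30.0, 55.0, 120.0],   # local currency per unit
    prices_base=[1.0, 1.5, 4.0],        # base-country prices (US$) per unit
    shares_poor=[0.60, 0.25, 0.15])     # food, clothing, other
print(ppp)  # local-currency units per US$, at poor households' weights
```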
Each of the paper’s proposals for addressing these problems could undoubtedly be improved upon and refined if there is enough agreement that effort is needed to develop better global poverty measures along these lines. That effort is justified if our global measures are to continue to have relevance in global public knowledge, and to international policy making and poverty monitoring.
Martin Ravallion
This week saw the release of the World Bank’s updated global poverty counts. There are new country-level data on poverty and inequality underlying these revisions. But the big change is that the numbers are now anchored to the 2011 Purchasing Power Parity (PPP) rates for consumption from the International Comparison Program (ICP). Previously the numbers were based on the prior ICP round, for 2005. The Bank published a reasonably clear press release explaining that the new international poverty line is $1.90 per person per day at 2011 prices; also see this blog post by Bank researchers.
Some observers have said that $1.90 entails a large upward revision to the Bank’s global poverty line. An article in the Financial Times ran the headline “The Earth’s poor set to swell as World Bank shifts poverty line.” Similarly, in this Vox piece, CGD’s Charles Kenny and Justin Sandefur claim that this is “the biggest upward revision of the poverty line in 25 years.”
The FT article went further to suggest why this has happened, quoting Angus Deaton, a professor at Princeton, as claiming that the Bank has an “institutional bias towards finding more poverty rather than less.” By this view, there is a motive to the Bank’s seemingly large upward revision to its poverty line—to keep itself in business as the leading institution fighting global poverty. But this conspiracy theory makes little sense on closer inspection.
We must first understand that the $1.90 is in 2011 prices while $1.25 was in 2005 prices. Everyone knows about inflation. But how should one deal with inflation for this purpose? If one simply updates the $1.25 line for inflation in the U.S. one gets $1.44 a day in 2011. This was done in some calculations soon after the release of the 2011 ICP results, such as those by CGD researchers reported here. Updating the line for U.S. inflation 2005-11 greatly reduces the global poverty rate for 2011 when compared to the old PPPs.
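The arithmetic is simple: using the U.S. CPI-U (roughly 195 in 2005 and 225 in 2011),

$$ \$1.25 \times \frac{225}{195} \approx \$1.44. $$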
However, fixing the U.S. purchasing power of the international line over time is very hard to defend given the generally higher inflation rates in developing countries than in the U.S. While $1.44 a day in 2011 has the same purchasing power in the US as $1.25 in 2005, when $1.44 is expressed in the local currencies of developing countries using the 2011 PPPs, it has lower purchasing power in most of those countries than the prior $1.25 line in local currency adjusted for inflation in those countries. In that sense, using $1.44 in 2011 lowers the poverty line, and that is why one gets less poverty.
Instead, the Bank’s researchers went back to the national poverty lines for low-income countries that were used to derive the $1.25 a day line, as described here. They then updated those national lines to 2011 prices using the best available country-specific Consumer Price Indices. On converting these to 2011 PPPs and taking an average, they got $1.90. This is not the only way one could have updated the $1.25 a day line. One could instead have asked what the average national line is amongst the poorest “x” countries in 2011, which would have been more consistent with past methods used by the Bank. But the method they used to get to $1.90 is defensible, and it has the appeal that the underlying national lines for low-income countries have constant purchasing power over time.
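Schematically, as I understand the method, with $z_i$ the national line of reference country $i$ in local currency units:

$$ z_{2011} \;=\; \frac{1}{15}\sum_{i=1}^{15} \frac{z_i^{2005}\times\left(CPI_i^{2011}/CPI_i^{2005}\right)}{PPP_i^{2011}} \;\approx\; \$1.90, $$

where the 15 reference countries are those whose national lines underpinned the original $1.25 line.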
This is surely a strange way for the Bank to reflect the claimed bias toward overstating the extent of poverty. More plausibly, in my view, there is no such bias since the real value of the line is being held constant in poor countries.
Furthermore, none of this makes much difference to the pace of progress against extreme absolute poverty over time. As the Bank announced in its press release, that progress remains largely unchanged from the old PPPs. Indeed, the press release is quite upbeat on the pace of progress. This hardly sounds like a bias toward exaggerating the extent of global poverty!
There are, nonetheless, changes in the composition of the world’s poor, as the new ICP round has revised the PPPs for many countries. Those changes are not yet well understood. (See, for example, my comments here on India’s new PPP.) As I noted in a recent blog post, “We need better global poverty measures,” the ICP has not been as open as one would like about its price data. And the raw PPPs are not well suited to poverty measurement. The Bank’s researchers have done some “patch-ups” (such as adjusting for the evident urban bias in the ICP’s price surveys), but a more fundamental ICP overhaul is needed if the PPPs are to continue to be used in global poverty measurement.
I also argued in the same blog post that the absolute line of $1.25 a day in 2005 prices (or $1.90 a day in 2011 prices) is inadequate today. Two global poverty lines are now needed—a lower line with fixed purchasing power across countries and a new upper line given by the poverty line that one would expect given the country’s level of average income, based on how national poverty lines vary across countries. The true welfare-consistent absolute line—whereby one judges poverty by a common absolute standard of welfare, which may well require differing commodities in different settings—lies somewhere between the two bounds. By this approach, to be judged “not poor” one needs to be neither absolutely poor (independently of where and when one lives) nor relatively poor (depending on where and when one lives). Global poverty estimates for both bounds can be found here; the upper bound suggests less progress against poverty, but still progress. If anything, the World Bank is overestimating the pace of that progress.
My advocacy of this new “upper bound” is not some bias toward over-estimating poverty for conspiratorial reasons. Rather, it recognizes the differing social realities of what is needed to not be considered poor in today’s world. The World Bank and its critics also need to recognize those realities.
Martin Ravallion
(This was first posted on the Center for Global Development’s Policy Blog.)