Problems with the Pacific Index

The Pacific Islands region faces complex, and in some instances unique, development challenges. The small size of most of the island states, long distance to major regional markets, bilateral engagement from only a handful of donors, coupled with high aid dependency and lagging performance on human development indicators, make the region deserving of more attention and research. It is this context that made the Pacific Index, a scorecard of donor performance in the region, a promising research project. Unfortunately, despite its good intentions, the Index misses the mark in three particular ways, which I will outline in this post.

What is the Pacific Index?

The Pacific Index is a joint initiative of the Alfred Deakin Research Institute and Sustineo Pty Ltd that was released in June this year. Funded by a $150,000 ARC Linkages grant, the Pacific Index takes the Center for Global Development’s Commitment to Development Index (CDI) and replicates it for the Pacific context.

The CDI takes a holistic view of rich country interactions with developing countries (beyond the conventional aid paradigm), assessing seven specific indicators and ranking donors accordingly. The Pacific Index takes these same indicators and gives them a regional focus on the Pacific. Three of these indicators (aid, migration and trade) have been re-calculated to focus specifically on contributions to the Pacific. The other four (environment, finance, security and technology) are taken directly from the CDI because they focus on rich country commitments to global public goods. Each indicator is then given an equal weighting and countries are ranked based on their average score. The end product is interesting, and the authors should be commended for trialling a new approach and bringing potentially very useful data (hopefully soon to be released) together, but what do the results tell us?
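To make the mechanics concrete, here is a minimal sketch (my own reconstruction in Python, not the authors' code) of how an equal-weighted composite of this kind is put together. The indicator names come from the index; the country names and scores are purely hypothetical placeholders, not the published figures.

```python
# A minimal sketch (my reconstruction, not the authors' code) of an
# equal-weighted composite index as described above: three indicators
# re-calculated for the Pacific, four taken directly from the CDI,
# averaged with equal weights. All scores are hypothetical placeholders.

INDICATORS = [
    "aid", "migration", "trade",                         # re-calculated for the Pacific
    "environment", "finance", "security", "technology",  # taken directly from the CDI
]

def composite_score(scores):
    """Average the seven indicator scores with equal (1/7) weighting."""
    return sum(scores[i] for i in INDICATORS) / len(INDICATORS)

def rank_countries(country_scores):
    """Rank countries by composite score, highest first."""
    ranked = ((country, composite_score(s)) for country, s in country_scores.items())
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical example with made-up scores for two countries.
example = {
    "Country A": dict.fromkeys(INDICATORS, 6.0),
    "Country B": dict.fromkeys(INDICATORS, 4.5),
}
print(rank_countries(example))  # [('Country A', 6.0), ('Country B', 4.5)]
```

One consequence of the equal weighting is that a strong showing on the four global indicators can offset a weak showing on the three Pacific-specific ones, a point I return to below.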

[Chart: Pacific index 1]

Ostensibly, they tell us that Australia and New Zealand are the two donor countries doing the most for the region. As the charts above and below show, they rank highest by a very safe margin, which grows even larger when focussing on the three indicators that have been re-configured for the Pacific. The authors detail some of their calculations for the three indicators in their background paper [pdf].

[Chart: Pacific index 2]

Such findings will be reassuring to the development policy communities in both countries. But before we dwell for too long on them, questions need to be asked about the index's approach.

What is it trying to achieve?

The authors claim that they want the Pacific Index to “not only provide an analysis of rich country support but also a basis for policy advocacy, holding rich countries to account for their often repeated public commitments to support development in poor countries, in this case in the Pacific.” Unfortunately, the way that the index is constructed hampers its advocacy objectives. To illustrate, take a look at the component scores for the three indicators that were re-configured for the Pacific.

[Chart: component scores for the three re-configured indicators]

Australia and New Zealand clearly dominate the Index, but not because we are overwhelmingly more generous than the rest of the rich countries on this list. Geography brings with it strategic interest and, as much as benevolence, this likely shapes our heightened involvement with the Pacific as reflected in these rankings. It's also hard to see how shaming the rest of these countries (most of whom have no direct engagement with the Pacific to begin with) would spur them into more action, especially considering what appears, on paper, to be an insurmountable dominance of the top two spots. Or, to put it another way, how is it useful for us to know that Australia and New Zealand pay attention to their own back yards, while other countries focus more on their own neighbourhoods?

Measurement issues

For the sake of brevity I will focus here on the indicators with which I am most familiar (and which have been tailored for the Pacific), namely aid and migration.

On aid, the authors stress that the index focuses on effort rather than the volume of engagement. So rather than looking at the volume of aid delivered to the Pacific by each donor (where Australia would emerge as a clear winner), they focus on the level of ODA to the Pacific measured as a ratio of donor GNI and the share of donor aid going to the Pacific. While it's true that this captures donor effort towards the region in a sense, at the same time it neglects the thing that ultimately contributes to development: the absolute magnitude of aid flows. This is most obvious in the case of the US, which is the third largest bilateral donor to the Pacific (and one of only a handful of bilateral donors engaging with the Pacific) but which ends up near the bottom of the aid rankings.

[Chart: aid to the Pacific by donor. Source: OECD QWIDS]

The flaw of this measure is further highlighted by the fact that Luxembourg, a country with no bilateral aid engagement (and likely no engagement of any kind) with the Pacific, ranks fourth in the aid rankings. If Luxembourg were to cease all aid to the Pacific tomorrow it would have little to no impact on the region's development fate. This is not true of the United States, however. There are also some legitimate concerns regarding the quality aspects of the measure, which were excellently covered by the NZADDs newsletter a few months ago.
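To see why a ratio-based effort measure behaves this way, here is a rough sketch. Combining the two ratios with a simple average is my own assumption rather than the index's actual formula, and every figure is an illustrative placeholder, not real aid data.

```python
# Rough sketch of why a ratio-based "effort" measure ignores volume.
# Combining the two ratios with a simple average is my assumption, not
# necessarily the index's formula, and all figures are illustrative
# placeholders rather than real aid data.

def aid_effort(oda_to_pacific, gni, total_oda):
    """Average of (Pacific ODA / donor GNI) and (Pacific ODA / total donor ODA)."""
    return (oda_to_pacific / gni + oda_to_pacific / total_oda) / 2

# A small donor giving a sliver of aid to the Pacific can out-score a large
# donor whose absolute flows dwarf it, because both ratios ignore volume.
small_donor = aid_effort(oda_to_pacific=5e6, gni=4e10, total_oda=4e8)
large_donor = aid_effort(oda_to_pacific=2e8, gni=1.6e13, total_oda=3e10)
print(small_donor > large_donor)  # True for these illustrative numbers
```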

On migration, 65% of the indicator is the proportion of each country's migrant inflow coming from the Pacific. Another 15% measures foreign students from Pacific countries as a share of the total from non-DAC countries. The final 20% has to do with asylum seeker quotas (again lifted directly from the CDI), which is an odd inclusion given that few Pacific Islanders seek asylum. Measuring migration as a share of each country's own inflows also sits oddly with the fact that the Pacific makes up such a small portion of global migration flows. A more appropriate measure, one that would prevent New Zealand from dominating the ranking thanks to its more open migration policies towards Polynesian countries, would be to look at the proportion of the total migrant outflow from the Pacific each year that goes to each recipient nation. According to Richard Bedford [pdf], 350,000 people of Pacific ancestry live in New Zealand, 300,000 in the US and 150,000 in Australia. For the US to again rank so poorly in this indicator is a worrying result.
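For clarity, the indicator's stated weights combine as in the short sketch below, shown alongside the alternative share-of-outflow measure suggested above; the 65/15/20 weights come from the index's description, but the function names and inputs are mine and purely illustrative.

```python
# The migration indicator's stated 65/15/20 weighting, plus the alternative
# measure suggested above. Function names and any inputs are illustrative.

def migration_indicator(pacific_inflow_share, pacific_student_share, asylum_component):
    """65% migrant inflow share + 15% foreign student share + 20% asylum component."""
    return (0.65 * pacific_inflow_share
            + 0.15 * pacific_student_share
            + 0.20 * asylum_component)

def alternative_measure(migrants_received_from_pacific, total_pacific_outflow):
    """Suggested alternative: a country's share of the total annual migrant
    outflow from the Pacific, rather than the Pacific's share of that
    country's own inflows."""
    return migrants_received_from_pacific / total_pacific_outflow
```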

More measurement issues could be raised. For example, the exclusion of RAMSI from the security component is an oversight (the authors do identify this in their background paper). One could also argue that there is limited value in simply copying four indicators from the CDI, inserting them into an index focussed on a region that makes up 0.1% of the world's population, and then giving them equal weighting with the three region-specific indicators. No measurement approach for this kind of exercise would be perfect, and the Pacific Index certainly blazes a new trail, but these measurement concerns bring me to my last question.

Is a regional approach to CDI useful?

An Index of this nature is clearly an interesting academic exercise. However, it has significant shortcomings in identifying the actual nature of rich country engagement with the Pacific. By tying themselves to the CDI, the authors limited their ability to mould the research design to the unique nature of the Pacific. More time spent thinking about which indicators are important for the Pacific, and how to effectively measure them, would have made the exercise more valuable.

There are also other actors that are heavily involved in the region, which the Index doesn't pick up at all. The first is the multilaterals. Not only are they major donors to the region (four of the top ten donors are multilaterals) but they are also hugely influential in directing and advising on development policy in the region (particularly with regard to fiscal and monetary policy). The second is China. The authors acknowledge that China is a large omission from their analysis, and defend the exclusion on the grounds of a lack of data, which is a legitimate point.

But these shortcomings are not so much a fault of the Pacific Index as of the general design of the CDI more broadly, which has always been a rough tool to promote better engagement with the developing world. Unfortunately, the advocacy objective becomes harder to justify when the index is focussed on a unique region with limited actors and specialised challenges.

The flaws of the Index that I have identified above, as well as the obvious conclusion that New Zealand and Australia are much more focused on the Pacific region than other developed countries, lead me to question the value of this research.

Jonathan Pryke is a Research Officer at the Development Policy Centre.



Jonathan Pryke worked at the Development Policy Centre from 2011, and left in mid-2015 to join the Lowy Institute, where he is now Director of the Pacific Islands Program. He has a Master of Public Policy/Master of Diplomacy from Crawford School of Public Policy and the College of Diplomacy, ANU.

2 Comments

  • Interesting analysis – which basically once again highlights the fact that Pacific aid and the measures chosen to establish its effectiveness are fundamentally the playthings of statisticians. You would think that the question of which donors are the biggest and most important players in the Pacific would be straightforward enough to establish? Think again. The PDI illustrates this conundrum.

    But any list of players in the Pacific that does not have the US, France, EU, Japan and China right up there with Australia and NZ is basically flawed. An awful lot hinges on how one defines the categories used to measure donor assistance. And suffice it to say that ODA is not the only category that is relevant.

  • Very interesting analysis Jonathan. As you say, the comparison with other donors outside the region is probably not very useful. The Commitment to Development Index highlights many of the key areas that are important in supporting development and I think it is very useful at a global level. However, perhaps the CDI should just be a guide for a Pacific Index which might also examine additional policy areas of greater local relevance and have more appropriate performance indicators under each area.
