As early as 2011, the question of what data sources to use to monitor progress towards the PSGs was a growing source of tension behind the scenes between the g7+ Group of Fragile and Conflict-affected States and their Northern partners. Discussions on how to monitor PSG policies led to the creation of two separate sets of indicators: country-level indicators and common indicators.
Country-level indicators are intended to track PSG goals in the particular security, economic and political contexts in which g7+ countries find themselves, so some will be unique to individual countries. These indicators will feed into national fragility assessments that locate each assessed country on a five-level fragility spectrum running from “crisis” to “resilience”. A state’s initial position on the spectrum could in principle serve as an approximate baseline against which to measure or estimate its progress towards achieving resilience.
Common indicators, by contrast, apply to all fragile states alike – the under-five mortality rate, a key MDG indicator of child health, is one example. Fragile state governments have resisted common indicators, claiming that they primarily reflect the interests of donors. These North/South differences have been an ongoing source of tension and have slowed the IDPS’s momentum. Part of the reason for the slow-down is political. In fragile states without robust statistical systems – the large majority – nationwide household surveys are the only means of generating the reliable data needed to populate peacebuilding and statebuilding indicators. But the transparency that such surveys generate – perceptions surveys in particular – can be politically embarrassing, and even damaging, to governments.
Moreover, for many fragile state governments the Northern emphasis on common indicators and cross-national surveys misses a critical point: the development and security challenges confronting fragile states are shaped by each state’s unique historical, cultural and political circumstances – a concern shared by many development researchers who rely on qualitative methods. Country-level indicators of progress can be designed to take those unique circumstances into account; common indicators – by definition – cannot.
Suggested IDPS common indicators are similar in concept to MDG indicators. But many fragile state governments, along with aid critics in the North, are sceptical about the MDG model – and not without reason. First, the MDG monitoring process, which has relied heavily on cross-national survey data, has failed to reveal the very real developmental successes achieved by sub-Saharan African states since 2000. The MDGs’ architects chose targets that few African states could hope to meet, while ignoring indicators on which they were making important gains.
Second, fragile states worry that common indicators may stigmatise them as “failures” and identify them as “poor performers” – assessments that can lead to reduced aid allocations or the imposition of harsh conditionality measures.
Third, the UN has asserted that the campaign to boost achievement of the MDGs is the “most successful global anti-poverty push in history”, a claim that seemed persuasive on its face: economic assistance to the MDG process has doubled in value since 2000, and as aid flows increased, development outcomes improved. But correlation is not causation. In 2013 a major econometric study by UN economist Howard Friedman cast serious doubt on the claim. Friedman did not question the fact that, on average, MDG development outcomes across the developing world had improved since 2000; rather, he pointed out that most of these indicators had already been improving before 2000, and improvements that predate the post-2000 surge in aid cannot logically be attributed to it. This raised an obvious question: if the MDG process has not had the positive impact its supporters claim, why assume that the very similar IDPS process would be any more successful?
Finally, there is the question of trust. Developing countries’ scepticism arises from donors’ repeated failures to honour aid pledges. In 2005, for example, the G8 – the rich countries’ club – pledged to increase aid to sub-Saharan Africa by $25 billion within five years, but by 2010 less than half the promised amount had actually been delivered. Donors have concerns of their own. They can – and do – point to rent-seeking, inappropriate aid disbursements and pervasive corruption as part of the reason why aid has had so little measurable positive impact, particularly in fragile states.
The long history of failed aid policies is well understood in both donor and recipient countries – though accounts of who is responsible for the failures, not surprisingly, differ considerably. These failures have been a major driver of the two-decade-long push by the Organisation for Economic Co-operation and Development (OECD) to improve the effectiveness of aid disbursements by donors and recipients alike.