A common rating system could make aid evaluation less opaque.
Better yet, it could deliver more bang for precious bucks.
Each year, more than US$2 billion of foreign aid is invested in the Pacific Islands region, equivalent to roughly 8% of the region’s GDP. This aid comes in the form of thousands of projects from more than 60 donors. Information about these projects is messy and opaque: project-level public information is frequently sparse, lacking in detail and difficult to access.
The Lowy Institute Pacific Aid Map was designed to help fill this transparency gap and improve the efficiency of aid allocation and its overall effectiveness, collecting, harmonising and making publicly available data on more than 20,000 projects from 62 donors in 14 countries from 2011 onwards.
Yet even with all this data assembled, we still have only half the story of aid in the Pacific. We now know in granular detail exactly how much money is being given by donors and where that money is ending up. What we don’t know is how this money is performing. What outcomes are being achieved? What impact are these aid investments having?
Most donors commit to rigorous assessments for projects above a certain size. It is not worth the time to check on a community water tank five years after it was installed, but anything valued over $1 million is typically given significant scrutiny. Yet donors operate under different reporting structures, pursue different targets and track different metrics. Even where projects are given a “score”, it is near impossible to compare that performance across donors.
That leaves us with the much denser world of project-specific evaluations. Thousands upon thousands of hours are poured into project design, mid-term reviews, end-of-project reviews and independent evaluations (some donors even conduct independent evaluations of the evaluations), despite the small audience for their findings and the considerable frustrations and limitations involved in drafting these documents.
These documents are, however, spread across the internet and often not easy to access. To make them more accessible, and thereby encourage greater learning from previous aid efforts, the Lowy Institute has put together an aid evaluations database as an addition to the Pacific Aid Map. The database pulls together 762 documents from 272 aid projects in the Pacific that have run over the last two decades. Together these projects are worth more than $6 billion. While this is only a fraction of the 24,000 projects we have identified in the Pacific Aid Map, it covers 60% of their value. The database includes evaluations from seven donors who account for 80% of all aid to the Pacific between 2011 and 2017.
Those notably missing from the top end of the donor pool are China and the EU. China doesn’t do evaluations of its projects, at least none that are public. The EU’s evaluations have so far proven challenging to access.
All told these documents amount to tens of thousands of pages. Most of them may be unhelpful, but many provide insights into how projects have performed in the past that can contribute to better program design in the future. We have incorporated links to all of these documents into the Pacific Aid Map, and will look to include even greater functionality around accessing evaluation documents in the future, as well as conducting further analysis.
The gold standard is the evaluation policies of the World Bank and the Asian Development Bank. These are applied to every project they oversee and often stipulate multiple evaluations and reviews of a single project, condensing evaluations into ratings for areas such as project relevance, effectiveness, efficiency, impact and sustainability. While ratings can never fully capture the success or failure of a project, they can serve as data points to track the performance of development assistance over time, as seen in this database.
Ratings such as those done by the ADB and World Bank must become a global standard. For one thing, a global ratings standard will empower countries seeking development assistance, particularly in the Pacific. It will give countries with less generously staffed development agencies the chance to make data-informed decisions regarding partnerships.
But just as importantly, a common ratings standard would also make evaluations and reviews accessible to the broader public, a crucial constituency often lost by the use of technical development jargon in evaluations. In Australia, taxpayers have a vested interest in getting bang for buck for government spending. Recent debates and analysis have highlighted the importance of increasing transparency for accountability and effectiveness. With Australia’s foreign aid budget under continued pressure – including the controversial slimming down of its own internal evaluation capacity – creating a public data series from project evaluations is a cost-effective method for increasing transparency, encouraging external analysis and addressing shortcomings in project design.
From our data collection, the ADB and the World Bank are the only agencies to consistently provide ratings. Other agencies could quite easily incorporate a ratings system into their evaluation policies, as many evaluation consultants already provide this data in reports. To ease the obvious difficulties in harmonising these ratings, a standard set of criteria could be added to the reporting requirements of either the Organisation for Economic Co-operation and Development or the International Aid Transparency Initiative.
The best outcome would be to stimulate a ratings competition between development agencies, with the quality of development assistance the overall winner.