The Australian volunteer evaluation and the capacity building straitjacket (part 1)

Written by Stephen Howes

At Devpolicy, we have considerable interest in the volunteer program. One of us used to be a volunteer. I learnt from the recently-released Office of Development Effectiveness (ODE) Evaluation of the Australian Volunteers for International Development (AVID) program that my colleague Ashlee Betteridge was one of the 15% of returned volunteers not satisfied with her assignment. She was one of the 30% not satisfied with the support received from her in-country manager, one of the 44% whose job was not as per the position description, and one of the quarter whose volunteer assignments had been shortened. In short, she was in a minority of less-than-happy volunteers, but not a small minority.

This is perhaps why the blog she wrote last year on her experience with the volunteer program went viral, at least by our standards. There has been so little frank public discussion on the volunteer program and there is clearly untapped demand for it. Unfortunately, the new evaluation, commissioned by ODE and conducted by ARTD Consultants, didn’t deem it necessary to reference Ashlee’s blog and the discussion it generated. As I’ll show, this was a lost opportunity.

We also have an interest in the volunteer program because we have recently become a partner organisation. We are trying to help the UPNG Economics Department get a volunteer lecturer.

So we certainly appreciate the Australian volunteer program, and have some insight into it at the working level.

One of the really good things about the ODE evaluation is that it is based on survey data. The evaluation undertook a survey of host organisations across three countries (Cambodia, Vietnam and Solomon Islands) – the first such survey ever conducted, we understand. The evaluation also draws on another survey, separately commissioned by (then) AusAID, of returned volunteers.

It is a pity that the evaluation has not included more data from the two surveys. A simple example is that we don’t know the average age of a volunteer. This might sound like a small point, but one of the main arguments of the report is that there is very little difference between youth volunteers (AYADs) and other volunteers (non-AYADs), including in terms of age. If that is the claim, we should be told the average age of the two groups. We are given some data on age ranges, and from that I calculate that the average age of an AYAD is 28, and that of a non-AYAD 45. That is a big difference.

There should have been more data tables, and, beyond that, we would like to request ODE to release the survey of host organisations. We would also like to request DFAT to release the report on the returned volunteer survey, which I understand has been written but not published. You can’t cite extensively from a report and then not release that report, which is the situation at the moment.

It would also have been good if the host organisation survey had asked a few more questions. At one point the evaluation calculates that the cost of a volunteer for a year is about $70,000. The survey should have asked host organisations whether they would rather have had the volunteer or $70,000 in cash. That information would have helped in judging how much of the cost of the volunteer program is justified by its development objectives, and how much by its benefits to the volunteer.

The survey does ask whether a national could have done the job just as well as the Australian volunteer. Response rates to this question were low in Vietnam and Solomon Islands, but half of all Cambodian host organisations answered yes. That is a worrying response, given the skill shortages in any poor country. It seems to me there should be an investigation into the Cambodia volunteer program to find out what is wrong. Cambodia may simply have too many volunteers: it has the second highest number of Australian volunteers of any country.

Overall though, I applaud the data collection exercise. The discussion that follows in this and my next post concerns the interpretation of the data, and its limited use.

Unfortunately, the lens used to examine the volunteer program is that of capacity development. I’m not sure why. Capacity development doesn’t feature in the terms of reference for the volunteer evaluation. But it is after all the Holy Grail of aid. And I know from the AYAD website, and from my own experience with getting a volunteer for UPNG, that the capacity development objective is really taken seriously by Australia’s volunteering program.

An entire chapter of the evaluation is in fact devoted to the “impact of volunteering on host organisation capacity development” (Chapter 4). The bottom line of this chapter is Recommendation 5, which states that “DFAT should refocus the AVID program on developing the long-term capacity of host organisations.”

The focus of the evaluation is not just on capacity development, but on capacity development oddly defined. The data shows that volunteers are much appreciated both for the work they do and the skills they transfer. But that is not enough for this evaluation. Capacity development requires the volunteer placement “to generate new forms of capacity from within the organisation without long-term dependence on the volunteer.” (p. 42) Why skill transfer does not count as capacity development even under this definition, when it surely constitutes “sustainable capacity”, is unclear. But, whatever the reason, capacity development, thus defined, is instead measured by, for example, whether Australian volunteers are judged by their host organisations to have “helped our organisation better manage our own affairs.” Not surprisingly, the volunteer program scores relatively poorly against criteria such as this. Hence the recommendation for a renewed focus on capacity development.

At no point does the evaluation reflect on whether it is using a sensible definition of capacity development or whether it is realistic to expect most volunteers to contribute to capacity development in this sense. A more sensible definition and more realistic expectations would have led to the conclusion that the volunteer program ticks the capacity development box more than adequately (primarily through skill transfer), not that the program should be refocused on it.[1]

This is the first in a three part series based on Stephen’s presentation at Devpolicy’s recent ODE evaluations forum. Robin Davies’ analysis of the Lessons from Australian Aid report can be found here.   

Stephen Howes is Director of the Development Policy Centre.  


[1] In fact, the evaluation finds that efforts to help in areas judged by it to constitute capacity development were linked with reduced satisfaction. Perhaps organisations, in general, don’t want volunteers to help them manage their internal affairs. They want them to do what they are skilled at, or whatever specific task they have been assigned.


Stephen Howes is the Director of the Development Policy Centre and a Professor of Economics at the Crawford School. Stephen served in senior economic positions for a decade at the World Bank before becoming AusAID’s first Chief Economist. In 2011 he was a member of Australia’s Independent Review of Aid Effectiveness.

4 Comments

  • I wonder about organizations that continue to get repeat volunteers for very similar positions. Once a resource mobilization officer has left and been deemed a success, they are swiftly replaced by a donor liaison officer who in turn is deemed a success – just in time for the next resource mobilization officer, or for variety, communications professional with grant writing experience!

  • Thanks Stephen, even though the introduction makes me sound like a person who is very hard to please, I very much agree with your post!

    I found it baffling that the evaluation defined capacity building in such narrow terms. Even skill transfer in the most basic sense is a positive and arguably builds capacity in some way. Even if staff or counterparts move on to other jobs or other organisations, as often happens, you are still boosting the range of skills and competencies in the country’s human resource pool.

    I don’t really understand why the volunteer program would be the tool you would choose to use for institutional capacity building if it is deemed to be so important, especially since the program by nature focuses more on building ‘people to people’ links rather than institution or organisational strengthening. Do Australia Awards and other scholarships, for example, aim to build institutional capacity or just the country’s human resource capacity more generally?

    • The recommendation that DFAT should refocus the “AVID program on developing the long-term capacity of host organisations” seems a very big ask for a 20-something person, when (in some cases) decades of technical assistance from much more experienced folk has not been able to do it. A comparison between the commercial and NGO volunteer programs would be interesting as well.

      • As a 20-something person that tried and failed at this, I can only agree, Patrick! I think the bar is being set too high here for something that people ultimately sign up for out of a sense of goodwill. Volunteers aren’t consultants. As the evaluation shows, in many ways this is why they are so valuable. They can achieve things that consultants cannot. If a volunteer manages to develop the long-term capacity of a host organisation, then good on them, that is wonderful – but it shouldn’t be the expectation.

