Great expectations and the volunteer program

Written by Ashlee Betteridge

Australian volunteer James Anthony works as an English Language Specialist with the Ministry of Education, Culture and Science in Mongolia. James is pictured with one of his students. Mongolia 2012. Photo: Austraining International

After going through a competitive recruitment process for a dream position in a tropical locale, you finally get the call—you’ve landed an Australian Volunteers for International Development (AVID) role.

In the several weeks or months between getting the gig and going to pre-departure training, you wonder what it will be like. Perhaps you communicate with your host organisation, who tell you the ten million things they would like you to do on assignment. Perhaps you talk to returned volunteers who have romanticised their experience to the point where it could be a paperback novel. Perhaps every person you meet and talk to about your upcoming adventure gushes praise upon you like you are a modern-day Mother Teresa, piling on tired clichés that you are going off to save the world.

You take all of this with a grain of salt. But you are excited about what lies ahead.

When you finally get to pre-departure training, this is the main piece of advice on what to do over there:

“Don’t have any expectations!”

OK, no expectations. Easy to erase, right?

The next part of the briefing?

“Here are all of the things that we expect from you…”

Expectations play a huge part in the Australian volunteers program. There are not only the expectations of the enthusiastic volunteer. There are the expectations of the host organisations, the partner organisations delivering the program and the Australian government. And that’s before we even get to the expectations of recipients, or partners of the host organisation, or communities.

This is hardly unusual in aid. At the start of any project or grand endeavour there are always big expectations, couched in the foreign language of proposal-writing land. And the expectations held by different stakeholders might not always match up.

One of the things that sets the volunteer program apart from numerous others in the aid program is that it runs largely on goodwill. Instead of well-paid consultants or experienced staff implementing a program, you have everyday people signing up—fresh-faced 20-somethings, retirees and anyone in between. Sure, they get something out of it—living expenses covered, career development opportunities and the intangible, more spiritual remuneration that comes from doing good, taking on a challenge, absorbing and living within a new culture and building new relationships and friendships. But at the end of the day, volunteers are volunteering—if there wasn’t a significant element of goodwill involved then surely these volunteers would be searching out more lucrative roles.

The recent evaluation of the volunteer program by the Office of Development Effectiveness (ODE) at DFAT raised many questions—my colleague Stephen Howes has already critiqued the evaluation in a series of blog posts (and you can listen to a podcast of the discussion at our recent forum here), so I won’t go over the same ground.

But one question that sprang to mind from reading the evaluation and Stephen’s analysis was: how much should we expect from volunteers?

How far can you push the program towards aid world professionalism before you deplete the goodwill that underpins it?

And how could some of the mismatches in expectations between the different stakeholders in the program actually be addressed?

Firstly, as Stephen explored, the on-paper capacity building message often doesn’t meet the expectations of host organisations, nor match what most volunteers actually end up doing. This makes things harder for everyone involved.

For example, the evaluation notes that failing to build capacity was the major source of dissatisfaction for volunteers.

Volunteer dissatisfaction is mostly related to an inability to be effective in developing capacity of their host organisations, mainly because their organisation was poorly functioning, inadequately prepared to host a volunteer or had different expectations of the volunteer’s role in their organisation. (p.41)

Is it reasonable to expect volunteers to achieve long-term organisational capacity development, something that professional consultants and aid managers often struggle to do themselves?

The evaluation suggests a move towards role-based assignments, which makes sense. But if the institutional capacity focus remains, while the host organisation is instead expecting a freebie in-line worker or someone who is there to share skills (which as Stephen points out, is curiously not considered to be the right kind of capacity building by the evaluation), then this problem will persist.

No job description anywhere ever reflects a role fully, so there is always going to be a gap between expectation and reality. But ditching this capacity building obsession could cut the spin in assignment descriptions so that volunteers have a better idea of what they are signing up for and host organisations actually get the chance to more honestly articulate what they want and need, bringing everyone’s expectations more closely into alignment.

If we don’t change this, volunteers will remain stuck in the middle of what the Australian aid program expects from them and what their host organisation (HO) is hoping for.

It’s a tricky place to be—especially when you are expected to be a walking, talking ambassador for your country as well. The volunteer evaluation notes this tension between the capacity building push and the public diplomacy goals of the volunteer program.

…host organisations’ satisfaction was not significantly associated with any activities affecting the capacity of the host organisation as an entity… This finding highlights the inherent tension of AVID’s public diplomacy and capacity development objectives. To satisfy host organisations, volunteers must do tasks that fulfil immediate needs and transfer skills to individuals in the host organisation. Although this approach is immediately satisfying for host organisations and no doubt a positive public diplomacy outcome, it is a limiting proposition for capacity development. (p. 47)

A way to reconcile this is never addressed. One way would be to scrap the capacity building imperative—helpful Australians doing what host organisations would like them to do seems quite diplomatic.

But if we insist on institutional capacity building as a focus, volunteers also need some clarity on what their priorities should be: are they there to primarily serve the host organisation; are they there mainly to be nice ambassadors for Australia; or are they there predominantly to serve the aid program’s institutional capacity development agenda? In theory it is possible to do all three of these things simultaneously, but in practice it is much harder. So which one comes first?

Even if volunteers contribute on all of these objectives, which we know many of them do to some extent, how much of this should be a programmatic expectation and how much of this should just be considered a bonus?

If we start being too prescriptive on how volunteers should contribute then we not only risk a loss of goodwill. We also risk volunteers or HOs simply finding workarounds for unrealistic or incoherent program logic instead of actually following it—the data collected by the evaluation seems to show that this is already happening and it’s something that I’m sure many volunteers could attest to. Program planning that looks good on paper, but that rarely translates to reality, is probably not doing the volunteer program any favours, nor making a tough gig any easier.

Not that we could track this clearly from the program’s current monitoring and evaluation (M&E) processes.

The ODE evaluation finds that M&E is a major gap. It asserts that none of the three levels of M&E outlined in the program’s shared standards (individual assignments, host organisation capacity against three-year plans, or the contribution of volunteers to development outcomes) is being fully implemented.

End of assignment reports got an extra slap on the wrist. Each core partner uses a different template, and volunteer and host organisation contributions to the reports were “poor quality”.

One of the frustrations that I wrote about in my blog post last year on the volunteer program was writing reports that never seemed to get read. In my six months in-country, I met all the reporting expectations for my assignment, which included a one-month initial report, a six-month report and then an end of assignment report. The reports collected no useful or comparative data, took up a lot of time (which wasn’t really an issue for me, since one of my complaints was that I had too much free time) and in my experience weren’t used to address any issues flagged by the volunteer. The feedback loop was broken. So I can well understand how many volunteers wouldn’t be bothered to fill the reports in comprehensively by the end of a longer assignment.

Like most evaluation reports, this one calls for more rigorous evaluation, complete with Key Performance Indicators (KPIs), a favourite of most M&E advocates.

Streamlining M&E across the core partners is a must, as is improving the quality and usefulness of the information received. But we should be wary about introducing an M&E regime that treats volunteer assignments like consultancies with deliverables – especially when we are asking people to ‘voluntarily’ build capacity in challenging contexts, and when it is essential to avoid placing volunteers in a position where their HOs feel they are being assessed or judged by an outsider who is supposed to be integrating into their team.

Perhaps what the volunteer program needs most is a stripping back of formal expectations and a refocusing on support rather than outcomes. The volunteer program is a good thing, as the evaluation shows, so we should be careful not to strangle it or to burn out the goodwill that it runs on in the race for results.

In my view, we need a volunteer program that creates the best enabling environment for people to go out and do good, by making a positive contribution to development as well as Australia’s image in the region, in line with the needs of host organisations and their wider communities. Getting everyone’s expectations on the same page as much as possible and having practical and realistic objectives for the program would probably be a good place to start.

And, whatever is done to the volunteer program, don’t introduce KPIs! There’s already an oversupply of lofty expectation.

Ashlee Betteridge is a Research Officer at the Development Policy Centre. She was an Australian Volunteer for International Development in Timor-Leste in 2012, which she blogged about here.

Ashlee Betteridge

Ashlee Betteridge is the Program Manager (Research Communications and Outreach) at the Development Policy Centre. She was previously a Research Officer at the centre from 2013-2017. A former journalist, she holds a Master of Public Policy (Development Policy) from ANU and has development experience in Indonesia and Timor-Leste.

6 Comments

  • Hi Ashlee,

An interesting post. As a former AYAD (circa 2007-08), I can empathise with a number of your points.

On capacity building – as my own career has progressed, it’s become increasingly evident that ‘capacity building’ is a medium- to long-term ambition. And volunteer roles are partly defined by being relatively short term. Seeing any tangible outcomes in 6-12 months is in many/most cases a pretty unrealistic expectation. What you actually get, assuming the volunteer has both the skillset and organisational support to do any ‘capacity building’ (two big ‘if’s), is a series of short-term outputs – training sessions, ‘mentoring’ sessions, edited/translated documents etc. With volunteer roles focusing on capacity building, it strikes me that this is only going to be both effective and sustainable if it’s part of a broader institutional engagement (i.e. between DFAT, host org and partner org). By being part of a broader strategy, volunteer roles can be both responsive to what’s happened before, and help lay the groundwork for what’s going to happen afterwards. I suspect this would entail a more cohesive, long-term approach than is currently the case.

Capacity building should also be regarded as a distinct skill, and one that your average 25-29 year old volunteer probably hasn’t been exposed to much previously. There are a range of different approaches that can be taken. If we’re serious about this part of volunteer assignments, then it might make sense for specific training to be provided for all volunteers on how to design and deliver capacity building activities. A few years sitting behind a desk in Canberra, or doing a Masters in development studies, probably isn’t very good preparation for having to build the capacity of staff/an organisation in an environment very different in terms of culture, language and resources. From my experience, a focus on tangible skills during the Canberra-based training or in-country orientation might be far more beneficial than the more generic introductions they usually entail, particularly for those volunteers who already have experience in that country or more broadly in aid/development.

  • Readers,

    Our anti-spam service has been overzealous in the past week and all comments have been marked as spam. We have corrected the issue and the comments service is now working normally. We are working to retrieve the comments that have been marked as spam, and should have them available next week.

    Jonathan

Volunteer programs with various hosts working in different contexts in the same region often face a myriad of problems. The fact is that an array of different organisations of varying capacity and competency are engaging with an individual (obviously also unique) to engage in some form of mutual cooperation or assistance. The very nature of this relationship is obviously based on goodwill.

    When it comes to achieving more positive results beyond goodwill relationships, I think program officers on the ground have a massive role to play. This role comes down to managing and sometimes controlling the expectations of a host organisation, while also ensuring that the actions of a volunteer are in line with facilitating an organisation to achieve its goals without stepping outside the bounds of what the organisers (AVID or otherwise) are capable of and within their directive to do.

    Paid individuals on the ground who are supported, active and not overworked are, I think, the absolute key to ensuring that issues on the ground are dealt with promptly and resolutely. Their ability to manage relationships between two hugely different parties with no doubt usually incongruent expectations is the first step to improving results beyond just goodwill. These are professional positions and should be well filled and also well paid, because if the person who does this job does it well, it is work that can have widely reverberating positive effects.

I’m not entirely certain of the purpose of this commentary. It seems confused, still hoping to leverage the discussion generated in its previous iterations, yet without anything new to add to the conversation.

    If we’re going to continue to critique the volunteer program, let’s start by discussing what the practical options are to remove some of the barriers to its effectiveness.

    Move to a model whereby all volunteers are managed in country by a single contractor. Streamline the in-country management program. One In-Country Manager, the addition of junior or short-term support officer positions for volunteers who have completed assignments in country and are now at a loose end, and increase the number of country staff. Free up the money, ensure advice and support given to volunteers is consistent, assignments are designed and evaluated consistently, and continuous improvement mechanisms are able to function.

    Stripping back formal expectations will do little to improve the volunteer program. If anything, assignments should have more measurable, realistic and tangible objectives, and host organisations as well as volunteers should be accountable for them.

    Much of the problem with assignments that do not work out has less to do with a breakdown in the ‘feedback loop’, and more to do with how that assignment was created in the first place.

    Successful assignments are based in Host Organisations with a history of successful volunteer assignments. Not surprisingly, HOs with a history of volunteers who terminated assignments early are less likely to be able to turn this trend around. Moreover, placing young, inexperienced volunteers in generalised roles such as ‘development officer’ or ‘policy officer’ and telling them their goal is to ‘capacity build’ is going to overwhelmingly result in disappointment for all involved.

    In-country contractors developing and creating assignments with HOs must be accountable for the assignments they create. Does a pre-advertising evaluation of assignments created take place at the moment? And if so, how?
    Contractors need an incentive to be accountable for the assignments they create, and to critically evaluate the reasons assignments are not working.

    Why is another assignment being advertised at this HO when the 3 assignments before this have not worked?

    Why is an assignment with lofty and unrealistic deliverables being advertised in the first place?

If we place a young, single female in this remote location without direct access to ready support networks, what are her chances of being successful?
    Too often these questions are not revisited.

    Volunteers should be expected to meet professional and formal expectations such as KPIs. Not only is this valuable experience, but I would argue a volunteer will have even more motivation, empowerment and impetus to find solutions to struggling assignments when treated as a professional in their field, rather than a foreign ‘do-gooder’ muddling around in entry-level development work.

    Telling volunteers to ‘rip up assignment descriptions’ is not helpful.

Volunteers need to be flexible enough to re-evaluate the objectives of their assignment within the context and capacity of their host organisation once they understand it. They need to understand that most assignments do not have any traction until the 6-8 month point, when the volunteer is fully embedded in the organisation. They need to feel supported to raise any serious barriers to assignment success with their HO well before assignments reach a critical point and are terminated. Issues should be raised with HOs as a first step before they are taken back to ICMs. Volunteers should have access to proper policies outlining grievance procedures and what is expected of them.

And ultimately, I think we should re-evaluate the use of the term ‘volunteer’ at all. I’d much rather simply be part of an “Australians for development” program. The term ‘volunteer’ just muddies the waters.

    • This is a great comment, thanks for contributing. You’ve laid out some really sound ideas that I agree with.

But I think it would be really difficult for most volunteers to meet a KPI that was related to this lofty idea that a volunteer is supposed to build the long-term ability of an organisation to manage its own affairs, for example (depending on the role). You’d be setting a lot of people up to fail. And this is the trouble – you have to get the expectations of the program right and clear before you can start setting any sort of performance measures on volunteers. The program is still confused, still aiming for the stars instead of some level of practicality, and the evaluation seemed to recommend even more whimsy on the type of capacity building that it should be aiming for.

      So when I mentioned a stripping back of formal expectation in the program, I was meaning at this higher level. Isn’t it enough that the volunteer program makes a very positive contribution to development? Why should we expect more and more from it? This seems to be what drives unrealistic assignment descriptions that are littered with aid-world jargon rather than practicality and reality, for example.

      The evaluation also didn’t say much on what the partners paid to deliver the program should be doing. A lot more on what volunteers should be doing, but not on the structural issues in the program that make the ‘job’ of the volunteer more challenging.

      So basically, in my view there needs to be a clearing up of the expectations all around. Until these are cleared up, I would be surprised if we saw any progress on the areas you have mentioned in your very thoughtful comment. The ODE evaluation (which is what I was writing in response to) didn’t put forward strong ideas on how to fix these problems.

  • Hey there Ash,

    A great post and an interesting read. It is so important to hear thoughtful reflection from people such as yourself, who have experienced these things and had time to synthesise learnings.

One thing the evaluation, Stephen’s blogs and yours all highlight to me is the problem caused by failing to define ‘capacity development’ in the early stages of any intervention, or in this case, volunteer placement. As you suggest, coming to shared expectations is essential. I would think this would involve arriving at a shared meaning for what capacity development means in relation to any particular intervention. I worked on a reasonably large project that actually had ‘capacity building’ in its title but had no in-built, articulated concept of what capacity building meant in terms of the project and what was realistic to achieve. This caused so many challenges along the way. For example, what I thought were successes in capacity development were not necessarily viewed as such by others involved in the project, because they were looking at things from a different idea of capacity development. (This was particularly so because I saw the loss of some ‘capacities’ as a success.)

    So I don’t think ditching capacity development is necessarily the way to go, because I think it has use as a concept and an approach. Rather, I think a useful way forward is much as you suggest – for any activity, creating a shared understanding of what capacity and its development means for the people involved (particularly the host organisation), and what change might look like. This might actually mean just having an individual fill a role for a time.
