Caritas Research

Insight

What makes a research design resilient in remote field settings

Programme teams often face a tension between methodological rigour and practical feasibility. Here are a few design principles that help an evaluation survive contact with the field.

Caritas Research
  • Research design
  • Impact evaluation
  • Methodology

Programme teams working in remote or rural contexts often face the same tension: they want their evaluation to be methodologically rigorous, but the people doing the work are stretched, the logistics are unpredictable, and the beneficiaries themselves have limited time to spare for anything that feels like extra paperwork.

An evaluation that is technically well-designed but cannot be delivered as designed isn’t rigorous at all — it’s theatre. The data either doesn’t get collected, or it gets collected badly, or it gets collected beautifully for a subset of participants who happen to be easiest to reach. None of these outcomes tell you much about what the programme actually achieved.

Over time we’ve found a handful of principles that help a research design stay standing when field reality pushes back.

Measure what matters, not what’s easy

The most common failure mode isn’t choosing the wrong methodology — it’s choosing indicators that sound important but don’t actually track the programme’s theory of change. “Number of people reached” is easy. “Sustained change in a specific outcome six months on” is the harder number, and usually the one that matters.

A small number of well-chosen indicators, measured honestly, will nearly always beat a long list of indicators measured half-heartedly. We bias toward the shorter list.

Layer methods rather than chain them

Many evaluations are designed as a linear pipeline — baseline survey → intervention → endline survey → analysis. If any one stage fails, the whole design falls over.

Layered designs are more robust: qualitative interviews running alongside quantitative surveys, routine programme data feeding into the analysis, informal check-ins with beneficiaries that surface issues before they become problems at endline. No single method carries the full burden of the conclusion, so a weak data point doesn’t undo the study.
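As an illustration of what layering can look like at the analysis stage, the sketch below cross-checks an endline survey against the programme's own routine records, so a gap in one source becomes a prompt for follow-up rather than a hole in the study. The file names and column names are hypothetical stand-ins, not a prescribed format.

    import pandas as pd

    # Hypothetical file and column names, for illustration only.
    survey = pd.read_csv("endline_survey.csv")        # self-reported outcomes
    routine = pd.read_csv("attendance_records.csv")   # programme monitoring data

    # Layering: join the two sources on a shared participant ID so each
    # conclusion can be checked against more than one kind of evidence.
    merged = survey.merge(routine, on="participant_id", how="outer", indicator=True)

    # Participants seen by only one source are a prompt for qualitative
    # follow-up, not a reason to throw the study away.
    in_both = (merged["_merge"] == "both").sum()
    survey_only = (merged["_merge"] == "left_only").sum()
    routine_only = (merged["_merge"] == "right_only").sum()

    print(f"In both sources: {in_both}")
    print(f"Survey only: {survey_only}; routine records only: {routine_only}")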

Build in redundancy for messy data

Field data is never clean. Attendance sheets get waterlogged, phones run out of battery, participants move away between rounds. A design that assumes clean data produces findings that fall apart under audit.

Two things help: (1) plan for missingness up front, including what analytical strategy we’ll use when 20 per cent of respondents drop out; (2) overcollect slightly in ways that cost little but leave room to drop bad observations without destroying statistical power.
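As a minimal sketch of what planning for missingness can mean in numbers, the snippet below inflates the baseline sample to allow for the 20 per cent dropout mentioned above. The effect size, significance level, and power target are illustrative assumptions to be replaced with the programme's own.

    from statsmodels.stats.power import TTestIndPower

    # Illustrative planning assumptions -- replace with the programme's own.
    effect_size = 0.3          # smallest effect worth detecting (Cohen's d)
    alpha = 0.05               # two-sided significance level
    power = 0.80               # target statistical power
    expected_attrition = 0.20  # share of respondents expected to drop out

    # Complete cases needed per arm at endline to detect the effect.
    n_endline = TTestIndPower().solve_power(effect_size=effect_size,
                                            alpha=alpha, power=power)

    # Recruit enough at baseline that attrition still leaves that many.
    n_baseline = n_endline / (1 - expected_attrition)

    print(f"Needed per arm at endline: {n_endline:.0f}")
    print(f"Recruit per arm at baseline: {n_baseline:.0f}")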

Hand the instruments over early

A survey that only Caritas can administer is a fragile survey. If the programme team can’t run it themselves after we leave, the study ends when our engagement ends.

We aim to make the data collection instruments usable by the programme team from day one — simple enough to field without continuous supervision, documented well enough that a new staff member can pick them up, and phrased in the local idiom so beneficiaries understand what’s being asked. This turns the evaluation from a one-off exercise into a capability the partner keeps.

What this looks like in practice

It looks like a study plan you can defend to a funder and to the programme manager at the same time — rigorous enough that the findings will hold up under scrutiny, realistic enough that the data will actually be collected.

When a design feels impressive in a planning document but impossible in the field, that's usually a signal we've optimised for rigour on paper without checking whether the data can still realistically be collected.


If you are designing a study for a programme in a challenging setting and would like to talk through the options, we are happy to have that conversation.
