It's frustrating when your programme is strong but your evidence of impact doesn't meet the bar set by a prospective funder. This is a problem that many training and education charities experience firsthand. Cohort after cohort, participants become more capable, more confident and better connected. Testimonials are strong. Staff and volunteers who deliver the programme can point to specific individuals whose businesses or lives have shifted. Your programme clearly works. But when funders ask for the evidence, it falls short. The issue isn't with your impact; it's that there aren't systems in place to capture that impact in a way that is rigorous, repeatable and increasingly irrefutable over time. When you're stuck with Microsoft Forms, Google Forms, SurveyMonkey links and manual processes, it's hard for the evidence of your impact to keep up with your ambition for your organisation.
Why training and education programmes are particularly hard to measure
The outcomes that matter most from a training or education programme are rarely visible immediately after the final session. Confidence grows over months. Business turnover changes over years. The network connection that transforms someone's trajectory might not happen until eighteen months after the programme ends.
This creates a fundamental measurement challenge: the moment when you most need to collect baseline data, before or at the very start of the programme, is also the moment when participants are least likely to have seen any benefit; yet the moment when impact is most likely to have occurred is often months or years later, when those participants have moved on or are much harder to reach.
Most survey tools are not designed for this reality. They capture a snapshot of where people are now rather than tracking the same person's responses over time. They don't connect a follow-up response to a baseline response from a year earlier, and they don't show you how an individual's situation has changed.
A spreadsheet can store results, but analysing longitudinal data in one takes real effort. Someone on your team has to manually match each participant's baseline response with their six-month follow-up response, then connect that to the annual check-in response, for every question, on every survey, in every cohort. The process is time-consuming and ripe for human error.
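For readers who like to see the mechanics, the matching work described above can be sketched in a few lines of Python. This is purely illustrative; the participant IDs, scores and field names are hypothetical, not Makerble's actual data model.

```python
# Illustrative sketch of the matching a spreadsheet forces you to do by hand:
# pairing each participant's baseline answer with their follow-up answer.
# All identifiers and scores below are made up for the example.

baseline = {"p001": 3, "p002": 2, "p003": 4}   # confidence score at the start
follow_up_6m = {"p001": 5, "p003": 4}          # same scale, six months later

def change_per_participant(before, after):
    """Return {participant_id: change} for those who answered both surveys."""
    return {pid: after[pid] - before[pid] for pid in before if pid in after}

changes = change_per_participant(baseline, follow_up_6m)
# Participants missing from the follow-up ("p002") silently drop out of the
# pairing, which is exactly the attrition a team has to spot and chase manually.
print(changes)  # {'p001': 2, 'p003': 0}
```

Multiply this by every question, every survey and every cohort, and the case for a platform that maintains the linkage automatically becomes clear.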
The specific things a well-configured platform should do
For a training or education programme, a purpose-built impact management platform needs to handle several things that generic survey tools cannot.
It needs to maintain a contact record for each participant and track their entire relationship with you from the moment they apply through to every survey they complete, regardless of how much time has passed between survey completions. That record is what makes longitudinal measurement possible.
It needs to send surveys automatically at the right intervals. Baseline surveys go out at the start of each cohort. Follow-up surveys go out at three months, six months, a year or whatever intervals make sense for your programme. Reminders go out to those who haven't responded. None of this should require manual action from the programme team each time it happens.
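The interval logic behind automated sends is simple to state. The sketch below, in Python, shows the idea under stated assumptions: the wave names, day counts and function names are invented for illustration and are not Makerble's API.

```python
from datetime import date, timedelta

# Hypothetical sketch of interval-based survey scheduling: given a cohort's
# start date, work out which survey waves are due as of today.
# Wave names and intervals are illustrative assumptions, not real settings.

FOLLOW_UP_INTERVALS = {"baseline": 0, "3-month": 90, "6-month": 180, "12-month": 365}

def surveys_due(cohort_start, today):
    """Return the survey waves whose send date has arrived."""
    return [name for name, days in FOLLOW_UP_INTERVALS.items()
            if today >= cohort_start + timedelta(days=days)]

due = surveys_due(date(2024, 1, 15), date(2024, 7, 20))
print(due)  # ['baseline', '3-month', '6-month']
```

A platform runs a check like this on a schedule, sends the due surveys, and follows up with reminders to non-responders, so nobody on the programme team has to remember the dates.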
It needs to show how responses have changed over time, not just what the latest responses say. The most important question a funder can ask is not "how are your participants doing now?" but "how has participating in your programme changed how they are doing?" Answering that question requires the platform to connect responses from different points in time for the same individual and to summarise that change across the whole programme.
It needs to allow disaggregation by characteristics that matter to the programme and its funders. For a programme serving a specific demographic, being able to show how outcomes vary by gender, country of origin, region or business sector is not just a nice-to-have, it's the very thing that funders are looking for and it's what separates credible evidence from hard-to-prove claims.
It needs to handle multiple cohorts. A programme on its tenth or twelfth cohort has a body of participants that spans years. The platform needs to keep those cohorts organised, comparable and individually accessible while also making it easy to look across all of them for programme-level reporting.
The baseline problem: what to do when you've been running for years without structured measurement
Many organisations that come to Makerble are not starting from scratch. They've been running their training, education or capacity-building programme for several years and have alumni who are living proof of its impact, but their data has been inconsistent or stored in formats that make it hard to analyse and compare.
It's understandable to think that because you're committing to getting it right from the next cohort, you need to abandon all the data you've captured previously. That's not always true. If you have contact details for alumni, you can go back and gather impact data from them retrospectively. That might mean designing a retrospective survey that asks alumni where their business or career was when they joined the programme and where it is now, or conducting qualitative interviews with a sample of alumni to gather richer evidence. Either way, that data can be imported into your measurement platform alongside current cohort data. The benefit is that you can demonstrate evidence of impact over a meaningful period of time, from people who have experienced the benefits of your training over several years and who have the perspective to articulate the transformative effect your programme has had.
What good measurement looks like in practice
Makerble is designed to enable education and training organisations to evaluate impact in a way that sits comfortably alongside programme delivery. Each new cohort is set up in the platform in the same way as the last. Participants are added as contacts. Surveys go out automatically. Responses come back and are tracked against each individual's history. The programme team receives alerts if a response suggests someone needs support. When you need to report results, the data is already structured and already visualised in charts and tables. Your funder reports, trustee summaries and SLT overviews are all drawn from the same dataset, so you can report to different stakeholders without needing to re-enter any information.
With Makerble you're able to answer the pertinent questions with confidence and without excessive effort: who did we reach, what changed for them and how can we prove it?
If you run a training or education programme and want to build a more rigorous evidence base, get in touch or find out more about how Makerble supports education and training organisations on our Learning, Training and Education page.