Most charities know their funders want outcomes data. The problem is not a lack of ambition. It is a lack of infrastructure. Fundraising data lives in one system, case records in another, and survey results on someone's laptop. Compiling a single funder report can take three weeks of manual work. SORP 2026 now makes impact reporting mandatory for all tiers. This article is a practical guide to building the data architecture that connects service delivery to funder-ready reports, with honest assessments of the software that can help.
Speak to us about charity software · +44 7494 618 651 · Mon to Fri, 9am to 6pm
There is a gap between what charities report and what funders need. Only 36% of charities feel confident measuring their impact, yet 89% of funders require some form of outcomes reporting in grant applications and end-of-grant reports. The numbers tell the story: just 8% of charities report their impact comprehensively, while 68% provide some outcome information that rarely meets funder expectations.
The root cause is a confusion between outputs, outcomes, and impact. These terms are not interchangeable.
Most charity reports stop at outputs. Funders want outcomes. The best funders want to understand the connection between what you did and what changed, with evidence to support the claim. "We helped 500 people" is activity reporting, not impact reporting.
Funders typically require five types of evidence: quantitative output data, measured outcomes against a baseline, financial evidence showing how the grant was spent, case studies illustrating individual journeys, and beneficiary feedback demonstrating satisfaction and perceived change.
Not all funders ask for the same thing. The reporting burden varies enormously depending on who funds you, and most charities report to several funders simultaneously, each with different frameworks and timelines.
| Funder Type | Grant Range | Reporting Frequency | Key Evidence Required |
|---|---|---|---|
| National Lottery Community Fund | £10,000 to £1m+ | 6-monthly or annually | Outputs, outcomes, beneficiary feedback, case studies |
| Large trusts and foundations | £10,000 to £500,000 | Annually or 6-monthly | Theory of Change alignment, outcome data, financial evidence |
| Small and family trusts | £1,000 to £25,000 | End-of-grant only | Outputs, basic outcomes, narrative |
| Local authority commissioners | £50,000 to £2m+ | Quarterly | KPIs, outcome measures, demographic breakdowns, PROM scores |
| Corporate funders | £5,000 to £100,000 | 6-monthly or annually | Outputs, stories, photos, social media metrics |
| Government departments (DCMS, DHSC) | £100,000 to £10m+ | Quarterly or monthly | Standardised outcome frameworks, financial returns |
Grant administration costs approximately £6,600 per grant when you account for the full cycle of application, reporting, and monitoring. For a charity managing ten grants simultaneously, that is £66,000 in administrative overhead before you factor in the staff time spent compiling reports from disconnected systems.
There is a positive trend worth noting. Over 150 funders (collectively making grants worth over £1 billion in 2023-24) have signed IVAR's Open and Trusting commitments, pledging proportionate reporting requirements. But "proportionate" still means evidence-based. It means fewer metrics, not no metrics.
The Charities Statement of Recommended Practice (SORP) 2026 applies to accounting periods starting on or after 1 January 2026. The most significant change: impact reporting moves from "nice to have" to a mandatory element of the Trustees' Annual Report for all charities.
The Tier 2 requirement is the one that will catch most charities off guard. It is not enough to describe what you did. You must disclose the measures and indicators you used to assess performance. If you do not have a measurement framework, SORP 2026 requires you to explain why.
This connects directly to the Charity Commission annual return. Charities with income over £25,000 must already submit their Trustees' Annual Report, accounts, and an independent examiner's or auditor's report. From October 2026, the audit threshold rises to £1.5m and the independent examination threshold increases from £25,000 to £40,000. The reporting requirements are shifting towards fewer charities needing formal audits but more charities needing structured impact evidence.
Before you choose software, you need to decide what you are measuring. An outcomes framework gives you the structure. Several are in common use across the UK charity sector.
The foundation of most impact measurement. A Theory of Change maps: Inputs (what you invest) to Activities (what you do) to Outputs (what you produce) to Outcomes (what changes) to Impact (long-term difference). It makes your assumptions explicit: "If we invest these resources, do these activities, and reach these people, then we expect these changes to happen, for these reasons."
Theory of Change is promoted by NPC, NCVO, and required by most major funders including the National Lottery Community Fund. Most charities have one on paper. Very few have operationalised it into their data collection systems.
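Operationalising a Theory of Change means turning it into a structure your systems can populate. A minimal sketch of that idea, with every name illustrative rather than drawn from any real framework:

```python
# Sketch: a Theory of Change as an explicit chain, so each stage becomes
# a field that data collection can populate. All values are illustrative.
from dataclasses import dataclass

@dataclass
class TheoryOfChange:
    inputs: list[str]        # what you invest
    activities: list[str]    # what you do
    outputs: list[str]       # what you produce
    outcomes: list[str]      # what changes
    impact: str              # long-term difference

toc = TheoryOfChange(
    inputs=["2 youth workers", "venue hire"],
    activities=["weekly mentoring sessions"],
    outputs=["120 sessions delivered"],
    outcomes=["improved wellbeing (WEMWBS)"],
    impact="young people sustain education or employment",
)
print(toc.outcomes)  # the stage most funders actually want evidence for
```

Once the chain is explicit like this, "operationalising" it is simply ensuring each `outcomes` entry has a corresponding field in your data collection forms.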
A simpler, more linear version of Theory of Change. Logic models map inputs through to outcomes on an if-then basis. They are used by government departments (DCMS Outcome Delivery Plans, HM Treasury Green Book) and are useful for programme-level reporting, though less suited to organisation-wide impact measurement.
Validated outcome measures are specific tools that measure specific changes in a standardised way: WEMWBS for wellbeing, the Outcomes Star for progress across life domains, and clinical measures such as PHQ-9, GAD-7, and Core 10 for mental health.
The Public Services (Social Value) Act 2012 requires commissioners to consider wider social, economic, and environmental benefits in procurement. For charities competing for council-commissioned contracts, this means being able to quantify your social value in monetary terms. The UK Social Value Bank (HACT) uses a wellbeing approach to produce monetary values for social outcomes, and the Social Value Engine is the UK's only accredited SROI calculation platform, with over 300 peer-reviewed financial proxies.
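The arithmetic behind social value monetisation is straightforward: each outcome is multiplied by a peer-reviewed financial proxy, then summed. A minimal sketch, where the proxy values are hypothetical placeholders, not real HACT Social Value Bank figures:

```python
# Sketch: SROI-style monetisation of outcomes using financial proxies.
# Proxy values below are invented for illustration only.

def social_value(outcomes: dict, proxies: dict) -> float:
    """Sum of (people achieving each outcome) x (monetary proxy for it)."""
    return sum(count * proxies[name] for name, count in outcomes.items())

outcomes = {"improved_wellbeing": 40, "moved_into_employment": 12}
proxies = {"improved_wellbeing": 3500.0, "moved_into_employment": 11000.0}  # hypothetical £

print(social_value(outcomes, proxies))  # 40*3500 + 12*11000 = 272000.0
```

The real work in an accredited SROI calculation is not the multiplication but the deduction of deadweight and attribution, which platforms like the Social Value Engine handle with their proxy banks.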
NPC recommends focusing on 3-5 well-chosen metrics measured consistently, rather than 20 sporadic ones. The temptation to measure everything is strong. Resist it. Choose the outcomes that matter most to your beneficiaries and your biggest funders, then measure those properly.
This is the core technology problem. Most charities operate with at least two, often three, disconnected data systems.
Fundraising CRMs (Beacon, Donorfy, Access Charity CRM) manage donor relationships. They track who gave what, when, and how to ask again. They handle Gift Aid claims and campaign management. But they cannot track beneficiary outcomes, case notes, or service delivery data. For more on what these systems do well, see our charity CRM comparison.
Case management systems (Lamplight, Charitylog, Views) track beneficiary work. They record referrals, service usage, support plans, and case notes. Some include outcomes tracking modules. But they do not connect to fundraising data. For a detailed comparison, see our guide to charity case management software.
M&E tools (Upshot, Makerble, Outcomes Star Online) measure outcomes and generate impact reports. But they often sit separately from both the fundraising CRM and the case management system, creating a third silo.
The problem is structural. Each system uses different identifiers for the same people. A beneficiary in your case management system may have a completely different record ID in your survey tool. There is no shared key linking a funder's grant to the services it paid for to the outcomes those services produced.
This three-layer problem breaks down as follows: fundraising data that cannot see beneficiary outcomes, case records that cannot see which grant funded the work, and outcomes data that connects to neither.
The digital skills gap compounds the problem. According to the 2025 Charity Digital Skills Report, 39% of charities rate themselves as poor at using website and analytics data (up from 31% the previous year). Only 18% rate their use of digital in service delivery as "excellent", and 69% cite strained budgets as the biggest barrier to digital progress. The broader challenge of managing charity data effectively, including donor management and Gift Aid, follows the same pattern: critical data trapped in tools that were never designed to talk to each other.
The market splits into three categories: dedicated M&E platforms, CRMs with built-in outcomes tracking, and enterprise solutions. None of them solve the full problem on their own.
| Software | Focus | Strength | Limitation |
|---|---|---|---|
| Upshot | Youth, sport, community outcomes | Purpose-built for funder reporting. Used by 1,400+ organisations across 60 local authorities. Tracks 315,000 sessions and 1.3 million participants annually. | Limited Theory of Change support. Less flexible outside the youth and sport sector. |
| Makerble | Impact measurement dashboards | Free tier available. SROI calculator, geographic heatmaps, real-time dashboards. | Limited case management. Better for reporting than daily service delivery. |
| Outcomes Star Online | Outcomes Star framework | Purpose-built for Star measures. Over 1,000 organisations globally. | Only works with the Outcomes Star framework. Limited general M&E capability. |
| Software | Focus | Outcomes Capability | Limitation |
|---|---|---|---|
| Lamplight | Charity CRM (700+ UK orgs) | Supports WEMWBS, Outcomes Star, Core 10, PHQ-9, GAD-7. Scatter plot reporting. Customisable fields. | No AI-powered reporting. No Theory of Change features. Basic dashboards. |
| Views | Case management (voluntary sector) | Person-centred support planning. Outcome tracking. Strong for disability services. | Niche market position. Limited integrations with other platforms. |
| Salesforce NPSP | Fundraising CRM (10 free licences) | 70+ pre-built reports. Custom dashboards. Basic grant tracking. | Outcomes management requires upgrade to Nonprofit Cloud. Significant configuration needed. TCO typically £50,000+ in year one. |
Selection criteria should follow your reporting needs, not the other way around: start from the frameworks your funders require, the outcome measures you have committed to, and whether you need case management alongside M&E, then shortlist the tools that support them.
For most charities, the honest answer is that no single off-the-shelf product covers fundraising, case management, and impact reporting in one place. You either accept two connected systems or you build something bespoke.
Whether you use off-the-shelf tools or a bespoke system, the architecture follows the same logic. Here is a practical seven-step approach.
Take your Theory of Change (or create one if you do not have it) and identify exactly where in your service delivery each data point gets captured. For every outcome you claim, there should be a specific moment when that data enters your system. If there is no collection point, the outcome is unmeasurable.
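This mapping can be checked mechanically. A minimal sketch, with illustrative outcome and form names, that flags any claimed outcome with no collection point:

```python
# Sketch: verify every Theory of Change outcome has a named collection
# point in service delivery. All names are illustrative.

theory_of_change_outcomes = [
    "improved_wellbeing",
    "reduced_isolation",
    "sustained_employment",
]

collection_points = {
    "improved_wellbeing": "WEMWBS at intake and exit",
    "reduced_isolation": "session attendance register",
    # "sustained_employment" has no collection point yet
}

unmeasurable = [o for o in theory_of_change_outcomes if o not in collection_points]
print(unmeasurable)  # outcomes claimed but never captured: ['sustained_employment']
```

Anything in `unmeasurable` either needs a collection point added to a form or needs to come out of your funder-facing claims.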
Look at the reporting requirements of your three largest funders. Identify the measures that overlap. If the National Lottery wants wellbeing outcomes and your local authority commissioner wants WEMWBS scores, that is one measure serving two reports. Consolidate where possible.
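Finding the overlap is a set operation. A sketch using funder types from the table above, with illustrative measure lists:

```python
# Sketch: find measures that serve more than one funder's report, so one
# collection effort feeds multiple templates. Measure lists are illustrative.
from collections import Counter

funder_requirements = {
    "National Lottery Community Fund": {"WEMWBS", "case_studies", "beneficiary_feedback"},
    "Local authority commissioner": {"WEMWBS", "KPIs", "demographics"},
    "Small family trust": {"outputs", "narrative"},
}

counts = Counter(m for measures in funder_requirements.values() for m in measures)
shared = [m for m, n in counts.items() if n > 1]
print(shared)  # collected once, reported twice: ['WEMWBS']
```

Every measure in `shared` is a consolidation candidate: one instrument, administered once, feeding two or more reports.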
Data collection must happen as part of the work, not as an afterthought. If frontline staff see data entry as a separate task they do at the end of the week, quality will be poor and compliance will be inconsistent. The intake form, the session record, and the exit survey should be part of the service delivery process itself.
If a person appears in your case management system, your survey tool, and your attendance register, they need the same identifier in all three. Without this, you cannot link a person's journey from referral through service delivery to measured outcomes. This is the single most important architectural decision you will make.
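With a shared key in place, assembling a full journey becomes a simple lookup across systems rather than a manual matching exercise. A sketch with illustrative IDs and fields:

```python
# Sketch: one shared ID ("P-1001") linking a person across case
# management, surveys, and attendance. System contents are illustrative.

case_records = {"P-1001": {"referral": "GP", "support_plan": "weekly group"}}
survey_results = {"P-1001": {"wemwbs_intake": 38, "wemwbs_exit": 49}}
attendance = {"P-1001": 14}

def journey(person_id: str) -> dict:
    """Assemble a beneficiary journey from three systems via the shared key."""
    return {
        "case": case_records.get(person_id),
        "outcomes": survey_results.get(person_id),
        "sessions_attended": attendance.get(person_id),
    }

j = journey("P-1001")
print(j["outcomes"]["wemwbs_exit"] - j["outcomes"]["wemwbs_intake"])  # measured change: 11
```

Without the shared key, each of those three lookups would need fuzzy matching on names and dates of birth, which is where most manual compilation time goes.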
Do not wait until a funder report is due to discover your data is incomplete. A live dashboard showing collection rates, outcome measure completion, and programme progress lets you catch gaps in real time rather than three weeks before a deadline.
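A completion-rate metric per programme is the simplest version of such a dashboard. A sketch with illustrative records and field names:

```python
# Sketch: compute data-completion rates per programme so gaps surface
# before a report is due. Records and field names are illustrative.
from collections import defaultdict

records = [
    {"programme": "youth", "wemwbs_exit": 49},
    {"programme": "youth", "wemwbs_exit": None},   # missing exit measure
    {"programme": "employment", "wemwbs_exit": 41},
]

totals, complete = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["programme"]] += 1
    complete[r["programme"]] += r["wemwbs_exit"] is not None

rates = {p: complete[p] / totals[p] for p in totals}
print(rates)  # {'youth': 0.5, 'employment': 1.0}
```

A 50% completion rate three months before a deadline is a staffing conversation; discovered three weeks before, it is a crisis.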
Each funder has a different reporting format. Build templates that map your data fields to each funder's requirements. When a report is due, the template should populate automatically from your database. The three-week manual compilation exercise should become a half-day review process.
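At its core, a funder template is just a mapping from the funder's headings to your internal field names. A sketch with illustrative headings and data:

```python
# Sketch: a funder report template as a mapping from the funder's
# headings to internal data fields. All names and figures are illustrative.

data = {"participants": 512, "wemwbs_mean_change": 8.2, "grant_spend": 48000}

template = {  # funder heading -> internal field name
    "Number of beneficiaries reached": "participants",
    "Average wellbeing change (WEMWBS)": "wemwbs_mean_change",
    "Grant expenditure to date (£)": "grant_spend",
}

report = {heading: data[field] for heading, field in template.items()}
for heading, value in report.items():
    print(f"{heading}: {value}")
```

Each funder gets its own `template`; the underlying `data` is collected once. That separation is what turns the three-week compilation into a half-day review.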
The best architecture in the world fails if staff do not understand why they are collecting data. Frontline workers need to see the connection between the session record they complete and the funder report that secures next year's funding. This is a culture shift, not just a training exercise.
The structural problem is clear. Fundraising data, case management data, and outcomes data sit in separate tools built by separate vendors for separate purposes. Off-the-shelf platforms solve one or two layers of the problem, but none solve all three.
A bespoke system can connect all three layers into a single data architecture. One beneficiary record that links to the grant that funds their support, the services they received, and the outcomes they achieved. Funder reports that generate automatically because the data was structured correctly from the start. No manual Excel consolidation. No three-week compilation exercises.
This is not the right approach for every charity. A small organisation with a single funder and straightforward service delivery will be well served by Lamplight or Upshot. But for charities managing multiple funders with different reporting requirements, delivering services across several programmes, and facing SORP 2026 compliance for the first time, the cost of maintaining disconnected systems (in staff time, reporting delays, and data quality issues) often exceeds the cost of building something that works as a single unit.
For a fuller exploration of bespoke versus off-the-shelf charity software, see our charity software guide.