Part of the Charity Software Guide
Charity · May 2026 · 13 min read

Charity Impact Reporting: How to Build a System That Actually Satisfies Funders (2026)

Most charities know their funders want outcomes data. The problem is not a lack of ambition. It is a lack of infrastructure. Fundraising data lives in one system, case records in another, and survey results on someone's laptop. Compiling a single funder report can take three weeks of manual work. SORP 2026 now makes impact reporting mandatory for all tiers. This article is a practical guide to building the data architecture that connects service delivery to funder-ready reports, with honest assessments of the software that can help.

Speak to us about charity software · +44 7494 618 651 · Mon to Fri, 9am to 6pm

  • 36% of charities feel confident measuring their impact (NPC)
  • 89% of funders require outcomes reporting in grant applications
  • SORP 2026 makes impact reporting mandatory for all charity tiers

Why Most Charity Impact Reports Fail (And What Funders Actually Want)

There is a gap between what charities report and what funders need. Only 36% of charities feel confident measuring their impact, yet 89% of funders require some form of outcomes reporting in grant applications and end-of-grant reports. The numbers tell the story: just 8% of charities report their impact comprehensively, while 68% provide some outcome information that rarely meets funder expectations.

The root cause is confusion between outputs, outcomes, and impact. These terms are not interchangeable.

  • Outputs are what you did. "We ran 120 sessions and 500 people attended."
  • Outcomes are what changed. "72% of participants reported improved confidence in managing their finances."
  • Impact is the long-term difference. "Participants reduced their reliance on emergency debt services by 40% over 18 months."

Most charity reports stop at outputs. Funders want outcomes. The best funders want to understand the connection between what you did and what changed, with evidence to support the claim. "We helped 500 people" is activity reporting, not impact reporting.

Funders typically require five types of evidence: quantitative output data, measured outcomes against a baseline, financial evidence showing how the grant was spent, case studies illustrating individual journeys, and beneficiary feedback demonstrating satisfaction and perceived change.

What Different Funders Require (A Practical Breakdown)

Not all funders ask for the same thing. The reporting burden varies enormously depending on who funds you, and most charities report to several funders simultaneously, each with different frameworks and timelines.

  • National Lottery Community Fund (£10,000 to £1m+): reports 6-monthly or annually, covering outputs, outcomes, beneficiary feedback, and case studies.
  • Large trusts and foundations (£10,000 to £500,000): reports annually or 6-monthly, covering Theory of Change alignment, outcome data, and financial evidence.
  • Small and family trusts (£1,000 to £25,000): end-of-grant report only, covering outputs, basic outcomes, and a narrative.
  • Local authority commissioners (£50,000 to £2m+): reports quarterly, covering KPIs, outcome measures, demographic breakdowns, and PROM scores.
  • Corporate funders (£5,000 to £100,000): reports 6-monthly or annually, covering outputs, stories, photos, and social media metrics.
  • Government departments (DCMS, DHSC) (£100,000 to £10m+): reports quarterly or monthly, covering standardised outcome frameworks and financial returns.

Grant administration costs approximately £6,600 per grant when you account for the full cycle of application, reporting, and monitoring. For a charity managing ten grants simultaneously, that is £66,000 in administrative overhead before you factor in the staff time spent compiling reports from disconnected systems.

There is a positive trend worth noting. Over 150 funders (collectively making grants worth over £1 billion in 2023-24) have signed IVAR's Open and Trusting commitments, pledging proportionate reporting requirements. But "proportionate" still means evidence-based. It means fewer metrics, not no metrics.

SORP 2026 Makes Impact Reporting Mandatory

The Charities Statement of Recommended Practice (SORP) 2026 applies to accounting periods starting on or after 1 January 2026. The most significant change: impact reporting moves from "nice to have" to a mandatory element of the Trustees' Annual Report for all charities.

SORP 2026: Three-tier reporting requirements.

Tier 1 (under £500k income): Must answer: "In what way has the charity's work made a difference to the circumstances of its beneficiaries?"

Tier 2 (£500k to £15m): Must also explain how well the charity performed against its aims, the measures or indicators used to assess performance, and the outputs achieved by its activities.

Tier 3 (over £15m): All of the above, plus mandatory ESG disclosures, fundraising activity review, and comprehensive sustainability reporting.

The Tier 2 requirement is the one that will catch most charities off guard. It is not enough to describe what you did. You must disclose the measures and indicators you used to assess performance. If you do not have a measurement framework, SORP 2026 requires you to explain why.

This connects directly to the Charity Commission annual return. Charities with income over £25,000 must already submit their Trustees' Annual Report, accounts, and an independent examiner's or auditor's report. From October 2026, the audit threshold rises to £1.5m and the independent examination threshold increases from £25,000 to £40,000. The reporting requirements are shifting towards fewer charities needing formal audits but more charities needing structured impact evidence.

Timeline note: SORP 2026 applies to accounting periods starting on or after 1 January 2026. If your financial year runs April to March, your first SORP 2026 report will cover the period starting 1 April 2026. You have time to prepare, but not much.

Outcomes Frameworks: Choosing the Right One for Your Charity

Before you choose software, you need to decide what you are measuring. An outcomes framework gives you the structure. Several are in common use across the UK charity sector.

Theory of Change

The foundation of most impact measurement. A Theory of Change maps: Inputs (what you invest) to Activities (what you do) to Outputs (what you produce) to Outcomes (what changes) to Impact (long-term difference). It makes your assumptions explicit: "If we invest these resources, do these activities, and reach these people, then we expect these changes to happen, for these reasons."

Theory of Change is promoted by NPC, NCVO, and required by most major funders including the National Lottery Community Fund. Most charities have one on paper. Very few have operationalised it into their data collection systems.

Logic models

A simpler, more linear version of Theory of Change. Logic models map inputs through to outcomes on an if-then basis. They are used by government departments (DCMS Outcome Delivery Plans, HM Treasury Green Book) and are useful for programme-level reporting, though less suited to organisation-wide impact measurement.

Validated outcome measures

Specific tools that measure specific changes in a standardised way:

  • Outcomes Star: Used by over 1,000 organisations globally, including 500+ charities and 170 local authorities. Over 6 million Stars completed. Domain-specific versions include the Homelessness Star, Recovery Star, and Work Star.
  • WEMWBS (Warwick-Edinburgh Mental Wellbeing Scale): The standard for health and wellbeing programmes across the UK.
  • PHQ-9 and GAD-7: Clinical measures for depression and anxiety, increasingly required by NHS commissioners for VCSE contracts.
  • ONS4: The Office for National Statistics four subjective wellbeing questions, used for population-level comparisons.

Social Value and SROI

The Public Services (Social Value) Act 2012 requires commissioners to consider wider social, economic, and environmental benefits in procurement. For charities competing for council-commissioned contracts, this means being able to quantify your social value in monetary terms. The UK Social Value Bank (HACT) uses a wellbeing approach to produce monetary values for social outcomes, and the Social Value Engine is the UK's only accredited SROI calculation platform, with over 300 peer-reviewed financial proxies.
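
To make the wellbeing-valuation approach concrete, here is a minimal Python sketch of the basic SROI-style arithmetic. The proxy value and deadweight adjustment below are illustrative placeholders, not published HACT or Social Value Engine figures.

```python
# Hypothetical SROI-style calculation. The proxy value and deadweight
# adjustment are illustrative placeholders, not published HACT figures.
people_achieving_outcome = 40
proxy_value_per_person = 3_000   # hypothetical wellbeing value per person, GBP
deadweight = 0.25                # share of the change assumed to happen anyway

social_value = people_achieving_outcome * proxy_value_per_person * (1 - deadweight)
print(f"£{social_value:,.0f}")  # £90,000
```

The deadweight adjustment is the part commissioners scrutinise most closely: it is your estimate of how much of the measured change would have happened without your service.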

NPC recommends focusing on 3-5 well-chosen metrics measured consistently, rather than 20 sporadic ones. The temptation to measure everything is strong. Resist it. Choose the outcomes that matter most to your beneficiaries and your biggest funders, then measure those properly.

The Data Silo Problem (Why Your CRM Cannot Produce Impact Reports)

This is the core technology problem. Most charities operate with at least two, often three, disconnected data systems.

Fundraising CRMs (Beacon, Donorfy, Access Charity CRM) manage donor relationships. They track who gave what, when, and how to ask again. They handle Gift Aid claims and campaign management. But they cannot track beneficiary outcomes, case notes, or service delivery data. For more on what these systems do well, see our charity CRM comparison.

Case management systems (Lamplight, Charitylog, Views) track beneficiary work. They record referrals, service usage, support plans, and case notes. Some include outcomes tracking modules. But they do not connect to fundraising data. For a detailed comparison, see our guide to charity case management software.

M&E tools (Upshot, Makerble, Outcomes Star Online) measure outcomes and generate impact reports. But they often sit separately from both the fundraising CRM and the case management system, creating a third silo.

The result: Donations sit in one database. Case records live in another. Survey results are stored in a third platform, or worse, on someone's laptop. When a funder report is due, someone spends three weeks pulling data from each system, reconciling it manually in Excel, and hoping the numbers add up.

The problem is structural. Each system uses different identifiers for the same people. A beneficiary in your case management system may have a completely different record ID in your survey tool. There is no shared key linking a funder's grant to the services it paid for to the outcomes those services produced.

This three-layer problem breaks down as follows:

  1. Collection layer: Data enters through case management, attendance registers, surveys, and intake forms. Different tools capture different data at different times.
  2. Storage layer: Data lives in separate databases with no shared identifiers. Merging records requires manual matching.
  3. Reporting layer: Funder reports require joining data from multiple sources, almost always manually in Excel.
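
The storage-layer problem is easiest to see in miniature. A short Python sketch (all names, IDs, and scores are illustrative) of two systems holding the same people under different identifiers, where the only available join key is a name string:

```python
# Illustrative records: the same people held under different IDs
# in a case management system and a survey tool.
case_records = [
    {"cms_id": "CM-104", "name": "Sam Carter", "sessions_attended": 12},
    {"cms_id": "CM-221", "name": "Priya Shah", "sessions_attended": 8},
]
survey_results = [
    {"survey_id": "S-77", "name": "Sam Carter", "wemwbs_change": 6},
    {"survey_id": "S-91", "name": "P. Shah", "wemwbs_change": 4},  # name recorded differently
]

# Without a shared identifier, the only join key is the name string.
by_name = {r["name"]: r for r in survey_results}
matched, unmatched = [], []
for record in case_records:
    survey = by_name.get(record["name"])
    (matched if survey else unmatched).append(record["name"])

print(matched)    # ['Sam Carter']
print(unmatched)  # ['Priya Shah']  -- lost to a formatting difference
```

One abbreviated name and the link between eight sessions of support and a measured wellbeing change is gone. Multiply that by hundreds of beneficiaries and you have the three-week manual reconciliation exercise.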

The digital skills gap compounds the problem. According to the 2025 Charity Digital Skills Report, 39% of charities rate themselves as poor at using website and analytics data (up from 31% the previous year), only 18% rate their use of digital in service delivery as "excellent", and 69% cite strained budgets as the biggest barrier to digital progress. The broader challenge of managing charity data effectively, from donor management to Gift Aid records, follows the same pattern: critical data trapped in tools that were never designed to talk to each other.

Software That Handles Impact Reporting (What to Look For)

The market splits into three categories: dedicated M&E platforms, CRMs with built-in outcomes tracking, and enterprise solutions. None of them solve the full problem on their own.

Dedicated M&E platforms

  • Upshot (youth, sport, and community outcomes): purpose-built for funder reporting; used by 1,400+ organisations across 60 local authorities, tracking 315,000 sessions and 1.3 million participants annually. Limitations: limited Theory of Change support, and less flexible outside the youth and sport sector.
  • Makerble (impact measurement dashboards): free tier available, with an SROI calculator, geographic heatmaps, and real-time dashboards. Limitations: limited case management; better for reporting than daily service delivery.
  • Outcomes Star Online (Outcomes Star framework): purpose-built for Star measures; used by over 1,000 organisations globally. Limitations: only works with the Outcomes Star framework, with limited general M&E capability.

CRMs with built-in outcomes tracking

  • Lamplight (charity CRM, 700+ UK organisations): supports WEMWBS, Outcomes Star, Core 10, PHQ-9, and GAD-7, with scatter plot reporting and customisable fields. Limitations: no AI-powered reporting, no Theory of Change features, and basic dashboards.
  • Views (case management for the voluntary sector): person-centred support planning and outcome tracking; strong for disability services. Limitations: niche market position and limited integrations with other platforms.
  • Salesforce NPSP (fundraising CRM, 10 free licences): 70+ pre-built reports, custom dashboards, and basic grant tracking. Limitations: outcomes management requires an upgrade to Nonprofit Cloud, significant configuration is needed, and total cost of ownership is typically £50,000+ in year one.

What to look for when choosing

The selection criteria should follow your reporting needs, not the other way around:

  • Does the software support the outcome frameworks your funders require (Outcomes Star, WEMWBS, PHQ-9)?
  • Can it integrate with your existing fundraising CRM, or does it create another silo?
  • Can it generate funder-ready reports directly, or does data still need to be exported and reformatted?
  • Does it handle your data collection at the point of service delivery, or is it only a reporting layer on top?
  • What are the real costs, including implementation, training, and ongoing administration?

For most charities, the honest answer is that no single off-the-shelf product covers fundraising, case management, and impact reporting in one place. You either accept two connected systems or you build something bespoke.

How to Build Your Impact Reporting System (Step by Step)

Whether you use off-the-shelf tools or a bespoke system, the architecture follows the same logic. Here is a practical seven-step approach.

Step 1: Map your Theory of Change to data collection points

Take your Theory of Change (or create one if you do not have it) and identify exactly where in your service delivery each data point gets captured. For every outcome you claim, there should be a specific moment when that data enters your system. If there is no collection point, the outcome is unmeasurable.
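
The mapping exercise can be as simple as a table of outcome against collection point, with gaps made explicit. A Python sketch (outcome names and collection points are hypothetical examples, not a prescribed framework):

```python
# Hypothetical Theory of Change outcomes mapped to the moment each
# data point is captured. None marks an outcome with no collection point.
outcome_collection_points = {
    "improved financial confidence": "exit survey (week 12)",
    "sustained engagement": "session attendance register",
    "reduced reliance on debt services": None,  # claimed, but never captured
}

# Any outcome without a collection point is, by definition, unmeasurable.
unmeasurable = [o for o, point in outcome_collection_points.items() if point is None]
print(unmeasurable)  # ['reduced reliance on debt services']
```

Running this audit on a real Theory of Change usually surfaces at least one claimed outcome that no form, survey, or register actually captures.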

Step 2: Choose 3-5 outcome measures that align with your biggest funders

Look at the reporting requirements of your three largest funders. Identify the measures that overlap. If the National Lottery wants wellbeing outcomes and your local authority commissioner wants WEMWBS scores, that is one measure serving two reports. Consolidate where possible.
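
The overlap check is a simple set operation once the requirements are written down. A sketch (funder names and required measures are illustrative):

```python
from collections import Counter

# Hypothetical reporting requirements for three funders.
funder_measures = {
    "National Lottery Community Fund": {"WEMWBS", "beneficiary feedback", "case studies"},
    "Local authority commissioner": {"WEMWBS", "PHQ-9", "demographic breakdowns"},
    "Small family trust": {"outputs", "narrative"},
}

# Count how many funders require each measure; anything required by
# more than one funder is a consolidation candidate.
counts = Counter(m for measures in funder_measures.values() for m in measures)
shared = sorted(m for m, n in counts.items() if n > 1)
print(shared)  # ['WEMWBS']
```

Here one WEMWBS collection cycle would feed two separate funder reports, which is exactly the kind of consolidation to look for.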

Step 3: Build collection into service delivery workflows

Data collection must happen as part of the work, not as an afterthought. If frontline staff see data entry as a separate task they do at the end of the week, quality will be poor and compliance will be inconsistent. The intake form, the session record, and the exit survey should be part of the service delivery process itself.

Step 4: Establish a single beneficiary identifier across systems

If a person appears in your case management system, your survey tool, and your attendance register, they need the same identifier in all three. Without this, you cannot link a person's journey from referral through service delivery to measured outcomes. This is the single most important architectural decision you will make.
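
The shared-identifier architecture can be sketched as a small relational schema, shown here in SQLite. Table and column names are illustrative, not a prescribed design; the point is that every table references the same beneficiary ID, so one query can walk from funder to service to outcome:

```python
import sqlite3

# Minimal sketch: one beneficiary record that every other table references.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE beneficiaries (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE grants        (id INTEGER PRIMARY KEY, funder TEXT);
    CREATE TABLE sessions     (beneficiary_id INTEGER REFERENCES beneficiaries(id),
                               grant_id INTEGER REFERENCES grants(id),
                               attended_on TEXT);
    CREATE TABLE outcomes     (beneficiary_id INTEGER REFERENCES beneficiaries(id),
                               measure TEXT, score_change INTEGER);
""")
db.execute("INSERT INTO beneficiaries VALUES (1, 'Sam Carter')")
db.execute("INSERT INTO grants VALUES (1, 'National Lottery Community Fund')")
db.execute("INSERT INTO sessions VALUES (1, 1, '2026-04-14')")
db.execute("INSERT INTO outcomes VALUES (1, 'WEMWBS', 6)")

# One query links the funder to the sessions it paid for and the
# outcome those sessions produced.
row = db.execute("""
    SELECT g.funder, COUNT(s.attended_on), o.measure, o.score_change
    FROM sessions s
    JOIN grants g ON g.id = s.grant_id
    JOIN outcomes o ON o.beneficiary_id = s.beneficiary_id
    GROUP BY g.funder, o.measure
""").fetchone()
print(row)  # ('National Lottery Community Fund', 1, 'WEMWBS', 6)
```

Whether the real system is a case management product, a CRM, or a bespoke database, the principle is the same: if the join in that query is impossible, so is the funder report.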

Step 5: Set up automated dashboards for live monitoring

Do not wait until a funder report is due to discover your data is incomplete. A live dashboard showing collection rates, outcome measure completion, and programme progress lets you catch gaps in real time rather than three weeks before a deadline.
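
A dashboard metric of this kind is trivial to compute once the data is linked. A sketch of a collection-rate check (the records are illustrative):

```python
# Live-monitoring sketch: what share of active beneficiaries have both a
# completed baseline and a follow-up measure?
beneficiaries = [
    {"id": 1, "baseline": True,  "follow_up": True},
    {"id": 2, "baseline": True,  "follow_up": False},
    {"id": 3, "baseline": False, "follow_up": False},
    {"id": 4, "baseline": True,  "follow_up": True},
]

complete = sum(1 for b in beneficiaries if b["baseline"] and b["follow_up"])
rate = complete / len(beneficiaries)
print(f"Outcome measure completion: {rate:.0%}")  # Outcome measure completion: 50%
```

A 50% completion rate spotted mid-programme is a staffing conversation; the same rate discovered at report time is a gap in your evidence.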

Step 6: Create funder report templates that pull from your data automatically

Each funder has a different reporting format. Build templates that map your data fields to each funder's requirements. When a report is due, the template should populate automatically from your database. The three-week manual compilation exercise should become a half-day review process.
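
At its simplest, a report template is a piece of text with named fields that map onto database fields. A minimal Python sketch (the template wording and field names are hypothetical; in practice the values would be populated from your database rather than hard-coded):

```python
from string import Template

# Hypothetical funder report template; each placeholder maps to a field
# your database can supply.
template = Template(
    "In the reporting period, we delivered $sessions sessions to "
    "$participants participants; $outcome_pct% reported improved wellbeing."
)

# Hard-coded here for illustration.
report_data = {"sessions": 120, "participants": 500, "outcome_pct": 72}
print(template.substitute(report_data))
```

Real funder templates are longer and format-specific, but the mechanism scales: one template per funder, all drawing on the same underlying fields.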

Step 7: Train frontline staff on why data quality matters

The best architecture in the world fails if staff do not understand why they are collecting data. Frontline workers need to see the connection between the session record they complete and the funder report that secures next year's funding. This is a culture shift, not just a training exercise.

Common Mistakes That Waste Time and Money

  • Measuring everything instead of what matters. Twenty sporadic metrics are worse than five consistent ones. Choose the outcomes your beneficiaries care about and your funders require, then measure those properly.
  • Treating data collection as an end-of-project task. If you only measure outcomes at the end of a programme, you have no baseline and no ability to demonstrate change over time.
  • Choosing software before defining what you need to measure. The tool should serve the framework, not the other way around. Define your Theory of Change first, then find the technology that supports it.
  • Running parallel systems with no integration. Two systems that do not share data are worse than one system that does half of what you need. At least with a single system, the data is in one place.
  • Confusing outputs with outcomes in funder reports. "We ran 120 sessions" is an output. "72% of participants reported improved wellbeing" is an outcome. Funders know the difference, and they notice when you do not.

The Case for a Unified System

The structural problem is clear. Fundraising data, case management data, and outcomes data sit in separate tools built by separate vendors for separate purposes. Off-the-shelf platforms solve one or two layers of the problem, but none solve all three.

A bespoke system can connect all three layers into a single data architecture. One beneficiary record that links to the grant that funds their support, the services they received, and the outcomes they achieved. Funder reports that generate automatically because the data was structured correctly from the start. No manual Excel consolidation. No three-week compilation exercises.

This is not the right approach for every charity. A small organisation with a single funder and straightforward service delivery will be well served by Lamplight or Upshot. But for charities managing multiple funders with different reporting requirements, delivering services across several programmes, and facing SORP 2026 compliance for the first time, the cost of maintaining disconnected systems (in staff time, reporting delays, and data quality issues) often exceeds the cost of building something that works as a single unit.

For a fuller exploration of bespoke versus off-the-shelf charity software, see our charity software guide.
