From numbers and anecdotes to credible contribution

Rethinking impact in labour market services

By Ania Mendrek

Published February 27, 2026

Across employment and skills systems, “impact” has become a central expectation: in bids, annual reports, social value sections, and conversations with commissioners and funders. For many organisations, this creates a familiar tension.

Providers are expected to demonstrate impact clearly and confidently, yet they operate in systems where outcomes are shaped by far more than any single service or organisation. Labour markets, employers, policy decisions, welfare systems, and people’s lives all play a role. Making sense of this complexity — and communicating it convincingly — is not straightforward.

Most organisations already publish impact content. What is often harder is explaining how change happens, what role the organisation plays within a wider system, and what can reasonably be claimed with confidence. This is less about effort or intent, and more about the structures and incentives that shape how impact is currently framed.

Why existing approaches often feel unsatisfying

For providers of employment services, formal impact evaluation is frequently impractical, inappropriate, or disproportionate. Randomisation is rarely feasible or ethical. Robust counterfactuals are rarely available. Outcomes are influenced by multiple actors working in parallel and together.

At the same time, procurement and reporting processes increasingly ask for “impact” or “social value” without always being explicit about how these should be evidenced. Understandably, organisations respond by leaning on what is most readily available: delivery data, outcome counts, and illustrative examples of participant journeys.

These approaches are not wrong, but on their own, they often leave important questions unanswered. Experienced commissioners and assessors are rarely looking just for activity or results; they are trying to understand why those results occurred, under what conditions, and what the organisation is genuinely contributing within a complex system.

Credible impact is not about claiming control over outcomes, but about being clear where your influence is strongest.

Shifting the focus: contribution rather than proof

A more useful starting point is to reframe the task. Instead of asking “can we prove impact?”, a more constructive question is:

“What is our contribution to making working lives better, and how does it show?”

This is where a contribution-led theory of change becomes valuable, not as a compliance exercise or a planning template, but as a practical way of making delivery logic explicit.

Used well, a theory of change does not promise causality. It clarifies the steps between activities and outcomes, surfaces assumptions, and makes visible the external conditions that shape results. It helps organisations articulate their role within a wider ecosystem, without implying control over everything that happens next.

Crucially, it also shifts how outcomes are understood. Rather than treating outcomes as a single end point, such as job entry or sustained employment, it recognises change as a sequence. Early progress, confidence, readiness, and engagement are framed as legitimate and necessary steps, not as secondary or “soft” results. Non-linear journeys and pauses become part of the story, rather than something to explain away.

Evidence-informed does not mean data-heavy

One of the most common misconceptions in the sector is that credible impact requires perfect or comprehensive data. In reality, evidence-informed practice starts from the data organisations actually have, which is often partial, constrained, or fragmented, and works out what can be claimed responsibly within those limits.

A disciplined contribution framework does not inflate weak data. It does the opposite. It helps organisations avoid over-interpreting what they collect, and protects them from making claims that cannot be supported.

This requires being explicit about boundaries. Before–after data can illustrate change, but not causality. Case material can explain how change happens, but not prove that it did. External data can enrich understanding, but does not transfer control. Making these distinctions clear tends to strengthen credibility rather than weaken it.

Structured qualitative evidence plays an important role here. When used systematically, through participant journeys, consistent feedback loops, or analysed practitioner insight, it moves beyond illustration and becomes evidence about mechanisms, barriers, and enablers of change.

Why acknowledging limits strengthens credibility

There is often concern that acknowledging assumptions or dependencies might weaken an organisation’s position, particularly in competitive procurement environments. In practice, the opposite is often true.

Commissioners and contract managers already understand that employment outcomes are system-driven. What tends to raise questions is not realism, but confidence without explanation. Claims that move directly from activity to impact are often quietly discounted by experienced readers.

There is a subtle but important distinction:

  • Less effective framing: “we cannot control this, so we cannot claim it.”

  • Stronger framing: “this outcome depends on multiple factors; our role is specifically X, Y, and Z, and here is the evidence for that contribution.”

The second signals maturity, professionalism, and system awareness. It does not diminish the organisation’s value; it clarifies where its influence is strongest and why that matters.

In complex systems, clarity is an asset.

Impact as infrastructure, not a last-minute exercise

Perhaps the most significant shift is to stop treating impact work as something that happens only when a bid is due or a report is required. Contribution-led frameworks work best when they are embedded over time.

Early on, they help clarify delivery logic and outcome ambition. During delivery, they provide a shared reference point across teams. Later, they underpin procurement narratives, reporting, and evaluation, rather than forcing organisations to reconstruct logic under pressure.

In a context where bids are increasingly supported by AI-driven drafting tools, this underlying coherence matters even more. The question is no longer who can write fastest, but whether there is a clear, defensible logic for those tools to draw from.

Why now

Employment and skills systems are evolving towards more fragmented, locally commissioned contracts, a greater need for partnership models, and closer scrutiny of social value. In these environments, exaggerated claims become risky, and clarity becomes an asset.

In a sector where many organisations are doing strong work under challenging conditions, credibility increasingly comes from being precise about contribution, not from claiming control over outcomes that no one truly controls.

Impact, in this sense, is not about proving more. It is about explaining better, with confidence, proportion, and honesty.