24 October 2025

Address to Relationships Australia SA strategic thinking day

Testing what works: Randomised trials and real change

Thank you for the invitation to be part of your strategic thinking day. I’m sorry I can’t be with you in person, but I’m delighted to join you virtually from Ngunnawal country in Canberra to contribute to your conversation about ‘Outcomes that count: Accountability, impact and opportunity’.

Let me start by acknowledging Relationships Australia South Australia, and of course the leadership of your CEO, Dr Claire Ralfs. Your organisation has spent decades helping families navigate life’s hardest moments. Your work changes lives every day, often quietly, always profoundly.

I also want to acknowledge Professor John Lynch, whose Thriving Families initiative demonstrates the power of combining data, systems thinking and compassion. And Nick Tebbey, whose leadership across the Relationships Australia federation ensures that the voice of community services is heard clearly in the national policy debate.

Your work captures the essence of today’s theme: measuring what matters, learning what works and building trust through accountability and transparency.

Why measuring what matters matters

The phrase ‘measuring what matters’ sounds simple, but applying it consistently can transform the way we work.

Too often, what gets measured is what’s easiest to count: inputs and outputs, not outcomes. We know how many clients were seen, how many hours of counselling were delivered, how many workshops were run. But the real question is: did lives improve? Did families grow stronger? Did communities become more connected?

When we focus on outcomes, not activity, something powerful happens. We start to see what works. We learn from success. We fix what isn’t working. And we direct resources where they make the greatest difference.

That’s what drives the government’s Measuring What Matters framework, which creates a shared language for wellbeing and progress. But it only works when that national vision connects with the everyday work of community organisations, where real change happens.

Outcomes are not abstract numbers. They’re stories of people: a parent rebuilding confidence, a couple learning to communicate again, a teenager finding a safe space to talk. Measuring what matters means making human progress visible.

The case for rigorous evaluation

If we want to improve outcomes, we need to be rigorous about how we learn.

Good intentions aren’t enough. Every program looks good on paper, but the real question is: does it work in practice?

That’s where evaluation comes in. Rigorous evaluations – including randomised trials – help us separate hope from evidence. They give us a counterfactual, a way to know what would have happened if we’d done nothing.

It’s not about being cold or clinical. It’s about humility, recognising that our best ideas sometimes fall short, and that knowing what doesn’t work is just as valuable as knowing what does.

An Australian study illustrates the point. In Western Australia, a randomised trial tested the use of infant simulators, life‑like dolls designed to reduce teenage pregnancy by showing students how demanding caring for a baby can be. The theory made sense, and the program was rolled out in schools. But when researchers looked at the results, they found that girls who participated were more likely to become pregnant, not less.

It was a confronting finding, but an important one. Without evaluation, the program might have continued for years, consuming resources and good intentions without achieving its goal. Evidence helped redirect effort to strategies that work.

This is the value of rigorous testing. It protects taxpayers, strengthens accountability and ensures that compassion is matched by effectiveness.

In medicine, we would never approve a new treatment without testing it against alternatives. Social policy deserves the same care.

When evaluation is built into a program from the start, it becomes a tool for learning, not a bureaucratic afterthought.

Transparency and trust

Evaluation also builds trust.

People trust organisations that are open about results, good and bad. When governments and charities share evidence, they show confidence, honesty and respect for the public.

Decades ago, the social scientist Donald Campbell warned of the ‘over‑advocacy trap’, the tendency to exaggerate a program’s effectiveness in order to secure support. The antidote is transparency.

That’s one reason our government established the Australian Centre for Evaluation in the Treasury. Led by Eleanor Williams, the Centre works with departments to embed rigorous evaluation, including randomised trials, into program design. It’s a modest team, operating by persuasion rather than command, but its impact is already significant.

The Centre is collaborating with agencies across employment, health, education and social services. It’s helping departments learn from what works and share lessons across government. And it’s doing so in partnership with external researchers, because the best evidence is co‑produced.

Transparency doesn’t just strengthen accountability; it strengthens democracy.

When decisions are based on evidence rather than ideology or instinct alone, people can see that government is trying to serve them honestly. That’s how we rebuild trust, not through slogans, but through results.

Building a culture of learning in the social sector

Government can set the tone, but culture change depends on leadership within the sector itself.

Relationships Australia SA has been a leader in this space, showing how evaluation can deepen rather than distract from human connection.

Across the community sector, more organisations are moving from compliance reporting to learning systems, using outcomes data to reflect, adapt and improve.

That shift isn’t always easy. Smaller organisations often lack data infrastructure. Funding cycles can discourage experimentation. And evaluation can feel daunting when lives and livelihoods are at stake.

But the alternative is worse. When we don’t evaluate, we end up repeating mistakes, spreading resources thinly across too many programs and missing the chance to scale up those that really work.

Government can help by embedding evaluation funding into program budgets, by improving access to administrative data, and by rewarding learning, not just success.

That’s why initiatives like the Paul Ramsay Foundation’s experimental evaluation grants – supported by the Australian Centre for Evaluation – are so valuable. They help charities test ideas rigorously, share what they learn and strengthen the evidence base for everyone.

Outcomes frameworks aren’t just technical tools; they’re instruments of integrity. They show that our compassion is disciplined by a commitment to truth.

From programs to systems

One of the most interesting things about today’s discussion is the focus on systems, not just programs.

Australia’s biggest social challenges – such as inequality, loneliness and intergenerational disadvantage – are complex and interconnected. They can’t be solved by single projects or quick fixes. They demand long‑term, coordinated approaches that test, learn and adapt.

That’s where outcomes frameworks really shine. When used well, they don’t just measure impact; they shape it. They help funders and service providers stay aligned on shared goals. They make collaboration visible. They turn learning into a collective enterprise.

When I talk to people across the social sector, I hear a consistent message: we know we’re making a difference, but we want to prove it – to ourselves, to funders and to the people we serve.

That mindset is powerful. It shows a shift from compliance to curiosity, from ticking boxes to making change.

When learning becomes part of culture, progress accelerates.

The government’s role: partner and enabler

Government’s role is to be a partner and enabler, creating the conditions for evidence‑informed practice to thrive.

That means building evaluation into funding agreements from the start, not as an optional extra. It means investing in data sharing and linkages, so that outcomes can be tracked over time. And it means supporting charities that are willing to innovate and take measured risks.

We’re working to ensure that human services contracts focus on outcomes, not just activity, and that charities spend less time on paperwork and more time delivering impact.

We’re also strengthening the infrastructure of evidence, through the Australian Centre for Evaluation, through better coordination of government data and through partnerships with researchers, philanthropists and frontline organisations.

This collaborative approach recognises that no single player holds the full picture. The best solutions emerge when government, community and academia work together – combining lived experience with rigorous analysis.

Looking ahead: The promise of evidence

Evidence‑based policymaking is a quiet revolution. It doesn’t always attract attention, but it’s changing how governments and communities think about accountability and impact.

Fifty years ago, social policy relied mostly on intuition. Today, we have thousands of evaluations, each one a lesson in what works and what doesn’t.

The challenge now is to connect those lessons and make them accessible, so that every organisation can learn from what’s been tried before.

Imagine a future where every service, from parenting support to family mediation, could draw on a shared evidence base of proven approaches. Where learning travels faster than failure. That’s the promise of evaluation.

Ultimately, measuring what matters isn’t about counting things. It’s about making progress visible and making sure progress is shared.

Conclusion

At its heart, evaluation is about respect.

Respect for the people we serve, whose time and trust we hold.

Respect for taxpayers, who deserve to know their contributions are making a difference.

And respect for the truth, because honesty about what works is the foundation of improvement.

If compassion gives our work its heart, evidence gives it its backbone.

So to everyone taking part in today’s strategic thinking day, thank you. Thank you for asking the hard questions, for valuing learning, and for keeping the focus on outcomes that truly count.

As WH Auden once said, ‘We are all here on earth to help others — what on earth the others are here for, I don’t know.’ Measuring what matters helps us answer that question. It helps us see whether our help is actually helping.

Together, let’s keep building a social sector that measures what matters, learns from what works, and delivers outcomes that last.