Public confidence is the next big step in the UK's AI journey

Adam Pettman, Head of Innovation & AI, 2i Testing
24-Jul-2025

The UK government deserves credit for recognising the potential of AI early and acting decisively. While many nations hesitated, the UK took proactive steps to explore AI’s capabilities - an essential move to unlock transformative improvements in the delivery of vital services and ensure taxpayer money is spent wisely. 

This forward-thinking approach has positioned the UK as a serious contender in the global AI race. Initiatives such as the AI Safety Summit, partnerships with leading AI providers and, most recently, a landmark agreement with OpenAI to accelerate safe adoption of generative AI in the public sector all signal that the UK isn't sitting back. Instead, it's rolling up its sleeves and helping shape what's next. That's something we should all applaud. 

However, laying the groundwork is only the first step. The real challenge lies ahead: turning ambition into adoption - and adoption into trust. 

Why trust is the currency of AI adoption 

Think about the last time you signed up with a challenger bank through their app. The process was quick and intuitive; it just worked. That positive experience created instant goodwill. So much so that millions now trust these banks with their savings. 

Government services have a similar opportunity. When citizens engage with public services, their experience determines how they feel about the system as a whole. Imagine a benefits claim processed in hours instead of weeks or an NHS appointment system that predicts and prevents bottlenecks. 

These aren’t abstract promises. They’re achievable outcomes, powered by AI. But the key to unlocking these benefits is public confidence. Without it, even the best technology will struggle to gain traction. 

Putting responsible AI to work 

We’ve heard a lot about “responsible AI”, and rightly so. We’re long past the theory stage. AI now needs to be ethical, fair and explainable in practice. We need to move beyond principles and start proving them through real-world success stories. 

Here’s what that looks like. 

  1. Pilot AI where trust already exists 
    Leverage trusted institutions like the NHS as proving grounds for AI. Show how AI can reduce wait times, improve diagnostic accuracy and optimise resources without compromising care. 
  2. Show tangible results, not theoretical benefits 
    Citizens care about outcomes, not algorithms. Highlight metrics that matter - faster processing times, reduced errors and demonstrable cost savings. 
  3. Prioritise transparency and communication 
    Tell the story behind the success. Explain how data is used, why decisions are fair and where humans remain in control. Address concerns openly to reinforce accountability. 

The role of QA in building trust 

The Government has already laid important foundations with a cross-government initiative headed up by Mibin Boban. Credit goes to him and his team for driving this work and setting standards the public sector can adopt when tackling AI. These frameworks define what good looks like. 

But standards alone aren’t enough. Quality Assurance (QA) for AI is the bridge between principle and practice. 

It’s not enough for AI to simply work. It has to work in a way that is fair, explainable and dependable. At 2i, we focus on four pillars: 

  1. Accuracy – Does the model perform as expected, reliably? 
  2. Performance – Does it deliver value at scale without degrading service? 
  3. Explainability – Can decisions be understood and justified by humans? 
  4. Robustness – Is the system secure, bias-resistant and adaptable to change? 
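
In practice, each of these pillars can be turned into an automated check that runs before an AI-assisted service goes live. The sketch below is purely illustrative, not 2i's actual test suite: it uses a hypothetical rule-based eligibility model and invented thresholds to show how accuracy, performance, explainability and robustness might each become a concrete, repeatable test.

```python
import time

def model(income: float, dependants: int) -> bool:
    """Toy benefits-eligibility rule (hypothetical, for illustration only)."""
    return income < 20_000 or dependants >= 3

# 1. Accuracy: compare predictions against a labelled test set,
#    with an agreed acceptance threshold.
test_set = [((15_000, 0), True), ((25_000, 4), True), ((30_000, 0), False)]
accuracy = sum(model(*x) == y for x, y in test_set) / len(test_set)
assert accuracy >= 0.95  # illustrative threshold

# 2. Performance: the model must stay within a latency budget at scale.
start = time.perf_counter()
for _ in range(100_000):
    model(18_000, 2)
assert time.perf_counter() - start < 5.0  # illustrative budget

# 3. Explainability: every decision carries a human-readable reason.
def explain(income: float, dependants: int) -> str:
    if income < 20_000:
        return "income below threshold"
    if dependants >= 3:
        return "three or more dependants"
    return "no qualifying criterion met"

# 4. Robustness: a tiny input perturbation must not flip the decision.
assert model(19_998, 0) == model(19_997, 0)
```

A real system would replace the toy rule with the deployed model and draw the test set, thresholds and perturbations from agreed, audited standards, but the shape of the checks stays the same.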

These aren’t tick-box exercises. They’re the essential building blocks for public trust. When citizens know that AI has been rigorously tested for fairness and reliability, confidence follows. 

What success looks like 

Imagine a future where renewing your passport, applying for benefits or booking a GP appointment feels as simple as using a well-designed banking app. Where efficiency doesn’t come at the cost of fairness or accountability. 

That’s the standard the UK Government can set. Not just as an adopter of AI, but as a global leader in trustworthy AI deployment. 

The UK has taken the right first steps. Now the focus must shift to proving that these systems work for everyone, every time. By embedding rigorous AI QA into the adoption process, we can ensure innovation and public confidence move forward together. 

Because in the AI era, trust isn’t just an outcome - it’s the foundation.