Building artificial intelligence capability in government is not only about tools and technology. It is about people. Confidence, understanding and the ability to ask the right questions are the foundations that allow new technology to be applied responsibly.
Over the past year Hippo worked with a government department to explore what it takes to build capability at scale. The programme brought together hundreds of public servants to learn, test and question how AI might shape policy and services.
The challenge
AI is high on the agenda across government, but many teams lack a shared understanding of what AI is, how it works and where it might be useful. Misconceptions quickly become barriers. A common one is that you need to be technical to use AI. Another is that AI can be treated as a shortcut to efficiency without considering issues of fairness or trust.
Alongside these skills gaps are questions of responsibility. In the public sector, it is not enough to ask what AI can do. The harder question is what it should do. When decisions touch people’s lives, technology must be held to the same standards of transparency, fairness and accountability that we expect of public services. Building capability is not just a technical exercise. It is about creating confidence and trust.
The approach
The programme was designed in phases, each one building a different aspect of capability. The first phase, called AI Week, was a hands-on learning experience for hundreds of public servants. Over five days people explored AI fundamentals, looked at real-world use cases and worked through the ethical issues that surround new technology. The aim was not to train data scientists but to remove the mystery and show how AI fits into existing ways of working.
From there, the focus moved into Policy Impact Workshops. These sessions brought policy teams together to explore how AI could shape their work. By looking at live policy areas, teams were able to test ideas in context and understand how the technology could support strategic objectives.
Building on AI Week and the Policy Impact Workshops, the next phase will be Design in a Day. This will use rapid prototyping to give teams the chance to explore how AI might shape future services or policies. It is planned as part of the ongoing programme and will focus on encouraging experimentation and showing how ideas can start to be translated into practice.
Breaking down barriers
One of the most valuable aspects of the programme was the AI 101 sessions. These tackled the basics in clear language. What AI is and is not. How it works in practice. Where it connects with other systems. For many participants, this grounding was what allowed them to see beyond the buzzwords.
By making the fundamentals accessible, the sessions helped to shift the focus from fear of the unknown to a place where people could think critically and responsibly about AI. This is essential in government, where decisions must balance innovation with duty of care.
Responsibility at the centre
Every stage of the programme placed responsibility at the core. Efficiency and innovation can be powerful drivers, but they are only valuable when balanced with fairness and trust. The sessions explored bias in training data, the potential impact on jobs and the environmental cost of large-scale computing.
Responsible AI is not a checklist to be added at the end. It has to be embedded from the beginning. For public servants, this means asking difficult questions and being confident enough to challenge assumptions.
What we learned
Creating a confident and questioning workforce is the most effective way to scale AI capability. Some of the key lessons that emerged from the programme include:
- Start with confidence-building, not code. People need to understand the basics before they can apply them responsibly.
- Design ethics into the process from the start. Responsible AI is not an afterthought.
- Use phased, hands-on learning to make adoption real. Awareness alone does not change practice.
The programme showed that capability is built through curiosity, clarity and collaboration. When people feel able to explore new technology without fear, they are better placed to apply it in ways that serve the public.
Keeping trust at the centre
AI will continue to evolve, and so will the way government applies it. What will not change is the need for public trust. When technology is used in public services it carries a responsibility that goes beyond efficiency. It has to work for everyone.
By investing in capability, government can create the conditions for responsible innovation. That means people who understand what AI is, who can see both its potential and its risks, and who feel confident to question and improve the tools they use.
Building AI capability at scale is not about producing experts. It is about creating a workforce that keeps innovation human first, makes the most of partnerships and ensures that public services remain fair, transparent and accountable.
Learn more from Hippo at DigiGov Expo 2025. Their talk, "Upskill, Uplift, Empower: Building AI Capability in Government at Scale," offers a unique look at a successful partnership between Hippo and a government department, focused on facilitating long-term AI capability.