The opening panel discussion in the 2025 DigiGov Expo's GovTech Theatre, chaired by Miranda Sharp, trustee at the Centre for Cities think tank, focused on artificial intelligence in government. The panel brought together Fawad Qureshi, Global Field CTO at Snowflake; Dr Tom Smith, Director of AI at the Ministry of Housing, Communities and Local Government; and Robin Riley, Defence AI Centre Capability Head at the Ministry of Defence.
The session opened with stark findings from a recent National Audit Office report revealing that government departments have limited understanding of service costs despite spending £450 billion annually on operations. Excess costs arise from manual workarounds for outdated systems and poor-quality data from fragmented sources.
The scale of inefficiency was illustrated through IT support comparisons: a tech company with 10,000 employees maintains just 21 IT support staff - roughly one per 475 employees - whilst in the UK civil service approximately 5% of the half-million workforce provides IT support, roughly one person per 20 devices. The contrast highlighted how tech sector automation and trust-based approaches differ sharply from public sector manual processes and restricted access. One panellist recounted a workshop where 90% of government participants couldn't complete the exercises because they lacked permission to uninstall a simple driver.
Despite the challenges, genuine successes emerged. Work with the Met Office achieved a 19:1 return on investment, with every pound spent generating £19 in social value for the UK economy. In local government, AI-powered transcription tools are cutting the time spent on social care assessments by 87%, crucially freeing social workers to focus on human interaction rather than note-taking.
The Ministry of Defence's AI applications span the business space and the battlespace - from productivity improvements to satellite imagery analysis, radar interpretation, and sonar processing. Throughout, the emphasis remained on human-centricity, one of the MOD's five ethical principles.
A central theme was the relationship between AI and data infrastructure. The panel unanimously agreed that organisations need solid data foundations before expecting AI benefits - you cannot have AI cherries without data cakes. The fundamentals may be unglamorous, but they're essential.
One major public agency exemplified the problem: 2.5 exabytes of data in total, of which only 300 petabytes are unique - the same information copied roughly eight times over across its systems. When a data access query arrives, staff must check as many as ten different systems. The solution involves bringing compute to the data rather than shipping the data repeatedly, maintaining a single source of truth whilst preserving digital sovereignty.
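The duplication problem the panel described can be illustrated with a minimal sketch - all system names and data here are hypothetical, not drawn from the agency discussed. The idea is simply to fingerprint each record by its content so that one canonical copy can serve as the single source of truth, while still recording which systems hold redundant copies:

```python
import hashlib

def deduplicate(records):
    """Keep one canonical copy per unique payload and track where copies live.

    records: list of (system_name, payload_bytes) pairs.
    Returns (canonical, copies), where canonical maps content hash -> payload
    and copies maps content hash -> list of systems holding that payload.
    """
    canonical = {}
    copies = {}
    for system, payload in records:
        digest = hashlib.sha256(payload).hexdigest()
        canonical.setdefault(digest, payload)   # first copy seen becomes canonical
        copies.setdefault(digest, []).append(system)
    return canonical, copies

# Hypothetical example: three systems holding two distinct documents.
records = [
    ("hr_db", b"employee policy v3"),
    ("finance_db", b"employee policy v3"),   # redundant copy of the same content
    ("archive", b"procurement rules 2024"),
]
canonical, copies = deduplicate(records)
print(len(canonical))  # 2 unique documents despite 3 stored records
```

A real deployment would of course run such logic inside the data platform (compute brought to the data) rather than pulling every payload out for hashing, but the accounting is the same: unique content versus total stored copies.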
For central government supporting local authorities, scaling AI across 300+ councils requires dedicated support, robust digital foundations including cybersecurity, and active market shaping to ensure viable supply chains.
The governance message was pragmatic: reuse existing digital approval processes rather than creating parallel bureaucracies. One department head's plea captured the sentiment perfectly: "not another board". Testing governance through real implementations with delivery teams prevents abstract frameworks that harden into bureaucratic obstacles.
The approach borrowed from Government Digital Service principles: "show the thing, do the thing, don't talk about it". Users shouldn't need to understand the technical details any more than they need to understand a search engine's embeddings - they simply need systems that work and demonstrably save time or improve services.
Defence faces unique tensions: the risk of adopting AI unsafely versus the risk of falling behind adversaries who are advancing rapidly. The MOD's policy attempts to strike that balance through three principles: ambitious, safe, and responsible. The same tension exists across government - moving fast enough to realise the benefits whilst ensuring proper safeguards.
Concerns arose about AI-generated content flooding organisations with reports and summaries produced ten times faster than humans could manage. The panel's response emphasised that intelligence remains strictly human. Large language models may appear intelligent but fundamentally aren't - they're tools for problem-solving once problems are identified, but problem-finding remains human work.
Staff using AI tools were given clear responsibility: you own your outputs regardless of how they're generated. AI should handle administrative donkey work - summarising documents, transcribing meetings, freeing humans for insight, interpretation, and recommendations that constitute genuine value. The goal is amplifying human capability, not replacing it.
AI doesn't alter existing data protection obligations for public authorities. What matters is understanding what makes AI systems different: where models get training data, how they make decisions about individuals, and ensuring proper human oversight. The MOD is building assurance frameworks in partnership with implementation teams, including testing through red teaming to verify protections actually work.
For sensitive processing, open-source large language models offer increasingly viable alternatives to cloud services, with major tech companies betting these are sufficient for most use cases.
The skills question received honest answers: government doesn't have all necessary AI skills, but neither does anyone else. The challenge affects legacy organisations across sectors. However, the UK has precedent - Government Digital Service became an international exemplar, and AI presents similar opportunity.
The strategy focuses on growing talent through apprenticeships, graduate schemes, and master's programmes. More fundamentally, it requires building data cultures that treat data as digital fuel rather than digital exhaust - creating feedback loops from every interaction, including failures.
Government cannot compete on salary but can win on mission and purpose. Public service offers meaningful work, and particularly in defence, genuinely exciting challenges. The competition for AI talent is global and intensifying, but organisations that build strong data cultures and focus on mission can attract people who want to make a difference.
The panel's core message emphasised starting with proper foundations, demonstrating benefits through working systems rather than promises, keeping governance proportionate and integrated with existing processes, and never losing sight of the humans these systems exist to serve.
With the most-googled term reportedly being Google itself - users typing it into search bars to access the search engine - government must design for real users at all skill levels. The challenge is technological but fundamentally human: building trust through demonstrated benefits whilst ensuring AI genuinely amplifies human capability rather than creating new barriers or diminishing human agency.
The race to build AI capability must keep pace with the technology itself, but with proper foundations, pragmatic governance, and a focus on human outcomes, government can deliver genuine benefits to citizens whilst maintaining the necessary trust and safeguards.