Artificial Intelligence isn't coming to the public sector — it's already here. That was the clear message from a recent webinar panel discussion hosted by Newham Council, the University of East London, and GovNet. Attended by nearly 200 public sector technology leaders, the webinar explored how AI is being deployed not only safely, but ethically and inclusively, delivering practical impact in housing, healthcare, economic resilience, and beyond. The panel, moderated by Nathan Nagaiah (Lead for the Centre for AI, Newham Council), featured Professor Matt Bellgard, Pro-Vice Chancellor at the University of East London and an expert in AI and digital health; Arsalan Engineer, Head of Data at Newham Council with over a decade of experience in public sector transformation; and Professor Nick Zaps, a clinical researcher at Monash University leading innovative applications of AI in healthcare.
The panellists, drawn from academia, data leadership and healthcare research, gave a compelling view into how AI can serve residents, drive policy decisions and support sustainable development, all while maintaining the highest standards of governance.
The session opened with a challenge: How do we make AI work for the public sector, not just technologically, but ethically, inclusively, and at scale? For Nathan, the answer lies in community-first innovation. “We want to demonstrate that innovation doesn't just happen in Silicon Valley or in Whitehall, but it's happening here locally [in Newham], with a focus on fairness and transparency for real public impact.”
That mindset shapes the Newham Data Programme, which underpins the Centre for AI’s work. The Centre’s aim is to address real challenges, such as housing pressures, climate change, or workforce development, by applying AI that is trustworthy, transparent, and human-centred. Newham’s partnership with the University of East London demonstrates how academia and local government can co-create solutions that work in practice, not just on paper.
Professor Bellgard built on this by outlining the “Three Ts” of success: Tools, Trust, and Transparency. Later in the conversation, Nathan added a fourth: Talent. The panel agreed that while there’s no shortage of AI tools, the real question is whether they are fit for purpose in public services: whether they genuinely improve outcomes, whether they respect legal and ethical standards, and whether public servants are equipped to use them effectively.
Arsalan described how Newham’s wealth of data is being harnessed to respond to residents more quickly, optimise decision-making, and manage resources. He emphasised that data quality remains a key challenge, and that ethical use of AI is integral to every deployment.
Ethics was a recurring theme, not as a barrier to innovation, but as its foundation. From inclusive design to regulatory compliance, the conversation repeatedly returned to the importance of doing AI right, particularly when dealing with sensitive data and vulnerable populations. The Centre’s work includes projects on temporary accommodation, social care optimisation, and preparing local residents for future digital jobs — all areas where the impact of technology must be measured not just in efficiencies, but in equity.
Dialling in from Melbourne, Professor Zaps brought a compelling international perspective with his work in healthcare. He shared how machine learning models are being used to predict post-operative delirium, a costly and distressing condition affecting up to 20% of hospitalised patients after surgery. Through a PhD project, his team developed highly accurate predictive models to identify at-risk patients in advance. However, the real barrier wasn't the AI; it was data collection.
“The model works beautifully, but if a hospital doesn’t record medication histories or comorbidities properly, the model can’t function. It’s not a tech gap; it’s a data culture issue.” Nick’s conclusion resonated with many in the webinar: AI is only as good as the infrastructure around it. Governance, integration, and data maturity are every bit as important as the model itself.
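Nick's point is easy to picture in code. The sketch below is purely illustrative and is not the Monash team's actual model: the feature names (age, comorbidity count, count of high-risk medications), the synthetic data, and the choice of a simple logistic regression are all assumptions made for the example. It shows how a well-trained classifier simply cannot score a patient whose medication history was never recorded.

```python
# Illustrative sketch only (assumed features and synthetic data, not the Monash model):
# a risk classifier works fine until a key field is missing from the patient record.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pre-operative features for 500 hypothetical patients.
train = pd.DataFrame({
    "age": rng.integers(50, 90, 500),
    "comorbidity_count": rng.integers(0, 6, 500),
    "high_risk_med_count": rng.integers(0, 4, 500),
})
# Illustrative label: risk rises with age, comorbidities, and medication burden.
risk = (0.04 * (train["age"] - 50)
        + 0.3 * train["comorbidity_count"]
        + 0.4 * train["high_risk_med_count"])
train["delirium"] = (risk + rng.normal(0, 1, 500) > 2.5).astype(int)

model = LogisticRegression().fit(train.drop(columns="delirium"), train["delirium"])

# A new patient whose medication history was never recorded (NaN) cannot be scored.
incoming = pd.DataFrame({"age": [78], "comorbidity_count": [3], "high_risk_med_count": [np.nan]})
try:
    model.predict_proba(incoming)
except ValueError as err:
    # The "data culture" gap in practice: the model is fine, the record-keeping isn't.
    print(f"Cannot score patient: {err}")
```

In a live setting the missing field would have to be imputed, chased up, or flagged for clinical review, which is exactly the governance and data-maturity work the panel described.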
Audience feedback supported this view. A live poll of attendees revealed where public sector professionals see the barriers to AI adoption:
- 65% cited lack of skills and knowledge.
- 51% pointed to data quality and availability.
- 50% raised ethical or legal concerns.
- 37% mentioned lack of funding/resources.
- Just 10% flagged leadership resistance.
For those working in public sector digital and data roles, this session was a timely reminder that AI’s promise doesn’t lie in theoretical models or flashy dashboards, but rather in the ability to solve real problems, safely, fairly, and at scale. That means investing in training, developing robust governance frameworks, and fostering collaboration between government, academia, and the community.
As the discussion ended, Nathan emphasised that the goal of AI in public services isn’t simply automation, but human-centred innovation. He reinforced the need for strong ethics, transparency, and public engagement to ensure AI delivers value safely and inclusively across communities.
In Newham, that journey is already well underway. For councils, departments and public bodies across the UK, the message is clear: ethical AI is not just possible, it’s happening now. The tools are available. The leadership is willing. What is needed is the collective will to ensure AI strengthens the public good at every turn.
If you're keen to find out more about Newham’s AI and data journey, from ethical innovation to real-world implementation, the team from Newham Council and the UK Centre for AI in the Public Sector will be present at DigiGov Expo. It's a great opportunity to connect, ask questions, and explore how local and central government can work together to build the future of ethical, inclusive AI. Register here! And if you missed the webinar, you can catch up on demand here.

Ola Jader