Moving Beyond Proof of Concept: The FCA's Journey to AI Maturity

Written by Liuba Pignataro | Jan 26, 2026 7:00:00 AM

The GovTech theatre at DigiGov Expo 2025 hosted Holly Ellis from AWS and Gayathri Shyamsundar from the Financial Conduct Authority for a candid discussion about the real challenges of implementing generative AI in government. With delegates settling into their seats and the familiar hum of an expo in full swing providing the backdrop, Ellis wasted no time in addressing the elephant in the room: why do so many AI initiatives fail to make it past the experimental stage? 

The session opened with a provocative reframing of a widely cited statistic. Gartner research shows that 41% of generative AI prototypes reach production – a figure typically preceded by the word "only" in industry discussions. But Ellis challenged this framing. "I actually think 41% is something we should be quite proud of, particularly in the public sector," she argued. "This is a pivotal moment in time where we are learning, we are experimenting, we are exploring." Quoting Jeff Bezos's 2015 assertion that "invention and failure are inseparable twins," she made a compelling case that the current moment calls for calculated risk-taking rather than guaranteed success.

The crux of the challenge, Ellis explained, isn't technical. Drawing on insights from Amazon CEO Andy Jassy, she emphasised that the biggest obstacles to scaling AI are "cultural, leadership and process oriented rather than technical." With AWS's comprehensive technology stack displayed on screen – from infrastructure through to ready-made applications like Amazon Q for coding assistance – Ellis demonstrated that the tools already exist. "Whatever your level of capability, whatever your level of aspiration is in your organisation, there are different entry points for you and that technology exists today," she said.

The AI Maturity Framework 

The concept of "AI maturity" formed the centrepiece of AWS's approach. Ellis described it as encompassing both technical and, crucially, non-technical capabilities that affect an organisation's ability to adopt AI at scale and create transformational strategic change. The framework spans a spectrum from low maturity – characterised by scattered, siloed data, isolated technical teams, and ad-hoc experiments – to high maturity, where data becomes a strategic asset, cross-functional teams operate with clear responsibilities, and business value measurement drives decision-making.

The stakes are significant. According to research from BCG, organisations with high AI maturity see four times as many use cases reach production and five times the financial impact – yet only 8% of organisations have achieved advanced AI maturity. "We are at the very early stages with generative AI," Ellis acknowledged, "which makes it very hard for us to get to that high level of maturity. But if the foundations are in place, then as those technological advances come along, we are much more ready to implement them."

The FCA's Practical Experience 

Gayathri Shyamsundar's insights provided the practical counterpoint to AWS's framework. As Head of Core Technology at the FCA, she's responsible for all cloud platforms and has been steering the regulator's AI journey for the past three years. Her candid assessment? "We're kind of in the middle." 

The FCA's approach has been methodical and collaborative. In 2024, they published their Frontier AI policy and deployed the UK's first AI testing lab, a sandbox environment where regulated firms and other regulators can collaborate and learn from each other's data and approaches. "This was absolutely critical for us because it gave us an opportunity to collaborate with other firms as well as understand and learn from each other's data," Shyamsundar explained. 

The organisation has moved beyond simple experimentation to develop specific use cases. Their Astro model (since rebranded as Libra) focuses on the Senior Managers Regime, using Amazon Bedrock to define capabilities and roles. In parallel, they're developing AI models to identify patterns in financial crime. "If data is the new oil, the FCA has got lots of oil," Shyamsundar noted. "So how do we use that data we have for the better use of consumers as well as for the market?"
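The session didn't go into implementation detail, but for readers unfamiliar with Amazon Bedrock, the sketch below shows the general shape of querying a Bedrock-hosted foundation model from Python with boto3. The region, model ID, and prompt are illustrative placeholders, not the FCA's actual configuration.

```python
# Illustrative only: a minimal call to a foundation model hosted on Amazon Bedrock.
# The region, model ID, and prompt are placeholders, not the FCA's configuration.
import boto3

client = boto3.client("bedrock-runtime", region_name="eu-west-2")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarise the accountabilities in this role description: ..."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the model's reply as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```

The appeal of this pattern for an organisation like the FCA is that the same API call works across the models Bedrock hosts, so a use case can be prototyped against one model and swapped to another without rearchitecting.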

Breaking Down the Barriers 

When asked about the biggest challenges in building AI maturity, Shyamsundar didn't hesitate: "AI ethics and governance." She was adamant that these cannot be afterthoughts. "How do you bake that in very early in the model? So we get the maturity into the organisation." This is particularly acute for the FCA given its dual role as both regulator and AI adopter. "Being a regulator, it's double challenging. We want to innovate, we want to grow faster, but at the same time there is a bigger responsibility that we have in ensuring that we still regulate the market." 

Cost control emerged as another critical consideration. "Everyone wants to do a proof of concept on something. How do we make sure we have an eye on the cost? What is going to be the cost profile when it all scales up?" Shyamsundar asked. The FCA has been deliberately building what they call an "AI factory model" – bringing together diverse teams from technology, security, legal, and other departments to scale capabilities in a coordinated fashion. 
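To make the cost question concrete, here is a back-of-envelope sketch of the arithmetic Shyamsundar is pointing at: a per-request token cost that looks negligible in a pilot can become a material line item organisation-wide. All prices and volumes below are invented placeholders, not actual AWS pricing.

```python
# Back-of-envelope cost projection for a generative AI proof of concept scaling up.
# All unit prices and volumes are invented placeholders, not actual AWS pricing.

def monthly_cost(requests_per_day: int,
                 input_tokens: int,
                 output_tokens: int,
                 price_in_per_1k: float,
                 price_out_per_1k: float) -> float:
    """Estimate monthly model-inference spend for a single use case."""
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * 30

# A pilot serving a handful of analysts looks cheap...
print(f"Pilot:  £{monthly_cost(200, 2000, 500, 0.002, 0.008):,.2f}/month")
# ...but the same workload rolled out organisation-wide is a different conversation.
print(f"Scaled: £{monthly_cost(20000, 2000, 500, 0.002, 0.008):,.2f}/month")
```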

Training has proved essential. The FCA developed "AI Unlocked," a series of training modules spanning from basic understanding through to coding and configuration. "In a regulatory environment, not all of them are technologists," Shyamsundar observed. "How do we bridge the gap, so everyone appreciates, understands the risks of doing something and ethically that's going to be the key." 

Key Lessons for the Public Sector 

When Ellis asked for the single biggest learning from their journey, Shyamsundar's answer was unequivocal: "Start with a problem statement. You don't need to do AI because you want to – just understand the problem you're trying to solve." 

She emphasised several other critical principles:

- Security, ethics, and regulatory frameworks cannot be afterthoughts; they must be embedded from the outset.
- Stakeholder buy-in is essential, and the vision must be top-down, not bottom-up.
- Technology cannot solve organisational problems alone.
- Cross-functional collaboration is non-negotiable.
- Human-in-the-loop oversight remains absolutely critical. "The models are as good as you train them," she warned. "Always ensure that the human in the loop element is always, always reinforced."

Ellis closed by highlighting AWS's Cloud Adoption Framework for AI, which helps organisations assess their maturity across multiple lenses and develop action plans for scaling. As delegates filed out – many heading to the AWS stand to explore demos and continue conversations – the message was clear: whilst the technology exists today, success in AI adoption depends far more on getting the culture, governance, and processes right than on choosing the latest model.