The rise of artificial intelligence has fundamentally shifted how both fraudsters and public sector organisations operate. During a recent webinar, senior counter fraud professionals shared their insights on this evolving landscape and the challenges it presents.
DWP: Scale and Sophistication in Fraud Attacks
Kelly Murphy from the Department for Work and Pensions highlighted how AI has enabled fraudsters to operate at unprecedented scale and pace, particularly during the pandemic when emergency support measures were introduced. While fraudsters are increasingly using AI to enhance existing techniques or create entirely new approaches, Kelly emphasised an important caveat: "There are still relatively unsophisticated methods which the fraudsters use and do so successfully."
The DWP's response has been to establish an Enhanced Review Team to tackle AI-enabled fraud while maintaining vigilance against traditional fraud methods.
NAO: Cross-Government Vulnerabilities
Joshua Reddaway from the National Audit Office revealed that while organisations aren't yet reporting being "overwhelmed" by AI-driven attacks, they are seeing fraudsters use AI as part of traditional phishing and social engineering campaigns. More concerning is the emergence of cross-departmental attacks, where fraudsters compromise data systems in one government department to establish fraudulent identities for use against other departments.
As government becomes increasingly networked through data sharing, Reddaway warns of an "arms race for governance" where departments must improve their data protection while ensuring effective cross-government collaboration. The NAO recommends developing a comprehensive strategy for data sharing across government to provide necessary assurance.
Local Government: Resource Constraints vs. Emerging Threats
Shelley Osborne from local government painted a stark picture: "Fraudsters and cyber criminals are really in the driving seat at the moment with AI." She highlighted emerging risks such as AI-generated images being submitted as false evidence for insurance claims and grant schemes, noting the current difficulty in detecting sophisticated AI-generated content.
Unfortunately, local authorities face significant barriers to adopting AI defences, including cost limitations and competing priorities such as getting existing applications to interface effectively across services, which makes it difficult to justify large AI investments.
GIAA: Balancing Efficiency with Caution
Alan Gibbons from the Government Internal Audit Agency acknowledged AI's potential for driving efficiencies through data processing and pattern recognition. However, he stressed the need for caution when using AI in investigative work, raising concerns about data accuracy, source verification, and the risk of inadvertently training AI systems with sensitive information.
The GIAA has focused on using AI for risk assessment and efficiency gains rather than direct investigation, freeing investigators to spend more time on actual investigative work.
Thank you to our panellists:
- CHAIR: Rachael Tiffen, Director, Public Sector, CIFAS
- Kelly Murphy, Head of Performance, Planning and Workflow, DWP
- Joshua Reddaway, Director, Fraud and Propriety, National Audit Office
- Shelley Osborne, Fraud Prevention Officer, Counter Fraud Shared Service, London Borough of Lambeth
- Alan Gibbons, Senior Counter Fraud Manager, Counter Fraud and Investigation, Government Internal Audit Agency
- Jessica Kimbell, GovNet