Dr Laura Gilbert, Senior Director of AI at the Tony Blair Institute and a former member of the 10 Downing Street data science team, delivered a provocative session on Day 2 of DigiGov Expo 2025, challenging the fundamental basis of evidence-based policy: the attempt to predict the future. Opening with a stark question, "What's the point of data and evidence?", she answered simply: "To make better decisions."

Dr Gilbert shared her experience of having her data models dismissed by policy experts, leading to the insight that expert decision-making often relies on personal confidence rather than objective accuracy. To illustrate this flaw, she presented findings from Philip Tetlock's long-running forecasting study, which tracked 284 political "experts", each making roughly 100 predictions, over a 20-year period. The results were damning: experts performed only slightly better than random chance, worse than minimally sophisticated statistical models, and better outside their field of expertise than within it. Yet these experts remained highly confident in their predictions.
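The gap between confident experts and simple statistical baselines can be made concrete with a toy simulation (illustrative numbers only, not drawn from Tetlock's data): an "expert" who calls binary events in the right direction just 55% of the time, but always states 90% confidence, is scored against a model that simply predicts the historical base rate, using the Brier score (lower is better).

```python
import random

random.seed(1)
BASE_RATE = 0.3
N = 10_000

# Simulate binary events that occur at a fixed base rate.
events = [random.random() < BASE_RATE for _ in range(N)]

def brier(preds, outcomes):
    """Mean squared error between probability forecasts and outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(outcomes)

# Overconfident "expert": leans the right way only 55% of the time,
# yet always states 90% or 10% confidence.
expert = [0.9 if (e if random.random() < 0.55 else not e) else 0.1
          for e in events]

# Minimally sophisticated model: always predict the base rate.
model = [BASE_RATE] * N

print(f"expert Brier score:    {brier(expert, events):.3f}")
print(f"base-rate Brier score: {brier(model, events):.3f}")
```

With these assumed numbers the base-rate model scores around 0.21 in expectation, while the overconfident expert scores around 0.37: confidence without accuracy is actively penalised by a proper scoring rule.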
This pattern of misplaced confidence extends beyond policy circles. The session highlighted that 80% of technology-trend predictions prove wrong, regardless of whether they come from self-proclaimed experts, and that 74% of fund managers report having delivered above-average performance; since no more than half of any group can beat its own median, the figure exposes the sector's systemic overconfidence.
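A quick check of the fund-manager arithmetic (a minimal sketch using simulated returns, not real fund data): whatever the shape of the distribution, the share of a group beating its own median can never exceed 50%, so a 74% self-report cannot be accurate.

```python
import random

random.seed(0)
n = 1001
# Simulated annual returns for a cohort of fund managers.
returns = [random.gauss(0.05, 0.1) for _ in range(n)]

median = sorted(returns)[n // 2]
mean = sum(returns) / n

above_median = sum(r > median for r in returns) / n
above_mean = sum(r > mean for r in returns) / n

print(f"above median: {above_median:.1%}")  # can never exceed 50%
print(f"above mean:   {above_mean:.1%}")
```

Strictly speaking, more than half can sit above the *mean* when a few very poor performers drag it down, which is why the median is the cleaner benchmark for "above average" claims.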
The discussion then turned to futurist Ray Kurzweil and his concept of the "singularity": an event horizon beyond which humans cannot envision what will happen, which he predicts for 2045 whilst claiming an 86% success rate for his technology predictions. However, the session presented a reality check: an assessment of Kurzweil's predictions from the LessWrong community deemed only 12 true, 15 weakly true, 52 impossible to decide, 10 weakly false, and none outright false, a record dominated by unverifiable claims rather than confirmed successes. This suggests that even highly regarded futurists struggle with the fundamental challenge of objective ignorance about what lies ahead.

Dr Gilbert then shifted focus to practical, data-backed ways to mitigate poor human judgement, advocating for measured diversity in teams, inducing rationality through engagement with numerical information, and remaining vigilant about cognitive biases like anchoring, where external numbers can dangerously skew decisions.
The presentation took a sobering turn when examining real-world failures of algorithmic decision-making in government. Examples included the UK passport photo checking system, which disproportionately rejected photos of women with darker skin at more than twice the rate of lighter-skinned individuals; the 2020 exam grading algorithm debacle that resulted in street protests and widespread criticism for poor transparency around the model's limitations and appeal processes; and the US criminal justice system's use of proprietary risk assessment algorithms that, despite excluding race as a direct input, introduced racial bias through proxy data sources whilst remaining closed to public scrutiny.
A particularly concerning example from healthcare illustrated the "black box" problem: clinicians using AI diagnostic tools cannot determine whether the model is identifying clinically relevant features, such as airspace opacity or heart border shapes, or relying on spurious signals invisible to a human reader, such as particular pixel values or image-acquisition artefacts that have nothing to do with the underlying disease.
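One common way to probe this concern is occlusion sensitivity: mask patches of the input and watch how the model's score moves. The sketch below (a deliberately flawed toy model, not a real diagnostic system) shows the technique exposing a "classifier" that keys on a corner artefact rather than anything clinically meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(img):
    # A flawed "classifier" that keys on a scanner artefact:
    # brightness in the top-left corner, not anatomy.
    return img[:8, :8].mean()

image = rng.random((64, 64))

# Occlusion sensitivity: zero out each patch and measure the score drop.
base = toy_model(image)
patch = 8
sensitivity = np.zeros((64 // patch, 64 // patch))
for i in range(0, 64, patch):
    for j in range(0, 64, patch):
        masked = image.copy()
        masked[i:i + patch, j:j + patch] = 0.0
        sensitivity[i // patch, j // patch] = base - toy_model(masked)

# Only the top-left cell shows any sensitivity: the model ignores
# everything a clinician would consider clinically relevant.
print(np.round(sensitivity, 3))
```

Occlusion maps are model-agnostic and cheap to run, which is why they are often the first sanity check applied to an opaque image classifier.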
Looking ahead, the session presented a balanced view of AI's potential impacts. On the positive side: increased efficiency, productivity, and cost savings through reduced repetitive tasks; intelligent teaching systems; affordable, targeted, and preventative healthcare; safer roads and workplaces; better decision-making; and even automated space travel. However, these opportunities come with significant risks: easier and cheaper disinformation campaigns; increased vulnerability to fraud and security breaches; potential obsolescence of the human workforce; the risk of catastrophic outcomes; and the potential to widen existing inequalities rather than close them.
The session concluded with a powerful call to action, encapsulated in a quote displayed on screen: "World domination begins with an MVP" (linked to ai.gov.uk). Dr Gilbert urged the tech sector to stop asking "Can we guess what's happening next?" and instead focus on "How do we build the future we want?" The imperative was clear: build digital public goods, free tools and services that empower governments and charities to tackle complex citizen-facing issues, from human trafficking to urban planning. Her final message was unequivocal: "BE PREPARED."
Liuba Pignataro



