Legaltech had its biggest year ever in 2025, with around $6 billion of new investment driving technology innovation at a pace the legal industry has never seen. Is this good news for courts and criminal justice agencies? With the right approach, yes. In this article we explore how courts can benefit without taking on unacceptable risks.
Legaltech Has Grown Up
In 2025, global investment in legal technology reached $6 billion (Legalcomplex / Artificial Lawyer, January 2026). The UK Ministry of Justice renewed funding for the LawtechUK programme in March 2025, supporting more than 176 legaltech startups since 2023. The tools emerging from this wave are serious: document review, source-cited legal research, plain-language case summaries, scheduling intelligence — capabilities that would have required a five-year procurement programme a decade ago, now arriving as software subscriptions.
For justice organisations, this is a remarkable opportunity. But remarkably few of these tools have made it inside courts. Why?
Because Courts Are Different
Legaltech startups are optimised for speed. They iterate, they ship, they get feedback from users, they improve. Courts are optimised for something else entirely: the integrity of the administration of justice. A court that gets something wrong does not just lose a customer; it can damage someone’s liberty, their livelihood, or public confidence in the rule of law.
Regulation reflects that difference. Under the EU AI Act, which took effect in August 2024 and reaches full applicability for high-risk systems in August 2026, any AI used in “the administration of justice and democratic processes” is classified as high-risk (EU Regulation 2024/1689 “EU AI Act”, Annex III). High-risk means a specific set of obligations: risk management systems, data governance, technical documentation, record-keeping of outputs, transparency to users, human oversight, and demonstrated accuracy and robustness.
The UK has taken a more principles-based path, but the direction is the same. The Judicial Office published its first AI guidance in December 2023, refreshed it in April 2025, and refreshed it again in October 2025 - the speed of those updates is itself a signal (Courts and Tribunals Judiciary, Artificial Intelligence (AI) Guidance for Judicial Office Holders, 31 October 2025). A Lead Judge for Artificial Intelligence has been appointed. The guidance covers hallucinations, bias, confidentiality, and the personal responsibility of judicial office holders for anything produced in their name. On both sides of the Atlantic and beyond, the message is consistent: AI in justice is welcome, but it must be governed.
Fast-moving innovation and slow-moving rules are both doing their jobs properly. The question is what sits between them.
The Platform in the Middle
How does a court adopt ten different tools without running ten different compliance programmes? Each tool comes with its own vendor, its own data handling, its own update cycle, and its own compliance story. By August 2026, every one of them will need to demonstrate conformance with a high-risk AI regime that did not exist when they were built.
The answer most procurement teams are quietly arriving at is: you do not buy ten tools. You buy a platform that lets you adopt ten tools safely.
A court-specific case management system is that platform. Done well, it provides the governance wrapper that high-risk AI deployment requires:
- Record-keeping - every AI prompt, every output, every human decision stored against the case, time-stamped, source-cited, retrievable years later.
- Human oversight - role-based workflows that ensure an AI output is reviewed by a qualified person before it influences any substantive decision.
- Transparency - audit trails that let a court answer, years after the fact, which tool was used, which version, which prompt, which source material.
- Data governance - case data stays inside the court’s own environment; the AI tool sees only what the workflow permits, for only as long as it needs to.
- Substitution safety - when the AI model is updated, retired, or replaced, the governance framework stays constant, and the case record remains intact.
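As an illustration only (this is a hypothetical schema, not Casedoc's actual data model), the kind of compliance event such a platform would record for every AI interaction can be sketched in a few lines of Python:

```python
from dataclasses import dataclass, field, replace
from datetime import datetime, timezone

@dataclass(frozen=True)
class ComplianceEvent:
    """One AI interaction, logged against a case (illustrative fields only)."""
    case_id: str
    tool_name: str             # which tool was used
    tool_version: str          # which version (supports substitution safety)
    prompt: str                # what was asked
    output: str                # what was produced
    source_refs: list          # citations back to the source bundle
    reviewed_by: str = ""      # human oversight: who checked the output
    review_decision: str = ""  # e.g. "accepted", "amended", "rejected"
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_reviewed(self) -> bool:
        # An AI output should not influence a substantive decision
        # until a qualified person has recorded a review.
        return bool(self.reviewed_by and self.review_decision)
```

Because the record is immutable and time-stamped, the review step is appended as a new event rather than an edit, which is what makes the trail answerable years later.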
Figure 1 shows what this looks like in practice: three tools - each from a different vendor, each with a different capability - connected to a single platform, with their compliance status visible at a glance. The governance progress bars at the bottom tell the procurement lead everything they need to know: which obligations are met, where action is outstanding, and whether any tool is drifting out of alignment.

Figure 1 - The platform dashboard showing connected tools, their compliance status, and governance coverage. Tools plug in and out; the governance framework stays constant (tools shown, other than Lagaviti, are illustrative; all data is fictional and for demo purposes).
None of this is novel. These are exactly the deployer obligations the EU AI Act spells out. The question is where they live. Build them into each AI tool and you have ten compliance programmes. Build them into the platform the tools plug into, and you have one.
What Plugs in Today
Think of the platform as a dock. Tools arrive, moor alongside for as long as they are useful, and leave when something better comes in. The dock handles the governance; the tools handle the work. Here are three examples of what that can look like in practice:
1. Plain-Language Summaries for Tribunal Panels
A legal AI assistant reads a complex medical or benefits history and produces a plain-language summary for tribunal panel members, with every statement linked back to a specific page of the source document. The CMS stores the original, the summary, and the link between them. Panel members read the summary faster; any dispute about what was said sends them straight to the source. Figure 2 shows how this looks in the interface: the AI summary on the left, the source document on the right, with teal badges linking each claim to its origin.

Figure 2 - A legal AI assistant summarises a complex benefits history for the tribunal panel. Every statement links to a specific page of the source bundle.
Behind what the panel sees, every AI interaction generates a compliance event. Figure 3 shows the governance log for this same case: the prompt that was sent, the output that was returned, the human review that followed, and the compliance mapping against both the EU AI Act and the UK Judicial Guidance. This is the record a court would produce if anyone - in an appeal, an audit, or a freedom of information request - asked how a particular output was generated.

Figure 3 - The governance wrapper in action. Every AI interaction is logged as a compliance event: what was asked, what was produced, who reviewed it, and what they decided.
2. Scheduling Intelligence for Listing Officers
A tool ingests case metadata and constraints - custody time limits, counsel availability, special measures, interpreter bookings - and suggests optimised lists with conflicts flagged. Listing officers retain the decision. The CMS logs which suggestions were accepted, which were overridden, and why. Over time, the record becomes a useful dataset in its own right.
3. Cross-System Consistency Checks
A lightweight intelligence layer watches for mismatches between court records, prosecution systems, and police case systems: different charges listed, missing exhibits, conflicting hearing dates. It surfaces the discrepancies to staff before they cause adjournments. It replaces nothing. It simply notices things humans would, if humans had the time.
None of these tools needs to be owned by Casedoc. Some of them are ours. Most are not. What matters is that the court has a platform where tools like these can be added as they mature - and removed if they do not - without the court’s compliance posture changing each time.
Why This is a Procurement Story, not a Technology One
The hardest question in justice modernisation is how to make any choice at all when the market moves faster than the procurement cycle. Five years ago, a court that picked a legaltech vendor could expect the product to look similar in five years' time. Today, no such expectation is safe.
A platform approach solves this without forcing the court to predict the future. The CMS is the long-lived asset. The AI tools are interchangeable. Pick what is useful today. Replace it when something better arrives. The audit trail, the human oversight, the data governance - all the things that satisfy the Judicial Office guidance and the EU AI Act - stay where they have always been: in the court’s own system, under the court’s own control.
The platform is bought once. The tools are bought as needed. The governance never moves.
A Friendlier Kind of Modernisation
Modernisation does not need to be disruptive. It does not need to start with a crisis, a scandal, or a collapsed trial. It can begin quite cheerfully: with a court that looks at a growing market, likes what it sees in parts of it, and decides to build a platform that lets those parts be used well.
The legaltech boom will continue. Regulation will continue to tighten. Somewhere in between, a lot of good work will get done by courts that set themselves up to benefit from both.
Casedoc builds court-specific case management software, a configurable COTS solution designed to sit alongside national systems. Bjarni Sv. Gudmundsson will be speaking at Modernising Criminal Justice, 9th June 2026. If you have a process you'd like to talk through, we're always interested in hearing what the real bottlenecks look like: bjarni@casedoc.com
Bjarni Sv. Gudmundsson, Casedoc

