January 8, 2024 By Eckard Schindler 4 min read

The judiciary, like the legal system in general, is considered one of the largest “text processing industries.” Language, documents, and texts are the raw material of legal and judicial work. That data plays a crucial role in the judicial system, helping investigators, lawyers and judges fit together the circumstances surrounding a particular case in an effort to see that justice is served.

As such, the judiciary has long been a field ripe for the use of technologies like automation to support the processing of documents. Efforts to further expand the use of emerging technologies to address this ongoing need put responsible artificial intelligence (AI) at the center of possible solutions.

The legal system has undergone a major transformation thanks to the adoption of technology. With digitization adopted by law firms and court systems, a trove of data in the form of court opinions, statutes, regulations, books, practice guides, law reviews, legal white papers and news reports is available for judicial agencies to use in training both traditional and generative AI foundation models. These models could then be used by court staff to help organize, search and summarize decades' worth of legal text.

As the use cases of AI and other technologies continue to permeate the judiciary, judges, lawyers and staff must remain at the center of all decisions. Courts, and the legal system as a whole, must also stay alert for bias in data and algorithms that might perpetuate the very inequalities those courts seek to root out. Deployed systems should be rooted in principles of trustworthy AI such as transparency and explainability, so that all stakeholders understand how a system was trained, how it works and the scope of its use.

Germany’s judicial system leads the way in AI

As the use cases of AI and generative AI in the judiciary continue to expand, Germany offers several examples of how this might work: jurisdictions throughout the country are experimenting with the technology as a means of supporting legal professionals and improving their services.

The demand for an automated solution comes as Germany’s government has mandated that courts implement electronic file management in all civil, administrative, social and criminal proceedings by 2026, in line with digitalization goals established by the European Union (EU). With all pleadings in electronic form, the data can be used in new ways. AI, with its ability to understand natural language, allows the judiciary to completely rethink its core processes of analyzing, creating and archiving texts.

AI helps German courts manage backlog

One of the most effective uses of AI in the judiciary today is simply helping courts cope with the large number of cases they handle. In recent years, German courts have received an unprecedented flood of proceedings that overwhelmed the judiciary and delayed proceedings, hearings and outcomes. At the Stuttgart Higher Regional Court, judges working on these cases soon faced a backlog of more than 10,000 cases. Initially, the court had no technology to cope with the volume: most of the work was done manually and was highly repetitive. Judges had to spend hours reading long electronic pleading files that could run to hundreds of pages yet usually differed in only a few case-specific features.

The Ministry of Justice in Baden-Württemberg recommended using AI with natural language understanding (NLU) and other capabilities to categorize each case into the different case groups the court was handling. The courts needed a transparent, traceable system that protected data. IBM® created an AI assistant named OLGA that categorizes cases, extracts metadata and helps bring cases to faster resolution. With OLGA, judges and clerks can sift through thousands of documents faster and use specific search criteria to find relevant information across documents. The system also provides context on the lawsuit for the information surfaced by a search, preserves the case history and gives users a comprehensive view of all the information in a case and where it originated. Judges are relieved of highly repetitive tasks and can concentrate on complex issues, and the courts estimate that case processing times could be reduced by more than 50%.
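The article doesn’t describe OLGA’s internals, but the workflow it outlines — sorting incoming cases into known case groups and extracting metadata for search — can be illustrated with a minimal sketch. Everything here is hypothetical: the case groups, the keyword rules and the file-reference pattern are placeholders, not OLGA’s actual methods.

```python
import re
from dataclasses import dataclass, field

# Hypothetical case groups a court might define; a production system
# would likely use a trained NLU classifier rather than keyword counts.
CASE_GROUP_KEYWORDS = {
    "diesel_emissions": ["emissions", "defeat device", "diesel"],
    "air_passenger_rights": ["flight", "delay", "cancellation"],
    "rental_dispute": ["lease", "rent", "tenant"],
}

@dataclass
class CaseRecord:
    case_id: str
    text: str
    group: str = "uncategorized"
    metadata: dict = field(default_factory=dict)

def categorize(case: CaseRecord) -> CaseRecord:
    """Assign the case group whose keywords occur most often."""
    lowered = case.text.lower()
    scores = {
        group: sum(lowered.count(kw) for kw in kws)
        for group, kws in CASE_GROUP_KEYWORDS.items()
    }
    best_group, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score > 0:
        case.group = best_group
    # Extract simple metadata, e.g. a German file reference ("Az. ...").
    ref = re.search(r"Az\.\s*([\w ./-]+)", case.text)
    if ref:
        case.metadata["file_reference"] = ref.group(1).strip()
    return case

case = categorize(CaseRecord(
    "C-001",
    "Az. 4 U 123/23: Plaintiff alleges a defeat device in a diesel vehicle.",
))
print(case.group)      # diesel_emissions
print(case.metadata)   # {'file_reference': '4 U 123/23'}
```

Categorized records like these could then be filtered by group, metadata field or keyword, which is the kind of targeted search the article describes judges and clerks using.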

Frauke helps German courts with air passenger suits

Elsewhere in Germany, IBM worked with the Frankfurt District Court to successfully test an AI system known as “Frauke” (Frankfurt Judgment Configurator Electronic) for air passenger rights lawsuits. Between 10,000 and 15,000 cases related to passenger rights (for example, flight delays) end up at the Frankfurt District Court every year. The court asked for help with the process of drafting judgments, a laborious and repetitive task for judges, who had to collect the relevant data and repeatedly write almost identical judgments.

In a proof of concept last year, Frauke extracted the case-specific data (including flight number and delay time) from the pleadings and, in accordance with the judge’s verdict, helped expedite the drafting of judgment letters by using pre-written text modules. So far, this technology has significantly reduced processing time in the preparation of judgments.

Legal professionals must remain in the loop

These case studies provide real-world examples of the many benefits of incorporating AI into judicial proceedings to aid in processing documents and automating manual tasks. At the same time, there is broad consensus on limits to AI’s use in the judiciary.

Some lawyers have already begun using the technology to automate the creation of legal briefs, prompting some judges to require disclosure when generative AI has been used. As legal systems grapple with the new technology’s pros and cons, lawyers relying on generative AI might also have to certify that its use hasn’t resulted in the disclosure of confidential or proprietary client information.

The use of any type of AI by public entities, including the judiciary, should be anchored in the fundamental properties of trustworthy AI that IBM uses. Explainability will play a key role: to demonstrate that generative AI is used responsibly, it is necessary to be able to trace the steps the system takes in categorizing, summarizing and comparing documents.

See how IBM is helping drive the digital transformation of governments