Project summary
The project aims to tackle the problems associated with contemporary AI technology, especially its inscrutable algorithms. When such black-boxed technology is used in the context of legal decision-making, it challenges several rule-of-law ideals, such as transparency in reasoning, accountability, and the relevance of the explanation to the case at hand. In short, the use of AI for legal decision-making challenges law's legitimacy. To better understand this problem and how it may be overcome, LEXplain investigates the legal explainability requirement along historical, cross-jurisdictional and empirical dimensions, and probes how hybrid AI, which combines machine learning with symbolic AI, might resolve the rule-of-law concerns associated with black-boxed AI.
There is a strong need to better understand the relationship between explainable AI (XAI) and legal justificatory explanation, and how a hybrid AI architecture might be designed to support legal reason-giving for individual decision-making. By investigating “human-in-the-loop” approaches to legal decision-making, LEXplain will examine how public institutions can gain many of the advantages AI offers while still retaining human control over the decision-making process, thereby upholding explainability and rule-of-law values.
The overarching aim of LEXplain is to create a new knowledge space where AI explainability meets legal explainability, in order to push the “XAI for law” research frontier. To do so, LEXplain organizes its research around the following research question: How can legal justificatory explainability be understood, supported and implemented in decision-making practices where AI is increasingly available?
Project objectives
The primary objective of LEXplain is to establish new interdisciplinary knowledge on explainable AI (XAI) in the context of law by researching the explainability culture embedded in legal practice, as a basis for understanding how AI can support decision-making under the rule of law.
The secondary objective is to investigate how new forms of hybrid AI systems can be used to support legal decision-making by combining Large Language Models (LLMs) with knowledge and structure obtained from legislation, legal practice and other legal sources.
Research questions and design
LEXplain will focus on AI recommendations in the context of individual legal decision-making in public administration under the rule of law. We find that this focus, rather than full AI automation, presents the most enriching field of research in terms of both societal and scientific impact. With a “human-in-the-loop” approach to legal decision-making, public institutions can gain many of the advantages AI offers while still retaining human control over the decision-making process. LEXplain will pursue this approach by investigating how a new form of hybrid AI system can be developed to support legal decision-making by combining Large Language Models (LLMs) with knowledge and structure obtained from legislation, legal practice and other legal sources.
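To make the hybrid, human-in-the-loop idea more concrete, the following sketch is a minimal, purely illustrative example rather than LEXplain's actual architecture: symbolic rules hand-encoded from legislation determine the outcome and the provisions relied on, an LLM drafts the justificatory text from that trace, and a human decision-maker reviews the draft before any decision is issued. All names, including the draft_with_llm stand-in, are hypothetical.

# Illustrative hybrid pipeline: symbolic rules decide, an LLM drafts the justification,
# a human reviews. Names are placeholders, not LEXplain's implementation.
from dataclasses import dataclass, field

@dataclass
class Rule:
    provision: str        # statutory reference, e.g. "Benefits Act s. 4(1)"
    condition: callable   # predicate over the case facts
    conclusion: str       # legal consequence if the condition holds

@dataclass
class Decision:
    outcome: str
    applied_provisions: list = field(default_factory=list)

def apply_rules(facts: dict, rules: list) -> Decision:
    """Symbolic step: evaluate provisions encoded from legislation against the facts."""
    decision = Decision(outcome="application denied")
    for rule in rules:
        if rule.condition(facts):
            decision.outcome = rule.conclusion
            decision.applied_provisions.append(rule.provision)
    return decision

def draft_with_llm(prompt: str) -> str:
    # Stand-in for whichever LLM API is used, so the sketch runs without an external service.
    return "[LLM draft based on]\n" + prompt

def draft_explanation(facts: dict, decision: Decision) -> str:
    """Sub-symbolic step: ask an LLM to draft reason-giving text from the rule trace."""
    prompt = (
        f"Facts: {facts}\n"
        f"Outcome: {decision.outcome}\n"
        f"Provisions applied: {decision.applied_provisions}\n"
        "Draft a justification citing only the provisions listed above."
    )
    return draft_with_llm(prompt)

if __name__ == "__main__":
    rules = [Rule("Benefits Act s. 4(1)",
                  lambda f: f["income"] < 20000,
                  "benefit granted")]
    facts = {"income": 15000}
    decision = apply_rules(facts, rules)
    print(draft_explanation(facts, decision))  # a human decision-maker reviews this draft

The point of the hybrid split in this sketch is that the symbolic step, not the LLM, determines the outcome, so the drafted explanation can only cite provisions that were actually applied and the human reviewer retains control over what is issued.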
The overarching aim of LEXplain, then, is to create a new knowledge space where AI explainability meets legal explainability, in order to push the “XAI for law” research frontier. To do so, LEXplain organizes its research around the following research question:
RQ: How can legal justificatory explainability be understood, supported and implemented in decision-making practices where AI is increasingly available?
To research the interaction between AI systems for legal decision-making support and the justificatory explainability requirements pertaining to legal decision-making, LEXplain conducts an in-depth exploration of the relationship between legal and computational explainability. It does so through a three-dimensional inquiry into the RQ, pursued through three overlapping Research Streams (RS).
RS 1: Evolution and differentiation investigates how legal explainability requirements and explainability culture have evolved through the second half of the 20th century and into the 21st, primarily through institutional interplay.
RS 2: AI explainability support investigates to what extent AI can support legal explainability, in particular by reviewing the use of hybrid AI architectures for this purpose.
RS 3: Implementation and transformation investigates how hybrid AI systems can be developed and implemented in ways that are compatible with the upcoming AI Act.