XHAILe: Explainable Hybrid AI for Computational Law and Accurate Legal Chatbots
Thomas Hildebrandt
University of Copenhagen
The NLP Reading Group is excited to host Thomas Hildebrandt, who will present his work on XHAILe and Hybrid AI for Computational Law and Legal Decision Support.
Logistics
Date: Tuesday July 22
Time: 10AM
Location: online via Google Meet; screencast at Mila in room A14
Abstract
Since OpenAI opened the world's eyes to the capabilities of large language models and the promise of conversational agents, we have seen a surge of legal chatbots for legal decision support and decision making. However, as the public is perhaps starting to realize, language models are challenged by a lack of accuracy, producing so-called hallucinations. The talk will describe the background and vision of two new interdisciplinary research projects, XHAILe and LEXplain, combining law, linguistics and computer science, initiated in spring 2025 at the University of Copenhagen. Both projects tackle the question of what it takes to provide explainable and accurate legal chatbots. The LEXplain project is funded by the Research Council of Norway and is rooted at the Faculties of Law at the University of Bergen (Norway) and the University of Copenhagen. The focus of the project is to research what a legal explanation is and, from that grounding, investigate the possibilities and limitations of using artificial intelligence to make legal decisions in public administration. The XHAILe project is funded by Innovation Fund Denmark. The project is rooted at the Department of Computer Science and has a strong focus on building AI prototypes for computational law and conversational agents, jointly with industrial partners and governmental organisations. The Faculty of Law plays an important role in researching how to provide trustworthy benchmarks and guidelines for legal use of the technology, while the Department of Nordic Studies and Linguistics collaborates with researchers in NLP on developing tools for translating law into AI models. A central hypothesis of the project is that we may succeed by combining symbolic AI for knowledge representation with NLP and sub-symbolic AI for conversational interaction and for the translation of law into symbolic AI models.
The project builds on the declarative process modelling language of Dynamic Condition Response (DCR) graphs, which has been developed over the last 15 years and is now widely used for supporting and automating governmental knowledge work processes in Denmark.
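To give a flavour of what a declarative DCR model looks like, here is a minimal sketch of the core execution semantics, covering only the condition and response relations. The event names and the tiny caseworker scenario are illustrative assumptions for this post, not material from the talk or the actual DCR tooling.

```python
# Minimal illustrative sketch of DCR graph semantics (condition and
# response relations only). Event names and scenario are hypothetical.

class DCRGraph:
    def __init__(self, events, conditions, responses):
        self.events = set(events)
        self.conditions = conditions  # event -> events that must execute first
        self.responses = responses    # event -> events that become pending
        self.executed = set()
        self.pending = set()

    def enabled(self, event):
        # An event is enabled once all its condition-predecessors have run.
        return self.conditions.get(event, set()) <= self.executed

    def execute(self, event):
        if not self.enabled(event):
            raise ValueError(f"{event} is not enabled")
        self.executed.add(event)
        self.pending.discard(event)
        # Executing an event obliges its response-successors to eventually run.
        self.pending |= self.responses.get(event, set())

    def accepting(self):
        # A state is accepting when no obligations remain pending.
        return not self.pending

# Hypothetical caseworker process: a decision requires an application
# first, and making a decision obliges a notification.
g = DCRGraph(
    events={"apply", "decide", "notify"},
    conditions={"decide": {"apply"}},
    responses={"decide": {"notify"}},
)
g.execute("apply")
g.execute("decide")
print(g.accepting())  # False: "notify" is still pending
g.execute("notify")
print(g.accepting())  # True
```

Unlike an imperative flowchart, nothing here prescribes a fixed order of steps; the model only states constraints, and any run satisfying them is permitted, which is what makes the notation suited to flexible knowledge work.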
There are two open 2-year postdoc/assistant professorship positions; see details here.
Speaker Bio
Thomas Hildebrandt is a professor at the Department of Computer Science at the University of Copenhagen.