Preprint / Version 1

Customizing Large Language Models for Legal Consultations

Authors

  • Jiatao Lin, Henan Agricultural University
  • Yuan Wu

DOI:

https://doi.org/10.31224/4374

Abstract

In this paper, we present a novel approach to enhancing the performance of large language models (LLMs) on legal consultation tasks. Our method leverages multi-turn prompt engineering to iteratively refine responses, enabling the model to provide more accurate, legally coherent, and contextually relevant advice. The core of our approach lies in dynamically adjusting the prompt based on previous model outputs, so that the legal reasoning process evolves with each iteration. We evaluate the effectiveness of our method through experiments on a manually curated legal dataset and compare it with multiple baseline approaches. The results demonstrate that our method outperforms existing models across evaluation metrics such as legal precision, coherence, and clarity. Additionally, human evaluators consistently rated the outputs generated by our model as more relevant and complete than those of other methods. Our approach shows strong potential for real-world legal applications, offering a scalable solution for improving access to legal advice.
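The abstract describes the method only at a high level. A minimal sketch of an iterative, multi-turn prompt-refinement loop of the kind described might look as follows; the `call_llm` wrapper, the refinement instruction, and the round count are illustrative assumptions, not the authors' implementation.

```python
from typing import Callable, Dict, List


def multi_turn_legal_consult(
    question: str,
    call_llm: Callable[[List[Dict[str, str]]], str],  # assumed chat-completion wrapper
    max_rounds: int = 3,
) -> str:
    """Iteratively refine a legal answer by feeding the prior output back into the prompt.

    `call_llm` is assumed to accept a list of chat messages of the form
    {"role": ..., "content": ...} and return the model's reply as a string.
    """
    messages = [
        {"role": "system", "content": "You are a careful legal consultation assistant."},
        {"role": "user", "content": question},
    ]
    answer = call_llm(messages)

    for _ in range(max_rounds - 1):
        # Dynamically adjust the prompt based on the previous output:
        # ask the model to check legal precision, coherence, and clarity,
        # then produce a revised answer.
        messages.append({"role": "assistant", "content": answer})
        messages.append({
            "role": "user",
            "content": (
                "Review your previous answer for legal precision, coherence, "
                "and clarity. Only cite statutes you are certain of, and "
                "return an improved, complete answer."
            ),
        })
        answer = call_llm(messages)

    return answer
```

Any concrete LLM backend (local or API-based) can be plugged in through `call_llm`, keeping the refinement loop independent of the model provider.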

Posted

2025-02-12