Customizing Large Language Models for Legal Consultations
DOI: https://doi.org/10.31224/4374

Abstract
In this paper, we present a novel approach to enhancing the performance of large language models (LLMs) on legal consultation tasks. Our method leverages multi-turn prompt engineering to iteratively refine responses, enabling the model to provide more accurate, legally coherent, and contextually relevant advice. The core of the approach is dynamically adjusting the prompt based on previous model outputs, so that the legal reasoning evolves with each iteration. We evaluate the method through experiments on a manually curated legal dataset and compare it against multiple baseline approaches. The results show that our method outperforms the baselines on evaluation metrics including legal precision, coherence, and clarity. Additionally, human evaluators consistently rated the outputs generated by our model as more relevant and complete than those of competing methods. The approach shows strong potential for real-world legal applications, offering a scalable way to improve access to legal advice.
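The abstract describes the core mechanism as a loop that folds each model output back into the next prompt. The sketch below illustrates one way such multi-turn refinement could look in Python; the `generate` stub, the prompt templates, and the fixed turn count are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of multi-turn prompt refinement for legal consultation.
# `generate` is a hypothetical stand-in for any LLM completion call;
# wire it to your model client of choice.

def generate(prompt: str) -> str:
    """Placeholder for an LLM API call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError("connect an LLM client here")

def refine_legal_answer(question: str, max_turns: int = 3) -> str:
    """Iteratively re-prompt the model, folding each draft back into the next prompt."""
    prompt = f"You are a legal consultant. Answer the question:\n{question}"
    answer = generate(prompt)
    for _ in range(max_turns - 1):
        # Dynamically adjust the prompt using the previous output, asking the
        # model to check legal accuracy, coherence, and clarity of its draft.
        prompt = (
            f"Question:\n{question}\n\n"
            f"Draft answer:\n{answer}\n\n"
            "Revise the draft: correct any legal inaccuracies, reference "
            "relevant statutes where appropriate, and improve coherence "
            "and clarity."
        )
        answer = generate(prompt)
    return answer
```

In this sketch the stopping rule is a fixed number of turns; the paper's dynamic adjustment could equally be driven by a critique step or a convergence check on successive drafts.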
License
Copyright (c) 2025 Jiatao Lin, Yuan Wu

This work is licensed under a Creative Commons Attribution 4.0 International License.