Official implementation of "Legal Query RAG: A Retrieval-Augmented Generation Framework with Recursive Feedback for Legal Applications"
Published in IEEE Access · Read the paper ↗
LQ-RAG incorporates an agent-based iterative refinement mechanism during inference. It first generates an initial response to a user query, then uses an evaluation agent to assess the response's quality in terms of contextual relevance and factual grounding. If the response does not meet predefined criteria, the evaluation agent sends feedback to the prompt-engineering agent, which modifies the query to improve the next response. This feedback loop repeats until the evaluation scores approach optimal values. The figure below illustrates the overall schematic of the proposed LQ-RAG system.
Schematic diagram of the proposed Legal Query RAG. The system consists of two main components: the Fine-Tuning (FT) Layer and the RAG Layer. The FT Layer covers the fine-tuning of the embedding LLM and the generative LLM, while the RAG Layer combines the RAG modules with the fine-tuned LLMs, an evaluation agent, and a feedback system designed to improve the accuracy and quality of the generated responses.
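The refinement loop described above can be sketched as follows. This is a minimal illustration of the control flow only; the agent functions, score threshold, and iteration cap are illustrative stand-ins, not the repository's actual implementation.

```python
def generate_response(query: str) -> str:
    """Stand-in for the fine-tuned generative LLM."""
    return f"answer to: {query}"

def evaluate_response(query: str, response: str) -> float:
    """Stand-in for the evaluation agent; returns a score in [0, 1]
    combining contextual relevance and factual grounding."""
    return 0.9  # placeholder score

def refine_query(query: str, score: float) -> str:
    """Stand-in for the prompt-engineering agent, which rewrites
    the query using the evaluator's feedback."""
    return f"{query} (refined after score {score:.2f})"

def lq_rag(query: str, threshold: float = 0.8, max_iters: int = 3) -> str:
    """Generate, evaluate, and iteratively refine until the score
    meets the threshold or the iteration budget is exhausted."""
    response = generate_response(query)
    for _ in range(max_iters):
        score = evaluate_response(query, response)
        if score >= threshold:  # response meets the predefined criteria
            break
        query = refine_query(query, score)  # feedback to the prompt agent
        response = generate_response(query)
    return response
```

In practice each stand-in would wrap an LLM call, and the evaluator would score the response against the retrieved context rather than return a constant.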
- Custom Evaluation Agent – custom scorer to evaluate factual correctness and legal context
- Fine-Tuned Response Generator – LLM fine-tuned on legal texts
- Prompt Engineering Agent – dynamically adapts queries based on evaluation feedback
- Legal Embedding Model – embedding model specialized for legal text, used to build the vector store for legal document retrieval
- +13% Hit Rate improvement
- +15% boost in Mean Reciprocal Rank (MRR)
- +24% performance gain over general LLM baselines
- +23% relevance score improvement vs. naive RAG setups
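For reference, the two retrieval metrics above can be computed as follows. This is a generic sketch of the standard definitions, not the paper's evaluation code; `ranks` holds the 1-based rank of the first relevant document per query (`None` if none was retrieved).

```python
def mean_reciprocal_rank(ranks):
    """MRR: average of 1/rank of the first relevant document per query."""
    return sum(0.0 if r is None else 1.0 / r for r in ranks) / len(ranks)

def hit_rate(ranks, k=5):
    """Fraction of queries whose first relevant document is in the top k."""
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

ranks = [1, 3, None, 2]
mean_reciprocal_rank(ranks)  # (1 + 1/3 + 0 + 1/2) / 4 ≈ 0.458
hit_rate(ranks, k=5)         # 3/4 = 0.75
```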
Follow these steps to install and run the project:

```bash
# Clone the repository
git clone https://github.com/wahidur028/Legal-Query-RAG.git
cd Legal-Query-RAG

# Install dependencies
pip install -r requirements.txt

# Run the application
python app.py
```

📌 Note: Before running the application, make sure to update your `.env` file with the required API keys and environment variables.
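A `.env` file might look like the following; the variable names here are illustrative assumptions, so check the repository's code or documentation for the exact keys it reads.

```ini
# Illustrative .env entries (names are assumptions, values are placeholders)
OPENAI_API_KEY=sk-...
HUGGINGFACE_API_KEY=hf_...
```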
If you use this work in your research or find it helpful, please cite:
```bibtex
@ARTICLE{10887211,
  author={Wahidur, Rahman S. M. and Kim, Sumin and Choi, Haeung and Bhatti, David S. and Lee, Heung-No},
  journal={IEEE Access},
  title={Legal Query RAG: A Retrieval-Augmented Generation Framework with Recursive Feedback for Legal Applications},
  year={2025},
  volume={13},
  pages={36978-36994},
  keywords={Law;Retrieval augmented generation;Accuracy;Tuning;Semantics;Hybrid power systems;Adaptation models;Training;Reliability;Mathematical models;Retrieval-augmented generation;legal query;LLM agent;information retrieval},
  doi={10.1109/ACCESS.2025.3542125}
}
```

