Search: [keyword: Hallucination] (2)

A Retrieval-Augmented Generation (RAG) System Using Query Rewriting Based on a Large Language Model (LLM)

Minsu Han, Seokyoung Hong, Myoung-Wan Koo

http://doi.org/10.5626/JOK.2025.52.6.474

This paper proposes a retrieval pipeline that can be applied effectively in fields demanding expert knowledge, without any fine-tuning. To achieve high accuracy, we introduce a query-rewriting retrieval method that uses a large language model to generate examples similar to the given question, achieving higher similarity than existing retrieval models. The proposed method demonstrates excellent performance in both automated evaluations and expert qualitative assessments, and the generated examples also make the retrieval results explainable. In addition, we present prompts that can be reused when applying this method in other domains requiring specialized knowledge. Finally, we propose a pipeline that incorporates a Top-1 retrieval model, which selects the most relevant of the three documents returned by the query-rewriting retrieval model; this prevents the hallucination caused by feeding unnecessary documents into the large language model.
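As a rough illustration of the pipeline the abstract describes, the sketch below chains query rewriting, top-3 retrieval, and Top-1 selection before answering. The `llm`, `retriever`, and `reranker` objects, the prompt wording, and all function names are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the described pipeline: query rewriting -> top-3
# retrieval -> Top-1 selection -> grounded answering. The `llm`, `retriever`,
# and `reranker` objects and the prompts are illustrative assumptions.
from typing import Callable, List

def rewrite_query(question: str, llm: Callable[[str], str]) -> str:
    """Have the LLM generate an example passage similar to the question,
    then use that passage as the retrieval query (hypothetical prompt)."""
    prompt = (
        "Write a short passage that could answer the following question, "
        "in the style of a domain reference document.\n"
        f"Question: {question}\nPassage:"
    )
    return llm(prompt)

def retrieve_top_k(query: str, retriever, k: int = 3) -> List[str]:
    """Retrieve candidates; `retriever` is assumed to expose search(query, k)."""
    return retriever.search(query, k=k)

def select_top_1(question: str, candidates: List[str], reranker) -> str:
    """Keep only the single most relevant candidate so that irrelevant
    context is not fed to the answering LLM."""
    scores = [reranker.score(question, doc) for doc in candidates]
    best_idx = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best_idx]

def answer(question: str, llm, retriever, reranker) -> str:
    pseudo_doc = rewrite_query(question, llm)
    docs = retrieve_top_k(pseudo_doc, retriever, k=3)
    context = select_top_1(question, docs, reranker)
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```

The Top-1 step mirrors the abstract's rationale: passing a single, well-chosen document keeps unnecessary text out of the LLM's context, which the authors identify as a hallucination trigger.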

Hallucination Detection and Explanation Model for Enhancing the Reliability of LLM Responses

Sujeong Lee, Hayoung Lee, Seongsoo Heo, Wonik Choi

http://doi.org/10.5626/JOK.2025.52.5.404

Recent advances in large language models (LLMs) have driven remarkable progress in natural language processing; however, hallucination remains a significant challenge to their reliability. Existing hallucination research focuses primarily on detection and lacks the capability to explain the causes and context of hallucinations. In response, this study proposes a hallucination-specialized model that goes beyond mere detection by providing explanations for the hallucinations it identifies. The proposed model was designed to classify hallucinations while simultaneously generating explanations, allowing users to better trust and understand the model’s responses. Experimental results demonstrated that the proposed model surpassed large-scale models such as Llama3 70B and GPT-4 in hallucination detection accuracy while consistently generating high-quality explanations. Notably, the model maintained stable detection and explanation performance across diverse datasets, showcasing its adaptability. By integrating hallucination detection with explanation generation, this study introduces a novel approach to evaluating hallucinations in language models.
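To make the joint detect-and-explain design concrete, here is a minimal sketch that treats detection and explanation as a single generation task with a parseable output format. The prompt, the `LABEL:`/`EXPLANATION:` format, and the `llm` callable are assumptions for illustration; the paper's actual model architecture and training setup are not shown here.

```python
# A minimal sketch of joint hallucination detection and explanation as one
# generation task. The output format and parsing below are illustrative
# assumptions, not the paper's training or inference setup.
from dataclasses import dataclass
from typing import Callable

@dataclass
class HallucinationJudgment:
    is_hallucination: bool
    explanation: str

def judge(context: str, response: str,
          llm: Callable[[str], str]) -> HallucinationJudgment:
    """Ask a (fine-tuned) model for a label and an explanation in one pass."""
    prompt = (
        "Given the context and a model response, decide whether the response "
        "contains a hallucination, then explain why.\n"
        f"Context: {context}\nResponse: {response}\n"
        "Answer in the format:\nLABEL: <yes|no>\nEXPLANATION: <reason>"
    )
    output = llm(prompt)
    label_part, _, explanation = output.partition("EXPLANATION:")
    return HallucinationJudgment(
        is_hallucination="yes" in label_part.lower(),
        explanation=explanation.strip(),
    )
```

Generating the label and the explanation together is what lets a user inspect why a response was flagged, rather than receiving a bare binary verdict.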

