
Hallucination Detection and Explanation Model for Enhancing the Reliability of LLM Responses

Sujeong Lee, Hayoung Lee, Seongsoo Heo, Wonik Choi

http://doi.org/10.5626/JOK.2025.52.5.404

Recent advancements in large language models (LLMs) have achieved remarkable progress in natural language processing. However, hallucination remains a significant obstacle to the reliability of their responses. Existing hallucination research focuses primarily on detection and lacks the capability to explain the causes and context of hallucinations. In response, this study proposes a hallucination-specialized model that goes beyond mere detection by providing explanations for identified hallucinations. The proposed model was designed to classify hallucinations while simultaneously generating explanations, allowing users to better trust and understand the model's responses. Experimental results demonstrated that the proposed model surpassed large-scale models such as Llama3 70B and GPT-4 in hallucination detection accuracy while consistently generating high-quality explanations. Notably, the model maintained stable detection and explanation performance across diverse datasets, demonstrating its adaptability. By integrating hallucination detection with explanation generation, this study introduces a novel approach to evaluating hallucinations in language models.
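As a rough illustration of the "detect and explain jointly" idea described in the abstract, the sketch below wraps an arbitrary text-generation function so that a single model response yields both a hallucination label and a free-text explanation. The prompt wording, output format, and the `detect_and_explain` helper are assumptions for illustration only, not the authors' actual implementation.

```python
# Illustrative sketch: joint hallucination detection + explanation via one prompt.
# The prompt format and parsing convention are assumptions, not the paper's method.
from typing import Callable, Tuple

PROMPT = """You are a hallucination auditor.
Context: {context}
Question: {question}
Answer to evaluate: {answer}

First line: "HALLUCINATED" or "FAITHFUL".
Following lines: a short explanation citing the conflicting or missing evidence."""


def detect_and_explain(
    generate: Callable[[str], str],  # any text-generation callable, e.g. an LLM API wrapper
    context: str,
    question: str,
    answer: str,
) -> Tuple[bool, str]:
    """Return (is_hallucinated, explanation) parsed from a single model response."""
    raw = generate(PROMPT.format(context=context, question=question, answer=answer))
    first_line, _, rest = raw.strip().partition("\n")
    is_hallucinated = first_line.strip().upper().startswith("HALLUCINATED")
    return is_hallucinated, rest.strip()
```

In this framing, the classification decision and its justification come from the same generation pass, so the explanation is grounded in whatever evidence the model used to reach the label.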


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal
