TY - JOUR
T1 - Efficient Large Language Model Based Passage Re-Ranking Using Single Token Representations
AU - Na, Jeongwoo
AU - Kwon, Jun
AU - Choi, Eunseong
AU - Lee, Jongwuk
JO - Journal of KIISE, JOK
PY - 2025
DA - 2025/1/14
DO - 10.5626/JOK.2025.52.5.395
KW - Information Retrieval
KW - document re-ranking
KW - knowledge distillation
KW - Fusion-in-Decoder
KW - context compression
KW - Efficiency
AB - In information retrieval systems, document re-ranking reorders a set of candidate documents based on their relevance to a given query. Leveraging the extensive natural language understanding capabilities of large language models (LLMs), numerous studies on document re-ranking have demonstrated groundbreaking performance. However, studies utilizing LLMs focus solely on improving re-ranking quality, resulting in degraded efficiency due to excessively long input sequences and the need for repetitive inference. To address these limitations, we propose ListT5++, a novel model that represents the relevance between a query and a passage using a single token embedding and significantly improves the efficiency of LLM-based re-ranking through a single-step decoding strategy that minimizes the decoding process. Experimental results showed that ListT5++ maintains accuracy comparable to existing methods while reducing inference latency by a factor of 29.4 relative to the baseline. Moreover, our approach is robust to the initial ordering of candidate documents, ensuring high practicality in real-time retrieval environments.
ER -