Digital Library [Search Results]
Enhancing LLM-based Zero-Shot Conversational Recommendation via Reasoning Path
Heejin Kook, Seongmin Park, Jongwuk Lee
http://doi.org/10.5626/JOK.2025.52.7.617
Conversational recommender systems provide personalized recommendations through bi-directional interactions with users. Traditional conversational recommender systems rely on external knowledge, such as knowledge graphs, to effectively capture user preferences. While the recent rapid advancement of large language models has enabled zero-shot recommendations, challenges remain in understanding users' implicit preferences and designing optimal reasoning paths. To address these limitations, this study investigates the importance of constructing appropriate reasoning paths in zero-shot conversational recommender systems and explores the potential of a new approach built on this foundation. The proposed framework consists of two stages: (1) comprehensively extracting both explicit and implicit preferences from the conversational context, and (2) constructing reasoning trees to select optimal reasoning paths based on these preferences. Experimental results on the INSPIRED and ReDial benchmark datasets show that the proposed method achieves up to an 11.77% improvement in Recall@10 over existing zero-shot methods and even outperforms some learning-based models.
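The two-stage framework summarized in the abstract can be illustrated with a short sketch: extract explicit and implicit preferences with a zero-shot LLM call, then treat orderings of preference facets as paths in a small reasoning tree and recommend along the best path. The chat() stub, the prompts, the facet names, and the path-scoring rule below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage zero-shot pipeline; not the paper's code.
from itertools import permutations

def chat(prompt: str) -> str:
    # Stand-in for a zero-shot LLM call; replace with a real chat-completion client.
    return "Inception\nInterstellar\nArrival"

def extract_preferences(dialogue: str) -> str:
    # Stage 1: extract explicit and implicit preferences from the conversation.
    return chat(
        "List the user's explicit preferences (stated directly) and implicit "
        "preferences (inferred from context) in the dialogue below.\n\n" + dialogue
    )

def recommend(dialogue: str, k: int = 10,
              facets: tuple[str, ...] = ("genre", "mood", "era")) -> list[str]:
    prefs = extract_preferences(dialogue)
    best_items: list[str] = []
    # Stage 2: each ordering of preference facets is one root-to-leaf path in a
    # reasoning tree; reason along each path and keep the most productive one.
    for path in permutations(facets):
        answer = chat(
            f"User preferences: {prefs}\n"
            f"Reason in this order: {' -> '.join(path)}\n"
            f"Recommend up to {k} items, one per line."
        )
        items = [line.strip() for line in answer.splitlines() if line.strip()][:k]
        if len(items) > len(best_items):  # toy proxy for a real path-quality score
            best_items = items
    return best_items

if __name__ == "__main__":
    print(recommend("User: I loved Interstellar. Something thoughtful but not too heavy?"))
```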
Pretrained Large Language Model-based Drug-Target Binding Affinity Prediction for Mutated Proteins
Taeung Song, Jin Hyuk Kim, Hyeon Jun Park, Jonghwan Choi
http://doi.org/10.5626/JOK.2025.52.6.539
Drug development is a costly and time-consuming process, and accurately predicting the impact of protein mutations on drug-target binding affinity remains a major challenge. Previous studies have utilized long short-term memory (LSTM) and transformer models for amino acid sequence processing. However, LSTMs suffer from long-sequence dependency issues, while transformers face high computational costs. In contrast, pretrained large language models (pLLMs) excel at handling long sequences, yet prompt-based approaches alone are insufficient for accurate binding affinity prediction. This study proposed a method that leverages pLLMs to analyze protein structural data, transforms it into embedding vectors, and uses a separate machine learning model for numerical binding affinity prediction. Experimental results demonstrated that the proposed approach outperformed conventional LSTM and prompt-based methods, achieving a lower root mean square error (RMSE) and a higher Pearson correlation coefficient (PCC), particularly in mutation-specific predictions. Additionally, a performance analysis of pLLM quantization confirmed that the method maintained sufficient accuracy with reduced computational cost.
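As a rough illustration of the pipeline described in the abstract, the sketch below embeds protein sequences with a placeholder embedding function (standing in for the pLLM), concatenates drug features, fits a separate regressor, and reports RMSE and PCC. The embed_protein() stub, the drug features, and the synthetic affinity labels are assumptions made only so the example runs; they are not the paper's model or dataset.

```python
# Illustrative sketch: pLLM-style protein embeddings + a separate ML regressor.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def embed_protein(sequence: str, dim: int = 64) -> np.ndarray:
    # Placeholder: in the real pipeline this would be the pLLM's embedding of the
    # (possibly mutated) protein; here it is a hash-seeded random vector that is
    # consistent for the same sequence within a run.
    state = np.random.default_rng(abs(hash(sequence)) % (2**32))
    return state.normal(size=dim)

# Toy dataset: (protein sequence, drug feature vector) pairs with synthetic labels.
proteins = ["MKTAYIAKQR", "MKTAYIAKQL", "MKTQYIAKQR", "MKSAYIAKQR"] * 25
drugs = rng.normal(size=(len(proteins), 32))             # stand-in drug features
X = np.hstack([np.stack([embed_protein(p) for p in proteins]), drugs])
y = rng.normal(loc=6.0, scale=1.0, size=len(proteins))   # synthetic pKd-like labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
reg = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = reg.predict(X_te)

rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))  # root mean square error
pcc = float(pearsonr(pred, y_te)[0])                # Pearson correlation coefficient
print(f"RMSE={rmse:.3f}  PCC={pcc:.3f}")
```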
KULLM: Learning to Construct Korean Instruction-Following Large Language Models
Seungjun Lee, Yoonna Jang, Jeongwook Kim, Taemin Lee, Heuiseok Lim
http://doi.org/10.5626/JOK.2024.51.9.817
The emergence of Large Language Models (LLMs) has revolutionized the research paradigm in natural language processing. While instruction-tuning techniques have been pivotal in enhancing LLM performance, the majority of current research has focused predominantly on English. This study addresses the need for multilingual approaches by presenting a method for developing and evaluating Korean instruction-following models. We fine-tuned LLMs on Korean instruction datasets and conducted a comprehensive performance analysis across various dataset combinations. The resulting Korean instruction-following model is released as an open-source resource, contributing to the advancement of Korean LLM research. Our work aims to bridge the language gap in LLM development and promote more inclusive AI technologies.
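A minimal sketch of the instruction-tuning step described above is shown below, assuming EleutherAI/polyglot-ko-1.3b as an example Korean base model, a simple prompt template, and two toy instruction-response pairs; none of these necessarily reflect KULLM's actual base model, template, or training data.

```python
# Minimal supervised instruction-tuning sketch on Korean (instruction, output) pairs.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

MODEL = "EleutherAI/polyglot-ko-1.3b"  # assumed Korean causal LM for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

PROMPT = "### Instruction:\n{instruction}\n\n### Response:\n{output}"

# Toy instruction-following pairs ("What is the capital of Korea?" / "List the four seasons.")
examples = [
    {"instruction": "한국의 수도를 알려줘.", "output": "한국의 수도는 서울입니다."},
    {"instruction": "사계절을 나열해줘.", "output": "봄, 여름, 가을, 겨울입니다."},
]

class InstructionDataset(Dataset):
    """Formats each pair with the prompt template and tokenizes it."""
    def __init__(self, rows):
        texts = [PROMPT.format(**r) + tokenizer.eos_token for r in rows]
        self.enc = tokenizer(texts, truncation=True, max_length=256)
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: v[i] for k, v in self.enc.items()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="kullm-sketch", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=InstructionDataset(examples),
    # Causal LM collator (mlm=False) pads batches and derives next-token labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```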