TY  - JOUR
T1  - OCR post-processing, Korean OCR error correction, Prompt engineering, LLM
AU  - Hwang, Hyunsun
AU  - Jung, Youngjun
AU  - Lee, Changki
JO  - Journal of KIISE, JOK
PY  - 2025
DA  - 2025/1/14
DO  - 10.5626/JOK.2025.52.11.948
KW  - Semantic Role Labeling
KW  - Large Language Model
KW  - In-context learning
KW  - example selection
KW  - example reordering
AB  - Recent large language models employ in-context learning (ICL), a technique that performs tasks by inserting examples into the prompt without additional training, leveraging the language understanding acquired during pre-training on massive corpora. However, because ICL relies on few-shot examples, performance varies significantly with how the examples in the prompt are selected and arranged. This paper proposes methods for improving example selection and reordering when applying ICL to semantic role labeling, a challenging task that requires producing semantic structures as output. In particular, we found that simply ordering examples in reverse similarity order achieves performance close to that of the optimal example ordering for semantic role labeling.
ER  -