Digital Library [Search Result]
Root Cause Analysis for Microservice Systems Using Anomaly Propagation by Resource Sharing
Junho Park, Joyce Jiyoung Whang
http://doi.org/10.5626/JOK.2025.52.4.341
Identifying root causes of failures in microservice systems remains a critical challenge due to intricate interactions among resources and the propagation of errors. We propose AnoProp, a novel model for root cause analysis that addresses these challenges by capturing inter-resource interactions and the resulting propagation of anomalies. AnoProp incorporates two core techniques: anomaly score measurement for metrics using regression models, and root cause score evaluation for resources based on the propagation rate of these anomalies. Experimental results on an Online Boutique dataset demonstrate that AnoProp surpasses existing models across various evaluation metrics, validating its ability to provide balanced performance for different types of root causes. This study underscores the potential of AnoProp to enhance system stability and boost operational efficiency in microservice environments.
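As a rough illustration of the two techniques named in this abstract, the sketch below scores one metric by the residual of a regression model fitted on recent history and scores a resource by how widely its anomalies spread. The windowing, the Ridge regressor, and all names here are our assumptions, not details from the paper.

    import numpy as np
    from sklearn.linear_model import Ridge

    def anomaly_score(history: np.ndarray, current: np.ndarray, target: int) -> float:
        """Anomaly score for one metric: |observed - predicted|, where the
        prediction regresses the target metric on the resource's other
        metrics over a recent window (our simplification)."""
        X = np.delete(history, target, axis=1)   # other metrics as features
        y = history[:, target]                   # the metric being scored
        model = Ridge().fit(X, y)
        pred = model.predict(np.delete(current.reshape(1, -1), target, axis=1))
        return float(abs(current[target] - pred[0]))

    def root_cause_score(anomalous_neighbors: int, total_neighbors: int) -> float:
        """Root cause score as the rate at which a resource's anomalies
        propagate to the resources it shares with (simplified)."""
        return anomalous_neighbors / max(total_neighbors, 1)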
Prompt Tuning For Korean Aspect-Based Sentiment Analysis
Bong-Su Kim, Seung-Ho Choi, Si-hyun Park, Jun-Ho Wang, Ji-Yoon Kim, Hyun-Kyu Jeon, Jung-Hoon Jang
http://doi.org/10.5626/JOK.2024.51.12.1043
Aspect-based sentiment analysis examines how emotions in text relate to specific aspects, such as product characteristics or service features. This paper presents a comprehensive methodology for applying prompt tuning techniques to multi-task token labeling challenges using aspect-based sentiment analysis data. The methodology includes a pipeline for identifying emotion expression domains, which generalizes the token labeling problem into a sequence labeling problem. It also covers selecting templates to classify the separated sequences by aspect and emotion, and expanding label words to align with the dataset's characteristics, thus optimizing the model's performance. Finally, the paper provides several experimental results and analyses for the aspect-based sentiment analysis task in a few-shot setting. The constructed data and baseline model are available on AIHUB (www.aihub.or.kr).
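To make the template-and-label-word idea concrete, here is a minimal sketch of how a prompt with expanded label words might be assembled for a masked language model; the template wording and the label words below are illustrative assumptions, not the paper's actual resources.

    # Hypothetical cloze-style template; [MASK] is filled by the LM.
    TEMPLATE = "{sentence} The sentiment toward {aspect} is [MASK]."

    # Expanded label words per sentiment class (our examples).
    LABEL_WORDS = {
        "positive": ["good", "great", "satisfying"],
        "negative": ["bad", "poor", "disappointing"],
        "neutral":  ["okay", "ordinary"],
    }

    def build_prompt(sentence: str, aspect: str) -> str:
        """Fill the template so the LM can verbalize the sentiment."""
        return TEMPLATE.format(sentence=sentence, aspect=aspect)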
Korean Dependency Parsing Using Sequence Labeling
http://doi.org/10.5626/JOK.2024.51.12.1053
Dependency parsing is a crucial step in language analysis that identifies relationships between words within a sentence. Recently, many models based on pre-trained transformers have shown impressive performance in various natural language processing research, and they have also been applied to dependency parsing. Generally, traditional approaches to dependency parsing using pre-trained models consist of two main stages: 1) merging the token-level embeddings generated by the pre-trained model into word-level embeddings; and 2) analyzing dependency relations by comparing or classifying the merged embeddings. However, due to the large number of parameters and the additional layers required for embedding construction, comparison, and classification, these models can be inefficient in terms of time and memory usage. This paper proposes a dependency parsing technique based on sequence labeling that improves the efficiency of training and inference by defining dependency parsing units and simplifying model layers. The proposed model eliminates the word-level embedding merging step by utilizing special tokens to define parsing units, and it effectively reduces the number of parameters by simplifying model layers. As a result, training and inference time is significantly shortened. With these optimizations, the proposed model maintains meaningful performance in dependency parsing.
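One common way to cast dependency parsing as sequence labeling is to tag each word with its relative head offset plus the dependency label, so a single tagger replaces separate arc scoring and classification. The encoding below is one such scheme for illustration, not necessarily the paper's.

    def encode(heads: list[int], rels: list[str]) -> list[str]:
        """heads are 1-indexed, 0 = root; tag = '<offset>|<relation>'."""
        return [f"{h - i}|{r}" if h > 0 else f"0|{r}"
                for i, (h, r) in enumerate(zip(heads, rels), start=1)]

    def decode(tags: list[str]) -> list[tuple[int, str]]:
        """Recover (head, relation) pairs from the offset tags."""
        out = []
        for i, tag in enumerate(tags, start=1):
            off, rel = tag.split("|")
            out.append((i + int(off) if off != "0" else 0, rel))
        return out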
Constructing a Korean Knowledge Graph Using Zero Anaphora Resolution and Dependency Parsing
Chaewon Lee, Kangbae Lee, Sungyeol Yu
http://doi.org/10.5626/JOK.2024.51.8.736
This study introduces a novel approach to creating a Korean-based knowledge graph by employing zero anaphora resolution, dependency parsing, and knowledge base extraction using ChatGPT. In order to overcome the limitations of conventional language models in handling the grammatical and morphological characteristics of Korean, this research incorporates prompt engineering techniques that combine zero anaphora resolution and dependency parsing. The main focus of this research is the 'Ko-Triple Extraction' method, which involves restoring omitted information in sentences and analyzing dependency structures to extract more sophisticated and accurate triple structures. The results demonstrate that this method greatly enhances the efficiency and accuracy of Korean text processing, and the validity of the triples has been confirmed through precision metrics. This study serves as fundamental research in the field of Korean text processing and suggests potential applications in various industries. Future research aims to apply this methodology to different industrial sectors and, by expanding and connecting the knowledge graph, generate valuable business insights. This approach is expected to contribute significantly not only to the advancement of natural language processing technologies but also to the effective use of Korean in the field of artificial intelligence.
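In the spirit of the two-step flow described above (restore omitted arguments, then extract triples using the dependency structure), a prompt might look like the sketch below; the exact prompt wording is entirely our assumption, since the paper drives these steps with its own ChatGPT prompts.

    # Hypothetical prompt combining zero anaphora resolution with
    # dependency-guided triple extraction (illustrative only).
    PROMPT = (
        "1) Restore any omitted subjects or objects in the sentence.\n"
        "2) Using the dependency structure of the restored sentence, "
        "extract (subject, relation, object) triples.\n"
        "Sentence: {sentence}\n"
        "Triples:"
    )

    def build_extraction_prompt(sentence: str) -> str:
        return PROMPT.format(sentence=sentence)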
Korean Dependency Parsing using Subtree Linking based on Machine Reading Comprehension
Jinwoo Min, Seung-Hoon Na, Jong-Hoon Shin, Young-Kil Kim, Kangil Kim
http://doi.org/10.5626/JOK.2022.49.8.617
In Korean dependency parsing, biaffine attention models have shown state-of-the-art performance; they first obtain head-level and modifier-level representations by applying two multi-layer perceptrons (MLPs) to the encoded contextualized word representations, perform attention by regarding the modifier-level representation as a query and the head-level one as a key, and take the resulting attention score as the probability of forming a dependency arc between the corresponding two words. However, given two target words (i.e., a candidate head and modifier), biaffine attention methods are basically limited to their word-level representations and are not aware of the explicit boundaries of their phrases or subtrees. Thus, without relying on semantically and syntactically enriched phrase-level and subtree-level representations, biaffine attention methods might not be effective in cases where determining a dependency arc is complicated, such as identifying a dependency between far-distant words, where subtree- or phrase-level information surrounding the target words is often required. To address this drawback, this paper presents a dependency parsing framework based on machine reading comprehension (MRC) that explicitly utilizes subtree-level information by mapping a given child subtree and its parent subtree to a question and an answer, respectively. The experimental results on standard datasets of Korean dependency parsing show that MRC-based dependency parsing outperforms the biaffine attention model. In particular, the improvements are especially strong for long sentences compared to short ones.
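For reference, the biaffine arc scorer that this abstract takes as the baseline can be written in a few lines; the sketch below assumes head/modifier MLP outputs of size d for a sentence of n words, with variable names of our choosing.

    import torch

    def biaffine_scores(H_head: torch.Tensor, H_mod: torch.Tensor,
                        U: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        """H_head, H_mod: (n, d) MLP outputs; U: (d, d); b: (d,).
        Returns an (n, n) matrix where entry (i, j) scores word j
        as the head of word i."""
        # Bilinear term plus a per-head prior bias (broadcast over rows).
        return H_mod @ U @ H_head.T + (H_head @ b)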
Deletion-based Korean Sentence Compression using Graph Neural Networks
Gyoung-Ho Lee, Yo-Han Park, Kong Joo Lee
http://doi.org/10.5626/JOK.2022.49.1.32
Automatic sentence compression aims at generating a concise sentence from a lengthy source sentence. The most common approach to sentence compression is deletion-based compression. In this paper, we implement deletion-based sentence compression systems based on a binary classifier and long short-term memory (LSTM) networks with attention layers. The binary classifier, which is the baseline model, classifies the words in a sentence into those that need to be deleted and those that will remain in the compressed sentence. We also introduce a graph neural network (GNN) in order to employ dependency tree structures when compressing a sentence. A dependency tree is encoded by a graph convolutional network (GCN), one of the most common GNNs, and every node in the encoded tree is input into the sentence compression module. As a conventional GCN deals only with undirected graphs, we propose a directed graph convolutional network (D-GCN) to differentiate between the parent and child nodes of a dependency tree in sentence compression. Experimental results show that the baseline model improves in sentence compression accuracy when employing a GNN. Regarding the performance comparison of graph networks, the D-GCN achieves higher F1 scores than the GCN when applied to sentence compression. Through experiments, we confirmed that better performance can be achieved for sentence compression when the dependency tree structure is explicitly reflected.
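The core of the directed-GCN idea is to use separate weights for head-to-dependent and dependent-to-head edges so that edge direction is not lost, as it is in a plain undirected GCN. The layer below is a minimal sketch under that reading; the weight layout is our assumption, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class DGCNLayer(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.w_down = nn.Linear(dim, dim)   # head -> dependent messages
            self.w_up = nn.Linear(dim, dim)     # dependent -> head messages
            self.w_self = nn.Linear(dim, dim)   # self loop

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            """x: (n, dim) node states; adj: (n, n) with adj[i, j] = 1
            iff word j is the head of word i."""
            return torch.relu(self.w_self(x) + adj @ self.w_down(x)
                              + adj.T @ self.w_up(x))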
RNN model for Emotion Recognition in Dialogue by incorporating the Attention on the Other’s State
http://doi.org/10.5626/JOK.2021.48.7.802
Emotion recognition has recently received increasing attention in artificial intelligence. In this paper, we present an RNN model that analyzes and identifies a speaker's emotions as they appear through utterances in conversation. Two kinds of speaker context are considered: self-dependency and inter-speaker dependency. In particular, we focus on inter-speaker dependency, observing that the state context of the other speaker can affect the emotions of the current speaker. We propose a DialogueRNN-based model that adds a new GRU cell to capture inter-speaker dependency. Our model shows higher performance than DialogueRNN and its three variants on multiple emotion classification datasets.
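Mechanically, the added GRU cell can be pictured as a tracker that updates the listener's state from each incoming utterance, so the other speaker's context is available when their next turn arrives. The module below is a minimal sketch of that idea with our own names, not the paper's exact wiring into DialogueRNN.

    import torch
    import torch.nn as nn

    class OtherStateTracker(nn.Module):
        """Extra GRU cell that tracks the non-speaking party's state."""
        def __init__(self, utt_dim: int, state_dim: int):
            super().__init__()
            self.cell = nn.GRUCell(utt_dim, state_dim)

        def forward(self, utterance: torch.Tensor,
                    other_state: torch.Tensor) -> torch.Tensor:
            """utterance: (batch, utt_dim); other_state: (batch, state_dim).
            Returns the listener's updated state for the next turn."""
            return self.cell(utterance, other_state)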
HTTP/3 Stream Prioritization based on Web Object Dependency
http://doi.org/10.5626/JOK.2021.48.7.850
HTTP/3 is an application layer protocol that includes new features to meet the needs of the modern web. IETF standardization of HTTP/3 has reached its final stage. HTTP/3 provides transport-layer stream multiplexing, and accordingly it faces a stream prioritization problem: determining which stream to transmit among the multiple streams on a connection within limited network resources, a decision that affects the completion time of web object loading. Meanwhile, dependency relationships exist between web activities, which implies that dependency relationships also exist between web object loading activities. In order to transfer web objects in accordance with the web page load process at the browser, we propose an HTTP/3 stream prioritization scheme based on web object dependency. Notably, we conducted the evaluation on a browser-based testbed we built rather than on an HTTP/3 library. The proposed prioritization scheme was evaluated using the testbed, and the results show that applying the scheme can improve the user's experience.
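One simple way to turn an object dependency graph into stream priorities is to rank each object by how much downstream loading it blocks. The sketch below does this by descendant count in the dependency DAG; this is our simplification for illustration, not the paper's exact scheme.

    def priorities(deps: dict[str, list[str]]) -> dict[str, int]:
        """deps[u] = objects that can only load after u.
        Higher score = transmit the object's stream sooner."""
        memo: dict[str, int] = {}

        def descendants(u: str) -> int:
            if u not in memo:
                memo[u] = sum(1 + descendants(v) for v in deps.get(u, []))
            return memo[u]

        nodes = set(deps) | {v for vs in deps.values() for v in vs}
        return {u: descendants(u) for u in nodes}

    # e.g. priorities({"index.html": ["app.js"], "app.js": ["data.json"]})
    # -> {"index.html": 2, "app.js": 1, "data.json": 0}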
Building a Korean Sentence-Compression Corpus by Analyzing Sentences and Deleting Words
GyoungHo Lee, Yo-Han Park, Kong Joo Lee
http://doi.org/10.5626/JOK.2021.48.2.183
Developing a sentence-compression system based on deep learning models requires a parallel corpus consisting of both original sentences and compressed sentences. In this paper, we propose a sentence-compression algorithm that compresses an original sentence into a shorter one. Our basic approach is to delete nodes from a syntactic-dependency tree of the original sentence while maintaining the grammaticality of the compressed sentence. The algorithm chooses nodes to be deleted using the structural constraints and semantically obligatory information of the sentence. By applying the algorithm to the first sentences and headlines of news articles, we built a Korean sentence-compression corpus consisting of approximately 140,000 pairs. We manually assessed the quality of the compression in terms of readability and informativeness, which yielded results of 4.75 and 4.53 out of 5, respectively.
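The deletion idea above can be sketched from the opposite direction: keep the obligatory words, then pull in each kept word's head so every kept word still has its governor and the result stays a connected, grammatical tree. The obligatory set here is a placeholder for the paper's structural and semantic constraint rules.

    def compress(words: list[str], heads: list[int],
                 obligatory: set[int]) -> list[str]:
        """words/heads are parallel; heads[i] is the 1-indexed head of
        word i+1 (0 = root). obligatory holds 1-indexed positions that
        must survive (stand-in for the paper's constraints)."""
        keep = set(obligatory)
        changed = True
        while changed:   # close the kept set under the head relation
            changed = False
            for i in list(keep):
                h = heads[i - 1]
                if h != 0 and h not in keep:
                    keep.add(h)
                    changed = True
        return [w for i, w in enumerate(words, start=1) if i in keep]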
Korean Dependency Parsing using Token-Level Contextual Representation in Pre-trained Language Model
http://doi.org/10.5626/JOK.2021.48.1.27
Dependency parsing is the problem of disambiguating sentence structure by recognizing dependencies and labels between words in a sentence. In contrast to previous studies that applied additional RNNs on top of a pre-trained language model, this paper proposes a dependency parsing method that uses fine-tuning alone to maximize the self-attention mechanism of the pre-trained language model, along with a technique for using relative distance parameters and SEP tokens. In evaluations on the Sejong parsing corpus following the TTA standard guidelines, the KorBERT_base model achieved 95.73% UAS and 93.39% LAS, while the KorBERT_large model achieved 96.31% UAS and 94.17% LAS. This represents an improvement of about 3% over previous studies that did not use a pre-trained language model. On the word-morpheme mixed corpus of the previous study, the KorBERT_base model achieved 94.19% UAS and the KorBERT_large model 94.76% UAS.
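One way relative distance parameters can enter such a parser is as a learnable bias added to the word-pair head scores, clipped to a window; the sketch below shows that mechanism only, and the clipping window and combination are our assumptions rather than the paper's exact formulation.

    import torch

    def head_scores(sim: torch.Tensor, dist_bias: torch.Tensor,
                    max_dist: int = 8) -> torch.Tensor:
        """sim: (n, n) word-pair scores from the fine-tuned LM;
        dist_bias: (2 * max_dist + 1,) learnable bias per clipped
        relative offset. Returns biased (n, n) head scores."""
        n = sim.size(0)
        idx = torch.arange(n)
        offset = (idx[None, :] - idx[:, None]).clamp(-max_dist, max_dist)
        return sim + dist_bias[offset + max_dist]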

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr