Digital Library [Search Result]
An Improved Algorithm of Finding a Maximal Common Subsequence
Hyeonjun Shin, Joong Chae Na, Jeong Seop Sim
http://doi.org/10.5626/JOK.2023.50.9.737
A maximal common subsequence (MCS) of two strings is a common subsequence that cannot be extended by inserting any character. Unlike the longest common subsequence (LCS), MCSs can vary in length, and the longest MCS is an LCS. Although the LCS is commonly used to compare the similarity of two sequences, computing it can take a significant amount of time. Hence, finding a longer MCS is important, as it can be computed faster than the LCS. An algorithm was previously proposed that computes one MCS of two strings X and Y of total length n using O(kn) space and O(n√(log n / log log n)) time, and improved algorithms have also been proposed. In this paper, we present an algorithm that examines more characters when computing an MCS. The proposed algorithm runs in O(kn) space and O(n√(log n / log log n)) time for a given constant k. Experimental results on both real and randomly generated data showed that the MCS computed by the proposed algorithm can be up to 6.31 times longer than those computed by previous algorithms.
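The paper's algorithm itself is not reproduced in this entry. As a minimal illustration of the MCS definition only (a naive sketch, not the O(kn)-space method summarized above), the Python snippet below extends a common subsequence by single-character insertions until no insertion keeps it common to both strings; the function names and example strings are assumptions for illustration.

    def is_subsequence(s, t):
        # True if s can be obtained from t by deleting characters.
        it = iter(t)
        return all(c in it for c in s)

    def extend_to_mcs(x, y, s=""):
        # Naive MCS construction: repeatedly insert a character into s at any
        # position as long as the result stays a common subsequence of x and y.
        # When no insertion works, s is maximal by definition (not necessarily an LCS).
        alphabet = sorted(set(x) & set(y))
        while True:
            extended = False
            for i in range(len(s) + 1):
                for c in alphabet:
                    t = s[:i] + c + s[i:]
                    if is_subsequence(t, x) and is_subsequence(t, y):
                        s, extended = t, True
                        break
                if extended:
                    break
            if not extended:
                return s

    print(extend_to_mcs("abcde", "acebd"))  # -> "abd", one maximal common subsequence

Because this sketch tries every insertion position and character, it is far slower than the algorithms discussed in the paper; its only purpose is to make the maximality condition concrete.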
Improving the Quality of Generating Imbalance Data in GANs through an Exhaustive Contrastive Learning Method
Hyeonjun Shin, Sangbaek Lee, Kyuchul Lee
http://doi.org/10.5626/JOK.2023.50.4.295
As the performance of deep learning algorithms has improved, they are increasingly used to solve various real-world problems. Data collected from the real world may be imbalanced, depending on how frequently events occur or how difficult they are to collect. Data whose classes are unevenly represented are called imbalanced data, and deep learning algorithms find it particularly difficult to learn minority classes, which have relatively few samples. Recently, Generative Adversarial Networks (GANs) have been applied for data augmentation, and self-supervised pre-training has been proposed for minority-class learning. However, because class information of the imbalanced data is used while training the generative model, minority classes are learned poorly and the quality of the generated data suffers. To solve this problem, this paper proposes a similarity-based exhaustive contrastive learning method. The proposed method is evaluated quantitatively with the Fréchet Inception Distance (FID) and the Inception Score (IS). Compared to the existing method, the proposed method improved the FID by 16.32 and the IS by 0.38.
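The proposed generation method itself is not detailed in this entry. As a hedged sketch of the FID metric used for evaluation, the Python snippet below computes FID from two sets of Inception-v3 activations that are assumed to have been extracted elsewhere; the function name and the toy inputs are illustrative assumptions.

    import numpy as np
    from scipy.linalg import sqrtm

    def frechet_inception_distance(act_real, act_fake):
        # FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2)),
        # where mu and C are the mean and covariance of Inception activations
        # (rows are samples, columns are features).
        mu_r, mu_f = act_real.mean(axis=0), act_fake.mean(axis=0)
        cov_r = np.cov(act_real, rowvar=False)
        cov_f = np.cov(act_fake, rowvar=False)
        covmean = sqrtm(cov_r @ cov_f)
        if np.iscomplexobj(covmean):  # drop tiny imaginary parts from sqrtm
            covmean = covmean.real
        diff = mu_r - mu_f
        return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

    # Toy usage with random arrays standing in for real Inception features.
    rng = np.random.default_rng(0)
    fid = frechet_inception_distance(rng.normal(size=(256, 64)),
                                     rng.normal(loc=0.1, size=(256, 64)))
    print(round(fid, 2))

A lower FID means the generated distribution is closer to the real one, which is why a 16.32-point reduction indicates better generation quality.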
