Digital Library [ Search Results ]
Applying Deep Neural Networks and Random Forests to Predict the Pathogenicity of Single Nucleotide Variants in Hereditary Cancer-associated Genes
Da-Bin Lee, Seonhwa Kim, Moonjong Kang, Changbum Hong, Kyu-Baek Hwang
http://doi.org/10.5626/JOK.2023.50.9.746
The recent proliferation of genetic testing has made it possible to explore an individual's genetic variants and use pathogenicity information to diagnose and prevent genetic diseases. However, the number of identified variants with pathogenicity information is quite small. Methods for predicting the pathogenicity of variants by machine learning have been proposed to address this problem. In this study, we apply deep neural networks to variant pathogenicity prediction and compare them with random forests and logistic regression, which have been widely used in previous studies. The experimental data consisted of 1,068 single-nucleotide variants in genes associated with hereditary cancers. Experiments on 100 random datasets generated for hyperparameter selection showed that random forests performed best in terms of area under the precision-recall curve. On 15 holdout gene datasets, deep neural networks performed best on average, but the difference from the second-best model, random forests, was not statistically significant. Logistic regression performed statistically significantly worse than either of the other two models. In conclusion, we found that deep neural networks and random forests were generally better than logistic regression at predicting the pathogenicity of single-nucleotide variants associated with hereditary cancer.
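The comparison described above can be reproduced in outline with scikit-learn; the following is a minimal sketch (not the authors' implementation) that assumes precomputed variant features and labels (random placeholders here) and scores each model by area under the precision-recall curve.

```python
# Sketch of comparing the three model families on a binary pathogenicity
# task, scored by area under the precision-recall curve (AUPRC).
# X and y are placeholders for real variant annotations and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1068, 20))        # placeholder variant features
y = rng.integers(0, 2, size=1068)      # placeholder pathogenicity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "deep_neural_network": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    print(name, "AUPRC:", round(average_precision_score(y_te, scores), 3))
```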
Efficient Compilation Error Localization with DNN
http://doi.org/10.5626/JOK.2022.49.6.434
Few programs are free of compilation errors. The compiler provides programmers with error messages as clues for solving the problem, but correctly analyzing these messages also takes much time. Although many methods have been proposed for localizing and repairing compilation errors, most of them use data from novice programmers or can be applied only to one specific programming language, which makes them difficult to apply to large-scale industrial projects. In this study, to increase the efficiency of compilation error handling in practical projects, we propose DeepErrorFinder, which identifies the location of compilation errors using a DNN. The model, based on LSTM, is trained on compilation error logs and repair changes from mobile phone software development projects and predicts the location of an error. In experiments, it achieved an accuracy of 52% and reduced the elapsed time compared to a manual search. It can help developers quickly find the location of compilation-error code in practical projects.
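As a rough sketch (not DeepErrorFinder itself), an LSTM can score each line of a tokenized compilation error log as the likely error location; the vocabulary size, embedding size, and line-level framing below are assumptions for illustration.

```python
# Sketch: score each tokenized log line with an LSTM and pick the
# highest-scoring line as the predicted error location.
import torch
import torch.nn as nn

class ErrorLineScorer(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)     # one score per log line

    def forward(self, token_ids):
        # token_ids: (num_lines, tokens_per_line)
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)         # last hidden state per line
        return self.score(h_n[-1]).squeeze(-1)    # (num_lines,) location scores

log = torch.randint(0, 5000, (30, 16))            # toy log: 30 lines, 16 tokens each
scores = ErrorLineScorer()(log)
print("predicted error line:", scores.argmax().item())
```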
Semantic Face Transformations for Attacking Deep Neural Networks and Improving Robustness
http://doi.org/10.5626/JOK.2021.48.7.809
Deep neural networks (DNNs) have achieved great success in various vision fields such as autonomous driving, face recognition, and object detection. However, a well-trained network can be manipulated if its input is disturbed by perturbations. A common attack method is to add perturbations to the pixel space of an image while limiting their Lp-norm. Such pixel-based perturbations can be detected by the naked eye, so a more realistic and effective attack is to disturb the network by transforming the image in a way that does not look unnatural. In this paper, we propose a new attack method that uses natural color transformations based on the segmentation of face images. We generated face transformation images based on semantic face transformation and conducted comprehensive experiments showing that our face transformations reduced the accuracy of the classification network. The transformed face images were also used for robustness training of the neural network, which improved the robustness of the deep neural network.
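A simplified sketch of the idea is to apply a natural-looking color shift only inside a semantic region given by a segmentation mask and then feed the transformed face to a classifier; the mask source and the hue-shift range below are assumptions, not the paper's exact procedure.

```python
# Sketch: shift hue only inside a segmented face region (e.g., skin or hair),
# producing a semantically transformed image for attacking a classifier.
import numpy as np
from PIL import Image

def hue_shift_region(image, mask, hue_delta):
    """Shift hue inside the masked region; image is RGB, mask is boolean."""
    hsv = np.array(image.convert("HSV"), dtype=np.int16)
    hsv[..., 0] = np.where(mask, (hsv[..., 0] + hue_delta) % 256, hsv[..., 0])
    return Image.fromarray(hsv.astype(np.uint8), mode="HSV").convert("RGB")

face = Image.new("RGB", (64, 64), (200, 160, 140))                # placeholder face image
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                                         # placeholder segmentation mask
attacked = hue_shift_region(face, mask, hue_delta=40)
attacked.save("transformed_face.png")
```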
Performance Comparison and Analysis Between Neural and Non-neural Autoencoder-based Recommender Systems
http://doi.org/10.5626/JOK.2020.47.11.1078
While deep neural networks have brought advances in many domains, recent studies have shown that their performance gains are not as large as reported, especially considering the higher computational complexity they require. This phenomenon is caused by the lack of shared experimental settings and of rigorous analysis of the proposed methods. In this paper, 1) we build experimental settings for a fair comparison between different recommenders, 2) provide empirical studies on the performance of autoencoder-based recommenders, one of the main families in the literature, and 3) analyze model performance according to user and item popularity. With extensive experiments, we found that there was no consistent performance gap between the neural and the non-neural models across datasets, and no evidence that the non-neural models have been improving over time. Also, the non-neural models achieved better accuracy on popular items, while the neural models performed relatively better on less popular items.
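For concreteness, the following is a minimal sketch of an item-reconstructing autoencoder recommender of the kind compared in such studies (AutoRec-style), trained to reconstruct a user's interaction vector; the dimensions and toy interaction matrix are assumptions.

```python
# Sketch: a shallow autoencoder over user interaction vectors; recommendations
# come from ranking reconstruction scores of unseen items.
import torch
import torch.nn as nn

num_users, num_items, hidden = 100, 500, 64
interactions = (torch.rand(num_users, num_items) < 0.05).float()  # toy implicit feedback

autoencoder = nn.Sequential(
    nn.Linear(num_items, hidden), nn.ReLU(),
    nn.Linear(hidden, num_items),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
for _ in range(50):
    reconstruction = autoencoder(interactions)
    loss = nn.functional.binary_cross_entropy_with_logits(reconstruction, interactions)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Recommend unseen items for user 0 by ranking reconstruction scores.
scores = autoencoder(interactions[0]).detach()
scores[interactions[0] > 0] = -float("inf")        # mask already-seen items
print("top-5 items for user 0:", scores.topk(5).indices.tolist())
```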
Branchpoint Prediction Using Self-Attention Based Deep Neural Networks
http://doi.org/10.5626/JOK.2020.47.4.343
Splicing is a ribonucleic acid (RNA) processing step that produces messenger RNA (mRNA), which is translated into proteins. Branchpoints are RNA sequence elements essential for splicing. This paper proposes a novel method for branchpoint prediction. Identifying branchpoints involves several challenges. Branchpoint sites are known to depend on several sequence patterns, called motifs. In addition, the distribution of branchpoints is highly biased, which imposes a class-imbalance problem. Existing approaches are limited in that they either rely on handcrafted sequence features or ignore the class imbalance. To address these difficulties, the proposed method incorporates 1) attention mechanisms to learn sequence-positional long-term dependencies, and 2) regularization with a triplet loss to alleviate the class imbalance. Our method achieves performance comparable to the state of the art while providing rich interpretability of its decisions.
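The two ingredients named above can be sketched as a self-attention encoder over RNA sequence windows plus a triplet loss used as an auxiliary regularizer; the window length, embedding sizes, and triplet sampling below are assumptions for illustration, not the paper's configuration.

```python
# Sketch: self-attention over RNA windows with a per-position branchpoint
# score, regularized by a triplet loss on position embeddings.
import torch
import torch.nn as nn

seq_len, d_model = 70, 32                      # assumed sequence window length
embed = nn.Embedding(4, d_model)               # A, C, G, U
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
classifier = nn.Linear(d_model, 1)             # per-position branchpoint score
triplet = nn.TripletMarginLoss(margin=1.0)

tokens = torch.randint(0, 4, (8, seq_len))     # toy batch of RNA windows
hidden = encoder(embed(tokens))                # (batch, seq_len, d_model)
logits = classifier(hidden).squeeze(-1)        # (batch, seq_len)

# Triplet regularization: pull a branchpoint position toward another
# branchpoint (positive) and away from a non-branchpoint (negative).
anchor, positive, negative = hidden[0, 10], hidden[1, 10], hidden[0, 40]
loss = triplet(anchor.unsqueeze(0), positive.unsqueeze(0), negative.unsqueeze(0))
print("triplet loss:", loss.item())
```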
Object Recognition in Low Resolution Images using a Convolutional Neural Network and an Image Enhancement Network
Injae Choi, Jeongin Seo, Hyeyoung Park
http://doi.org/10.5626/JOK.2018.45.8.831
Recently, the development of deep learning technologies such as convolutional neural networks has greatly improved the performance of object recognition in images. However, object recognition still faces many challenges due to large variations in images and the diversity of object categories to be recognized. In particular, studies on object recognition in low-resolution images are still at an early stage and have not shown satisfactory performance. In this paper, we propose an image enhancement neural network to improve object recognition performance on low-resolution images. We also use the enhanced images to train an object recognition model based on convolutional neural networks, so that recognition remains robust to changes in resolution. To verify the efficiency of the proposed method, we conducted experiments on object recognition in a low-resolution environment using the CIFAR-10 and CIFAR-100 databases. We confirmed that the proposed method can greatly improve recognition performance on low-resolution images while keeping stable performance on images at the original resolution.
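A condensed sketch of the pipeline described above is an enhancement network that maps a low-resolution input back to 32x32 and feeds its output to a CNN classifier; the layer sizes and downscaling factor are assumptions, not the paper's architecture.

```python
# Sketch: enhancement network upsamples and refines a low-resolution image,
# and a toy CIFAR-style CNN classifies the enhanced result.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnhancementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, low_res):
        upsampled = F.interpolate(low_res, size=(32, 32), mode="bilinear",
                                  align_corners=False)
        return self.conv2(F.relu(self.conv1(upsampled)))

classifier = nn.Sequential(                      # toy CIFAR-style classifier
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 8 * 8, 10),
)

low_res_batch = torch.rand(4, 3, 16, 16)         # simulated 16x16 low-resolution inputs
enhanced = EnhancementNet()(low_res_batch)
print("class logits shape:", classifier(enhanced).shape)   # (4, 10)
```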

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr