Search : [ keyword: 인공지능 (artificial intelligence) ] (31)

Exploring Neural Network Models for Road Classification in Personal Mobility Assistants: A Comparative Study on Accuracy and Computational Efficiency

Gwanghee Lee, Sangjun Moon, Kyoungson Jhang

http://doi.org/10.5626/JOK.2023.50.12.1083

With the increasing use of personal mobility devices, the frequency of traffic accidents has also risen, with most accidents resulting from collisions with cars or pedestrians. Notably, compliance with traffic rules on roadways is low. Auxiliary systems that recognize roads and provide information about them could help reduce the number of accidents. Since road images have distinct material characteristics, models studied in the field of image classification are well suited to this task. In this study, we compared the performance of various road image classification models with parameter counts ranging from 2 million to 30 million, enabling selection of an appropriate model for each situation. The majority of the models achieved an accuracy of over 95%, with most surpassing 99% in top-2 accuracy. Among the models, MobileNet v2 had the fewest parameters while still exhibiting excellent performance, and EfficientNet showed stable accuracy across all classes, surpassing 90% in each.
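Top-2 accuracy, reported above, counts a prediction as correct when the true class appears among the model's two highest-scoring classes. A minimal sketch in plain Python (the class scores and labels below are illustrative, not from the paper):

```python
def top_k_accuracy(scores, labels, k=2):
    """Fraction of samples whose true label is among the k highest scores."""
    hits = 0
    for class_scores, true_label in zip(scores, labels):
        # sort class indices by descending score, keep the top k
        top_k = sorted(range(len(class_scores)),
                       key=lambda i: class_scores[i], reverse=True)[:k]
        hits += true_label in top_k
    return hits / len(labels)

# hypothetical softmax outputs over three road-surface classes
scores = [[0.7, 0.2, 0.1],   # true class 0: top-1 hit
          [0.3, 0.4, 0.3],   # true class 0: only a top-2 hit
          [0.1, 0.2, 0.7]]   # true class 1: only a top-2 hit
labels = [0, 0, 1]
print(top_k_accuracy(scores, labels, k=1))  # → 0.3333333333333333
print(top_k_accuracy(scores, labels, k=2))  # → 1.0
```

This illustrates why top-2 accuracy exceeds top-1 in the reported results: a near-miss second-ranked prediction still counts.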

Multidimensional Subset-based Systems for Bias Elimination Within Binary Classification Datasets

KyeongSu Byun, Goo Kim, Joonho Kwon

http://doi.org/10.5626/JOK.2023.50.5.383

As artificial intelligence technology develops, fairness issues related to artificial intelligence are drawing attention. As a result, many studies have addressed this issue, but most have focused on developing models and training methods. Research on removing bias from the training data itself, which is a fundamental cause, is still insufficient. Therefore, in this paper, we designed and implemented a system that divides the biases existing within data into label biases and subgroup biases and removes them to generate datasets with improved fairness. The proposed system consists of two steps: (1) subset generation and (2) bias removal. First, the subset generator divides the existing data into subsets formed by combinations of attribute values in the dataset. Subsequently, each subset is divided into dominant and weak groups based on fairness indicator values obtained by validating the existing datasets against the validation datasets. Next, the bias remover reduces the bias shown in each subset by repeatedly extracting from and verifying the dominant group of each subset so as to reduce its difference from the weak group. Afterwards, the debiased subsets are merged and a fair dataset is returned. The fairness indicators used for verification are the F1 score and equalized odds. Comprehensive experiments with real-world Census income data, COMPAS data, and bank marketing data as verification data demonstrated that our proposed system outperformed the existing technique, yielding a better fairness improvement rate and higher accuracy with most machine learning algorithms.
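The two-step idea (partition into attribute/label subsets, then shrink each dominant group toward its weak counterpart) can be sketched as follows. This is a simplified illustration under assumed names (`debias_by_subsets`, a single sensitive attribute, downsampling as the removal step), not the paper's implementation:

```python
import random
from collections import defaultdict

def debias_by_subsets(rows, attr_key, label_key, seed=0):
    """Sketch: partition rows into subsets by (sensitive attribute, label),
    then downsample each dominant subset to the weak subset's size so the
    label distribution is balanced within each attribute value."""
    rng = random.Random(seed)
    subsets = defaultdict(list)
    for row in rows:
        subsets[(row[attr_key], row[label_key])].append(row)
    balanced = []
    for attr in {a for (a, _) in subsets}:
        groups = [g for (a, _), g in subsets.items() if a == attr]
        target = min(len(g) for g in groups)   # weak-group size
        for g in groups:                       # shrink dominant groups
            balanced.extend(rng.sample(g, target))
    return balanced

rows = ([{"sex": "M", "y": 1}] * 30 + [{"sex": "M", "y": 0}] * 10 +
        [{"sex": "F", "y": 1}] * 5  + [{"sex": "F", "y": 0}] * 15)
fair = debias_by_subsets(rows, "sex", "y")
print(len(fair))  # → 30  (10+10 for M, 5+5 for F)
```

After balancing, positive and negative labels occur equally often within each attribute value, which is the kind of label-bias reduction the system targets.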

CommonAI: Quantitative and Qualitative Analysis for Automatic-generation of Commonsense Reasoning Conversations Suitable for AI

Hyeon Gyu Shin, Hyun Jo You, Young Sook Song

http://doi.org/10.5626/JOK.2023.50.5.407

Human-like common sense reasoning is now considered an essential component for improving the quality of natural language generation in chatbots and conversational agents. However, there is currently no clear consensus on the extent to which AI systems require common sense. We discussed common sense requirements for AI chatbots based on quantitative and qualitative analysis of results from two experimental surveys, showing differences between gender and age groups and variations according to conversation topics. The contribution of this paper is to refine preferences for chatbot conversations that provide useful information and show an appropriate level of empathy.

Improving the Performance of Knowledge Tracing Models using Quantized Correctness Embeddings

Yoonjin Im, Jaewan Moon, Eunseong Choi, Jongwuk Lee

http://doi.org/10.5626/JOK.2023.50.4.329

Knowledge tracing is the task of monitoring learners' knowledge proficiency based on their interaction records. Despite the flexible use of deep neural network-based models for this task, existing methods disregard the difficulty of each question and perform poorly for learners who get easy questions wrong or hard questions right. In this paper, we propose quantizing learners' response information based on question difficulty so that knowledge tracing models can learn both the response and the difficulty of the question, improving performance. We design a method that can effectively discriminate between negative samples (incorrect responses) on questions with a high correct-answer rate and positive samples (correct responses) on questions with a low correct-answer rate. To this end, we use sinusoidal positional encoding (SPE), which can maximize the distance between embedding representations in the latent space. Experiments show that the AUC value improves by up to 17.89% in the target section compared to the existing method.
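The SPE mentioned above maps an integer index to a fixed sinusoidal vector. One way to see the idea is to index quantized responses so that "correct on an easy question" and "incorrect on an easy question" land far apart; the indexing scheme below is illustrative, not the paper's exact formulation:

```python
import math

def sinusoidal_encoding(position, dim):
    """Standard sinusoidal positional encoding for a single position."""
    vec = []
    for i in range(dim // 2):
        freq = 1.0 / (10000 ** (2 * i / dim))
        vec.append(math.sin(position * freq))
        vec.append(math.cos(position * freq))
    return vec

def quantized_response_embedding(correct, difficulty_bin, num_bins, dim):
    """Sketch: give a correct answer on difficulty bin d index d, and an
    incorrect answer index 2*num_bins-1-d, so surprising outcomes sit at
    distant positions (illustrative indexing only)."""
    index = difficulty_bin if correct else (2 * num_bins - 1 - difficulty_bin)
    return sinusoidal_encoding(index, dim)

easy_correct = quantized_response_embedding(True, 0, 10, 16)
easy_wrong = quantized_response_embedding(False, 0, 10, 16)
print(round(math.dist(easy_correct, easy_wrong), 3))  # clearly nonzero
```

The point of the sinusoidal form is that every index gets a distinct, fixed embedding without any learned parameters, so the distance structure is available from the first training step.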

Prediction of Antibiotic Resistance to Ciprofloxacin in Patients with Upper Urinary Tract Infection through Exploratory Data Analysis and Machine Learning

Jongbub Lee, Hyungyu Lee

http://doi.org/10.5626/JOK.2023.50.3.263

Emergency medicine physicians use an empirical treatment strategy to select antibiotics before clinically confirming an antibiotic resistance profile for a patient with a urinary tract infection. Empirical treatment is a challenging task given the concern over increasing antibiotic resistance of urinary tract pathogens in the community. As a single-institution retrospective study, this study proposed a method for predicting antibiotic resistance using a machine learning algorithm for patients diagnosed with upper urinary tract infection in the emergency department. First, we selected significant predictors using statistical test methods and a game-theory-based SHAP (SHapley Additive exPlanation), respectively. Next, we compared the performance of four classifiers and proposed an algorithm to assist decision-making in empirical treatment by adjusting the prediction probability threshold. As a result, the SVM classifier using predictors selected through SHAP (65% of the total) showed the highest AUROC (0.775) among all conditions in the experiment. By adjusting the predictive probability threshold of the SVM, we achieved a specificity 3.9 times higher than that of empirical treatment while preserving the 98% sensitivity of the doctor's empirical treatment.
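Threshold adjustment of the kind described (maximize specificity subject to a sensitivity floor) can be sketched in a few lines. The probabilities and labels below are made up for illustration; the function name and the 0.75 floor are assumptions, not values from the paper:

```python
def threshold_for_sensitivity(probs, labels, min_sensitivity=0.98):
    """Return the highest threshold on predicted resistance probability
    that still keeps sensitivity (recall on resistant cases) at or above
    min_sensitivity. Higher thresholds mean higher specificity."""
    for t in sorted(set(probs), reverse=True):
        preds = [p >= t for p in probs]
        tp = sum(pr and y for pr, y in zip(preds, labels))
        fn = sum((not pr) and y for pr, y in zip(preds, labels))
        if tp / (tp + fn) >= min_sensitivity:
            return t
    return 0.0

# hypothetical predicted probabilities of ciprofloxacin resistance
probs  = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,   0,   0,   0]   # 1 = resistant
print(threshold_for_sensitivity(probs, labels, min_sensitivity=0.75))  # → 0.7
```

Scanning thresholds from high to low and stopping at the first one that satisfies the sensitivity constraint yields the most specific operating point that still catches the required fraction of resistant cases.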

A Study of Metric and Framework Improving Fairness-utility Trade-off in Link Prediction

Heeyoon Yang, YongHoon Kang, Gahyung Kim, Jiyoung Lim, SuHyun Yoon, Ho Seung Kim, Jee-Hyong Lee

http://doi.org/10.5626/JOK.2023.50.2.179

The advance of artificial intelligence (AI) technology has shown remarkable improvements over the last decade. However, AI sometimes makes biased predictions, since real-world big data intrinsically contains discriminative social factors. This problem often arises in friend recommendation in Social Network Services (SNS). For social network datasets, Graph Neural Networks (GNNs) are used for training, but they have a strong tendency to connect similar nodes (the homophily effect). Furthermore, they are more likely to make biased predictions based on socially sensitive attributes, such as gender or religion, which is ethically problematic. To overcome these problems, various fairness-aware AI models and fairness metrics have been proposed. However, most studies used different metrics to evaluate fairness and did not consider the trade-off between accuracy and fairness. Thus, we propose a novel fairness metric called Fairβ-metric, which takes both accuracy and fairness into consideration, and a framework called FairU that shows outstanding performance on the proposed metric.
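The abstract does not give the definition of Fairβ-metric. Purely to illustrate how a single score can weigh accuracy against fairness, here is one plausible shape by analogy with the Fβ score, a weighted harmonic mean; this is an assumption for illustration, not the paper's metric:

```python
def f_beta_combination(accuracy, fairness, beta=1.0):
    """Weighted harmonic mean of accuracy and a fairness score in [0, 1]
    (analogy with the F-beta score; NOT the paper's definition).
    beta > 1 weights the fairness term more heavily."""
    if accuracy == 0 and fairness == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * accuracy * fairness / (b2 * accuracy + fairness)

print(round(f_beta_combination(0.9, 0.6), 3))          # → 0.72
print(round(f_beta_combination(0.9, 0.6, beta=2), 3))  # → 0.643
```

A harmonic-mean combination punishes a model that trades all of one quantity for the other, which is exactly the degenerate behavior a trade-off-aware metric is meant to expose.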

A Pre-processing Method for Learning Data Using eXplainable Artificial Intelligence

Changhong Lee, Jaemin Lee, Donghyun Kim, Jongdeok Kim

http://doi.org/10.5626/JOK.2023.50.2.133

Artificial intelligence model generation proceeds through the stages of learning data processing, model learning, and model evaluation. Data pre-processing techniques for creating quality learning data account for many of the methods for improving model accuracy. Existing pre-processing techniques tend to rely heavily on the experience of model builders, and when pre-processing is performed based on experience, it is difficult to explain the basis for selecting a given technique. The reason builders are forced to rely on experience is that learning models have become huge and complicated to a level that is difficult for humans to interpret. Therefore, research is being conducted to explain how models operate by introducing eXplainable AI. In this paper, we propose a learning data pre-processing system using eXplainable AI. The system trains a model on data that has not been pre-processed, analyzes the trained model using eXplainable AI, and repeats data pre-processing based on that information. Finally, we improve model performance, explain the rationale for pre-processing, and show the practicality of the system.
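The train-explain-preprocess loop described above can be sketched generically. All callables here are toy stand-ins (the "model" is just a feature list, and the "explanation" assigns zero importance to noise features); the structure of the loop, not the components, is the point:

```python
def xai_preprocessing_loop(data, train, explain, preprocess, evaluate,
                           max_rounds=5):
    """Sketch of the described cycle: train on un-preprocessed data,
    analyze the model with an XAI method, and repeat pre-processing
    guided by the explanation until performance stops improving."""
    best_score, best_data = -1.0, data
    for _ in range(max_rounds):
        model = train(data)
        score = evaluate(model)
        if score <= best_score:
            break                       # no further improvement
        best_score, best_data = score, data
        importance = explain(model)     # e.g. per-feature attributions
        data = preprocess(data, importance)
    return best_data, best_score

# toy stand-ins: low-importance (noise) features are dropped each round
train = lambda d: d
evaluate = lambda m: 1.0 - 0.1 * sum(f.startswith("noise") for f in m)
explain = lambda m: {f: 0.0 if f.startswith("noise") else 1.0 for f in m}
preprocess = lambda d, imp: [f for f in d if imp[f] > 0]

data = ["age", "dose", "noise_1", "noise_2"]
cleaned, score = xai_preprocessing_loop(data, train, explain, preprocess,
                                        evaluate)
print(cleaned, score)  # → ['age', 'dose'] 1.0
```

Stopping when the score no longer improves keeps the last pre-processing step from being applied blindly, mirroring the evaluation stage in the paper's pipeline.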

Generating Counterfactual Examples through Generating Adversarial Examples

Hyungyu Lee, Dahuin Jung

http://doi.org/10.5626/JOK.2022.49.12.1132

The advance of artificial intelligence (AI) has brought numerous conveniences. However, the complex structure of AI models makes it challenging to understand their inner workings. Counterfactual explanation is a method that explains AI using counterfactual examples, in which minimal perceptible perturbations are applied to change classification results. Adversarial examples are data modified to cause AI models to misclassify them. Unlike counterfactual examples, the perturbations applied to adversarial examples are difficult for humans to perceive. In a simple model, generating adversarial examples is similar to generating counterfactual examples. In deep learning, however, the two differ because the cognitive gap between humans and deep learning models is often large. Nevertheless, we confirmed that adversarial examples generated by certain deep learning models were similar to counterfactual examples. In this paper, we analyzed the structures and conditions of deep learning models for which adversarial examples were similar to counterfactual examples. We also proposed a new metric, partial concentrated change (PCC), and compared adversarial examples generated from different models using existing metrics and the proposed PCC.
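The claim that the two coincide for simple models can be made concrete with a fast-gradient-style attack on a linear classifier, where the gradient is just the weight vector. The weights and example below are illustrative, and this is a generic FGSM sketch, not the paper's procedure:

```python
def fgsm_linear(x, w, y, eps):
    """FGSM on a linear score f(x) = w·x + b with label y in {-1, +1}:
    step each coordinate by eps against the correct classification,
    x' = x - eps * y * sign(w). For a linear model this minimal shift
    points straight at the decision boundary, which is also the
    direction a counterfactual example would take."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * y * sign(wi) for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
score = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b

x, y = [1.0, 1.0], 1          # f(x) = 1.0 > 0: correctly classified as +1
x_adv = fgsm_linear(x, w, y, eps=0.6)
print(score(x), score(x_adv))  # → 1.0 -0.8 (the score crosses the boundary)
```

In a deep network the input-space gradient no longer aligns with any human-meaningful direction, which is why the perturbation stops being perceptible, and the abstract's distinction between the two kinds of examples appears.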

Explainable Graph Neural Network for Medical Science Research

Yewon Shin, Kisung Moon, Youngsuk Jung, Sunyoung Kwon

http://doi.org/10.5626/JOK.2022.49.11.999

Explainable AI (XAI) is a technology that provides explainability so that end-users can comprehend the prediction results of ML algorithms. In particular, establishing the reliability of an AI algorithm's decision-making process through XAI is most critical for real applications in the medical field. However, complex interaction-based medical data restrict the application of existing XAI technologies, which were developed mostly for image or text data. Graph Neural Network (GNN)-based XAI research has been highlighted in recent years because GNNs are technically specialized to capture complex relationships in data. In this paper, we propose a taxonomy of GNN-based XAI technology according to application method and algorithm, survey current XAI research trends and use-cases in four detailed areas of the medical field, and expound on the technical limitations and future directions of XAI research specialized for the biomedical area.

GPT-2 for Knowledge Graph Completion

Sang-Woon Kim, Won-Chul Shin

http://doi.org/10.5626/JOK.2021.48.12.1281

Knowledge graphs have become an important resource in many artificial intelligence (AI) tasks, and many studies aim to complete incomplete knowledge graphs. Among them, interest in knowledge completion via link prediction and relation prediction is increasing. The most discussed language models in AI natural language processing include BERT and GPT-2; among prior work, KG-BERT applies BERT to knowledge completion problems. In this paper, we solve the knowledge completion problem using GPT-2, one of the most prominent recent language models. Triple-information-based knowledge completion and path-triple-based knowledge completion were proposed and explained as methods for solving the knowledge completion problem with the GPT-2 language model. The proposed model was named KG-GPT2, and experiments compared the link prediction and relation prediction results of TransE, TransR, KG-BERT, and KG-GPT2 to evaluate knowledge completion performance. For link prediction, the WN18RR, FB15k-237, and UMLS datasets were used, and for relation prediction, FB15K was used. In link prediction, the path-triple-based KG-GPT2 recorded the best performance on all experimental datasets except UMLS. In relation prediction, the path-triple-based KG-GPT2 also recorded the best performance on the FB15K dataset.
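The abstract does not show how triples are fed to GPT-2. A common approach for language-model-based knowledge completion is to serialize a triple (and, for the path-triple variant, its context path) as plain text that the model can then score; the format below is a hypothetical illustration, not KG-GPT2's exact scheme:

```python
def triple_to_text(head, relation, tail):
    """Serialize a knowledge-graph triple as a plain-text sequence for a
    language model to score (illustrative format only)."""
    return f"{head} {relation.replace('_', ' ')} {tail}"

def path_triple_to_text(path, triple):
    """Prefix the target triple with a context path of connected triples,
    as in path-triple-based completion."""
    context = " . ".join(triple_to_text(*t) for t in path)
    return context + " . " + triple_to_text(*triple)

t = ("Seoul", "capital_of", "South Korea")
path = [("Seoul", "located_in", "East Asia")]
print(triple_to_text(*t))           # → Seoul capital of South Korea
print(path_triple_to_text(path, t))
```

Under this framing, link prediction becomes ranking candidate tails by the language model's likelihood of the serialized sequence, which is why extra path context can help.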


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr