Dual Paraboloid Map-Based Real-Time Indirect Illumination Rendering

Jaewon Choi, Sungkil Lee

http://doi.org/10.5626/JOK.2019.46.11.1099

Indirect illumination, which renders lighting more finely and delicately, has been hard to achieve in real-time rendering environments due to the load of its physical computation. Among approximate methods, the Light Propagation Volumes technique achieves real-time performance by propagating volumes containing light information to adjacent volumes. However, as the size of the geometry increases, performance degrades because the Reflective Shadow Map containing the light information is generated as a cube map during rendering. Although the Reflective Shadow Map can be replaced with texture types other than a cube map to reduce this bottleneck, distortion occurs in the nonlinear projection transformation of those textures. In this study, the Reflective Shadow Map is generated as a dual paraboloid map to reduce the bottleneck, and the distortion arising in the paraboloid map transformation is corrected using fixed-point-iteration-based backward warping.
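As a rough illustration of the two ingredients above, a single front paraboloid projection and a generic fixed-point-iteration backward warp can be sketched as follows (function names and the toy warp are ours, not the paper's):

```python
def paraboloid_project(d):
    """Project a unit direction (dx, dy, dz) with dz >= 0 onto the front
    paraboloid map; returns 2D map coordinates in [-1, 1]^2."""
    dx, dy, dz = d
    return (dx / (1.0 + dz), dy / (1.0 + dz))

def backward_warp(target, warp, iters=8):
    """Fixed-point iteration that inverts a forward warp: find p such
    that warp(p) == target, starting from the target position itself."""
    p = target
    for _ in range(iters):
        wx, wy = warp(p)
        p = (p[0] + (target[0] - wx), p[1] + (target[1] - wy))
    return p
```

The iteration converges quickly when the warp is close to the identity, which is the regime where paraboloid-map distortion correction operates.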

Real-Time Depth-of-Field Rendering Using Depth Range Shift and Compression

Jeseon Lee, Sungkil Lee

http://doi.org/10.5626/JOK.2019.46.11.1106

In computer graphics, many post-processing methods have been studied to approximate depth-of-field rendering in real time. Multi-layer rendering eliminates problems seen in single-layer methods, such as intensity leakage and depth discontinuity. However, it introduces artifacts of its own, such as boundary discontinuities that expose blurred occluded pixels. GPU-based pyramidal image processing removes the boundary discontinuity in real time, but the focus area becomes blurred because color from blurred layers flows into the focused layer. We propose a technique that eliminates these blurred-focus-area artifacts in a multi-layer method with pyramidal image processing, together with a blurring technique that narrows the gap from physically based depth-of-field blur. The blurred-focus-area artifacts are eliminated by shifting and compressing the depth range of the object, and making the degree of blurring proportional to the circle of confusion improves quality by reflecting the physical characteristics of the camera.
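The physically based blur the method aims to match is governed by the thin-lens circle of confusion; a minimal sketch (parameter names and the pixel-space conversion are ours, not necessarily the paper's):

```python
def circle_of_confusion(z, z_focus, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter for an object at depth z
    when the lens is focused at z_focus (all lengths in the same unit)."""
    return aperture * focal_len * abs(z - z_focus) / (z * (z_focus - focal_len))

def blur_radius_px(z, z_focus, focal_len, aperture, sensor_w, image_w):
    """Convert the CoC diameter on the sensor to a pixel-space blur radius."""
    coc = circle_of_confusion(z, z_focus, focal_len, aperture)
    return 0.5 * coc * image_w / sensor_w
```

Objects at the focus depth get a zero radius, and the radius grows with distance from the focal plane, which is the proportionality the paper exploits.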

Component-based Software Architecture Design Method for Defense Software

Sungwon Lee, Jonghwan Shin, Taehyung Kim

http://doi.org/10.5626/JOK.2019.46.11.1113

Component-based software engineering is widely used in a variety of embedded software developments. However, most component-based methodologies have limitations in coping with the software configuration structure mandated by Korean regulations for weapon system software development. The regulated configuration structure assumes that development is based on an object-oriented language and attempts to present different perspectives in a single diagram. In this paper, we propose a component-based software architecture design method for defense software that can also be used in development with non-object-oriented languages. The proposed method composes the software configuration structure and the required documentation artifacts, such as diagrams, through a design process. To aid comprehension of each step of the proposed method, real samples from ongoing projects are presented.

Construction of Test Environments based on Information Extracted from Test Plan and IUT

Dong Hun Song, Yongjin Seo, Hyeon Soo Kim, Nak-Jung Choi, YoungKeun Go, Chumsu Kim

http://doi.org/10.5626/JOK.2019.46.11.1122

Software testing is a way to increase the reliability of software, and quality tends to improve as more systematic tests are performed. In reality, however, the resources allocated to testing are limited, constructing the test environment requires expertise, and the test execution environment is often constrained. In this paper, we define the components needed to construct a test environment and a method for deriving them from the test plan and the IUT (implementation under test). We also propose a construction procedure that realizes the test environment in a virtual environment by combining these components. Since the test environment can be provided simply by preparing IUTs and test plans, this method lets users concentrate on conducting tests rather than on building the environment, contributing to improved software reliability.

A Study on Improvement of Scoring in Programming Practice Questions Using Concolic Testing Technique

Kangbok Seo, Sunghee Lee, Deokyeop Kim, Woojin Lee

http://doi.org/10.5626/JOK.2019.46.11.1133

Recently, as programming education has attracted increasing interest, effective programming education has been actively studied. Various automatic scoring systems have been developed and are currently used in programming education, but these systems still require the instructor to write certain content, such as the test cases used for scoring. Such test cases should not simply use correct inputs; they should also account for the various code that students might add to their submissions. When the instructor writes the test cases by hand, mistakes can lead to insufficient or incorrect test cases and ultimately to incorrect scoring. To solve these problems, this paper proposes a tool that improves the test cases used in existing scoring by applying concolic testing to the source code submitted by students. Using the proposed tool, we found a case in which the scoring was incorrect, added an improved test case, and performed the scoring again.

News Stream Summarization for an Event based on Timeline

Ian Jung, Su Jeong Choi, Seyoung Park

http://doi.org/10.5626/JOK.2019.46.11.1140

This paper explores summarization of news streams, which are continuously produced and sequential in nature. Timeline-based summarization is widely adopted for news streams because a timeline can represent events sequentially. However, previous work relies on the collection times of news articles and thus cannot consider dates outside the collection period. In addition, previous work lacked consideration of conciseness, informativeness, and coherence. To address these problems, we propose a news stream summarization model with an expanded timeline. The model builds the expanded timeline from time points referenced in the given news articles and selects sentences that are concise, informative, and consistent with neighboring time points. First, we construct the expanded timeline by selecting dates from all time points identified in the news articles. Then, we extract summary sentences considering keyword-based informativeness at each time point, coherence between consecutive time points, and continuity of named entities, while excluding overly long sentences. Experimental results show that the proposed model generates higher-quality summaries than previous work.
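The first step, building the expanded timeline from time points mentioned inside the articles rather than from collection dates, might look like the following toy version (the paper's time-expression normalizer is not specified; this sketch recognizes only ISO-style YYYY-MM-DD mentions):

```python
import re
from datetime import date

def expanded_timeline(articles):
    """Collect the dates referenced *inside* article text and return
    them as a sorted, de-duplicated timeline of time points."""
    pat = re.compile(r"(\d{4})-(\d{2})-(\d{2})")
    points = set()
    for text in articles:
        for y, m, d in pat.findall(text):
            points.add(date(int(y), int(m), int(d)))
    return sorted(points)
```

A real system would also resolve relative expressions ("last Tuesday") against the article's publication date, which this sketch omits.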

An Approach for Recognition of Elderly Living Patterns Based on Event Calculus Using Percept Sequence

Hyun-Kyu Park, Young-Tack Park

http://doi.org/10.5626/JOK.2019.46.11.1149

This paper proposes a method for recognizing the intentions behind human activities based on percept sequences that represent activities of daily living (ADL) in a residential space. Based on an activity-intention ontology, which represents the actions and poses related to human activity intentions, the proposed method identifies the intention of a human activity using event calculus when a percept sequence is entered. From the identified intentions, frequency and pattern analysis is used to characterize the lifestyle patterns of the elderly. The intentions of everyday behavior in an elderly living space are complex, which makes it difficult both to recognize the intention of a complex occurrence and to recognize the pattern of life behind it. To solve these problems, this paper constructs an ontology of percept sequences expressed as daily behavioral information and infers activity intentions from it based on event calculus. In evaluation, the activity intention recognition experiment on the recorded perceptual information showed 84% precision and 85% recall.
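A toy fragment of the event-calculus reasoning involved, checking whether a fluent holds at a time point given initiating and terminating events, could be sketched as follows (the names and the drastic simplifications, such as ignoring initial state, are ours):

```python
def holds_at(fluent, t, events, initiates, terminates):
    """Minimal event-calculus check: `fluent` holds at time `t` iff some
    earlier event initiated it and no later event before `t` terminated
    it (no initial state, no continuous change -- a toy fragment only)."""
    state = False
    for time, ev in sorted(events):
        if time >= t:
            break
        if fluent in initiates.get(ev, ()):
            state = True
        if fluent in terminates.get(ev, ()):
            state = False
    return state
```

In the paper's setting, the initiating and terminating events would come from the percept-sequence ontology rather than hand-written tables.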

OpenViBE2Unity: Open Source API for Brain-Computer Interface and Unity3D Application

Sooyong Kim, Seongjoon Jeong, Eunmin Lee, Sunghan Lee, Sung Chan Jun, Minkyu Ahn

http://doi.org/10.5626/JOK.2019.46.11.1157

Brain-computer interface (BCI), which can quantify a person's intention, cognition, and feelings, is an active research field. However, building a system that performs well is not a simple task, as it requires data acquisition, real-time processing, and multimodal feedback, and often demands considerable time and effort, particularly from untrained users and researchers. Thus, it is important to have a well-designed interface that works with open platforms such as Unity3D and OpenViBE, which are widely used in BCI development and application. With this goal, we developed OpenViBE2Unity (O2U), an Application Programming Interface that can easily be used to develop BCI applications. This open API (available through GitHub) provides functions that facilitate communication between the two popular platforms, OpenViBE and Unity3D. In this paper, we introduce O2U's architecture and a step-by-step procedure for its application. Finally, we demonstrate one exemplary application developed with O2U.

ESS Operation Scheduling Scheme Using LSTM for Peak Demand Reduction

Yeongung Seo, Seungyoung Park, Myungjin Kim, Sungbin Lim

http://doi.org/10.5626/JOK.2019.46.11.1165

In recent years, blackouts have become more likely in South Korea as the peak demand has sharply increased. To address this issue, energy storage system (ESS) operation scheduling has been investigated for its ability to reduce the peak demand by utilizing the power stored in the ESS. If the power demand were known in advance, an optimal ESS operation schedule could be computed from both the power stored in the ESS and the future demand. However, the peak demand is difficult to predict because it occurs only in relatively short periods whose timing differs substantially from day to day, so an optimal scheduling technique requiring exact future demand information is very difficult to implement. Thus, in this paper, we propose an ESS operation scheduling method that reduces the peak demand using only historical power demands. Specifically, we employ a long short-term memory (LSTM) network, train it on historical power demands and their corresponding optimal ESS discharge powers, and apply the trained network to approximate optimal ESS operation scheduling. We show the validity of the proposed method through computer simulations using historical power demand data from four customers. In particular, the proposed scheme achieved up to about 82.42% of the yearly peak demand reduction of the optimal scheme, which is feasible only when exact future power demands are available.
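The optimal schedule used as the LSTM's training target, peak shaving when the demand profile is known, can be sketched as follows (a simplified model ignoring charging, round-trip efficiency, and state-of-charge dynamics; names are ours):

```python
def ess_discharge_schedule(demand, energy, max_power, iters=60):
    """Peak-shaving discharge schedule for a known demand profile:
    binary-search a demand cap T and discharge min(demand - T, max_power)
    in every interval above it, subject to the total stored energy."""
    lo, hi = 0.0, max(demand)
    for _ in range(iters):
        t = (lo + hi) / 2.0
        used = sum(min(max(d - t, 0.0), max_power) for d in demand)
        if used > energy:
            lo = t  # cap too low: needs more energy than is stored
        else:
            hi = t
    cap = hi
    return [min(max(d - cap, 0.0), max_power) for d in demand]
```

The LSTM in the paper learns to approximate such discharge profiles from historical demand alone, since the true future demand is unavailable at run time.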

Transition-based Korean Dependency Analysis System Using Semantic Abstraction

ChungSeon Jeong, JoonChoul Shin, JuSang Lee, CheolYoung Ock

http://doi.org/10.5626/JOK.2019.46.11.1174

Existing learning-based dependency parsing studies combine the lemma and the part-of-speech tag as learning features. The part-of-speech tag is suitable as a feature due to its high recall, but there is a limit to the dependency accuracy achievable with the tag alone. The lemma yields high dependency accuracy when it is recalled, but its recall is low compared to the part-of-speech tag. In this paper, we propose a transition-based dependency parsing method that uses semantic abstractions of nouns as features, obtained from a lexical semantic network (UWordMap), in order to increase the effective recall of the lemma. When semantic abstractions of lemmas are used as features, dependency parsing accuracy increases by up to 7.55% compared to using the lemma alone. Using word (eojeol), morpheme, and syllable unit features together with the semantic abstraction features, the method achieves 90.75% dependency parsing accuracy. With a training speed of 562 sentences per second and a parsing speed of 631 sentences per second, the proposed method is practical for real use.
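The core idea, augmenting lemma features with a semantic abstraction so that unseen nouns of the same class share features, can be illustrated with a toy lexicon (the entries and names below are hypothetical stand-ins for UWordMap lookups):

```python
# Hypothetical mini lexical-semantic network mapping nouns to hypernyms;
# UWordMap provides a far richer hierarchy in the actual system.
SEMANTIC_CLASS = {"apple": "fruit", "pear": "fruit",
                  "bus": "vehicle", "train": "vehicle"}

def features(lemma, pos):
    """Feature set combining POS, lemma, and the lemma's semantic
    abstraction, so that nouns of the same class share a feature even
    when the specific lemma was never seen in training."""
    feats = [("pos", pos), ("lemma", lemma)]
    cls = SEMANTIC_CLASS.get(lemma)
    if cls:
        feats.append(("sem", cls))
    return feats
```

A transition-based parser would score shift/reduce actions on such features; the shared ("sem", ...) feature is what raises recall over lemma-only features.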

Relation Extraction among Multiple Entities using Dual-Pointer Network

Seongsik Park, Harksoo Kim

http://doi.org/10.5626/JOK.2019.46.11.1186

Information extraction is the process of automatically extracting structured information from unstructured machine-readable texts. The rapid increase in large-scale unstructured text in recent years has led to many studies of information extraction. It consists of two sub-tasks: entity linking and relation extraction. Most previous studies of relation extraction have assumed that a single sentence contains a single entity-pair mention, focusing on extracting one entity pair (i.e., a subject-relation-object triple) per sentence. However, a sentence can contain multiple entity pairs. Therefore, in this paper, we propose a dual-pointer network model that can extract all possible entity pairs from a given text. In relation extraction experiments on two representative English datasets, NYT and ACE-2005, the proposed model achieved state-of-the-art performance with F1-scores of 0.8050 on ACE-2005 and 0.7834 on NYT.

Graph Convolutional Networks with Elaborate Neighborhood Selection

Yeonsung Jung, Joyce Jiyoung Whang

http://doi.org/10.5626/JOK.2019.46.11.1193

Graph Convolutional Networks (GCNs) use a convolutional structure to obtain effective node representations by aggregating information from neighborhoods. To achieve high performance, it is necessary to select neighborhoods that propagate important information to the target node and to learn appropriate filter values during training. Recent GCN algorithms adopt simple neighborhood selection methods, such as taking all 1-hop nodes. In that case, unnecessary information is propagated to the target node, degrading the performance of the model. In this paper, we propose a GCN algorithm that retains only valid neighborhoods by calculating the similarity between the target node and its neighbors.
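The elaborate neighborhood selection step can be sketched as filtering 1-hop neighbors by feature similarity before aggregation (a toy version using cosine similarity; the paper's exact similarity measure and threshold are not specified here):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_neighbors(node, adj, features, threshold=0.5):
    """Keep only the 1-hop neighbors whose features are similar enough
    to the target node, instead of aggregating all of them."""
    return [n for n in adj[node]
            if cosine(features[node], features[n]) >= threshold]
```

The GCN layer would then aggregate only over the selected neighbors, so dissimilar nodes cannot dilute the target node's representation.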

Performance Analysis of Radio Link for Mobile xHaul Network

Seungkwon Baek, Seokki Kim, Kijun Han

http://doi.org/10.5626/JOK.2019.46.11.1199

Due to the exponential growth of mobile data and the densification of base stations, various types of transport networks, such as fronthaul, midhaul, and backhaul, must be newly installed to enhance mobile communication networks. However, since existing wired-line solutions have not been profitable for mobile operators in proportion to data volume growth, a new concept and technology are needed that support flexible deployment of the radio access network and improve operators' cost efficiency. In this paper, we design a network architecture and a transmission scheme for the xHaul link as a unified wireless transport solution for the MXN (Mobile xHaul Network), which improves installation flexibility and network mobility through mobile multi-hop relay technology. We then evaluate the performance of the xHaul link in various deployment scenarios using a system-level simulator. The performance evaluation verifies that the xHaul link can provide up to 20 Gbps of capacity and that the MXN can meet the performance requirements of 5G mobile services.

Malware Variants Detection based on Dhash

Hongbi Kim, Hyunseok Shin, Junho Hwang, Taejin Lee

http://doi.org/10.5626/JOK.2019.46.11.1207

Malicious code is becoming more intelligent due to the popularization of malware generation tools and obfuscation techniques, and existing detection techniques fail to catch much of it. Considering that many newly emerging malicious codes are variants of existing ones and thus have binary data similar to the originals, this paper presents a Dhash-based malware detection technique that hashes images generated from the binary data in a file, along with a 10-gram algorithm that shortens the long analysis time caused by the full pairwise comparison of the Dhash algorithm. A comparison with ssdeep, a strong technique for variant malware detection, shows that the Dhash-based approach detects variants that ssdeep misses, and detection speed experiments against the existing Dhash algorithm demonstrate the superiority of the algorithm proposed in this paper. Future work will continue to develop a variety of malware analysis technologies linked to other LSH-based detection techniques.
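The Dhash step itself, fingerprinting a file's binary data visualized as a small grayscale grid and comparing fingerprints by Hamming distance, can be sketched as follows (the binary-to-image and resize-to-9x8 preprocessing is omitted; names are ours):

```python
def dhash_bits(pixels):
    """Difference hash of a 2D grayscale grid with 9 columns x 8 rows:
    each bit records whether a pixel is brighter than its right-hand
    neighbor, yielding a 64-bit fingerprint."""
    bits = 0
    for row in pixels:
        for x in range(len(row) - 1):
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing fingerprint bits; a small distance between
    two files' hashes suggests one is a variant of the other."""
    return bin(a ^ b).count("1")
```

Variant detection then amounts to flagging file pairs whose Hamming distance falls below a chosen threshold.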


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr