Inter-GuestOS Communications in Multicore-based ARM TrustZone

Moowoong Jeon, Sewon Kim, Hyuck Yoo

http://doi.org/

Virtualization based on ARM TrustZone has been drawing attention as a new approach to embedded virtualization. ARM TrustZone defines two virtual execution environments, the secure world and the normal world. In such an environment, inter-world communication is important for extending the functionality of software. However, current monitor software does not sufficiently support inter-world communication. This paper presents a new inter-guestOS communication scheme between the two worlds for ARM TrustZone virtualization. The proposed scheme supports bidirectional inter-world communication in both single-core and multi-core environments. It was implemented on an NVIDIA Tegra 3 processor based on the ARM Cortex-A9 MPCore and showed a bandwidth of 30 MB/s.

A Study on Selecting Key Opcodes for Malware Classification and Its Usefulness

Jeong Been Park, Kyung Soo Han, Tae Gune Kim, Eul Gyu Im

http://doi.org/

Recently, the number of new malware samples and malware variants has increased dramatically. As a result, the time required to analyze malware and the effort demanded of malware analysts have also increased. Malware classification therefore helps analysts reduce the overhead of malware analysis, and it is also useful for studying malware genealogy. In this paper, we propose a set of key opcodes for classifying malware. In our experiments, we selected the top 10 opcodes as key opcodes, and these key opcodes reduced the training time of a supervised learning algorithm by 91% while preserving classification accuracy.
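
As an illustration of the kind of pipeline the abstract describes, the following minimal Python sketch selects the k most frequent opcodes as key features and trains a supervised classifier on the reduced vectors. The classifier choice, data layout, and all names are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch: select top-k opcodes as features, then train a classifier.
    # Each sample is assumed to be (list_of_opcode_mnemonics, family_label).
    from collections import Counter
    from sklearn.ensemble import RandomForestClassifier

    def select_key_opcodes(samples, k=10):
        """Pick the k most frequent opcodes across all samples."""
        counts = Counter()
        for opcodes, _label in samples:
            counts.update(opcodes)
        return [op for op, _ in counts.most_common(k)]

    def to_feature_vector(opcodes, key_opcodes):
        """Frequency of each key opcode within one sample."""
        c = Counter(opcodes)
        return [c[op] for op in key_opcodes]

    def train(samples, k=10):
        key_opcodes = select_key_opcodes(samples, k)
        X = [to_feature_vector(ops, key_opcodes) for ops, _ in samples]
        y = [label for _, label in samples]
        clf = RandomForestClassifier(n_estimators=100).fit(X, y)
        return clf, key_opcodes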

A QoS-Aware Energy Optimization Technique for Smartphone GPUs

Dohan Kim, Wook Song, HyungHoon Kim, Jihong Kim

http://doi.org/

We propose a novel energy optimization technique for smartphone GPUs that lowers the GPU frequency more aggressively, achieving higher energy efficiency with a negligible impact on GPU performance. To meet the Quality of Service (QoS) specified by a smartphone application, the proposed technique selects the minimal acceptable GPU frequency based on the average frames per second (FPS) measured at each GPU frequency level. Our experimental results on a smartphone development board show that the proposed technique reduces GPU energy consumption by up to 23% over the default DVFS algorithm with a drop of only 0.45 FPS.
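
A minimal sketch of the selection step described above, assuming a profiled table of average FPS per GPU frequency level; the table values, names, and fallback policy are illustrative, not the authors' governor code.

    # Hypothetical sketch: choose the minimal GPU frequency whose profiled
    # average FPS still satisfies the application's QoS target.
    def minimal_acceptable_frequency(avg_fps_per_level, target_fps):
        """avg_fps_per_level: dict {frequency_hz: measured average FPS}."""
        acceptable = [f for f, fps in avg_fps_per_level.items() if fps >= target_fps]
        # Fall back to the highest frequency if no level meets the target.
        return min(acceptable) if acceptable else max(avg_fps_per_level)

    # Example: profiled FPS per frequency level and a 30-FPS QoS target.
    profile = {200_000_000: 21.0, 300_000_000: 29.5, 400_000_000: 33.2, 500_000_000: 41.8}
    print(minimal_acceptable_frequency(profile, 30.0))  # -> 400000000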

Effective Distributed Supercomputing Resource Management for Large Scale Scientific Applications

Seungwoo Rho, Jik-Soo Kim, Sangwan Kim, Seoyoung Kim, Soonwook Hwang

http://doi.org/

Nationwide supercomputing infrastructures in Korea consist of geographically distributed supercomputing clusters. We developed High-Throughput Computing as a Service (HTCaaS) on top of these distributed national supercomputing clusters to make it easier for scientists to explore large-scale and complex scientific problems. In this paper, we present our mechanism for dynamically managing computing resources and show its effectiveness through a case study of a real scientific application, drug repositioning. Specifically, we show that resource utilization, accuracy, reliability, and usability can be improved by applying our resource management mechanism, which uses waiting time and success rate to identify valid computing resources. The results show a reduction in total job completion time and an improvement in overall system throughput.
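
The resource-filtering idea, identifying valid clusters by their recent waiting time and success rate, could look roughly like the sketch below; the thresholds and field names are hypothetical.

    # Hypothetical sketch: keep only clusters whose recent job waiting time and
    # success rate suggest they are currently valid resources for dispatching jobs.
    def select_valid_clusters(clusters, max_wait_sec=600, min_success_rate=0.9):
        valid = []
        for c in clusters:
            rate = c["succeeded"] / max(c["submitted"], 1)
            if c["avg_wait_sec"] <= max_wait_sec and rate >= min_success_rate:
                valid.append(c["name"])
        return valid

    clusters = [
        {"name": "cluster-A", "avg_wait_sec": 120, "submitted": 500, "succeeded": 480},
        {"name": "cluster-B", "avg_wait_sec": 900, "submitted": 300, "succeeded": 295},
        {"name": "cluster-C", "avg_wait_sec": 200, "submitted": 400, "succeeded": 250},
    ]
    print(select_valid_clusters(clusters))  # -> ['cluster-A']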

Ontology-based Monitoring Approach for Efficient Power Management in Datacenters

Jungmin Lee, Jin Lee, Jungsun Kim

http://doi.org/

Recently, efficient power management in datacenters has gained prominence as a part of green computing. To realize efficient power management, effective power monitoring and analysis must be conducted for the servers in a datacenter. However, under previous monitoring approaches servers are usually managed using only databases, so an administrator must know the exact structure of the datacenter and its associated databases and must analyze the relationships among the observed data; moreover, data that are not recorded in the databases cannot be monitored. To overcome these drawbacks, we proposed an ontology-based monitoring approach. We constructed a domain ontology for the managed servers in a datacenter and mapped the observed data onto the constructed ontology using semantic annotation. Moreover, we defined query creation rules and server state rules. To demonstrate the proposed approach, we designed an ontology-based monitoring system architecture and constructed a knowledge system. Subsequently, we implemented a pilot system to verify its effectiveness.

Service Level Agreement Specification Model of Software and Its Mediation Mechanism for Cloud Service Broker

Taewoo Nam, Keunhyuk Yeom

http://doi.org/

A Service Level Agreement (SLA) is an essential element that must be guaranteed in order to provide reliable and consistent service to users in a cloud computing environment. In particular, an SLA-based contract between a user and a service provider is important in an environment that uses a cloud service brokerage. Cloud computing services are classified into IaaS, PaaS, and SaaS according to the IT resources they provide. Existing SLAs have difficulty reflecting the quality factors of software services, because they consider only the physical network environment and lack a methodological approach. In this paper, we suggested a method for specifying the quality characteristics of software and proposed a mechanism and architecture for exchanging SLA specifications between the service provider and the consumer. We defined a meta-model for SLA specification at the SaaS level, and the quality requirements of SaaS were described in the proposed specification language. Through case studies, we verified that the proposed specification language can express a variety of software quality factors. Using a UDDI-based mediation process and architecture for interchanging this specification, it is stored in a quality specification repository and exchanged at service binding time.
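
As a rough illustration of what a SaaS-level SLA specification might look like when expressed as structured data, the sketch below uses hypothetical field names and quality terms; it is a stand-in, not the paper's actual meta-model or specification language.

    # Hypothetical sketch of a SaaS-level SLA specification as structured data.
    from dataclasses import dataclass, field

    @dataclass
    class QualityTerm:
        name: str          # e.g., "availability", "responseTime"
        metric: str        # how the quality factor is measured
        operator: str      # ">=", "<=", "=="
        threshold: float
        unit: str

    @dataclass
    class SaaSSLA:
        service_name: str
        provider: str
        consumer: str
        terms: list = field(default_factory=list)

    sla = SaaSSLA("photo-editor-saas", "provider-X", "consumer-Y", terms=[
        QualityTerm("availability", "monthly uptime ratio", ">=", 99.9, "%"),
        QualityTerm("responseTime", "95th-percentile latency", "<=", 300.0, "ms"),
    ])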

Design and Implementation of a Hybrid Spatial Reasoning Algorithm

Sangha Nam, Incheol Kim

http://doi.org/

In order to answer questions successfully on behalf of a human contestant in DeepQA environments such as the American quiz show ‘Jeopardy!’, a computer needs to be capable of fast temporal and spatial reasoning over a large-scale commonsense knowledge base. In this paper, we present an efficient hybrid spatial reasoning algorithm for handling directional and topological relations. Because it combines forward and backward reasoning, our algorithm not only improves query processing time by reducing unnecessary reasoning computation but also deals effectively with changes to the spatial knowledge base. Through experiments on a sample spatial knowledge base using a reasoner that implements our algorithm, we demonstrate its high performance.
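
A toy sketch of the hybrid idea, assuming a single transitive directional relation: a bounded forward-chaining pass materializes some facts in advance, and remaining queries are answered backward on demand. The relation name, rule, and bound are illustrative only.

    # Hypothetical sketch of hybrid reasoning over a transitive directional relation.
    def forward_closure(facts, relation="eastOf", max_rounds=1):
        """Materialize a bounded number of transitivity rounds in advance."""
        derived = set(facts)
        for _ in range(max_rounds):
            new = {(relation, a, d)
                   for (r1, a, b) in derived if r1 == relation
                   for (r2, c, d) in derived if r2 == relation and b == c}
            if new <= derived:
                break
            derived |= new
        return derived

    def backward_query(kb, relation, x, y, visited=None):
        """Answer 'relation(x, y)?' on demand via backward chaining."""
        visited = visited or set()
        if (relation, x, y) in kb:
            return True
        for (r, a, b) in kb:
            if r == relation and a == x and b not in visited:
                if backward_query(kb, relation, b, y, visited | {b}):
                    return True
        return False

    facts = {("eastOf", "Busan", "Gwangju"), ("eastOf", "Gwangju", "Mokpo")}
    kb = forward_closure(facts)                            # forward step
    print(backward_query(kb, "eastOf", "Busan", "Mokpo"))  # backward step -> True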

Modeling of Visual Attention Probability for Stereoscopic Videos and 3D Effect Estimation Based on Visual Attention

Boeun Kim, Wonseok Song, Taejeong Kim

http://doi.org/

Viewers of videos are likely to absorb more information from the parts of the screen that attract visual attention. This fact has led to visual attention models that are used in producing and evaluating videos. In this paper, we investigate the factors that significantly influence visual attention and the mathematical form of a visual attention model. We then estimate the visual attention probability using the statistical design of experiments. Analysis of variance (ANOVA) verifies that motion velocity, distance from the screen, and the amount of defocus blur significantly affect human visual attention. Using response surface modeling (RSM), we create a visual attention score model that incorporates these three factors, from which we calculate the visual attention probabilities (VAPs) of image pixels. The VAPs are applied directly to an existing gradient-based 3D effect perception measurement; by weighting the measurement according to our VAPs, our algorithm achieves more accurate results than the existing method. The performance of the proposed measurement is assessed by comparing it with subjective evaluations as well as with existing methods, and the comparison verifies that the proposed measurement outperforms the existing ones.
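
A minimal sketch of the weighting step, assuming a per-pixel gradient-based measure and a VAP map are already available; the RSM score model itself is not reproduced here, and the array names are hypothetical.

    # Hypothetical sketch: weight a per-pixel gradient-based 3D effect measure
    # by visual attention probabilities (VAPs).
    import numpy as np

    def weighted_3d_effect_score(gradient_measure, vap_map):
        """gradient_measure, vap_map: 2D arrays of the same shape."""
        weights = vap_map / (vap_map.sum() + 1e-12)   # normalize VAPs to weights
        return float((gradient_measure * weights).sum())

    # Example with random stand-in data.
    grad = np.random.rand(480, 640)   # stand-in per-pixel 3D effect measure
    vap = np.random.rand(480, 640)    # stand-in visual attention probabilities
    print(weighted_3d_effect_score(grad, vap))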

Similarity Analysis and API Mapping with HLA and DDS for L-V-C Realization

Kunryun Cho, Giseop No, Chongkwon Kim

http://doi.org/

The rapid growth of network technology has given rise to high-tech weapon systems, and in modern warfare the ability to operate such weapons immediately is important. Acquiring this ability requires continuous training, but such training is very costly. To improve budget efficiency, Modeling and Simulation (M&S) is used; however, it seriously reduces realism. Recently, systems that can combine live and virtual simulation have been on the rise. A typical example is the L-V-C environment, and many kinds of middleware supporting the L-V-C environment have already been proposed. Existing middleware supports interoperability between different simulations, but it cannot fully interoperate the three simulation environments (Live, Virtual, and Constructive). In this paper, to solve this problem, we propose a scheme that combines different middlewares. We conduct API mapping between HLA and DDS, which are representative middlewares, and verify the proposed scheme.

Improving Recall for Context-Sensitive Spelling Correction Rules using Conditional Probability Model with Dynamic Window Sizes

Hyunsoo Choi, Hyukchul Kwon, Aesun Yoon

http://doi.org/

The types of errors corrected by a Korean spelling and grammar checker can be classified into isolated-term spelling errors and context-sensitive spelling errors (CSSEs). CSSEs are difficult to detect and correct because they are valid words when examined in isolation; they can be corrected only by considering the semantic and syntactic relations to their context. CSSEs, which are frequently made even by expert writers, significantly affect the reliability of spelling and grammar checkers. An existing Korean spelling and grammar checker developed by P University (KSGC 4.5) adopts hand-crafted correction rules for CSSEs. KSGC 4.5 is designed for very high precision, which results in extremely low recall. The overall goal of our previous work was to improve recall without considerably lowering precision by generalizing the CSSE correction rules, which mainly depend on linguistic knowledge. A variety of rule-based methods was proposed in that work, the best of which achieved an average precision of 95.19% and a recall of 37.56%. This study therefore proposes a statistics-based method using a conditional probability model with dynamic window sizes in order to further improve recall. The proposed method obtained an average precision of 97.23% and a recall of 50.50%.
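
A simplified sketch of scoring a correction candidate with a conditional probability over a context window that widens until enough evidence is found; the thresholds, count tables, and back-off policy are assumptions, not the paper's exact model.

    # Hypothetical sketch: conditional-probability scoring with a dynamic window.
    def candidate_score(candidate, context, cooc_counts, word_counts,
                        max_window=3, min_evidence=5):
        """context: neighboring words ordered by distance from the target word."""
        for w in range(1, max_window + 1):          # dynamic window size
            window_words = context[:w]
            evidence = sum(cooc_counts.get((candidate, c), 0) for c in window_words)
            if evidence >= min_evidence or w == max_window:
                denom = sum(word_counts.get(c, 0) for c in window_words) or 1
                return evidence / denom             # P(candidate | window context)
        return 0.0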

Timeline Tag Cloud Generation for Broadcasting Contents using Blog Postings

Jeong-Woo Son, Hwa-Suk Kim, Sun-Joong Kim, Keeseong Cho

http://doi.org/

Due to the recent increase in user-created content such as SNS posts and blog postings, broadcast content is being actively reconstructed by its users. In particular, for genres such as dramas and movies, a wide range of information about the content, from cars and filming locations to clothes and watches, is spread to other users through blog postings. Since such information can serve as supplementary information for the content, it can be used to provide high-quality broadcast services. For this purpose, we propose a timeline tag cloud generation method for broadcast content. In the proposed method, blog postings about the target content are first gathered; then, images and the words around them are extracted from each blog post as a tag set, and each extracted tag set is attached to a specific point on the timeline of the target content. In our experiments, we evaluate the performance of the proposed image matching and tag cloud generation methods to show the effectiveness of the proposed approach.
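
A rough sketch of the tag-set extraction and timeline attachment steps, with the paper's image matching step left as a placeholder callable; the element layout and names are hypothetical.

    # Hypothetical sketch: build tag sets (an image plus surrounding words) from a
    # blog post and attach each set to a point on the content timeline.
    def extract_tag_sets(post_elements, window=10):
        """post_elements: ordered list of ('img', url) or ('word', token) items."""
        tag_sets = []
        for i, (kind, value) in enumerate(post_elements):
            if kind == "img":
                nearby = [v for k, v in post_elements[max(0, i - window): i + window + 1]
                          if k == "word"]
                tag_sets.append({"image": value, "tags": nearby})
        return tag_sets

    def attach_to_timeline(tag_sets, match_image_to_time):
        """match_image_to_time: callable mapping an image to a timestamp in seconds,
        standing in for the image matching step evaluated in the paper."""
        timeline = {}
        for ts in tag_sets:
            t = match_image_to_time(ts["image"])
            if t is not None:
                timeline.setdefault(t, []).extend(ts["tags"])
        return timeline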

A Vanishing Point Detection Method Based on the Empirical Weighting of the Lines of Artificial Structures

Hang-Tae Kim, Wonseok Song, Hyuk Choi, Taejeong Kim

http://doi.org/

A vanishing point is a point toward which parallel lines appear to converge when a camera lens projects 3D space onto a 2D image plane. Vanishing point detection uses the information contained within an image to locate the vanishing point, and it can be utilized to infer the relative distance between points in the image or to understand the geometry of a 3D scene. Since artificial structures in images generally contain parallel lines, line-detection-based vanishing point detection techniques aim to find the point where the parallel lines of artificial structures converge. To detect parallel lines in an image, we detect edge pixels through edge detection and then find the lines by using the Hough transform. However, the various textures and noise in an image can hamper the line-detection process, so that not all of the lines converging toward the vanishing point are obvious. To overcome this difficulty, it is necessary to assign a different weight to each line according to how likely it is that the line passes through the vanishing point. While previous studies assigned equal weights or adopted a simple weighting calculation, in this paper we propose a new method of assigning weights to lines, based on the observation that the lines passing through vanishing points typically belong to artificial structures. Experimental results show that our proposed method reduces the vanishing point estimation error rate by 65% compared to existing methods.
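
A simplified sketch of the line-based pipeline (edge detection, Hough line detection, and a weighted vote over pairwise line intersections, here reduced to a weighted average); the paper's artificial-structure weighting is replaced by a simple length-based stand-in weight.

    # Hypothetical sketch: weighted voting over Hough-line intersections.
    import cv2
    import numpy as np

    def line_intersection(s1, s2):
        """Intersection of the infinite lines through two segments, if not parallel."""
        x1, y1, x2, y2 = s1
        x3, y3, x4, y4 = s2
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-9:
            return None
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
        return px, py

    def detect_vanishing_point(gray_image):
        edges = cv2.Canny(gray_image, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=40, maxLineGap=5)
        if lines is None:
            return None
        segs = [l[0] for l in lines]                                       # (x1, y1, x2, y2)
        weights = [np.hypot(x2 - x1, y2 - y1) for x1, y1, x2, y2 in segs]  # stand-in weight
        votes, total = np.zeros(2), 0.0
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                p = line_intersection(segs[i], segs[j])
                if p is not None:
                    w = weights[i] * weights[j]
                    votes += w * np.array(p)
                    total += w
        return tuple(votes / total) if total > 0 else None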

A Spatial Transformation Scheme Supporting Data Privacy and Query Integrity for Outsourced Databases

Hyeong-Il Kim, Young-Ho Song, Jaewoo Chang

http://doi.org/

Due to the popularity of location-based services, the amount of spatial data generated in daily life has been increasing dramatically. Therefore, spatial database outsourcing has become popular for data owners wishing to reduce spatial database management costs. The most important considerations in database outsourcing are meeting the privacy requirements and guaranteeing the integrity of the query result. However, most existing database transformation techniques do not support both data privacy and query result integrity. To solve this problem, we propose a spatial data transformation scheme that utilizes a shearing transformation with rotation shifting. In addition, we describe attack models for measuring the data privacy of database transformation schemes. Finally, we demonstrate through experimental evaluations that, compared with existing schemes, our scheme provides a high level of data protection against various attack models while guaranteeing the integrity of the query result sets.
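
A minimal sketch of transforming 2D points by a shearing matrix followed by a rotation, as a simplified stand-in for the shearing-with-rotation-shifting scheme; the parameter values play the role of the data owner's secret keys and are purely illustrative.

    # Hypothetical sketch: shear the coordinates, then rotate them; the data owner
    # keeps the parameters secret and can invert the mapping to recover originals.
    import numpy as np

    def transform_points(points, shear_x=0.7, shear_y=0.3, theta_deg=25.0):
        """points: (N, 2) array of (x, y) coordinates."""
        shear = np.array([[1.0, shear_x],
                          [shear_y, 1.0]])
        t = np.radians(theta_deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        return points @ shear.T @ rot.T          # shear first, then rotate

    def inverse_transform(points, shear_x=0.7, shear_y=0.3, theta_deg=25.0):
        """Inverse mapping used by the data owner to recover original coordinates."""
        shear = np.array([[1.0, shear_x], [shear_y, 1.0]])
        t = np.radians(theta_deg)
        rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        return points @ np.linalg.inv(rot).T @ np.linalg.inv(shear).T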

The YouTube Video Recommendation Algorithm using Users' Social Category

SoYeop Yoo, OkRan Jeong

http://doi.org/

With the rapid spread of the Internet and smartphones, YouTube has grown significantly as a social media sharing site and has become popular all around the world. As users share videos through YouTube, social data are created, and users look for video recommendations related to their interests. In this paper, we extract users' social categories from YouTube data based on their social relationships and a social category classification list. We propose a YouTube recommendation algorithm that uses the extracted social categories to provide more accurate and meaningful recommendations, and we present experimental results validating it.

An Energy-Aware Cooperative Communication Scheme for Wireless Multimedia Sensor Networks

Jeong-Oh Kim, Hyunduk Kim, Wonik Choi

http://doi.org/

Numerous clustering schemes have been proposed to increase energy efficiency in wireless sensor networks. Clustering schemes build a hierarchical structure in the sensor network to aggregate and transmit data. However, existing clustering schemes are not suitable for wireless multimedia sensor networks because they consume a large amount of energy and have extremely short lifetimes. To address this problem, we propose Energy-Aware Cooperative Communication (EACC), a novel cooperative clustering method that adapts systematically to various types of multimedia data, including images and video. A performance evaluation shows that the proposed method is up to 2.5 times more energy-efficient than existing clustering schemes.

A Study on Service-based Secure Anonymization for Data Utility Enhancement

Chikwang Hwang, Jongwon Choe, Choong Seon Hong

http://doi.org/

Personal information is information about a living individual that can identify that person by name, resident registration number, image, and so on. Personal information collected by institutions can be misused because it contains confidential information about the data subject. To prevent this, personally identifying elements are commonly removed before the data are distributed and shared. However, even when identifiers such as the name and resident registration number are removed or changed, personal information can still be exposed by a linking attack. This paper proposes a new anonymization technique that enhances data utility: attributes that are actually used by a service are anonymized at a lower level. In addition, the proposed technique can provide two or more anonymized data tables derived from one original data table without concern about linking attacks. We also verify our proposal using cooperative game theory.
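
A toy sketch of service-based anonymization levels: attributes used by the requesting service are generalized at a lower level than the rest, and direct identifiers are dropped. The generalization hierarchies and level values are illustrative assumptions.

    # Hypothetical sketch: per-attribute generalization level chosen by service usage.
    def generalize_age(age, level):
        if level == 0:
            return age                       # exact value
        width = 10 * level                   # 10-year, 20-year, ... ranges
        low = (age // width) * width
        return f"{low}-{low + width - 1}"

    def generalize_zipcode(zipcode, level):
        return zipcode[:max(0, 5 - level)] + "*" * min(5, level)

    GENERALIZERS = {"age": generalize_age, "zipcode": generalize_zipcode}

    def anonymize(records, service_attrs, low_level=1, high_level=3):
        out = []
        for rec in records:
            row = {}
            for attr, value in rec.items():
                if attr not in GENERALIZERS:
                    continue                 # drop direct identifiers such as names
                level = low_level if attr in service_attrs else high_level
                row[attr] = GENERALIZERS[attr](value, level)
            out.append(row)
        return out

    records = [{"name": "Alice", "age": 34, "zipcode": "06236"}]
    print(anonymize(records, service_attrs={"age"}))   # -> [{'age': '30-39', 'zipcode': '06***'}]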

