Vol. 42, No. 6, Jun. 2015
File-System-Level SSD Caching for Improving Application Launch Time
Changhee Han, Junhee Ryu, Dongeun Lee, Kyungtae Kang, Heonshik Shin
Application launch time is an important performance metric for user experience in desktop and laptop environments, and it depends largely on the performance of secondary storage. Application launch times can be reduced by using a solid-state drive (SSD) instead of a hard disk drive (HDD). However, given the cost-performance trade-off, using SSDs as caches for slow HDDs is a practical alternative for reducing application launch times. We propose a new SSD caching scheme that migrates data blocks from HDDs to SSDs. Our scheme operates entirely at the file-system level and does not require the extra layer for mapping SSD-cached data that most other schemes depend on. In particular, our scheme does not incur the mapping overheads that place a significant burden on main memory, the CPU, and SSD space for the mapping table. Experimental results with 8 popular applications demonstrate that our scheme yields a 56% performance gain in application launch when data blocks are migrated along with their metadata.
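To make the block-selection step concrete, the following Python sketch illustrates one way launch-time access traces could drive migration of hot HDD blocks into an SSD cache; the function names, trace format, and selection policy are assumptions for illustration, not the authors' file-system-level implementation.

# Minimal sketch (not the authors' implementation): choosing which HDD blocks
# to migrate into an SSD cache based on blocks touched during application launches.
from collections import Counter

def select_blocks_to_migrate(launch_traces, ssd_capacity_blocks):
    """launch_traces: list of block-number sequences observed during launches."""
    freq = Counter()
    for trace in launch_traces:
        freq.update(trace)
    # Migrate the most frequently launch-accessed blocks that fit in the SSD cache.
    return [blk for blk, _ in freq.most_common(ssd_capacity_blocks)]

# Example: two recorded launches of the same application.
traces = [[10, 11, 12, 200, 201], [10, 11, 12, 201, 305]]
print(select_blocks_to_migrate(traces, ssd_capacity_blocks=4))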
A Post-Verification Method of Near-Duplicate Image Detection using SIFT Descriptor Binarization
In recent years, near-duplicate images have increased explosively with the spread of the Internet and image-editing technology that allows easy access to image content, and related research has been active. However, BoF (Bag-of-Features), the method most frequently used for near-duplicate image detection, can mistake identical features for different ones, or different features for identical ones, in the quantization step that approximates high-dimensional local features with low-dimensional codes. Therefore, a post-verification method for BoF is required to overcome this limitation of vector quantization. In this paper, we propose and analyze the performance of a post-verification method for BoF that converts SIFT (Scale-Invariant Feature Transform) descriptors into 128-bit binary codes and re-ranks a short candidate list produced by BoF according to the binary distance between the codes. An experiment using 1500 original images shows that the near-duplicate detection accuracy improved by approximately 4% over the previous BoF method.
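As a rough illustration of the post-verification step, the sketch below binarizes 128-dimensional SIFT descriptors by thresholding each dimension at its median and re-ranks a BoF short list by Hamming distance; the thresholding rule, data, and names are assumptions rather than the paper's exact procedure.

# Illustrative sketch: binarizing 128-dimensional SIFT descriptors into 128-bit
# codes by thresholding each dimension at its median, then re-ranking a BoF
# short list by Hamming distance. Thresholding rule and names are assumptions.
import numpy as np

def binarize(descriptors, thresholds):
    """descriptors: (n, 128) SIFT vectors -> (n, 128) boolean codes."""
    return descriptors > thresholds          # one bit per dimension

def hamming(a, b):
    return np.count_nonzero(a != b)

train = np.random.rand(1000, 128)            # stand-in for training descriptors
thresholds = np.median(train, axis=0)        # per-dimension threshold

query_code = binarize(np.random.rand(1, 128), thresholds)[0]
short_list = {img_id: binarize(np.random.rand(1, 128), thresholds)[0]
              for img_id in ["img_3", "img_7", "img_9"]}   # BoF-ranked candidates

# Re-rank the BoF short list by binary (Hamming) distance to the query code.
reranked = sorted(short_list, key=lambda i: hamming(query_code, short_list[i]))
print(reranked)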
A Function Level Static Offloading Scheme for Saving Energy of Mobile Devices in Mobile Cloud Computing
Hong Min, Jinman Jung, Junyoung Heo
Mobile cloud computing is a technology that uses cloud services to overcome the resource constraints of a mobile device, and it applies computation offloading to transfer part of a task from the mobile device to the cloud. If the communication cost of offloading is less than the cost of computing on the mobile device, the device delegates the task to the cloud. Previous cost-analysis models, which were used to separate the functions that run on the mobile device from the functions transferred to the cloud, considered only the amount of data transferred and the response time as the offloading cost. In this paper, we propose a new task-partitioning scheme that also considers the frequency of function calls and of data synchronization when estimating the offloading cost. We verify the energy efficiency of the proposed scheme through experimental results.
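The kind of per-function cost comparison described here can be sketched as follows; the cost terms, parameter names, and numbers are illustrative and not the paper's exact model.

# Hedged sketch of an offloading decision of the kind described above; the cost
# terms and weights are illustrative, not the paper's exact cost model.
def should_offload(cpu_cycles, cycles_per_joule_local,
                   call_freq, bytes_per_call, sync_bytes, joules_per_byte_tx):
    # Local cost: energy to execute the function on the device.
    local_energy = cpu_cycles / cycles_per_joule_local
    # Offload cost: energy to send arguments/results for every call plus
    # energy to keep shared data synchronized (the extra terms this paper adds).
    offload_energy = (call_freq * bytes_per_call * joules_per_byte_tx
                      + sync_bytes * joules_per_byte_tx)
    return offload_energy < local_energy

print(should_offload(5e9, 1e8, call_freq=20, bytes_per_call=2_000,
                     sync_bytes=50_000, joules_per_byte_tx=3e-6))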
Detection of an Open-Source Software Module based on Function-level Features
As open-source software (OSS) becomes more widely used, many users breach the terms of OSS license agreements or reuse vulnerable OSS modules. Therefore, a technique is needed to determine whether a binary program includes an OSS module. In this paper, we propose an efficient technique for detecting a particular OSS module in an executable program using its function-level features. Conventional methods are inappropriate for determining whether a module is contained in a specific program because they usually measure the similarity between whole programs. Our technique determines whether an executable program contains a certain OSS module by extracting features such as function-level instructions, the control flow graph, and the structural attributes of each function from both the program and the module, and then comparing the similarity of these features. To demonstrate the efficiency of the proposed technique, we evaluate it in terms of feature size, detection accuracy, execution overhead, and resilience to compiler optimizations.
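A minimal sketch of the matching idea follows: per-function features are extracted from both the target program and the OSS module and compared with a set-similarity measure; the feature choices, names, and threshold are assumptions for illustration only.

# Illustrative sketch (names and feature choices are assumptions): comparing
# per-function features extracted from a binary against those of an OSS module.
def function_features(instructions, cfg_edges):
    """Very rough function-level signature: opcode set, instruction count, CFG size."""
    return (frozenset(instructions), len(instructions), len(cfg_edges))

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 1.0

def module_contained(program_funcs, module_funcs, threshold=0.8):
    """Declare the module present if most of its functions match some program function."""
    matched = 0
    for m_ops, _m_len, _m_cfg in module_funcs:
        if any(jaccard(m_ops, p_ops) >= threshold for p_ops, _, _ in program_funcs):
            matched += 1
    return matched / len(module_funcs) >= threshold

prog = [function_features(["mov", "add", "ret"], [(0, 1)]),
        function_features(["push", "call", "ret"], [(0, 1), (1, 2)])]
mod = [function_features(["mov", "add", "ret"], [(0, 1)])]
print(module_contained(prog, mod))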
Hierarchically Encoded Multimedia-data Management System for Over The Top Service
OTT services that deliver multimedia video over the Internet have spread to terminals with a wide variety of resolutions. These terminals communicate over networks such as 3G, LTE, VDSL, and ADSL. As service is delivered across this variety of networks and terminals, the need for a new way of encoding multimedia is growing. SVC (Scalable Video Coding) is an encoding technique well suited to OTT services. We propose an efficient management system for SVC-encoded multimedia data. I/O traces were generated using a Zipf distribution, and the performance of the proposed system was compared with that of an existing system.
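The evaluation workload described above can be approximated with a short sketch that draws block accesses from a Zipf distribution; the block count, skew parameter, and trace length below are assumptions, not the paper's settings.

# Minimal sketch of generating a Zipf-distributed I/O trace over stored SVC
# blocks, as a stand-in for the evaluation workload (parameters assumed).
import numpy as np

rng = np.random.default_rng(0)
num_blocks = 1000                     # hypothetical number of stored SVC blocks
zipf_a = 1.2                          # Zipf skew parameter (assumption)

# Draw block ranks from a Zipf distribution and clip to the valid block range,
# so a few "hot" blocks (e.g., the SVC base layer) dominate the trace.
ranks = rng.zipf(zipf_a, size=10_000)
trace = np.clip(ranks, 1, num_blocks) - 1     # block indices 0..num_blocks-1

unique, counts = np.unique(trace, return_counts=True)
print("most accessed blocks:", unique[np.argsort(-counts)][:5])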
A Study on the Acceptance Factors of Healthcare Information Services Converged with Cognitive Computing
Young-Woo Pae, Jin-Sook Bong, Wonki Min, Yongtae Shin
The aging population and the advancement of science and technology are shifting the focus of the healthcare industry toward health management for the prevention of disease. U-health and remote healthcare services have not yet achieved social consensus domestically; however, they are used extensively on a global scale. Innovation in user experience through cognitive computing, converged with healthcare information services, is expected to improve health outcomes for consumers. This study suggests a conceptual model of a healthcare information service converged with cognitive computing and then investigates the acceptance factors for consumers. For this purpose, reliability and validity analyses were conducted using an online survey, and path analysis with structural equation modeling was performed to verify the hypotheses and the moderating effect of gender.
Parallel Gaussian Processes for Gait and Phase Analysis
This paper proposes a sequential state estimation model consisting of continuous and discrete variables, as a generalization of the all-discrete-state factorial HMM, and designs a gait motion model based on this idea. The discrete state variable implements a Markov chain that models the gait dynamics, and for each state of the Markov chain we create a Gaussian process over the space of the continuous variable. The Markov chain controls the switching among Gaussian processes, each of which models the rotation or various views of a gait state. A particle filter-based algorithm is then presented to provide an approximate filtering solution. Given an input vector sequence presented over time, it finds a trajectory that follows one Gaussian process and occasionally switches to another dynamically. Experimental results show that the proposed model can provide a very intuitive interpretation of video-based gait as a sequence of poses and a sequence of posture states.
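A highly simplified sketch of the switching mechanism is shown below: a particle filter tracks a discrete gait-state Markov chain, with each state's Gaussian observation model standing in for the per-state Gaussian process; all transition probabilities and parameters are illustrative.

# Simplified sketch of the switching idea: a particle filter over a discrete
# gait-state Markov chain, where each state's Gaussian observation model stands
# in for a Gaussian process. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
T = np.array([[0.9, 0.1],       # Markov chain over two gait states
              [0.1, 0.9]])
means = np.array([0.0, 3.0])    # per-state observation mean (GP stand-in)
sigma = 0.5

def filter_states(observations, n_particles=500):
    states = rng.integers(0, 2, size=n_particles)
    estimates = []
    for y in observations:
        # Propagate each particle's discrete state through the Markov chain.
        states = np.array([rng.choice(2, p=T[s]) for s in states])
        # Weight particles by the likelihood of y under the state's Gaussian model.
        w = np.exp(-0.5 * ((y - means[states]) / sigma) ** 2)
        w /= w.sum()
        # Resample and report the most probable state at this time step.
        states = rng.choice(states, size=n_particles, p=w)
        estimates.append(int(np.bincount(states, minlength=2).argmax()))
    return estimates

obs = [0.1, -0.2, 0.3, 2.9, 3.2, 2.8]
print(filter_states(obs))        # expected: mostly state 0, then state 1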
Estimating the Time to Fix Bugs Using Bug Reports
Kimun Kwon, Kwanghue Jin, Byungjeong Lee
Since fixing bugs is a large part of software development and maintenance, estimating the time to fix bugs (the bug-fixing time) is extremely useful when planning software projects. In this study, we therefore propose a way to estimate bug-fixing time using bug reports. First, we classify previous bug reports by their meta fields using a k-NN method. Next, we compute the similarity between the new bug and previous bugs using data from the bug reports. Finally, we estimate how long it will take to fix the new bug from the time it took to repair similar bugs. We perform experiments with open-source software, and the results show that our approach estimates the bug-fixing time effectively.
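The final estimation step can be sketched as follows: the k most similar previously fixed bugs are retrieved and their fixing times averaged; the similarity measure, field names, and data are assumptions rather than the study's exact formulation.

# Hedged sketch of the estimation step: find the k most similar previously
# fixed bugs and average their fixing times. Field names and the similarity
# measure are assumptions, not the paper's exact formulation.
def similarity(bug_a, bug_b):
    """Fraction of shared words between two bug-report summaries."""
    a, b = set(bug_a["summary"].split()), set(bug_b["summary"].split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def estimate_fix_time(new_bug, history, k=3):
    ranked = sorted(history, key=lambda h: similarity(new_bug, h), reverse=True)
    top = ranked[:k]
    return sum(h["fix_days"] for h in top) / len(top)

history = [{"summary": "crash on file open dialog", "fix_days": 4},
           {"summary": "memory leak in parser", "fix_days": 10},
           {"summary": "file open crash with long path", "fix_days": 6}]
new_bug = {"summary": "application crash when open file"}
print(estimate_fix_time(new_bug, history, k=2))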
A Method of Constructing Robust Descriptors Using Scale Space Derivatives
The need for effective image-handling methods such as image retrieval has been increasing with the rising production and consumption of multimedia data. In this paper, a method of constructing a more effective descriptor is proposed for robust keypoint-based image retrieval. The proposed method uses information embedded in the first-order and second-order derivative images, in addition to the scale-space image, for descriptor construction. The performance of the multi-image descriptor is evaluated in terms of keypoint matching on a public-domain image database that contains various image transformations. The proposed descriptor shows significant improvement in keypoint matching with only a minor increase in descriptor length.
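The image-side idea can be sketched briefly: first- and second-order derivative images are computed from a Gaussian-smoothed (scale-space) image, and a descriptor can then pool information from all of them; the smoothing scale, window, and pooling step below are assumptions.

# Minimal sketch: compute first- and second-order derivative images of a
# scale-space (Gaussian-smoothed) image, from which a keypoint descriptor could
# draw additional statistics. Scale, window, and pooling are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(64, 64)                 # stand-in for an input image
scale_img = gaussian_filter(image, sigma=1.6)  # scale-space image

dy, dx = np.gradient(scale_img)                # first-order derivatives
dxx = np.gradient(dx, axis=1)                  # second-order derivatives
dyy = np.gradient(dy, axis=0)

# A descriptor built around a keypoint could concatenate local statistics from
# each of these images instead of using the scale-space image alone.
patch = (slice(28, 36), slice(28, 36))
features = np.concatenate([img[patch].ravel()
                           for img in (scale_img, dx, dy, dxx, dyy)])
print(features.shape)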
A Smoothing Method for Digital Curve by Iterative Averaging with Controllable Error
Smoothing a digital curve by averaging its connected points is widely employed to suppress sharp changes in the curve that are generally introduced by noise. An appropriate degree of smoothing is critical: at a higher degree the area or features of the original shape can be distorted, while at a lower degree the noise is insufficiently removed. In this paper, we derive a mathematical relationship among the parameters involved, such as the number of iterations, the average distance between neighboring points, the weighting factors for averaging, and the distance a point on the curve moves after smoothing. Based on this relationship, we propose a way to control the smoothed curve so that its deviation is bounded by a specified error level, as well as to significantly expedite smoothing for a pixel-based digital curve.
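The basic operation under discussion, iterative weighted averaging of each point with its neighbors, can be sketched as follows; the weighting factor and iteration count are illustrative, and the paper's contribution lies in the relationship that bounds the resulting deviation.

# Minimal sketch of iterative weighted averaging of a digital curve: each point
# is averaged with its two neighbours for a number of iterations. Weight and
# iteration count are illustrative.
import numpy as np

def smooth_curve(points, weight=0.25, iterations=10, closed=True):
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        prev = np.roll(pts, 1, axis=0)
        nxt = np.roll(pts, -1, axis=0)
        new = (1 - 2 * weight) * pts + weight * (prev + nxt)
        if not closed:                 # keep the end points of an open curve fixed
            new[0], new[-1] = pts[0], pts[-1]
        pts = new
    return pts

noisy_circle = [(np.cos(t) + 0.05 * np.random.randn(),
                 np.sin(t) + 0.05 * np.random.randn())
                for t in np.linspace(0, 2 * np.pi, 50, endpoint=False)]
print(smooth_curve(noisy_circle)[:3])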
A MapReduce based Algorithm for Spatial Aggregation of Microblog Data in Spatial Social Analytics
Hyun Gu Cho, Pyoung Woo Yang, Ki Hyun Yoo, Kwang Woo Nam
In recent times, microblogs have become popular owing to the development of the Internet and mobile environments. Among the various types of microblog data, data containing location information are referred to as spatial social Web objects. A typical aggregation of such microblog data is per-user aggregation of a single piece of information. This study proposes a spatial aggregation algorithm that combines general aggregation with spatial data and uses Geohash and MapReduce operations to perform spatial social analysis on microblog data that have the characteristics of spatial social Web objects. The proposed algorithm provides a foundation for meaningful spatial social analysis.
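An illustrative map/reduce pair for the spatial aggregation idea follows: the map step keys each post by a Geohash cell and the reduce step aggregates per cell; the geohash stand-in, post format, and precision are assumptions, not the paper's implementation.

# Illustrative map/reduce pair (not the paper's exact algorithm): the map step
# keys each microblog post by a spatial cell and the reduce step aggregates per
# cell. The geohash encoder stand-in and post format are assumptions.
from collections import defaultdict

def encode_geohash(lat, lon, precision=5):
    """Hypothetical stand-in for a real Geohash encoder (e.g., a library call)."""
    return f"{round(lat, precision)}:{round(lon, precision)}"

def map_phase(posts, precision=5):
    for post in posts:
        yield encode_geohash(post["lat"], post["lon"], precision), 1

def reduce_phase(mapped):
    counts = defaultdict(int)
    for cell, value in mapped:
        counts[cell] += value
    return dict(counts)

posts = [{"lat": 37.5665, "lon": 126.9780, "text": "..."},
         {"lat": 37.5651, "lon": 126.9895, "text": "..."}]
print(reduce_phase(map_phase(posts, precision=2)))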
Contribution-Level-Based Opportunistic Flooding for Wireless Multihop Networks
Seung-gyu Byeon, Hyeong-yun Seo, Jong-deok Kim
In this paper, we propose contribution-level-based opportunistic flooding for wireless multihop networks, which achieves high transmission efficiency and reliability. Whereas a predetermined relay node may fail to receive broadcast packets because of the inherent instability of wireless links, the proposed flooding improves network reliability by applying the concept of opportunistic routing, in which relay-node selection depends on the actual transmission result. Additionally, based on each node's contribution level to the entire network, the proposed technique enhances transmission efficiency through priority adjustment and the removal of needless relay nodes. We use the NS-3 simulator to compare the proposed scheme with dominant pruning. The results show improved performance in both respects: transmission efficiency improves by 35% compared with blind flooding, and reliability improves by 20~70% compared with dominant pruning.
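One aspect of the scheme, ranking candidate relays by their contribution and dropping needless ones, can be sketched as follows; the topology format and the definition of contribution as the number of uncovered neighbors are assumptions for illustration.

# Hedged sketch of relay selection based on contribution level, taken here as
# the number of neighbours not yet covered; needless relays are removed and the
# rest are prioritised. Topology format is assumed.
def select_relays(neighbors_of, received_nodes, sender):
    covered = set(received_nodes) | {sender}
    candidates = []
    for node in received_nodes:
        contribution = len(set(neighbors_of[node]) - covered)
        if contribution > 0:                     # remove needless relay nodes
            candidates.append((contribution, node))
    # Higher contribution -> higher forwarding priority.
    return [n for _, n in sorted(candidates, reverse=True)]

topology = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"],
            "D": ["B", "E"], "E": ["D"]}
# A broadcast; B and C actually received it (opportunistic: based on the real outcome).
print(select_relays(topology, received_nodes=["B", "C"], sender="A"))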
A Re-configuration Scheme for Social Network Based Large-scale SMS Spam
Sihyun Jeong, Giseop Noh, Hayoung Oh, Chong-Kwon Kim
The Short Message Service (SMS) is one of the most popular communication tools in the world. As the cost of SMS decreases, SMS spam has grown rapidly. Even though there are many existing studies on SMS spam detection, researchers are commonly limited in collecting users' private SMS contents. They need to gather information about the social network as well as personal SMS data, because intelligent spammers are aware of social networks. Therefore, this paper proposes the Social network Building Scheme for SMS spam detection (SBSS), an algorithm that builds a realistic synthetic social-network dataset without collecting private information. We also analyze and categorize the attack types of SMS spam to build a more complete and realistic social-network dataset that includes SMS spam.
Efficient Packet Transmission Utilizing Vertical Handover in IoT Environment
The Internet of Things (IoT) has recently attracted much attention worldwide. The various kinds of devices communicating with each other in the IoT require multiple communication technologies to coexist. In this environment, mobile devices may use vertical handover between different wireless radio interfaces, such as Wi-Fi and Bluetooth, for efficient data transfer. In this paper, an IoT broker is implemented to support vertical handover and to manage heterogeneous devices and communication interfaces. Handover is activated based on RSSI, link-quality values, and real-time traffic. The experimental results show that the proposed handover system substantially improves QoS in Bluetooth and reduces power consumption in mobile devices compared with a system using only Wi-Fi.
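A minimal sketch of a handover trigger of the kind described above follows; the thresholds, field names, and decision rule are assumptions, not the values used by the paper's IoT broker.

# Minimal sketch of a vertical-handover trigger: switch from Wi-Fi to Bluetooth
# when the Wi-Fi signal degrades and traffic is light, and switch back when
# traffic grows. Thresholds and field names are assumptions.
def choose_interface(current, wifi_rssi_dbm, bt_link_quality, traffic_kbps,
                     rssi_threshold=-75, lq_threshold=180, light_traffic_kbps=100):
    if current == "wifi":
        # Hand over to Bluetooth to save power if Wi-Fi is weak or traffic is light.
        if wifi_rssi_dbm < rssi_threshold or traffic_kbps < light_traffic_kbps:
            if bt_link_quality >= lq_threshold:
                return "bluetooth"
    else:
        # Hand back to Wi-Fi when real-time traffic exceeds what Bluetooth handles well.
        if traffic_kbps >= light_traffic_kbps and wifi_rssi_dbm >= rssi_threshold:
            return "wifi"
    return current

print(choose_interface("wifi", wifi_rssi_dbm=-80, bt_link_quality=200, traffic_kbps=50))
print(choose_interface("bluetooth", wifi_rssi_dbm=-60, bt_link_quality=200, traffic_kbps=500))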
One-Class Classification Model Based on Lexical Information and Syntactic Patterns
Hyeon-gu Lee, Maengsik Choi, Harksoo Kim
Relation extraction is an important information extraction technique that can be widely used in areas such as question answering and knowledge-base population. Previous studies on relation extraction have been based on supervised machine-learning models that need a large amount of training data manually annotated with relation categories. Recently, distant supervision methods have been proposed to reduce the manual annotation effort of constructing training data. However, these methods have a drawback: it is difficult to use them to collect the negative training data that are necessary for resolving classification problems. To overcome this drawback, we propose a one-class classification model that can be trained without negative data. The proposed model determines whether an input data item belongs to the inner category by using a similarity measure based on lexical information and syntactic patterns in a vector space. In our experiments, the proposed model showed higher performance (an F1-score of 0.6509 and an accuracy of 0.6833) than a representative one-class classification model, the one-class SVM (Support Vector Machine).
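The decision rule can be sketched as follows: each instance is mapped to a vector of lexical and syntactic features and accepted into the inner category when its cosine similarity to the category centroid exceeds a threshold; the features, centroid rule, and threshold are illustrative rather than the paper's exact model.

# Hedged sketch of a one-class decision rule over lexical/syntactic features:
# accept an instance when its cosine similarity to the positive-category
# centroid exceeds a threshold. Features and threshold are illustrative.
import numpy as np

def to_vector(features, vocabulary):
    v = np.zeros(len(vocabulary))
    for f in features:
        if f in vocabulary:
            v[vocabulary[f]] += 1.0
    return v

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

positive = [["born_in", "nsubj:PER", "obl:LOC"], ["born_in", "nsubj:PER", "obl:GPE"]]
vocab = {f: i for i, f in enumerate(sorted({f for feats in positive for f in feats}))}
centroid = np.mean([to_vector(f, vocab) for f in positive], axis=0)

def in_category(features, threshold=0.6):
    return cosine(to_vector(features, vocab), centroid) >= threshold

print(in_category(["born_in", "nsubj:PER", "obl:LOC"]))   # likely accepted
print(in_category(["works_for", "nsubj:ORG"]))            # likely rejected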
Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal