Digital Library: Search Results
Deadline Task Scheduling for Mitigating the CPU Performance Interference in Android Systems
Jeongwoong Lee, Taehyung Lee, Young Ik Eom
http://doi.org/10.5626/JOK.2020.47.1.11
In the Android Linux kernel, most tasks are expected to run fairly, and so time-sensitive applications can suffer processing delays. In particular, since users may experience inconvenience when delays occur in media data processing or biometric processing such as fingerprint recognition, tasks that must complete within a given time should be treated as deadline tasks. However, using the deadline scheduler in current Android systems can cause two problems. First, as deadline tasks enter the system and are executed, CPU energy consumption can increase. Second, the high priority of deadline tasks can degrade the performance of normal tasks. To mitigate these problems, this paper proposes a method for scheduling deadline tasks on Android systems that reduces the performance impact on normal tasks while trying to minimize energy consumption. Our evaluation on a CPU benchmark shows that the proposed method improves CPU performance by about 10% compared with the conventional deadline scheduler, without increasing power consumption, thanks to effective use of the CPU frequency.
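The deadline scheduler referred to here is Linux's SCHED_DEADLINE class, whose underlying policy is Earliest Deadline First (EDF). As a minimal illustrative sketch of that policy only (not the paper's proposed mechanism, and with hypothetical task names), EDF task selection can be written as:

```python
# Illustrative EDF run-queue sketch: the runnable task with the earliest
# absolute deadline is always picked first. Task names are hypothetical.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    deadline: int                    # absolute deadline (ms); only this is compared
    name: str = field(compare=False)

def pick_next(run_queue):
    """Return the runnable task with the earliest absolute deadline."""
    return heapq.heappop(run_queue)

rq = []
heapq.heappush(rq, Task(30, "fingerprint"))
heapq.heappush(rq, Task(16, "media-decode"))
heapq.heappush(rq, Task(100, "background"))

assert pick_next(rq).name == "media-decode"   # tightest deadline runs first
```

The abstract's second problem follows directly from this policy: while a deadline task holds the CPU, fair-class ("background") tasks are starved until its budget is exhausted.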
An NVM-based Efficient Write-Reduction Scheme for Block Device Driver Performance Improvement
http://doi.org/10.5626/JOK.2019.46.10.981
Recently, non-volatile memory (NVRAM) has attracted substantial attention as a next-generation storage device because it offers higher read/write performance than flash-based storage as well as higher cost-effectiveness than DRAM. One way to use NVRAM as a storage device is to modify the existing file system layer or block device layer. Leveraging an NVRAM block device driver is advantageous in terms of overall system compatibility, since it does not require any modification of the existing storage stack. However, given the byte-level addressability of NVRAM, whole-block writes are ineffective in terms of both durability and performance. In this paper, we propose a block device driver that optimizes the existing block write operations while preserving the existing functionalities of the file system. The proposed write-reduction scheme provides partial block writes by classifying block types according to the structure of the file system and by detecting the amount of modified data in each block using XOR operations. Several experiments were performed to validate the performance of the proposed block device driver under various workloads, and the results show that, compared with conventional block writes, the amount of written data is reduced by up to 90%.
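The XOR-based detection step can be sketched roughly as follows. This is an assumption-laden illustration, not the paper's driver: it assumes 4 KiB blocks and a byte-granularity NVRAM write primitive, and all names are made up.

```python
# Illustrative sketch: XOR the old and new block images to find the byte
# runs that actually changed, so only those runs are written to NVRAM
# instead of the whole 4 KiB block. Names are hypothetical.
BLOCK_SIZE = 4096

def diff_regions(old: bytes, new: bytes):
    """Yield (offset, data) runs where new differs from old."""
    xored = bytes(a ^ b for a, b in zip(old, new))
    i = 0
    while i < len(xored):
        if xored[i]:                      # nonzero XOR byte => modified byte
            j = i
            while j < len(xored) and xored[j]:
                j += 1
            yield (i, new[i:j])
            i = j
        else:
            i += 1

old = bytes(BLOCK_SIZE)
new = bytearray(BLOCK_SIZE)
new[100:104] = b"ABCD"                    # small in-place update to the block
writes = list(diff_regions(old, bytes(new)))
assert writes == [(100, b"ABCD")]         # 4 bytes written instead of 4096
```

For metadata-heavy workloads, where only a few bytes of a block change per update, this kind of partial write is what makes the reported up-to-90% reduction plausible.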
A Compression-based Data Consistency Mechanism for File Systems
Dong Hyun Kang, Sang-Won Lee, Young Ik Eom
http://doi.org/10.5626/JOK.2019.46.9.885
The data consistency mechanism is a crucial component of any file system; it prevents data corruption on system crashes or power failures. For the sake of performance, the default journal mode of the Ext4 file system guarantees only the consistency of metadata while compromising on the consistency of normal data; in particular, it does not guarantee full consistency of all the data in the file system. In this paper, we propose a new crash consistency scheme that guarantees the strong data consistency of the data journal mode while providing performance higher than or comparable to the weaker default journal mode of Ext4. By leveraging a compression mechanism, the proposed scheme can halve the amount of write operations as well as the number of fsync() system calls. For the performance evaluation, we modified the jbd2-related code and compared the proposed scheme with the two journaling modes of Ext4 on SSD and HDD. The results clearly confirm that the proposed scheme outperforms the default journal mode by up to 8.3x.
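The core idea, compressing a transaction's blocks before they reach the journal, can be sketched as below. This is a rough stand-in using zlib, not the paper's jbd2 modification:

```python
# Sketch: compress a journal transaction's dirty blocks into one smaller
# write, shrinking both the write volume and (by packing a transaction
# into fewer blocks) the flush work per commit. zlib is a stand-in.
import zlib

def journal_commit(blocks):
    """Pack a transaction's dirty blocks into a single compressed write."""
    raw = b"".join(blocks)
    packed = zlib.compress(raw, level=1)   # fast level, as a journal would want
    return raw, packed

blocks = [b"metadata" * 512, b"\x00" * 4096]   # typical compressible content
raw, packed = journal_commit(blocks)
assert len(packed) < len(raw) // 2             # write amount more than halved
```

File system metadata and journal descriptor blocks are highly redundant, which is why even a fast compressor level tends to shrink them substantially.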
I/O Completion Technique of Virtualized System Considering CPU Usage with High-Performance Storage Devices
Hyeji Lee, Taehyung Lee, Minho Lee, Yongju Song, Young Ik Eom
http://doi.org/10.5626/JOK.2019.46.7.612
Recently, the advent of high-performance storage devices such as Samsung Z-SSD and Intel Optane SSD has shifted the I/O performance bottleneck from the storage device to the software I/O layer. To optimize I/O performance on such devices, hypervisors and operating systems have focused on the effectiveness of polling, one of the I/O completion techniques applied in virtualized systems, and new variants such as hybrid and adaptive polling are being adopted. This paper reveals a problem in the existing adaptive polling technique provided by the QEMU-KVM hypervisor and proposes a new I/O completion technique that saves CPU usage while fully utilizing high-performance storage devices. Our evaluation indicates that, compared with conventional systems, the proposed technique reduces CPU usage by up to 39.7% while increasing I/O latency by no more than 5.3%.
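The hybrid polling idea the paper builds on can be sketched as follows; this is a generic illustration of the technique (sleep for part of the expected device latency, then busy-poll), not the paper's proposed scheme, and the parameters are invented:

```python
# Sketch of hybrid polling: yield the CPU for a fraction of the expected
# I/O latency, then busy-wait for the short remainder. This trades a
# little latency for a large cut in wasted polling cycles.
import time

def hybrid_poll(is_done, expected_latency_s, sleep_ratio=0.5):
    """Sleep through part of the expected latency, then spin until done."""
    time.sleep(expected_latency_s * sleep_ratio)   # CPU free for other tasks
    while not is_done():                           # short residual busy-wait
        pass

deadline = time.monotonic() + 0.002                # the "I/O" completes in ~2 ms
hybrid_poll(lambda: time.monotonic() >= deadline, expected_latency_s=0.002)
assert time.monotonic() >= deadline                # completion was observed
```

The hard part, which the paper targets, is choosing the sleep fraction adaptively: sleeping too long delays completions, while sleeping too little burns CPU, exactly the trade-off in the reported 39.7% CPU saving versus 5.3% latency cost.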
An Efficient SLC-buffer Management Scheme for TLC NAND Flash-based Storage
Kirock Kwon, Dong Hyun Kang, Young Ik Eom
http://doi.org/10.5626/JOK.2018.45.7.611
In recent years, almost all consumer devices have adopted NAND flash storage as their main storage, and their performance and capacity requirements keep growing. To meet these requirements, many researchers have focused on combined SLC-TLC storage, which consists of high-speed SLC and high-density TLC. In this paper, we redesign the internal structure of combined SLC-TLC storage to efficiently manage the SLC region inside the storage and propose a scheme that improves storage performance by exploiting the I/O characteristics of file system journaling. We implemented our scheme on a real storage platform, the OpenSSD Jasmine board, and compared it with conventional techniques. Our evaluation results show that our technique improves storage performance by up to 65% compared with the conventional techniques.
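One way to exploit journaling I/O characteristics, steering short-lived journal writes to the fast SLC region, can be sketched as below. This is a hypothetical illustration of the general idea only; the paper's actual FTL design on the Jasmine board is not shown here, and all names are invented:

```python
# Hypothetical sketch: journal blocks are small, hot, and soon invalidated,
# so an FTL can buffer them in the fast SLC region while bulk file data
# goes straight to high-density TLC.
class HybridFTL:
    def __init__(self):
        self.slc_writes = []   # fast, low-density region
        self.tlc_writes = []   # slow, high-density region

    def write(self, lba, data, is_journal):
        # Routing decision based on the I/O's role in journaling.
        (self.slc_writes if is_journal else self.tlc_writes).append((lba, data))

ftl = HybridFTL()
ftl.write(0, b"journal-commit", is_journal=True)
ftl.write(8, b"file-data", is_journal=False)
assert len(ftl.slc_writes) == 1 and len(ftl.tlc_writes) == 1
```

Because journal blocks are overwritten or discarded after checkpointing, keeping them out of TLC also reduces the migration and garbage-collection work inside the device.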
Streaming Compression Scheme for Reducing Network Resource Usage in Hadoop System
http://doi.org/10.5626/JOK.2018.45.6.516
Recently, the Hadoop system has become one of the most popular large-scale distributed systems in enterprises, and the amount of data on such systems has been increasing continually. As the amount of data grows, the scale of Hadoop clusters also grows. Resources within a node, such as processor, memory, and storage, are isolated from other nodes, so even if resource usage rises due to data processing requests from clients, it does not affect the performance of other nodes. However, all the nodes in a Hadoop cluster share the network, and if some nodes dominate this shared resource, other nodes receive less network bandwidth, which can degrade the overall performance of the Hadoop system. In this paper, we propose a streaming compression scheme that decreases the network traffic generated by write operations in the system. We also evaluate the performance of our streaming compression scheme and analyze its overhead. Our experimental results with a real-world workload show that the proposed scheme decreases the network traffic in a Hadoop cluster by 56% compared with the existing HDFS.
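The essence of streaming compression on a write path can be sketched with zlib's streaming API as a stand-in; this is not the paper's HDFS modification, only an illustration of compressing chunk-by-chunk so the whole file never needs to be buffered:

```python
# Sketch: compress each outgoing chunk as it is produced, so the data
# crossing the network (e.g., to replica DataNodes) shrinks without
# buffering the whole stream first.
import zlib

def stream_compress(chunks):
    comp = zlib.compressobj(1)              # fast compression level
    for chunk in chunks:                    # packet-by-packet, streaming
        out = comp.compress(chunk)
        if out:
            yield out
    yield comp.flush()                      # emit any buffered remainder

chunks = [b"log line 42\n" * 1000] * 8      # repetitive, HDFS-like payload
raw = b"".join(chunks)
sent = b"".join(stream_compress(chunks))
assert len(sent) < len(raw) // 2            # network traffic cut sharply
assert zlib.decompress(sent) == raw         # lossless on the receiving side
```

Since HDFS replicates each block to multiple DataNodes, every byte saved at the sender is saved several times over on the shared network.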
An Analysis of the Overhead of Multiple Buffer Pool Scheme on InnoDB-based Database Management Systems
Yongju Song, Minho Lee, Young Ik Eom
The advent of large-scale web services has resulted in a gradual increase in the amount of data those services handle. Such big data are managed efficiently by DBMSs such as MySQL and MariaDB, which use InnoDB as their storage engine, since InnoDB guarantees the ACID properties and is suitable for handling large-scale data. To improve I/O performance, InnoDB caches the data and indexes of its database in a buffer pool, and it supports multiple buffer pools to mitigate lock contention. However, the multiple buffer pool scheme introduces additional data consistency overhead. In this paper, we analyze the overhead of the multiple buffer pool scheme. Our experimental results show that, although the multiple buffer pool scheme mitigates lock contention by up to 46.3%, DBMS throughput is significantly degraded, by up to 50.6%, due to increased disk I/O and fsync calls.
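The partitioning idea behind multiple buffer pools can be sketched as below. This is a minimal illustration of the general technique (pages hashed to per-pool locks), not InnoDB's implementation, and the helper names are invented:

```python
# Sketch: pages are hashed to one of N buffer-pool instances, each with
# its own lock, so threads touching different pools never contend.
import threading

class BufferPools:
    def __init__(self, n_pools):
        self.pools = [dict() for _ in range(n_pools)]
        self.locks = [threading.Lock() for _ in range(n_pools)]

    def get_page(self, page_id, read_page):
        idx = hash(page_id) % len(self.pools)   # pool chosen by page id
        with self.locks[idx]:                   # contention limited to one pool
            if page_id not in self.pools[idx]:  # miss -> "disk" read
                self.pools[idx][page_id] = read_page(page_id)
            return self.pools[idx][page_id]

bp = BufferPools(n_pools=4)
reads = []
page = bp.get_page(7, lambda pid: (reads.append(pid), b"page-7")[1])
assert page == b"page-7" and reads == [7]       # first access misses
bp.get_page(7, lambda pid: reads.append(pid))
assert reads == [7]                             # second access hits the cache
```

The paper's finding is the flip side of this design: splitting one pool into N reduces lock waits, but each partition's smaller share of memory raises miss rates, and per-pool flushing multiplies disk I/O and fsync activity.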
Priority-based Hint Management Scheme for Improving Page Sharing Opportunity of Virtual Machines
Yeji Nam, Minho Lee, Dongwoo Lee, Young Ik Eom
Most data centers attempt to consolidate servers using virtualization technology to efficiently utilize limited physical resources. Moreover, virtualized systems have commonly adopted a contents-based page sharing mechanism for page deduplication among virtual machines (VMs). However, previous page sharing schemes are limited in that they cannot effectively manage the hints, which indicate sharable pages, accumulated in their stack. In this paper, we propose a priority-based hint management scheme that efficiently manages the accumulated hints, which are sent from guest to host, to improve the page sharing opportunity in virtualized systems. Experimental results show that, compared with the previous schemes, our scheme removes pages with low sharing potential by efficiently managing the accumulated hints.
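A priority-ordered hint structure of the kind described can be sketched as follows. This is an invented illustration of the general idea (replace the hint stack with a bounded priority queue), not the paper's scheme; the page names and priorities are hypothetical:

```python
# Sketch: keep sharing hints in a bounded max-priority queue instead of a
# stack, so the deduplication scanner visits the most promising pages
# first and low-potential hints are dropped when the queue overflows.
import heapq

class HintQueue:
    def __init__(self, capacity):
        self.heap = []                       # (-priority, page): max-heap
        self.capacity = capacity

    def push(self, page, priority):
        heapq.heappush(self.heap, (-priority, page))
        if len(self.heap) > self.capacity:   # evict the lowest-priority hint
            self.heap.remove(max(self.heap))
            heapq.heapify(self.heap)

    def pop(self):
        return heapq.heappop(self.heap)[1]   # most promising page first

q = HintQueue(capacity=2)
q.push("zero-page", priority=9)              # highly sharable across VMs
q.push("random-page", priority=1)
q.push("libc-page", priority=7)
assert q.pop() == "zero-page"                # the low-potential hint was evicted
```

With a plain stack, the scanner processes whatever arrived last; ordering by sharing potential spends the limited scan budget where deduplication is most likely to succeed.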
An Efficient Cleaning Scheme for File Defragmentation on Log-Structured File System
Jonggyu Park, Dong Hyun Kang, Euiseong Seo, Young Ik Eom
When many processes issue write operations alternately on a Log-structured File System (LFS), the created files can become fragmented at the file system layer even though LFS sequentially allocates new blocks for each process. Unfortunately, this file fragmentation degrades read performance because it increases the number of block I/Os. Moreover, read-ahead operations, which increase the amount of data requested at a time, exacerbate the performance degradation. In this paper, we propose a new cleaning method for LFS that minimizes file fragmentation. During the cleaning process, our method sorts valid data blocks by inode number before copying them to a new segment, which relocates fragmented blocks contiguously. In our experiments, the proposed cleaning method eliminates 60% of the file fragmentation present before cleaning and, consequently, improves sequential read throughput by 21% when read-ahead is applied.
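The sorting step during cleaning can be sketched as below. The data structures are illustrative stand-ins, not the paper's on-disk layout:

```python
# Sketch: when copying live blocks out of victim segments, sort them by
# (inode, offset) so each file's blocks land contiguously in the new
# segment, undoing the interleaving caused by alternating writers.
def clean_segments(victim_segments):
    """Gather live blocks from victim segments, grouped per file."""
    live = [blk for seg in victim_segments for blk in seg if blk["valid"]]
    live.sort(key=lambda b: (b["inode"], b["offset"]))   # defragment by file
    return [(b["inode"], b["offset"]) for b in live]

# Two processes wrote alternately, so inodes 1 and 2 are interleaved on disk.
seg = [{"inode": 1, "offset": 0, "valid": True},
       {"inode": 2, "offset": 0, "valid": True},
       {"inode": 1, "offset": 1, "valid": True},
       {"inode": 2, "offset": 1, "valid": False}]        # dead block is dropped
assert clean_segments([seg]) == [(1, 0), (1, 1), (2, 0)]
```

Because cleaning must copy the live blocks anyway, ordering them this way adds only a sort to work the cleaner already performs, which is what makes defragmentation nearly free here.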
Analyses of the Effect of System Environment on Filebench Benchmark
Yongju Song, Junghoon Kim, Dong Hyun Kang, Minho Lee, Young Ik Eom
In recent times, NAND flash memory has become widely used as secondary storage for computing devices. Accordingly, new file systems that take advantage of NAND flash memory have been actively studied and proposed. The performance of these file systems is generally measured with benchmark tools. However, since benchmark tools run in software on the system under test, many researchers obtain non-uniform benchmark results depending on the system environment. In this paper, we use Filebench, one of the most popular and representative benchmark tools, to analyze benchmark results and study why such result variations occur. Our experimental results show the differences in benchmark results across system environments. In addition, this study substantiates that system performance is affected mainly by background I/O requests and fsync operations.

Journal of KIISE
- ISSN : 2383-630X(Print)
- ISSN : 2383-6296(Electronic)
- KCI Accredited Journal
Editorial Office
- Tel. +82-2-588-9240
- Fax. +82-2-521-1352
- E-mail. chwoo@kiise.or.kr