Digital Library [Search Result]
Adjusting OS Scheduler Parameters to Improve Server Application Performance
Taehyun Han, Hyeonmyeong Lee, Heeseung Jo
http://doi.org/10.5626/JOK.2020.47.7.643
Modern Linux runs on a wide range of machines, from large servers to small IoT devices, and most of them run their services on the default scheduler that Linux provides. Although the scheduler can be optimized for a specific purpose, the average user cannot tune it for every modern Linux application. In this paper, we propose SCHEDTUNE, which automatically optimizes the scheduler configuration to maximize Linux server performance. SCHEDTUNE allows users to improve performance without modifying the applications or the base kernel source running on the server, making it easy for administrators to configure a scheduler tailored to their servers. Experimental results show that applying SCHEDTUNE improves performance by up to 19%, and yields performance gains in most cases.
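SCHEDTUNE's actual search procedure is not detailed in the abstract; the sketch below only illustrates the general approach of tuning scheduler knobs from user space without touching the kernel: it tries a few candidate values for one CFS sysctl and keeps the value that minimizes the runtime of a benchmark command. The knob path applies to kernels that expose /proc/sys/kernel/sched_latency_ns (newer kernels move it to debugfs), and the candidate values and benchmark script are placeholders.

    /* Illustrative only: brute-force search over one scheduler knob.
     * Assumes /proc/sys/kernel/sched_latency_ns exists and that the
     * process has enough privilege to write it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static int write_knob(const char *path, long value)
    {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%ld\n", value);
        return fclose(f);
    }

    static double run_benchmark(const char *cmd)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (system(cmd) != 0)
            return -1.0;                     /* benchmark failed */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(int argc, char **argv)
    {
        const char *knob = "/proc/sys/kernel/sched_latency_ns";
        const long candidates[] = { 6000000, 12000000, 24000000 }; /* example values */
        const char *cmd = argc > 1 ? argv[1] : "./server_benchmark.sh"; /* hypothetical */
        long best = candidates[0];
        double best_time = 1e18;

        for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
            if (write_knob(knob, candidates[i]) != 0) {
                perror("write_knob");
                return 1;
            }
            double t = run_benchmark(cmd);
            printf("latency=%ldns -> %.3fs\n", candidates[i], t);
            if (t > 0 && t < best_time) {
                best_time = t;
                best = candidates[i];
            }
        }
        printf("best sched_latency_ns: %ld (%.3fs)\n", best, best_time);
        return write_knob(knob, best);
    }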
Boosting the Forwarding Performance of Virtual Switches through Kernel-level Memory Optimization
Heungsik Choi, Kyoungwoon Lee, Chuck Yoo
http://doi.org/10.5626/JOK.2018.45.6.511
A virtual switch enables network resources to be utilized by a wide range of virtual machines or containers. Many types of virtual switches have been developed to offer a variety of functions. However, due to the inefficient processing of existing virtual switches and the Linux networking stack, current high-bandwidth requirements cannot be met. To solve this problem, various studies have proposed using a dedicated user-level networking stack instead of the existing kernel stack. However, such approaches still suffer from problems such as reimplementation overhead, relatively weak security, and excessive memory usage. This paper proposes kernel-level optimization techniques that improve the network processing of the kernel networking stack while overcoming the limitations of existing techniques.
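The abstract does not name the specific memory optimizations, so the sketch below shows only a generic example of the kind of kernel-level technique that avoids per-packet allocation on a forwarding fast path: a preallocated, fixed-size packet-buffer pool reused across packets. It is an illustrative pattern, not the paper's implementation; a kernel version would additionally need per-CPU pools or locking.

    /* Illustrative pattern only: a preallocated packet-buffer pool that
     * avoids per-packet malloc/free on a forwarding fast path. */
    #include <stdlib.h>

    #define POOL_SIZE 1024
    #define BUF_SIZE  2048   /* room for a standard MTU frame plus headroom */

    struct pkt_buf {
        struct pkt_buf *next;
        size_t len;
        unsigned char data[BUF_SIZE];
    };

    static struct pkt_buf pool[POOL_SIZE];
    static struct pkt_buf *free_list;

    static void pool_init(void)
    {
        for (int i = 0; i < POOL_SIZE - 1; i++)
            pool[i].next = &pool[i + 1];
        pool[POOL_SIZE - 1].next = NULL;
        free_list = &pool[0];
    }

    static struct pkt_buf *buf_get(void)
    {
        struct pkt_buf *b = free_list;
        if (b)
            free_list = b->next;   /* pop without allocating */
        return b;
    }

    static void buf_put(struct pkt_buf *b)
    {
        b->next = free_list;       /* push back for reuse */
        free_list = b;
    }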
qtar: Design and Implementation of an Optimized tar Command with FTL-level Remapping
Jeongseok Ryoo, Sangwook Shane Hahn, Jihong Kim
http://doi.org/10.5626/JOK.2018.45.1.9
Tar is a Linux command that combines several files into a single file. Combining multiple small files into large files increases compression efficiency and data transfer speed. However, tar performs worse as the target files get smaller. In this paper, we show that this performance degradation occurs when tar reads the data from the target files, and we propose qtar (quick tar) to solve the problem via FTL-level remapping. When the size of an I/O request is less than 1 MB, I/O performance decreases in proportion to the request size. Since tar reads the data of the files one by one, smaller files result in lower performance. The remapping technique implemented in qtar therefore reads data from the target files at the maximum I/O size regardless of the size of each file. Our evaluations show that the execution time with qtar is reduced by up to 3.4x compared to that with tar.
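FTL-level remapping happens inside the storage device, so it cannot be reproduced here; the sketch below only illustrates the host-side read pattern qtar aims for, issuing fixed 1 MB requests over a region that is assumed to have already been remapped so the target files are contiguous. The device path and offsets are hypothetical.

    /* Illustrative read loop only: after an (assumed) FTL-level remapping has
     * made the target files contiguous, data is read in fixed 1 MB requests
     * regardless of individual file sizes. Paths and offsets are hypothetical. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define MAX_IO_SIZE (1024 * 1024)   /* 1 MB per request */

    int main(void)
    {
        /* hypothetical block device region covering the remapped files */
        int fd = open("/dev/sdb1", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        off_t  region_off = 0;                     /* hypothetical region start */
        size_t region_len = 64 * MAX_IO_SIZE;      /* hypothetical region length */
        char *buf = malloc(MAX_IO_SIZE);
        if (!buf) return 1;

        for (size_t done = 0; done < region_len; done += MAX_IO_SIZE) {
            ssize_t n = pread(fd, buf, MAX_IO_SIZE, region_off + done);
            if (n <= 0) { perror("pread"); break; }
            /* ...feed the buffer to the archiving/compression stage... */
        }
        free(buf);
        close(fd);
        return 0;
    }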
AIOPro: A Fully-Integrated Storage I/O Profiler for Android Smartphones
Sangwook Shane Hahn, Inhyuk Yee, Donguk Ryu, Jihong Kim
Application response time is critical to end users of Android smartphones. Given the plentiful resources of recent smartphones, storage I/O response time has become a major factor in application response time. However, existing storage I/O trace tools for Android and Linux provide limited information for a single I/O layer only, which makes it difficult to combine I/O information from different layers and is therefore not very helpful for application developers and researchers. In this paper, we propose a novel storage I/O trace tool for Android, called AIOPro (Android I/O profiler). It traces storage I/O across the application, Android platform, system call, virtual file system, native file system, page cache, block layer, SCSI layer, and device driver layers, and then combines the storage I/O information from these layers by linking records through file information and physical addresses. Our evaluations with real smartphone usage scenarios and benchmarks show that AIOPro can track storage I/O information from all layers without data loss and with less than 0.1% system overhead.
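The core idea of linking records from different layers can be illustrated with a small join keyed on file identity or physical sector; the structure and field names below are invented for this sketch and are not AIOPro's actual trace format.

    /* Illustration of the correlation idea only: records captured at
     * different layers are joined on (inode, file offset) or physical
     * sector. Field names are made up for this sketch. */
    #include <stdint.h>
    #include <stdio.h>

    struct layer_record {
        const char *layer;      /* e.g. "vfs", "page_cache", "block" */
        uint64_t    inode;      /* file identity, 0 if unknown at this layer */
        uint64_t    offset;     /* file offset in bytes */
        uint64_t    sector;     /* physical sector, 0 if unknown at this layer */
        uint64_t    ts_ns;      /* timestamp */
    };

    /* Two records belong to the same I/O if they agree on the file identity
     * or on the physical sector. */
    static int same_io(const struct layer_record *a, const struct layer_record *b)
    {
        if (a->inode && b->inode)
            return a->inode == b->inode && a->offset == b->offset;
        if (a->sector && b->sector)
            return a->sector == b->sector;
        return 0;
    }

    int main(void)
    {
        struct layer_record vfs   = { "vfs",        42, 4096, 0,      100 };
        struct layer_record cache = { "page_cache", 42, 4096, 123456, 140 };
        struct layer_record block = { "block",       0, 0,    123456, 180 };

        /* The page-cache record carries both keys, so it bridges the two. */
        if (same_io(&vfs, &cache) && same_io(&cache, &block))
            printf("latency across layers: %lu ns\n",
                   (unsigned long)(block.ts_ns - vfs.ts_ns));
        return 0;
    }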
Design and Implementation of a Linux-based Message Processor to Minimize the Response-time Delay of Non-real-time Messages in Multi-core Environments
Sangho Wang, Younghun Park, Sungyong Park, Seungchun Kim, Cheolhoe Kim, Sangjun Kim, Cheol Jin
A message processor is server software that receives from clients both non-real-time messages and real-time messages that must be processed within a deadline. With recent advances in microprocessor technologies and Linux, the message processor is often implemented on Linux-based multi-core servers, and it is important to use the cores efficiently to maximize system performance in multi-core environments. Numerous research efforts on real-time schedulers for the efficient utilization of multi-core environments have been conducted. Typically, though, they have been carried out theoretically or via simulation, making application to real systems difficult. Moreover, many Linux-based real-time schedulers can only be used with a specific Linux version or require modifications to the Linux source code. This paper presents the design of a Linux-based message processor for multi-core environments that maps threads to cores at user level. The message processor is implemented through a modification of the traditional RM algorithm that consolidates the real-time messages onto certain cores using a first-fit bin-packing algorithm; this minimizes the response-time delay of the non-real-time messages while still guaranteeing the violation rate of the real-time messages. To compare performance, the message processor was also implemented using the two multi-core scheduling algorithms GSN-EDF and P-FP provided by the LITMUS framework. The benchmarking results show that the response-time delay of non-real-time messages in the proposed system was improved by up to 17% to 18%.
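The consolidation step combines rate-monotonic (RM) scheduling with first-fit bin packing; the sketch below shows a minimal version of that packing, assigning each periodic handler to the first core whose utilization stays under the Liu and Layland RM bound n(2^(1/n) - 1). The task set and core count are illustrative, and the real system also has to handle the non-real-time messages left on the remaining cores.

    /* Minimal first-fit packing of periodic tasks onto cores under the
     * rate-monotonic utilization bound n * (2^(1/n) - 1). Illustrative only.
     * Build with: cc rm_pack.c -lm */
    #include <math.h>
    #include <stdio.h>

    #define NCORES 4

    struct rt_task { double wcet_ms, period_ms; };

    static double rm_bound(int n)
    {
        return n * (pow(2.0, 1.0 / n) - 1.0);
    }

    int main(void)
    {
        struct rt_task tasks[] = {   /* example task set */
            { 2.0, 10.0 }, { 5.0, 40.0 }, { 1.0, 5.0 }, { 8.0, 50.0 }, { 3.0, 20.0 }
        };
        int ntasks = sizeof(tasks) / sizeof(tasks[0]);
        double util[NCORES]  = { 0 };
        int    count[NCORES] = { 0 };

        for (int i = 0; i < ntasks; i++) {
            double u = tasks[i].wcet_ms / tasks[i].period_ms;
            int placed = -1;
            for (int c = 0; c < NCORES && placed < 0; c++) {
                /* first fit: take the first core that remains RM-schedulable */
                if (util[c] + u <= rm_bound(count[c] + 1)) {
                    util[c] += u;
                    count[c]++;
                    placed = c;
                }
            }
            printf("task %d (u=%.2f) -> core %d\n", i, u, placed);
        }
        /* Cores left at zero utilization remain free for non-real-time messages. */
        return 0;
    }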
A Function-characteristic Aware Thread-mapping Strategy for an SEDA-based Message Processor in Multi-core Environments
Heeeun Kang, Sungyong Park, Younjeong Lee, Seungbae Jee
A message processor is server software that receives various message formats from clients, creates the corresponding threads to process them, and delivers the results to their destinations. Considering that each function of an SEDA-based message processor has its own characteristics, such as being CPU-bound or IO-bound, this paper proposes a thread-mapping strategy called FC-TM (function-characteristic aware thread mapping) that schedules threads onto cores based on these function characteristics in multi-core environments. This paper assumes that the message processor's functions are static, in the sense that they are pre-defined when the message processor is built; therefore, we profile each function in advance and use this information to map each thread to a core so as to maximize throughput. The benchmarking results show that the throughput increased by up to 72% compared with previous studies when the ratio of IO-bound to CPU-bound functions exceeds a certain percentage.
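A minimal sketch of such a mapping step, under assumptions not taken from the paper: an offline profile labels each stage handler as CPU-bound or IO-bound, and each worker thread pins itself to a core set reserved for its class via pthread_setaffinity_np. The 2+2 core split and the profile table are made up for illustration.

    /* Illustrative thread-to-core mapping driven by a per-function profile.
     * Assumes a Linux host with at least 4 cores; the split of cores between
     * CPU-bound and IO-bound work is made up for this sketch. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    enum func_class { CPU_BOUND, IO_BOUND };

    /* offline profiling result, indexed by stage id (hypothetical) */
    static const enum func_class profile[] = { CPU_BOUND, IO_BOUND, CPU_BOUND };

    static void pin_to_class(pthread_t tid, enum func_class cls)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        if (cls == CPU_BOUND) {       /* cores 0-1 reserved for CPU-bound stages */
            CPU_SET(0, &set);
            CPU_SET(1, &set);
        } else {                      /* cores 2-3 reserved for IO-bound stages */
            CPU_SET(2, &set);
            CPU_SET(3, &set);
        }
        pthread_setaffinity_np(tid, sizeof(set), &set);
    }

    static void *stage_worker(void *arg)
    {
        int stage = *(int *)arg;
        pin_to_class(pthread_self(), profile[stage]);
        /* ...process messages queued for this stage... */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[3];
        int ids[3] = { 0, 1, 2 };
        for (int i = 0; i < 3; i++)
            pthread_create(&t[i], NULL, stage_worker, &ids[i]);
        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        return 0;
    }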
In-Memory File System Backed by Cloud Storage Services as Permanent Storages
Kyungjun Lee, Jiwon Kim, Sungtae Ryu, Hwansoo Han
As network technology advances, a larger number of devices are connected through the Internet. Cloud storage services have recently gained popularity, as they are convenient to access anytime and anywhere. Among cloud storage services, object storage is representative due to its low cost, high availability, and high durability. One limitation of object storage services is that data on the cloud can be accessed only through HTTP-based RESTful APIs. In our work, we resolve this limitation with an in-memory file system that provides a POSIX interface to file system users and communicates with cloud object storages through RESTful APIs. In particular, our flush mechanism is compatible with existing file systems, as it is based on the swap mechanism of the Linux kernel. Our in-memory file system backed by cloud storage reduces performance overheads and performs 57% better than S3QL in write operations. It also shows performance comparable to tmpfs in read operations.
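The flush path ends with an upload of in-memory data to the object store over its RESTful API; the sketch below shows just that step as an HTTP PUT of one block using libcurl. The object URL is hypothetical, and real services additionally require authentication headers.

    /* Illustrative flush step only: PUT one in-memory block to an object URL.
     * The URL is hypothetical; real object stores also need auth headers.
     * Build with: cc flush.c -lcurl */
    #include <curl/curl.h>
    #include <string.h>

    struct block { const char *data; size_t len, off; };

    static size_t read_cb(char *buf, size_t size, size_t nitems, void *userdata)
    {
        struct block *b = userdata;
        size_t want = size * nitems;
        size_t left = b->len - b->off;
        size_t n = want < left ? want : left;
        memcpy(buf, b->data + b->off, n);
        b->off += n;
        return n;
    }

    int flush_block(const char *url, const char *data, size_t len)
    {
        struct block b = { data, len, 0 };
        CURL *curl = curl_easy_init();
        if (!curl)
            return -1;
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);          /* HTTP PUT */
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
        curl_easy_setopt(curl, CURLOPT_READDATA, &b);
        curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)len);
        CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : -1;
    }

    int main(void)
    {
        const char *data = "hello, object storage";
        /* hypothetical object URL for one flushed block */
        return flush_block("https://objects.example.com/bucket/block-0001",
                           data, strlen(data));
    }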
Framework-assisted Selective Page Protection for Improving Interactivity of Linux Based Mobile Devices
Seungjune Kim, Jungho Kim, Seongsoo Hong
While Linux-based mobile devices such as smartphones are increasingly used, they often exhibit poor response times. One of the factors that influence user-perceived interactivity is the high page fault rate of interactive tasks. Pages owned by interactive tasks can be evicted from main memory due to memory contention between interactive and background tasks. Since this increases the page fault rate of the interactive tasks, their execution tends to suffer increased delays. This paper proposes a framework-assisted selective page protection mechanism for improving the interactivity of Linux-based mobile devices. Framework-assisted selective page protection enables the runtime system to identify interactive tasks at the framework level and to deliver their IDs to the kernel. As a result, the kernel can keep the pages owned by the identified interactive tasks in memory and avoid page faults. The experimental results demonstrate that the selective page protection technique reduces response time by up to 11% by reducing the page fault rate by 37%.
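The actual mechanism lives in the Android framework and the kernel; as a rough user-space analogy of the same goal of keeping an interactive task's pages resident under memory pressure, the sketch below pins the calling process's pages with mlockall. It is only an analogy: the paper's approach instead has the kernel protect pages for task IDs reported by the framework.

    /* User-space analogy only: keep this process's pages resident so they are
     * not reclaimed under memory pressure. The paper instead protects pages
     * inside the kernel for task IDs reported by the framework. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock all current and future mappings of this (interactive) process. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");   /* typically needs CAP_IPC_LOCK or a high rlimit */
            return 1;
        }
        /* ...run the latency-sensitive work without taking page faults... */
        return 0;
    }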
