Digital Library [Search Results]
Design and Implementation of a Concurrency Error Detection Method for Embedded Software Using Machine Learning
Dongeon Lee, Jiwon Kim, Junghun Jin, Kyutae Cho
http://doi.org/10.5626/JOK.2022.49.5.327
Unlike general-purpose software, embedded software is designed around hardware optimized for a specific purpose, so satisfying the target performance in a constrained environment is important. Embedded software has grown significantly in scale and complexity compared to the past, and as scale and complexity increase, the types of errors that occur in the software also diversify. In particular, many issues stem from concurrency errors that may occur between complex software modules. Detecting concurrency errors in embedded software has previously relied on manual input from the user. In this study, we propose a machine learning-based concurrency error detection tool (MCED) that uses SVM and deep learning.
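A minimal sketch of the kind of SVM-based classification an approach like this could build on, assuming hypothetical static features per code region (shared-variable accesses, lock acquisitions, and so on); the features, labels, and data below are invented for illustration and are not the feature set used by MCED.

```python
# Illustrative sketch only: trains an SVM to classify code regions as
# "potential concurrency error" vs. "safe" from hypothetical static features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical features per code region:
# [shared-variable accesses, lock acquisitions, thread-spawn sites, unprotected writes]
X = np.array([
    [12, 0, 2, 5],   # many unprotected shared writes -> labeled racy
    [ 3, 3, 1, 0],   # accesses guarded by locks      -> labeled safe
    [20, 1, 4, 9],
    [ 1, 1, 0, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = potential concurrency error, 0 = safe

# Scale features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Classify a new, unseen code region (expected to be flagged as 1).
print(clf.predict([[15, 0, 3, 7]]))
```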
An Automatic Parameter Optimizing Scheme for RocksDB
Jiwon Kim, Hyeonmyeong Lee, Sungmin Jung, Heeseung Jo
http://doi.org/10.5626/JOK.2021.48.11.1167
For users with a limited understanding of an application, optimizing a complex application is very difficult. Previous studies that tune one or two parameters can improve an application's performance, but single-parameter optimization cannot capture the relationships among multiple parameters. In this paper, we propose two techniques, LDH-Force and PF-LDH, that optimize several parameters at the same time. The LDH-Force technique efficiently reduces the number of searches by adding an LDH process while finding the optimal combination of several parameters. The PF-LDH technique further reduces the search cost by adding a filtering process, based on the observation that parameters affect performance to different degrees. Evaluation results confirm that the proposed scheme achieves a performance improvement of up to 42.55 times and finds the optimal parameter combination at the lowest search cost, without user intervention, under various workloads.
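A minimal sketch of multi-parameter search over RocksDB-style tuning knobs, assuming a hypothetical run_benchmark() stand-in for executing a workload with the given options; the exhaustive sweep below only illustrates the search problem and is not the LDH-Force or PF-LDH algorithm itself.

```python
# Illustrative sketch only: sweeps combinations of a few RocksDB-style
# tuning parameters and keeps the combination with the best score.
# run_benchmark() is a hypothetical placeholder for a real workload run.
from itertools import product

# Candidate values for a few commonly tuned RocksDB options (illustrative).
SEARCH_SPACE = {
    "write_buffer_size":       [16 << 20, 64 << 20, 256 << 20],
    "max_write_buffer_number": [2, 4, 8],
    "max_background_jobs":     [2, 4, 8],
}

def run_benchmark(params):
    # Hypothetical: launch the workload with these options and return ops/sec.
    # Replaced by a synthetic score so the sketch stays self-contained.
    return (params["write_buffer_size"] / (64 << 20)
            + params["max_write_buffer_number"] * 0.5
            + params["max_background_jobs"] * 0.3)

best_params, best_score = None, float("-inf")
for combo in product(*SEARCH_SPACE.values()):
    params = dict(zip(SEARCH_SPACE.keys(), combo))
    score = run_benchmark(params)
    if score > best_score:
        best_params, best_score = params, score

print("best combination:", best_params, "score:", round(best_score, 2))
```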
In-Memory File System Backed by Cloud Storage Services as Permanent Storages
Kyungjun Lee, Jiwon Kim, Sungtae Ryu, Hwansoo Han
As network technology advances, an increasing number of devices are connected through the Internet. Cloud storage services are gaining popularity, as they are convenient to access anytime and anywhere. Among cloud storage services, object storage is representative due to its low cost, high availability, and high durability. One limitation of object storage services is that data on the cloud can be accessed only through HTTP-based RESTful APIs. In our work, we resolve this limitation with an in-memory file system that provides a POSIX interface to file system users and communicates with cloud object storage through RESTful APIs. In particular, our flush mechanism is compatible with existing file systems, as it is based on the swap mechanism of the Linux kernel. Our in-memory file system backed by cloud storage reduces the performance overheads, outperforming S3QL by 57% in write operations and showing performance comparable to tmpfs in read operations.
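A minimal sketch of the RESTful traffic such a file system generates toward an object store, assuming a hypothetical S3-compatible endpoint and bucket; request signing and error recovery are omitted, and this is not the authors' flush mechanism.

```python
# Illustrative sketch only: the HTTP PUT/GET access pattern a cloud-backed
# in-memory file system could use to flush and fetch backing objects.
# The endpoint and bucket names are hypothetical; authentication is omitted.
import requests

ENDPOINT = "http://localhost:9000"   # hypothetical S3-compatible endpoint
BUCKET = "memfs-backing"             # hypothetical bucket name

def flush_object(key: str, data: bytes) -> None:
    """Write one backing object (e.g., evicted file data) to the store."""
    resp = requests.put(f"{ENDPOINT}/{BUCKET}/{key}", data=data)
    resp.raise_for_status()

def fetch_object(key: str) -> bytes:
    """Read a backing object back when the file data is needed again."""
    resp = requests.get(f"{ENDPOINT}/{BUCKET}/{key}")
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    flush_object("file42/chunk-0", b"hello from the in-memory file system")
    print(fetch_object("file42/chunk-0"))
```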
Performance Analysis of Cloud-Backed File Systems with Various Object Sizes
Jiwon Kim, Kyungjun Lee, Sungtae Ryu, Hwansoo Han
Recent cloud infrastructures provide competitive performance and operating costs for many Internet services through a pay-per-use model. Object storage, in particular, is highlighted because it offers practically unlimited capacity and allows users to access stored files anytime and anywhere. Several lines of research build on cloud-backed file systems, which support the traditional POSIX interface rather than HTTP-based RESTful APIs. However, these existing file systems handle all files with backing objects of a uniform size, so accesses to cloud object storage are likely to be inefficient. In our research, files are profiled according to their characteristics, and appropriate backing unit sizes are determined. We experimentally verify that using different backing unit sizes for the object storage improves the performance of cloud-backed file systems. In comparative experiments with S3QL, our prototype cloud-backed file system is 18.6% faster on average.
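A minimal sketch of choosing a backing-object unit size from a simple per-file profile and splitting file data into objects of that size; the thresholds and profile categories are hypothetical and do not reproduce the paper's profiling criteria.

```python
# Illustrative sketch only: picks a backing-object unit size from a simple
# file profile and chunks the file content accordingly. Thresholds are made up.
def pick_unit_size(file_size: int, access_pattern: str) -> int:
    if access_pattern == "sequential" and file_size >= 64 << 20:
        return 8 << 20     # large units for big, streamed files
    if access_pattern == "random":
        return 256 << 10   # small units to avoid fetching unused data
    return 1 << 20         # default 1 MiB backing unit

def split_into_objects(data: bytes, unit: int):
    """Yield (object_key_suffix, chunk) pairs for upload to the object store."""
    for i in range(0, len(data), unit):
        yield f"chunk-{i // unit}", data[i:i + unit]

data = bytes(10 << 20)  # a 10 MiB file, for example
unit = pick_unit_size(len(data), "random")
objects = list(split_into_objects(data, unit))
print(f"unit={unit} bytes, {len(objects)} backing objects")
```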
Mapping Cache for High-Performance Memory Mapped File I/O in Memory File Systems
Jiwon Kim, Jungsik Choi, Hwansoo Han
The desire to access data faster and the growth of next-generation memories such as non-volatile memories have driven research on memory file systems. Memory-mapped file I/O, which has less overhead than read-write I/O, is recommended for high-performance memory file systems. However, memory-mapped file I/O introduces page table overhead, which becomes one of the major costs in overall file I/O performance. We find that this overhead recurs unnecessarily, because the page table of a file is discarded whenever the file is closed and must be rebuilt when the file is opened again. To remove the duplicated overhead, we propose the mapping cache, a technique that does not delete the page table of a file when its mapping is released but saves it for reuse. We demonstrate that the mapping cache improves the performance of traditional file I/O by 2.8x and web server performance by 12%.
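A minimal user-space sketch of the caching idea, assuming a hypothetical helper that keeps memory mappings alive across logical open/close cycles; the paper's mapping cache preserves kernel page tables, so the dictionary of mmap objects below is only a user-level analogy, not the authors' mechanism.

```python
# Illustrative sketch only: user-space analogy of the mapping-cache idea.
# We keep mmap objects cached by path so a reopened file does not pay the
# mapping-setup cost again; the real technique caches kernel page tables.
import mmap
import os

_mapping_cache = {}  # path -> (fd, mmap object), kept after logical close

def mapped_read(path: str, offset: int, length: int) -> bytes:
    """Read via a cached memory mapping instead of read() syscalls."""
    if path not in _mapping_cache:
        fd = os.open(path, os.O_RDONLY)
        mm = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)
        _mapping_cache[path] = (fd, mm)
    _, mm = _mapping_cache[path]
    return mm[offset:offset + length]

def drop_mapping(path: str) -> None:
    """Explicit eviction, analogous to reclaiming a saved mapping."""
    fd, mm = _mapping_cache.pop(path)
    mm.close()
    os.close(fd)

if __name__ == "__main__":
    with open("/tmp/demo.txt", "wb") as f:
        f.write(b"mapping cache demo data")
    print(mapped_read("/tmp/demo.txt", 0, 13))  # first access sets up mapping
    print(mapped_read("/tmp/demo.txt", 14, 9))  # reuse: no remapping cost
    drop_mapping("/tmp/demo.txt")
```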