Search : [ keyword: SSD ] (11)

Managing DISCARD Commands in F2FS File System for Improving Lifespan and Performance of SSD Devices

Jinwoong Kim, Donghyun Kang, Young Ik Eom

http://doi.org/10.5626/JOK.2024.51.8.669

The DISCARD command is an interface that helps improve the lifespan and performance of SSDs by informing the SSD device about invalid file system blocks. However, the F2FS file system sends DISCARD commands to the SSD only during idle time, which limits the potential improvement in lifespan and performance. In this paper, we propose the EPD scheme, which efficiently transfers DISCARD commands during short idle times, as well as a segment allocation scheme called PSA, which replaces DISCARD commands with overwrite commands. To evaluate the effectiveness of these schemes, we conducted several experiments with various workloads to measure the lifespan and performance of real SSD devices. The results showed that the proposed schemes can improve the write amplification factor (WAF) by up to 40% and throughput by up to 160% compared to the traditional F2FS file system.
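The core idea behind a PSA-style allocator can be sketched as a toy model (illustrative only, not the paper's implementation; all names are made up): instead of discarding every invalidated block during idle time, new writes are steered onto already-invalid blocks, so the overwrite itself makes a DISCARD unnecessary.

```python
# Toy model of overwrite-instead-of-discard allocation. Block states are
# free | valid | invalid; preferring invalid blocks for new writes means
# no DISCARD ever has to be issued for them.
class ToyAllocator:
    def __init__(self, num_blocks):
        self.state = ["free"] * num_blocks
        self.discards_issued = 0

    def invalidate(self, blk):
        self.state[blk] = "invalid"

    def write(self):
        # Prefer an invalid block: the overwrite replaces a DISCARD.
        for blk, s in enumerate(self.state):
            if s == "invalid":
                self.state[blk] = "valid"
                return blk
        for blk, s in enumerate(self.state):
            if s == "free":
                self.state[blk] = "valid"
                return blk
        raise RuntimeError("no space")

    def idle_discard(self):
        # Baseline behavior: discard invalid blocks during idle time.
        for blk, s in enumerate(self.state):
            if s == "invalid":
                self.state[blk] = "free"
                self.discards_issued += 1
```

In this toy model, a write that lands on an invalidated block consumes it without any DISCARD traffic, which is the trade the abstract describes.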

Addressing Write-Warm Pages in OLTP Workloads

Kyong-Shik Lee, Mijin An, Sang-Won Lee

http://doi.org/10.5626/JOK.2023.50.11.1002

One of the most important purposes of buffer management policies is to cache frequently accessed data in the buffer pool to minimize disk I/O. However, even if frequently referenced pages are effectively cached, a small number of pages can still cause excessive disk I/O. The cause is write-warm pages, which are repeatedly fetched into and evicted from the buffer pool. In this paper, we introduce the "(Write-)Warm Page Thrashing" problem and confirm the existence of write-warm pages. Specifically, we found that 10% of flushed pages accounted for 41% of writes. This can degrade performance, particularly on flash memory devices with slow write speeds. Therefore, a new buffer management policy is required to detect and prevent such thrashing.
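The thrashing symptom can be reproduced with a tiny LRU buffer-pool model (a sketch under assumed names, not the paper's detection mechanism): a page that is repeatedly dirtied and evicted accumulates a high flush count even though it is "warm".

```python
from collections import OrderedDict, Counter

# Toy LRU buffer pool that counts how often each dirty page is written
# back on eviction. High flush counts for a single page are the
# "write-warm page thrashing" symptom.
class ToyBufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pool = OrderedDict()      # page_id -> dirty flag
        self.flushes = Counter()

    def access(self, page_id, write=False):
        if page_id in self.pool:
            self.pool.move_to_end(page_id)
            self.pool[page_id] |= write
            return
        if len(self.pool) >= self.capacity:
            victim, dirty = self.pool.popitem(last=False)
            if dirty:
                self.flushes[victim] += 1   # write-back to storage
        self.pool[page_id] = write
```

Interleaving writes to one hot page with a scan of cold pages drives that page's flush count up, mirroring the 10%-of-pages / 41%-of-writes skew reported in the abstract.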

MQSim-E: Design and Implementation of an NVMe SSD Simulator for Enterprise SSDs

Duwon Hong, Dusol Lee, Jihong Kim

http://doi.org/10.5626/JOK.2022.49.4.271

In the study of storage systems such as SSDs, a simulator that accurately mimics the behavior of the software and hardware inside the system plays an important role. In this paper, we show that MQSim, which is widely used in research on NVMe SSDs, is inadequate for enterprise-SSD development, and we propose MQSim-E, a simulator that supports the optimization techniques adopted in enterprise SSDs. MQSim-E fully utilizes the parallelism of flash memory and minimizes the performance overhead of garbage collection, accurately reflecting the characteristics of commercial enterprise SSDs. Compared to the existing simulator (MQSim), it improves IOPS, an important design goal for enterprise SSDs, by up to 210% and reduces tail latency by up to 16,000%.

Sequentiality-Aware Hash-based FTL

Jaemin Shin, Ilbo Jeong, Li Xiaochang, Jihong Kim

http://doi.org/10.5626/JOK.2020.47.8.717

As the capacity of an SSD significantly increases, the SSD needs a larger DRAM for managing SSD-internal information. Since the cost of DRAM is an important factor in the overall SSD price, it is important to reduce the DRAM cost without degrading performance. In this paper, we propose SEQhFTL, a novel hash-based FTL mapping technique that meets this goal. Unlike existing hash-based schemes, our technique introduces a virtual block scheme that exploits the sequentiality of logical addresses, effectively reducing the garbage collection overhead. Experimental results showed that SEQhFTL can reduce this overhead as much as PFTL while maintaining only 39% of PFTL's metadata on average.
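The space saving from coarse-grained mapping can be illustrated with a toy FTL (a sketch with invented names and granularity; SEQhFTL's real design is more involved): an aligned sequential run of pages gets one virtual-block entry instead of one entry per page, while random writes fall back to per-page mapping.

```python
# Toy sequentiality-aware FTL: sequential, aligned runs of logical pages
# share a single coarse mapping entry; random writes use a per-page map.
PAGES_PER_VBLOCK = 4

class ToySeqFTL:
    def __init__(self):
        self.vblock_map = {}   # logical vblock -> physical start page
        self.page_map = {}     # fallback per-page map for random writes
        self.next_ppn = 0

    def write_seq(self, start_lpn, count):
        # A fully sequential, aligned run gets one coarse entry.
        assert start_lpn % PAGES_PER_VBLOCK == 0 and count == PAGES_PER_VBLOCK
        self.vblock_map[start_lpn // PAGES_PER_VBLOCK] = self.next_ppn
        self.next_ppn += count

    def write_random(self, lpn):
        self.page_map[lpn] = self.next_ppn
        self.next_ppn += 1

    def translate(self, lpn):
        vb = lpn // PAGES_PER_VBLOCK
        if vb in self.vblock_map:
            return self.vblock_map[vb] + lpn % PAGES_PER_VBLOCK
        return self.page_map[lpn]

    def map_entries(self):
        return len(self.vblock_map) + len(self.page_map)
```

With mostly sequential workloads, the mapping table shrinks by roughly the virtual-block factor, which is the DRAM saving the abstract targets.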

Host-Level I/O Scheduler for Achieving Performance Isolation with Open-Channel SSDs

Sooyun Lee, Kyuhwa Han, Dongkun Shin

http://doi.org/10.5626/JOK.2020.47.2.119

As Solid State Drives (SSDs) provide higher I/O performance and lower energy consumption than Hard Disk Drives (HDDs), SSDs are increasingly adopted in areas such as datacenters and cloud computing where multiple users share resources. Following this trend, growing research effort is being devoted to ensuring Quality of Service (QoS) in environments where resources are shared. The previously proposed Workload-Aware Budget Compensation (WA-BC) scheduler aims to ensure QoS among multiple Virtual Machines (VMs) sharing an NVMe SSD. However, the WA-BC scheduler has a weakness in that it misuses multi-stream SSDs for identifying workload characteristics. In this paper, we propose a new host-level I/O scheduler that complements this weakness of the WA-BC scheduler and aims to eliminate performance interference between users sharing an Open-Channel SSD. The proposed scheduler identifies workload characteristics by observing the sequentiality of I/O requests, without allocating separate SSD streams. Although the scheduler resides on the host, it can reflect the status of device internals by exploiting the characteristics of Open-Channel SSDs. We show that identifying the users who contribute more to garbage collection, a source of I/O interference within SSDs, and penalizing them helps achieve performance isolation among users sharing storage resources.
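Sequentiality of a request stream can be estimated with a few lines (an illustrative sketch; the threshold and function names here are assumptions, not the paper's classifier): count how often a request starts exactly where the previous one ended.

```python
# Toy sequentiality detector for (lba, length) request streams.
def sequentiality(requests):
    """Fraction of requests that start where the previous one ended."""
    if len(requests) < 2:
        return 1.0
    hits = 0
    for (prev_lba, prev_len), (lba, _) in zip(requests, requests[1:]):
        if lba == prev_lba + prev_len:
            hits += 1
    return hits / (len(requests) - 1)

def classify(requests, threshold=0.5):
    # Threshold is made up for illustration.
    return "sequential" if sequentiality(requests) >= threshold else "random"
```

Random writers are the ones that tend to trigger garbage collection, so a signal like this lets a host-level scheduler penalize them without needing per-user SSD streams.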

VNSIM: Virtual Machine based Multi-core SSD Simulator for supporting NVM Express

Jinsoo Yoo, Youjip Won

http://doi.org/10.5626/JOK.2018.45.5.427

Solid State Drives (SSDs) continue to improve in performance and capacity through the adoption of new host interfaces and the use of multi-channel/multi-way I/O parallelism with multi-core controllers. Designing and evaluating SSD architectures requires a new SSD simulator that supports the latest storage techniques. In this study, we develop an SSD simulator, the Virtual-machine based NVMe SSD SIMulator (VNSIM), which supports the latest host controller interface, NVM Express. VNSIM simulates the entire I/O stack, from applications to flash memories. Unlike existing SSD simulators, VNSIM provides an environment for simulating and evaluating SSD architectures with two or more Flash Translation Layer (FTL) cores running in the SSD. We also developed a flash I/O emulator that simulates the I/O performance of the flash memory, including page cache registers. VNSIM was validated against the Samsung 950 Pro NVMe SSD, modeling it with a 6.2%~8.9% offset.

Multi-core Scalable Fair I/O Scheduling for Multi-queue SSDs

Minjung Cho, Hyeongseok Kang, Kanghee Kim

http://doi.org/

Emerging NVMe-based multi-queue SSDs provide high bandwidth through parallel I/O: each core performs I/O through its dedicated queue in parallel with the other cores. To give each application its bandwidth share, a fair-share scheduler that provides a bandwidth share to each core is required. In this study, we propose a multi-core scalable fair-queuing algorithm for multi-queue SSDs. The algorithm adopts randomization to minimize inter-core synchronization overheads and provides a weight-proportional bandwidth share to each core. Our experiments indicate that the proposed algorithm achieves accurate bandwidth partitioning and outperforms the existing FlashFQ scheduler, regardless of the number of cores, on a Linux kernel with block-mq.
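The randomization idea can be sketched as lottery-style dispatch (a minimal illustration of the general technique, not the paper's algorithm): a queue is picked with probability proportional to its weight, so fairness emerges statistically without any cross-core virtual-time state to synchronize.

```python
import random

# Lottery-style weighted dispatch over per-core request queues.
def dispatch(queues, weights, rng):
    """Pick a non-empty queue with weight-proportional probability;
    return (core_index, request)."""
    ready = [i for i, q in enumerate(queues) if q]
    total = sum(weights[i] for i in ready)
    pick = rng.uniform(0, total)
    for i in ready:
        pick -= weights[i]
        if pick <= 0:
            return i, queues[i].pop(0)
    return ready[-1], queues[ready[-1]].pop(0)   # float-rounding fallback
```

Over many dispatches the service each core receives converges to its weight ratio, while each decision touches only local queue state plus a random draw.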

Implementation of a Prefetch method for Secondary Index Scan in MySQL InnoDB Engine

Dasom Hwang, Sang-Won Lee

http://doi.org/

Flash SSDs have many advantages over existing hard disks, such as energy efficiency, shock resistance, and high I/O throughput. For these reasons, combined with the emergence of innovative technologies such as 3D-NAND and V-NAND for a cheaper cost-per-byte, flash SSDs have been rapidly replacing hard disks in many areas. However, existing database engines, which were developed mainly assuming hard disks as the storage, cannot fully exploit the characteristics of flash SSDs (e.g., internal parallelism). In this paper, to utilize the internal parallelism of modern flash SSDs for faster query processing, we implement a prefetching method using asynchronous I/O as a new functionality for secondary index scans in the MySQL InnoDB engine. Compared to the original InnoDB engine, the proposed prefetching-based scan shows three-fold higher performance with 16KB pages and about 4.2-fold higher performance with 4KB pages.
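The prefetching idea can be sketched in miniature (a stand-in using a thread pool; the paper's change is asynchronous I/O inside the engine, and all names here are illustrative): because the secondary index yields the data-page numbers to visit in advance, several page reads can be in flight at once while results are still consumed in index order.

```python
from concurrent.futures import ThreadPoolExecutor

def read_page(page_no):
    # Stand-in for a data-page read from the SSD.
    return f"page-{page_no}"

def scan_with_prefetch(page_nos, depth=4):
    """Issue up to `depth` page reads concurrently, consume in order."""
    results = []
    with ThreadPoolExecutor(max_workers=depth) as pool:
        futures = [pool.submit(read_page, p) for p in page_nos]
        for f in futures:
            results.append(f.result())   # preserves index order
    return results
```

Keeping several requests outstanding is what lets the SSD's internal parallelism overlap the reads, which a one-at-a-time synchronous scan cannot do.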

SSD Caching for Improving Performance of Virtualized IoT Gateway

Dongwoo Lee, Young Ik Eom

http://doi.org/

In the home cloud environment with a virtualized IoT gateway, it is important to improve storage performance, since application performance deeply depends on storage. Although SSD caching is applied to improve storage performance, it is typically used only as a read cache due to SSD limitations such as poor write performance and limited write endurance. However, improving write performance in the home cloud server is important for the end-user experience. This paper proposes a novel SSD caching scheme that handles write data as well as read data. We validate the improvement in random-write performance achieved by transforming random writes into sequential patterns.
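Transforming random writes into sequential ones is the classic log-structured trick, sketched here as a toy (illustrative of the general technique, not the paper's implementation): every write is appended at the tail of the cache device and a small map remembers where each block currently lives.

```python
# Toy log-structured write cache: random block writes become sequential
# appends; a map records each block's latest location in the log.
class ToyWriteCache:
    def __init__(self):
        self.log = []          # the sequential cache device
        self.loc = {}          # block number -> log offset

    def write(self, blk, data):
        self.loc[blk] = len(self.log)   # always append at the tail
        self.log.append(data)

    def read(self, blk):
        return self.log[self.loc[blk]]
```

The SSD only ever sees appends, which sidesteps its poor random-write performance; stale log entries would later need cleaning, which is the cost of the approach.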

File-System-Level SSD Caching for Improving Application Launch Time

Changhee Han, Junhee Ryu, Dongeun Lee, Kyungtae Kang, Heonshik Shin

http://doi.org/

Application launch time is an important performance metric for user experience in desktop and laptop environments, and it mostly depends on the performance of secondary storage. Application launch times can be reduced by using a solid-state drive (SSD) instead of a hard disk drive (HDD). However, considering the cost-performance trade-off, using SSDs as caches for slow HDDs is a practical alternative for reducing application launch times. We propose a new SSD caching scheme that migrates data blocks from HDDs to SSDs. Our scheme operates entirely at the file system level and does not require the extra layer for mapping SSD-cached data that most other schemes need. In particular, our scheme avoids the mapping overheads that place significant burdens on main memory, the CPU, and SSD space for the mapping table. Experiments with 8 popular applications demonstrate that our scheme yields a 56% performance gain in application launch when data blocks are migrated along with their metadata.
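Why file-system-level migration needs no mapping table can be shown with a toy inode (all names invented for illustration; this is not the paper's on-disk format): migration rewrites the file's own block pointers to point at the SSD copy, so a later lookup follows the normal pointer path.

```python
# Toy file-system-level migration: the inode's block pointers are the
# only mapping, so no separate cache-mapping layer is consulted.
class ToyInode:
    def __init__(self, blocks):
        # Each entry is (device, block_no), like a file system's
        # block pointers recording where the data lives.
        self.blocks = list(blocks)

def migrate_to_ssd(inode, idx, ssd_block):
    """Repoint block idx at its SSD copy (data copy itself not shown)."""
    inode.blocks[idx] = ("ssd", ssd_block)

def lookup(inode, idx):
    # Normal pointer traversal; no extra mapping table.
    return inode.blocks[idx]
```

Because the pointer update is the migration, there is no in-memory table to maintain, which is the mapping overhead the scheme avoids.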


Journal of KIISE

  • ISSN : 2383-630X(Print)
  • ISSN : 2383-6296(Electronic)
  • KCI Accredited Journal

Editorial Office

  • Tel. +82-2-588-9240
  • Fax. +82-2-521-1352
  • E-mail. chwoo@kiise.or.kr