Search : [ keyword: compiler (컴파일러) ] (7)

Memory Model Design for Integer-Pointer Casting Support in C-like Languages via Dual Non-determinism

Yonghyun Kim, Chung-Kil Hur

http://doi.org/10.5626/JOK.2024.51.7.643

In system programming, pointers are essential elements. However, applying formal verification methods to programs that involve integer-pointer casting poses an important challenge. To address this challenge, a mathematically defined memory model that supports integer-pointer casting, along with proof techniques for verification, is necessary. This study presents a memory model that supports integer-pointer casting, formalized in the Coq proof assistant. The model accommodates common integer-pointer patterns, including one-past-the-end pointers. Additionally, a simulation-based proof technique is introduced that enables the model to be used for program verification, and its adequacy is established through proof. To validate the effectiveness of the approach, the defined memory model is integrated into CompCert, a verified C compiler, replacing its original memory model, and two of CompCert's optimization verification proofs are updated using the simulation technique. The proposed memory model is expected to find applications in program and compiler verification tasks involving integer-pointer operations.
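
The kind of pattern such a model must account for can be pictured with a toy dual representation of pointers, sketched below in Python: a pointer stays a logical (block, offset) pair until it is cast to an integer, at which point its block is given a concrete address. This is only an illustration under assumed semantics, not the paper's Coq formalization; all names are hypothetical.

```python
# Toy illustration (not the paper's Coq model) of a memory model in which
# pointers are logical (block id, offset) values until they are cast to an
# integer; the cast non-deterministically assigns the block a concrete base.
import random

class Memory:
    def __init__(self):
        self.next_block = 0
        self.base = {}          # block id -> concrete base address (once realized)
        self.size = {}          # block id -> block size in bytes

    def alloc(self, size):
        b = self.next_block
        self.next_block += 1
        self.size[b] = size
        return (b, 0)           # logical pointer: (block, offset)

    def ptr_to_int(self, ptr):
        b, off = ptr
        if b not in self.base:  # realize a concrete address on the first cast
            self.base[b] = random.randrange(0x1000, 0xFFFF0000, 16)
        return self.base[b] + off

    def int_to_ptr(self, addr):
        # Map a concrete address back to a realized block, if any.
        # "One past the end" (off == size) is accepted, as in C.
        for b, base in self.base.items():
            off = addr - base
            if 0 <= off <= self.size[b]:
                return (b, off)
        return None

m = Memory()
p = m.alloc(8)
i = m.ptr_to_int(p) + 8        # one-past-the-end address survives the round trip
assert m.int_to_ptr(i) == (p[0], 8)
```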

Performance Analysis of Instruction Priority Functions using a List Scheduling Simulator

Changhoon Chung, Soo-Mook Moon

http://doi.org/10.5626/JOK.2023.50.12.1048

Instruction scheduling is an important compiler optimization technique that reduces the execution time of a program through parallel execution. However, existing scheduling techniques show limited performance because they rely on heuristics. This study examines the effect of instruction priority functions on list scheduling through simulation. The results show that a priority function based on the overall structure of the dependency graph can reduce schedule length by up to 4% compared to a priority function based on the original instruction order. Furthermore, the results suggest which input features should be used when implementing a reinforcement learning-based scheduling model.
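
As a rough illustration of the comparison the study performs, the sketch below implements a minimal list scheduler in Python with two interchangeable priority functions: the original instruction order and a graph-based priority (the critical-path height of each node). The machine model, dependency graph, and names are assumptions made for the example, not the paper's simulator.

```python
# Minimal list-scheduling sketch (assumed single-issue machine, unit latencies)
# comparing a program-order priority with a critical-path-height priority.
def critical_path_heights(succs, n):
    # height(i) = longest path (in nodes) from i to a leaf;
    # nodes are assumed to be numbered in topological (program) order
    heights = [1] * n
    for i in reversed(range(n)):
        for s in succs.get(i, []):
            heights[i] = max(heights[i], 1 + heights[s])
    return heights

def list_schedule(n, succs, priority):
    preds = {i: 0 for i in range(n)}
    for ss in succs.values():
        for s in ss:
            preds[s] += 1
    ready = [i for i in range(n) if preds[i] == 0]
    order = []
    while ready:
        ready.sort(key=priority, reverse=True)   # pick the highest-priority ready node
        i = ready.pop(0)
        order.append(i)
        for s in succs.get(i, []):
            preds[s] -= 1
            if preds[s] == 0:
                ready.append(s)
    return order

# Dependency graph: 0 -> {1, 2}, 2 -> 3, 3 -> 4  (node = instruction index)
succs = {0: [1, 2], 2: [3], 3: [4]}
heights = critical_path_heights(succs, 5)
print(list_schedule(5, succs, priority=lambda i: -i))          # original program order
print(list_schedule(5, succs, priority=lambda i: heights[i]))  # graph-based priority
```

On a machine with instruction latencies or multiple issue slots, the graph-based ordering keeps the long dependence chain (0→2→3→4) moving earlier, which is the kind of effect behind the reported schedule-length reduction.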

Code Generation and Data Layout Transformation Techniques for Processing-in-Memory

Hayun Lee, Gyungmo Kim, Dongkun Shin

http://doi.org/10.5626/JOK.2023.50.8.639

Processing-in-Memory (PIM) capitalizes on the internal parallelism and bandwidth of memory systems, thereby achieving superior performance to CPUs or GPUs on memory-intensive operations. Although many PIM architectures have been proposed, the compiler issues for PIM are not yet well studied. To generate efficient program code for PIM devices, a PIM compiler must optimize operation schedules and data layouts. Additionally, register reuse in the PIM processing units must be maximized to reduce data movement traffic between the host and PIM devices. We propose a PIM compiler that can support various PIM architectures; it achieves up to a 2.49x performance improvement in GEMV operations through register reuse optimization.
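
To illustrate why register reuse matters for GEMV (y = A·x), the sketch below uses a hypothetical traffic model in Python: a column tile of x is loaded into the PIM register file once and reused across all rows, instead of being re-fetched for every row. The cost model and parameters are assumptions for illustration, not the proposed compiler.

```python
# Illustrative (not the paper's) data-movement model for GEMV on a PIM unit
# with a small register file: column tiles of x are kept in PIM registers.
def gemv_traffic(rows, cols, regs, reuse):
    loads = 0
    for c0 in range(0, cols, regs):          # tile the columns by register-file size
        tile = min(regs, cols - c0)
        if reuse:
            loads += tile                    # load the x-tile once, reuse for every row
            loads += rows * tile             # stream the matrix tile
        else:
            loads += rows * tile * 2         # re-fetch both x and A elements per row
    return loads

print(gemv_traffic(1024, 1024, regs=16, reuse=False))  # baseline traffic
print(gemv_traffic(1024, 1024, regs=16, reuse=True))   # roughly halved with reuse
```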

Optimizing Homomorphic Compiler HedgeHog for DNN Model based on CKKS Homomorphic Encryption Scheme

Dongkwon Lee, Gyejin Lee, Suchan Kim, Woosung Song, Dohyung Lee, Hoon Kim, Seunghan Jo, Kyuyeon Park, Kwangkeun Yi

http://doi.org/10.5626/JOK.2022.49.9.743

We present HedgeHog, a new state-of-the-art optimizing homomorphic compiler with a high-level input language. Although homomorphic encryption (HE) enables safe and secure third-party computation, it is hard to build high-performance HE applications without expertise. Homomorphic compilers lower this hurdle, but most existing compilers are based on HE schemes that do not support real-number computation, and the few compilers based on the CKKS scheme, which does support real-number computation, use low-level input languages. We present an optimizing compiler, HedgeHog, whose input language supports high-level DNN operators. In addition to its ease of use, the compiled HE code achieves up to 22% more speedup than the existing state-of-the-art compiler.
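
A flavor of what a CKKS-targeting compiler emits can be shown with a plaintext simulation of slot-level operations: a DNN dot product lowers to an elementwise multiply followed by a logarithmic rotate-and-sum. The Python sketch below simulates the slot semantics on ordinary lists; it is not HedgeHog's IR or API, and the operation names are made up.

```python
# Plaintext simulation of the slot operations a CKKS compiler typically emits:
# elementwise add/mul and slot rotation.  A dense layer's dot product becomes
# one multiply plus a logarithmic rotate-and-sum.
def rot(v, k):   return v[k:] + v[:k]                      # CKKS-style slot rotation
def mul(a, b):   return [x * y for x, y in zip(a, b)]
def add(a, b):   return [x + y for x, y in zip(a, b)]

def dot(a, b):
    n = len(a)                       # assume n is a power of two
    acc = mul(a, b)
    k = n // 2
    while k >= 1:                    # rotate-and-sum: the result lands in every slot
        acc = add(acc, rot(acc, k))
        k //= 2
    return acc[0]

print(dot([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]))     # 20.0
```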

On Equi-LR automata

Gyung-Ok Lee

http://doi.org/10.5626/JOK.2021.48.3.352

LR parsing is a representative bottom-up parsing method, and LR automata have been the essential framework for constructing LR parsers. This paper defines an equivalence class of classical LR items, called the Equi-LR class, and defines Equi-LR automata by using Equi-LR classes instead of classical LR items. The paper shows that Equi-LR automata can be constructed faster than classical LR automata, and that the size bound of an LR parser built on Equi-LR automata is tighter than that of a parser built on classical LR automata.
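
For reference, the classical construction that Equi-LR automata improve on is the standard LR(0) closure/goto item-set construction, sketched below in Python for a tiny grammar; the Equi-LR construction instead works with equivalence classes of items. The grammar and naming here are illustrative only.

```python
# Classical LR(0) item-set construction (closure/goto), the baseline that the
# Equi-LR construction refines by grouping items into equivalence classes.
GRAMMAR = {                       # S' -> S ; S -> ( S ) | x
    "S'": [["S"]],
    "S":  [["(", "S", ")"], ["x"]],
}
NONTERMS = set(GRAMMAR)

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (_lhs, rhs, dot) in list(items):
            if dot < len(rhs) and rhs[dot] in NONTERMS:
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], tuple(prod), 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return frozenset(items)

def goto(items, sym):
    moved = [(l, r, d + 1) for (l, r, d) in items if d < len(r) and r[d] == sym]
    return closure(moved) if moved else None

start = closure({("S'", ("S",), 0)})
states, work = {start}, [start]
while work:                       # build all LR(0) states
    st = work.pop()
    for sym in {r[d] for (_, r, d) in st if d < len(r)}:
        nxt = goto(st, sym)
        if nxt and nxt not in states:
            states.add(nxt)
            work.append(nxt)
print(len(states), "LR(0) states")
```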

Parser Generators Sharing LR Automaton Generators and Accepting General Purpose Programming Language-based Specifications

Jintaeck Lim, Gayoung Kim, Seunghyun Shin, Kwanghoon Choi, Iksoon Kim

http://doi.org/10.5626/JOK.2020.47.1.52

This paper proposes two ways to develop LR parsers easily. First, one can write a parser specification in a general-purpose programming language and obtain the benefits of syntax-error checking, code completion, and type-error checking over the specification from that language's development environment. Second, to make it easy to develop a parser tool for a new programming language, the LR automaton generation for parser specifications is factored into a shared, modular component. Based on this idea, we developed tools for writing parsers in Python, Java, C++, and Haskell, and demonstrated the two aforementioned advantages experimentally.
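
The first idea can be pictured with a hypothetical Python sketch in which the parser specification is ordinary Python data and functions, so the host language's tooling already checks and completes it. This is not the paper's actual tool or API; all names are invented for illustration.

```python
# Hypothetical sketch: a parser specification written as plain Python,
# so the IDE's completion and type checking apply to the grammar itself.
from typing import Callable, List, Tuple

Rule = Tuple[str, List[str], Callable[..., object]]

def make_add(left, _plus, right):        # semantic action: an ordinary Python function
    return ("add", left, right)

def make_num(tok):
    return ("num", int(tok))

SPEC: List[Rule] = [
    ("expr", ["expr", "+", "term"], make_add),
    ("expr", ["term"],              lambda t: t),
    ("term", ["NUM"],               make_num),
]

# A real tool would feed SPEC to a shared LR-automaton generator; here we only
# show that the specification is checkable, runnable host-language code.
for lhs, rhs, action in SPEC:
    print(f"{lhs} -> {' '.join(rhs)}   (action: {action.__name__})")
```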

Optimizing Constant Value Generation in Just-in-time Compiler for 64-bit JavaScript Engine

Hyung-Kyu Choi, Jehyung Lee

http://doi.org/

JavaScript is widely used in web pages together with HTML. Many JavaScript engines adopt just-in-time (JIT) compilers to accelerate the execution of JavaScript programs. Recently, many new devices have adopted 64-bit CPUs instead of 32-bit ones, and JIT compilers for 64-bit CPUs are gradually being introduced into JavaScript engines. However, the currently available JIT compilers for 64-bit devices have many inefficiencies. In particular, code size increases significantly compared to 32-bit devices, mainly because of 64-bit-wide addresses. In this paper, we address the inefficiencies introduced by 64-bit-wide addresses and values in the JIT compiler of the V8 JavaScript engine and propose more efficient ways of generating constant values and addresses to reduce code size. We implemented the proposed optimization in the V8 JavaScript engine and measured code size as well as performance with the Octane and SunSpider benchmarks, observing a 3.6% performance gain and a 0.7% code size reduction on Octane, and a 0.32% performance gain and a 2.8% code size reduction on SunSpider.
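
A simplified picture of the optimization space is sketched below in Python, using well-known x86-64 encoding sizes: constants that fit in 32 bits avoid the 10-byte move with a full 64-bit immediate, and addresses close to a pinned base register can be formed with a short lea. This is an assumed, simplified model for illustration, not V8's actual code generator.

```python
# Simplified sketch of JIT constant materialization choices on x86-64
# (byte counts are standard encoding sizes; the selection logic is hypothetical).
def emit_const(value, base=None):
    if 0 <= value < 2**32:
        return ("mov r32, imm32", 5)             # zero-extends into the 64-bit register
    if -2**31 <= value < 2**31:
        return ("mov r64, imm32 (sign-extended)", 7)
    if base is not None and -2**31 <= value - base < 2**31:
        return ("lea r64, [base + disp32]", 7)   # reuse a pinned base register
    return ("movabs r64, imm64", 10)             # full 64-bit immediate

print(emit_const(0x1234))                                     # small constant: 5 bytes
print(emit_const(0x7FFF_FFFF_F000, base=0x7FFF_FFFF_0000))    # near the base: 7 bytes
print(emit_const(0x7FFF_FFFF_F000))                           # otherwise: 10 bytes
```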

