Compression of Korean Phrase Structure Parsing Model using Knowledge Distillation 


Vol. 45, No. 5, pp. 451-456, May 2018
DOI: 10.5626/JOK.2018.45.5.451



  Abstract

A sequence-to-sequence model is an end-to-end model that transforms an input sequence into an output sequence of a different length. However, because it relies on techniques such as the attention mechanism and input-feeding to achieve high performance, it is difficult to deploy in an actual service. In this paper, we apply sequence-level knowledge distillation, an effective model compression technique for natural language processing, to Korean phrase structure parsing. Experimental results show that when the size of the hidden layer is reduced from 500 to 50, the F1 score improves by 0.56% and decoding is 60.71 times faster than the baseline model.
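For illustration only, the minimal PyTorch sketch below (not the authors' implementation) shows the core idea of sequence-level knowledge distillation for a sequence-to-sequence model: a large teacher generates output sequences for the training inputs, and a much smaller student is trained on those teacher-generated sequences with ordinary cross-entropy. The toy GRU encoder-decoder (without attention or input-feeding), the dummy token ids, and greedy decoding in place of beam search are all assumptions made for brevity; only the hidden sizes 500 and 50 mirror the experiment described above.

```python
# Minimal sketch of sequence-level knowledge distillation (assumed setup, not the paper's code).
import torch
import torch.nn as nn

PAD, BOS, EOS = 0, 1, 2
VOCAB, MAX_LEN = 100, 20

class Seq2Seq(nn.Module):
    """Toy GRU encoder-decoder standing in for the parsing model."""
    def __init__(self, hidden):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, hidden, padding_idx=PAD)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, VOCAB)

    def forward(self, src, tgt_in):
        _, h = self.enc(self.emb(src))              # encode the input sequence
        dec_out, _ = self.dec(self.emb(tgt_in), h)  # teacher-forced decoding
        return self.out(dec_out)                    # logits over the output vocabulary

    @torch.no_grad()
    def greedy_decode(self, src):
        """Generate an output sequence token by token (beam search in the paper)."""
        _, h = self.enc(self.emb(src))
        tok = torch.full((src.size(0), 1), BOS, dtype=torch.long)
        outs = []
        for _ in range(MAX_LEN):
            o, h = self.dec(self.emb(tok), h)
            tok = self.out(o).argmax(-1)
            outs.append(tok)
        return torch.cat(outs, dim=1)

teacher = Seq2Seq(hidden=500)   # large model, assumed already trained on gold parse trees
student = Seq2Seq(hidden=50)    # compressed model to be distilled

src = torch.randint(3, VOCAB, (8, 15))      # dummy batch of input token ids
pseudo_tgt = teacher.greedy_decode(src)     # teacher outputs become the student's targets

# Sequence-level distillation: train the student with ordinary cross-entropy
# on the teacher-generated sequences instead of the gold trees.
dec_in = torch.cat([torch.full((8, 1), BOS, dtype=torch.long), pseudo_tgt[:, :-1]], dim=1)
logits = student(src, dec_in)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   pseudo_tgt.reshape(-1), ignore_index=PAD)
loss.backward()
```

In this scheme only the student's training targets change; the teacher is trained on the gold parse trees as usual, and the distilled student is still evaluated against the gold trees.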




  Cite this article

[IEEE Style]

H. Hwang and C. Lee, "Compression of Korean Phrase Structure Parsing Model using Knowledge Distillation," Journal of KIISE, JOK, vol. 45, no. 5, pp. 451-456, 2018. DOI: 10.5626/JOK.2018.45.5.451.


[ACM Style]

Hyunsun Hwang and Changki Lee. 2018. Compression of Korean Phrase Structure Parsing Model using Knowledge Distillation. Journal of KIISE, JOK, 45, 5, (2018), 451-456. DOI: 10.5626/JOK.2018.45.5.451.


[KCI Style]

Hyunsun Hwang and Changki Lee, "Compression of Korean Phrase Structure Parsing Model using Knowledge Distillation," Journal of KIISE, vol. 45, no. 5, pp. 451-456, 2018. DOI: 10.5626/JOK.2018.45.5.451.








