Image Caption Generation using Recurrent Neural Network 


Vol. 43,  No. 8, pp. 878-882, Aug.  2016



  Abstract

Automatically generating captions for an image is a difficult task because it requires both computer vision and natural language processing. However, it has many important applications, such as early childhood education, image retrieval, and navigation for the blind. In this paper, we describe a Recurrent Neural Network (RNN) model for generating image captions that takes as input image features extracted by a Convolutional Neural Network (CNN). We demonstrate that our models produce state-of-the-art results in image caption generation experiments on the Flickr 8K, Flickr 30K, and MS COCO datasets.
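
The abstract only sketches the architecture (a CNN as image encoder, an RNN as caption decoder), so the short PyTorch example below is a minimal illustration of that general setup rather than the paper's actual model: the ResNet-18 backbone, the LSTM cell, and all layer sizes are assumptions made here for the sketch.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class EncoderCNN(nn.Module):
        # Extracts a fixed-size image feature vector with a pretrained CNN.
        def __init__(self, embed_size):
            super().__init__()
            resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the final fc layer
            self.fc = nn.Linear(resnet.fc.in_features, embed_size)

        def forward(self, images):
            with torch.no_grad():                # the CNN stays frozen in this sketch
                feats = self.backbone(images)    # (B, 512, 1, 1)
            return self.fc(feats.flatten(1))     # (B, embed_size)

    class DecoderRNN(nn.Module):
        # Produces per-step word scores, conditioned on the image feature.
        def __init__(self, embed_size, hidden_size, vocab_size):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_size)
            self.rnn = nn.LSTM(embed_size, hidden_size, batch_first=True)
            self.fc = nn.Linear(hidden_size, vocab_size)

        def forward(self, img_feats, captions):
            # The image feature is fed as the first step of the sequence,
            # followed by the embedded caption words (teacher forcing).
            word_embeds = self.embed(captions)                            # (B, T, E)
            inputs = torch.cat([img_feats.unsqueeze(1), word_embeds], 1)  # (B, T+1, E)
            hidden, _ = self.rnn(inputs)                                  # (B, T+1, H)
            return self.fc(hidden)                                        # per-step vocabulary scores

    # Example usage with dummy data:
    encoder = EncoderCNN(embed_size=256)
    decoder = DecoderRNN(embed_size=256, hidden_size=512, vocab_size=10000)
    images = torch.randn(4, 3, 224, 224)
    captions = torch.randint(0, 10000, (4, 15))
    scores = decoder(encoder(images), captions)   # (4, 16, 10000)

Training such a model would minimize cross-entropy between these scores and the ground-truth caption words; at test time the decoder would instead feed its own predictions back in (e.g., via greedy or beam search).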




  Cite this article

[IEEE Style]

C. Lee, "Image Caption Generation using Recurrent Neural Network," Journal of KIISE, JOK, vol. 43, no. 8, pp. 878-882, 2016. DOI: .


[ACM Style]

Changki Lee. 2016. Image Caption Generation using Recurrent Neural Network. Journal of KIISE, JOK, 43, 8, (2016), 878-882. DOI: .


[KCI Style]

이창기, "Recurrent Neural Network를 이용한 이미지 캡션 생성," 한국정보과학회 논문지, 제43권, 제8호, 878~882쪽, 2016. DOI: .




