COMIC: Toward A Compact Image Captioning Model With Attention

Bibliographic Details
Main Authors: Tan, Jia Huei, Chan, Chee Seng, Chuah, Joon Huang
Format: Article
Published: Institute of Electrical and Electronics Engineers (IEEE) 2019
Subjects:
Online Access:http://eprints.um.edu.my/23306/
https://doi.org/10.1109/TMM.2019.2904878
Description
Summary: Recent works in image captioning have shown very promising raw performance. However, most of these encoder-decoder style networks with attention do not scale naturally to large vocabulary sizes, making them difficult to deploy on embedded systems with limited hardware resources. This is because the size of the word and output embedding matrices grows proportionally with the vocabulary size, adversely affecting the compactness of these networks. To address this limitation, this paper tackles the compactness of image captioning models, a problem that has hitherto been unexplored. We show that our proposed model, named COMIC for COMpact Image Captioning, achieves results comparable to state-of-the-art approaches on five common evaluation metrics on both the MS-COCO and InstaPIC-1.1M datasets, despite having an embedded vocabulary that is 39×-99× smaller.
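
To make the scaling argument concrete, the short Python sketch below (not taken from the paper; the 512-dimensional embedding width and the two vocabulary sizes are illustrative assumptions) counts the parameters in the input word-embedding and output-projection matrices as a function of vocabulary size, showing why shrinking the embedded vocabulary shrinks the model.

# Minimal sketch (assumptions, not the authors' code): parameter count of the
# input word-embedding matrix (V x d) and the output projection (d x V) in a
# typical encoder-decoder captioner, for an assumed embedding width d = 512.

def embedding_params(vocab_size: int, embed_dim: int = 512) -> int:
    # The input embedding has vocab_size * embed_dim weights; the output layer
    # that projects the decoder state back onto the vocabulary has the same count.
    return vocab_size * embed_dim + embed_dim * vocab_size

# Illustrative vocabulary sizes only: a typical word-level vocabulary versus one
# roughly 39x smaller, in the spirit of the reduction reported in the abstract.
for vocab in (10_000, 256):
    print(f"V = {vocab:>6}: {embedding_params(vocab):,} embedding parameters")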