Performance evaluation of recurrent neural networks applied to indoor camera localization
Main Authors:
Format: Article
Language: English
Published: IJETAE Publication House, 2022
Subjects:
Online Access:
http://eprints.utm.my/id/eprint/98713/1/MuhammadSAlam2022_PerformanceEvaluationofRecurrentNeura.pdf
http://eprints.utm.my/id/eprint/98713/
http://dx.doi.org/10.46338/ijetae0822_15
Summary: Researchers in robotics and computer vision are exploring image-based localization of indoor cameras. Implementing indoor camera localization with a convolutional neural network (CNN) or a recurrent neural network (RNN) becomes challenging on large image datasets because of the internal structure of these networks, so the preferred CNN or RNN variant depends on the problem type and the size of the dataset. CNN is the most flexible approach to indoor localization; although it lends itself well to hyper-parameter selection, it requires a large number of training images to reach high accuracy, and overfitting further reduces accuracy. RNNs address these problems by retaining information about the input images in internal memory. Long short-term memory (LSTM), bidirectional LSTM (BiLSTM), and the gated recurrent unit (GRU) are three RNN variants, and the most appropriate one can be chosen according to the problem type and dataset. In this study, we recommend which variant trains more quickly and which produces more accurate results. RNNs also suffer from the vanishing gradient problem, which makes it difficult to learn from larger amounts of data; the LSTM overcomes this problem. The BiLSTM is an extension of the LSTM and can achieve higher performance, while the GRU is a more advanced variant that is computationally more efficient than an LSTM. We explore these recurrent units for indoor camera localization, focusing on LSTM, BiLSTM, and GRU, and evaluate their performance on the Microsoft 7-Scenes and InteriorNet datasets. Our experiments show that the BiLSTM is more accurate than the LSTM and GRU, and that the GRU is faster than the LSTM and BiLSTM.
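The abstract compares three recurrent units (LSTM, BiLSTM, GRU) for regressing camera pose from image sequences. The sketch below illustrates that setup in PyTorch: a pose-regression head with a selectable recurrent unit on top of per-frame CNN features. It is a minimal sketch, not the authors' implementation; the feature dimension, hidden size, use of a CNN backbone such as ResNet, and the translation-plus-quaternion pose parameterization are all assumptions, since the record does not specify the architecture.

```python
# Minimal sketch (assumed architecture, not the paper's code): a recurrent
# pose-regression head that can be instantiated with an LSTM, BiLSTM, or GRU,
# mirroring the comparison described in the abstract.
import torch
import torch.nn as nn


class RecurrentPoseRegressor(nn.Module):
    """Predicts a 6-DoF camera pose (3-D translation + 4-D quaternion)
    from a sequence of per-frame image features using a chosen RNN variant."""

    def __init__(self, feat_dim=2048, hidden_dim=256, unit="lstm"):
        super().__init__()
        bidirectional = unit == "bilstm"
        if unit in ("lstm", "bilstm"):
            self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                               bidirectional=bidirectional)
        elif unit == "gru":
            self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        else:
            raise ValueError(f"unknown unit: {unit}")
        out_dim = hidden_dim * (2 if bidirectional else 1)
        self.fc_trans = nn.Linear(out_dim, 3)   # x, y, z translation
        self.fc_rot = nn.Linear(out_dim, 4)     # orientation quaternion

    def forward(self, feats):
        # feats: (batch, seq_len, feat_dim) features from a CNN backbone
        out, _ = self.rnn(feats)
        last = out[:, -1, :]                    # use the final time step
        return self.fc_trans(last), self.fc_rot(last)


if __name__ == "__main__":
    # Compare the three variants on dummy feature sequences; in the paper the
    # inputs would come from the Microsoft 7-Scenes and InteriorNet datasets.
    feats = torch.randn(4, 8, 2048)             # (batch, frames, feat_dim)
    for unit in ("lstm", "bilstm", "gru"):
        model = RecurrentPoseRegressor(unit=unit)
        trans, rot = model(feats)
        print(unit, trans.shape, rot.shape)      # -> (4, 3) and (4, 4)
```

Swapping the `unit` argument is the only change needed to move between variants, which is one way the speed and accuracy comparison reported in the abstract could be run under identical training conditions.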