Food category recognition using SURF and MSER local feature representation
Main Authors: , , , ,
Format: Book Section
Language: English
Published: Springer, 2017
Online Access: http://psasir.upm.edu.my/id/eprint/63113/1/Food%20category%20recognition%20using%20SURF%20and%20MSER%20local%20feature%20representation.pdf
http://psasir.upm.edu.my/id/eprint/63113/
Summary: Food object recognition has gained popularity in recent years, which can perhaps be attributed to its potential applications in fields such as nutrition and fitness. Recognizing food images, however, is a challenging task, since foods come in many shapes and sizes. Besides exhibiting unexpected deformations and textures, food images are also captured under differing lighting conditions and camera viewpoints. From a computer vision perspective, using global image features to train a supervised classifier may be unsuitable given the complex nature of food images. Local features, on the other hand, appear to be the better alternative, since they capture fine-grained details such as interest points and other local structure. In this paper, two local features, SURF (Speeded-Up Robust Features) and MSER (Maximally Stable Extremal Regions), are investigated for food object recognition. Both features are computationally inexpensive and have been shown to be effective local descriptors for complex images. Each feature is first evaluated separately; this is followed by feature fusion to observe whether a combined representation better represents food images. Experimental evaluations using a Support Vector Machine classifier show that feature fusion achieves better recognition accuracy, at 86.6%.
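The summary describes a pipeline of local-feature extraction (SURF and MSER), feature fusion, and SVM classification. Below is a minimal sketch of such a pipeline in Python with OpenCV and scikit-learn; the abstract does not specify the encoding or fusion strategy, so the bag-of-visual-words encoding, vocabulary size, helper names, and SVM kernel used here are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical SURF + MSER fusion pipeline (bag-of-visual-words + SVM).
# The paper's exact parameters are not given in the abstract.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

VOCAB_SIZE = 200  # assumed codebook size, not from the paper

def surf_descriptors(gray):
    # SURF lives in opencv-contrib (cv2.xfeatures2d); it may be unavailable
    # in builds compiled without the non-free modules.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    _, desc = surf.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 64), np.float32)

def mser_descriptors(gray):
    # MSER yields stable regions; here each region is summarised by a SURF
    # descriptor computed at its centre so both cues share one feature space.
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    kps = [cv2.KeyPoint(float(x), float(y), 16.0)
           for r in regions for (x, y) in [r.mean(axis=0)]]
    surf = cv2.xfeatures2d.SURF_create()
    _, desc = surf.compute(gray, kps)
    return desc if desc is not None else np.empty((0, 64), np.float32)

def bow_histogram(desc, kmeans):
    # Quantise descriptors against a learned vocabulary and L1-normalise.
    hist = np.zeros(VOCAB_SIZE, np.float32)
    if len(desc):
        for word in kmeans.predict(desc):
            hist[word] += 1
        hist /= hist.sum()
    return hist

def train(images, labels):
    gray = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    surf_d = [surf_descriptors(g) for g in gray]
    mser_d = [mser_descriptors(g) for g in gray]
    # One vocabulary per feature type; fusion = concatenating the histograms.
    km_surf = KMeans(VOCAB_SIZE).fit(np.vstack([d for d in surf_d if len(d)]))
    km_mser = KMeans(VOCAB_SIZE).fit(np.vstack([d for d in mser_d if len(d)]))
    X = np.array([np.hstack([bow_histogram(s, km_surf),
                             bow_histogram(m, km_mser)])
                  for s, m in zip(surf_d, mser_d)])
    clf = SVC(kernel="rbf").fit(X, labels)
    return clf, km_surf, km_mser
```

Concatenating per-feature histograms is one common early-fusion choice; the paper may instead combine scores or keypoints directly, which the abstract does not say.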