Delaunay triangulation based text detection from multi-view images of natural scene

Bibliographic Details
Main Authors: Roy, Soumyadip, Shivakumara, Palaiahnakote, Pal, Umapada, Lu, Tong, Kumar, Govindaraj Hemantha
Format: Article
Published: Elsevier 2020
Online Access:http://eprints.um.edu.my/25238/
https://doi.org/10.1016/j.patrec.2019.11.021
Description
Summary: Text detection in the wild is still considered a challenging problem because of its many real-time applications, such as forensics, where CCTV cameras capture images of the same scene from different angles. Unlike existing methods, which consider a single view captured orthogonally for text detection, this paper considers two views (view-1 and view-2) of the same scene captured at different angles or from different heights. For each image pair, the proposed method extracts features that describe the characteristics of text components based on Delaunay Triangulation (DT), namely the corner points, area, and cavity of the DT. The features of corresponding DTs in view-1 and view-2 are compared using the cosine distance to estimate the similarity between components of the two views. If a pair satisfies the similarity condition, the components are considered Candidate Text Components (CTCs); in other words, these are the components common to view-1 and view-2 that satisfy the similarity condition. From each CTC in view-1 and view-2, the proposed method finds nearest-neighbor components to restore the components of the same text line, estimating the degree of similarity between each CTC and its neighbors using chi-square and cosine distance measures. Furthermore, the proposed method uses a recognition step to detect correct text by comparing the recognition results of view-1 and view-2; the same step is used to remove false positives and improve performance. Experimental results on our own dataset, which contains image pairs from different situations, and on the standard datasets ICDAR 2013, MSRA-TD500, CTW1500, Total-Text, ICDAR 2017 MLT, and COCO-Text show that the proposed method outperforms existing methods. © 2019
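
A minimal Python sketch of the CTC-matching idea described above, assuming each text component is represented by a set of corner points. The DT statistics used here (point count, triangle count, mean and spread of triangle areas) and the matching threshold are illustrative stand-ins for the paper's corner-point, area, and cavity descriptors, not the authors' implementation.

    # Hypothetical sketch of the CTC matching step; scipy names are real,
    # but the feature choices and the threshold are assumptions.
    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.spatial.distance import cosine

    def dt_features(points):
        # points: (N, 2) corner coordinates of one text component.
        tri = Delaunay(points)
        verts = points[tri.simplices]                   # (T, 3, 2) vertices
        a, b, c = verts[:, 0], verts[:, 1], verts[:, 2]
        # Triangle areas via the shoelace formula.
        areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                             - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))
        # Stand-ins for the paper's corner-point / area / cavity descriptors.
        return np.array([len(points), len(tri.simplices),
                         areas.mean(), areas.std()])

    def is_candidate_pair(pts_view1, pts_view2, thresh=0.1):
        # Two components become Candidate Text Components (CTCs) when the
        # cosine distance between their DT feature vectors is small
        # (the threshold value is assumed).
        return cosine(dt_features(pts_view1), dt_features(pts_view2)) < thresh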
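
Similarly, a hypothetical sketch of the text-line restoration step: starting from a CTC, nearby components whose feature histograms are close under a symmetric chi-square distance are pulled into the same text line. The paper combines chi-square and cosine measures; only chi-square is shown here, and the threshold is assumed.

    # Hypothetical sketch of grouping neighbour components around a CTC.
    import numpy as np

    def chi_square(h1, h2, eps=1e-10):
        # Symmetric chi-square distance between non-negative feature histograms.
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def grow_text_line(ctc_hist, neighbour_hists, thresh=0.25):
        # Keep the nearest-neighbour components similar enough to the CTC
        # to be restored into the same text line (threshold is assumed).
        return [i for i, h in enumerate(neighbour_hists)
                if chi_square(ctc_hist, h) < thresh]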