Visual compensation in localization of a robot on a ceiling map


Bibliographic Details
Main Authors: Matsumoto, T., Takahashi, T., Iwahashi, M., Kimura, T., Salbiah, S., Mokhtar, N.
Format: Article
Language: en
Published: Scientific Research and Essays 2011
Subjects:
Online Access:http://eprints.um.edu.my/6147/1/Visual_compensation_in_localization_of_a_robot_on_a_ceiling_map.pdf
http://eprints.um.edu.my/6147/
https://www.academicjournals.org/SRE/PDF/pdf2011/4%20Jan/Matsumoto%20et%20al.pdf
Description
Summary: In a visual odometry system, the location of a mobile robot is automatically estimated (localized) from video. When the video is captured by an "upward" camera fixed to an indoor mobile robot, a panorama image of the ceiling (a ceiling map) is generated from the visual motion between adjacent frames of the video. Similarly, the location of another robot can be estimated on the ceiling map from the visual motion between its current frame and the previously generated ceiling map. Under the assumption that the robot moves in a straight line or rotates about a fixed point, localization poses no problem as long as the floor is flat. However, when there is debris on the floor, the estimated location contains error. In this paper, we reduce this error by utilizing the visual motion in video from a "forward" camera fixed to the robot; that is, the motion observed in the "upward" camera's video is compensated with the motion observed in the "forward" camera's video. It was experimentally confirmed that the maximum absolute value of the error was reduced to approximately 11.
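The inter-frame visual motion that the abstract relies on (both for building the ceiling map and for localizing on it) can, for pure translation, be estimated with phase correlation. The sketch below is an illustrative assumption, not the authors' implementation: the paper does not specify its motion-estimation method, and real ceiling-map systems must also handle rotation.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer (dy, dx) translation of frame_b relative to
    frame_a via phase correlation: the peak of the inverse FFT of the
    normalized cross-power spectrum marks the displacement. This is one
    common way to measure visual motion between two adjacent frames;
    it is a hypothetical stand-in for the paper's unspecified method."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-9          # normalize to keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular shifts into signed displacements.
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Accumulating such per-frame shifts gives the robot's track on the
# ceiling map; the paper's contribution is to correct this track with
# motion from the "forward" camera when debris disturbs the upward view.
```

As a quick check, shifting a random texture by a known amount and re-estimating the shift recovers it exactly, because a circular shift changes only the phase of the spectrum.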