Stability-certified deep reinforcement learning strategy for UAV and Lagrangian floating platform



Bibliographic Details
Main Authors: Muslim, M. S. M., Ismail, Z. H.
Format: Conference or Workshop Item
Published: 2021
Online Access: http://eprints.utm.my/id/eprint/95750/
http://dx.doi.org/10.1109/ECTI-CON51831.2021.9454688
Description
Summary: This paper presents a robust technique that enables an Unmanned Aerial Vehicle (UAV) to fly above a moving platform autonomously. The study investigates the problem of certifying the stability of a reinforcement learning policy coupled with nonlinear dynamical systems, since conventional control methods often fail to properly account for complex nonlinear effects. Deep reinforcement learning algorithms are designed to maintain robust stability of the UAV's position in three-dimensional space, both its altitude and its latitude-longitude location, so that the UAV can fly over a moving platform in a stable manner. In addition, the policy gradient method is regulated through input-output analysis and can certify a large set of stabilizing controllers, achieving robust stability by exploiting problem-specific structure. Within the stability-certified parameter space, reinforcement learning agents attain high performance while exhibiting consistent learning behavior over time, as demonstrated by a numerical evaluation on a decentralized control task involving flight formation.
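The summary describes constraining policy-gradient updates to a stability-certified parameter space. As a minimal illustrative sketch (not the paper's method), the idea can be modeled as gradient ascent followed by projection back into a certified set; the paper's certificate comes from input-output analysis of the closed loop, whereas here the certified region is assumed, for simplicity, to be a norm ball on the policy parameters.

```python
import numpy as np

# Assumed for illustration: parameters with norm <= RADIUS are taken to keep
# the closed-loop UAV-platform system certifiably stable. The real certificate
# in the paper is derived from input-output stability analysis, not a norm ball.
RADIUS = 1.0


def project_certified(theta, radius=RADIUS):
    """Project policy parameters onto the certified norm ball."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)


def policy_gradient_step(theta, grad, lr=0.1):
    """One gradient-ascent step followed by projection into the certified set."""
    return project_certified(theta + lr * grad)


theta = np.zeros(3)
for _ in range(50):
    # Stand-in gradient that pushes the parameters outward; a real agent would
    # estimate this from rollouts of the UAV and moving-platform dynamics.
    grad = np.ones(3)
    theta = policy_gradient_step(theta, grad)

print(np.linalg.norm(theta))  # remains within RADIUS despite outward gradients
```

The projection step is what keeps learning inside the certified region: the agent is free to improve its return, but every update is clipped back to the set of parameters for which stability is guaranteed.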