Double Deep RL-based strategy for UAV-assisted energy harvesting optimization in disaster-resilient IoT networks
| Main Authors: | |
|---|---|
| Format: | Proceeding Paper |
| Language: | English |
| Published: | IEEE, 2024 |
| Subjects: | |
| Online Access: | http://irep.iium.edu.my/114449/1/114449_Double%20Deep%20RL-based%20strategy.pdf http://irep.iium.edu.my/114449/7/114449_Double%20Deep%20RL-based%20strategy_SCOPUS.pdf http://irep.iium.edu.my/114449/ https://ieeexplore.ieee.org/document/10652500 |
| Summary: | Unmanned Aerial Vehicles (UAVs) are increasingly crucial for emergency-response scenarios, including tasks like wireless power transfer (WPT) and data collection in disaster zones. This paper proposes a Double Deep Reinforcement Learning (DDRL) framework for energy harvesting (EH) in such scenarios. Our framework involves a UAV swarm navigating an area to provide WPT. The primary goal is to enhance service quality in critical areas while enabling dynamic swarm management for tasks like recharging. We formulate this as a nonlinear programming (NLP) optimization problem, maximizing EH from IoT devices and optimizing UAV trajectories under constraints like mission duration and energy limits. Due to the problem's complexity, we propose a lightweight DDRL solution capable of efficiently learning system dynamics. Extensive simulations and comparisons with Deep RL and DDPG algorithms demonstrate the superior performance of DDRL in enhancing EH, covering strategic locations effectively, and achieving high satisfaction and accuracy rates. |
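The record's abstract describes a Double Deep RL (DDRL) agent learning the system dynamics, but gives no implementation detail. For orientation only, below is a minimal sketch of the Double DQN target computation, the standard "double" mechanism in which the online network selects the next action and the target network evaluates it. All names here (`online_q`, `target_q`, `gamma`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def double_dqn_target(rewards, next_states, done, online_q, target_q, gamma=0.99):
    """Compute Double DQN bootstrap targets for a batch of transitions.

    online_q / target_q: callables mapping a batch of states to a
    (batch, n_actions) array of Q-values. Decoupling action *selection*
    (online net) from action *evaluation* (target net) reduces the
    overestimation bias of vanilla deep Q-learning.
    """
    next_q_online = online_q(next_states)            # (batch, n_actions)
    best_actions = np.argmax(next_q_online, axis=1)  # selection by online net
    next_q_target = target_q(next_states)            # evaluation by target net
    evaluated = next_q_target[np.arange(len(best_actions)), best_actions]
    # done masks out the bootstrap term at terminal states
    return rewards + gamma * (1.0 - done) * evaluated
```

In a UAV energy-harvesting setting like the one summarized above, the state would typically encode UAV positions and residual energy, and the reward the energy delivered to IoT devices; those modeling choices are not specified in this record.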
