FeDRL-D2D: Federated Deep Reinforcement Learning-Empowered Resource Allocation Scheme for Energy Efficiency Maximization in D2D-Assisted 6G Networks

Device-to-device (D2D)-assisted 6G networks are expected to support the proliferation of ubiquitous mobile applications by enhancing system capacity and overall energy efficiency towards a connected, sustainable world. However, the stringent quality-of-service (QoS) requirements of ultra-massive connectivity, limited network resources, and interference management pose significant challenges to deploying multiple device-to-device pairs (DDPs) without disrupting cellular users. Intelligent resource management and power control are therefore indispensable for alleviating interference among DDPs and optimizing overall system performance and global energy efficiency. To this end, we present a federated deep reinforcement learning (DRL)-based method for energy-efficient resource management in a D2D-assisted heterogeneous network (HetNet). We formulate a joint power-control and channel-allocation problem that maximizes the system's energy efficiency under QoS constraints for cellular user equipment (CUEs) and DDPs. The proposed scheme employs federated learning as a decentralized training paradigm that preserves user privacy, and a double deep Q-network (DDQN) for intelligent resource management. The DDQN uses two separate Q-networks, one for action selection and one for target estimation, to rationalize transmit power and dynamic channel selection, with DDPs acting as agents that may reuse the uplink channels of CUEs. Simulation results show that the proposed method improves overall system energy efficiency by 41.52% and achieves sum-rate gains of 11.65%, 24.78%, and 47.29% over multi-agent actor-critic (MAAC), distributed deep deterministic policy gradient (D3PG), and deep Q-network (DQN) scheduling, respectively. Moreover, it reduces cellular outage probability by 5.88%, 15.79%, and 27.27% compared to MAAC, D3PG, and DQN scheduling, respectively, making it a robust solution for energy-efficient resource allocation in D2D-assisted 6G networks.
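For orientation, the joint power-control and channel-allocation problem described in the abstract is typically posed as a fractional energy-efficiency objective. The following is a generic sketch under common D2D-underlay assumptions; the symbols and constraints are illustrative and not the paper's exact formulation:

\max_{\rho_{d,c},\, p_d} \quad \eta_{\mathrm{EE}} = \frac{\sum_{c} R_c + \sum_{d} R_d}{P_0 + \sum_{c} p_c + \sum_{d} p_d}

subject to

\gamma_c \ge \gamma_c^{\min}, \quad \gamma_d \ge \gamma_d^{\min}, \quad 0 \le p_d \le p_{\max}, \quad \rho_{d,c} \in \{0,1\}, \quad \sum_{c} \rho_{d,c} \le 1,

where R_c and R_d are the achievable rates of CUE c and DDP d, p_c and p_d are their transmit powers, P_0 is a fixed circuit-power term, \gamma_c and \gamma_d are the received SINRs, and \rho_{d,c} = 1 indicates that DDP d reuses the uplink channel of CUE c.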

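The two learning ingredients named in the abstract, a double deep Q-network update and federated aggregation of per-agent models, can be sketched as follows. This is a minimal, hypothetical PyTorch illustration: the network sizes, variable names, and the ddqn_target and fed_avg helpers are assumptions for exposition, not the paper's implementation.

import copy
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps a DDP agent's local observation to one Q-value per (channel, power-level) action."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def ddqn_target(reward, next_obs, done, online: QNet, target: QNet, gamma: float = 0.99):
    """Double-DQN bootstrap target: the online net selects the next action,
    the target net estimates its value (reward and done are 1-D float tensors)."""
    with torch.no_grad():
        best_a = online(next_obs).argmax(dim=1, keepdim=True)   # action selection
        q_next = target(next_obs).gather(1, best_a).squeeze(1)  # target estimation
        return reward + gamma * (1.0 - done) * q_next

def fed_avg(global_net: QNet, local_nets):
    """FedAvg-style aggregation: average locally trained weights so that
    raw channel observations never leave the devices."""
    avg_state = copy.deepcopy(global_net.state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [net.state_dict()[key].float() for net in local_nets]
        ).mean(dim=0)
    global_net.load_state_dict(avg_state)
    return global_net

In such a setup, each DDP agent would train its local Q-network on its own observations (e.g., channel gains, interference levels, QoS margins), periodically upload the weights to an aggregator, and receive the averaged model back.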

Bibliographic Details
Main Authors: Noman, Hafiz Muhammad Fahad; Dimyati, Kaharudin; Noordin, Kamarul Ariffin; Hanafi, Effariza; Abdrabou, Atef
Format: Article
Published: Institute of Electrical and Electronics Engineers 2024
Subjects: QA75 Electronic computers. Computer science; TK Electrical engineering. Electronics. Nuclear engineering
Online Access: http://eprints.um.edu.my/47101/
https://doi.org/10.1109/ACCESS.2024.3434619
Published in: IEEE Access, vol. 12, pp. 109775-109792, 2024. ISSN 2169-3536. DOI: 10.1109/ACCESS.2024.3434619
Institution: Universiti Malaya
Building: UM Library
Collection: Institutional Repository
Continent: Asia
Country: Malaysia
Content Provider: Universiti Malaya
Content Source: UM Research Repository
URL Provider: http://eprints.um.edu.my/