Reinforcement learning based load balancing for fog-cloud computing systems: an optimization approach


Bibliographic Details
Main Authors: Al-Hashimi, Mustafa, Rahiman, Amir Rizaan, Muhammed, Abdullah, Hamid, Nor Asilah Wati
Format: Article
Published: Little Lion Scientific 2023
Online Access:http://psasir.upm.edu.my/id/eprint/110240/
https://www.jatit.org/volumes/hundredone18.php
Description
Summary: Fog-cloud computing is a promising approach to enhancing the efficiency and performance of distributed systems. However, managing resources and balancing workloads in such environments remains challenging due to their inherent complexity and dynamic nature. Effective load-balancing techniques are crucial in fog-cloud computing systems to optimize resource allocation, minimize delays, and maximize throughput. This article presents a reinforcement learning (RL)-based load-balancing system for fog-cloud computing that employs two RL agents: one for allocating new tasks to fog or cloud nodes and another for migrating tasks between fog nodes to ensure fair distribution and increased throughput. The study derives novel state, action, and reward models for both agents, enabling their collaboration during the load-balancing process. Three types of rewards for the RL agents are explored: single objective, multi-objective under non-dominated sorting, and multi-objective under lexicographical sorting. The performance of these methods is assessed using metrics such as average utilization, number of tasks completed, serve rate, and delay. The experimental results show that RL-based scheduling methods, particularly the Reinforcement Learning Multiple Objective (RLRLM) method combined with RL-based migration, outperform the greedy-on-CPU (GRc) and greedy-on-reliability (GRr) methods across all performance metrics. The choice of migration method and reward type also influences performance. These findings highlight RL's potential in optimizing fog-cloud computing and offer valuable insights for future research and practical applications in this field.
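The abstract contrasts two ways of ranking multi-objective rewards: non-dominated (Pareto) sorting and lexicographical sorting. The article does not give its exact formulation, so the sketch below is only an illustrative Python rendering of the two generic ranking schemes, applied to hypothetical reward vectors (e.g., (utilization, serve rate)); the function names and data are assumptions, not the authors' implementation.

```python
from typing import List, Tuple

Reward = Tuple[float, ...]  # e.g. (utilization, serve rate), higher is better

def dominates(a: Reward, b: Reward) -> bool:
    """a Pareto-dominates b: at least as good on every objective, strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(rewards: List[Reward]) -> List[Reward]:
    """Keep only the Pareto front: vectors no other vector dominates."""
    return [r for r in rewards if not any(dominates(o, r) for o in rewards if o != r)]

def lexicographic_best(rewards: List[Reward]) -> Reward:
    """Rank by the first objective, breaking ties on later ones.

    Python compares tuples lexicographically, so max() suffices."""
    return max(rewards)

# Hypothetical candidate actions scored on (utilization, serve rate):
candidates = [(0.9, 0.2), (0.7, 0.8), (0.5, 0.9), (0.4, 0.1)]
print(non_dominated(candidates))     # (0.4, 0.1) is dominated and dropped
print(lexicographic_best(candidates))  # (0.9, 0.2): utilization takes priority
```

Non-dominated sorting keeps every trade-off on the front and defers the final choice, whereas lexicographic sorting commits to a strict priority among objectives; the abstract reports that this choice of reward type influences the resulting scheduling performance.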