Volume 17, Issue 1, pp. 255–270, 2025 | Full Length Article
Mohammad Khalid 1, Hassan Mohamed Muhi-Aldeen 2*, Basma Rashid Mahdi Alhamdani 3
DOI: https://doi.org/10.54216/JISIoT.170118
Due to the Internet's growing importance in daily life, Software-Defined Networking (SDN) deployments increasingly face high-load traffic, and rising network load degrades quality-of-service (QoS) performance. Modern networked systems depend on communication channels to carry data between sources and destinations; under heavy traffic, inefficient packet distribution congests specific channels. Congestion delays packet delivery and causes significant packet loss, reducing network dependability and efficiency. The fundamental issue is the uneven allocation of packets across the available paths: during peak traffic some paths are overloaded while others sit underused, and the resulting bottlenecks slow packet transit and increase packet loss. Existing packet-distribution techniques adapt poorly to dynamic traffic, and current traffic-management solutions often rely on load-balancing algorithms that do not adequately account for the dynamic and unpredictable nature of high-load traffic. This paper introduces Adaptive Load Balancing using Reinforcement Learning (ALBRL), which applies Q-learning and deep reinforcement learning to distribute traffic in real time across SDNs under high load. The model incorporates network-specific indicators, including packet loss ratio, latency, jitter, and traffic-pattern history, to improve routing decisions. In evaluation, ALBRL outperformed static routing and plain Q-learning, achieving an average delay of 15.34 ms, jitter of 2.11 ms, and a packet loss ratio of 7.89%.
Software-Defined Networking, Load Balancing, Q-learning, High-Load Traffic, Deep Q-learning, Deep Reinforcement Learning
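The adaptive path-selection loop described in the abstract can be sketched as a minimal Q-learning agent that picks a forwarding path from monitored QoS metrics. Everything below (the `QPathBalancer` class, the state buckets, and the reward weights) is a hypothetical illustration under assumed parameters, not the paper's actual ALBRL implementation:

```python
# Minimal sketch of a Q-learning path selector for SDN load balancing.
# State discretization, path set, and reward weights are illustrative
# assumptions only.
import random
from collections import defaultdict

class QPathBalancer:
    def __init__(self, paths, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.paths = paths            # candidate paths between a src/dst pair
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration probability
        self.q = defaultdict(float)   # Q[(state, path)] -> expected return

    def discretize(self, delay_ms, jitter_ms, loss_pct):
        # Bucket the monitored QoS metrics into a coarse discrete state.
        return (int(delay_ms // 10), int(jitter_ms // 1), int(loss_pct // 2))

    def choose_path(self, state):
        # Epsilon-greedy: mostly exploit the best-known path, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(self.paths)
        return max(self.paths, key=lambda p: self.q[(state, p)])

    def reward(self, delay_ms, jitter_ms, loss_pct):
        # Lower delay, jitter, and loss give a higher reward (weights assumed).
        return -(1.0 * delay_ms + 2.0 * jitter_ms + 5.0 * loss_pct)

    def update(self, state, path, r, next_state):
        # Standard one-step Q-learning update rule.
        best_next = max(self.q[(next_state, p)] for p in self.paths)
        key = (state, path)
        self.q[key] += self.alpha * (r + self.gamma * best_next - self.q[key])
```

In a real controller this loop would run per flow or per monitoring interval: observe metrics, discretize them, pick a path, install the corresponding flow rules, then update the Q-table from the next measurement.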