Volume 27, Issue 1, pp. 220–233, 2026 | Full Length Article
Bassim A. Hassan 1, Issam A. R. Moghrabi 2, Talal M. Alharbi 3*, Alaa Luqman Ibrahim 4
DOI: https://doi.org/10.54216/IJNS.270120
Artificial neural networks have become a cornerstone of modern artificial intelligence, powering progress in a wide range of fields. Their effective training depends heavily on techniques from unconstrained optimization, with gradient-based iterative methods being especially common. This study presents a new variant of the conjugate gradient method tailored to unconstrained optimization tasks. The method satisfies the sufficient descent condition and is globally convergent. Comprehensive numerical testing highlights its advantages over traditional conjugate gradient techniques, showing improved performance in iteration counts, function evaluations, and overall computational time across a variety of problem sizes. The new approach has also been applied successfully to neural network training, where experimental results show faster convergence and better accuracy, with fewer training iterations and lower mean squared error than standard methods. Overall, this work contributes a practical optimization strategy for neural network training and demonstrates the method's potential to tackle the complex optimization problems commonly encountered in machine learning.
Optimization, Conjugate Gradient, Quasi-Newton, Conjugacy Condition, Neural Networks
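The abstract describes a nonlinear conjugate gradient method that satisfies the sufficient descent condition. The paper's specific update formula is not reproduced in this excerpt, so the sketch below shows only the general nonlinear CG template, using the classical Polak–Ribière+ coefficient with an Armijo backtracking line search and a restart safeguard; all parameter choices here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rosenbrock(x):
    """Classic unconstrained test function with minimum at (1, 1)."""
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=10000):
    """Generic nonlinear CG: PR+ update, Armijo backtracking, descent restart."""
    x = x0.astype(float)
    g = grad(x)
    d = -g  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search (c1 = 1e-4 is a common default)
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Polak-Ribiere+ coefficient, clipped at zero (aids global convergence)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        if g_new @ d >= 0:  # restart if the new direction is not descent
            d = -g_new
        x, g = x_new, g_new
    return x

x_star = conjugate_gradient(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0]))
```

For production use, SciPy's `scipy.optimize.minimize(method='CG')` provides a tuned nonlinear CG implementation; the sketch above is only meant to make the iteration structure concrete.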
[1] T. Yuan, D. Bi, L. Pan, Y. Xie, and J. Lyu, "A compressive hybrid conjugate gradient image recovery approach for radial MRI," Imaging Sci. J., vol. 66, no. 5, pp. 278–288, Jul. 2018, doi: 10.1080/13682199.2018.1434956.
[2] M. Awwal et al., "A spectral RMIL+ conjugate gradient method for unconstrained optimization with applications in portfolio selection and motion control," IEEE Access, vol. 9, pp. 75398–75414, 2021, doi: 10.1109/ACCESS.2021.3081570.
[3] K. Gupta, R. Sharma, and P. R. Rao, "Adaptive conjugate gradient methods for image restoration," J. Image Process. Comput. Vis., vol. 12, no. 4, pp. 45–58, 2023, doi: 10.1016/j.jipcv.2023.03.005.
[4] H. N. Jabbar, Y. A. Laylani, I. A. R. Moghrabi, and B. A. Hassan, "Development of a new numerical conjugate gradient technique for image processing," WSEAS Trans. Comput. Res., vol. 12, 2024.
[5] F. A. and D. A. Merchant, Microscope Image Processing, 2nd ed. London, U.K.: Academic Press, 2023.
[6] X. Wei, J. Ruan, Z. Zhang, L. Yang, and Y. Liu, "A new DY conjugate gradient method and applications to image denoising," IEICE Trans. Inf. Syst., vol. E101-D, no. 12, pp. 2984–2990, Dec. 2018.
[7] G. Garg, V. Kuts, and G. Anbarjafari, "Digital twin for fanuc robots: Industrial robot programming and simulation using virtual reality," Sustainability, vol. 13, no. 18, p. 10336, Sep. 2021, doi: 10.3390/su131810336.
[8] L. Muhammad et al., "An improved preconditioned conjugate gradient method for unconstrained optimization problem with application in robot arm control," Eng. Rep., vol. 6, no. 12, Dec. 2024, doi: 10.1002/eng2.12968.
[9] Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM J. Optim., vol. 10, no. 1, pp. 177–182, 1999.
[10] M. Al-Baali and G. Grandinetti, "On the behavior of damped quasi-Newton methods for unconstrained optimization," Iran. J. Oper. Res., vol. 3, no. 1, pp. 1–10, 2012.
[11] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," J. Res. Nat. Bur. Stand., vol. 49, no. 6, p. 409, Mar. 1952, doi: 10.6028/jres.049.044.
[12] R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," Comput. J., vol. 7, no. 2, pp. 149–154, 1964.
[13] E. Polak and G. Ribiére, "Note sur la convergence de directions conjuguées," Rev. Française Informat. Recherche Opérationnelle, vol. 3, no. 16, pp. 35–43, 1969.
[14] L. Ibrahim, M. A. Sadiq, and S. G. Shareef, "A new conjugate gradient coefficient for unconstrained optimization based on Dai-Liao," Sci. J. Univ. Zakho, vol. 7, no. 1, pp. 34–36, Mar. 2019, doi: 10.25271/sjuoz.2019.7.1.525.
[15] Y. Liu and C. Storey, "Efficient generalized conjugate gradient algorithms, Part 1: Theory," J. Optim. Theory Appl., vol. 69, no. 1, pp. 129–137, 1991.
[17] A. Griewank, "The global convergence of partitioned BFGS on problems with convex decompositions and Lipschitzian gradients," Math. Program., vol. 50, no. 1–3, pp. 141–175, Mar. 1991, doi: 10.1007/BF01594933.
[18] D. A. Tarzanagh and M. R. Peyghami, "A new regularized limited memory BFGS-type method based on modified secant conditions for unconstrained optimization problems," J. Global Optim., vol. 63, no. 4, pp. 709–728, Dec. 2015, doi: 10.1007/s10898-015-0310-7.
[19] X. Li, G. Yu, Z. Wang, and W. Zhang, "Global convergence of BFGS and PRP methods under a modified weak Wolfe–Powell line search," Appl. Math. Model., vol. 47, pp. 811–825, Jul. 2018.
[20] P. Wolfe, "Convergence conditions for ascent methods. II: Some corrections," SIAM Rev., vol. 13, no. 2, pp. 185–188, Apr. 1971.
[22] Y. H. Dai, J. Y. Han, G. H. Liu, D. F. Sun, H. X. Yin, and Y. Yuan, "Convergence properties of nonlinear conjugate gradient methods," SIAM J. Optim., vol. 10, no. 2, pp. 345–358, 2000, doi: 10.1137/S1052623494268443.
[23] I. Bongartz, A. R. Conn, N. Gould, and Ph. L. Toint, "CUTE: Constrained and unconstrained testing environment," ACM Trans. Math. Softw., vol. 21, no. 1, pp. 123–160, Mar. 1995, doi: 10.1145/200979.201043.
[25] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Trans. Math. Softw, vol. 7, no. 1, pp. 17–41, Mar. 1981.
[26] A. Hassan, I. A. R. Moghrabi, A. L. Ibrahim, and H. N. Jabbar, "Improved conjugate gradient methods for unconstrained minimization problems and training recurrent neural network," Eng. Rep., vol. 7, no. 2, Feb. 2025, doi: 10.1002/eng2.70019.