Combined methods for solving degenerate unconstrained optimization problems

  • Viktor Zadachyn Simon Kuznets Kharkiv National University of Economics
  • Maxim Bebiya V. N. Karazin Kharkiv National University
Keywords: unconstrained optimization, degenerate minimum point, optimality conditions, modified Newton method

Abstract

UDC 519.853.6 : 519.613.2

We present constructive second- and fourth-order methods for solving degenerate unconstrained optimization problems. The fourth-order method applied in the present work is a combination of Newton's method and a method that uses fourth-order derivatives. Our approach is based on the decomposition of $\mathbb{R}^n$ into the direct sum of the kernel of the Hessian matrix and its orthogonal complement. The fourth-order method is applied on the kernel of the Hessian matrix, whereas Newton's method is applied on its orthogonal complement. This method proves to be efficient when the kernel of the Hessian matrix is one-dimensional. To obtain the second-order method, Newton's method is combined with the steepest-descent method. We study the efficiency of these methods and analyze their convergence rates. We also propose a new adaptive combined quasi-Newton-type method (ACQNM) based on the use of the second- and fourth-order methods in the degenerate case. The efficiency of ACQNM is demonstrated on a number of the most common test functions.
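For a concrete picture of the combined second-order step, the following minimal sketch (ours, not the authors'; written in Python with NumPy) splits $\mathbb{R}^n$ by an eigendecomposition of the Hessian into an approximate kernel and its orthogonal complement, takes a Newton step on the complement and a steepest-descent step on the kernel, and joins them with a backtracking line search. The function names, the threshold `tol`, and the line-search constants are illustrative assumptions, and the Hessian is assumed positive semidefinite near the minimizer.

```python
import numpy as np

def combined_newton_sd_step(f, grad, hess, x, tol=1e-8):
    """One step of a combined Newton / steepest-descent scheme (illustrative).

    Splits R^n into the approximate kernel of the Hessian and its orthogonal
    complement: Newton on the complement, steepest descent on the kernel.
    Assumes the Hessian is positive semidefinite near the minimizer.
    """
    g, H = grad(x), hess(x)
    lam, V = np.linalg.eigh(H)            # H = V diag(lam) V^T
    small = np.abs(lam) < tol             # eigenvectors spanning the approximate kernel
    Vk, Vc = V[:, small], V[:, ~small]    # kernel / complement bases
    # Newton direction on the complement (pseudo-inverse of H restricted there)
    p = -Vc @ ((Vc.T @ g) / lam[~small])
    # Steepest-descent direction on the kernel component of the gradient
    p -= Vk @ (Vk.T @ g)
    # Backtracking (Armijo) line search along the combined direction
    t, fx, slope = 1.0, f(x), g @ p
    while f(x + t * p) > fx + 1e-4 * t * slope and t > 1e-12:
        t *= 0.5
    return x + t * p

# Degenerate example: f(x) = x1^2 + x2^4 has a singular Hessian at its minimizer.
f = lambda x: x[0] ** 2 + x[1] ** 4
grad = lambda x: np.array([2 * x[0], 4 * x[1] ** 3])
hess = lambda x: np.diag([2.0, 12 * x[1] ** 2])

x = np.array([1.0, 1.0])
for _ in range(60):
    x = combined_newton_sd_step(f, grad, hess, x)
print(x)  # slowly approaches the degenerate minimizer at the origin
```

In the authors' fourth-order method, the steepest-descent move on the kernel would instead be driven by fourth-order derivatives, which is the case where the method is reported to be efficient for a one-dimensional kernel.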

References

N. Andrei, A collection of 75 unconstrained optimization test functions, Research Institute for Informatics, Technical Report, 6, 1–9 (2018).

N. Andrei, Modern numerical nonlinear optimization, Springer, Cham (2022).

K. N. Belash, A. A. Tret'yakov, Methods for solving degenerate problems, Comput. Math. and Math. Phys., 28, № 4, 90–94 (1988).

K. Ghazali, J. Sulaiman, Y. Dasril, D. Gabda, Newton-SOR iteration for solving large-scale unconstrained optimization problems with an arrowhead Hessian matrices, J. Phys.: Conf. Ser., 1358, № 1, 1–10 (2019).

G. Wang, Y. Wei, S. Qiao, Generalized inverses: theory and computations, Springer Nature, Singapore (2018).

I. Goodfellow, Y. Bengio, A. Courville, Deep learning, MIT Press (2016).

B. A. Hassan, M. A. Al Kahya, A new class of quasi-Newton updating formulas for unconstrained optimization, J. Interdiscip. Math., 24, № 8, 2355–2366 (2021).

X. Han, J. Zhang, J. Chen, New hybrid conjugate gradient algorithm for unconstrained optimization, Bull. Iran. Math. Soc., 43, № 6, 2067–2084 (2017).

J.-P. Penot, Higher-order optimality conditions and higher-order tangents sets, SIAM J. Optim., 27, № 4, 2508–2527 (2017).

B. Jimenez, V. Novo, Higher-order optimality conditions for strict local minima, Ann. Oper. Res., 157, 183–192 (2008).

L. Li, M. Qin, H. Wang, A regularized Newton method with correction for unconstrained convex optimization, Open J. Optim., 68, № 1, 44–52 (2016).

N. Andrei, Diagonal approximation of the Hessian by finite differences for unconstrained optimization, J. Optim. Theory and Appl., 185, № 3, 859–879 (2020).

X. Li, B. Wang, W. Hu, A modified nonmonotone BFGS algorithm for unconstrained optimization, J. Inequal. and Appl., 183, 1–18 (2017).

N. G. Maratos, M. A. Moraitis, Some results on the Sign recurrent neural network for unconstrained minimization, Neurocomputing, 287, 1–25 (2018).

D. Mehta, T. Chen, T. Tang, J. D. Hauenstein, The loss surface of deep linear networks viewed through the algebraic geometry lens; arXiv preprint arXiv:1810.07716 (2018).

T. D. Niri, M. M. Hosseini, M. Heydari, An efficient improvement of the Newton method for solving nonconvex optimization problems, Comput. Methods Different. Equat., 7, № 1, 69–85 (2019).

W. Quapp, Searching minima of an N-dimensional surface: a robust valley following method, Comput. and Math. Appl., 41, 407–414 (2001).

G. Ma, H. Lin, W. Jin, D. Han, Two modified conjugate gradient methods for unconstrained optimization with applications in image restoration problems, J. Appl. Math. and Comput., 68, № 6, 4733–4758 (2022).

A. R. Sankar, V. N. Balasubramanian, Are saddles good enough for deep learning?; arXiv preprint arXiv:1706.02052 (2017).

C. Shen, X. Chen, Y. Liang, A regularized Newton method for degenerate unconstrained optimization problems, Optim. Lett., 6, 1913–1933 (2012).

J. Lu, Matrix decomposition and applications, Amazon Digital Services LLC (2022).

E. Szczepanik, A. Prusinska, A. Tret'yakov, The p-factor method for nonlinear optimization, Schedae Inform., 21, 141–157 (2012).

D. di Serafino, G. Toraldo, M. Viola, Using gradient directions to get global convergence of Newton-type methods, Appl. Math. and Comput., 409, Article № 125612 (2021).

V. M. Zadachyn, Higher-order optimality conditions for degenerate unconstrained optimization problems, J. Optim., Different. Equat. and Appl., 30, № 1, 88–97 (2022); DOI: 10.15421/142204.

V. M. Zadachyn, Modified Newton and quasi-Newton-type methods with pseudo-inversions for solving degenerate problems, Ph. D. Thesis, Lomonosov Moscow State University, Moscow (1988) (in Russian); https://search.rsl.ru/ru/record/01000049990.

V. I. Meleshko, V. M. Zadachin, Factorizations and pseudo-inversions of singular perturbed matrices with nonfixed signs, Izv. Vyss. Uchebn. Zaved. Mat., 11, 42–50 (1987).

E. G. Birgin, J. M. Martinez, The use of quadratic regularization with a cubic descent condition for unconstrained optimization, SIAM J. Optim., 27, № 2, 1049–1074 (2017).

E. G. Birgin, J. M. Martinez, Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization, Comput. Optim. and Appl., 73, 707–753 (2019).

S. Javed, A. Khan, Efficient regularized Newton-type algorithm for solving convex optimization problem, J. Appl. Math. and Comput., 68, № 4, 2343–2363 (2022).

H. Zhang, Q. Ni, A new regularized quasi-Newton method for unconstrained optimization, Optim. Lett., 12, № 7, 1639–1658 (2018).

Published
02.06.2024
How to Cite
Zadachyn V., and Bebiya M. “Combined Methods for Solving Degenerate Unconstrained Optimization Problems”. Ukrains’kyi Matematychnyi Zhurnal, Vol. 76, no. 5, June 2024, pp. 695–718, doi:10.3842/umzh.v76i5.7395.
Section
Research articles