Vol. 29, No. 3, 2022, pp. 24–44
UDC 519.8
V. N. Krutikov, P. S. Stanimirović, O. N. Indenko, E. M. Tovbis, and L. A. Kazakovtsev
Optimization of subgradient method parameters based on rank-two correction of metric matrices
Abstract:
We develop a relaxation subgradient method (RSM) with built-in parameter optimization based on rank-two corrections of the metric matrix, whose structure is analogous to that of quasi-Newton (QN) methods. The transformation of the metric matrix suppresses the components orthogonal to the minimal-length subgradient vector and amplifies the components collinear with it. Constructing the metric matrix is formulated as the problem of solving a system of inequalities, and a new learning algorithm is proposed for solving this system. We obtain an estimate of its convergence rate that depends on the parameters of the subgradient set. On this basis, a new RSM is developed and analyzed. Computational experiments on complex large-scale functions confirm the effectiveness of the proposed algorithm.
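To make the described transformation concrete, the following is a minimal Python sketch of a subgradient iteration with a metric correction that amplifies the metric's component collinear with the current subgradient and suppresses the orthogonal part. The function names (metric_update, rsm_sketch), the scaling factors q_col and q_ort, the diminishing step-size rule, and the toy objective are illustrative assumptions only; they are not the paper's formulas, which build the metric by solving a system of inequalities via a learning algorithm.

```python
import numpy as np

def metric_update(H, g, q_col=1.2, q_ort=0.9):
    """Scale the part of H collinear with the subgradient g by q_col (> 1,
    amplify) and the orthogonal remainder by q_ort (< 1, suppress).
    Illustrative correction, not the authors' exact update."""
    Hg = H @ g
    denom = g @ Hg
    if denom <= 1e-12:          # g is numerically in the null space of H
        return H
    P = np.outer(Hg, Hg) / denom          # component of H "along" g
    return q_ort * (H - P) + q_col * P    # keeps H symmetric positive definite

def rsm_sketch(subgrad, x0, steps=200):
    """Toy subgradient iteration using the metric correction above."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)
    for k in range(steps):
        g = subgrad(x)
        d = H @ g                          # direction in the modified metric
        nd = np.linalg.norm(d)
        if nd < 1e-12:
            break
        # Crude diminishing step in place of the paper's step-size rule.
        x = x - (0.5 / (k + 1)) * d / nd
        H = metric_update(H, g)
    return x

# Example: minimize the nonsmooth convex function f(x) = |x1| + 2 |x2|.
def subgrad(x):
    s = np.sign(x)
    s[s == 0] = 1.0      # pick a valid subgradient at the kinks
    return s * np.array([1.0, 2.0])

print(rsm_sketch(subgrad, [1.5, -1.0]))   # should approach the minimizer [0, 0]
```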
Tab. 4, bibliogr. 32.
Keywords: convex optimization, nonsmooth optimization, relaxation subgradient method.
DOI: 10.33048/daio.2022.29.739
Vladimir N. Krutikov 1
Predrag S. Stanimirović 2
Oksana N. Indenko 1
Elena M. Tovbis 3
Lev A. Kazakovtsev 3
1. Kemerovo State University,
6 Krasnaya Street, 650043 Kemerovo, Russia
2. Faculty of Sciences and Mathematics, University of Niš,
33 Višegradska Street, 18000 Niš, Serbia
3. Reshetnev Siberian State University of Science and Technology,
31 Krasnoyarskiy Rabochiy Avenue, 660031 Krasnoyarsk, Russia
e-mail: krutikovvn@rambler.ru, pecko@pmf.ni.ac.rs, oksana230805@mail.ru, sibstu2006@rambler.ru, levk@bk.ru
Received May 10, 2022
Revised May 10, 2022
Accepted May 12, 2022
References
[1] N. Z. Shor, Application of the gradient descent method for solving network transportation problems, in Proc. Scientific Seminar on Theoretic and Applied Problems of Cybernetics and Operations Research, Issue 1 (Nauch. sovet po kibernetike AN USSR, Kyiv, 1962), pp. 9–17 [Russian].
[2] B. T. Polyak, A general method for solving extremal problems, Dokl. Akad. Nauk SSSR 174 (1), 33–36 (1967) [Russian].
[3] B. T. Polyak, Introduction to Optimization (Nauka, Moscow, 1983) [Russian].
[4] P. Wolfe, Note on a method of conjugate subgradients for minimizing nondifferentiable functions, Math. Program. 7 (1), 380–383 (1974).
[5] E. G. Gol’shtein, A. S. Nemirovskii, and Yu. E. Nesterov, Level method, its generalizations and applications, Ekon. Mat. Metody 31 (3), 164–180 (1995) [Russian].
[6] Yu. E. Nesterov, Universal gradient methods for convex optimization problems, Math. Program., Ser. A, 152, 381–404 (2015).
[7] A. V. Gasnikov and Yu. E. Nesterov, Universal method for stochastic composite optimization (Cornell Univ., Ithaca, NY, 2016) (Cornell Univ. Libr. e-Print Archive, arXiv:1604.05275).
[8] H. Ouyang and A. Gray, Stochastic smoothing for nonsmooth minimizations: Accelerating SGD by exploiting structure, in Proc. 29th Int. Conf. Machine Learning, Edinburgh, Scotland, June 26–July 1, 2012 (Omnipress, Madison, WI, 2012), pp. 33–40.
[9] D. Boob, Q. Deng, and G. Lan, Stochastic first-order methods for convex and nonconvex functional constrained optimization, Math. Program. (2022) [in print]. Available at doi.org/10.1007/s10107-021-01742-y (accessed June 17, 2022).
[10] G. Lan, First-Order and Stochastic Optimization Methods for Machine Learning (Springer, Cham, 2020).
[11] S. Ghadimi and G. Lan, Accelerated gradient methods for nonconvex nonlinear and stochastic programming, Math. Program. 156 (1–2), 59–99 (2016).
[12] C. Fang, C. J. Li, Z. Lin, and T. Zhang, Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator, in Advances in Neural Information Processing Systems 31 (32nd Annual Conf., Montréal, Canada, Dec. 3–8, 2018) (Curran Associates, Red Hook, NY, 2018), pp. 687–697.
[13] A. S. Nemirovskii and D. B. Yudin, Complexity of Problems and Efficiency of Methods in Optimization (Nauka, Moscow, 1979) [Russian].
[14] N. Z. Shor, Minimization Methods for Non-differentiable Functions and Their Applications (Nauk. Dumka, Kyiv, 1979) [Russian].
[15] H. Cao, Y. Song, and K. Khan, Convergence of subtangent-based relaxations of non-linear programs, Processes 7 (4), 221 (2019).
[16] B. T. Polyak, Minimization of nonsmooth functionals, Zh. Vychisl. Mat. Mat. Fiz. 9 (3), 509–521 (1969) [Russian] [Comput. Math. Math. Phys. 9 (3), 14–29 (1969)].
[17] V. N. Krutikov, N. S. Samoilenko, and V. V. Meshechkin, On the properties of the method of minimization for convex functions with relaxation on the distance to extremum, Avtom. Telemekh., No. 1, 126–137 (2019) [Russian] [Autom. Remote Control 80 (1), 102–111 (2019)].
[18] V. F. Demyanov and L. V. Vasilyev, Non-differentiable Optimization (Nauka, Moscow, 1981) [Russian].
[19] C. Lemarechal, An extension of Davidon methods to non-differentiable problems, Math. Program. Study 3, 95–109 (1975).
[20] V. N. Krutikov and T. V. Petrova, Relaxation method of minimization with space extension in the subgradient direction, Ekon. Mat. Metody 39 (1), 106–119 (2003) [Russian].
[21] V. N. Krutikov and T. A. Gorskaya, A family of subgradient relaxation methods with rank 2 correction of metric matrices, Ekon. Mat. Metody 45 (4), 37–80 (2009) [Russian].
[22] V. A. Skokov, Note on minimization methods employing space stretching, Kibern. Sist. Anal., No. 4, 115–117 (1974) [Russian] [Cybern. Syst. Anal. 10 (4), 689–692 (1974)].
[23] V. N. Krutikov and N. S. Samoilenko, On the convergence rate of the subgradient method with metric variation and its applications in neural network approximation schemes, Vestn. Tomsk. Gos. Univ., Ser. Mat. Mekh., No. 55, 22–37 (2018) [Russian].
[24] J. Nocedal and S. J. Wright, Numerical Optimization (Springer, New York, 2006).
[25] M. Avriel, Nonlinear Programming: Analysis and Methods (Dover Publ., Mineola, 2003).
[26] E. A. Nurminskii and D. Tien, Method of conjugate subgradients with constrained memory, Avtom. Telemekh., No. 4, 67–80 (2014) [Russian] [Autom. Remote Control 75 (4), 646–656 (2014)].
[27] Ya. Z. Tsypkin, Foundations of the Theory of Learning Systems (Nauka, Moscow, 1970) [Russian].
[28] E. L. Zhukovskii and R. Sh. Liptser, A recurrence method for computing the normal solutions of linear algebraic equations, Zh. Vychisl. Mat. Mat. Fiz. 12 (4), 843–857 (1972) [Russian] [Comput. Math. Math. Phys. 12 (4), 1–18 (1972)].
[29] V. N. Krutikov, L. A. Kazakovtsev, and V. L. Kazakovtsev, Nonsmooth regularization in radial artificial neural networks, IOP Conf. Ser.: Mater. Sci. Eng. 450 (4), ID 042010, 7 pp. (2018).
[30] V. N. Krutikov, L. A. Kazakovtsev, G. Sh. Shkaberina, and V. L. Kazakovtsev, New method of training two-layer sigmoid neural networks using regularization, IOP Conf. Ser.: Mater. Sci. Eng. 537 (4), ID 042055, 6 pp. (2019).
[31] R. J. Tibshirani, Regression shrinkage and selection via the Lasso, J. Royal Stat. Soc., Ser. B (Methodological) 58, 267–288 (1996).
[32] R. Frostig, R. Ge, S. M. Kakade, and A. Sidford, Un-regularizing: Approximate proximal point and faster stochastic algorithms for empirical risk minimization, Proc. Mach. Learn. Res. 37, 2540–2548 (2015).