A Regularized Newton Method for lq-Norm Composite Optimization Problems
This paper is concerned with lq (0 < q < 1) norm regularized minimization problems with a twice continuously differentiable loss function. Many algorithms, most of them first-order methods, have been proposed for this class of nonconvex and nonsmooth composite problems. In this work, we propose HpgSRN, a hybrid of the proximal gradient method and a subspace regularized Newton method. The whole iterate sequence produced by HpgSRN is proved to have finite length and to converge to an L-type stationary point under a mild curve-ratio condition and the Kurdyka–Łojasiewicz property of the cost function; it converges linearly if the Kurdyka–Łojasiewicz property holds with exponent 1/2. Moreover, a superlinear convergence rate for the iterate sequence is achieved under an additional local error bound condition. Our convergence results require neither the isolatedness nor the strict local minimality of the lq-stationary point. Numerical comparisons with ZeroFPR, a hybrid of the proximal gradient method and a quasi-Newton method for the forward-backward envelope of the cost function, proposed in [A. Themelis, L. Stella, and P. Patrinos, SIAM J. Optim., 28 (2018), pp. 2274–2303], on lq-norm regularized linear and logistic regressions with real data indicate that HpgSRN not only requires much less computing time but also yields comparable or even better sparsity and objective function values.
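The proximal gradient component of such methods hinges on the proximal mapping of the lq penalty. For q = 1/2 the scalar prox reduces to a cubic equation in y = sqrt(|x|), so each coordinate can be solved in closed form. The sketch below illustrates this building block only, not the full HpgSRN method (it omits the subspace regularized Newton step entirely); the function names, step size, and problem setup are our own illustrative choices.

```python
import numpy as np

def prox_half(z, lam):
    """Scalar prox of lam*|x|^{1/2}: argmin_x 0.5*(x - z)^2 + lam*sqrt(|x|).
    Found by rooting the first-order condition, a cubic in y = sqrt(x),
    and comparing every candidate against x = 0."""
    s = np.sign(z)
    z = abs(z)
    best_x, best_val = 0.0, 0.5 * z**2            # objective value at x = 0
    # stationarity for x > 0 with x = y^2:  2y^3 - 2zy + lam = 0
    for y in np.roots([2.0, 0.0, -2.0 * z, lam]):
        if abs(y.imag) < 1e-10 and y.real > 0:
            x = y.real**2
            val = 0.5 * (x - z)**2 + lam * np.sqrt(x)
            if val < best_val:
                best_x, best_val = x, val
    return s * best_x

def prox_grad_l_half(A, b, lam, step, x0, iters=500):
    """Proximal gradient iterations for 0.5*||Ax - b||^2 + lam*||x||_{1/2}^{1/2}."""
    x = x0.copy()
    for _ in range(iters):
        g = A.T @ (A @ x - b)                     # gradient of the smooth loss
        z = x - step * g                          # forward (gradient) step
        x = np.array([prox_half(zi, step * lam) for zi in z])  # backward step
    return x
```

Small inputs are thresholded exactly to zero (which is how the iterates acquire sparsity), while large inputs are only slightly shrunk; with a step size below 1/||A||^2 the objective is nonincreasing along the iterates.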
| Barcode | Collection Type | Call Number | Location | Status |
|---|---|---|---|---|
| art147397 | null | Artikel | Gdg9-Lt3 | Available but not for loan (No Loan) |
No other versions available.