Risk-Based Robust Statistical Learning by Stochastic Difference-of-Convex Value-Function Optimization
This paper proposes the use of a variant of the conditional value-at-risk (CVaR) risk measure, called the interval conditional value-at-risk (In-CVaR), for the treatment of outliers in statistical learning by excluding the risks associated with the left and right tails of the loss. The risk-based robust learning task is to minimize the In-CVaR risk measure of a random functional that is the composite of a piecewise affine loss function with a potentially nonsmooth difference-of-convex statistical learning model. With the optimization formula of CVaR, the objective function of the minimization problem is the difference of two convex functions each being the optimal objective value of a univariate convex stochastic program. An algorithm that combines sequential sampling and convexification is developed, and its subsequential almost-sure convergence to a critical point is established. Numerical experiments demonstrate the effectiveness of the In-CVaR–based estimator computed by the sampling-based algorithm for robust regression and classification. Overall, this research extends the traditional approaches for treating outliers by allowing nonsmooth and nonconvex statistical learning models, employing a population risk-based objective, and applying a sampling-based algorithm with the stationarity guarantee for solving the resulting nonconvex and nonsmooth stochastic program.
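The optimization formula of CVaR and the difference-of-convex structure described above can be illustrated on a small empirical distribution. The sketch below is illustrative only, not the paper's algorithm: the function names are hypothetical, and it assumes the In-CVaR over a probability interval [a, b] can be computed as a scaled difference of two Rockafellar–Uryasev CVaR values, each of which is the optimal value of a univariate convex program in the scalar t.

```python
import numpy as np

def cvar_rockafellar_uryasev(losses, alpha):
    # CVaR_alpha(Z) = min_t { t + E[(Z - t)_+] / (1 - alpha) }.
    # For an empirical distribution the minimum is attained at one of the
    # observed loss values, so a search over the sample suffices.
    losses = np.asarray(losses, dtype=float)
    vals = [t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)
            for t in losses]
    return min(vals)

def in_cvar(losses, a, b):
    # Hypothetical empirical In-CVaR on the interval [a, b], written as a
    # scaled difference of two CVaR optimal values -- mirroring the
    # difference-of-convex-value-functions structure in the abstract.
    return ((1 - a) * cvar_rockafellar_uryasev(losses, a)
            - (1 - b) * cvar_rockafellar_uryasev(losses, b)) / (b - a)

losses = np.arange(10.0)  # uniform empirical loss taking values 0, 1, ..., 9
print(cvar_rockafellar_uryasev(losses, 0.9))  # mean of the worst 10%: 9.0
print(in_cvar(losses, 0.1, 0.9))  # tails at both ends excluded: 4.5
```

Note that `in_cvar(losses, 0.1, 0.9)` reproduces the trimmed mean of the middle 80% of the sample (here, the mean of 1 through 8), which is exactly the outlier-robust behavior that excluding the left and right tails of the loss is meant to achieve.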