Learning Deep Models: Critical Points and Local Openness
With the increasing popularity of nonconvex deep models, developing a unifying theory for the optimization problems that arise from training these models has become increasingly important. Toward this end, we present in this paper a unifying landscape analysis framework that applies when the training objective function is a composition of simple functions. Using the local openness property of the underlying training models, we provide simple sufficient conditions under which any local optimum of the resulting optimization problem is globally optimal. We first completely characterize the local openness of the symmetric and nonsymmetric matrix multiplication mappings. We then use this characterization to (1) provide a simple proof of the classical result of Burer-Monteiro and extend it to non-continuous loss functions; (2) show that every local optimum of a two-layer linear network is globally optimal; unlike many existing results in the literature, this requires no assumptions on the target data matrix Y or the input data matrix X; (3) develop a complete characterization of the local/global optima equivalence of multilayer linear neural networks, with various counterexamples showing the necessity of each of our assumptions; and (4) show the global/local optima equivalence of overparameterized nonlinear deep models having a certain pyramidal structure. In contrast to existing works, this result requires no assumption on the differentiability of the activation functions and goes beyond “full-rank” cases.
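To make the compositional setting concrete, the following is a minimal LaTeX sketch of the two-layer linear-network objective and the standard notion of local openness referred to in the abstract. The symbols W_1, W_2, X, Y and the loss ℓ follow common conventions and are illustrative assumptions, not quoted from the record.

```latex
% Illustrative sketch only: the notation W_1, W_2, X, Y and the loss \ell are
% assumed for exposition and are not taken from the catalog record.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Training a two-layer linear network is a composite problem: the inner map is
matrix multiplication and the outer map is a loss,
\[
  \min_{W_1,\,W_2} \ \ell\bigl(W_2 W_1 X,\ Y\bigr)
  \;=\; \min_{W_1,\,W_2} \ (\ell \circ \mathcal{M})(W_1, W_2),
  \qquad \mathcal{M}(W_1, W_2) := W_2 W_1 X.
\]
A map $f$ is locally open at $x$ if it sends every neighborhood of $x$ to a
neighborhood of $f(x)$:
\[
  f(x) \in \operatorname{int} f(U)
  \quad \text{for every open set } U \ni x.
\]
Roughly, when the inner map $\mathcal{M}$ is locally open at a local minimizer
of the composite, local optimality transfers from $(W_1, W_2)$ to
$\mathcal{M}(W_1, W_2)$, which is the mechanism behind the local/global
equivalence results summarized above.
\end{document}
```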
| Barcode | Collection Type | Call Number | Location | Status |
|---|---|---|---|---|
| art142316 | null | Artikel | Gdg9-Lt3 | Available, but not for loan (No Loan) |
No other versions available.