Learning to Accelerate Evolutionary Search for Large-Scale Multiobjective Optimization
Most existing evolutionary search strategies are inefficient when directly handling the high-dimensional decision space of large-scale multiobjective optimization problems (LMOPs). To improve efficiency on LMOPs, this article proposes an accelerated evolutionary search (AES) strategy. Its main idea is to learn a gradient-descent-like direction vector (GDV) for each solution via a specially trained feedforward neural network; the learned direction approximates the fastest convergent direction and is used to reproduce new solutions efficiently. Specifically, a multilayer perceptron (MLP) with a single hidden layer is constructed, in which the numbers of neurons in the input and output layers equal the dimension of the decision space. Then, to obtain suitable training data for this model, the current population is divided into two subsets by nondominated sorting: each poor solution in the subset with worse convergence is paired with the elite solution in the other subset that has the minimum angle to it, since that elite is considered most likely to guide it toward rapid convergence. Next, the MLP is updated via backpropagation with gradient descent on this prepared dataset. Finally, an accelerated large-scale multiobjective evolutionary algorithm (ALMOEA) is designed by using AES as its reproduction operator. Experimental studies validate the effectiveness of the proposed AES in handling the search spaces of LMOPs with dimensionality ranging from 1000 to 10000. Comparisons with six state-of-the-art evolutionary algorithms further show the better efficiency and performance of the proposed optimizer in solving various LMOPs.
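To make the workflow described in the abstract concrete, the sketch below illustrates the general idea in Python/NumPy: pair poor solutions with minimum-angle elites, train a one-hidden-layer MLP by backpropagation, and move solutions along the learned direction. All function names, the hidden-layer width, and the training hyperparameters are illustrative assumptions and not taken from the paper; the authors' exact architecture, pairing rule, and settings may differ.

```python
# Minimal sketch of the AES idea, assuming a plain NumPy MLP and simple
# hyperparameters (hidden=32, lr=0.01, epochs=200) chosen for illustration only.
import numpy as np

def pair_by_min_angle(poor, elite):
    """Pair each poor solution with the elite solution at the minimum angle to it."""
    p = poor / (np.linalg.norm(poor, axis=1, keepdims=True) + 1e-12)
    e = elite / (np.linalg.norm(elite, axis=1, keepdims=True) + 1e-12)
    cos = p @ e.T                         # pairwise cosine similarities
    return elite[np.argmax(cos, axis=1)]  # smallest angle = largest cosine

def train_mlp(poor, target, hidden=32, lr=0.01, epochs=200, seed=0):
    """Train a one-hidden-layer MLP mapping poor solutions toward their paired elites."""
    rng = np.random.default_rng(seed)
    d = poor.shape[1]                     # input and output size equal the decision dimension
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        h = np.tanh(poor @ W1 + b1)       # forward pass
        out = h @ W2 + b2
        err = out - target                # squared-error gradient
        # Backpropagation with plain gradient descent.
        gW2 = h.T @ err / len(poor); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = poor.T @ dh / len(poor); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def accelerated_offspring(x, params, step=1.0):
    """Move a solution along the learned gradient-descent-like direction (GDV)."""
    W1, b1, W2, b2 = params
    predicted = np.tanh(x @ W1 + b1) @ W2 + b2
    return x + step * (predicted - x)     # direction points from x toward the predicted elite-like point

# Example usage on random data: 10 poor and 10 elite solutions in a 1000-D decision space.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    poor, elite = rng.random((10, 1000)), rng.random((10, 1000))
    params = train_mlp(poor, pair_by_min_angle(poor, elite))
    child = accelerated_offspring(poor[0], params)
```

In an actual algorithm such as ALMOEA, this learned mapping would serve as the reproduction operator within the usual evolutionary loop (selection, reproduction, environmental selection); the sketch only shows the direction-learning step in isolation.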
| Barcode | Collection Type | Call Number | Location | Status |
|---|---|---|---|---|
| art145146 | null | Article | Gdg9-Lt3 | Available but not for loan (No Loan) |
No other versions available.