Adaptive Multifactorial Evolutionary Optimization for Multitask Reinforcement Learning
Evolutionary computation has largely exhibited its potential to complement conventional learning algorithms in a variety of machine learning tasks, especially those related to unsupervised (clustering) and supervised learning. Only recently, however, has the computational efficiency of evolutionary solvers been brought into perspective for training reinforcement learning models. Moreover, most studies framed so far within this context have considered environments and tasks conceived in isolation, without any exchange of knowledge among related tasks. In this manuscript we present A-MFEA-RL, an adaptive version of the well-known MFEA algorithm whose search and inheritance operators are tailored for multitask reinforcement learning environments. Specifically, our approach includes crossover and inheritance mechanisms that refine the exchange of genetic material by exploiting the multilayered structure of modern deep-learning-based reinforcement learning models. To assess the performance of the proposed approach, we design an extensive experimental setup comprising multiple reinforcement learning environments of varying complexity, over which A-MFEA-RL is compared to alternative nonevolutionary multitask reinforcement learning approaches. As concluded from the discussion of the results, A-MFEA-RL not only achieves competitive success rates over the simultaneously addressed tasks, but also fosters the exchange of knowledge among tasks that can be intuitively expected to share a synergistic relationship.
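The abstract describes crossover operators that exchange genetic material between tasks at the granularity of network layers. As a rough illustration only (the paper's actual adaptive operators are not reproduced here), the following sketch shows an MFEA-style layer-wise crossover in which two parent "genomes" (lists of layer parameters) swap whole layers with a random mating probability `rmp`; the function name and representation are assumptions for illustration.

```python
import random

def layerwise_crossover(parent_a, parent_b, rmp=0.3):
    """Illustrative MFEA-style crossover between two network genomes.

    Each parent is a list of per-layer parameter blocks. With probability
    `rmp` (random mating probability) per layer, the offspring swap that
    layer across tasks; otherwise each child keeps its own parent's layer.
    This is a simplified sketch, not the adaptive operator of A-MFEA-RL.
    """
    child_a, child_b = [], []
    for layer_a, layer_b in zip(parent_a, parent_b):
        if random.random() < rmp:
            # Transfer genetic material across tasks at this layer.
            child_a.append(layer_b)
            child_b.append(layer_a)
        else:
            # Keep the layer from the same parent.
            child_a.append(layer_a)
            child_b.append(layer_b)
    return child_a, child_b
```

In a full multifactorial setup, each genome would additionally carry a skill factor identifying the task it excels at, and crossover across different skill factors would only occur with probability `rmp`; the sketch above keeps only the layer-swapping mechanics.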
| Barcode | Collection Type | Call Number | Location | Status |
|---|---|---|---|---|
| art142258 | null | Article | Gdg9-Lt3 | Available but not for loan - No Loan |
No other version available.