Enhanced Rolling Horizon Evolution Algorithm With Opponent Model Learning: Results for the Fighting Game AI Competition
The Fighting Game AI Competition (FTGAIC) provides a challenging benchmark for two-player video game artificial intelligence. The challenge arises from the large action space, the diverse styles of characters and abilities, and the real-time nature of the game. In this article, we propose a novel algorithm that combines the rolling horizon evolution algorithm (RHEA) with opponent model learning. The approach is readily applicable to any two-player video game. In contrast to conventional RHEA, our approach incorporates an opponent model, optimized either by supervised learning with a cross-entropy loss or by reinforcement learning with policy gradient or Q-learning, based on historical observations of the opponent. The model is learned during live gameplay. With the learned opponent model, the extended RHEA can make more realistic plans based on what the opponent is likely to do, which tends to lead to better results. We compared our approach directly with the bots from the FTGAIC 2018 competition and found that our method significantly outperforms all of them for all three characters. Furthermore, our proposed bot with the policy-gradient-based opponent model was the only one of the top five bots in the 2019 competition that did not use Monte Carlo tree search; it achieved second place while using much less domain knowledge than the winner.
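As a rough illustration of the idea described in the abstract (not the authors' published implementation), the sketch below shows where a learned opponent model slots into a rolling horizon evolution loop: candidate action sequences are evolved each frame, and every rollout samples the opponent's moves from a model that is updated online by a cross-entropy (maximum-likelihood) step on the opponent's observed actions. All names here are hypothetical assumptions: `OpponentModel`, `forward_model.simulate`, `evaluate`, and the hyperparameter values stand in for the game's forward model, state evaluation, and tuning.

```python
import math
import random

HORIZON = 10        # length of each candidate action sequence (assumed value)
POP_SIZE = 20       # population size for the evolution (assumed value)
GENERATIONS = 5     # generations evolved per game frame (assumed value)
MUTATION_RATE = 0.2


class OpponentModel:
    """Minimal tabular softmax policy over opponent actions, updated online
    by a cross-entropy (maximum-likelihood) gradient step on each observed
    opponent action. The paper's model conditions on game features; this
    sketch ignores the state for brevity."""

    def __init__(self, n_actions, lr=0.1):
        self.logits = [0.0] * n_actions
        self.lr = lr

    def probs(self):
        m = max(self.logits)
        exps = [math.exp(l - m) for l in self.logits]
        z = sum(exps)
        return [e / z for e in exps]

    def sample(self, state):
        # Sample an opponent action from the current softmax distribution.
        r, acc = random.random(), 0.0
        for a, p in enumerate(self.probs()):
            acc += p
            if r <= acc:
                return a
        return len(self.logits) - 1

    def observe(self, state, opp_action):
        # Cross-entropy gradient for a softmax: (one-hot target) - probs.
        p = self.probs()
        for a in range(len(self.logits)):
            grad = (1.0 if a == opp_action else 0.0) - p[a]
            self.logits[a] += self.lr * grad


def rollout(state, plan, forward_model, evaluate, opponent_model):
    """Score a plan by simulating it while the opponent model picks the
    opponent's actions, instead of assuming a random or idle opponent as
    vanilla RHEA does."""
    for my_action in plan:
        opp_action = opponent_model.sample(state)
        state = forward_model.simulate(state, my_action, opp_action)
    return evaluate(state)


def rhea_decide(state, actions, forward_model, evaluate, opponent_model):
    """One frame of rolling horizon evolution: evolve plans against the
    opponent model and return the first action of the best plan."""
    population = [[random.choice(actions) for _ in range(HORIZON)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population,
                        key=lambda p: rollout(state, p, forward_model,
                                              evaluate, opponent_model),
                        reverse=True)
        elite = scored[:POP_SIZE // 2]
        # Refill the population with mutated copies of elite plans.
        population = elite + [
            [random.choice(actions) if random.random() < MUTATION_RATE else a
             for a in random.choice(elite)]
            for _ in range(POP_SIZE - len(elite))
        ]
    best = max(population,
               key=lambda p: rollout(state, p, forward_model,
                                     evaluate, opponent_model))
    return best[0]
```

A real-time agent would additionally need a time budget per frame and typically a shift buffer that carries the previous best plan forward; the sketch only highlights the single change the abstract describes, replacing RHEA's fixed opponent assumption with a model learned from play.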
Barcode | Collection Type | Call Number | Location | Status
---|---|---|---|---
art146020 | null | Artikel | Gdg9-Lt3 | Available but not for loan (No Loan)
No other versions available.