Developing a Reinforcement Learning based Chess Engine
DOI: https://doi.org/10.55632/pwvas.v95i2.990

Keywords: Chess Engine, Reinforcement Learning, Universal Chess Interface (UCI), Forsyth-Edwards Notation (FEN), Convolutional Neural Network

Abstract
Traditionally, chess engines have used handcrafted evaluation functions based on human strategy. More recently, machine learning has been used as an alternative for direct position scoring; however, this typically involves training a model on human matches. Reinforcement learning, combined with self-play, has been shown to be a viable machine learning approach for training a neural network to evaluate chess positions without the need for human domain knowledge. This paper discusses our implementation of a reinforcement-learning-based chess engine trained using self-play.
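As a concrete illustration of the position-evaluation setup the abstract describes, the sketch below shows one common way to feed FEN positions to a convolutional network: encode the board as one 8x8 plane per piece type and color, then score it with a small CNN. This is a minimal sketch assuming a python-chess/PyTorch stack, not the authors' implementation; the layer sizes and network depth are hypothetical choices for illustration.

```python
# Minimal sketch (not the paper's code): FEN -> piece-plane tensor -> CNN score.
import chess
import torch
import torch.nn as nn

def fen_to_tensor(fen: str) -> torch.Tensor:
    """Encode a FEN position as a 12x8x8 tensor (6 piece types x 2 colors)."""
    board = chess.Board(fen)
    planes = torch.zeros(12, 8, 8)
    for square, piece in board.piece_map().items():
        # Planes 0-5 hold white pieces, planes 6-11 hold black pieces.
        plane = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        planes[plane, chess.square_rank(square), chess.square_file(square)] = 1.0
    return planes

class Evaluator(nn.Module):
    """Small convolutional network mapping a position to a scalar score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(12, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 1), nn.Tanh(),  # score in [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Example: score the starting position. With untrained weights the value is
# arbitrary; in a self-play setting the network would be trained on outcomes
# of games the engine plays against itself rather than on human matches.
model = Evaluator()
score = model(fen_to_tensor(chess.STARTING_FEN).unsqueeze(0))
```

A tanh output is a common design choice here because it bounds the evaluation to [-1, 1], matching game outcomes (loss, draw, win) used as training targets in self-play.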
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Proceedings of the West Virginia Academy of Science applies the Creative Commons Attribution-NonCommercial (CC BY-NC) license to works we publish. By virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.