Developing a Reinforcement Learning based Chess Engine

DOI:

https://doi.org/10.55632/pwvas.v95i2.990

Keywords:

Chess Engine, Reinforcement Learning, Universal Chess Interface (UCI), Forsyth-Edwards Notation (FEN), Convolutional Neural Network

Abstract

Traditionally, chess engines use handcrafted evaluation functions based on human strategy. More recently, machine learning has been used as an alternative for direct position scoring; however, this typically involves training a model on human matches. Reinforcement learning has been shown to be a viable machine learning approach that, when combined with self-play, can train a neural network for chess position evaluation without the need for human domain knowledge. This paper discusses our implementation of a reinforcement learning-based chess engine, trained using self-play.
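As an illustration of the position-evaluation pipeline the abstract describes (a hypothetical sketch, not the authors' code): before a convolutional network can score a position, the Forsyth-Edwards Notation (FEN) string is typically encoded as a stack of binary piece planes, one 8x8 plane per piece type and color.

```python
# Hypothetical sketch: encode the board field of a FEN string into
# 12 binary 8x8 piece planes (6 white + 6 black piece types), a common
# input layout for a CNN-based position evaluator. Not the authors' code.
PIECES = "PNBRQKpnbrqk"  # uppercase = white, lowercase = black

def fen_to_planes(fen):
    """Return a 12x8x8 nested list; planes[i][rank][file] is 1 if the
    i-th piece type occupies that square. Rank 0 is the 8th rank, as
    FEN lists ranks from Black's back rank down to White's."""
    planes = [[[0] * 8 for _ in range(8)] for _ in PIECES]
    board_field = fen.split()[0]          # ignore side-to-move, castling, etc.
    for rank, row in enumerate(board_field.split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)           # digit = run of empty squares
            else:
                planes[PIECES.index(ch)][rank][file] = 1
                file += 1
    return planes

# Standard starting position.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
planes = fen_to_planes(start)
```

In a full engine this tensor would be the network input; additional planes (side to move, castling rights, en passant) are often appended, but are omitted here for brevity.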

Author Biography

Weidong Liao, Shepherd University

Associate Professor of Computer and Information Sciences

Published

2023-04-18

How to Cite

Liao, W., & Moseman, A. (2023). Developing a Reinforcement Learning based Chess Engine. Proceedings of the West Virginia Academy of Science, 95(2). https://doi.org/10.55632/pwvas.v95i2.990

Section

Meeting Abstracts-Oral