X-Git-Url: https://git.auder.net/?p=rpsls-bot.git;a=blobdiff_plain;f=README.md;h=2bf8a5fe802fff30cc3010993016f71a3a8be4ef;hp=c72ae2de7d5f0985aad909f487b8b488d601b5fc;hb=HEAD;hpb=6fac12b0c92df43cf7c8d26c1c5b480b3f8940fb

diff --git a/README.md b/README.md
index c72ae2d..2bf8a5f 100644
--- a/README.md
+++ b/README.md
@@ -6,8 +6,26 @@ The rules are given by Sheldon in episode 8 of season 2 of TBBT (The Big Bang Th
 
 ---
 
-[Online demo](https://auder.net/rpsls/")
+[Online demo](https://auder.net/rpsls/)
 
 Winning should be difficult after a few dozen rounds, because it is hard for a human to play at random.
-Setting "draw is lost" and/or increasing number of inputs can improve bot level.
+Setting "winner bot" and/or increasing the memory size can improve the bot's level.
+
+---
+
+## Technical details
+
+Each potential choice is linked to every output in a (neural) network, for
+each input kept in memory. There are thus (memory size) x (number of choices)^2 links.
+To select a move, the bot sums the weights of all links from each activated choice
+(that is, the value of a memory cell) to each output.
+The output with the biggest weight sum wins: that move is played.
+
+The reward is then determined from the human's move: -1 for a loss, 0 for a draw
+(unless "winner bot" is selected, in which case a draw counts as a loss), and 1 for a win.
+Weights on the active links are then increased or decreased according to the reward's sign.
+All weights are initialized to zero, and since learning takes some time,
+the first moves of a game are essentially random.
+
+See the RPS\_network\_2.svg file for an illustration with memory=2 and simple RPS.
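The learning scheme added in this diff can be sketched in a few lines of Python. This is only an illustration of the described idea, not the repository's actual code: the class name, method names, and tie-breaking rule are assumptions; the weight table shape (memory size x choices x choices), the zero initialization, and the +/-1 reward update follow the README text.

```python
import random

class RPSLSBot:
    """Sketch of the README's scheme: one weight per
    (memory slot, remembered choice, candidate output) link."""

    def __init__(self, memory_size=2, n_choices=5):
        self.memory_size = memory_size
        self.n = n_choices
        # (memory size) x (number of choices)^2 links, all initialized to zero
        self.weights = [[[0.0] * n_choices for _ in range(n_choices)]
                        for _ in range(memory_size)]
        self.memory = []          # recent inputs (choice indices)
        self.last_active = None   # links activated by the last move

    def select_move(self):
        # Sum the weights of all links from each activated memory cell
        # to each output; play the output with the biggest sum.
        sums = [0.0] * self.n
        for slot, remembered in enumerate(self.memory):
            for out in range(self.n):
                sums[out] += self.weights[slot][remembered][out]
        best = max(sums)
        move = random.choice([i for i, s in enumerate(sums) if s == best])
        self.last_active = [(slot, rem, move)
                            for slot, rem in enumerate(self.memory)]
        return move

    def learn(self, reward):
        # reward: -1 loss, 0 draw (or -1 with "winner bot"), +1 win.
        # Only the links that contributed to the move are updated.
        for slot, rem, out in self.last_active or []:
            self.weights[slot][rem][out] += reward

    def observe(self, choice):
        # Push a new input into the sliding memory window.
        self.memory.append(choice)
        if len(self.memory) > self.memory_size:
            self.memory.pop(0)
```

With all weights at zero every output sums to zero, so early moves come out uniformly at random, matching the README's remark that the first moves of a game are essentially random.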