I have not read this particular book, but my approach would be as follows. Suppose White wins a game. Then every position that White passed through should receive positive credit, and every position that Black passed through should receive negative credit. Repeating this for each completed game: you add some number of points to every position the winner visited and subtract some number of points from every position the loser visited. You do this over a large collection of computer-vs-computer games.
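The crediting step above can be sketched as follows. This is a minimal illustration, assuming games are stored as `(white_positions, black_positions, winner)` tuples and positions are hashable placeholders (real code would use FEN strings or similar); the per-game `CREDIT` amount is an arbitrary choice.

```python
from collections import defaultdict

CREDIT = 1.0  # points added/removed per game; an arbitrary choice


def label_games(games):
    """Accumulate a score for every position seen across many games."""
    scores = defaultdict(float)
    for white_positions, black_positions, winner in games:
        w_sign = CREDIT if winner == "white" else -CREDIT
        for pos in white_positions:
            scores[pos] += w_sign
        for pos in black_positions:
            scores[pos] -= w_sign  # loser's positions get the opposite credit
    return scores


games = [
    (["posA", "posB"], ["posC"], "white"),
    (["posA"], ["posC", "posD"], "black"),
]
scores = label_games(games)
print(scores["posA"])  # +1 from game 1, -1 from game 2 -> 0.0
```

Note that a position visited in many won games accumulates a large positive score, which is the signal the regressor will later fit.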
Now you have a data set consisting of a bunch of chess positions and their accumulated scores. You can then compute features of these positions and train your favourite regressor on them, for example LMS (least mean squares).
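A sketch of the regression step, using the classic LMS (Widrow-Hoff) update. The feature mapping is assumed to have already happened (e.g. material differences per position); all names and the toy data are made up for illustration.

```python
import numpy as np


def lms_fit(X, y, lr=0.05, epochs=500):
    """Least mean squares: stochastic gradient descent on squared error."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            err = y_i - w @ x_i      # prediction error on one example
            w += lr * err * x_i      # Widrow-Hoff weight update
    return w


# Toy data: 2 features per position (say, pawn and queen differences).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.5, 2.0, 2.5])  # accumulated credits for each position
w = lms_fit(X, y)
# w converges close to [0.5, 2.0], the exact solution of this system
```

In practice you would use a library regressor instead of hand-rolled LMS, but the idea is the same: learned weights over position features become your static evaluation function.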
An improvement on this approach would be to train the regressor, then play some more games in which each move is chosen randomly according to the predicted evaluation of the resulting position (i.e. moves that lead to higher-scoring positions are chosen with higher probability). Then you update the position scores, retrain the regressor, and so on.
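The score-weighted move selection in that loop might look like this. It is a sketch under the assumption that moves are identified with their resulting positions and that `evaluate` is the trained regressor; a softmax turns scores into sampling probabilities (the softmax choice and the `temperature` knob are my additions, not from the original answer).

```python
import math
import random


def pick_move(candidate_positions, evaluate, temperature=1.0):
    """Sample a successor position; higher-scored positions are likelier."""
    scores = [evaluate(p) / temperature for p in candidate_positions]
    m = max(scores)  # subtract the max for numeric stability
    weights = [math.exp(s - m) for s in scores]
    return random.choices(candidate_positions, weights=weights, k=1)[0]


random.seed(0)
eval_fn = {"good": 3.0, "ok": 1.0, "bad": -2.0}.get
picks = [pick_move(["good", "ok", "bad"], eval_fn) for _ in range(1000)]
print(picks.count("good") > picks.count("bad"))
```

Lowering `temperature` makes play greedier; raising it adds exploration, which is what keeps the retraining loop from collapsing onto the current regressor's blind spots.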