Differences between Monte Carlo and Markov chain methods?

I want to develop a RISK board game that will include AI for the computer players. I read two articles about this and realized that I need to learn about Monte Carlo simulation and Markov chain methods. At first I thought I needed to use these methods together, but now I think they are different methods related to calculating the probabilities of state transitions. So can someone explain the important differences, advantages, and disadvantages between them? Finally, how would you prefer to implement the AI for the RISK game? I will be grateful for every answer, and thanks in advance.

EDIT 1:

Article 1 → Sharon Blatt. Risky business: a comprehensive look at gaming risk. Rose-Hulman Institute of Technology Journal of Undergraduate Mathematics, 3 (2), 2002.

Here is the link → Sharon Blatt

Article 2 → Intelligent Artificial Risk Player

Here is the link → Intelligent Artificial Risk Player

EDIT 2: Here you can find simple, specific probabilities for the outcome of a battle in a RISK game, computed with a brute-force algorithm. There is a tree diagram in which all possible states are laid out. My question is: do I use Monte Carlo or a Markov chain on this tree?

Link → RISK Analysis

+6
2 answers

Ok, so I looked through the articles to understand what they are doing. Here is my overview of the terms you asked about:

A Markov chain is just a model of how your system moves from state to state. Developing a Markov model from scratch can sometimes be difficult, but once you have one in hand, it is relatively easy to use and relatively easy to understand. The basic idea is that your game has certain states associated with it; that within the game you move from state to state; and, critically, that this movement from state to state is governed by probabilities, and that you know these probabilities.

Once you have this information, you can represent it all as a graph, where the nodes are states and the arcs between states (labeled with probabilities) are transitions. You can also think of it as a matrix that satisfies certain constraints, or as various other, more exotic data structures.
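
To make that concrete, here is a minimal sketch of a Markov chain stored as a transition table, with a `step` function that walks it. The states and every probability are invented purely for illustration; nothing here comes from the articles.

```python
import random

# Hypothetical toy chain for a single territory. All numbers are made up.
STATES = ("safe", "contested", "lost")
TRANSITIONS = {
    "safe":      {"safe": 0.7, "contested": 0.3, "lost": 0.0},
    "contested": {"safe": 0.2, "contested": 0.5, "lost": 0.3},
    "lost":      {"safe": 0.0, "contested": 0.0, "lost": 1.0},  # absorbing state
}

def step(state, rng=random):
    """Pick the next state according to the current state's transition probabilities."""
    r = rng.random()
    cumulative = 0.0
    for next_state, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return next_state
    return state  # guard against floating-point rounding at cumulative ~= 1.0
```

The same table is exactly the "matrix satisfying certain constraints" mentioned above: each row must sum to 1.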

The shorter article takes the Markov chain approach, but — and this is important — it uses that approach only as a quick means of estimating what will happen when Army A attacks a territory defended by Army B. It is a nice use of the technique, but it is not an AI for Risk; it is just a module within an AI that helps you figure out the likely outcome of an attack.

Monte Carlo methods, by contrast, are estimators. Once you have a model of something, whether it is a Markov model or anything else, you often find yourself wanting to estimate something about it. (Often it is something that you can, if you squint hard enough, see as an integral of something that is very difficult to work with in that form.) Monte Carlo techniques simply sample randomly and aggregate the results into an estimate of what will happen.
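
A minimal sketch of that idea, with an invented example: estimating an expected value by averaging many random samples. The quantity chosen (the higher of two dice) is just for illustration; its exact answer, 161/36 ≈ 4.47, lets you check that the estimate converges.

```python
import random

def monte_carlo_estimate(sample_fn, n_trials=100_000, rng=random):
    """Generic Monte Carlo estimator: average many independent random samples."""
    return sum(sample_fn(rng) for _ in range(n_trials)) / n_trials

def best_of_two_dice(rng):
    # Toy quantity to estimate: the higher of two six-sided dice.
    # The exact expectation is 161/36, roughly 4.47.
    return max(rng.randint(1, 6), rng.randint(1, 6))
```

The estimator knows nothing about dice; any function that draws a random sample and returns a number can be plugged in, which is why the technique is so general-purpose.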

Monte Carlo methods are not, in my opinion, AI techniques. They are a very general-purpose technique that happens to be useful for AI, but they are not AI as such. (The same could be said of Markov models, but the claim is weaker there, because Markov models are so useful for planning in AI that whole schools of thought within AI are built around the technique. Markov models see use in plenty of other fields too.)

So there they are. You also asked which one I would use if I had to implement a Risk AI. Well, neither of them would be sufficient on its own. Monte Carlo, as I said, is not an AI technique; it is a general mathematical tool. And Markov models, although they could in theory represent an entire game of Risk, would ultimately be very unwieldy: you would need to represent every state of the game, which means every possible configuration of armies on the territories, every possible configuration of cards in hand, and so on. (I am glossing over a lot of details here; there are plenty of other difficulties with this approach.)

The core of Wolf's thesis is neither the Markov approach nor the Monte Carlo approach; it is what he describes as an evaluation function. That is the heart of the AI problem: how to determine which action is best. The Markov method in Blatt's article describes a way to determine the expected result of an action, but that is not the same as figuring out which action is best. Moreover, there is a low-key observation in Wolf's thesis that looking ahead is difficult because the game trees get so big, and this led him (I think) to focus on the evaluation function.

So my real advice would be: read up on game-tree search approaches such as minimax, alpha-beta pruning, and especially expectiminimax. You can find good treatments of these early in Russell and Norvig, or even on Wikipedia. Try to understand why these methods work in general but become unwieldy for Risk. That will lead you to the literature on board evaluation methods. Then go back and look at Wolf's thesis, focusing on his action evaluation function. And finally, focus on how he tries to automatically learn a good evaluation function.
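
For reference, expectiminimax is just minimax with an extra node type for chance events (like dice rolls), whose value is the probability-weighted average of its children. A minimal sketch over a hand-built tree; the tree shape and all values are invented:

```python
# Node kinds: ("max", children), ("min", children),
#             ("chance", [(probability, child), ...]), ("leaf", value).

def expectiminimax(node):
    """Evaluate a game tree with max, min, and chance nodes."""
    kind, payload = node
    if kind == "leaf":
        return payload
    if kind == "max":
        return max(expectiminimax(child) for child in payload)
    if kind == "min":
        return min(expectiminimax(child) for child in payload)
    if kind == "chance":
        # Expected value over the possible random outcomes.
        return sum(p * expectiminimax(child) for p, child in payload)
    raise ValueError(f"unknown node kind: {kind}")
```

In Risk, every attack would spawn a chance node per dice outcome, which is exactly why the tree blows up and an evaluation function becomes attractive.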

This is a lot of work. But Risk is not a simple game to develop an AI for.

(If you just want to find out the expected results of a given attack, I would say go with Monte Carlo. It is clean, very intuitive, and very easy to implement. Just make sure you run enough trials to get a good result.)
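
A sketch of that Monte Carlo battle estimator, assuming the standard Risk dice rules (attacker rolls up to 3 dice and must leave one army behind, defender rolls up to 2, highest dice are compared pairwise, ties go to the defender):

```python
import random

def battle_round(attackers, defenders, rng):
    """One round of Risk dice. Returns the updated army counts."""
    a_dice = sorted((rng.randint(1, 6) for _ in range(min(3, attackers - 1))),
                    reverse=True)
    d_dice = sorted((rng.randint(1, 6) for _ in range(min(2, defenders))),
                    reverse=True)
    for a, d in zip(a_dice, d_dice):
        if a > d:
            defenders -= 1
        else:           # ties favor the defender
            attackers -= 1
    return attackers, defenders

def attacker_win_probability(attackers, defenders, trials=20_000, rng=None):
    """Estimate P(attacker conquers the territory) by simulating many battles."""
    rng = rng or random.Random()
    wins = 0
    for _ in range(trials):
        a, d = attackers, defenders
        while a > 1 and d > 0:   # attacker must keep one army at home
            a, d = battle_round(a, d, rng)
        wins += (d == 0)
    return wins / trials
```

This is the "module in the AI" mentioned earlier: it answers "what happens if A attacks B?" and nothing more.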

+8

Markov chains are just a set of states with transitions and their probabilities, with no memory of past events (the Markov property).

Monte Carlo simulations are repeated samplings of random walks over a set of probabilities.

You can use both methods together: use the Markov chain to model the probabilities, and then Monte Carlo simulation to examine the expected outcomes.
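
A minimal sketch of that two-step combination: the Markov model supplies the transition probabilities, and Monte Carlo random walks over it estimate the outcome. The chain below is invented purely for illustration (from "start", the exact chance of ending in "win" is 0.4 / (0.4 + 0.1) = 0.8).

```python
import random

# Step 1: the Markov model — states with outgoing (next_state, probability) pairs.
CHAIN = {
    "start": [("win", 0.4), ("lose", 0.1), ("start", 0.5)],
    "win":   [("win", 1.0)],    # absorbing
    "lose":  [("lose", 1.0)],   # absorbing
}

def walk(chain, state, rng, max_steps=1000):
    """Follow one random walk until an absorbing state (or a step limit)."""
    for _ in range(max_steps):
        if chain[state] == [(state, 1.0)]:
            return state
        r, cumulative = rng.random(), 0.0
        for next_state, p in chain[state]:
            cumulative += p
            if r < cumulative:
                state = next_state
                break
    return state

def estimate_outcome(chain, target, trials=50_000, rng=None):
    """Step 2: Monte Carlo — fraction of walks that end in the target state."""
    rng = rng or random.Random()
    return sum(walk(chain, "start", rng) == target for _ in range(trials)) / trials
```
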

For Risk, I don't think I would use Markov chains, because I don't see the advantage. A Monte Carlo analysis of your possible moves, combined with a suitable fitness function, could give you a pretty good AI.

I know this glosses over a lot, but I hope it helps.

+5

Source: https://habr.com/ru/post/944458/
