Ok, I'm going to be as detailed as possible here.
Imagine a user looking at a set of options he can choose from. Each time he chooses, he gets, say, 4 different options. There are many other options that may appear in these 4 slots, and each of them has a definite, known probability of occurrence. Not all options are equally likely, and some options require others to have been selected earlier - in a complex tree of interdependence. (I have already defined this tree.)
When the user selects one of the 4, he is offered another choice of 4 options. The option pool is determined anew and may depend on what the user has previously selected.
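To make this concrete, here is roughly how I currently encode an option and compute which options can appear next. `Option`, `weight`, `requires`, and `available` are just my own names; the `weight` field holds the known occurrence probability, used as a relative weight:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    name: str
    weight: float                       # relative probability of appearing in a slot
    requires: frozenset = frozenset()   # names of options that must already be selected

def available(pool, selected):
    """Options that can appear now: not yet selected, all prerequisites met."""
    return [o for o in pool
            if o.name not in selected and o.requires <= selected]
```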
Among all the possible "options" that may ever appear, there are a few select ones that are special - call them KEY options.
When the program starts, the user is presented with the first 4 options. For each of these 4, the program must calculate the total probability that the user will "reach" all KEY options within a horizon of N selections (N is a parameter).
E.g. if there are only 4 options in total, the probability of reaching any of them is 1, since they all appear at the start.
If anyone can advise me on what logic to start with, I would be very grateful. I was thinking of enumerating all possible selection sequences and counting those in which the KEY options are selected within N steps, but the problem is that the sequences are not all equally likely, and the option pool changes as the user selects and accumulates options.
I am having difficulty turning the well-defined probabilities and option dependencies into an algorithm that produces a sensible overall probability, so that the user knows each time which of the 4 choices puts him in the best position to eventually get the KEY options.
Any ideas?
EDIT: here is an example:
Say there are 7 options in the pool: option1, ..., option7. option7 requires option6; option6 requires option4 and option5; option1 through option5 require nothing and can appear immediately, with the corresponding probabilities option1.p, ..., option5.p. The KEY option is, say, option7. The user gets 4 random (but weighted) choices among option1-option5, and the program should say something like: "if you select <first option>, you have a ##% chance of getting option7 within at most N attempts" - and similarly for the other 3 options.
Naturally, below some minimum N it is impossible to get option7, while for a large enough N it becomes practically certain. N can be chosen freely, but it is fixed.
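Encoded with the `Option` class above, the example pool would look like this. All the numeric weights are invented for illustration; note I also had to invent weights for option6 and option7, which the example doesn't specify:

```python
POOL = [
    Option("option1", 0.30),
    Option("option2", 0.25),
    Option("option3", 0.20),
    Option("option4", 0.15),
    Option("option5", 0.10),
    Option("option6", 0.50, frozenset({"option4", "option5"})),
    Option("option7", 0.50, frozenset({"option6"})),
]
KEY = frozenset({"option7"})
```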
EDIT: To be clear, the point here is NOT that the user selects randomly. The point is that the program recommends which option to select, to maximize the likelihood that within N steps the user will have been offered all the KEY options.
In the above example, let's say we choose N = 4. The program should tell us, for the first 4 options shown (any 4 of option1-option5), which one, when selected, gives the best chance of getting option7. Since option7 needs option6, and option6 needs option4 and option5, it is clear that you must choose either option4 or option5 from the first set - and at least one of them is sure to appear, of course. Say the first set is {option3, option5, option2, option4}. Then the program says: if you choose option3, you can never get option7 within 4 steps, so p = 0; if you choose option5, you can still get option7, p = ...; for option2, p = 0; for option4, p = ....
Whichever we choose, p is then recalculated for the next set of 4. Obviously, if we chose option3 or option2, every subsequent choice has exactly 0 chance of getting us to option7 within the remaining steps; but after option4 or option5, p > 0.
Is that clearer? What I don't know is how to compute these probabilities p.
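For reference, here is the brute-force enumeration I was describing, as a sketch building on the two snippets above. It bakes in two assumptions I'm not sure about: that the 4 shown options are drawn without replacement with probability proportional to their weights (renormalizing after each draw), and that after the first pick the user always chooses the option that maximizes the chance of success:

```python
from functools import lru_cache
from itertools import permutations

def slates(avail):
    """All possible slates of up to 4 options and their probabilities,
    assuming weighted draws without replacement from the available set."""
    k = min(4, len(avail))
    dist = {}
    for order in permutations(avail, k):
        p, mass = 1.0, sum(o.weight for o in avail)
        for o in order:
            p *= o.weight / mass
            mass -= o.weight
        slate = frozenset(o.name for o in order)
        dist[slate] = dist.get(slate, 0.0) + p   # same set, any draw order
    return dist.items()

@lru_cache(maxsize=None)
def reach(selected, n):
    """P(all KEY options are obtained within n more slates), assuming the
    user always picks the choice that maximizes this probability."""
    if KEY <= selected:
        return 1.0
    if n == 0:
        return 0.0
    avail = available(POOL, selected)
    if not avail:
        return 0.0
    return sum(p * max(reach(selected | {name}, n - 1) for name in slate)
               for slate, p in slates(avail))

# The example: first slate {option3, option5, option2, option4}, N = 4.
# One slate has already been shown, so N - 1 draws remain after the pick.
N = 4
for name in ("option3", "option5", "option2", "option4"):
    p = reach(frozenset({name}), N - 1)
    print(f"pick {name} first: p = {p:.3f}")
```

With this toy pool the answer degenerates to p = 0 for option2/option3 and p = 1 for option4/option5: after the first pick only 4 options remain available, so every subsequent slate is forced. A bigger pool gives proper fractions. But this enumeration blows up exponentially with the pool size and N, so I doubt it is the right approach for real data - hence the question.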