The most effective way, if you do not need a lot of saved state, is to do what you hinted at: create a Markov chain. Associated with each state is an array of probabilities of transitioning to each possible next state. This gives you complete control over the process and is quite compact. (Note that you use it by generating a random number from 0 to 1 and performing a binary search on the cumulative probabilities.)
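As a concrete sketch of that idea (the three states and their transition probabilities below are made up purely for illustration), the cumulative-probability table plus binary search might look like this in Python:

```python
import bisect
import random

# Hypothetical transition matrix: transition_probs[s][t] is the
# probability of moving from state s to state t.
transition_probs = [
    [0.1, 0.6, 0.3],
    [0.5, 0.2, 0.3],
    [0.3, 0.3, 0.4],
]

# Precompute the cumulative (aggregate) probabilities for each state once.
cumulative = [
    [sum(row[: i + 1]) for i in range(len(row))] for row in transition_probs
]

def next_state(state, rng=random):
    """Draw a uniform number in [0, 1) and binary-search the cumulative row."""
    r = rng.random()
    return bisect.bisect_right(cumulative[state], r)

# Walk the chain a few steps.
state = 0
for _ in range(5):
    state = next_state(state)
```

The only per-state storage is one row of cumulative probabilities, and each draw costs a single random number plus an O(log n) search.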
If you want something simpler and more reactive, you can instead keep a per-item bias and update it every time an item is handed out, for example:
launcher_bias = 0.8*launcher_bias + 0.2*(1.0 - (last_item == launcher))
rocket_bias = 0.8*rocket_bias + 0.2*(last_item == launcher)
Each bias stays in the range 0 to 1: a value near 1 (say, 0.7 or above) means the item has not appeared for a while and should be favored, while a value near 0 means it was just handed out and should be suppressed. To pick the next item, weight each item's base probability by its current bias, renormalize, and draw. Items thus become temporarily unlikely right after they appear and recover over the following draws, which avoids long streaks without storing any history beyond the biases themselves.
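A minimal sketch of this bias scheme, assuming just the two items named in the formulas above; the 0.8/0.2 decay weights come from those formulas, while the starting biases of 1.0 and the proportional draw are illustrative choices:

```python
import random

items = ["launcher", "rocket"]
bias = {"launcher": 1.0, "rocket": 1.0}  # assumed starting values

def update_biases(last_item):
    was_launcher = 1.0 if last_item == "launcher" else 0.0
    # Exponential moving average: an item's bias drops right after it
    # appears and recovers while the other item is being handed out.
    bias["launcher"] = 0.8 * bias["launcher"] + 0.2 * (1.0 - was_launcher)
    bias["rocket"] = 0.8 * bias["rocket"] + 0.2 * was_launcher

def pick_item(rng=random):
    # Draw proportionally to the current biases.
    total = sum(bias[i] for i in items)
    r = rng.random() * total
    for i in items:
        r -= bias[i]
        if r < 0:
            return i
    return items[-1]

# One step of the loop: pick, then update the biases for the next pick.
last = pick_item()
update_biases(last)
```

With more item types you would keep one bias per item and decay each toward "did this item just appear", exactly as the two update lines do here.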