This vaguely reminds me of the backpropagation strategy often used on neural networks. I'll sketch two strategies, the first of which is flawed:
- Compute the cost of your candidate path P, which we'll call c(P).
- Compute the cost of the shortest path S, which we'll call c(S).
- Decrease each edge weight w(p) ∈ P by (c(P) - c(S) + epsilon) / |P|, where epsilon is some vanishingly small constant by which you want your path to undercut c(S), and |P| is the number of edges in P (see the sketch just below).
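A minimal sketch of this first (flawed) update, assuming a hypothetical representation where the weights live in a dict keyed by (u, v) edge tuples and paths are vertex sequences; none of these names come from the original answer:

```python
def naive_reweight(weights, P, S, epsilon=1e-9):
    """Uniformly lower every edge on candidate path P so that c(P)
    drops to c(S) - epsilon. Flawed: if P and S share edges, c(S)
    drops too, so P may still not be shortest afterwards."""
    P_edges = list(zip(P, P[1:]))
    c_P = sum(weights[e] for e in P_edges)
    c_S = sum(weights[e] for e in zip(S, S[1:]))
    delta = (c_P - c_S + epsilon) / len(P_edges)  # per-edge reduction
    for e in P_edges:
        weights[e] -= delta
```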
Of course, the problem is that this update can also reduce the cost of S (or of some other path): any edge that P shares with another path becomes cheaper for that path too, so P may still not come out shortest. This suggests an iterative approach, in which you walk forward along P and reduce each edge weight relative to the shortest path, recomputed at each step. That is much more expensive, but fortunately shortest-path algorithms have nice dynamic programming solutions!
So the modified algorithm looks something like this (assume i = 0 to start):
- Compute the cost of the first i steps of your candidate path P, which we'll call c(p_0...p_i).
- Compute the cost of the shortest path S, which we'll call c(S), and the cost of its first i components, which we'll denote c(s_0...s_i).
- Reduce the weight of edge w(p_i) by c(p_0...p_i) - c(s_0...s_i) + epsilon, where epsilon is some vanishingly small constant by which you want your path to undercut c(S).
- Repeat from step 1, incrementing i by 1 (a sketch of the full loop follows below).
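A sketch of that loop under the same assumed representation, with a hypothetical shortest_path(weights, src, dst) helper standing in for whatever shortest-path routine you use (the updates can drive weights negative, so a Bellman-Ford-style routine is the safer assumption):

```python
def iterative_reweight(weights, P, src, dst, shortest_path, epsilon=1e-9):
    """Walk forward along candidate path P, lowering one edge per step
    so that each prefix p_0..p_i ends up epsilon cheaper than the
    matching prefix of S, which is recomputed every iteration."""
    for i in range(len(P) - 1):               # edge i runs from P[i] to P[i+1]
        S = shortest_path(weights, src, dst)  # recompute S each step
        c_p = sum(weights[(P[k], P[k + 1])] for k in range(i + 1))
        n_s = min(i + 1, len(S) - 1)          # S may have fewer than i+1 edges
        c_s = sum(weights[(S[k], S[k + 1])] for k in range(n_s))
        weights[(P[i], P[i + 1])] -= c_p - c_s + epsilon
```

As a sanity check, if P already equals S and epsilon is 0, every c_p - c_s difference is zero and no weight changes, matching the remark below.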
Note that if P = S to begin with and epsilon is 0, this leaves the original path untouched. Otherwise, you reduce by no more than epsilon * |P| beyond the perfect update.
Optimizing this algorithm requires computing c(s_0...s_i+1) efficiently from c(s_0...s_i), but that is left as an exercise for the reader ;-)