According to iehrlich's comment (thanks, btw), the term "planning" may be misleading; a more precise statement of the problem is: given an N × N matrix, find the permutation of the rows that maximizes the diagonal sum.
I have a set of N jobs and N processors. The processors may all differ from each other. For each (job, processor) pair, I know the performance of that job on that processor, measured in IPC (instructions per cycle).
I am trying to find a schedule (a one-to-one assignment) that maximizes the total IPC. I could do this by enumerating all possible assignments, but that is O(N!), which is not viable.
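For concreteness, here is a minimal sketch of the exhaustive O(N!) baseline, using a small made-up IPC matrix (the values are illustrative, not real measurements):

```python
import itertools

# Hypothetical 3x3 IPC matrix: ipc[j][p] = IPC of job j on processor p.
ipc = [
    [3.0, 1.0, 2.0],
    [2.0, 4.0, 1.0],
    [1.0, 2.0, 5.0],
]
n = len(ipc)

# Try every permutation: perm[j] is the processor assigned to job j.
best_total, best_perm = max(
    (sum(ipc[j][perm[j]] for j in range(n)), perm)
    for perm in itertools.permutations(range(n))
)
print(best_total, best_perm)  # -> 12.0 (0, 1, 2)
```

This is exactly the "maximize the diagonal sum over row permutations" formulation, and it only works for very small N.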
I then tried the O(N^2) stable matching (Gale–Shapley) algorithm, using IPC to rank the preferences of both jobs and processors. It runs very fast and returns a decent schedule, but not an optimal one.
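To illustrate the gap, here is a sketch of job-proposing Gale–Shapley where both sides rank by IPC, run on a hypothetical 2×2 matrix chosen to show the effect (the matrix and helper names are my own, not from the original setup):

```python
def gale_shapley(ipc):
    """Job-proposing stable matching; both sides rank partners by IPC."""
    n = len(ipc)
    # Each job's preference list: processors sorted by descending IPC.
    job_prefs = [sorted(range(n), key=lambda p: -ipc[j][p]) for j in range(n)]
    # proc_rank[p][j] = how processor p ranks job j (0 = most preferred).
    proc_rank = [
        {j: r for r, j in enumerate(sorted(range(n), key=lambda j: -ipc[j][p]))}
        for p in range(n)
    ]
    match = [None] * n     # match[p] = job currently held by processor p
    next_choice = [0] * n  # index of the next processor each job proposes to
    free = list(range(n))
    while free:
        j = free.pop()
        p = job_prefs[j][next_choice[j]]
        next_choice[j] += 1
        if match[p] is None:
            match[p] = j
        elif proc_rank[p][j] < proc_rank[p][match[p]]:
            free.append(match[p])  # p trades up; old job becomes free again
            match[p] = j
        else:
            free.append(j)         # proposal rejected; j stays free
    return match

ipc = [[10, 9],   # job 0's IPC on processors 0 and 1
       [ 9, 1]]   # job 1's IPC on processors 0 and 1
match = gale_shapley(ipc)
stable_total = sum(ipc[match[p]][p] for p in range(2))
print(match, stable_total)  # stable result totals 11, but 9 + 9 = 18 is possible
```

The stable outcome pairs job 0 with processor 0 (both rank each other first), forcing job 1 onto its worst processor. Swapping both pairs would raise the total from 11 to 18, but that swap is not stable, which is exactly why stability and sum-optimality diverge.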
My questions:
1) I really expected the stable matching algorithm to return the optimal assignment. Can someone explain why it fails? So far, the only issue I am aware of is that ties can exist between different (job, processor) pairs, but I also tried the "stable matching with indifference" algorithm with no luck. Note that the failure is not due to my implementation; I am looking for a theoretical answer as to why the algorithm itself cannot solve this problem.
2) Do you know which algorithm I can use for this problem instead?