In the first BFS solution, Θ(n·m) operations are performed, where n and m are the width and height of your matrix.
Since you need to print Θ(n·m) numbers, you cannot do asymptotically better from a theoretical point of view.
I assume you are not interested in discussing cache effects and similar constant-factor optimizations.
Note that this solution also works in more interesting cases. For example, imagine the same question, but where there may be several different "sources":
00000
01000
00000
00000
00010
Using BFS, you will get the following distances to the closest source, with the same complexity:
21234
10123
21223
32212
32101
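The multi-source BFS can be sketched in Python as follows (a minimal sketch; the function name `bfs_distances` and the `0`/`1` grid encoding are my own assumptions, with `1` marking a source):

```python
from collections import deque

def bfs_distances(grid):
    """Multi-source BFS: distance from each cell to the nearest source (1-cell)."""
    n, m = len(grid), len(grid[0])
    dist = [[-1] * m for _ in range(n)]  # -1 marks "not visited yet"
    q = deque()
    # Seed the queue with all sources at distance 0.
    for r in range(n):
        for c in range(m):
            if grid[r][c] == 1:
                dist[r][c] = 0
                q.append((r, c))
    # Standard BFS over the 4-neighborhood.
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < m and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

grid = [[0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 1, 0]]
# bfs_distances(grid) reproduces the distance matrix shown above.
```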
However, with one source, there is another solution that may have slightly higher performance in practice (although the complexity remains the same).
For that, first observe the following property.
Property: if the source is at (a, b), then the point (x, y) is at the following Manhattan distance:
d(x, y) = abs(x - a) + abs(y - b)
This should be fairly easy to prove. So another algorithm:
for r in rows:
    for c in cols:
        d(r, c) = abs(r - a) + abs(c - b)
which is very short and light.
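The pseudocode above maps almost one-to-one onto Python (a sketch; the function name `manhattan_distances` is my own):

```python
def manhattan_distances(n, m, a, b):
    """Distance from every cell of an n x m grid to a single source at (a, b)."""
    return [[abs(r - a) + abs(c - b) for c in range(m)] for r in range(n)]

# Single source in the center of a 3x3 grid.
center = manhattan_distances(3, 3, 1, 1)
```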
Without writing and benchmarking both, there is no easy way to compare the two algorithms. Assuming an efficient implementation of a bounded queue (backed by an array), you have roughly the following basic operations per cell:
- BFS: one queue insert and one queue delete; each node is visited up to 5 times (four times as a neighbor and once when popped from the queue)
- Direct formula: two subtractions, two absolute values, and one addition
Which one performs better will really depend on the compiler and its optimizations, as well as on the specific processor architecture and memory hierarchy.
Nevertheless, I would advise you to go with whichever seems simpler to you. Note, however, that with multiple sources the second solution needs several passes over the array (or several distance computations per cell in a single pass), and this will certainly perform worse than BFS for a sufficiently large number of sources.
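To make that last point concrete, the formula-based approach with k sources takes the per-cell minimum over all sources, which costs Θ(k·n·m) versus BFS's Θ(n·m). A sketch (the function name `multi_source_formula` is my own):

```python
def multi_source_formula(n, m, sources):
    """Min Manhattan distance from each cell to any of the given (a, b) sources.

    Costs O(k * n * m) for k sources, versus O(n * m) for multi-source BFS.
    """
    return [[min(abs(r - a) + abs(c - b) for a, b in sources)
             for c in range(m)]
            for r in range(n)]

# Same two sources as in the example grid above.
dist = multi_source_formula(5, 5, [(1, 1), (4, 3)])
```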