20! is quite large, more than 2^61. Fortunately, there is a better way to solve small instances: (EDIT) dynamic programming. By remembering the optimal solution to each subproblem (here, each set of vertices that still has to be handled), we spend some memory in exchange for a very large saving in time: at most 2^20 ≈ 10^6 subproblems instead of 20! orderings.
Here is sample code in Python. When implementing it in another language, you will probably want to number the vertices 0, ..., n-1 and represent the sets as bit vectors; a sketch of that appears after the code.
def node_closure(G):
    # G maps each vertex to the set of its neighbors.
    # smallest[S] is a smallest list of vertices, each drawn from S,
    # whose closed neighborhoods together cover all of S.
    smallest = {frozenset(): []}
    def find_smallest(S):
        if S in smallest:
            return smallest[S]
        # Try each vertex v in S: picking v removes v and its neighbors from S.
        candidates = [[v] + find_smallest(S - frozenset([v]) - G[v]) for v in S]
        smallest[S] = min(candidates, key=len)
        return smallest[S]
    return find_smallest(frozenset(G))
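For concreteness, here is a call on a small made-up graph (a path a-b-c-d-e; the graph and the expected output are illustrative and not taken from the original problem):

G = {
    'a': frozenset({'b'}),
    'b': frozenset({'a', 'c'}),
    'c': frozenset({'b', 'd'}),
    'd': frozenset({'c', 'e'}),
    'e': frozenset({'d'}),
}
print(node_closure(G))  # a smallest selection, e.g. ['b', 'd'] (order may vary)

And here is a minimal sketch of the bit-vector representation mentioned above, kept in Python for brevity: vertices are assumed to be numbered 0, ..., n-1, adj[v] is assumed to be an integer bitmask of v's neighbors, and each set of remaining vertices is a single integer. To keep it short, this version returns only the size of the selection rather than the selection itself.

def node_closure_size(adj):
    n = len(adj)
    memo = {0: 0}  # mask of remaining vertices -> minimum number of picks
    def find_smallest(mask):
        if mask in memo:
            return memo[mask]
        best = n  # picking every vertex always works, so n picks is a safe bound
        m = mask
        while m:
            v_bit = m & -m                    # lowest remaining vertex
            v = v_bit.bit_length() - 1
            m ^= v_bit
            rest = mask & ~(v_bit | adj[v])   # drop v and its neighbors
            best = min(best, 1 + find_smallest(rest))
        memo[mask] = best
        return best
    return find_smallest((1 << n) - 1)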
This problem is NP-hard by an objective-preserving reduction from set cover. This means that, unless P = NP, the best guarantee a polynomial-time algorithm can offer is that it always produces a solution at most O(log n) times larger than the optimal one.
Given elements x1, ..., xm and sets S1, ..., Sn whose union is {x1, ..., xm}, build a graph with a node for each of x1, ..., xm and S1, ..., Sn plus an extra node R, with edges between R and Si for every i, and between Si and xj for every i, j such that Si contains xj. The node closures of this graph correspond to set covers: to take care of R, a closure must use R or one of the Si; and to take care of each xj, it must use either xj itself or some Si that contains xj.
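If the problem is read as a covering problem (every vertex must either be picked or have a picked neighbor), the matching upper bound comes from the classic greedy heuristic: repeatedly pick the vertex whose closed neighborhood contains the most vertices not yet taken care of. The sketch below is an illustration of that heuristic, not part of the solution above; it assumes the same G dictionary as before and, on covering instances, stays within a factor of roughly ln n of optimal.

def greedy_cover(G):
    # Each pick of v takes care of v and its neighbors; always pick the
    # vertex that takes care of the most remaining vertices.
    uncovered = set(G)
    chosen = []
    while uncovered:
        v = max(G, key=lambda u: len(uncovered & ({u} | G[u])))
        chosen.append(v)
        uncovered -= {v} | G[v]
    return chosen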