Iterative deepening A*

From HandWiki
Latest revision as of 15:12, 6 February 2024
Short description: Heuristic pathfinding algorithm

Iterative deepening A*
Class: Search algorithm
Data structure: Tree, Graph
Worst-case space complexity: [math]\displaystyle{ O(d) }[/math]

Iterative deepening A* (IDA*) is a graph traversal and path search algorithm that can find the shortest path between a designated start node and any member of a set of goal nodes in a weighted graph. It is a variant of iterative deepening depth-first search that borrows from the A* search algorithm the idea of using a heuristic function to conservatively estimate the remaining cost to reach the goal. Since it is a depth-first search algorithm, its memory usage is lower than that of A*, but unlike ordinary iterative deepening search, it concentrates on exploring the most promising nodes and thus does not go to the same depth everywhere in the search tree. Unlike A*, IDA* does not utilize dynamic programming and therefore often ends up exploring the same nodes many times.

While the standard iterative deepening depth-first search uses search depth as the cutoff for each iteration, IDA* uses the more informative [math]\displaystyle{ f(n) = g(n) + h(n) }[/math], where [math]\displaystyle{ g(n) }[/math] is the cost to travel from the root to node [math]\displaystyle{ n }[/math] and [math]\displaystyle{ h(n) }[/math] is a problem-specific heuristic estimate of the cost to travel from [math]\displaystyle{ n }[/math] to the goal.

The algorithm was first described by Richard Korf in 1985.[1]

Description

Iterative-deepening-A* works as follows: at each iteration, perform a depth-first search, cutting off a branch when its total cost [math]\displaystyle{ f(n) = g(n) + h(n) }[/math] exceeds a given threshold. This threshold starts at the heuristic estimate of the cost at the initial state and increases with each iteration of the algorithm: at the end of each iteration, the threshold for the next iteration is set to the minimum f-value among all nodes that exceeded the current threshold.[1]

As in A*, the heuristic has to have particular properties to guarantee optimality (shortest paths). See Properties below.

Pseudocode

path              current search path (acts like a stack)
node              current node (last node in current path)
g                 the cost to reach current node
f                 estimated cost of the cheapest path (root..node..goal)
h(node)           estimated cost of the cheapest path (node..goal)
cost(node, succ)  step cost function
is_goal(node)     goal test
successors(node)  node expanding function; expands nodes in order of f = g + h(node)
ida_star(root)    return either NOT_FOUND or a pair with the best path and its cost
 
procedure ida_star(root)
    bound := h(root)
    path := [root]
    loop
        t := search(path, 0, bound)
        if t = FOUND then return (path, bound)
        if t = ∞ then return NOT_FOUND
        bound := t
    end loop
end procedure

function search(path, g, bound)
    node := path.last
    f := g + h(node)
    if f > bound then return f
    if is_goal(node) then return FOUND
    min := ∞
    for succ in successors(node) do
        if succ not in path then
            path.push(succ)
            t := search(path, g + cost(node, succ), bound)
            if t = FOUND then return FOUND
            if t < min then min := t
            path.pop()
        end if
    end for
    return min
end function
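The pseudocode above can be turned into a short runnable sketch. The Python below is a minimal illustration only; the toy graph `GRAPH`, the heuristic table `H`, and all other names are assumptions made for this example and are not part of the original article.

```python
import math

# Toy weighted digraph and heuristic table; both are illustrative
# assumptions for this sketch, not taken from the article.
GRAPH = {
    "S": {"A": 1, "B": 4},
    "A": {"B": 2, "C": 5, "G": 12},
    "B": {"C": 2},
    "C": {"G": 3},
    "G": {},
}
H = {"S": 7, "A": 6, "B": 4, "C": 2, "G": 0}  # admissible estimates to G

FOUND = object()  # sentinel, mirroring FOUND in the pseudocode

def ida_star(root, is_goal, successors, h):
    bound = h(root)          # first threshold: heuristic at the root
    path = [root]
    while True:
        t = _search(path, 0, bound, is_goal, successors, h)
        if t is FOUND:
            return path, bound
        if t == math.inf:
            return None      # NOT_FOUND
        bound = t            # smallest f-value that exceeded the old bound

def _search(path, g, bound, is_goal, successors, h):
    node = path[-1]
    f = g + h(node)
    if f > bound:
        return f             # report the f-value that broke the bound
    if is_goal(node):
        return FOUND
    minimum = math.inf
    for succ, cost in successors(node):
        if succ not in path:  # skip nodes already on the current path
            path.append(succ)
            t = _search(path, g + cost, bound, is_goal, successors, h)
            if t is FOUND:
                return FOUND
            if t < minimum:
                minimum = t
            path.pop()
    return minimum

path, cost = ida_star("S", lambda n: n == "G",
                      lambda n: GRAPH[n].items(), H.get)
# path == ["S", "A", "B", "C", "G"], cost == 8
```

Note the `succ not in path` test: because only the current path is kept in memory, cycle detection is limited to that path, which is exactly why IDA* can re-explore the same node along different branches.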

Properties

Like A*, IDA* is guaranteed to find the shortest path leading from the given start node to any goal node in the problem graph, if the heuristic function h is admissible,[1] that is

[math]\displaystyle{ h(n) \le h^*(n) }[/math]

for all nodes n, where h* is the true cost of the shortest path from n to the nearest goal (the "perfect heuristic").[2]

IDA* is beneficial when the problem is memory constrained. A* search keeps a large queue of unexplored nodes that can quickly fill up memory. By contrast, because IDA* does not remember any node except the ones on the current path, it requires an amount of memory that is only linear in the length of the solution that it constructs. Its time complexity is analyzed by Korf et al. under the assumption that the heuristic cost estimate h is consistent, meaning that

[math]\displaystyle{ h(n) \le \mathrm{cost}(n, n') + h(n') }[/math]

for all nodes n and all neighbors n' of n; they conclude that compared to a brute-force tree search over an exponential-sized problem, IDA* achieves a smaller search depth (by a constant factor), but not a smaller branching factor.[3]
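The consistency condition can be verified exhaustively on a small state space. As a sketch, the snippet below checks the inequality for the Manhattan-distance heuristic on a 4×4 unit-cost grid, a textbook example of a consistent heuristic; all names here (`GOAL`, `h`, `neighbors`) are illustrative assumptions.

```python
GOAL = (3, 3)

def h(cell):
    # Manhattan distance from cell to GOAL.
    return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

def neighbors(cell):
    # 4-connected moves on a 4x4 grid, each with unit step cost.
    x, y = cell
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx <= 3 and 0 <= ny <= 3:
            yield (nx, ny), 1

# h(n) <= cost(n, n') + h(n') must hold for every node n and neighbor n'.
consistent = all(
    h((x, y)) <= cost + h(n)
    for x in range(4) for y in range(4)
    for n, cost in neighbors((x, y))
)
# consistent == True
```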

Recursive best-first search is another memory-constrained version of A* search that can be faster in practice than IDA*, since it regenerates fewer nodes.[2]:282–289

Applications

Applications of IDA* are found in such problems as planning.[4] Solving the Rubik's Cube is an example of a planning problem that is amenable to solving with IDA*.[5]
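As a scaled-down stand-in for such puzzle domains, the sketch below applies IDA* with a Manhattan-distance heuristic to the 8-puzzle. The code is an illustrative assumption, not Korf's implementation; the Rubik's Cube work cited above uses far more elaborate pattern-database heuristics.

```python
import math

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank tile

def manhattan(state):
    # Sum of each tile's grid distance from its goal cell (blank excluded);
    # admissible and consistent for the sliding-tile puzzle.
    d = 0
    for i, tile in enumerate(state):
        if tile:
            g = tile - 1  # index of this tile in the goal state
            d += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return d

def moves(state):
    # Yield every state reachable by sliding one tile into the blank.
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[b], s[j] = s[j], s[b]
            yield tuple(s)

def solve(start):
    # IDA*: repeated depth-first searches with an increasing f-bound.
    bound = manhattan(start)
    path = [start]
    while True:
        t = _dfs(path, 0, bound)
        if t is True:
            return path, bound
        if t == math.inf:
            return None  # no goal reachable from this start state
        bound = t

def _dfs(path, g, bound):
    node = path[-1]
    f = g + manhattan(node)
    if f > bound:
        return f  # report the f-value that broke the bound
    if node == GOAL:
        return True
    minimum = math.inf
    for succ in moves(node):
        if succ not in path:  # do not revisit states on the current path
            path.append(succ)
            t = _dfs(path, g + 1, bound)  # every move costs 1
            if t is True:
                return True
            if t < minimum:
                minimum = t
            path.pop()
    return minimum

path, cost = solve((1, 2, 3, 4, 0, 6, 7, 5, 8))
# cost == 2: slide tile 5 up, then tile 8 left
```

Because the step cost is 1 and the heuristic is admissible, the bound returned with the solution equals the optimal number of moves.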

References

  1. Korf, Richard E. (1985). "Depth-first Iterative-Deepening: An Optimal Admissible Tree Search". Artificial Intelligence 27: 97–109. doi:10.1016/0004-3702(85)90084-0. http://www.cse.sc.edu/~mgv/csce580f09/gradPres/korf_IDAStar_1985.pdf.
  2. Bratko, Ivan (2001). Prolog Programming for Artificial Intelligence. Pearson Education.
  3. Korf, Richard E.; Reid, Michael; Edelkamp, Stefan (2001). "Time complexity of iterative-deepening-A∗". Artificial Intelligence 129 (1–2): 199–218. doi:10.1016/S0004-3702(01)00094-7.
  4. Bonet, Blai; Geffner, Héctor C. (2001). "Planning as heuristic search". Artificial Intelligence 129 (1–2): 5–33. doi:10.1016/S0004-3702(01)00108-4.
  5. Richard Korf (1997). "Finding Optimal Solutions to Rubik's Cube Using Pattern Databases". http://www-compsci.swan.ac.uk/~csphil/CS335/korfrubik.pdf.