Richard Korf

PROFESSOR EMERITUS
COMPUTER SCIENCE

Engineering VI - Room 277

Email: korf@cs.ucla.edu
Phone: (310) 206-5383
Fax: (310) 794-5056

Richard Korf is a Professor Emeritus of computer science at the University of California, Los Angeles. He received his B.S. from M.I.T. in 1977, and his M.S. and Ph.D. from Carnegie-Mellon University in 1980 and 1983, respectively, all in computer science. From 1983 to 1985, he served as Herbert M. Singer Assistant Professor of Computer Science at Columbia University. His research is in the areas of problem-solving, heuristic search, and planning in artificial intelligence. He is the author of "Learning to Solve Problems by Searching for Macro-Operators" (Pitman, 1985). He served on the editorial boards of Artificial Intelligence and the Journal of Applied Intelligence. Dr. Korf is the recipient of a 1985 IBM Faculty Development Award, a 1986 NSF Presidential Young Investigator Award, the first UCLA Computer Science Department Distinguished Teaching Award in 1989, the first UCLA School of Engineering Student's Choice Award for Excellence in Teaching in 1996, the Lockheed Martin Excellence in Teaching Award in 2005, the Artificial Intelligence Classic Paper Award in 2016, and a career award from the Symposium on Combinatorial Search in 2018. He is a Fellow of the American Association for Artificial Intelligence.
RESEARCH AND INTERESTS
  • Problem solving, planning, and heuristic search in artificial intelligence
  • Parallel processing
  • Search algorithms
NOTABLE PUBLICATIONS
  • Korf, R.E. Linear-time disk-based implicit graph search, Journal of the A.C.M., Vol. 55, No. 6, December 2008, pp. 26-1 to 26-40. Abstract: Many search algorithms are limited by the amount of memory available. Magnetic disk storage is over two orders of magnitude cheaper than semiconductor memory, and individual disks can hold up to a terabyte. We augment memory with magnetic disks to perform brute-force and heuristic searches that are orders of magnitude larger than any previous such searches. The main difficulty is detecting duplicate nodes, which is normally done with a hash table. Due to long disk latencies, however, randomly accessed hash tables are infeasible on disk, and are replaced by a mechanism we call delayed duplicate detection. In contrast to previous work, we perform delayed duplicate detection without sorting, which runs in time linear in the number of nodes in practice. Using this technique, we performed the first complete breadth-first searches of the 2x7, 3x5, 4x4, and 2x8 sliding-tile puzzles, verifying the radius of the 4x4 puzzle and determining the radius of the others. We also performed the first complete breadth-first searches of the four-peg Towers of Hanoi problem with up to 22 discs, discovering a surprising anomaly regarding the radii of these problems. In addition, we performed the first heuristic searches of the four-peg Towers of Hanoi problem with up to 31 discs, verifying a conjectured optimal solution length to these problems. We also performed partial breadth-first searches of Rubik's Cube to depth ten in the face-turn metric, and depth eleven in the quarter-turn metric, confirming previous results.
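
The delayed duplicate detection scheme can be sketched compactly. The following is a minimal illustration, not the paper's implementation: the successor function expand, the file names, and the bucket count are assumptions for the example, and merging the new layer against earlier layers (which the paper handles with frontier search) is omitted.

```python
# Hash-based delayed duplicate detection (DDD), illustrative sketch.
# Generated nodes are appended to a file instead of probing a hash table;
# duplicates are removed later with sequential passes only, no sorting.
import os

def expand_layer(frontier_file, next_file, expand):
    """Expand every state in frontier_file, appending children to next_file
    with purely sequential I/O -- no per-node random disk accesses."""
    with open(frontier_file) as fin, open(next_file, "a") as fout:
        for line in fin:
            for child in expand(line.strip()):
                fout.write(child + "\n")

def remove_duplicates(next_file, deduped_file, num_buckets=256):
    """Partition states into hash buckets, then deduplicate each bucket in
    memory -- time linear in the number of nodes."""
    buckets = [open(f"{next_file}.{i}", "w") for i in range(num_buckets)]
    with open(next_file) as fin:
        for line in fin:
            buckets[hash(line) % num_buckets].write(line)
    for b in buckets:
        b.close()
    with open(deduped_file, "w") as fout:
        for i in range(num_buckets):
            path = f"{next_file}.{i}"
            with open(path) as fin:
                fout.writelines(set(fin))   # each bucket fits in memory
            os.remove(path)
```
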
  • Korf, R.E., Minimizing disk I/O in two-bit breadth-first search, Proceedings of the National Conference on Artificial Intelligence (AAAI-08), Chicago, IL, July, 2008, pp. 317-324. Abstract: We present a breadth-first search algorithm, two-bit breadth-first search (TBBFS), which requires only two bits for each state in the problem space. TBBFS can be parallelized in several ways, and can store its data on magnetic disk. Using TBBFS, we perform complete breadth-first searches of the original pancake problem with 14 and 15 pancakes, and the burned pancake problem with 11 and 12 pancakes, determining the diameter of these problem spaces for the first time. We also performed a complete breadth-first search of the subspace of Rubik's Cube determined by the edge cubies.
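
A toy rendering of the two-bit encoding follows. The rank and unrank functions stand in for an assumed perfect bijection between states and 0..n_states-1, and the paper's disk layout and parallelization are not shown.

```python
# Two-bit breadth-first search, illustrative sketch: each state gets two
# bits holding one of four values, so the whole space costs 2 bits/state.
UNVISITED, CURRENT, NEXT, VISITED = 0, 1, 2, 3

def two_bit_bfs(n_states, start, rank, unrank, expand):
    bits = bytearray((n_states + 3) // 4)        # four states per byte

    def get(i):
        return (bits[i >> 2] >> ((i & 3) * 2)) & 3

    def put(i, v):
        shift = (i & 3) * 2
        bits[i >> 2] = (bits[i >> 2] & ~(3 << shift) & 0xFF) | (v << shift)

    put(rank(start), CURRENT)
    depth = 0
    while True:
        for i in range(n_states):                # sequential, disk-friendly scan
            if get(i) == CURRENT:
                for child in expand(unrank(i)):
                    j = rank(child)
                    if get(j) == UNVISITED:
                        put(j, NEXT)
        reached = 0
        for i in range(n_states):                # relabel for the next level
            if get(i) == CURRENT:
                put(i, VISITED)
            elif get(i) == NEXT:
                put(i, CURRENT)
                reached += 1
        if reached == 0:
            return depth                         # eccentricity of the start state
        depth += 1
```
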
  • Analyzing the performance of pattern database heuristics, Proceedings of the National Conference on Artificial Intelligence (AAAI-07), Vancouver, B.C., July, 2007, pp. 1164-1170. Abstract: We introduce a model for predicting the performance of IDA* using pattern database heuristics, as a function of the branching factor of the problem, the solution depth, and the size of the pattern databases. While it is known that the larger the pattern database, the more efficient the search, we provide a quantitative analysis of this relationship. In particular, we show that for a single goal state, the number of nodes expanded by IDA* is a fraction of $(\log_b s + 1)/s$ of the nodes expanded by a brute-force search, where $b$ is the branching factor, and $s$ is the size of the pattern database. We also show that by taking the maximum of at least two pattern databases, the number of node expansions decreases linearly with $s$ compared to a brute-force search. We compare our theoretical predictions with empirical performance data on Rubik's Cube. Our model is conservative, and overestimates the actual number of node expansions.
  • (with Ariel Felner) Recent progress in heuristic search: A case study of the four-peg Towers of Hanoi problem, Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, Jan. 2007, pp. 2324-2329. Abstract: We integrate a number of new and recent advances in heuristic search, and apply them to the four-peg Towers of Hanoi problem. These include frontier search, disk-based search, parallel processing, multiple, compressed, disjoint, and additive pattern database heuristics, and breadth-first heuristic search. New ideas include pattern database heuristics based on multiple goal states, a method to reduce coordination among multiple parallel threads, and a method for reducing the number of heuristic calculations. We perform the first complete breadth-first searches of the 21- and 22-disc four-peg Towers of Hanoi problems, and extend the verification of "presumed optimal solutions" to this problem from 24 to 31 discs.
  • (with Weixiong Zhang, Ignacio Thayer, and Heath Hohwald), Frontier search, Journal of the Association for Computing Machinery, Vol. 52, No. 5, Sept. 2005, pp. 715-748. Abstract: The critical resource that limits the application of best-first search is memory. We present a new class of best-first search algorithms that reduce the space complexity. The key idea is to store only the Open list of generated nodes, but not the Closed list of expanded nodes. The solution path can be recovered by a divide-and-conquer technique, either as a bidirectional or unidirectional search. For many problems, frontier search dramatically reduces the memory required by best-first search. We apply frontier search to breadth-first search of sliding-tile puzzles and the 4-peg Towers of Hanoi problem, Dijkstra's algorithm on a grid with random edge costs, and the A* algorithm on the Fifteen Puzzle, the four-peg Towers of Hanoi Problem, and optimal sequence alignment in computational biology.
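
The core idea is easy to sketch for breadth-first search on an undirected graph, where a duplicate can only appear in the previous, current, or next layer. Here expand is an assumed successor function, and the divide-and-conquer solution reconstruction is omitted.

```python
# Breadth-first frontier search, illustrative sketch: only two layers are
# ever stored, so no Closed list is kept.

def frontier_bfs_depth(start, goal, expand):
    previous, current = set(), {start}
    depth = 0
    while current:
        if goal in current:
            return depth                      # length of a shortest path
        nxt = set()
        for state in current:
            for child in expand(state):
                if child not in previous and child not in current:
                    nxt.add(child)
        previous, current = current, nxt      # drop layers older than one
        depth += 1
    return None                               # goal unreachable
```
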
  • (with Peter Schultze) Large-scale, parallel breadth-first search, National Conference on Artificial Intelligence (AAAI-05), Pittsburgh, PA, July, 2005, pp. 1380-1385. Abstract: Recently, best-first search algorithms have been introduced that store their nodes on disk, to avoid their inherent memory limitation. We introduce several improvements to the best of these, including parallel processing, to reduce their storage and time requirements. We also present a linear-time algorithm for bijectively mapping permutations to integers in lexicographic order. We use breadth-first searches of sliding-tile puzzles as testbeds. On the 3x5 Fourteen Puzzle, we reduce both the storage and time needed by a factor of 3.5 on two processors. We also performed the first complete breadth-first search of the 4x4 Fifteen Puzzle, with over $10^{13}$ states.
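
The paper's permutation ranking runs in linear time; the simple quadratic version below conveys the underlying mixed-radix idea. The function names are illustrative.

```python
# Lexicographic permutation ranking and unranking, quadratic-time sketch.
from math import factorial

def lex_rank(perm):
    """Index of perm, a permutation of 0..n-1, in lexicographic order."""
    n, rank = len(perm), 0
    for i, p in enumerate(perm):
        smaller = sum(1 for q in perm[i + 1:] if q < p)  # unused values < p
        rank += smaller * factorial(n - 1 - i)
    return rank

def lex_unrank(rank, n):
    """Inverse mapping: the permutation of 0..n-1 at the given index."""
    available, perm = list(range(n)), []
    for i in range(n - 1, -1, -1):
        f = factorial(i)
        perm.append(available.pop(rank // f))
        rank %= f
    return perm

# Example: lex_rank([2, 1, 0]) == 5 and lex_unrank(5, 3) == [2, 1, 0].
```
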
  • Best-first frontier search with delayed duplicate detection, Proceedings of the National Conference on Artificial Intelligence (AAAI-04), San Jose, CA, July 2004, pp. 650-657. Abstract: Best-first search is limited by the memory needed to store the Open and Closed lists, primarily to detect duplicate nodes. Magnetic disks provide vastly more storage, but random access of a disk is extremely slow. Instead of checking generated nodes immediately against existing nodes in a hash table, delayed duplicate detection (DDD) appends them to a file, then periodically removes the duplicate nodes using only sequential disk accesses. Frontier search saves storage in a best-first search by storing only the Open list and not the Closed list. The main contributions of this paper are to provide a scalable implementation of DDD, to combine it with frontier search, and to extend it to more general best-first searches such as A*. We illustrate these ideas by performing complete breadth-first searches of sliding-tile puzzles up to the 3x5 Fourteen Puzzle. For the 4-peg Towers of Hanoi problem, we perform complete searches with up to 20 disks, searching a space of over a trillion nodes, and discover a surprising anomaly concerning the problem-space diameter of the 15 and 20-disk problems. We also verify the presumed optimal solution lengths for up to 24 disks. In addition, we implement A* with DDD on the Fifteen Puzzle. Finally, we present a scalable implementation of DDD based on hashing rather than sorting.
  • Optimal Rectangle Packing: New Results, Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS04), Whistler, British Columbia, June 2004, pp. 142-149. Abstract: We present new results on the problem of finding an enclosing rectangle of minimum area that will contain a given set of rectangles. Many simple scheduling tasks can be modelled by this NP-complete problem. We present a new lower bound on the amount of wasted space in a partial solution, a new dominance condition that prunes many partial solutions, and extend our algorithms to packing unoriented rectangles. For our experiments, we consider the set of squares of size 1x1, 2x2,...,NxN, and find the smallest rectangle that can contain them for a given value of N. While previously we solved this problem up to N=22, we extend this to N=25. Overall, our new program is over an order of magnitude faster than our previous program running on the same machine. We also show that for the larger problems, our optimal algorithm is faster than one that finds the best slicing solution, a popular approximation algorithm. In addition, we solve an open problem dating to 1966, concerning packing the set of consecutive squares up to 24x24 in a square of size 70x70.
  • An improved algorithm for optimal bin packing, Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, August 2003, pp. 1252-1258. Abstract: Given a set of numbers, and a set of bins of fixed capacity, the NP-complete problem of bin packing is to find the minimum number of bins needed to contain the numbers, such that the sum of the numbers assigned to each bin does not exceed the bin capacity. We present two improvements to our previous bin-completion algorithm. The first speeds up the constant factor per node generation, and the second prunes redundant parts of the search tree. The resulting algorithm appears to be asymptotically faster than our original algorithm. On problems with 90 elements, it runs over 14 times faster. Furthermore, the ratios of node generations and running times both increase with increasing problem size.
  • Search, in the Encyclopedia of Cognitive Science, Macmillan, London, December 2002.
  • Search techniques, in the Encyclopedia of Information Systems, Hossein Bidgoli (Ed.), Academic Press, San Diego, CA, Aug, 2002, pp. 31-43.
  • (with Ariel Felner) Disjoint pattern database heuristics, Artificial Intelligence, Vol. 134, No. 1-2, Jan. 2002, pp. 9-22. Abstract: We describe a new technique for designing more accurate admissible heuristic evaluation functions, based on pattern databases. While many heuristics, such as Manhattan distance, compute the cost of solving individual subgoals independently, pattern databases consider the cost of solving multiple subgoals simultaneously. Existing work on pattern databases allows combining values from different pattern databases by taking their maximum. If the subgoals can be divided into disjoint subsets so that each operator only affects subgoals in one subset, then we can add the pattern-database values for each subset, resulting in a more accurate admissible heuristic function. We used this technique to improve performance on the Fifteen Puzzle by a factor of over 2000, and to find optimal solutions to 50 random instances of the Twenty-Four Puzzle.
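
A sketch of the additive lookup follows. The assumptions here are for illustration: state maps each tile to its position, and each database maps the projected positions of its tile subset to the moves needed for those tiles alone, counting only moves of tiles in the subset; that cost-splitting convention is what keeps the sum admissible.

```python
# Additive heuristic from disjoint pattern databases, illustrative sketch.

def additive_pdb_heuristic(state, pattern_dbs):
    """pattern_dbs: list of (tile_subset, table) pairs over disjoint subsets."""
    h = 0
    for tiles, table in pattern_dbs:
        key = tuple(state[t] for t in tiles)   # project state onto the subset
        h += table[key]                        # exact cost for this subset alone
    return h
```
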
  • Artificial Intelligence: Search, in the International Encyclopedia of the Social and Behavioral Sciences, N.J. Smelser and P.B. Baltes (Eds.), Pergamon, Oxford, November 2001, pp. 796-798.
  • (with Michael Reid and Stefan Edelkamp) Time complexity of Iterative-Deepening-A*, Artificial Intelligence, Vol. 129, No. 1-2, June 2001, pp. 199-218. Abstract: We analyze the time complexity of iterative-deepening-A* (IDA*). We first show how to calculate the exact number of nodes at a given depth of a regular search tree, and the asymptotic brute-force branching factor. We then use this result to analyze IDA* with a consistent, admissible heuristic function. Previous analyses relied on an abstract analytic model, and characterized the heuristic function in terms of its accuracy, but did not apply to concrete problems. In contrast, our analysis allows us to accurately predict the performance of IDA* on actual problems such as the sliding-tile puzzles and Rubik's Cube. The heuristic function is characterized by the distribution of heuristic values over the problem space. Contrary to conventional wisdom, our analysis shows that the asymptotic heuristic branching factor is the same as the brute-force branching factor. Thus, the effect of a heuristic function is to reduce the effective depth of search by a constant, relative to a brute-force search, rather than reducing the effective branching factor.
  • Sliding-tile puzzles and Rubik's Cube in AI research, IEEE Intelligent Systems, November, 1999, pp. 8-12.
  • Korf, R.E., Heuristic search, in the Encyclopedia of Cognitive Science, R. Wilson (Ed.), MIT Press, Cambridge, MA, 1999, pp. 372-373.
  • Artificial intelligence search algorithms, CRC Handbook of Algorithms and Theory of Computation, M.J. Atallah (Ed.), CRC Press, Boca Raton, FL, 1998, pp. 36-1 to 36-20.
  • A complete anytime algorithm for number partitioning, Artificial Intelligence, Vol. 106, No. 2, December 1998, pp. 181-203. Abstract: Given a set of numbers, the two-way number partitioning problem is to divide them into two subsets, so that the sums of the numbers in the two subsets are as nearly equal as possible. The problem is NP-complete. Based on a polynomial-time heuristic due to Karmarkar and Karp, we present a new algorithm, called Complete Karmarkar-Karp (CKK), that optimally solves the general number-partitioning problem, and significantly outperforms the best previously-known algorithms for large problem instances. For numbers with twelve significant digits or less, CKK can optimally solve two-way partitioning problems of arbitrary size in practice. For numbers with greater precision, CKK first returns the Karmarkar-Karp solution, then continues to find better solutions as time allows. Over seven orders of magnitude improvement in solution quality is obtained in less than an hour of running time. Rather than building a single solution one element at a time, or modifying a complete solution, CKK constructs subsolutions, and combines them together in all possible ways. This approach may be effective for other NP-hard problems as well.
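
CKK's polynomial-time starting point, the Karmarkar-Karp differencing heuristic, can be sketched as follows; the complete anytime search, which also branches on placing the two largest numbers in the same subset, is not shown.

```python
# Karmarkar-Karp set-differencing heuristic, illustrative sketch.
import heapq

def karmarkar_karp(numbers):
    """Two-way partition difference found by repeated set differencing."""
    heap = [-x for x in numbers]               # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)               # largest remaining number
        b = -heapq.heappop(heap)               # second largest
        heapq.heappush(heap, -(a - b))         # commit them to opposite subsets
    return -heap[0] if heap else 0

# Example: karmarkar_karp([8, 7, 6, 5, 4]) returns 2 ({7, 5, 4} vs {8, 6}).
```
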
  • Finding optimal solutions to Rubik's Cube using pattern databases, Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97), Providence, RI, July, 1997, pp. 700-705. Reprinted in Games in AI Research, Jaap van den Herik and Hiroyuki Iida, Universiteit Maastricht, 1999, pp. 129-141. Abstract: We have found the first optimal solutions to random instances of Rubik's Cube. The median optimal solution length appears to be 18 moves. The algorithm used is iterative-deepening-A* (IDA*), with a lower-bound heuristic function based on large memory-based lookup tables, or "pattern databases" (Culberson and Schaeffer 1996). These tables store the exact number of moves required to solve various subgoals of the problem, in this case subsets of the individual movable cubies. We characterize the effectiveness of an admissible heuristic function by its expected value, and hypothesize that the overall performance of the program obeys a relation in which the product of the time and space used equals the size of the state space. Thus, the speed of the program increases linearly with the amount of memory available. As computer memories become larger and cheaper, we believe that this approach will become increasingly cost-effective.
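
A generic IDA* skeleton in the spirit of this work, assuming expand yields (child, cost) pairs and h is an admissible heuristic such as a pattern database lookup; this is a sketch, not the Rubik's Cube program itself.

```python
# Iterative-deepening-A* (IDA*), illustrative sketch.
FOUND = object()                               # sentinel for "goal reached"

def ida_star(start, is_goal, expand, h):
    bound = h(start)
    while True:
        t = _dfs(start, 0, bound, is_goal, expand, h, {start})
        if t is FOUND:
            return bound                       # cost of an optimal solution
        if t == float("inf"):
            return None                        # search space exhausted
        bound = t                              # smallest f that exceeded bound

def _dfs(node, g, bound, is_goal, expand, h, path):
    f = g + h(node)
    if f > bound:
        return f
    if is_goal(node):
        return FOUND
    minimum = float("inf")
    for child, cost in expand(node):
        if child in path:                      # avoid cycles along this path
            continue
        path.add(child)
        t = _dfs(child, g + cost, bound, is_goal, expand, h, path)
        path.discard(child)
        if t is FOUND:
            return FOUND
        minimum = min(minimum, t)
    return minimum
```
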
  • (with L.A. Taylor) Finding optimal solutions to the twenty-four puzzle, Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), Portland, OR, Aug. 1996, pp. 1202-1207. Abstract: We have found the first optimal solutions to random instances of the Twenty-Four Puzzle, the 5x5 version of the well-known sliding-tile puzzles. Our new contribution to this problem is a more powerful admissible heuristic function. We present a general theory for the automatic discovery of such heuristics, which is based on considering multiple subgoals simultaneously. In addition, we apply a technique for pruning duplicate nodes in depth-first search using a finite-state machine. Finally, we observe that as heuristic search problems are scaled up, more powerful heuristic functions become both necessary and cost-effective.
  • Improved limited discrepancy search, Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), Portland, OR, Aug. 1996, pp. 286-291. Abstract: We present an improvement to Harvey and Ginsberg's limited discrepancy search algorithm, which eliminates much of the redundancy in the original, by generating each path from the root to the maximum search depth only once. For a complete binary tree of depth $d$, this reduces the asymptotic complexity from $O(\frac{d+2}{2}2^d)$ to $O(2^d)$. The savings is much less in a partial tree search, or in a heavily pruned tree. The overhead of the improved algorithm on a complete $b$-ary tree is only a factor of $b/(b-1)$ compared to depth-first search. While this constant factor is greater on a heavily pruned tree, this improvement makes limited discrepancy search a viable alternative to depth-first search, whenever the entire tree may not be searched. Finally, we present both positive and negative empirical results on the utility of limited discrepancy search, for the problem of number partitioning.
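
For a binary tree of known uniform depth, the improved algorithm can be sketched as below. Having children return [preferred, alternative] (or [] at a leaf) and assuming a uniform depth are simplifications for illustration.

```python
# Improved limited discrepancy search (ILDS), illustrative sketch.
# Iteration k generates exactly the leaves whose paths contain k
# discrepancies, so no root-to-leaf path is generated twice.

def ilds(root, children, goal_test, max_depth):
    for k in range(max_depth + 1):             # k = discrepancies allowed
        result = _probe(root, k, max_depth, children, goal_test)
        if result is not None:
            return result
    return None

def _probe(node, k, remaining, children, goal_test):
    kids = children(node)
    if not kids:                               # leaf: all k must be spent
        return node if k == 0 and goal_test(node) else None
    preferred, alternative = kids
    if k > 0:                                  # spend a discrepancy here
        found = _probe(alternative, k - 1, remaining - 1, children, goal_test)
        if found is not None:
            return found
    if remaining - 1 >= k:                     # k discrepancies still fit below
        return _probe(preferred, k, remaining - 1, children, goal_test)
    return None
```
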
  • (with Max Chickering) Best-first minimax search, Artificial Intelligence, Vol. 84, No. 1-2, July 1996, pp. 299-337. Abstract: We present a simple selective minimax search algorithm for two-player games. It always expands next the node at the end of the current best line of play, which is the node that determines the minimax value of the root. The algorithm requires no information other than a static evaluation function, and its time overhead per node is similar to that of alpha-beta minimax. On random game trees, best-first minimax outperforms alpha-beta when both are given the same amount of computation. In the game of Othello, using the evaluation function from Bill, one of the world's best programs, best-first minimax also outplays alpha-beta. We also present an implementation of the algorithm that reduces its space complexity from exponential to linear in the search depth, at the cost of increased time complexity. Finally, we present a hybrid best-first extension algorithm that combines alpha-beta and best-first minimax, and performs significantly better than either pure algorithm in both domains.
  • (with Weixiong Zhang), A study of complexity transitions on the traveling salesman problem, Artificial Intelligence, Vol. 81, No. 1-2, March 1996, pp. 223-239. Abstract: The traveling salesman problem (TSP) is one of the best-known combinatorial optimization problems. Branch-and-Bound (BnB) is the best method for finding an optimal solution of the TSP. Previous research has shown that there exists a transition in the average computational complexity of BnB on random trees. We show experimentally that when the intercity distances of the asymmetric TSP are drawn uniformly from {0,1,2,...,r}, the complexity of BnB experiences an easy-hard transition as r increases. We also observe easy-hard-easy complexity transitions when asymmetric intercity distances are chosen from a log-normal distribution. This transition pattern is similar to one previously observed on the symmetric TSP. We then explain these different transition patterns by showing that the control parameter that determines the complexity is the number of distinct intercity distances.
  • (with Weixiong Zhang), Performance of linear-space search algorithms, Artificial Intelligence, Vol. 79, No. 2, Dec. 1995, pp. 241-292. Abstract: Search algorithms that use space linear in the search depth are widely employed in practice to solve difficult problems optimally, such as planning and scheduling. In this paper, we study the average-case performance of linear-space search algorithms, including depth-first branch-and-bound (DFBnB), iterative-deepening (ID), and recursive best-first search (RBFS). To facilitate our analyses, we use a random tree T(b,d) that has mean branching factor b, depth d, and node costs that are the sum of the costs of the edges from the root to the nodes. We prove that the expected number of nodes expanded by DFBnB on a random tree is no more than bd times the expected number of nodes expanded by best-first search (BFS) on the same tree, which usually requires space that is exponential in depth d. We also show that DFBnB is asymptotically optimal when BFS runs in exponential time, and ID and RBFS are asymptotically optimal when the edge costs of T(b,d) are integers. If bp is the expected number of children of a node whose costs are the same as that of their parent, then the expected number of nodes expanded by these three linear-space algorithms is exponential when bp<1, at most O(d^4) when bp=1, and at most quadratic when bp>1. In addition, we study the heuristic branching factor of T(b,d) and the effective branching factor of BFS, DFBnB, ID, and RBFS on T(b,d). Furthermore, we use our analytic results to explain a surprising anomaly in the performance of these algorithms, and to predict the existence of a complexity transition in the asymmetric traveling salesman problem.
  • Heuristic evaluation functions in artificial intelligence search algorithms, Minds and Machines, Vol. 5, No. 4, Nov. 1995, pp. 489-498. Abstract: We consider a special case of heuristics, namely numeric heuristic evaluation functions, and their use in artificial intelligence search algorithms. The problems they are applied to fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems. In a single-agent path-finding problem, such as the Fifteen Puzzle or the travelling salesman problem, a single agent searches for a shortest path from an initial state to a goal state. Two-player games, such as chess and checkers, involve an adversarial relationship between two players, each trying to win the game. In a constraint-satisfaction problem, such as the 8-Queens problem, the task is to find a state that satisfies a set of constraints. All of these problems are computationally intensive, and heuristic evaluation functions are used to reduce the amount of computation required to solve them. In each case we explain the nature of the evaluation functions used, how they are used in search algorithms, and how they can be automatically learned or acquired.
  • Space-efficient search algorithms, Computing Surveys, Vol. 27, No. 3, Sept. 1995, pp. 337-339.
  • (with Toru Ishida), Moving Target Search: A real-time search for changing goals, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 6, June, 1995, pp. 609-619. Abstract: We consider the case of heuristic search where the goal may change during the course of the search. For example, the goal may be a target that actively avoids the problem solver. We present a moving-target search algorithm (MTS) to solve this problem. We prove that if the average speed of the target is slower than that of the problem solver, then the problem solver is guaranteed to eventually reach the target in a connected problem space. The original MTS algorithm was constructed with the minimum operations necessary to guarantee its completeness, and hence is not very efficient. To improve its efficiency, we introduce ideas from the area of resource-bounded planning into MTS, including (1) commitment to goals, and (2) deliberation for selecting plans. Experimental results demonstrate that the improved MTS is 10 to 20 times more efficient than the original MTS in uncertain situations.
  • Linear-space best-first search, Artificial Intelligence, Vol. 62, No. 1, July 1993, pp. 41-78. Abstract: Best-first search is a general heuristic search algorithm that always expands next a frontier node of lowest cost. It includes as special cases breadth-first search, Dijkstra's single-source shortest-path algorithm, and the A* algorithm. Its applicability, however, is limited by its exponential memory requirement. Previous approaches to this problem, such as iterative deepening, do not expand nodes in best-first order if the cost function can decrease along a path. We present a linear-space best-first search algorithm (RBFS) that always explores new nodes in best-first order, regardless of the cost function, and expands fewer nodes than iterative deepening with a nondecreasing cost function. On the sliding-tile puzzles, RBFS with a nonmonotonic weighted evaluation function dramatically reduces computation time with only a small penalty in solution cost. In general, RBFS reduces the space complexity of best-first search from exponential to linear, at the cost of only a constant factor in time complexity in our experiments.
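
A compact sketch of RBFS follows, assuming expand yields (child, step cost) pairs; it follows the structure described in the paper but is illustrative rather than a reference implementation.

```python
# Recursive best-first search (RBFS), illustrative sketch. Each call explores
# below a node while its f-value stays within a bound set by the best
# alternative elsewhere; backed-up f-values keep memory linear in the depth.
INFINITY = float("inf")

def rbfs(start, is_goal, expand, h):
    """Return True if a goal is reachable, exploring in best-first order."""
    def search(state, g, f_stored, bound):
        if is_goal(state):
            return f_stored, True
        succs = [[max(g + c + h(s), f_stored), s, g + c]   # inherit parent's F
                 for s, c in expand(state)]
        if not succs:
            return INFINITY, False
        while True:
            succs.sort(key=lambda x: x[0])
            best_f, best_s, best_g = succs[0]
            if best_f > bound:
                return best_f, False           # back up best f beyond the bound
            alternative = succs[1][0] if len(succs) > 1 else INFINITY
            new_f, found = search(best_s, best_g, best_f,
                                  min(bound, alternative))
            if found:
                return new_f, True
            succs[0][0] = new_f                # store the backed-up value
    return search(start, 0, h(start), INFINITY)[1]
```
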
  • (with Curt Powley and Chris Ferguson), Depth-first heuristic search on a SIMD machine, Artificial Intelligence, Vol. 60, No. 2, April, 1993, pp. 199-242. Abstract: We present a parallel implementation of Iterative-Deepening-A*, a depth-first heuristic search, on the single-instruction, multiple-data (SIMD) Connection Machine. Heuristic search of an irregular tree represents a new application of SIMD machines. The main technical challenge is load balancing, and we explore three different techniques in combination. We also use a simple method for dynamically determining when to stop searching and start load balancing. We achieve an efficiency of 69%, for a speedup of 5685 on 8K processors, an efficiency of 64%, for a speedup of 10,435 on 16K processors, and an efficiency of 53%, for a speedup of 17,300 on 32K processors on the Fifteen Puzzle. On hard problem instances, we achieved efficiencies as high as 80%, for a speedup of 26,215 on 32K processors. Our analysis shows that work only needs to increase as P log P to maintain constant efficiency, where P is the number of processors. This high degree of scalability was confirmed empirically for the range of 16 to 32,768 (32K) processors.
  • Search, in the Encyclopedia of Artificial Intelligence, Second Edition, Stuart Shapiro (Ed.), John Wiley, New York, 1992, pp. 1460-1467.
  • Heuristics, in the Encyclopedia of Artificial Intelligence, Second Edition, Stuart Shapiro (Ed.), John Wiley, New York, 1992, pp. 611-615.
  • A simple solution to pursuit games, Proceedings of the 11th International Workshop on Distributed Artificial Intelligence, Glen Arbor, Mich., Feb. 1992, pp. 183-194. Abstract: We present a simple solution to a class of pursuit games that have become popular domains for the study of cooperative behavior in distributed artificial intelligence (DAI). In these games, a group of predators tries to surround and capture a prey. In our solution, each predator independently tries to minimize its distance to the prey, and maximize its distance to the nearest other predator. We use this example to argue that problems that appear to require complex cooperation may be solvable with much simpler approaches. In general, we view coordination as a property that emerges out of the interaction of simple agents greedily optimizing their utility functions subject to environmental constraints.
  • (with Curt Powley) Single-agent parallel window search, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 5, May 1991, pp. 466-477. Abstract: We apply parallel window search to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, we then consider node ordering. We discuss how global node ordering by minimum h among nodes with equal f=g+h values can reduce the time complexity of serial IDA* by reducing the time of the goal iteration. This approach is limited by the time to perform the iterations prior to the goal iteration. Finally, we combine the two ideas of parallel window search and node ordering to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, which we call simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
  • Multi-player alpha-beta pruning, Artificial Intelligence, Vol. 48, No. 1, February, 1991, pp. 99-111. Abstract: We consider the generalization of minimax search with alpha-beta pruning to non-cooperative, perfect-information games with more than two players. The minimax algorithm was generalized in 1986 by Luckhardt and Irani to the maxn algorithm applied to vectors of n-tuples representing the evaluations of each of the players. If we assume an upper bound on the sum of the evaluations for each player, and a lower bound on each individual evaluation, then shallow alpha-beta pruning is possible, but not deep pruning. In the best case, the asymptotic branching factor is reduced to (1+sqrt(4b-3))/2. In the average case, however, pruning does not reduce the asymptotic branching factor. Thus alpha-beta pruning is found to be effective only in the special case of two-player games. In addition, we show that it is an optimal directional algorithm for two players.
  • Depth-limited search for real-time problem solving, The Journal of Real-Time Systems, Vol. 2, No. 1, May 1990, pp. 7-24. Abstract: We propose depth-limited heuristic search as a general paradigm for real-time problem solving in a dynamic environment. When combined with iterative-deepening, it provides the ability to commit to an action almost instantaneously, but allows the quality of that decision to improve as long as time is available. Once a deadline is reached, the best decision arrived at is executed. We illustrate the paradigm in three different settings, corresponding to single-agent search, two-player games, and multi-agent problem solving. First we review two-player minimax search with alpha-beta pruning. Minimax can be extended to the maxn algorithm for more than two players, which admits a much weaker form of alpha-beta pruning. Finally, we explore real-time search algorithms for single-agent problems. Minimax is specialized to minimin, which allows a very powerful alpha pruning algorithm. In addition, real-time-A* allows backtracking while still guaranteeing a solution and making locally optimal decisions.
  • Real-time heuristic search, Artificial Intelligence, Vol. 42, No. 2-3, March 1990, pp. 189-211. Abstract: We apply the two-player game assumptions of limited search horizon and commitment to moves in constant time, to single-agent heuristic search problems. We present a variation of minimax lookahead search, and an analog to alpha-beta pruning that significantly improves the efficiency of the algorithm. Paradoxically, the search horizon reachable with this algorithm increases with increasing branching factor. In addition, we present a new algorithm, called Real-Time-A*, for interleaving planning and execution. We prove that the algorithm makes locally optimal decisions and is guaranteed to find a solution. We also present a learning version of this algorithm that improves its performance over successive problem solving trials by learning more accurate heuristic values, and prove that the learned values converge to their exact values along every optimal path. These algorithms effectively solve significantly larger problems than have previously been solvable using heuristic evaluation functions.
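
A minimal sketch of Real-Time-A* with depth-one lookahead, assuming unit edge costs, an initial heuristic h0, and that every non-goal state has at least one neighbor; deeper lookahead and the learning variant (LRTA*) are omitted.

```python
# Real-Time-A* (RTA*), illustrative sketch: move to the neighbor of least
# f = g + h, storing the second-best f at the state being left, which is
# what makes later backtracking rational.

def rta_star(start, is_goal, neighbors, h0, max_steps=10_000):
    h = {}                                     # updated heuristic values
    state, path = start, [start]
    for _ in range(max_steps):
        if is_goal(state):
            return path
        scored = sorted(((1 + h.get(n, h0(n)), n) for n in neighbors(state)),
                        key=lambda t: t[0])
        # Having moved away, returning here is only worth the second-best option.
        h[state] = scored[1][0] if len(scored) > 1 else scored[0][0]
        state = scored[0][1]                   # commit to the best neighbor
        path.append(state)
    return None                                # step budget exhausted
```
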
  • (with C. Ferguson) Distributed tree search and its application to alpha-beta pruning, Proceedings of the National Conference on Artificial Intelligence (AAAI-88), Minneapolis, MN, August, 1988, pp. 128-132. Abstract: We propose a parallel tree search algorithm based on the idea of tree-decomposition in which different processors search different parts of the tree. This generic algorithm effectively searches irregular trees using an arbitrary number of processors without shared memory or centralized control. The algorithm is independent of the particular type of tree search, such as single-agent or two-player game, and independent of any particular processor allocation strategy. Uniprocessor depth-first and breadth-first search are special cases of this generic algorithm. The algorithm has been implemented for alpha-beta search in the game of Othello on a 32-node Hypercube multiprocessor. The number of node evaluations grows approximately linearly with the number of processors $P$, resulting in an overall speedup for alpha-beta with random node ordering of $P^{.75}$. Furthermore we present a novel processor allocation strategy, called Bound-and-Branch, for parallel alpha-beta search that achieves linear speedup in the case of perfect node ordering. Using this strategy, an actual speedup of 12 is obtained with 32 processors.
  • (with J. Pearl) Search techniques, in Annual Review of Computer Science, Vol. 2, Annual Reviews Inc., Palo Alto, Ca., 1987, pp. 451-467.
  • Planning as search: A quantitative approach, Artificial Intelligence, Vol. 33, No. 1, 1987, pp. 65-88. Reprinted in Readings in Planning, J. Allen, J. Hendler, and A. Tate (Eds.), Morgan Kaufmann, 1990, pp. 566-577. Abstract: We present the thesis that planning can be viewed as problem solving search using subgoals, macro-operators, and abstraction as knowledge sources. Our goal is to quantify problem solving performance using these sources of knowledge. New results include the identification of subgoal distance as a fundamental measure of problem difficulty, a multiplicative time-space tradeoff for macro-operators, and an analysis of abstraction which concludes that abstraction hierarchies can reduce exponential problems to linear complexity.
  • Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence, Vol. 27, No. 1, pp. 97-109, 1985. Reprinted in Expert Systems, A Software Methodology for Modern Applications, P.G. Raeth (Ed.), IEEE Computer Society Press, Washington, 1990. Abstract: The complexities of various search algorithms are considered in terms of time, space, and cost of solution path. It is known that breadth-first search requires too much space and depth-first search can use too much time and doesn't always find a cheapest path. A depth-first iterative-deepening algorithm is shown to be asymptotically optimal along all three dimensions for exponential tree searches. The algorithm has been used successfully in chess programs, has been effectively combined with bi-directional search, and has been applied to best-first heuristic search as well. This heuristic depth-first iterative-deepening algorithm is the only known algorithm that is capable of finding optimal solutions to randomly generated instances of the Fifteen Puzzle within practical resource limits.
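
The algorithm itself fits in a few lines; this sketch assumes an expand successor function and unit-cost edges.

```python
# Depth-first iterative deepening (DFID), illustrative sketch. Each iteration
# repeats the previous one plus one more level, yet on exponential trees the
# final iteration dominates, so the overhead is only a constant factor.

def dfid(start, is_goal, expand, max_depth=50):
    for limit in range(max_depth + 1):
        depth = _dls(start, limit, is_goal, expand)
        if depth is not None:
            return depth                       # shallowest goal depth: optimal
    return None

def _dls(state, limit, is_goal, expand):
    if is_goal(state):
        return 0
    if limit == 0:
        return None
    for child in expand(state):
        sub = _dls(child, limit - 1, is_goal, expand)
        if sub is not None:
            return sub + 1
    return None
```
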
  • Learning to Solve Problems by Searching for Macro-Operators, Pitman Publishing Ltd, London, 1985.
  • Macro-operators: A weak method for learning, Artificial Intelligence, Vol. 26, No. 1, pp. 35-77, 1985. Abstract: This article explores the idea of learning efficient strategies for solving problems by searching for macro-operators. A macro-operator, or macro for short, is simply a sequence of operators chosen from the primitive operators of a problem. The technique is particularly useful for problems with non-serializable subgoals, such as Rubik's Cube, for which other weak methods fail. Both a problem-solving program and a learning program are described in detail. The performance of these programs is analyzed in terms of the number of macros required to solve all problem instances, the length of the resulting solutions (expressed as the number of primitive moves), and the amount of time necessary to learn the macros. In addition, a theory of why the method works, and a characterization of the range of problems for which it is useful are presented. The theory introduces a new type of problem structure called operator decomposability. Finally, it is concluded that the macro technique is a new kind of weak method, a method for learning as opposed to problem solving.
  • Korf, R.E., Inversion of applicative programs, Proceedings of the Seventh International Joint Conference on Artificial Intelligence (IJCAI-81), Vancouver, Canada, August 1981, pp. 1007-1009. Abstract: A technique is presented for taking a program written in pure LISP and automatically deriving a program which computes the inverse function of the given program. The scheme is based on a set of rules for inverting the primitive functions plus a method of solving for variables introduced by non-invertible functions. As an example, a program to reverse a list is inverted.
  • Toward a model of representation changes, Artificial Intelligence, Vol. 14, No. 1, pp. 41-78, October 1980. Abstract: This paper presents the first steps in the development of a computer model of the process of changing representations in problem solving. The task of discovering representations that yield efficient solution strategies for problems is viewed as heuristic search in the space of representations. Two dimensions of this representation space are information structure and information quantity. Changes of representation are characterized as isomorphisms and homomorphisms, corresponding to changes of information structure and information quantity, respectively. A language for expressing representations is given. Also, a language for describing representation transformation and an interpreter for applying the transformations to representations has been developed. In addition, transformations can be automatically inverted and composed to generate new transformations. Among the example problems used to illustrate and support this model are tic-tac-toe, integer arithmetic, the Tower of Hanoi problem, the arrow puzzle, the five puzzle, the mutilated checkerboard problem, and floor plan design. The system has also been used to generate some new NP-complete problems.
  • A shape independent theory of space allocation, Environment and Planning, Vol. 4, No. 1, pp. 37-50, July 1977. Abstract: A theory of space allocation in architectural design is presented. The theory is completely independent of the shapes of the spaces. The problem is broken down into four hierarchical levels of abstraction. The top level is the number of spaces. The second level consists of the adjacencies between the spaces, represented as abstract graphs. The third level is concerned with the different planar embeddings or geometries of the adjacency graphs. The bottom level is represented by labelled bubble diagrams. At each level, the number of design alternatives is finite and it is shown how they can be systematically enumerated.
EDUCATION
PhD (1983) Carnegie-Mellon University
AWARDS AND RECOGNITION
  • AAAI Fellow