- ISBN: 9780415621540 | 0415621542
- Cover: Hardcover
- Copyright: 2/10/2014
Algorithms are usually classified by their order of convergence (linear, quadratic, etc.) and by their convergence rate, defined as an asymptotic bound as the number of iterates goes to infinity. The rate is calculated with respect to the number of steps (iterates) the algorithm takes to converge, with no regard for how much computational effort is expended at each iterate. This volume demonstrates that the use of partial information, refined at each new iteration of the algorithm, can increase the algorithm's efficiency while still guaranteeing convergence to the optimal solution, provided that the rate of refinement is carefully chosen.

Building on approximate algorithms for problem solving, the authors present a new theory that serves as a general framework for the problem, and they consider both general iterative methods and, in particular, Markov decision processes. A definition of the convergence rate better suited to measuring computational effort is derived for the proposed class of approximate iterative algorithms. In addition, an optimal refinement strategy is derived that maximizes the rate of convergence with respect to computational effort. The result is a class of algorithms that makes optimal use of the available computational resources: for a given, fixed amount of computation time, the proposed algorithms get closer to the solution than any other, including the classical (exact) algorithm. Many numerical examples are included for illustration.

This volume is intended for mathematicians, engineers, and computer scientists who work on learning processes in numerical analysis and are involved with optimization, optimal control, decision analysis, and machine learning.
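To illustrate the general idea (not the book's specific constructions), here is a minimal Python sketch of an approximate iterative algorithm: a fixed-point iteration for x = cos(x) in which each application of the map is computed only approximately, with the per-iterate effort (number of Taylor-series terms, a refinement schedule chosen here purely for illustration) increasing as the iteration proceeds.

```python
import math

def approx_cos(x, n_terms):
    """Approximate cos(x) by a truncated Taylor series with n_terms terms.

    More terms = more per-iterate computational effort, smaller error.
    """
    return sum((-1) ** i * x ** (2 * i) / math.factorial(2 * i)
               for i in range(n_terms))

def approximate_fixed_point(x0=1.0, outer_iters=30):
    """Iterate x <- approx_cos(x), refining the approximation each step.

    Early iterates use a crude (cheap) approximation of the map; the
    refinement schedule n_terms = 2 + k // 3 is an illustrative
    assumption, not the optimal schedule derived in the book.
    """
    x = x0
    for k in range(outer_iters):
        n_terms = 2 + k // 3  # spend more effort as we near the solution
        x = approx_cos(x, n_terms)
    return x

# The iteration still converges to the true fixed point of cos,
# x* ~ 0.7390851, even though early iterates used rough approximations.
print(approximate_fixed_point())
```

The point mirrored here is the book's theme: approximation error at early iterates is contracted away by later steps, so cheap early iterations waste no accuracy, while effort is concentrated where precision actually matters.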