I’ve been doing some reading for the upcoming ai-class.com course and have some philosophy to share as a result.
Searches are a mainstay of optimization algorithms, and a good search needs a good estimator. At every step of the way, the algorithm knows the real cost incurred so far and needs estimates of the cost of each possible next step. But, in general, what policy suits an estimation algorithm best: optimism or pessimism? (Note: I ruled out “realistic” as it implies we already know the answer to the problem.)
An optimistic estimator underestimates the true cost of a solution. The final, optimal solution will always cost more than the optimistic estimate. If we, as humans, were to give optimistic estimates, we’d always be wrong and underfunding our endeavors. Sounds bad, right? It turns out, though, that an optimistic estimate always leads us to the lowest real cost. Why? Because with an unattainably low estimate, an algorithm keeps searching. Eventually, it finds the lowest cost, concludes that all remaining possibilities cost more, and “settles” for the optimal point.
A pessimistic estimator overestimates the real cost. When we humans overestimate and then find a solution that costs less, we’re heroes. When an algorithm follows a pessimistic estimator, it stops as soon as it finds a solution that costs less than the estimate. That solution may or may not be the actual optimum. Dangerously, pessimism teaches us that pessimism works: it trains us to double our estimates and to only trust those who overestimate. It rarely shows us an optimal solution, and we never really find out what we’re missing.
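To make this concrete, here’s a minimal sketch of the idea using an A*-style search on a made-up toy graph (the graph, node names, and heuristic values are all my own illustration, not from any particular course material). The optimistic heuristic never exceeds the true remaining cost and the search keeps looking until it proves nothing cheaper exists; the pessimistic heuristic overestimates everywhere, so the search settles for the first route that beats its inflated expectations.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*-style search; returns (cost, path). Optimality depends on h."""
    # Priority queue of (f = g + h, g, node, path-so-far).
    frontier = [(h[start], 0, start, [start])]
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(
                    frontier, (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None

# Toy graph: the optimal route S -> A -> G costs 2; S -> B -> G costs 4.
graph = {'S': [('A', 1), ('B', 1)], 'A': [('G', 1)], 'B': [('G', 3)]}

optimistic  = {'S': 1, 'A': 1, 'B': 1,  'G': 0}  # never exceeds the true cost
pessimistic = {'S': 5, 'A': 10, 'B': 4, 'G': 0}  # overestimates everywhere

print(a_star(graph, optimistic, 'S', 'G'))   # (2, ['S', 'A', 'G']) -- the optimum
print(a_star(graph, pessimistic, 'S', 'G'))  # (4, ['S', 'B', 'G']) -- settled early
```

With the pessimistic heuristic, the cheap route through A looks expensive (estimated 11), so the search never bothers to expand it and happily reports the costlier route, which still comes in under its gloomy estimate.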
So, what are we to do? Both options are usually wrong, though the pessimistic one might actually find a high-cost solution that matches its estimate, and being correct sure feels good. It’s no wonder people question the results of optimization algorithms – the algorithms are just trying to find the best answer, while the people are just trying to be right.