Hill climbing is a mathematical optimization technique belonging to the family of local search methods. It starts from an arbitrary solution and repeatedly makes small changes to it; whenever a change improves the solution, the change is kept, and the process continues until no further improvement can be found.

Hill climbing can be applied to the travelling salesman problem, for instance. It is easy to generate a solution that visits all cities, but such a solution will usually be far from optimal. The algorithm begins with such a solution and then makes small adjustments, such as swapping the order in which two cities are visited. Eventually, it may arrive at a much shorter route, though it is not guaranteed to find the best one.
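The travelling-salesman example above can be sketched in a few lines of Python. This is a minimal illustration, not a production solver: the distance matrix, iteration count, and the simple "swap two cities" move are all assumptions made for the sketch.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour, given a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hill_climb_tsp(dist, iterations=10000, seed=0):
    """Start from a random tour and repeatedly swap two cities,
    keeping the swap only when it shortens the tour."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)                                # arbitrary initial solution
    best = tour_length(tour, dist)
    for _ in range(iterations):
        i, j = rng.sample(range(n), 2)
        tour[i], tour[j] = tour[j], tour[i]          # small adjustment
        length = tour_length(tour, dist)
        if length < best:                            # improvement: keep it
            best = length
        else:                                        # otherwise undo the swap
            tour[i], tour[j] = tour[j], tour[i]
    return tour, best
```

On small instances the swap neighbourhood usually suffices; on larger ones the loop stalls at a local optimum, which is exactly the weakness discussed below.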

Problems

Hill climbing finds the optimal solution for convex problems. For other problems, it finds only local optima: solutions that no neighbouring configuration can improve, but that are not necessarily the best solution in the entire search space (the global optimum). Examples of algorithms that solve convex problems by hill climbing include the simplex algorithm for linear programming and binary search.

To avoid getting stuck in local optima: 

  • one could use restarts (also called repeated local search) or 
  • more complex methods based on iterations (like iterated local search) or 
  • memory (like reactive search optimization) or 
  • memory-less stochastic modifications (like simulated annealing).
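Of the escape strategies listed above, simulated annealing is the easiest to sketch: it occasionally accepts a *worse* neighbour, with a probability that shrinks as a "temperature" cools. The fitness function, cooling schedule, and parameter values below are toy assumptions chosen to show the idea, not a canonical implementation.

```python
import math
import random

def simulated_annealing(f, x, neighbours, iterations=5000, t0=2.0, seed=0):
    """Maximize f, sometimes accepting downhill moves so the search can
    escape local maxima that plain hill climbing would get stuck in."""
    rng = random.Random(seed)
    best = x
    for k in range(1, iterations + 1):
        t = t0 * 0.999 ** k                  # geometric cooling (assumption)
        candidate = rng.choice(neighbours(x))
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worsenings with prob exp(delta / t)
        if delta > 0 or rng.random() < math.exp(delta / t):
            x = candidate
        if f(x) > f(best):
            best = x
    return best
```

Started at a local maximum, plain hill climbing would stop immediately; simulated annealing can still wander across a valley to a higher peak while the temperature is high.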

Algorithm

Hill climbing is often the first choice among optimization algorithms because it is easy to implement. Artificial intelligence uses it to move from a starting node toward a goal state, and related algorithms differ in how they choose the starting node and the successor nodes. Even though more advanced algorithms such as simulated annealing or tabu search may give better results, hill climbing can work just as well in some situations. When there is little time to search, as in real-time systems, hill climbing can often return a better result than other algorithms: a few steps usually lead to a reasonable solution. Bubble sort can be viewed as a hill-climbing algorithm, because each exchange of two neighbouring elements reduces the number of out-of-order pairs.
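The bubble-sort remark can be made concrete: if we treat the number of out-of-order pairs (inversions) as the cost, each adjacent swap strictly lowers it. A minimal sketch, with `inversions` and `bubble_sort_step` as hypothetical helper names:

```python
def inversions(a):
    """Number of out-of-order pairs -- the 'cost' bubble sort minimizes."""
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a)) if a[i] > a[j])

def bubble_sort_step(a):
    """One pass of bubble sort. Each swap of two adjacent elements removes
    exactly one inversion, so the cost decreases monotonically -- a hill
    climb toward the sorted (zero-inversion) state."""
    a = list(a)
    for i in range(len(a) - 1):
        if a[i] > a[i + 1]:
            a[i], a[i + 1] = a[i + 1], a[i]
    return a
```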

Variants

In simple hill climbing, the first successor that is closer to the solution is selected, whereas in steepest ascent hill climbing all successors are evaluated and the one closest to the solution is selected. Both versions fail if there is no closer node, which can happen when the search space contains local maxima that are not solutions. Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path rather than just one.
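The two variants differ only in how the next node is picked, which a short sketch makes clear. The objective and neighbourhood below are toy assumptions; `f` is maximized.

```python
def simple_hill_climb(f, x, neighbours):
    """Simple hill climbing: move to the *first* successor that improves;
    stop when no successor does."""
    while True:
        for n in neighbours(x):
            if f(n) > f(x):
                x = n
                break
        else:                      # no improving successor found
            return x

def steepest_ascent(f, x, neighbours):
    """Steepest-ascent hill climbing: evaluate *all* successors each step
    and move to the best one; stop at a local maximum."""
    while True:
        best = max(neighbours(x), key=f, default=x)
        if f(best) <= f(x):
            return x
        x = best
```

On a single-peaked objective both reach the same maximum; they can diverge when several improving successors exist, since simple hill climbing commits to the first one it sees.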

Stochastic hill climbing does not examine all neighbours before deciding how to move. Instead, it selects a neighbour at random and decides, based on how much that neighbour improves on the current solution, whether to move to it or to examine another. Coordinate descent, in turn, does a line search along a single coordinate direction from the current point at each iteration; some implementations of coordinate descent select a random coordinate direction for each iteration.
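A minimal sketch of the stochastic variant, assuming the simplest acceptance rule (move only when the randomly drawn neighbour improves); the objective and neighbourhood are again toy assumptions:

```python
import random

def stochastic_hill_climb(f, x, neighbours, iterations=1000, seed=0):
    """Draw one random neighbour per step instead of scanning them all,
    and move only when that neighbour improves on the current solution."""
    rng = random.Random(seed)
    for _ in range(iterations):
        candidate = rng.choice(neighbours(x))
        if f(candidate) > f(x):
            x = candidate
    return x
```

Because each step evaluates a single neighbour, this variant is useful when the neighbourhood is large or evaluating `f` is expensive.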

Furthermore, random-restart hill climbing, also known as shotgun hill climbing, is a meta-algorithm built on top of the hill climbing algorithm: it runs hill climbing repeatedly from random initial conditions and keeps the best result found. In many instances, random-restart hill climbing is a remarkably effective algorithm; it is often better to spend CPU time exploring the search space than to carefully optimize from a single initial condition.
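The meta-algorithm can be sketched as a loop over an inner hill climb. The inner climber, the two-peak fitness values, and the restart count below are illustrative assumptions:

```python
import random

def hill_climb(f, x, neighbours):
    """Greedy ascent to a local maximum (the inner algorithm)."""
    while True:
        best = max(neighbours(x), key=f)
        if f(best) <= f(x):
            return x
        x = best

def random_restart(f, neighbours, starts, restarts=20, seed=0):
    """Shotgun hill climbing: run hill climbing from several random
    starting points and keep the best local maximum found."""
    rng = random.Random(seed)
    best = hill_climb(f, rng.choice(starts), neighbours)
    for _ in range(restarts - 1):
        candidate = hill_climb(f, rng.choice(starts), neighbours)
        if f(candidate) > f(best):
            best = candidate
    return best
```

Each restart is independent, so on a landscape with several peaks the chance of missing the global optimum shrinks geometrically with the number of restarts.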
