Random optimization (RO) refers to a class of numerical optimization techniques that can solve optimization problems without requiring gradient information. Functions that are neither continuous nor differentiable are therefore amenable to RO. Methods of this kind are also known as direct-search, derivative-free, or black-box optimization methods.

Matyas is credited with coining the term random optimization, as he gave an early exposition of the concept together with a basic mathematical analysis. RO moves to better positions in the search space by sampling candidates from a normal distribution centered on the current position.
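
As a rough illustration, this basic scheme might be sketched in Python as follows. The function name, the step size sigma, and the iteration budget are illustrative choices here, not part of Matyas's original formulation.

```python
import numpy as np

def basic_random_optimization(f, x0, sigma=0.1, max_iters=10_000, rng=None):
    """Minimize f by sampling candidates from a normal distribution
    around the current position and keeping any improvement."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iters):
        candidate = x + rng.normal(scale=sigma, size=x.shape)  # Gaussian step
        fc = f(candidate)
        if fc < fx:  # Accept the move only if the candidate is better.
            x, fx = candidate, fc
    return x, fx

# Example: minimize a simple unimodal function (a quadratic bowl).
best_x, best_f = basic_random_optimization(lambda x: np.sum(x ** 2), x0=[2.0, -3.0])
```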

Using a limit proof, Matyas showed that the basic form of RO converges to the optimum of a simple unimodal function after infinitely many iterations. This demonstration has limited practical value, however, because only a finite number of iterations can ever be performed. In fact, such a theoretical limit proof also shows that simple random sampling of the search space will eventually produce a sample arbitrarily close to the optimum.
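
To make that last point concrete, here is a minimal sketch of pure random sampling (uniform random search); the function name, bounds format, and sample budget are assumptions for illustration.

```python
import numpy as np

def pure_random_search(f, bounds, n_samples=100_000, rng=None):
    """Draw points uniformly at random from the search space and keep
    the best one; with enough draws, some sample lands arbitrarily
    close to the optimum."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, dtype=float).T  # one (low, high) pair per dimension
    best_x, best_f = None, np.inf
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: a quadratic bowl, searched over [-5, 5] x [-5, 5].
best_x, best_f = pure_random_search(lambda x: np.sum(x ** 2), bounds=[(-5, 5), (-5, 5)])
```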

Researchers have also shown mathematically that, under certain mild conditions, RO variants employing other probability distributions for sampling are guaranteed to converge to a region surrounding the optimum. Dorea estimated the number of iterations required to approach the optimum. Empirical tests performed by Sarma on two real-world problems showed that the optimum is approached very slowly, and that the methods fail to find a solution of adequate fitness unless the search is started sufficiently close to the optimum.

Why use Randomized Optimization?

Even if the solution to the preceding One-Max example were not immediately apparent, we could calculate the fitness value for every possible state vector, x, and then select the best one. For more intricate problems, however, this exhaustive approach cannot always be completed in a reasonable amount of time. Randomized optimization eliminates this difficulty.
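
For a small problem, the exhaustive approach is easy to write down. A short sketch, assuming the standard One-Max fitness (the count of ones in the bit vector):

```python
from itertools import product

def one_max(state):
    """One-Max fitness: the number of ones in the bit vector."""
    return sum(state)

# Exhaustive search: evaluate the fitness of every possible state vector.
# This is feasible for small n, but the search space grows as 2**n.
n = 10
best_state = max(product([0, 1], repeat=n), key=one_max)
print(best_state, one_max(best_state))  # all-ones vector, fitness n
```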

Typically, randomized optimization algorithms begin with an initial "best" state vector (or a population of state vectors) and then generate a new state vector at random, frequently a neighbor of the current "best" state. If the new vector has a better fitness value, it becomes the "best" state vector. This process is repeated until no better state vector can be found or a specified number of attempts is exhausted.
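
A minimal sketch of this loop, applied to One-Max with single-bit-flip neighbors; the stopping rule and parameter names (such as max_attempts) are illustrative assumptions, not a fixed standard.

```python
import random

def random_hill_climb(fitness, n_bits, max_attempts=100, rng=None):
    """Start from a random bit vector and repeatedly flip one random bit
    (a random neighbor), keeping the neighbor whenever it is fitter.
    Stop after max_attempts consecutive attempts without improvement."""
    rng = rng if rng is not None else random.Random()
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    best_fit = fitness(best)
    attempts = 0
    while attempts < max_attempts:
        neighbor = best[:]
        i = rng.randrange(n_bits)
        neighbor[i] = 1 - neighbor[i]  # Flip one randomly chosen bit.
        if fitness(neighbor) > best_fit:
            best, best_fit = neighbor, fitness(neighbor)
            attempts = 0  # Reset the counter after an improvement.
        else:
            attempts += 1
    return best, best_fit

# Example: apply the climber to the One-Max problem.
state, fit = random_hill_climb(lambda s: sum(s), n_bits=20)
```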

A randomized optimization algorithm may not find the globally optimal solution. However, if a sufficient number of attempts to find a better state are made at each step, the algorithm will typically produce a "good" solution to the problem. There is a trade-off between the time spent searching for the optimal solution and the quality of the answer.
