

The problem of multidimensional minimization requires finding a point @math{x} such that the scalar function,

@math{f(x_1, \dots, x_n)}

takes a value which is lower than at any neighboring point. For smooth functions the gradient @math{g = \nabla f} vanishes at the minimum. In general there are no bracketing methods available for the minimization of @math{n}-dimensional functions. All algorithms proceed from an initial guess using a search algorithm which attempts to move in a downhill direction. A one-dimensional line minimization is performed along this direction until the lowest point is found to a suitable tolerance. The search direction is then updated with local information from the function and its derivatives, and the whole process is repeated until the true @math{n}-dimensional minimum is found.

Several minimization algorithms are available within a single framework. The user provides a high-level driver for the algorithms, and the library provides the individual functions necessary for each of the steps. There are three main phases of the iteration. The steps are,

* initialize minimizer state, s, for algorithm T
* update s using the iteration T
* test s for convergence, and repeat iteration if necessary

Each iteration step consists either of an improvement to the line minimization in the current direction or an update to the search direction itself. The state for the minimizers is held in a gsl_multimin_fdfminimizer struct.
