Plan

I.    Introduction. Convexity         Ch.1, 2.

II.     Search Methods for Unconstrained Optimization

Line search methods            Ch.3
Trust region methods           Ch.4
Conjugate gradient methods     Ch.5
Quasi-Newton methods           Ch.6
Derivative-free methods        Ch.9

III.     Search Methods for Constrained Optimization
Necessary conditions,
Lagrange multipliers, Duality      Ch.12
Linear Programming                 Ch.13
Nonlinear Optimization             Ch.15
Quadratic Programming              Ch.16
Penalty, Augmented Lagrangian      Ch.17

IV.    Review of Stochastic methods, Genetic algorithms, Minimax.

V.     Project presentations

Beautiful and practical, optimization theory has been developing since the 1960s, when computers became available; every new generation of computers has allowed new types of problems to be solved and has called for new optimization methods. The theory aims at reliable methods for finding extrema of a function of several variables through an intelligent arrangement of its evaluations (measurements). This theory is vitally important for modern engineering and planning, which incorporate optimization at every step of the decision-making process.

This course discusses various search methods, such as Conjugate Gradients and the Quasi-Newton method, as well as methods for constrained optimization, including Linear and Quadratic Programming. We will also briefly review genetic algorithms, which mimic evolution, and stochastic algorithms, which account for uncertainties in mathematical models.
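To give a flavor of the search methods covered in the course, here is a minimal sketch (not one of the course's prescribed implementations) of steepest descent with a backtracking Armijo line search, applied to a simple convex quadratic whose minimizer is known. The function, starting point, and parameter values are illustrative choices, not part of the syllabus.

```python
import numpy as np

def backtracking_line_search(f, grad, x, p, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the step length alpha until the Armijo sufficient-decrease
    condition f(x + alpha*p) <= f(x) + c*alpha*grad(x)^T p holds."""
    fx, gx = f(x), grad(x)
    while f(x + alpha * p) > fx + c * alpha * gx.dot(p):
        alpha *= rho
    return alpha

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=10000):
    """Steepest descent: repeatedly step along the negative gradient,
    with the step length chosen by backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop when the gradient is nearly zero
            break
        p = -g  # steepest-descent direction
        alpha = backtracking_line_search(f, grad, x, p)
        x = x + alpha * p
    return x

# Illustrative example: minimize f(x) = (x0 - 1)^2 + 10*(x1 + 2)^2,
# a convex quadratic whose unique minimizer is (1, -2).
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
x_star = gradient_descent(f, grad, np.array([0.0, 0.0]))
```

More sophisticated methods studied in the course (conjugate gradients, quasi-Newton) replace the plain negative-gradient direction with better-informed search directions, but the overall template of "choose a direction, then a step length" is the same.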

The course work includes several homework assignments that ask students to implement the studied methods, and a final project that will be presented orally in class.