Many important problems may be expressed in terms of nonlinear multivariate unconstrained optimization. The basic unconstrained optimization problem is to minimize a real-valued function f(x) over all vectors x ∈ ℝⁿ. Many techniques for solving such problems are available when f is twice continuously differentiable. Two broad classes of algorithms for the unconstrained minimization problem are trust-region algorithms and line-search algorithms.

These two classes may be combined by performing a line search along the direction proposed by the solution of the trust-region subproblem. We develop three combination methods, each of which requires that a sufficient decrease condition be met at every step. The first of the new algorithms uses a backtracking line search based on the Armijo condition; the other two use a more sophisticated search based on the Wolfe conditions. In all three algorithms the line search is used to control the trust-region radius. We present strong first- and second-order convergence theorems for these new methods, an analysis of their asymptotic convergence properties, and the results of numerical experiments with the new algorithms.

It is also possible to use the Wolfe-condition-based algorithms to define quasi-Newton methods that use the BFGS update. These quasi-Newton methods are robust and efficient.
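For concreteness, the Armijo and Wolfe conditions referred to above have the following standard forms; the constants c_1, c_2 and the exact variants used in this work are assumptions on our part, not taken from the abstract.

```latex
% Armijo (sufficient decrease) condition on a step length \alpha > 0
% taken along a descent direction p_k, with constant 0 < c_1 < 1:
\[
  f(x_k + \alpha p_k) \le f(x_k) + c_1 \alpha \nabla f(x_k)^{\mathsf{T}} p_k
\]
% The (weak) Wolfe conditions add a curvature requirement,
% with a second constant c_1 < c_2 < 1:
\[
  \nabla f(x_k + \alpha p_k)^{\mathsf{T}} p_k \ge c_2 \nabla f(x_k)^{\mathsf{T}} p_k
\]
```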
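A minimal sketch of a backtracking search enforcing the Armijo condition is given below. The function name, default parameters, and test problem are illustrative choices, not the thesis's implementation.

```python
import numpy as np

def backtracking_armijo(f, grad_f, x, p, c1=1e-4, rho=0.5, alpha=1.0, max_iter=50):
    """Backtracking line search enforcing the Armijo sufficient-decrease
    condition: f(x + a*p) <= f(x) + c1 * a * grad_f(x)^T p.

    f, grad_f : callables; x : current iterate; p : descent direction.
    Returns the first step length alpha that satisfies the condition
    (or the smallest trial step after max_iter halvings).
    """
    fx = f(x)
    slope = grad_f(x) @ p          # directional derivative; must be < 0
    for _ in range(max_iter):
        if f(x + alpha * p) <= fx + c1 * alpha * slope:
            return alpha
        alpha *= rho               # shrink the step and retry
    return alpha

# Example: minimize f(x) = x^T x along the steepest-descent direction.
if __name__ == "__main__":
    f = lambda x: x @ x
    g = lambda x: 2 * x
    x0 = np.array([3.0, -4.0])
    p = -g(x0)
    print(backtracking_armijo(f, g, x0, p))   # 0.5, the exact line minimizer
```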
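The BFGS update mentioned in the final paragraph has the standard form below; its interaction with the Wolfe conditions underlies the robustness claim, since the curvature condition ensures that the update is well defined.

```latex
% BFGS update of the Hessian approximation B_k, where
% s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k):
\[
  B_{k+1} = B_k
          - \frac{B_k s_k s_k^{\mathsf{T}} B_k}{s_k^{\mathsf{T}} B_k s_k}
          + \frac{y_k y_k^{\mathsf{T}}}{y_k^{\mathsf{T}} s_k}
\]
% The Wolfe curvature condition guarantees y_k^{\mathsf{T}} s_k > 0,
% which keeps each B_{k+1} positive definite.
```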