The Optimize Command
 
Specifying the Method and Objective
Identifying the Control
Starting Values
Specifying Gradients
Calculating the Hessian
Numeric Derivatives
Iteration and Convergence
Advanced Optimization Options
Trust Region
Step Method
Scale
Objective Accuracy
Status Functions
Error Handling
The syntax for the optimize command is:
optimize(options) subroutine_name(arguments)
where subroutine_name is the name of a subroutine defined in your program (or in an included program). The full set of options is provided in the command reference entry for optimize.
By default, EViews will assume that the first argument of the subroutine is the objective of the optimization and that the second argument contains the controls. The default is to maximize the objective, or the sum of the objective values (with the sum taken over the current workfile sample if the objective is a series).
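For example, the following sketch relies on these defaults, with the objective in the first argument and the controls in the second. All subroutine and object names here are purely illustrative:
' subroutine computing a scalar objective; maximized at p(1) = 3
subroutine peak(scalar obj, vector p)
  obj = -((p(1) - 3)^2)
endsub
scalar val = 0
vector(1) cvec
cvec(1) = 0                  ' starting value for the single control
' maximize the first argument with respect to the controls in the second
optimize(max) peak(val, cvec)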
Specifying the Method and Objective
You may control the type of optimization and which subroutine argument corresponds to the objective by providing one of the following options to the optimize command:
max [=integer]
min [=integer]
ls [=integer]
ml [=integer]
The four options correspond to different optimization types: maximization (“max”), minimization (“min”), least squares (“ls”), and maximum likelihood (“ml”). If the objective is scalar-valued, only “max” and “min” are allowed.
As the names suggest, “min” and “max” correspond to minimizing and maximizing the objective. If the objective is multi-valued, optimize will minimize or maximize the sum of the elements of the objective.
“ls” and “ml” are special forms of minimization and maximization that may be specified only if the multi-valued objective argument has a value for each observation. “ls” tells optimize that you wish to perform least squares estimation, so the optimizer will minimize the sum of squares of the elements of the objective. “ml” informs optimize that you wish to perform maximum likelihood estimation by maximizing the sum of the elements of the objective.
“ls” and “ml” differ from “min” and “max” in supporting an additional option for approximating the Hessian matrix (see “Calculating the Hessian”) that is used in the estimation algorithm. Indeed, the only difference between “max” and “ml” for a multi-valued objective is that “ml” supports the use of this option (“hess=opg”).
By default, the first argument of the subroutine is taken as the objective. However, you may specify an alternate objective argument by providing an integer identifier with one of the options above. For example, to identify the second argument of the subroutine as the objective in a minimization problem, you would use the option “min=2”.
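As an illustration, a maximum likelihood setup with a per-observation objective might look like the following sketch. The series y (a 0/1 indicator) and x are assumed to exist in the workfile, and all names are illustrative:
subroutine loglike(series logl, vector beta)
  series xb = beta(1) + beta(2)*x
  ' per-observation log-likelihood contributions for a simple logit
  logl = y*log(1/(1 + exp(-xb))) + (1 - y)*log(1 - 1/(1 + exp(-xb)))
endsub
series ll = 0
vector(2) b
b(1) = 0.1
b(2) = 0.1
' maximize the sum of the elements of the first argument over the sample
optimize(ml=1) loglike(ll, b)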
Identifying the Control
By default, the second argument of the subroutine contains the controls for the optimization. You may modify this by including the “coef=integer” option in the optimize command, where integer is the argument identifier. For example, to identify the first argument of the subroutine as the control, you would use the option “coef=1”.
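The following sketch places the controls in the first argument and a per-observation residual objective in the second (y and x are again assumed workfile series, and all names are illustrative):
subroutine ssr(vector p, series resid)
  resid = y - p(1) - p(2)*x
endsub
vector(2) pstart
series e = 0
' least squares: the objective is argument 2, the controls are argument 1
optimize(ls=2, coef=1) ssr(pstart, e)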
Starting Values
The values of the objects containing the control parameters at the start of optimization are used as starting values for the optimization process. Note that if any of the control parameters contains a missing value, or if the objective function or any analytic gradients cannot be evaluated at the initial parameter values, EViews will issue an error and the optimization will terminate.
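For example, a control vector might be filled before the optimize call (the values shown are arbitrary):
vector(2) b
b.fill 0.1, 0.5     ' every element must be non-missing before optimize is called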
Specifying Gradients
If included in the optimize command, the “grad=” option specifies which subroutine argument contains the analytic gradients for each of the coefficients. If you specify the “grad=” option, the subroutine should fill out the elements of the gradient argument with values of the analytical gradients at the current coefficient values.
If the objective argument is a scalar, the gradient argument should be a vector of length equal to the number of elements in the coefficient argument.
If the objective argument is a series, the gradient argument should be a group object containing one series per element of the coefficient argument. The series observations should contain the corresponding derivatives for each observation in the current workfile sample.
For a vector objective, the gradient argument should be a matrix with number of rows equal to the length of the objective vector, and columns equal to the number of elements in the coefficient argument.
“grad=” may not be specified if the objective is a matrix.
If “grad=” is not specified, optimize will use numeric gradients. In general, we have found that numeric gradients perform as well as analytic gradients. Since programming the calculation of analytic gradients into the subroutine can be complicated, omitting the “grad=” option is usually a good initial approach.
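If you do wish to supply analytic gradients for a scalar objective, the subroutine might look like the following sketch, where the third argument receives the derivatives with respect to each control (all names are illustrative):
subroutine fgrad(scalar obj, vector p, vector g)
  obj = -((p(1) - 1)^2 + 2*(p(2) + 3)^2)
  g(1) = -2*(p(1) - 1)     ' derivative of obj with respect to p(1)
  g(2) = -4*(p(2) + 3)     ' derivative of obj with respect to p(2)
endsub
scalar val = 0
vector(2) p0
vector(2) gvec
' grad=3 identifies the third argument as the analytic gradient vector
optimize(max, grad=3) fgrad(val, p0, gvec)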
Calculating the Hessian
The “hess=” option tells EViews which Hessian approximation should be used in the estimation algorithm. You may employ a numeric Hessian (“hess=numeric”), Broyden-Fletcher-Goldfarb-Shanno (“hess=bfgs”), or outer product of the gradients (“hess=opg”) approximation to the Hessian (see “Hessian Approximation”).
You may not supply an analytic Hessian. All three approximations use information from the gradients, so there will be slight differences in the Hessian calculation depending on whether you use numeric or analytic gradients.
The “finalh=” option allows you to save the Hessian matrix of the optimization problem at the final coefficient values as a matrix in the workfile. For least squares and maximum likelihood problems, the Hessian is commonly used in the calculation of coefficient covariances.
For OPG and numeric Hessian approximations, the final Hessian will be the same as the Hessian approximation used during optimization. For BFGS, the final Hessian will be based on the numeric Hessian, since the BFGS approximation need not converge to the true Hessian.
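For example, using the loglike sketch above, one might request the OPG approximation and save the final Hessian in a (hypothetical) matrix named hfinal; one conventional choice then bases a coefficient covariance estimate on that matrix:
optimize(ml=1, hess=opg, finalh=hfinal) loglike(ll, b)
' covariance estimate from the inverse of the negative of the final Hessian
matrix cov = @inverse(-1*hfinal)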
Numeric Derivatives
You can control the method of computing numeric derivatives for gradient or Hessian calculations using the “deriv=” option.
At the default setting of “deriv=auto”, EViews will change the number of numeric derivative evaluation points as the optimization routine progresses, switching to a larger number of points as it approaches the optimum.
When you include the “deriv=high” option, EViews will always evaluate the objective function at a larger number of points.
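For example, to force the more accurate (but slower) derivative calculations throughout, you might add the option to the loglike sketch above:
' always use the larger number of evaluation points for numeric derivatives
optimize(ml=1, deriv=high) loglike(ll, b)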
Iteration and Convergence
The “m=” and “c=” options set the maximum number of iterations and the convergence criterion, respectively. Note that for optimization, the number of iterations is the number of successful steps that take place, and each iteration may involve many function evaluations, both to evaluate any required numeric derivatives and to backtrack when a trial step fails to improve the objective.
Reaching the maximum number of iterations will cause an error to occur (unless the “noerr” option is set).
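For example, to allow more iterations and require a tighter convergence criterion in the loglike sketch above:
optimize(ml=1, m=1000, c=1e-8) loglike(ll, b)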
Advanced Optimization Options
There are several advanced options that control different aspects of the optimization procedure. In general, you should not need to worry about these settings, but they may prove useful in cases where you are experiencing estimation difficulties.
Trust Region
You may use the “trust=” option to set the initial trust region size as a proportion of the initial control values. The default trust region size is 0.25.
Smaller values of this parameter may be used to provide a more cautious start to the optimization in cases where larger steps immediately lead into an undesirable region of the objective.
Larger values may be used to reduce the iteration count in cases where the objective is well behaved but the initial values may be far from the optimum values.
See “Technical Details” for discussion.
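For instance, to begin with a more cautious trust region in the loglike sketch above:
' shrink the initial trust region to 5% of the starting control values
optimize(ml=1, trust=0.05) loglike(ll, b)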
Step Method
optimize offers several methods for determining the constrained step size, which you may specify using the “step=” option. In addition to the default Marquardt method (“step=marquardt”), you may specify dogleg steps (“step=dogleg”) or a line-search determined step (“step=linesearch”).
Note that in most cases the choice of step method is less important than the selection of Hessian approximation. See “Step Method” for additional detail.
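For example, to combine dogleg steps with a BFGS Hessian approximation in the earlier sketch:
optimize(ml=1, step=dogleg, hess=bfgs) loglike(ll, b)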
Scale
By default, the optimization procedure automatically adjusts the scale of the objective and control variables using the square root of the maximum observed value of the second derivative (curvature) of the objective with respect to each control parameter. Scaling may be switched off using the “scale=none” option. See “Scaling” for discussion.
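For example, to turn off the automatic rescaling in the earlier sketch:
optimize(ml=1, scale=none) loglike(ll, b)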
Objective Accuracy
The “feps=” option may be used to specify the expected relative accuracy of the objective function. The default value is 2.2e-16.
The value indicates what fraction of the observed objective value should be considered random noise. You may wish to increase the “feps=” value if the calculation of your objective is relatively inaccurate.
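For example, if the objective in the loglike sketch above is only reliable to roughly eight digits:
optimize(ml=1, feps=1e-8) loglike(ll, b)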
Status Functions
To support the optimize command, EViews provides three functions that return information about the optimization process:
@optstatus provides a status code for the optimizer, both during and after optimization.
@optiter returns the current number of iterations performed. If called post-optimization, it returns the number of iterations required for convergence.
@optmessage returns a one-line text message, based on status and iteration information, that summarizes the current state of the optimization.
All three of these functions may be used during optimization by including them inside the optimization subroutine, or post-optimization by calling them after the optimize command.
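A sketch of post-optimization use, continuing the loglike example (object names are illustrative):
optimize(ml=1) loglike(ll, b)
scalar iters = @optiter      ' iterations used by the run
scalar stat = @optstatus     ' status code for the completed optimization
%msg = @optmessage           ' one-line summary of status and iterations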
Error Handling
The “noerr” option may be used to suppress any error messages generated when the optimization fails. By default, the optimization procedure will generate an error whenever the results of the optimization appear to be unreliable, for example if convergence was not achieved or the gradients are nonzero at the final solution.
If “noerr” is specified, these errors will be suppressed. In this case, your EViews program may still test whether the optimization succeeded using the @optstatus function. Note that the “noerr” option is useful in cases where you are deliberately stopping optimization early using the “m=” maximum iterations option, since reaching the iteration limit would otherwise generate an error.
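For example, a run deliberately capped at ten iterations might suppress the error and inspect the status afterwards (the mapping of status codes to outcomes is documented with @optstatus and is not reproduced here):
optimize(ml=1, m=10, noerr) loglike(ll, b)
scalar stat = @optstatus     ' compare against the documented status codes
%msg = @optmessage           ' human-readable summary of what happened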