fgoalattain

Solve multiobjective goal attainment problems

Description

fgoalattain solves the goal attainment problem, a formulation for minimizing a multiobjective optimization problem.

fgoalattain finds the minimum of a problem specified by

$$\min_{x,\gamma}\; \gamma \quad \text{such that} \quad
\begin{cases}
F(x) - \mathit{weight}\cdot\gamma \le \mathit{goal}\\
c(x) \le 0\\
ceq(x) = 0\\
A\cdot x \le b\\
Aeq\cdot x = beq\\
lb \le x \le ub.
\end{cases}$$

weight, goal, b, and beq are vectors, A and Aeq are matrices, and F(x), c(x), and ceq(x) are functions that return vectors. F(x), c(x), and ceq(x) can be nonlinear functions.

x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.

x = fgoalattain(fun,x0,goal,weight) tries to make the objective functions supplied by fun attain the goals specified by goal by varying x, starting at x0, with weight specified by weight.

Note

Passing Extra Parameters explains how to pass extra parameters to the objective functions and nonlinear constraint functions, if necessary.
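
For instance, one common pattern (a minimal sketch; the parameter p and the objective shown here are hypothetical) is to capture the extra parameter in an anonymous function handle:

p = 2;                                       % extra parameter (hypothetical)
paramfun = @(x,p)[2 + (x-p)^2; 5 + x^2/4];   % objective that depends on p
fun = @(x)paramfun(x,p);                     % capture the current value of p
x = fgoalattain(fun,1,[3,6],[1,1]);          % fun now has the single-input form fgoalattain expects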

x = fgoalattain(fun,x0,goal,weight,A,b) solves the goal attainment problem subject to the inequalities A*x ≤ b.

x = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq) solves the goal attainment problem subject to the equalities Aeq*x = beq. If no inequalities exist, set A = [] and b = [].

x = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq,lb,ub) solves the goal attainment problem subject to the bounds lb ≤ x ≤ ub. If no equalities exist, set Aeq = [] and beq = []. If x(i) is unbounded below, set lb(i) = -Inf; if x(i) is unbounded above, set ub(i) = Inf.

Note

If the specified input bounds for a problem are inconsistent, the output x is x0 and the output fval is [].

x = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq,lb,ub,nonlcon) solves the goal attainment problem subject to the nonlinear inequalities c(x) or equalities ceq(x) defined in nonlcon. fgoalattain optimizes such that c(x) ≤ 0 and ceq(x) = 0. If no bounds exist, set lb = [] or ub = [], or both.

x = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq,lb,ub,nonlcon,options) solves the goal attainment problem with the optimization options specified in options. Use optimoptions to set these options.

x = fgoalattain(problem) solves the goal attainment problem for problem, a structure described in problem.

[x,fval] = fgoalattain(___), for any syntax, returns the values of the objective functions computed in fun at the solution x.

[x,fval,attainfactor,exitflag,output] = fgoalattain(___) additionally returns the attainment factor at the solution x, a value exitflag that describes the exit condition of fgoalattain, and a structure output with information about the optimization process.

[x,fval,attainfactor,exitflag,output,lambda] = fgoalattain(___) additionally returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

Examples

Consider the two-objective function

$$F(x) = \begin{bmatrix} 2 + (x-3)^2 \\ 5 + x^2/4 \end{bmatrix}.$$

This function clearly minimizes F_1(x) at x = 3, attaining the value 2, and minimizes F_2(x) at x = 0, attaining the value 5.

Set the goal [3,6] and weight [1,1], and solve the goal attainment problem starting at x0 = 1.

fun = @(x)[2+(x-3)^2;5+x^2/4];
goal = [3,6];
weight = [1,1];
x0 = 1;
x = fgoalattain(fun,x0,goal,weight)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 
2.0000

Find the value of F(x) at the solution.

fun(x)
ans = 2×1

    3.0000
    6.0000

fgoalattain achieves the goals exactly.

The objective function is

$$F(x) = \begin{bmatrix} 2 + \lVert x - p_1 \rVert^2 \\ 5 + \lVert x - p_2 \rVert^2/4 \end{bmatrix}.$$

Here, p_1 = [2,3] and p_2 = [4,1]. The goal is [3,6], the weight is [1,1], and the linear constraint is x_1 + x_2 ≤ 4.

Create the objective function, goal, and weight.

p_1 = [2,3];
p_2 = [4,1];
fun = @(x)[2 + norm(x-p_1)^2;5 + norm(x-p_2)^2/4];
goal = [3,6];
weight = [1,1];

Create the linear constraint matrices A and b representing A*x <= b.

A = [1,1];
b = 4;

Set an initial point [1,1] and solve the goal attainment problem.

x0 = [1,1];
x = fgoalattain(fun,x0,goal,weight,A,b)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    2.0694    1.9306

Find the value of F(x) at the solution.

fun(x)
ans = 2×1

    3.1484
    6.1484

fgoalattain does not meet the goals. Because the weights are equal, the solver underachieves each goal by the same amount.

The objective function is

$$F(x) = \begin{bmatrix} 2 + \lVert x - p_1 \rVert^2 \\ 5 + \lVert x - p_2 \rVert^2/4 \end{bmatrix}.$$

Here, p_1 = [2,3] and p_2 = [4,1]. The goal is [3,6], the weight is [1,1], and the bounds are 0 ≤ x_1 ≤ 3 and 2 ≤ x_2 ≤ 5.

Create the objective function, goal, and weight.

p_1 = [2,3];
p_2 = [4,1];
fun = @(x)[2 + norm(x-p_1)^2;5 + norm(x-p_2)^2/4];
goal = [3,6];
weight = [1,1];

Create the bounds.

lb = [0,2];
ub = [3,5];

Set the initial point to [1,4] and solve the goal attainment problem.

x0 = [1,4];
A = []; % no linear constraints
b = [];
Aeq = [];
beq = [];
x = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq,lb,ub)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    2.6667    2.3333

Find the value of F(x) at the solution.

fun(x)
ans = 2×1

    2.8889
    5.8889

fgoalattain more than meets the goals. Because the weights are equal, the solver overachieves each goal by the same amount.

The objective function is

$$F(x) = \begin{bmatrix} 2 + \lVert x - p_1 \rVert^2 \\ 5 + \lVert x - p_2 \rVert^2/4 \end{bmatrix}.$$

Here, p_1 = [2,3] and p_2 = [4,1]. The goal is [3,6], the weight is [1,1], and the nonlinear constraint is ‖x‖² ≤ 4.

Create the objective function, goal, and weight.

p_1 = [2,3];
p_2 = [4,1];
fun = @(x)[2 + norm(x-p_1)^2;5 + norm(x-p_2)^2/4];
goal = [3,6];
weight = [1,1];

The nonlinear constraint function is in the norm4.m file.

type norm4
function [c,ceq] = norm4(x)
ceq = [];
c = norm(x)^2 - 4;

Create empty input arguments for the linear constraints and bounds.

A = [];
Aeq = [];
b = [];
beq = [];
lb = [];
ub = [];

Set the initial point to [1,1] and solve the goal attainment problem.

x0 = [1,1];
x = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq,lb,ub,@norm4)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    1.1094    1.6641

Find the value of F(x) at the solution.

fun(x)
ans = 2×1

    4.5778
    7.1991

fgoalattain does not meet the goals. Despite the equal weights, F_1(x) is about 1.58 from its goal of 3, and F_2(x) is about 1.2 from its goal of 6. The nonlinear constraint prevents the solution x from achieving the goals equally.

Monitor a goal attainment solution process by setting options to return iterative display.

options = optimoptions('fgoalattain','Display','iter');

The objective function is

$$F(x) = \begin{bmatrix} 2 + \lVert x - p_1 \rVert^2 \\ 5 + \lVert x - p_2 \rVert^2/4 \end{bmatrix}.$$

Here, p_1 = [2,3] and p_2 = [4,1]. The goal is [3,6], the weight is [1,1], and the linear constraint is x_1 + x_2 ≤ 4.

Create the objective function, goal, and weight.

p_1 = [2,3];
p_2 = [4,1];
fun = @(x)[2 + norm(x-p_1)^2;5 + norm(x-p_2)^2/4];
goal = [3,6];
weight = [1,1];

Create the linear constraint matrices A and b representing A*x <= b.

A = [1,1];
b = 4;

Create empty input arguments for the linear equality constraints, bounds, and nonlinear constraints.

Aeq = [];
beq = [];
lb = [];
ub = [];
nonlcon = [];

Set an initial point [1,1] and solve the goal attainment problem.

x0 = [1,1];
x = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq,lb,ub,nonlcon,options)
                 Attainment        Max     Line search     Directional 
 Iter F-count        factor    constraint   steplength      derivative   Procedure 
    0      4              0             4                                            
    1      9             -1           2.5            1          -0.535     
    2     14     -1.712e-08        0.2813            1           0.883     
    3     19         0.1452      0.005926            1           0.883     
    4     24         0.1484     2.868e-06            1           0.883     
    5     29         0.1484     6.666e-13            1           0.883    Hessian modified  

Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    2.0694    1.9306

The positive value of the reported attainment factor indicates that fgoalattain does not find a solution satisfying the goals.

The objective function is

$$F(x) = \begin{bmatrix} 2 + \lVert x - p_1 \rVert^2 \\ 5 + \lVert x - p_2 \rVert^2/4 \end{bmatrix}.$$

Here, p_1 = [2,3] and p_2 = [4,1]. The goal is [3,6], the weight is [1,1], and the linear constraint is x_1 + x_2 ≤ 4.

Create the objective function, goal, and weight.

p_1 = [2,3];
p_2 = [4,1];
fun = @(x)[2 + norm(x-p_1)^2;5 + norm(x-p_2)^2/4];
goal = [3,6];
weight = [1,1];

Create the linear constraint matrices A and b representing A*x <= b.

A = [1,1];
b = 4;

Set an initial point [1,1] and solve the goal attainment problem. Request the value of the objective function.

x0 = [1,1];
[x,fval] = fgoalattain(fun,x0,goal,weight,A,b)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    2.0694    1.9306

fval = 2×1

    3.1484
    6.1484

The objective function values are higher than the goal, meaning fgoalattain does not satisfy the goal.

The objective function is

$$F(x) = \begin{bmatrix} 2 + \lVert x - p_1 \rVert^2 \\ 5 + \lVert x - p_2 \rVert^2/4 \end{bmatrix}.$$

Here, p_1 = [2,3] and p_2 = [4,1]. The goal is [3,6], the weight is [1,1], and the linear constraint is x_1 + x_2 ≤ 4.

Create the objective function, goal, and weight.

p_1 = [2,3];
p_2 = [4,1];
fun = @(x)[2 + norm(x-p_1)^2;5 + norm(x-p_2)^2/4];
goal = [3,6];
weight = [1,1];

Create the linear constraint matrices A and b representing A*x <= b.

A = [1,1];
b = 4;

Set an initial point [1,1] and solve the goal attainment problem. Request the value of the objective function, attainment factor, exit flag, output structure, and Lagrange multipliers.

x0 = [1,1];
[x,fval,attainfactor,exitflag,output,lambda] = fgoalattain(fun,x0,goal,weight,A,b)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    2.0694    1.9306

fval = 2×1

    3.1484
    6.1484

attainfactor = 
0.1484
exitflag = 
4
output = struct with fields:
         iterations: 6
          funcCount: 29
       lssteplength: 1
           stepsize: 4.1023e-13
          algorithm: 'active-set'
      firstorderopt: []
    constrviolation: 6.6663e-13
            message: 'Local minimum possible. Constraints satisfied....'

lambda = struct with fields:
         lower: [2x1 double]
         upper: [2x1 double]
         eqlin: [0x1 double]
      eqnonlin: [0x1 double]
       ineqlin: 0.5394
    ineqnonlin: [0x1 double]

The positive value of attainfactor indicates that the goals are not attained; you can also see this by comparing fval with goal.

The lambda.ineqlin value is nonzero, indicating that the linear inequality constrains the solution.

The objective function is

$$F(x) = \begin{bmatrix} 2 + \lVert x - p_1 \rVert^2 \\ 5 + \lVert x - p_2 \rVert^2/4 \end{bmatrix}.$$

Here, p_1 = [2,3] and p_2 = [4,1]. The goal is [3,6], and the initial weight is [1,1].

Create the objective function, goal, and initial weight.

p_1 = [2,3];
p_2 = [4,1];
fun = @(x)[2 + norm(x-p_1)^2;5 + norm(x-p_2)^2/4];
goal = [3,6];
weight = [1,1];

Set the linear constraint x_1 + x_2 ≤ 4.

A = [1 1];
b = 4;

Solve the goal attainment problem starting from the point x0 = [1 1].

x0 = [1 1];
[x,fval] = fgoalattain(fun,x0,goal,weight,A,b)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    2.0694    1.9306

fval = 2×1

    3.1484
    6.1484

Each component of fval is above the corresponding component of goal, indicating that the goals are not attained.

Increase the importance of satisfying the first goal by setting weight(1) to a smaller value.

weight(1) = 1/10;
[x,fval] = fgoalattain(fun,x0,goal,weight,A,b)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    2.0115    1.9885

fval = 2×1

    3.0233
    6.2328

Now the value of fval(1) is much closer to goal(1), whereas fval(2) is farther from goal(2).

Change goal(2) to 7, which is above the current solution. The solution changes.

goal(2) = 7;
[x,fval] = fgoalattain(fun,x0,goal,weight,A,b)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    1.9639    2.0361

fval = 2×1

    2.9305
    6.3047

Both components of fval are less than the corresponding components of goal. But fval(1) is much closer to goal(1) than fval(2) is to goal(2). A smaller weight tends to make its component nearly satisfied when the goals cannot be achieved, but reduces the degree of overachievement when the goals can be achieved.

Change the weights to be equal. The fval results have equal distance from their goals.

weight(2) = 1/10;
[x,fval] = fgoalattain(fun,x0,goal,weight,A,b)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    1.7613    2.2387

fval = 2×1

    2.6365
    6.6365

Constraints can keep the resulting fval from being equally close to the goals. For example, set an upper bound of 2 on x(2).

ub = [Inf,2];
lb = [];
Aeq = [];
beq = [];
[x,fval] = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq,lb,ub)
Local minimum possible. Constraints satisfied.

fgoalattain stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are 
satisfied to within the value of the constraint tolerance.
x = 1×2

    2.0000    2.0000

fval = 2×1

    3.0000
    6.2500

In this case, fval(1) meets its goal exactly, but fval(2) is less than its goal.

Input Arguments

Objective functions, specified as a function handle or function name. fun is a function that accepts a vector x and returns a vector F, the objective functions evaluated at x. You can specify the function fun as a function handle for a function file:

x = fgoalattain(@myfun,x0,goal,weight)

where myfun is a MATLAB® function such as

function F = myfun(x)
F = ...         % Compute function values at x.

fun can also be a function handle for an anonymous function:

x = fgoalattain(@(x)sin(x.*x),x0,goal,weight);

fgoalattain passes x to your objective function and any nonlinear constraint functions in the shape of the x0 argument. For example, if x0 is a 5-by-3 array, then fgoalattain passes x to fun as a 5-by-3 array. However, fgoalattain multiplies linear constraint matrices A or Aeq with x after converting x to the column vector x(:).

To make an objective function as near as possible to a goal value (that is, neither greater than nor less than), use optimoptions to set the EqualityGoalCount option to the number of objectives required to be in the neighborhood of the goal values. Such objectives must be partitioned into the first elements of the vector F returned by fun.
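
For instance, a minimal sketch (assuming the first of two objectives should match its goal as closely as possible, using the two-objective function from the examples above):

options = optimoptions('fgoalattain','EqualityGoalCount',1);       % first objective treated as an equality goal
fun = @(x)[2 + (x-3)^2; 5 + x^2/4];                                % the equality objective comes first in F
x = fgoalattain(fun,1,[3,6],[1,1],[],[],[],[],[],[],[],options);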

Suppose that the gradient of the objective function can also be computed and the SpecifyObjectiveGradient option is true, as set by:

options = optimoptions('fgoalattain','SpecifyObjectiveGradient',true)

In this case, the function fun must return, in the second output argument, the gradient value G (a matrix) at x. The gradient consists of the partial derivative dF/dx of each F at the point x. If F is a vector of length m and x has length n, where n is the length of x0, then the gradient G of F(x) is an n-by-m matrix where G(i,j) is the partial derivative of F(j) with respect to x(i) (that is, the jth column of G is the gradient of the jth objective function F(j)).

Note

Setting SpecifyObjectiveGradient to true is effective only when the problem has no nonlinear constraints, or the problem has a nonlinear constraint with SpecifyConstraintGradient set to true. Internally, the objective is folded into the constraints, so the solver needs both gradients (objective and constraint) supplied in order to avoid estimating a gradient.
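
For instance, a minimal sketch of an objective file that returns the gradient in this orientation, using the two-objective function of a scalar x from the first example above (the file name myfunWithGrad is hypothetical):

function [F,G] = myfunWithGrad(x)
F = [2 + (x-3)^2; 5 + x^2/4];     % m = 2 objectives of the scalar x (n = 1)
if nargout > 1                    % return the gradient only when requested
    G = [2*(x-3), x/2];           % n-by-m: column j is the gradient of F(j)
end

Enable the gradient with options = optimoptions('fgoalattain','SpecifyObjectiveGradient',true) and pass options as the last input argument.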

Data Types: char | string | function_handle

Initial point, specified as a real vector or real array. Solvers use the number of elements in x0 and the size of x0 to determine the number and size of variables that fun accepts.

Example: x0 = [1,2,3,4]

Data Types: double

Goal to attain, specified as a real vector. fgoalattain attempts to find the smallest multiplier γ that makes these inequalities hold for all values of i at the solution x:

$$F_i(x) - \mathit{goal}_i \le \mathit{weight}_i \cdot \gamma.$$

Assuming that weight is a positive vector:

  • If the solver finds a point x that simultaneously achieves all the goals, then the attainment factor γ is negative, and the goals are overachieved.

  • If the solver cannot find a point x that simultaneously achieves all the goals, then the attainment factor γ is positive, and the goals are underachieved.
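
As a quick check (a minimal sketch, assuming positive weights and a solution x already computed by fgoalattain), the reported attainment factor equals the largest scaled deviation of the objectives from their goals:

fval = fun(x);                                 % objective values at the solution
gamma = max((fval(:) - goal(:))./weight(:));   % negative => goals overachieved, positive => underachieved

For the linear-constraint example above, this value is 0.1484, matching the attainfactor output.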

Example: [1 3 6]

Data Types: double

Relative attainment factor, specified as a real vector. fgoalattain attempts to find the smallest multiplier γ that makes these inequalities hold for all values of i at the solution x:

$$F_i(x) - \mathit{goal}_i \le \mathit{weight}_i \cdot \gamma.$$

When the values of goal are all nonzero, to ensure the same percentage of underachievement or overattainment of the active objectives, set weight to abs(goal). (The active objectives are the set of objectives that are barriers to further improvement of the goals at the solution.)

Note

Setting a component of the weight vector to zero causes the corresponding goal constraint to be treated as a hard constraint rather than a goal constraint. An alternative method to setting a hard constraint is to use the input argument nonlcon.

When weight is positive, fgoalattain attempts to make the objective functions less than the goal values. To make the objective functions greater than the goal values, set weight to be negative rather than positive. To see some effects of weights on a solution, see Effects of Weights, Goals, and Constraints in Goal Attainment.
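
For instance, a minimal sketch of these two weighting choices (assuming all goal values are nonzero):

weight = abs(goal);     % measure over- or underachievement as the same fraction of each goal
% To require the objectives to exceed the goals instead, negate the weights:
% weight = -abs(goal);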

To make an objective function as near as possible to a goal value, use the EqualityGoalCount option and specify the objective as the first element of the vector returned by fun (see fun and options). For an example, see Multi-Objective Goal Attainment Optimization.

Example: abs(goal)

Data Types: double

Linear inequality constraints, specified as a real matrix. A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables (number of elements in x0). For large problems, pass A as a sparse matrix.

A encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and b is a column vector with M elements.

For example, consider these inequalities:

x_1 + 2x_2 ≤ 10
3x_1 + 4x_2 ≤ 20
5x_1 + 6x_2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: single | double

Linear inequality constraints, specified as a real vector. b is an M-element vector related to the A matrix. If you pass b as a row vector, solvers internally convert b to the column vector b(:). For large problems, pass b as a sparse vector.

b encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and A is a matrix of size M-by-N.

For example, consider these inequalities:

x_1 + 2x_2 ≤ 10
3x_1 + 4x_2 ≤ 20
5x_1 + 6x_2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: single | double

Linear equality constraints, specified as a real matrix. Aeq is an Me-by-N matrix, where Me is the number of equalities, and N is the number of variables (number of elements in x0). For large problems, pass Aeq as a sparse matrix.

Aeq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and beq is a column vector with Me elements.

For example, consider these equalities:

x_1 + 2x_2 + 3x_3 = 10
2x_1 + 4x_2 + x_3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: single | double

Linear equality constraints, specified as a real vector. beq is an Me-element vector related to the Aeq matrix. If you pass beq as a row vector, solvers internally convert beq to the column vector beq(:). For large problems, pass beq as a sparse vector.

beq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and Aeq is a matrix of size Me-by-N.

For example, consider these equalities:

x_1 + 2x_2 + 3x_3 = 10
2x_1 + 4x_2 + x_3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: single | double

Lower bounds, specified as a real vector or real array. If the number of elements in x0 is equal to the number of elements in lb, then lb specifies that

x(i) >= lb(i) for all i.

If numel(lb) < numel(x0), then lb specifies that

x(i) >= lb(i) for 1 <= i <= numel(lb).

If lb has fewer elements than x0, solvers issue a warning.

Example: To specify that all x components are positive, use lb = zeros(size(x0)).

Data Types: single | double

Upper bounds, specified as a real vector or real array. If the number of elements in x0 is equal to the number of elements in ub, then ub specifies that

x(i) <= ub(i) for all i.

If numel(ub) < numel(x0), then ub specifies that

x(i) <= ub(i) for 1 <= i <= numel(ub).

If ub has fewer elements than x0, solvers issue a warning.

Example: To specify that all x components are less than 1, use ub = ones(size(x0)).

Data Types: single | double

Nonlinear constraints, specified as a function handle or function name. nonlcon is a function that accepts a vector or array x and returns two arrays, c(x) and ceq(x).

  • c(x) is the array of nonlinear inequality constraints at x. fgoalattain attempts to satisfy

    c(x) <= 0 for all entries of c.

  • ceq(x) is the array of nonlinear equality constraints at x. fgoalattain attempts to satisfy

    ceq(x) = 0 for all entries of ceq.

For example,

x = fgoalattain(@myfun,x0,...,@mycon)

where mycon is a MATLAB function such as the following:

function [c,ceq] = mycon(x)
c = ...     % Compute nonlinear inequalities at x.
ceq = ...   % Compute nonlinear equalities at x.

Suppose that the gradients of the constraints can also be computed and the SpecifyConstraintGradient option is true, as set by:

options = optimoptions('fgoalattain','SpecifyConstraintGradient',true)

In this case, the function nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). See Nonlinear Constraints for an explanation of how to “conditionalize” the gradients for use in solvers that do not accept supplied gradients.

If nonlcon returns a vector c of m components and x has length n, where n is the length of x0, then the gradient GC of c(x) is an n-by-m matrix, where GC(i,j) is the partial derivative of c(j) with respect to x(i) (that is, the jth column of GC is the gradient of the jth inequality constraint c(j)). Likewise, if ceq has p components, the gradient GCeq of ceq(x) is an n-by-p matrix, where GCeq(i,j) is the partial derivative of ceq(j) with respect to x(i) (that is, the jth column of GCeq is the gradient of the jth equality constraint ceq(j)).
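
For instance, a minimal sketch that extends the norm4 constraint from the examples above with its gradients (the file name norm4grad and the derivative are assumptions obtained by differentiating that constraint):

function [c,ceq,GC,GCeq] = norm4grad(x)
c = norm(x)^2 - 4;        % single nonlinear inequality: ||x||^2 <= 4
ceq = [];                 % no nonlinear equalities
if nargout > 2            % return gradients only when requested
    GC = 2*x(:);          % n-by-1: gradient of the inequality constraint
    GCeq = [];            % no equality constraint gradients
end

Enable the gradients with optimoptions('fgoalattain','SpecifyObjectiveGradient',true,'SpecifyConstraintGradient',true); as the note below explains, the solver uses the constraint gradient only when you also supply the objective gradient.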

Note

Setting SpecifyConstraintGradient to true is effective only when SpecifyObjectiveGradient is set to true. Internally, the objective is folded into the constraint, so the solver needs both gradients (objective and constraint) supplied in order to avoid estimating a gradient.

Note

Because Optimization Toolbox™ functions accept only inputs of type double, user-supplied objective and nonlinear constraint functions must return outputs of type double.

See Passing Extra Parameters for an explanation of how to parameterize the nonlinear constraint function nonlcon, if necessary.

Data Types: char | function_handle | string

Optimization options, specified as the output of optimoptions or a structure such as optimset returns.

Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Optimization Options.

For details about options that have different names for optimset, see Current and Legacy Option Names.

Option          Description
ConstraintTolerance

Termination tolerance on the constraint violation, a nonnegative scalar. The default is 1e-6. See Tolerances and Stopping Criteria.

For optimset, the name is TolCon.

Diagnostics

Display of diagnostic information about the function to be minimized or solved. The choices are 'on' or 'off' (the default).

DiffMaxChange

Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange

Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display

Level of display (see Iterative Display):

  • 'off' or 'none' displays no output.

  • 'iter' displays output at each iteration, and gives the default exit message.

  • 'iter-detailed' displays output at each iteration, and gives the technical exit message.

  • 'notify' displays output only if the function does not converge, and gives the default exit message.

  • 'notify-detailed' displays output only if the function does not converge, and gives the technical exit message.

  • 'final' (default) displays only the final output, and gives the default exit message.

  • 'final-detailed' displays only the final output, and gives the technical exit message.

EqualityGoalCount

Number of objectives required for the objective fun to equal the goal goal (a nonnegative integer). The objectives must be partitioned into the first few elements of F. The default is 0. For an example, see Multi-Objective Goal Attainment Optimization.

For optimset, the name is GoalsExactAchieve.

FiniteDifferenceStepSize

Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, the forward finite differences delta are

delta = v.*sign′(x).*max(abs(x),TypicalX);

where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are

delta = v.*max(abs(x),TypicalX);

A scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.

For optimset, the name is FinDiffRelStep.

FiniteDifferenceType

Type of finite differences used to estimate gradients, either 'forward' (default), or 'central' (centered). 'central' takes twice as many function evaluations, but is generally more accurate.

The algorithm is careful to obey bounds when estimating both types of finite differences. For example, it might take a backward step, rather than a forward step, to avoid evaluating at a point outside the bounds.

For optimset, the name is FinDiffType.

FunctionTolerance

Termination tolerance on the function value (a nonnegative scalar). The default is 1e-6. See Tolerances and Stopping Criteria.

For optimset, the name is TolFun.

FunValCheck

Check that signifies whether the objective function and constraint values are valid. 'on' displays an error when the objective function or constraints return a value that is complex, Inf, or NaN. The default 'off' displays no error.

MaxFunctionEvaluations

Maximum number of function evaluations allowed (a nonnegative integer). The default is 100*numberOfVariables. See Tolerances and Stopping Criteria and Iterations and Function Counts.

For optimset, the name is MaxFunEvals.

MaxIterations

Maximum number of iterations allowed (a nonnegative integer). The default is 400. See Tolerances and Stopping Criteria and Iterations and Function Counts.

For optimset, the name is MaxIter.

MaxSQPIter

Maximum number of SQP iterations allowed (a positive integer). The default is 10*max(numberOfVariables, numberOfInequalities + numberOfBounds).

MeritFunction

If this option is set to 'multiobj' (the default), use the goal attainment merit function. If this option is set to 'singleobj', use the fmincon merit function.

OptimalityTolerance

Termination tolerance on the first-order optimality (a nonnegative scalar). The default is 1e-6. See First-Order Optimality Measure.

For optimset, the name is TolFun.

OutputFcn

One or more user-defined functions that an optimization function calls at each iteration. Pass a function handle or a cell array of function handles. The default is none ([]). See Output Function and Plot Function Syntax.

PlotFcn

Plots showing various measures of progress while the algorithm executes. Select from predefined plots or write your own. Pass a name, function handle, or cell array of names or function handles. For custom plot functions, pass function handles. The default is none ([]).

  • 'optimplotx' plots the current point.

  • 'optimplotfunccount' plots the function count.

  • 'optimplotfval' plots the objective function values.

  • 'optimplotconstrviolation' plots the maximum constraint violation.

  • 'optimplotstepsize' plots the step size.

Custom plot functions use the same syntax as output functions. See Output Functions for Optimization Toolbox and Output Function and Plot Function Syntax.

For optimset, the name is PlotFcns.

RelLineSrchBnd

Relative bound (a real nonnegative scalar value) on the line search step length such that the total displacement in x satisfies |Δx(i)| ≤ relLineSrchBnd·max(|x(i)|,|typicalx(i)|). This option provides control over the magnitude of the displacements in x when the solver takes steps that are too large. The default is none ([]).

RelLineSrchBndDuration

Number of iterations for which the bound specified in RelLineSrchBnd should be active. The default is 1.

SpecifyConstraintGradient

Gradient for nonlinear constraint functions defined by the user. When this option is set to true, fgoalattain expects the constraint function to have four outputs, as described in nonlcon. When this option is set to false (the default), fgoalattain estimates gradients of the nonlinear constraints using finite differences.

For optimset, the name is GradConstr and the values are 'on' or 'off'.

SpecifyObjectiveGradient

Gradient for the objective function defined by the user. Refer to the description of fun to see how to define the gradient. Set this option to true to have fgoalattain use a user-defined gradient of the objective function. The default, false, causes fgoalattain to estimate gradients using finite differences.

For optimset, the name is GradObj and the values are 'on' or 'off'.

StepTolerance

Termination tolerance on x (a nonnegative scalar). The default is 1e-6. See Tolerances and Stopping Criteria.

For optimset, the name is TolX.

TolConSQP

Termination tolerance on the inner iteration SQP constraint violation (a positive scalar). The default is 1e-6.

TypicalX

Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1). The fgoalattain function uses TypicalX for scaling finite differences for gradient estimation.

UseParallel

Indication of parallel computing. When true, fgoalattain estimates gradients in parallel. The default is false. See Parallel Computing.

Example: optimoptions('fgoalattain','PlotFcn','optimplotfval')

Problem structure, specified as a structure with the fields in this table.

Field Name          Entry

objective

Objective function fun

x0

Initial point for x

goal

Goals to attain

weight

Relative importance factors of goals

Aineq

Matrix for linear inequality constraints

bineq

Vector for linear inequality constraints

Aeq

Matrix for linear equality constraints

beq

Vector for linear equality constraints
lb

Vector of lower bounds

ub

Vector of upper bounds

nonlcon

Nonlinear constraint function

solver

'fgoalattain'

options

Options created with optimoptions

You must supply at least the objective, x0, goal, weight, solver, and options fields in the problem structure.

Data Types: struct
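
For instance, a minimal sketch of assembling and solving a problem structure, reusing the objective and linear constraint from the earlier examples:

problem.objective = @(x)[2 + norm(x-[2,3])^2; 5 + norm(x-[4,1])^2/4];
problem.x0 = [1,1];
problem.goal = [3,6];
problem.weight = [1,1];
problem.Aineq = [1,1];    % linear inequality x_1 + x_2 <= 4
problem.bineq = 4;
problem.solver = 'fgoalattain';
problem.options = optimoptions('fgoalattain');
x = fgoalattain(problem);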

Output Arguments

Solution, returned as a real vector or real array. The size of x is the same as the size of x0. Typically, x is a local solution to the problem when exitflag is positive. For information on the quality of the solution, see When the Solver Succeeds.

Objective function values at the solution, returned as a real array. Generally, fval = fun(x).

Attainment factor, returned as a real number. attainfactor contains the value of γ at the solution. If attainfactor is negative, the goals have been overachieved; if attainfactor is positive, the goals have been underachieved. See goal.

Reason fgoalattain stopped, returned as an integer.

1

Function converged to a solution x

4

Magnitude of the search direction was less than the specified tolerance, and the constraint violation was less than options.ConstraintTolerance

5

Magnitude of the directional derivative was less than the specified tolerance, and the constraint violation was less than options.ConstraintTolerance

0

Number of iterations exceeded options.MaxIterations or the number of function evaluations exceeded options.MaxFunctionEvaluations

-1

Stopped by an output function or plot function

-2

No feasible point was found.

Information about the optimization process, returned as a structure with the fields in this table.

iterations

Number of iterations taken

funcCount

Number of function evaluations

lssteplength

Size of the line search step relative to the search direction

constrviolation

Maximum of the constraint functions

stepsize

Length of the last displacement in x

algorithm

Optimization algorithm used

firstorderopt

Measure of first-order optimality

message

Exit message

Lagrange multipliers at the solution, returned as a structure with the fields in this table.

lower

Lower bounds corresponding to lb

upper

Upper bounds corresponding to ub

ineqlin

Linear inequalities corresponding to A and b

eqlin

Linear equalities corresponding to Aeq and beq

ineqnonlin

Nonlinear inequalities corresponding to the c in nonlcon

eqnonlin

Nonlinear equalities corresponding to the ceq in nonlcon

Algorithms

For a description of the fgoalattain algorithm and a discussion of goal attainment concepts, see Algorithms.

Alternative Functionality

App

The Optimize Live Editor task provides a visual interface for fgoalattain.

Extended Capabilities

Version History

Introduced before R2006a