fmincon: any way to enforce linear inequality constraints at intermediate iterations?

I am solving an optimization problem with the interior-point algorithm of fmincon.
My parameters have both lower and upper bounds and linear inequality constraints.
A_ineq = [1 -1 0 0 0;
-1 2 -1 0 0;
0 -1 2 -1 0;
0 0 -1 2 -1;
0 0 0 1 -1];
b_ineq=[0 0 0 0 0];
I observed that fmincon, of course, satisfies the bounds at all iterations, but not the linear inequality constraints.
However, the linear inequality constraints are crucially important in my case. If they are violated, my optimization problem cannot be solved.
Is there anything one can do to ensure that linear inequality constraints are satisfied at intermediate iterations?
  6 Comments
SA-W on 2 Mar 2023
Scaling the matrix A_ineq by 1e4, or any other large value, really helps in my case to enforce the linear constraints (in a least-squares sense) at intermediate iterations.
Matt J on 6 May 2023
Edited: Matt J on 6 May 2023
I'd like to point out that it violates the theoretical assumptions of fmincon, and probably of all the Optimization Toolbox solvers as well, when the domain of your objective function and constraints is not an open subset of ℝⁿ. If you forbid evaluation outside the closed set defined by your inequality constraints, that is essentially the situation you are creating. It's not entirely clear what hazards this creates in practice, but it may mean that one of the Global Optimization Toolbox solvers would be more appropriate.


Accepted Answer

Matt J on 5 May 2023
Edited: Matt J on 23 May 2023
Within your objective function, project the current x onto the constrained region using lsqlin:
fun=@(x) myObjective(x, A_ineq,b_ineq,lb,ub);
x=fmincon(fun, x0,[],[],[],[],lb,ub,[],options)
function fval=myObjective(x, A_ineq,b_ineq,lb,ub)
C=speye(numel(x));
% Project x onto the feasible set {A_ineq*x <= b_ineq, lb <= x <= ub} before evaluating
x=lsqlin(C,x,A_ineq,b_ineq,[],[],lb,ub); %EDIT: was quadprog by mistake
fval=...
end
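To make the idea concrete, here is a minimal, self-contained sketch of the same projection trick applied to the A_ineq and b_ineq from the question. The quadratic objective, the bounds, the target vector, and the start point are made-up placeholders, not anything from the original post:
% Minimal sketch of the projection approach (objective, bounds, target, x0 are assumptions)
A_ineq = [1 -1 0 0 0; -1 2 -1 0 0; 0 -1 2 -1 0; 0 0 -1 2 -1; 0 0 0 1 -1];
b_ineq = zeros(5,1);
lb = zeros(5,1);  ub = 10*ones(5,1);          % hypothetical bounds
target = [1;2;4;7;10];                        % hypothetical data the objective fits
x0 = ones(5,1);                               % feasible start point
options = optimoptions('fmincon','Algorithm','interior-point','Display','off');
fun = @(x) projectedObjective(x, target, A_ineq, b_ineq, lb, ub);
x = fmincon(fun, x0, [],[],[],[], lb, ub, [], options);
function fval = projectedObjective(x, target, A_ineq, b_ineq, lb, ub)
% Replace x by its least-squares projection onto the feasible set, so the
% objective is only ever evaluated at points satisfying A_ineq*x <= b_ineq.
opts = optimoptions('lsqlin','Display','off');
xp = lsqlin(eye(numel(x)), x, A_ineq, b_ineq, [],[], lb, ub, [], opts);
fval = 0.5*norm(xp - target)^2;               % toy quadratic objective
end
As in the code above, the linear constraints are enforced only through the projection and are not passed to fmincon itself.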
  62 Comments
Matt J on 3 Mar 2025
Edited: Matt J on 3 Mar 2025
It appears below that, as long as the interior-point method is used, you can have active constraints at the solution. Here we impose x(1)=0 using Aeq,beq instead of bounds.
However, it also appears that you must exclude the equality constraints from the arguments passed to fmincon,
fmincon(objectiveFunc, x0, Aineq,bineq, [],[], lb,ub, [], options)
(but still enforce them through lsqlin). I'm a bit foggy on why this is. It's obviously very specific to the interior-point algorithm, because things work very reliably with it, and less reliably when you switch to another algorithm, e.g. sqp.
clear; close all; clc;
% B, Bdd, k, x, xq, yq, and N are assumed to come from the spline setup earlier
% in this thread (a B-spline basis and its second derivative, the spline order
% and knot sites, and the data being fit).
Aineq = [-B.coefs'; -Bdd.coefs'];
bineq = zeros(size(Aineq, 1), 1);
Aeq=[1,zeros(1,N-1)]; beq=0;
%Aeq=[];beq=[];
% Bounds (x(1) = 0 is imposed through Aeq,beq above; set lb(1) = ub(1) = 0 to impose it through bounds instead)
lb = -Inf(N, 1); ub = +Inf(N, 1); lb(1) = -1; ub(1) = +1;
%lb=[]; ub=[];
% Solver options
options = optimoptions("fmincon", "Algorithm","interior-point",...
"FiniteDifferenceStepSize", 1e-6, ...
"SpecifyObjectiveGradient", true, "OptimalityTolerance",1e-12,'FunctionTolerance',1e-12,...
"StepTolerance", 1e-12, ...
"Display", "off");
% Start point
x0=-(x-x(1));
% Weights for constraint violation: weight*||zp - z||^2
weights = [1,10,100,1000,1e4,1e6];
figure;
% Loop through weights, solve problem, and plot solution
for w = 1:numel(weights)
% Function handle
objectiveFunc = @(y) Objective_func(y, k, x, Aineq, bineq, Aeq,beq, lb, ub, weights(w), xq, yq);
% Call optimizer
[sol, ~, exitFlag, output, ~, ~, ~] = fmincon(objectiveFunc, x0, Aineq,bineq, [],[], lb,ub, [], options);
fprintf('output for weight %g\n\n', weights(w));
disp(output);
% Plot data and solution
subplot(2, 3, w);
f = spapi(k, x, sol);
fnplt(f); hold on;
plot(xq, yq, 'r+');
title(sprintf('weight = %g', weights(w)))
hold off
end
Output summary (every run reports algorithm: 'interior-point', constrviolation: 0, message: 'Local minimum possible. Constraints satisfied....', bestfeasible: [1x1 struct]):
weight   iterations   funcCount   stepsize      firstorderopt   cgiterations
1            40           45      1.0095e-16     1.2801e-08          0
10           93           98      1.1408e-16     2.0523e-09          0
100         112          117      5.7230e-17     7.0198e-09          0
1000        119          131      8.1106e-17     9.3932e-11          9
10000       117          126      5.3344e-17     1.6398e-07         10
1e+06       150          197      1.2570e-12     1.0282e-04         59
function [fval, grad] = Objective_func(z, k, x, Aineq, bineq, Aeq,beq, lb, ub, weight, xq, yq)
% z holds the current parameters
z=z(:); xq=xq(:); yq=yq(:); I = eye(numel(z));
% Project z onto linearly constrained region
tol = 1e-12;
options = optimoptions('lsqlin','Display', 'off', ...
FunctionTolerance=tol, StepTolerance=tol, OptimalityTolerance=tol);
[zp,~,~,exitflag] = lsqlin(I, z, Aineq, bineq, Aeq,beq, lb, ub, z, options);
% if exitflag<=0
% keyboard
% end
% Compute norms of residuals
f = spapi(k, x, zp);
dataResidual = fnval(f, xq) - yq;
projectionResidual=zp-z;
fval = norm(dataResidual).^2/2 + weight * norm(projectionResidual)^2/2;
% Compute jacobian and gradient
if nargout > 1
% Find the constraints active at the projected point zp
A = getActiveConstraints(zp,Aineq,bineq,Aeq,beq,lb,ub);
JacP = I - pinv(A) * A;
B = spapi(k, x, eye(numel(x)));
jacobianT = fnval(B, xq);
grad = (jacobianT.' * JacP).'*dataResidual(:) + weight * (JacP-I).'*projectionResidual;
end
end
function A = getActiveConstraints(zp,Aineq,bineq,Aeq,beq,lb,ub)
arguments
zp (:,1);
Aineq,bineq,Aeq,beq;
lb (:,1);
ub (:,1);
end
I=speye(numel(zp));
A=cell(4,1);
if ~isempty(Aineq)
idx=abs(Aineq * zp - bineq) < 1e-6;
A{1}=Aineq(idx,:);
end
% if ~isempty(Aeq)
% idx=abs(Aeq * zp - beq) < 1e-6;
% A{2}=Aeq(idx,:);
% end
A{2}=Aeq; %If lsqlin was successful, the equality constraints will always be active
if ~isempty(ub)
idx=abs(zp - ub) < 1e-6;
A{3}=full(I(idx,:));
end
if ~isempty(lb)
idx=abs(zp - lb) < 1e-6;
A{4}=full(-I(idx,:));
end
A=vertcat(A{:});
end
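Roughly speaking, the gradient supplied to fmincon treats the lsqlin projection as locally linear: JacP = I - pinv(A)*A is the orthogonal projector onto the null space of the active constraints, and chaining it through the data and projection residuals keeps the returned gradient consistent with the projected objective.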


More Answers (2)

Shubham on 5 May 2023
Hi SA-W,
Yes, there are a few options you can try to ensure that the linear inequality constraints are satisfied at intermediate iterations of the interior-point algorithm in fmincon:
  1. Tighten the tolerances: By tightening the tolerances in the fmincon options, you can force the algorithm to take smaller steps and converge more slowly, but with higher accuracy. This may help to ensure that the linear inequality constraints are satisfied at all intermediate iterations.
  2. Use a barrier function: You can try using a barrier function to penalize violations of the linear inequality constraints. This can be done by adding a term to the objective function that grows very large as the constraints are violated. This will encourage the algorithm to stay within the feasible region defined by the constraints.
  3. Use a penalty function: Similar to a barrier function, a penalty function can be used to penalize violations of the linear inequality constraints. However, instead of growing very large, the penalty function grows linearly with the degree of violation. This can be a more computationally efficient approach than a barrier function (see the sketch after this list).
  4. Use a combination of methods: You can try using a combination of the above methods to ensure that the linear inequality constraints are satisfied at all intermediate iterations. For example, you could tighten the tolerances and use a penalty function or barrier function to further enforce the constraints.
Note that these methods may increase the computational cost of the optimization problem, so it's important to balance the accuracy requirements against the available resources.
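As a rough illustration of the penalty idea in option 3, here is a minimal sketch of a penalty that grows linearly with the constraint violation, using the A_ineq and b_ineq from the question. The base objective, the penalty weight mu, the bounds, and the start point are made-up placeholders:
% Minimal penalty-function sketch (baseObjective, mu, bounds, x0 are assumptions)
A_ineq = [1 -1 0 0 0; -1 2 -1 0 0; 0 -1 2 -1 0; 0 0 -1 2 -1; 0 0 0 1 -1];
b_ineq = zeros(5,1);
lb = zeros(5,1);  ub = 10*ones(5,1);
mu = 1e3;                                        % penalty weight (assumed)
baseObjective = @(x) 0.5*norm(x - (1:5)')^2;     % placeholder objective
violation = @(x) sum(max(A_ineq*x - b_ineq, 0)); % total constraint violation
fun = @(x) baseObjective(x) + mu*violation(x);   % penalized objective
x0 = ones(5,1);                                  % feasible start point
x = fmincon(fun, x0, A_ineq, b_ineq, [],[], lb, ub);
Because the max() term is nonsmooth at the constraint boundary, a derivative-based solver like fmincon may struggle near it; a smooth (e.g. squared) penalty avoids that, at the price of only approximately enforcing the constraints.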

Matt J on 5 May 2023
Within your objective function, check whether the linear inequalities are satisfied. If they are not, abort the function and return Inf. Otherwise, compute the output value as usual.
fun=@(x) myObjective(x, A_ineq,b_ineq);
x=fmincon(fun, x0,A_ineq,b_ineq,[],[],lb,ub,[],options)
function fval=myObjective(x, A_ineq,b_ineq)
if ~all(A_ineq*x(:) <= b_ineq(:))
fval=Inf; return
end
fval=...
end
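One caveat with this approach: returning Inf makes the objective discontinuous at the constraint boundary, so finite-difference gradient estimates taken near the boundary can be unreliable. The projection approach in the accepted answer avoids this by keeping the objective finite everywhere.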
